Patch: Write Amplification Reduction Method (WARM)

Started by Pavan Deolasee · over 9 years ago · 263 messages
#1 Pavan Deolasee
pavan.deolasee@gmail.com
5 attachment(s)

Hi All,

As previously discussed [1], WARM is a technique to reduce write
amplification when an indexed column of a table is updated. HOT fails to
handle such updates and ends up inserting a new index entry in all indexes
of the table, irrespective of whether the index key has changed or not for
a specific index. The problem was highlighted by Uber's blog post [2], but
it was a well-known problem and affects many workloads.

Simon brought up the idea originally within 2ndQuadrant and I developed it
further with inputs from my other colleagues and community members.

There were two important problems identified during the earlier discussion.
This patch addresses those issues in a simplified way. There are other
complex ideas to solve those issues, but as the results demonstrate, even a
simple approach will go a long way in improving the performance characteristics
of many workloads, while keeping the code complexity relatively low.

Two problems have so far been identified with the WARM design.

“*Duplicate Scan*” - Claudio Freire brought up a design flaw which may lead
an IndexScan to return the same tuple twice or more, thus impacting the
correctness of the solution.

“*Root Pointer Search*” - Andres raised the point that it could be
inefficient to find the root line pointer for a tuple in the HOT or WARM
chain since it may require us to scan through the entire page.

The Duplicate Scan problem has correctness implications, so it could block
WARM completely. We propose the following solution:

We discussed a few ideas to address the "Duplicate Scan" problem. For
example, we can teach Index AMs to discard any duplicate (key, CTID) insert
requests. Or we could guarantee uniqueness by only allowing updates
in one lexical order. While the former is a more complete solution to avoid
duplicate entries, searching through a large number of keys for non-unique
indexes could be a drag on performance. The latter approach may not be
sufficient for many workloads. Also, tracking increment/decrement for many
indexes will be non-trivial.

There is another problem with allowing many index entries pointing to the
same WARM chain. It will be non-trivial to know how many index entries are
currently pointing to the WARM chain, and index/heap vacuum will throw up
more challenges.

Instead, what I would like to propose, and what the patch currently implements,
is to restrict WARM updates to once per chain. So the first non-HOT update to a
tuple or a HOT chain can be a WARM update. The chain can further be HOT
updated any number of times, but it cannot be WARM updated again. This
might look too restrictive, but it can still bring down the number of
regular updates by almost 50%. Further, if we devise a strategy to convert
a WARM chain back to HOT chain, it can again be WARM updated. (This part is
currently not implemented). A good side effect of this simple strategy is
that we know there can be at most two index entries pointing to any given WARM
chain.

The other problem Andres brought up can be solved by storing the root line
pointer offset in the t_ctid field of the last tuple in the update chain.
Barring some aborted-update cases, it's usually the last tuple in the update
chain that will be updated, hence it seems logical and sufficient if we can
find the root line pointer while accessing that tuple. Note that the t_ctid
field in the latest tuple is usually useless and is made to point to
itself. Instead, I propose to use a bit from t_infomask2 to identify the
LATEST tuple in the chain and use OffsetNumber field in t_ctid to store
root line pointer offset. For the rare aborted-update case, we can scan the
heap page and find the root line pointer the hard way.

Index Recheck
--------------------

As the original proposal explains, while doing index scan we must recheck
if the heap tuple matches the index keys. This has to be done only when the
chain is marked as a WARM chain. Currently we do that by setting the last
free bit in t_infomask2 to HEAP_WARM_TUPLE. The bit is set on the tuple
that gets WARM updated and all subsequent tuples in the chain. But the
information can subsequently be copied to root line pointer when it's
converted to a LP_REDIRECT line pointer.

Since each index AM has its own view of the index tuples, each AM must
implement its "amrecheck" routine. This routine is used to confirm that a
tuple returned from a WARM chain indeed satisfies the index keys. If the
index AM does not implement "amrecheck" routine, WARM update is disabled on
a table which uses such an index. The patch currently implements
"amrecheck" routines for hash and btree indexes. Hence a table with GiST or
GIN index will not honour WARM updates.

Results
----------

We used a customised pgbench workload to test the feature. In particular,
the pgbench_accounts table was widened to include many more columns and
indexes. We also added an index on "abalance" field which gets updated in
every transaction. This replicates a workload where there are many indexes
on a table and an update changes just one index key.

CREATE TABLE pgbench_accounts (
aid bigint,
bid bigint,
abalance bigint,
filler1 text DEFAULT md5(random()::text),
filler2 text DEFAULT md5(random()::text),
filler3 text DEFAULT md5(random()::text),
filler4 text DEFAULT md5(random()::text),
filler5 text DEFAULT md5(random()::text),
filler6 text DEFAULT md5(random()::text),
filler7 text DEFAULT md5(random()::text),
filler8 text DEFAULT md5(random()::text),
filler9 text DEFAULT md5(random()::text),
filler10 text DEFAULT md5(random()::text),
filler11 text DEFAULT md5(random()::text),
filler12 text DEFAULT md5(random()::text)
);

CREATE UNIQUE INDEX pgb_a_aid ON pgbench_accounts(aid);
CREATE INDEX pgb_a_abalance ON pgbench_accounts(abalance);
CREATE INDEX pgb_a_filler1 ON pgbench_accounts(filler1);
CREATE INDEX pgb_a_filler2 ON pgbench_accounts(filler2);
CREATE INDEX pgb_a_filler3 ON pgbench_accounts(filler3);
CREATE INDEX pgb_a_filler4 ON pgbench_accounts(filler4);

These tests are run on c3.4xlarge AWS instances, with 30GB of RAM, 16 vCPU
and 2x160GB SSD. Data and WAL were mounted on a separate SSD.

The scale factor of 700 was chosen to ensure that the database does not fit
in memory and the implications of additional write activity are evident.

The actual transactional tests would just update the pgbench_accounts table:

\set aid random(1, 100000 * :scale)
\set delta random(-5000, 5000)
BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
END;

The tests were run for a long duration of 16 hrs each with 16 pgbench
clients to ensure that effects of the patch are captured correctly.

Headline TPS numbers:

Master:

transaction type: update.sql
scaling factor: 700
query mode: simple
number of clients: 16
number of threads: 8
duration: 57600 s
number of transactions actually processed: 65552986
latency average: 14.059 ms
*tps = 1138.072117 (including connections establishing)*
tps = 1138.072156 (excluding connections establishing)

WARM:

transaction type: update.sql
scaling factor: 700
query mode: simple
number of clients: 16
number of threads: 8
duration: 57600 s
number of transactions actually processed: 116168454
latency average: 7.933 ms
*tps = 2016.812924 (including connections establishing)*
tps = 2016.812997 (excluding connections establishing)

So WARM shows about *77% increase* in TPS. Note that these are fairly long
running tests with nearly 100M transactions, and they show steady
performance.

We also measured the amount of WAL generated by Master and WARM per
transaction. While master generated 34967 bytes of WAL per transaction,
WARM generated 18421 bytes of WAL per transaction.

We plotted a moving average of TPS against time and also against the
percentage of WARM updates. Clearly, the higher the percentage of WARM
updates, the higher the TPS. A graph showing the percentage of WARM updates
over time is also plotted, and it shows a steady convergence to the 50% mark.

We repeated the same tests starting with a 90% heap fill factor, so that
there are many more WARM updates. With a 90% fill factor, in combination
with HOT pruning, most initial updates are WARM updates, and that impacts
TPS positively. WARM shows nearly a *150% increase* in TPS for that workload.

Master:

transaction type: update.sql
scaling factor: 700
query mode: simple
number of clients: 16
number of threads: 8
duration: 57600 s
number of transactions actually processed: 78134617
latency average: 11.795 ms
*tps = 1356.503629 (including connections establishing)*
tps = 1356.503679 (excluding connections establishing)

WARM:

transaction type: update.sql
scaling factor: 700
query mode: simple
number of clients: 16
number of threads: 8
duration: 57600 s
number of transactions actually processed: 196782770
latency average: 4.683 ms
*tps = 3416.364822 (including connections establishing)*
tps = 3416.364949 (excluding connections establishing)

In this case, master produced ~49000 bytes of WAL per transaction, whereas
WARM produced ~14000 bytes of WAL per transaction.

I concede that we haven't yet done many tests to measure the overhead of the
technique, especially in circumstances where WARM may not be very useful.
What I have in mind is a couple of tests:

- With many indexes and a good percentage of them requiring update
- A mix of read-write workload

Any other ideas to do that are welcome.

Concerns:
--------------

The additional heap recheck may have a negative impact on performance. We
tried to measure this by running a SELECT-only workload for 1 hr after the
16-hr test finished, but the TPS did not show any negative impact. The impact
could be larger if an update changes many index keys, something these tests
don't exercise.

The patch also changes things such that index tuples are always returned
because they may be needed for recheck. It's not clear if this is something
to be worried about, but we could try to further fine tune this change.

There seem to be some modularity violations, since the index AM needs to access
some of the executor stuff to form index datums. If that's a real concern,
we can look at improving amrecheck signature so that it gets index datums
from the caller.

The patch uses the remaining two free bits in t_infomask, thus closing off any
further improvements which may need to use heap tuple flags. During
patch development we tried several other approaches, such as reusing the three
high-order bits of OffsetNumber, since the current maximum BLCKSZ limits
MaxOffsetNumber to 8192 and that can be represented in 13 bits. We finally
reverted that change to keep the patch simple. But there is clearly a way
to free up more bits if required.

Converting WARM chains back to HOT chains (VACUUM ?)
---------------------------------------------------------------------------------

The current implementation of WARM allows only one WARM update per chain. This
simplifies the design and addresses certain issues around duplicate scans. But
it also implies that the benefit of WARM will be no more than 50%. That is
still significant, but if we could return WARM chains back to normal status,
we could do far more WARM updates.

A distinct property of a WARM chain is that at least one index has more than
one live index entry pointing to the root of the chain. In other words, if we
can remove the duplicate entry from every index, or conclusively prove that
there are no duplicate index entries for the root line pointer, the chain can
again be marked as HOT.

Here is one idea, but more thoughts/suggestions are most welcome.

A WARM chain has two parts, separated by the tuple that caused the WARM
update. All tuples in each part have matching index keys, but certain index
keys may not match between the two parts. Let's say we mark heap tuples in
each part with a special Red-Blue flag, and the same flag is replicated in
the index tuples. For example, when new rows are inserted into a table, they
are marked with the Blue flag, and the index entries associated with those
rows are also marked with the Blue flag. When a row is WARM updated, the new
version is marked with the Red flag, and the new index entry created by the
update is also marked with the Red flag.

Heap chain: lp [1] [2] [3] [4]
[aaaa, 1111]B -> [aaaa, 1111]B -> [bbbb, 1111]R -> [bbbb, 1111]R

Index1: (aaaa)B points to 1 (satisfies only tuples marked with B)
(bbbb)R points to 1 (satisfies only tuples marked with R)

Index2: (1111)B points to 1 (satisfies both B and R tuples)

It's clear that for indexes with Red and Blue pointers, a heap tuple with the
Blue flag will be reachable from the Blue pointer and one with the Red flag
will be reachable from the Red pointer. But for indexes which did not create
a new entry, both Blue and Red tuples will be reachable from the Blue pointer
(there is no Red pointer in such indexes). So, as a side note, matching Red
and Blue flags is not enough from an index scan perspective.

During the first heap scan of VACUUM, we look for tuples with HEAP_WARM_TUPLE
set. If all live tuples in the chain are marked with either the Blue flag or
the Red flag (but not a mix of Red and Blue), then the chain is a candidate
for HOT conversion. We remember the root line pointer and the Red-Blue flag
of the WARM chain in a separate array.

If we have a Red WARM chain, then our goal is to remove the Blue pointers,
and vice versa. But there is a catch. For Index2 above, there is only a Blue
pointer, and that must not be removed. IOW, we should remove the Blue pointer
iff a Red pointer exists. Since index vacuum may visit Red and Blue pointers
in any order, I think we will need another index pass to remove dead index
pointers. So in the first index pass we check which WARM candidates have two
index pointers. In the second pass, we remove the dead pointer and reset the
Red flag if the surviving index pointer is Red.

During the second heap scan, we fix the WARM chain by clearing the
HEAP_WARM_TUPLE flag and also resetting the Red flag to Blue.

There are some more problems around aborted vacuums. For example, if vacuum
aborts after changing the Red index flag to Blue but before removing the
other Blue pointer, we will end up with two Blue pointers to a Red WARM
chain. But since the HEAP_WARM_TUPLE flag on the heap tuple is still set,
further WARM updates to the chain will be blocked. I guess we will need some
special handling for the case with multiple Blue pointers. We can either
leave these WARM chains alone and let them die with a subsequent non-WARM
update, or apply heap-recheck logic during index vacuum to find the dead
pointer. Given that vacuum aborts are not common, I am inclined to leave this
case unhandled. We must still check for the presence of multiple Blue
pointers and ensure that we don't accidentally remove any of the Blue
pointers, and don't clear such WARM chains either.

Of course, the idea requires one bit each in the index and heap tuples. There
is already a free bit in the index tuple, and I have some ideas to free up
additional bits in the heap tuple (as mentioned above).

Further Work
------------------

1. The patch currently disables WARM updates on system relations. This is
mostly to keep the patch simple, but in theory we should be able to support
WARM updates on system tables too. It's not clear if it's worth the
complexity though.

2. AFAICS both CREATE INDEX and CIC should just work fine, but this needs
validation.

3. GiST and GIN indexes are currently disabled for WARM. I don't see a
fundamental reason why they won't work once we implement "amrecheck"
method, but I don't understand those indexes well enough.

4. There are some modularity invasions I am worried about (is the amrecheck
signature OK?). There are also a couple of hacks around getting access to
index tuples during scans, and I hope to get them corrected during the review
process, with some feedback.

5. The patch does not implement the machinery to convert WARM chains into HOT
chains. I would give it a go unless someone finds a problem with the idea or
has a better idea.

Thanks,
Pavan

[1]: /messages/by-id/CABOikdMop5Rb_RnS2xFdAXMZGSqcJ-P-BY2ruMd%2BbuUkJ4iDPw@mail.gmail.com
[2]: https://eng.uber.com/mysql-migration/

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

- Master-vs-WARM-TPS.png (image/png)
- Percentage-WARM-with-time.png (image/png)
- WARM-vs-TPS.png (image/png)
- 0001_track_root_lp_v2.patch (application/octet-stream)
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index c63dfa0..ae5839a 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -93,7 +93,8 @@ static HeapTuple heap_prepare_insert(Relation relation, HeapTuple tup,
 					TransactionId xid, CommandId cid, int options);
 static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
-				HeapTuple newtup, HeapTuple old_key_tup,
+				HeapTuple newtup, OffsetNumber root_offnum,
+				HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
 static void HeapSatisfiesHOTandKeyUpdate(Relation relation,
 							 Bitmapset *hot_attrs,
@@ -2250,13 +2251,13 @@ heap_get_latest_tid(Relation relation,
 		 */
 		if ((tp.t_data->t_infomask & HEAP_XMAX_INVALID) ||
 			HeapTupleHeaderIsOnlyLocked(tp.t_data) ||
-			ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid))
+			HeapTupleHeaderIsHeapLatest(tp.t_data, ctid))
 		{
 			UnlockReleaseBuffer(buffer);
 			break;
 		}
 
-		ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(tp.t_data, &ctid, offnum);
 		priorXmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		UnlockReleaseBuffer(buffer);
 	}							/* end of loop */
@@ -2415,7 +2416,8 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	START_CRIT_SECTION();
 
 	RelationPutHeapTuple(relation, buffer, heaptup,
-						 (options & HEAP_INSERT_SPECULATIVE) != 0);
+						 (options & HEAP_INSERT_SPECULATIVE) != 0,
+						 InvalidOffsetNumber);
 
 	if (PageIsAllVisible(BufferGetPage(buffer)))
 	{
@@ -2713,7 +2715,8 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		 * RelationGetBufferForTuple has ensured that the first tuple fits.
 		 * Put that on the page, and then as many other tuples as fit.
 		 */
-		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false);
+		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false,
+				InvalidOffsetNumber);
 		for (nthispage = 1; ndone + nthispage < ntuples; nthispage++)
 		{
 			HeapTuple	heaptup = heaptuples[ndone + nthispage];
@@ -2721,7 +2724,8 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			if (PageGetHeapFreeSpace(page) < MAXALIGN(heaptup->t_len) + saveFreeSpace)
 				break;
 
-			RelationPutHeapTuple(relation, buffer, heaptup, false);
+			RelationPutHeapTuple(relation, buffer, heaptup, false,
+					InvalidOffsetNumber);
 
 			/*
 			 * We don't use heap_multi_insert for catalog tuples yet, but
@@ -2993,6 +2997,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	HeapTupleData tp;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	TransactionId new_xmax;
@@ -3044,7 +3049,8 @@ heap_delete(Relation relation, ItemPointer tid,
 		LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 	}
 
-	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid));
+	offnum = ItemPointerGetOffsetNumber(tid);
+	lp = PageGetItemId(page, offnum);
 	Assert(ItemIdIsNormal(lp));
 
 	tp.t_tableOid = RelationGetRelid(relation);
@@ -3174,7 +3180,7 @@ l1:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(tp.t_data, &hufd->ctid, offnum);
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tp.t_data);
@@ -3251,7 +3257,7 @@ l1:
 	HeapTupleHeaderSetXmax(tp.t_data, new_xmax);
 	HeapTupleHeaderSetCmax(tp.t_data, cid, iscombo);
 	/* Make sure there is no forward chain link in t_ctid */
-	tp.t_data->t_ctid = tp.t_self;
+	HeapTupleHeaderSetHeapLatest(tp.t_data);
 
 	MarkBufferDirty(buffer);
 
@@ -3450,6 +3456,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		old_key_copied = false;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
+	OffsetNumber	root_offnum;
 	MultiXactStatus mxact_status;
 	Buffer		buffer,
 				newbuf,
@@ -3506,6 +3514,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
 
 	block = ItemPointerGetBlockNumber(otid);
+	offnum = ItemPointerGetOffsetNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
@@ -3789,7 +3798,7 @@ l2:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(oldtup.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = oldtup.t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(oldtup.t_data, &hufd->ctid, offnum);
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(oldtup.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(oldtup.t_data);
@@ -3967,7 +3976,7 @@ l2:
 		HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 		/* temporarily make it look not-updated, but locked */
-		oldtup.t_data->t_ctid = oldtup.t_self;
+		HeapTupleHeaderSetHeapLatest(oldtup.t_data);
 
 		/*
 		 * Clear all-frozen bit on visibility map if needed. We could
@@ -4148,6 +4157,20 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+		/*
+		 * For HOT (or WARM) updated tuples, we store the offset of the root
+		 * line pointer of this chain in the ip_posid field of the new tuple.
+		 * Usually this information will be available in the corresponding
+		 * field of the old tuple. But for aborted updates or pg_upgraded
+		 * databases, we might be seeing the old-style CTID chains and hence
+		 * the information must be obtained by hard way
+		 */
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			heap_get_root_tuple_one(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)),
+					&root_offnum);
 	}
 	else
 	{
@@ -4155,10 +4178,29 @@ l2:
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		root_offnum = InvalidOffsetNumber;
 	}
 
-	RelationPutHeapTuple(relation, newbuf, heaptup, false);		/* insert new tuple */
+	/* insert new tuple */
+	RelationPutHeapTuple(relation, newbuf, heaptup, false, root_offnum);
+	HeapTupleHeaderSetHeapLatest(heaptup->t_data);
+	HeapTupleHeaderSetHeapLatest(newtup->t_data);
 
+	/*
+	 * Also update the in-memory copy with the root line pointer information
+	 */
+	if (OffsetNumberIsValid(root_offnum))
+	{
+		HeapTupleHeaderSetRootOffset(heaptup->t_data, root_offnum);
+		HeapTupleHeaderSetRootOffset(newtup->t_data, root_offnum);
+	}
+	else
+	{
+		HeapTupleHeaderSetRootOffset(heaptup->t_data,
+				ItemPointerGetOffsetNumber(&heaptup->t_self));
+		HeapTupleHeaderSetRootOffset(newtup->t_data,
+				ItemPointerGetOffsetNumber(&heaptup->t_self));
+	}
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
 	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
@@ -4171,7 +4213,9 @@ l2:
 	HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 	/* record address of new tuple in t_ctid of old one */
-	oldtup.t_data->t_ctid = heaptup->t_self;
+	HeapTupleHeaderSetNextCtid(oldtup.t_data,
+			ItemPointerGetBlockNumber(&(heaptup->t_self)),
+			ItemPointerGetOffsetNumber(&(heaptup->t_self)));
 
 	/* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */
 	if (PageIsAllVisible(BufferGetPage(buffer)))
@@ -4210,6 +4254,7 @@ l2:
 
 		recptr = log_heap_update(relation, buffer,
 								 newbuf, &oldtup, heaptup,
+								 root_offnum,
 								 old_key_tuple,
 								 all_visible_cleared,
 								 all_visible_cleared_new);
@@ -4571,7 +4616,8 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	ItemId		lp;
 	Page		page;
 	Buffer		vmbuffer = InvalidBuffer;
-	BlockNumber block;
+	BlockNumber	block;
+	OffsetNumber	offnum;
 	TransactionId xid,
 				xmax;
 	uint16		old_infomask,
@@ -4583,6 +4629,7 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 
 	*buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	block = ItemPointerGetBlockNumber(tid);
+	offnum = ItemPointerGetOffsetNumber(tid);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -4629,7 +4676,7 @@ l3:
 		xwait = HeapTupleHeaderGetRawXmax(tuple->t_data);
 		infomask = tuple->t_data->t_infomask;
 		infomask2 = tuple->t_data->t_infomask2;
-		ItemPointerCopy(&tuple->t_data->t_ctid, &t_ctid);
+		HeapTupleHeaderGetNextCtid(tuple->t_data, &t_ctid, offnum);
 
 		LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
 
@@ -5067,7 +5114,7 @@ failed:
 		Assert(result == HeapTupleSelfUpdated || result == HeapTupleUpdated ||
 			   result == HeapTupleWouldBlock);
 		Assert(!(tuple->t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tuple->t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(tuple->t_data, &hufd->ctid, offnum);
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tuple->t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tuple->t_data);
@@ -5143,7 +5190,7 @@ failed:
 	 * the tuple as well.
 	 */
 	if (HEAP_XMAX_IS_LOCKED_ONLY(new_infomask))
-		tuple->t_data->t_ctid = *tid;
+		HeapTupleHeaderSetHeapLatest(tuple->t_data);
 
 	/* Clear only the all-frozen bit on visibility map if needed */
 	if (PageIsAllVisible(page) &&
@@ -5657,6 +5704,7 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 	bool		cleared_all_frozen = false;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber block;
+	OffsetNumber offnum;
 
 	ItemPointerCopy(tid, &tupid);
 
@@ -5665,6 +5713,8 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 		new_infomask = 0;
 		new_xmax = InvalidTransactionId;
 		block = ItemPointerGetBlockNumber(&tupid);
+		offnum = ItemPointerGetOffsetNumber(&tupid);
+
 		ItemPointerCopy(&tupid, &(mytup.t_self));
 
 		if (!heap_fetch(rel, SnapshotAny, &mytup, &buf, false, NULL))
@@ -5883,7 +5933,7 @@ l4:
 
 		/* if we find the end of update chain, we're done. */
 		if (mytup.t_data->t_infomask & HEAP_XMAX_INVALID ||
-			ItemPointerEquals(&mytup.t_self, &mytup.t_data->t_ctid) ||
+			HeapTupleHeaderIsHeapLatest(mytup.t_data, mytup.t_self) ||
 			HeapTupleHeaderIsOnlyLocked(mytup.t_data))
 		{
 			result = HeapTupleMayBeUpdated;
@@ -5892,7 +5942,7 @@ l4:
 
 		/* tail recursion */
 		priorXmax = HeapTupleHeaderGetUpdateXid(mytup.t_data);
-		ItemPointerCopy(&(mytup.t_data->t_ctid), &tupid);
+		HeapTupleHeaderGetNextCtid(mytup.t_data, &tupid, offnum);
 		UnlockReleaseBuffer(buf);
 		if (vmbuffer != InvalidBuffer)
 			ReleaseBuffer(vmbuffer);
@@ -6009,7 +6059,8 @@ heap_finish_speculative(Relation relation, HeapTuple tuple)
 	 * Replace the speculative insertion token with a real t_ctid, pointing to
 	 * itself like it does on regular tuples.
 	 */
-	htup->t_ctid = tuple->t_self;
+	HeapTupleHeaderSetHeapLatest(htup);
+	HeapTupleHeaderSetRootOffset(htup, offnum);
 
 	/* XLOG stuff */
 	if (RelationNeedsWAL(relation))
@@ -6135,7 +6186,9 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	HeapTupleHeaderSetXmin(tp.t_data, InvalidTransactionId);
 
 	/* Clear the speculative insertion token too */
-	tp.t_data->t_ctid = tp.t_self;
+	HeapTupleHeaderSetNextCtid(tp.t_data,
+			ItemPointerGetBlockNumber(&tp.t_self),
+			ItemPointerGetOffsetNumber(&tp.t_self));
 
 	MarkBufferDirty(buffer);
 
@@ -7484,6 +7537,7 @@ log_heap_visible(RelFileNode rnode, Buffer heap_buffer, Buffer vm_buffer,
 static XLogRecPtr
 log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup, HeapTuple newtup,
+				OffsetNumber root_offnum,
 				HeapTuple old_key_tuple,
 				bool all_visible_cleared, bool new_all_visible_cleared)
 {
@@ -7603,6 +7657,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	/* Prepare WAL data for the new page */
 	xlrec.new_offnum = ItemPointerGetOffsetNumber(&newtup->t_self);
 	xlrec.new_xmax = HeapTupleHeaderGetRawXmax(newtup->t_data);
+	xlrec.root_offnum = root_offnum;
 
 	bufflags = REGBUF_STANDARD;
 	if (init)
@@ -8258,7 +8313,7 @@ heap_xlog_delete(XLogReaderState *record)
 			PageClearAllVisible(page);
 
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = target_tid;
+		HeapTupleHeaderSetHeapLatest(htup);
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8348,7 +8403,9 @@ heap_xlog_insert(XLogReaderState *record)
 		htup->t_hoff = xlhdr.t_hoff;
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
-		htup->t_ctid = target_tid;
+
+		HeapTupleHeaderSetHeapLatest(htup);
+		HeapTupleHeaderSetRootOffset(htup, xlrec->offnum);
 
 		if (PageAddItem(page, (Item) htup, newlen, xlrec->offnum,
 						true, true) == InvalidOffsetNumber)
@@ -8483,8 +8540,9 @@ heap_xlog_multi_insert(XLogReaderState *record)
 			htup->t_hoff = xlhdr->t_hoff;
 			HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 			HeapTupleHeaderSetCmin(htup, FirstCommandId);
-			ItemPointerSetBlockNumber(&htup->t_ctid, blkno);
-			ItemPointerSetOffsetNumber(&htup->t_ctid, offnum);
+
+			HeapTupleHeaderSetHeapLatest(htup);
+			HeapTupleHeaderSetRootOffset(htup, offnum);
 
 			offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 			if (offnum == InvalidOffsetNumber)
@@ -8620,7 +8678,8 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
 		/* Set forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetNextCtid(htup, ItemPointerGetBlockNumber(&newtid),
+				ItemPointerGetOffsetNumber(&newtid));
 
 		/* Mark the page as a candidate for pruning */
 		PageSetPrunable(page, XLogRecGetXid(record));
@@ -8754,12 +8813,17 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetHeapLatest(htup);
 
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
 
+		if (OffsetNumberIsValid(xlrec->root_offnum))
+			HeapTupleHeaderSetRootOffset(htup, xlrec->root_offnum);
+		else
+			HeapTupleHeaderSetRootOffset(htup, offnum);
+
 		if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED)
 			PageClearAllVisible(page);
 
@@ -8887,9 +8951,7 @@ heap_xlog_lock(XLogReaderState *record)
 		{
 			HeapTupleHeaderClearHotUpdated(htup);
 			/* Make sure there is no forward chain link in t_ctid */
-			ItemPointerSet(&htup->t_ctid,
-						   BufferGetBlockNumber(buffer),
-						   offnum);
+			HeapTupleHeaderSetHeapLatest(htup);
 		}
 		HeapTupleHeaderSetXmax(htup, xlrec->locking_xid);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index c90fb71..8183920 100644
--- a/src/backend/access/heap/hio.c
+++ b/src/backend/access/heap/hio.c
@@ -31,12 +31,17 @@
  * !!! EREPORT(ERROR) IS DISALLOWED HERE !!!  Must PANIC on failure!!!
  *
  * Note - caller must hold BUFFER_LOCK_EXCLUSIVE on the buffer.
+ *
+ * The caller can optionally tell us to set the root offset to the given value.
+ * Otherwise, the root offset is set to the offset of the new location once
+ * it is known.
  */
 void
 RelationPutHeapTuple(Relation relation,
 					 Buffer buffer,
 					 HeapTuple tuple,
-					 bool token)
+					 bool token,
+					 OffsetNumber root_offnum)
 {
 	Page		pageHeader;
 	OffsetNumber offnum;
@@ -69,7 +74,13 @@ RelationPutHeapTuple(Relation relation,
 		ItemId		itemId = PageGetItemId(pageHeader, offnum);
 		Item		item = PageGetItem(pageHeader, itemId);
 
-		((HeapTupleHeader) item)->t_ctid = tuple->t_self;
+		HeapTupleHeaderSetHeapLatest((HeapTupleHeader) item);
+		if (OffsetNumberIsValid(root_offnum))
+			HeapTupleHeaderSetRootOffset((HeapTupleHeader) item,
+					root_offnum);
+		else
+			HeapTupleHeaderSetRootOffset((HeapTupleHeader) item,
+					offnum);
 	}
 }
 
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index 6ff9251..f0cbf77 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -55,6 +55,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
 
+static void heap_get_root_tuples_internal(Page page,
+				OffsetNumber target_offnum, OffsetNumber *root_offsets);
 
 /*
  * Optionally prune and repair fragmentation in the specified page.
@@ -740,8 +742,9 @@ heap_page_prune_execute(Buffer buffer,
  * holds a pin on the buffer. Once pin is released, a tuple might be pruned
  * and reused by a completely unrelated tuple.
  */
-void
-heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+static void
+heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offsets)
 {
 	OffsetNumber offnum,
 				maxoff;
@@ -820,6 +823,14 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 			/* Remember the root line pointer for this item */
 			root_offsets[nextoffnum - 1] = offnum;
 
+			/*
+			 * If the caller is interested in just one offset and we have
+			 * found it, we are done.
+			 */
+			if (OffsetNumberIsValid(target_offnum) &&
+					(nextoffnum == target_offnum))
+				return;
+
 			/* Advance to next chain member, if any */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				break;
@@ -829,3 +840,25 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 		}
 	}
 }
+
+/*
+ * Get root line pointer for the given tuple
+ */
+void
+heap_get_root_tuple_one(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offnum)
+{
+	OffsetNumber offsets[MaxHeapTuplesPerPage];
+	heap_get_root_tuples_internal(page, target_offnum, offsets);
+	*root_offnum = offsets[target_offnum - 1];
+}
+
+/*
+ * Get root line pointers for all tuples in the page
+ */
+void
+heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+{
+	heap_get_root_tuples_internal(page, InvalidOffsetNumber, root_offsets);
+}
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index f9ce986..4656533 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -421,14 +421,14 @@ rewrite_heap_tuple(RewriteState state,
 	 */
 	if (!((old_tuple->t_data->t_infomask & HEAP_XMAX_INVALID) ||
 		  HeapTupleHeaderIsOnlyLocked(old_tuple->t_data)) &&
-		!(ItemPointerEquals(&(old_tuple->t_self),
-							&(old_tuple->t_data->t_ctid))))
+		!(HeapTupleHeaderIsHeapLatest(old_tuple->t_data, old_tuple->t_self)))
 	{
 		OldToNewMapping mapping;
 
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
-		hashkey.tid = old_tuple->t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(old_tuple->t_data, &hashkey.tid,
+				ItemPointerGetOffsetNumber(&old_tuple->t_self));
 
 		mapping = (OldToNewMapping)
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -441,7 +441,9 @@ rewrite_heap_tuple(RewriteState state,
 			 * set the ctid of this tuple to point to the new location, and
 			 * insert it right away.
 			 */
-			new_tuple->t_data->t_ctid = mapping->new_tid;
+			HeapTupleHeaderSetNextCtid(new_tuple->t_data,
+					ItemPointerGetBlockNumber(&mapping->new_tid),
+					ItemPointerGetOffsetNumber(&mapping->new_tid));
 
 			/* We don't need the mapping entry anymore */
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -527,7 +529,9 @@ rewrite_heap_tuple(RewriteState state,
 				new_tuple = unresolved->tuple;
 				free_new = true;
 				old_tid = unresolved->old_tid;
-				new_tuple->t_data->t_ctid = new_tid;
+				HeapTupleHeaderSetNextCtid(new_tuple->t_data,
+						ItemPointerGetBlockNumber(&new_tid),
+						ItemPointerGetOffsetNumber(&new_tid));
 
 				/*
 				 * We don't need the hash entry anymore, but don't free its
@@ -733,7 +737,10 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		onpage_tup->t_ctid = tup->t_self;
+		HeapTupleHeaderSetNextCtid(onpage_tup,
+				ItemPointerGetBlockNumber(&tup->t_self),
+				ItemPointerGetOffsetNumber(&tup->t_self));
+		HeapTupleHeaderSetHeapLatest(onpage_tup);
 	}
 
 	/* If heaptup is a private copy, release it. */
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 32bb3f9..079a77f 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -2443,7 +2443,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		 * As above, it should be safe to examine xmax and t_ctid without the
 		 * buffer content lock, because they can't be changing.
 		 */
-		if (ItemPointerEquals(&tuple.t_self, &tuple.t_data->t_ctid))
+		if (HeapTupleHeaderIsHeapLatest(tuple.t_data, tuple.t_self))
 		{
 			/* deleted, so forget about it */
 			ReleaseBuffer(buffer);
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index b3a595c..94b46b8 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -188,6 +188,8 @@ extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
+extern void heap_get_root_tuple_one(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offnum);
 extern void heap_get_root_tuples(Page page, OffsetNumber *root_offsets);
 
 /* in heap/syncscan.c */
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index 06a8242..5a04561 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -193,6 +193,8 @@ typedef struct xl_heap_update
 	uint8		flags;
 	TransactionId new_xmax;		/* xmax of the new tuple */
 	OffsetNumber new_offnum;	/* new tuple's offset */
+	OffsetNumber root_offnum;	/* offset of the root line pointer in case of
+								   HOT or WARM update */
 
 	/*
 	 * If XLOG_HEAP_CONTAINS_OLD_TUPLE or XLOG_HEAP_CONTAINS_OLD_KEY flags are
@@ -200,7 +202,7 @@ typedef struct xl_heap_update
 	 */
 } xl_heap_update;
 
-#define SizeOfHeapUpdate	(offsetof(xl_heap_update, new_offnum) + sizeof(OffsetNumber))
+#define SizeOfHeapUpdate	(offsetof(xl_heap_update, root_offnum) + sizeof(OffsetNumber))
 
 /*
  * This is what we need to know about vacuum page cleanup/redirect
diff --git a/src/include/access/hio.h b/src/include/access/hio.h
index a174b34..82e5b5f 100644
--- a/src/include/access/hio.h
+++ b/src/include/access/hio.h
@@ -36,7 +36,7 @@ typedef struct BulkInsertStateData
 
 
 extern void RelationPutHeapTuple(Relation relation, Buffer buffer,
-					 HeapTuple tuple, bool token);
+					 HeapTuple tuple, bool token, OffsetNumber root_offnum);
 extern Buffer RelationGetBufferForTuple(Relation relation, Size len,
 						  Buffer otherBuffer, int options,
 						  BulkInsertState bistate,
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index d7e5fad..23a330a 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,13 +260,19 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x1800 are available */
+/* bit 0x0800 is available */
+#define HEAP_LATEST_TUPLE		0x1000	/* 
+										 * This is the last tuple in chain and
+										 * ip_posid points to the root line
+										 * pointer
+										 */
 #define HEAP_KEYS_UPDATED		0x2000	/* tuple was updated and key cols
 										 * modified, or tuple deleted */
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xE000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+
 
 /*
  * HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins.  It is
@@ -504,6 +510,24 @@ do { \
   (tup)->t_infomask2 & HEAP_ONLY_TUPLE \
 )
 
+#define HeapTupleHeaderSetHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE \
+)
+
+#define HeapTupleHeaderClearHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 &= ~HEAP_LATEST_TUPLE \
+)
+
+#define HeapTupleHeaderIsHeapLatest(tup, tid) \
+( \
+  ((tup)->t_infomask2 & HEAP_LATEST_TUPLE) || \
+  ((ItemPointerGetBlockNumber(&(tup)->t_ctid) == ItemPointerGetBlockNumber(&tid)) && \
+   (ItemPointerGetOffsetNumber(&(tup)->t_ctid) == ItemPointerGetOffsetNumber(&tid))) \
+)
+
+
 #define HeapTupleHeaderSetHeapOnly(tup) \
 ( \
   (tup)->t_infomask2 |= HEAP_ONLY_TUPLE \
@@ -541,6 +565,44 @@ do { \
 		(((tup)->t_infomask & HEAP_HASEXTERNAL) != 0)
 
 
+#define HeapTupleHeaderSetNextCtid(tup, block, offset) \
+do { \
+		ItemPointerSetBlockNumber(&((tup)->t_ctid), (block)); \
+		ItemPointerSetOffsetNumber(&((tup)->t_ctid), (offset)); \
+		HeapTupleHeaderClearHeapLatest((tup)); \
+} while (0)
+
+#define HeapTupleHeaderGetNextCtid(tup, next_ctid, offnum) \
+do { \
+	if ((tup)->t_infomask2 & HEAP_LATEST_TUPLE) \
+	{ \
+		ItemPointerSet((next_ctid), ItemPointerGetBlockNumber(&(tup)->t_ctid), \
+				(offnum)); \
+	} \
+	else \
+	{ \
+		ItemPointerSet((next_ctid), ItemPointerGetBlockNumber(&(tup)->t_ctid), \
+				ItemPointerGetOffsetNumber(&(tup)->t_ctid)); \
+	} \
+} while (0)
+	
+
+#define HeapTupleHeaderSetRootOffset(tup, offset) \
+do { \
+	AssertMacro(!HeapTupleHeaderIsHotUpdated(tup)); \
+	ItemPointerSetOffsetNumber(&(tup)->t_ctid, (offset)); \
+} while (0)
+
+#define HeapTupleHeaderGetRootOffset(tup) \
+( \
+  ItemPointerGetOffsetNumber(&(tup)->t_ctid) \
+)
+
+#define HeapTupleHeaderHasRootOffset(tup) \
+( \
+  (tup)->t_infomask2 & HEAP_LATEST_TUPLE \
+)
+
 /*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
[Attachment: 0002_warm_updates_v2.patch (application/octet-stream)]
diff --git a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c
index debf4f4..d49d179 100644
--- a/contrib/bloom/blutils.c
+++ b/contrib/bloom/blutils.c
@@ -138,6 +138,7 @@ blhandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = blendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index b194d33..cefb071 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -111,6 +111,7 @@ brinhandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = brinendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c
index 9a417ca..8b83955 100644
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -88,6 +88,7 @@ gisthandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = gistendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c
index 07496f8..0cc37c0 100644
--- a/src/backend/access/hash/hash.c
+++ b/src/backend/access/hash/hash.c
@@ -84,6 +84,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = hashendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = hashrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -264,6 +265,8 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 	OffsetNumber offnum;
 	ItemPointer current;
 	bool		res;
+	IndexTuple	itup;
+
 
 	/* Hash indexes are always lossy since we store only the hash code */
 	scan->xs_recheck = true;
@@ -301,8 +304,6 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 			 offnum <= maxoffnum;
 			 offnum = OffsetNumberNext(offnum))
 		{
-			IndexTuple	itup;
-
 			itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 			if (ItemPointerEquals(&(so->hashso_heappos), &(itup->t_tid)))
 				break;
diff --git a/src/backend/access/hash/hashsearch.c b/src/backend/access/hash/hashsearch.c
index 4825558..cf44214 100644
--- a/src/backend/access/hash/hashsearch.c
+++ b/src/backend/access/hash/hashsearch.c
@@ -59,6 +59,8 @@ _hash_next(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
 	return true;
 }
 
@@ -263,6 +265,9 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
+
 	return true;
 }
 
diff --git a/src/backend/access/hash/hashutil.c b/src/backend/access/hash/hashutil.c
index 822862d..c11a7ac 100644
--- a/src/backend/access/hash/hashutil.c
+++ b/src/backend/access/hash/hashutil.c
@@ -17,8 +17,12 @@
 #include "access/hash.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
+#include "nodes/execnodes.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 
 /*
@@ -352,3 +356,107 @@ _hash_binsearch_last(Page page, uint32 hash_value)
 
 	return lower;
 }
+
+bool
+hashrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	IndexInfo  *indexInfo;
+	EState	   *estate;
+	ExprContext *econtext;
+	TupleTableSlot *slot;
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	Datum		values2[INDEX_MAX_KEYS];
+	bool		isnull2[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	indexInfo = BuildIndexInfo(indexRel);
+
+	/*
+	 * The heap tuple must be put into a slot for FormIndexDatum.
+	 */
+	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
+
+	ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
+
+	/*
+	 * Typically the index won't have expressions, but if it does we need an
+	 * EState to evaluate them.  We need it for exclusion constraints too,
+	 * even if they are just on simple columns.
+	 */
+	if (indexInfo->ii_Expressions != NIL ||
+			indexInfo->ii_ExclusionOps != NULL)
+	{
+		estate = CreateExecutorState();
+		econtext = GetPerTupleExprContext(estate);
+		econtext->ecxt_scantuple = slot;
+	}
+	else
+		estate = NULL;
+
+	/*
+	 * Form the index values and isnull flags for the index entry that we need
+	 * to check.
+	 *
+	 * Note: if the index uses functions that are not as immutable as they are
+	 * supposed to be, this could produce an index tuple different from the
+	 * original.  The index AM can catch such errors by verifying that it
+	 * finds a matching index entry with the tuple's TID.  For exclusion
+	 * constraints we check this in check_exclusion_constraint().
+	 */
+	FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
+	/*
+	 * HASH indexes compute a hash value of the key and store that in the
+	 * index. So we must first obtain the hash of the value obtained from the
+	 * heap and then do a comparison
+	 */
+	_hash_convert_tuple(indexRel, values, isnull, values2, isnull2);
+		
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL then they are equal
+		 */
+		if (isnull2[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If either is NULL then they are not equal
+		 */
+		if (isnull2[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now do a raw memory comparison
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values2[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	if (estate != NULL)
+		FreeExecutorState(estate);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	return equal;
+
+}
diff --git a/src/backend/access/heap/README.WARM b/src/backend/access/heap/README.WARM
new file mode 100644
index 0000000..5f81b7c
--- /dev/null
+++ b/src/backend/access/heap/README.WARM
@@ -0,0 +1,268 @@
+src/backend/access/heap/README.WARM
+
+Write Amplification Reduction Method (WARM)
+===========================================
+
+The Heap Only Tuple (HOT) feature eliminated many redundant index
+entries and allowed re-use of the dead space occupied by previously
+updated or deleted tuples (see src/backend/access/heap/README.HOT).
+
+One of the necessary conditions for a HOT update is that it must not
+change any column used by any of the indexes on the table.  This
+condition is sometimes hard to meet, especially for complex workloads
+with several indexes on large yet frequently updated tables.  Worse,
+even when only one or two indexed columns are updated, the regular
+non-HOT update still inserts a new entry in every index on the table,
+irrespective of whether that index's key changed or not.
+
+WARM is a technique devised to address these problems.
+
+
+Update Chains With Multiple Index Entries Pointing to the Root
+--------------------------------------------------------------
+
+When a non-HOT update is caused by an index key change, a new index entry
+must be inserted for the changed index. But if the index key hasn't changed
+for other indexes, we don't really need to insert a new entry. Even though the
+existing index entry is pointing to the old tuple, the new tuple is reachable
+via the t_ctid chain. To keep things simple, a WARM update requires that the
+heap block must have enough space to store the new version of the tuple.
+This is the same requirement as for HOT updates.
+
+In WARM, we ensure that every index entry always points to the root of
+the WARM chain. In fact, a WARM chain looks exactly like a HOT chain
+except for the fact that there could be multiple index entries pointing
+to the root of the chain.  So when a WARM update inserts a new entry in
+an index for the updated tuple, that new entry is made to point to the
+root of the WARM chain.
+
+For example, take a table with two columns and an index on each of the
+columns.  When a tuple is first inserted into the table, each index has
+exactly one entry pointing to the tuple.
+
+	lp [1]
+	[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's entry (aaaa) also points to 1
+
+Now if the tuple's second column is updated and there is room on the
+page, we perform a WARM update: Index1 does not get any new entry, and
+Index2's new entry still points to the root tuple of the chain.
+
+	lp [1]  [2]
+	[1111, aaaa]->[1111, bbbb]
+
+	Index1's entry (1111) points to 1
+	Index2's old entry (aaaa) points to 1
+	Index2's new entry (bbbb) also points to 1
+
+"An update chain which has more than one index entry pointing to its
+root line pointer is called a WARM chain, and the action that creates a
+WARM chain is called a WARM update."
+
+Since all indexes always point to the root of the WARM chain, even when
+more than one index entry exists, WARM chains can be pruned and dead
+tuples removed without any need for corresponding index cleanup.
+
+While this solves the problem of pruning dead tuples from a HOT/WARM
+chain, it also opens up a new technical challenge because now we have a
+situation where a heap tuple is reachable from multiple index entries,
+each having a different index key. While MVCC still ensures that only
+valid tuples are returned, a tuple with a wrong index key may be
+returned because of wrong index entries. In the above example, tuple
+[1111, bbbb] is reachable from both keys (aaaa) as well as (bbbb). For
+this reason, tuples returned from a WARM chain must always be rechecked
+for index key-match.
+
+Recheck Index Key Against Heap Tuple
+------------------------------------
+
+Since every Index AM has its own notion of index tuples, each Index AM
+must implement its own method to recheck heap tuples.  For example, a
+hash index stores the hash value of the column, and hence the recheck
+routine for the hash AM must first compute the hash value of the heap
+attribute and then compare it against the value stored in the index
+tuple.
+
+The patch currently implements recheck routines for hash and btree
+indexes.  If the table has an index whose AM does not provide a recheck
+routine, WARM updates are disabled on that table.
+
+Problem With Duplicate (key, ctid) Index Entries
+------------------------------------------------
+
+The index-key recheck logic works as long as no two index entries with
+the same key point to the same WARM chain.  Otherwise, the same valid
+tuple is reachable via multiple index entries, each of which satisfies
+the index key check.  In the above example, if the tuple [1111, bbbb] is
+again updated to [1111, aaaa] and we insert a new index entry (aaaa)
+pointing to the root line pointer, we will end up with the following
+structure:
+
+	lp [1]  [2]  [3]
+	[1111, aaaa]->[1111, bbbb]->[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's oldest entry (aaaa) points to 1
+	Index2's old entry (bbbb) also points to 1
+	Index2's new entry (aaaa) also points to 1
+
+We must solve this problem to ensure that the same tuple is not
+reachable via multiple index pointers.  There are a couple of ways to
+address this issue:
+
+1. Do not allow a WARM update of a tuple that is already part of a WARM
+chain.  This guarantees that there can never be duplicate index entries
+pointing to the same root line pointer, because the first WARM update
+must have checked both the old and the new index keys.
+
+2. Do not allow duplicate (key, ctid) index pointers. In the above
+example, since (aaaa, 1) already exists in the index, we must not insert
+a duplicate index entry.
+
+The patch currently implements option 1, i.e. it does not allow a WARM
+update of a tuple that already belongs to a WARM chain.  HOT updates are
+fine because they do not add a new index entry.
+
+Even with this restriction, WARM is a significant improvement because
+the number of regular updates is cut roughly in half.
+
+Expression and Partial Indexes
+------------------------------
+
+Expressions may evaluate to the same value even if the underlying column
+values have changed.  A simple example is an index on "lower(col)",
+which returns the same value if the new heap value differs only in case.
+So we cannot rely solely on the heap column check to decide whether or
+not to insert a new index entry for expression indexes.  Similarly, for
+partial indexes the predicate expression must be evaluated to decide
+whether or not a new index entry is needed when columns referred to in
+the predicate change.
+
+(None of this is currently implemented; we simply disallow a WARM update
+if a column used by an expression index or an index predicate has
+changed.)
+
+
+Efficiently Finding the Root Line Pointer
+-----------------------------------------
+
+During a WARM update, we must be able to find the root line pointer of
+the tuple being updated.  Normally, the t_ctid field in the heap tuple
+header is used to find the next tuple in the update chain; but the tuple
+being updated is necessarily the last tuple in the chain, and in that
+case t_ctid conventionally points to the tuple itself.  So in theory we
+could use t_ctid to store additional information in the last tuple of
+the update chain, as long as the fact that it is the last tuple is
+recorded elsewhere.
+
+We now utilize another bit in t_infomask2 to explicitly identify the
+last tuple in the update chain.
+
+HEAP_LATEST_TUPLE - When this bit is set, the tuple is the last tuple in
+the update chain. The OffsetNumber part of t_ctid points to the root
+line pointer of the chain when HEAP_LATEST_TUPLE flag is set.
+
+If the UPDATE operation aborts, the last tuple in the update chain
+becomes dead, and the tuple which remains the last valid one in the
+chain no longer carries the root line pointer information.  In such rare
+cases, the root line pointer must be found the hard way, by scanning the
+entire heap page.
+
+Tracking WARM Chains
+--------------------
+
+The old tuple and every subsequent tuple in the chain are marked with a
+special HEAP_WARM_TUPLE flag.  We use the last remaining bit in
+t_infomask2 to store this information.
+
+When a tuple is returned from a WARM chain, the caller must do
+additional checks to ensure that the tuple matches the index key.  Even
+a tuple which precedes the WARM update in the chain must be rechecked
+for an index key match (this covers the case where the old tuple is
+reached via the new index key).  So we must always follow the update
+chain to its end to check whether this is a WARM chain.
+
+When the old updated tuple is retired and the root line pointer is
+converted into a redirect line pointer, we can copy the information
+about the WARM chain to the redirect line pointer by storing a special
+value in the lp_len field of the line pointer.  This handles the most
+common case, where a WARM chain is replaced by a redirect line pointer
+and a single tuple in the chain.
+
+Converting WARM chains back to HOT chains (VACUUM ?)
+----------------------------------------------------
+
+The current implementation of WARM allows only one WARM update per
+chain.  This simplifies the design and addresses certain issues around
+duplicate scans.  But it also implies that the benefit of WARM is capped
+at 50%.  That is still significant, but if we could return WARM chains
+to normal HOT status, we could do far more WARM updates.
+
+A distinct property of a WARM chain is that at least one index has more
+than one live index entry pointing to the root of the chain.  In other
+words, if we can remove the duplicate entries from every index, or
+conclusively prove that there are no duplicate index entries for the
+root line pointer, the chain can again be marked as HOT.
+
+Here is one idea:
+
+A WARM chain has two parts, separated by the tuple that caused the WARM
+update.  All tuples within each part have matching index keys, but some
+index keys may differ between the two parts.  Let's say we mark the heap
+tuples in each part with a special Red-Blue flag, and replicate the same
+flag in the index tuples.  For example, when new rows are inserted into
+a table, they are marked with the Blue flag, and the index entries
+associated with those rows are also marked Blue.  When a row is WARM
+updated, the new version is marked with the Red flag and the new index
+entry created by the update is also marked Red.
+
+
+Heap chain: [1] [2] [3] [4]
+			[aaaa, 1111]B -> [aaaa, 1111]B -> [bbbb, 1111]R -> [bbbb, 1111]R
+
+Index1: 	(aaaa)B points to 1 (satisfies only tuples marked with B)
+			(bbbb)R points to 1 (satisfies only tuples marked with R)
+
+Index2:		(1111)B points to 1 (satisfies both B and R tuples)
+
+
+It's clear that for indexes with both Red and Blue pointers, a heap
+tuple with the Blue flag is reachable from the Blue pointer and one with
+the Red flag from the Red pointer.  But for indexes which did not create
+a new entry, both Blue and Red tuples are reachable from the Blue
+pointer (there is no Red pointer in such indexes).  So, as a side note,
+merely matching Red and Blue flags is not enough from an index scan
+perspective.
+
+During the first heap scan of VACUUM, we look for tuples with
+HEAP_WARM_TUPLE set.  If all live tuples in the chain are marked with
+either the Blue flag or the Red flag (but not a mix of Red and Blue),
+then the chain is a candidate for HOT conversion.  We remember the root
+line pointer and the Red-Blue flag of the WARM chain in a separate
+array.
+
+If we have a Red WARM chain, then our goal is to remove the Blue
+pointers, and vice versa.  But there is a catch: for Index2 above there
+is only a Blue pointer, and it must not be removed.  IOW we should
+remove a Blue pointer iff a Red pointer exists.  Since index vacuum may
+visit Red and Blue pointers in any order, I think we will need another
+index pass to remove dead index pointers.  So in the first index pass we
+check which WARM candidates have two index pointers.  In the second
+pass, we remove the dead pointer and reset the Red flag if the surviving
+index pointer is Red.
+
+During the second heap scan, we fix the WARM chains by clearing the
+HEAP_WARM_TUPLE flag and resetting Red flags to Blue.
+
+There are some more problems around aborted vacuums.  For example, if
+vacuum aborts after changing a Red index flag to Blue but before
+removing the other Blue pointer, we will end up with two Blue pointers
+to a Red WARM chain.  But since the HEAP_WARM_TUPLE flag on the heap
+tuple is still set, further WARM updates to the chain will be blocked.
+I guess we will need some special handling for the case of multiple
+Blue pointers.  We can either leave these WARM chains alone and let them
+die with a subsequent non-WARM update, or apply the heap-recheck logic
+during index vacuum to find the dead pointer.  Given that vacuum aborts
+are not common, I am inclined to leave this case unhandled.  We must
+still check for the presence of multiple Blue pointers and ensure that
+we neither accidentally remove either of the Blue pointers nor clear
+such WARM chains.
+
+
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index ae5839a..eafedae 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -99,7 +99,10 @@ static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 static void HeapSatisfiesHOTandKeyUpdate(Relation relation,
 							 Bitmapset *hot_attrs,
 							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
+							 Bitmapset *exprindx_attrs,
+							 Bitmapset **updated_attrs,
+							 bool *satisfies_hot, bool *satisfies_warm,
+							 bool *satisfies_key,
 							 bool *satisfies_id,
 							 HeapTuple oldtup, HeapTuple newtup);
 static bool heap_acquire_tuplock(Relation relation, ItemPointer tid,
@@ -1960,6 +1963,76 @@ heap_fetch(Relation relation,
 }
 
 /*
+ * Check whether the HOT chain originating at or continuing through tid ever
+ * became a WARM chain, even if the UPDATE operation that created it finally
+ * aborted.
+ */
+static void
+hot_check_warm_chain(Page dp, ItemPointer tid, bool *recheck)
+{
+	TransactionId prev_xmax = InvalidTransactionId;
+	OffsetNumber offnum;
+	HeapTupleData heapTuple;
+
+	if (*recheck == true)
+		return;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+			break;
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		/*
+		 * The presence of either a WARM or a WARM-updated tuple signals
+		 * possible breakage, and the caller must recheck any tuple returned
+		 * from this chain for index satisfaction.
+		 */
+		if (HeapTupleHeaderIsHeapWarmTuple(heapTuple.t_data))
+		{
+			*recheck = true;
+			break;
+		}
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (HeapTupleIsHotUpdated(&heapTuple))
+		{
+			offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+			prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+		}
+		else
+			break;				/* end of chain */
+	}
+
+}
+
+/*
  *	heap_hot_search_buffer	- search HOT chain for tuple satisfying snapshot
  *
  * On entry, *tid is the TID of a tuple (either a simple tuple, or the root
@@ -1979,11 +2052,14 @@ heap_fetch(Relation relation,
  * Unlike heap_fetch, the caller must already have pin and (at least) share
  * lock on the buffer; it is still pinned/locked at exit.  Also unlike
  * heap_fetch, we do not report any pgstats count; caller may do so if wanted.
+ *
+ * *recheck should be set to false by the caller on entry; it will be set to
+ * true on exit if a WARM tuple is encountered.
  */
 bool
 heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 					   Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call)
+					   bool *all_dead, bool first_call, bool *recheck)
 {
 	Page		dp = (Page) BufferGetPage(buffer);
 	TransactionId prev_xmax = InvalidTransactionId;
@@ -2025,6 +2101,16 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 				/* Follow the redirect */
 				offnum = ItemIdGetRedirect(lp);
 				at_chain_start = false;
+
+				/* Check if it's a WARM chain */
+				if (recheck && *recheck == false)
+				{
+					if (ItemIdIsHeapWarm(lp))
+					{
+						*recheck = true;
+						Assert(!IsSystemRelation(relation));
+					}
+				}
 				continue;
 			}
 			/* else must be end of chain */
@@ -2039,7 +2125,8 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		/*
 		 * Shouldn't see a HEAP_ONLY tuple at chain start.
 		 */
-		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple))
+		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple) &&
+				!HeapTupleIsHeapWarmTuple(heapTuple))
 			break;
 
 		/*
@@ -2051,6 +2138,20 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 								 HeapTupleHeaderGetXmin(heapTuple->t_data)))
 			break;
 
+		/*
+		 * Check whether a WARM tuple exists somewhere down the chain and, if
+		 * so, set recheck to true.
+		 *
+		 * XXX This is not very efficient right now; we should look for
+		 * possible improvements here.
+		 */
+		if (recheck && *recheck == false)
+		{
+			hot_check_warm_chain(dp, &heapTuple->t_self, recheck);
+			if (recheck && *recheck == true)
+				Assert(!IsSystemRelation(relation));
+		}
+
 		/*
 		 * When first_call is true (and thus, skip is initially false) we'll
 		 * return the first tuple we find.  But on later passes, heapTuple
@@ -2124,18 +2225,41 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
  */
 bool
 heap_hot_search(ItemPointer tid, Relation relation, Snapshot snapshot,
-				bool *all_dead)
+				bool *all_dead, bool *recheck, Buffer *cbuffer,
+				HeapTuple heapTuple)
 {
 	bool		result;
 	Buffer		buffer;
-	HeapTupleData heapTuple;
+	ItemPointerData ret_tid = *tid;
 
 	buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	LockBuffer(buffer, BUFFER_LOCK_SHARE);
-	result = heap_hot_search_buffer(tid, relation, buffer, snapshot,
-									&heapTuple, all_dead, true);
-	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-	ReleaseBuffer(buffer);
+	result = heap_hot_search_buffer(&ret_tid, relation, buffer, snapshot,
+									heapTuple, all_dead, true, recheck);
+
+	/*
+	 * If we are returning a potential candidate tuple from this chain and the
+	 * caller has requested the "recheck" hint, keep the buffer locked and
+	 * pinned. The caller must release the lock and pin on the buffer in all
+	 * such cases.
+	 */
+	if (!result || !recheck || !(*recheck))
+	{
+		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+		ReleaseBuffer(buffer);
+	}
+
+	/*
+	 * Set the caller-supplied tid to the actual location of the tuple being
+	 * returned.
+	 */
+	if (result)
+	{
+		*tid = ret_tid;
+		if (cbuffer)
+			*cbuffer = buffer;
+	}
+
 	return result;
 }
 
@@ -3442,13 +3566,15 @@ simple_heap_delete(Relation relation, ItemPointer tid)
 HTSU_Result
 heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode)
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **updated_attrs, bool *warm_update)
 {
 	HTSU_Result result;
 	TransactionId xid = GetCurrentTransactionId();
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *exprindx_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3469,9 +3595,11 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		have_tuple_lock = false;
 	bool		iscombo;
 	bool		satisfies_hot;
+	bool		satisfies_warm;
 	bool		satisfies_key;
 	bool		satisfies_id;
 	bool		use_hot_update = false;
+	bool		use_warm_update = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
 	bool		all_visible_cleared_new = false;
@@ -3496,6 +3624,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot update tuples during a parallel operation")));
 
+	/* Assume a non-WARM update */
+	if (warm_update)
+		*warm_update = false;
+
 	/*
 	 * Fetch the list of attributes to be checked for HOT update.  This is
 	 * wasted effort if we fail to update or have to put the new tuple on a
@@ -3512,6 +3644,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	exprindx_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_EXPR_PREDICATE);
 
 	block = ItemPointerGetBlockNumber(otid);
 	offnum = ItemPointerGetOffsetNumber(otid);
@@ -3571,7 +3705,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	 * serendipitiously arrive at the same key values.
 	 */
 	HeapSatisfiesHOTandKeyUpdate(relation, hot_attrs, key_attrs, id_attrs,
-								 &satisfies_hot, &satisfies_key,
+								 exprindx_attrs,
+								 updated_attrs,
+								 &satisfies_hot, &satisfies_warm,
+								 &satisfies_key,
 								 &satisfies_id, &oldtup, newtup);
 	if (satisfies_key)
 	{
@@ -4117,6 +4254,34 @@ l2:
 		 */
 		if (satisfies_hot)
 			use_hot_update = true;
+		else
+		{
+			/*
+			 * If no WARM updates yet on this chain, let this update be a WARM
+			 * update.
+			 *
+			 * We check for both WARM and WARM-updated tuples since, if a
+			 * previous WARM update aborted, we may still have added
+			 * another index entry for this HOT chain. In such situations, we
+			 * must not attempt a WARM update until the duplicate (key, CTID)
+			 * index entry issue is sorted out.
+			 *
+			 * XXX Later we'll add more checks to ensure WARM chains can
+			 * themselves be further WARM updated. This is probably best done
+			 * after a first round of tests of the remaining functionality.
+			 *
+			 * XXX Disable WARM updates on system tables. There is nothing in
+			 * principle that stops us from supporting this, but it would
+			 * require an API change to propagate the changed columns back to
+			 * the caller so that CatalogUpdateIndexes() can avoid adding new
+			 * entries to indexes that are not changed by the update. This
+			 * will be fixed once the basic patch is tested. !!FIXME
+			 */
+			if (satisfies_warm &&
+				!HeapTupleIsHeapWarmTuple(&oldtup) &&
+				!IsSystemRelation(relation))
+				use_warm_update = true;
+		}
 	}
 	else
 	{
@@ -4157,6 +4322,21 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+
+		/*
+		 * Even if we are doing a HOT update, we must carry forward the WARM
+		 * flag, because we may have already inserted another index entry
+		 * pointing to our root and a third entry could create duplicates.
+		 *
+		 * XXX This should be revisited if we get an index (key, CTID)
+		 * duplicate detection mechanism in place.
+		 */
+		if (HeapTupleIsHeapWarmTuple(&oldtup))
+		{
+			HeapTupleSetHeapWarmTuple(heaptup);
+			HeapTupleSetHeapWarmTuple(newtup);
+		}
+
 		/*
 		 * For HOT (or WARM) updated tuples, we store the offset of the root
 		 * line pointer of this chain in the ip_posid field of the new tuple.
@@ -4172,12 +4352,38 @@ l2:
 					ItemPointerGetOffsetNumber(&(oldtup.t_self)),
 					&root_offnum);
 	}
+	else if (use_warm_update)
+	{
+		Assert(!IsSystemRelation(relation));
+
+		/* Mark the old tuple as HOT-updated */
+		HeapTupleSetHotUpdated(&oldtup);
+		HeapTupleSetHeapWarmTuple(&oldtup);
+		/* And mark the new tuple as heap-only */
+		HeapTupleSetHeapOnly(heaptup);
+		HeapTupleSetHeapWarmTuple(heaptup);
+		/* Mark the caller's copy too, in case different from heaptup */
+		HeapTupleSetHeapOnly(newtup);
+		HeapTupleSetHeapWarmTuple(newtup);
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			heap_get_root_tuple_one(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)),
+					&root_offnum);
+
+		/* Let the caller know we did a WARM update */
+		if (warm_update)
+			*warm_update = true;
+	}
 	else
 	{
 		/* Make sure tuples are correctly marked as not-HOT */
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		HeapTupleClearHeapWarmTuple(heaptup);
+		HeapTupleClearHeapWarmTuple(newtup);
 		root_offnum = InvalidOffsetNumber;
 	}
 
@@ -4296,7 +4502,12 @@ l2:
 	if (have_tuple_lock)
 		UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode);
 
-	pgstat_count_heap_update(relation, use_hot_update);
+	/*
+	 * Even with WARM we still count stats under the HOT-update label,
+	 * since we continue to use that term even though such updates are
+	 * now more frequent than previously.
+	 */
+	pgstat_count_heap_update(relation, use_hot_update || use_warm_update);
 
 	/*
 	 * If heaptup is a private copy, release it.  Don't forget to copy t_self
@@ -4403,6 +4614,13 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
  * will be checking very similar sets of columns, and doing the same tests on
  * them, it makes sense to optimize and do them together.
  *
+ * exprindx_attrs designates the set of attributes used in expression or
+ * predicate indexes. In this version, we don't allow WARM updates if an
+ * expression or predicate index column is updated.
+ *
+ * If updated_attrs is not NULL, the caller wants to know the full set of
+ * changed attributes, so we must check every attribute.
+ *
  * We receive three bitmapsets comprising the three sets of columns we're
  * interested in.  Note these are destructively modified; that is OK since
  * this is invoked at most once in heap_update.
@@ -4415,7 +4633,11 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 static void
 HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
+							 Bitmapset *exprindx_attrs,
+							 Bitmapset **updated_attrs,
+							 bool *satisfies_hot,
+							 bool *satisfies_warm,
+							 bool *satisfies_key,
 							 bool *satisfies_id,
 							 HeapTuple oldtup, HeapTuple newtup)
 {
@@ -4452,8 +4674,11 @@ HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 		 * Since the HOT attributes are a superset of the key attributes and
 		 * the key attributes are a superset of the id attributes, this logic
 		 * is guaranteed to identify the next column that needs to be checked.
+		 *
+		 * If the caller also wants to know the full set of updated index
+		 * attributes, we must scan through all of them.
 		 */
-		if (hot_result && next_hot_attnum > FirstLowInvalidHeapAttributeNumber)
+		if ((hot_result || updated_attrs) && next_hot_attnum > FirstLowInvalidHeapAttributeNumber)
 			check_now = next_hot_attnum;
 		else if (key_result && next_key_attnum > FirstLowInvalidHeapAttributeNumber)
 			check_now = next_key_attnum;
@@ -4474,8 +4699,12 @@ HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 			if (check_now == next_id_attnum)
 				id_result = false;
 
+			if (updated_attrs)
+				*updated_attrs = bms_add_member(*updated_attrs, check_now -
+						FirstLowInvalidHeapAttributeNumber);
+
 			/* if all are false now, we can stop checking */
-			if (!hot_result && !key_result && !id_result)
+			if (!hot_result && !key_result && !id_result && !updated_attrs)
 				break;
 		}
 
@@ -4486,7 +4715,7 @@ HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 		 * bms_first_member() will return -1 and the attribute number will end
 		 * up with a value less than FirstLowInvalidHeapAttributeNumber.
 		 */
-		if (hot_result && check_now == next_hot_attnum)
+		if ((hot_result || updated_attrs) && check_now == next_hot_attnum)
 		{
 			next_hot_attnum = bms_first_member(hot_attrs);
 			next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
@@ -4503,6 +4732,13 @@ HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 		}
 	}
 
+	if (updated_attrs && bms_overlap(*updated_attrs, exprindx_attrs))
+		*satisfies_warm = false;
+	else if (!relation->rd_supportswarm)
+		*satisfies_warm = false;
+	else
+		*satisfies_warm = true;
+
 	*satisfies_hot = hot_result;
 	*satisfies_key = key_result;
 	*satisfies_id = id_result;
@@ -4526,7 +4762,7 @@ simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
 	result = heap_update(relation, otid, tup,
 						 GetCurrentCommandId(true), InvalidSnapshot,
 						 true /* wait for commit */ ,
-						 &hufd, &lockmode);
+						 &hufd, &lockmode, NULL, NULL);
 	switch (result)
 	{
 		case HeapTupleSelfUpdated:
@@ -7413,6 +7649,7 @@ log_heap_cleanup_info(RelFileNode rnode, TransactionId latestRemovedXid)
 XLogRecPtr
 log_heap_clean(Relation reln, Buffer buffer,
 			   OffsetNumber *redirected, int nredirected,
+			   OffsetNumber *warm, int nwarm,
 			   OffsetNumber *nowdead, int ndead,
 			   OffsetNumber *nowunused, int nunused,
 			   TransactionId latestRemovedXid)
@@ -7426,6 +7663,7 @@ log_heap_clean(Relation reln, Buffer buffer,
 	xlrec.latestRemovedXid = latestRemovedXid;
 	xlrec.nredirected = nredirected;
 	xlrec.ndead = ndead;
+	xlrec.nwarm = nwarm;
 
 	XLogBeginInsert();
 	XLogRegisterData((char *) &xlrec, SizeOfHeapClean);
@@ -7448,6 +7686,10 @@ log_heap_clean(Relation reln, Buffer buffer,
 		XLogRegisterBufData(0, (char *) nowdead,
 							ndead * sizeof(OffsetNumber));
 
+	if (nwarm > 0)
+		XLogRegisterBufData(0, (char *) warm,
+							nwarm * sizeof(OffsetNumber));
+
 	if (nunused > 0)
 		XLogRegisterBufData(0, (char *) nowunused,
 							nunused * sizeof(OffsetNumber));
@@ -7553,6 +7795,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	bool		need_tuple_data = RelationIsLogicallyLogged(reln);
 	bool		init;
 	int			bufflags;
+	bool		warm_update = false;
 
 	/* Caller should not call me on a non-WAL-logged relation */
 	Assert(RelationNeedsWAL(reln));
@@ -7564,6 +7807,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	else
 		info = XLOG_HEAP_UPDATE;
 
+	if (HeapTupleIsHeapWarmTuple(newtup))
+		warm_update = true;
+
 	/*
 	 * If the old and new tuple are on the same page, we only need to log the
 	 * parts of the new tuple that were changed.  That saves on the amount of
@@ -7637,6 +7883,8 @@ log_heap_update(Relation reln, Buffer oldbuf,
 				xlrec.flags |= XLH_UPDATE_CONTAINS_OLD_KEY;
 		}
 	}
+	if (warm_update)
+		xlrec.flags |= XLH_UPDATE_WARM_UPDATE;
 
 	/* If new tuple is the single and first tuple on page... */
 	if (ItemPointerGetOffsetNumber(&(newtup->t_self)) == FirstOffsetNumber &&
@@ -8004,24 +8252,38 @@ heap_xlog_clean(XLogReaderState *record)
 		OffsetNumber *redirected;
 		OffsetNumber *nowdead;
 		OffsetNumber *nowunused;
+		OffsetNumber *warm;
 		int			nredirected;
 		int			ndead;
 		int			nunused;
+		int			nwarm;
+		int			i;
 		Size		datalen;
+		bool		warmchain[MaxHeapTuplesPerPage + 1];
 
 		redirected = (OffsetNumber *) XLogRecGetBlockData(record, 0, &datalen);
 
 		nredirected = xlrec->nredirected;
 		ndead = xlrec->ndead;
+		nwarm = xlrec->nwarm;
+
 		end = (OffsetNumber *) ((char *) redirected + datalen);
 		nowdead = redirected + (nredirected * 2);
-		nowunused = nowdead + ndead;
-		nunused = (end - nowunused);
+		warm = nowdead + ndead;
+		nowunused = warm + nwarm;
+
+		nunused = (end - nowunused);
 		Assert(nunused >= 0);
 
+		memset(warmchain, 0, sizeof (warmchain));
+		for (i = 0; i < nwarm; i++)
+			warmchain[warm[i]] = true;
+
+
 		/* Update all item pointers per the record, and repair fragmentation */
 		heap_page_prune_execute(buffer,
 								redirected, nredirected,
+								warmchain,
 								nowdead, ndead,
 								nowunused, nunused);
 
@@ -8608,16 +8870,22 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 	Size		freespace = 0;
 	XLogRedoAction oldaction;
 	XLogRedoAction newaction;
+	bool		warm_update = false;
 
 	/* initialize to keep the compiler quiet */
 	oldtup.t_data = NULL;
 	oldtup.t_len = 0;
 
+	if (xlrec->flags & XLH_UPDATE_WARM_UPDATE)
+		warm_update = true;
+
 	XLogRecGetBlockTag(record, 0, &rnode, NULL, &newblk);
 	if (XLogRecGetBlockTag(record, 1, NULL, NULL, &oldblk))
 	{
 		/* HOT updates are never done across pages */
 		Assert(!hot_update);
+		/* WARM updates are never done across pages */
+		Assert(!warm_update);
 	}
 	else
 		oldblk = newblk;
@@ -8677,6 +8945,10 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 								   &htup->t_infomask2);
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
+
+		if (warm_update)
+			HeapTupleHeaderSetHeapWarmTuple(htup);
+
 		/* Set forward chain link in t_ctid */
 		HeapTupleHeaderSetNextCtid(htup, ItemPointerGetBlockNumber(&newtid),
 				ItemPointerGetOffsetNumber(&newtid));
@@ -8812,6 +9084,10 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
+
+		if (warm_update)
+			HeapTupleHeaderSetHeapWarmTuple(htup);
+
 		/* Make sure there is no forward chain link in t_ctid */
 		HeapTupleHeaderSetHeapLatest(htup);
 
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index f0cbf77..6a03f9d 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -36,12 +36,19 @@ typedef struct
 	int			nredirected;	/* numbers of entries in arrays below */
 	int			ndead;
 	int			nunused;
+	int			nwarm;
 	/* arrays that accumulate indexes of items to be changed */
 	OffsetNumber redirected[MaxHeapTuplesPerPage * 2];
 	OffsetNumber nowdead[MaxHeapTuplesPerPage];
 	OffsetNumber nowunused[MaxHeapTuplesPerPage];
+	OffsetNumber warm[MaxHeapTuplesPerPage];
 	/* marked[i] is TRUE if item i is entered in one of the above arrays */
 	bool		marked[MaxHeapTuplesPerPage + 1];
+	/*
+	 * warmchain[i] is TRUE if item i is becoming a redirect line pointer
+	 * that points to a WARM chain
+	 */
+	bool		warmchain[MaxHeapTuplesPerPage + 1];
 } PruneState;
 
 /* Local functions */
@@ -54,6 +61,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 						   OffsetNumber offnum, OffsetNumber rdoffnum);
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
+static void heap_prune_record_warmupdate(PruneState *prstate,
+						   OffsetNumber offnum);
 
 static void heap_get_root_tuples_internal(Page page,
 				OffsetNumber target_offnum, OffsetNumber *root_offsets);
@@ -203,8 +212,9 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
 	 */
 	prstate.new_prune_xid = InvalidTransactionId;
 	prstate.latestRemovedXid = *latestRemovedXid;
-	prstate.nredirected = prstate.ndead = prstate.nunused = 0;
+	prstate.nredirected = prstate.ndead = prstate.nunused = prstate.nwarm = 0;
 	memset(prstate.marked, 0, sizeof(prstate.marked));
+	memset(prstate.warmchain, 0, sizeof(prstate.warmchain));
 
 	/* Scan the page */
 	maxoff = PageGetMaxOffsetNumber(page);
@@ -241,6 +251,7 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
 		 */
 		heap_page_prune_execute(buffer,
 								prstate.redirected, prstate.nredirected,
+								prstate.warmchain,
 								prstate.nowdead, prstate.ndead,
 								prstate.nowunused, prstate.nunused);
 
@@ -268,6 +279,7 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
 
 			recptr = log_heap_clean(relation, buffer,
 									prstate.redirected, prstate.nredirected,
+									prstate.warm, prstate.nwarm,
 									prstate.nowdead, prstate.ndead,
 									prstate.nowunused, prstate.nunused,
 									prstate.latestRemovedXid);
@@ -479,6 +491,12 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
 			!TransactionIdEquals(HeapTupleHeaderGetXmin(htup), priorXmax))
 			break;
 
+		if (HeapTupleHeaderIsHeapWarmTuple(htup))
+		{
+			Assert(!IsSystemRelation(relation));
+			heap_prune_record_warmupdate(prstate, rootoffnum);
+		}
+
 		/*
 		 * OK, this tuple is indeed a member of the chain.
 		 */
@@ -668,6 +686,18 @@ heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum)
 	prstate->marked[offnum] = true;
 }
 
+/* Record item pointer which is a root of a WARM chain */
+static void
+heap_prune_record_warmupdate(PruneState *prstate, OffsetNumber offnum)
+{
+	Assert(prstate->nwarm < MaxHeapTuplesPerPage);
+	if (prstate->warmchain[offnum])
+		return;
+	prstate->warm[prstate->nwarm] = offnum;
+	prstate->nwarm++;
+	prstate->warmchain[offnum] = true;
+}
+
 
 /*
  * Perform the actual page changes needed by heap_page_prune.
@@ -681,6 +711,7 @@ heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum)
 void
 heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
+						bool *warmchain,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused)
 {
@@ -697,6 +728,12 @@ heap_page_prune_execute(Buffer buffer,
 		ItemId		fromlp = PageGetItemId(page, fromoff);
 
 		ItemIdSetRedirect(fromlp, tooff);
+
+		/*
+		 * Save information about WARM chains in the item itself
+		 */
+		if (warmchain[fromoff])
+			ItemIdSetHeapWarm(fromlp);
 	}
 
 	/* Update all now-dead line pointers */
diff --git a/src/backend/access/index/genam.c b/src/backend/access/index/genam.c
index 65c941d..4f9fb12 100644
--- a/src/backend/access/index/genam.c
+++ b/src/backend/access/index/genam.c
@@ -99,7 +99,7 @@ RelationGetIndexScan(Relation indexRelation, int nkeys, int norderbys)
 	else
 		scan->orderByData = NULL;
 
-	scan->xs_want_itup = false; /* may be set later */
+	scan->xs_want_itup = true; /* hack for now to always get index tuple */
 
 	/*
 	 * During recovery we ignore killed tuples and don't bother to kill them
diff --git a/src/backend/access/index/indexam.c b/src/backend/access/index/indexam.c
index 54b71cb..6d9dc68 100644
--- a/src/backend/access/index/indexam.c
+++ b/src/backend/access/index/indexam.c
@@ -71,10 +71,12 @@
 #include "access/xlog.h"
 #include "catalog/catalog.h"
 #include "catalog/index.h"
+#include "executor/executor.h"
 #include "pgstat.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
+#include "utils/datum.h"
 #include "utils/snapmgr.h"
 #include "utils/tqual.h"
 
@@ -409,7 +411,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
 	/*
 	 * The AM's amgettuple proc finds the next index entry matching the scan
 	 * keys, and puts the TID into scan->xs_ctup.t_self.  It should also set
-	 * scan->xs_recheck and possibly scan->xs_itup, though we pay no attention
+	 * scan->xs_tuple_recheck and possibly scan->xs_itup, though we pay no attention
 	 * to those fields here.
 	 */
 	found = scan->indexRelation->rd_amroutine->amgettuple(scan, direction);
@@ -448,7 +450,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
  * dropped in a future index_getnext_tid, index_fetch_heap or index_endscan
  * call).
  *
- * Note: caller must check scan->xs_recheck, and perform rechecking of the
+ * Note: caller must check scan->xs_tuple_recheck, and perform rechecking of the
  * scan keys if required.  We do not do that here because we don't have
  * enough information to do it efficiently in the general case.
  * ----------------
@@ -475,6 +477,13 @@ index_fetch_heap(IndexScanDesc scan)
 		 */
 		if (prev_buf != scan->xs_cbuf)
 			heap_page_prune_opt(scan->heapRelation, scan->xs_cbuf);
+
+		/*
+		 * If we're not always re-checking, reset recheck for this tuple
+		 */
+		if (!scan->xs_recheck)
+			scan->xs_tuple_recheck = false;
+
 	}
 
 	/* Obtain share-lock on the buffer so we can examine visibility */
@@ -484,32 +493,50 @@ index_fetch_heap(IndexScanDesc scan)
 											scan->xs_snapshot,
 											&scan->xs_ctup,
 											&all_dead,
-											!scan->xs_continue_hot);
+											!scan->xs_continue_hot,
+											&scan->xs_tuple_recheck);
 	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
 
 	if (got_heap_tuple)
 	{
+		bool res = true;
+		if (scan->xs_tuple_recheck &&
+				scan->indexRelation->rd_amroutine->amrecheck) 
+		{
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_SHARE);
+			res = scan->indexRelation->rd_amroutine->amrecheck(
+						scan->indexRelation,
+						scan->xs_itup,
+						scan->heapRelation,
+						&scan->xs_ctup);
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+		}
+
 		/*
 		 * Only in a non-MVCC snapshot can more than one member of the HOT
 		 * chain be visible.
 		 */
 		scan->xs_continue_hot = !IsMVCCSnapshot(scan->xs_snapshot);
 		pgstat_count_heap_fetch(scan->indexRelation);
-		return &scan->xs_ctup;
-	}
 
-	/* We've reached the end of the HOT chain. */
-	scan->xs_continue_hot = false;
+		if (res)
+			return &scan->xs_ctup;
+	}
+	else
+	{
+		/* We've reached the end of the HOT chain. */
+		scan->xs_continue_hot = false;
 
-	/*
-	 * If we scanned a whole HOT chain and found only dead tuples, tell index
-	 * AM to kill its entry for that TID (this will take effect in the next
-	 * amgettuple call, in index_getnext_tid).  We do not do this when in
-	 * recovery because it may violate MVCC to do so.  See comments in
-	 * RelationGetIndexScan().
-	 */
-	if (!scan->xactStartedInRecovery)
-		scan->kill_prior_tuple = all_dead;
+		/*
+		 * If we scanned a whole HOT chain and found only dead tuples, tell index
+		 * AM to kill its entry for that TID (this will take effect in the next
+		 * amgettuple call, in index_getnext_tid).  We do not do this when in
+		 * recovery because it may violate MVCC to do so.  See comments in
+		 * RelationGetIndexScan().
+		 */
+		if (!scan->xactStartedInRecovery)
+			scan->kill_prior_tuple = all_dead;
+	}
 
 	return NULL;
 }
diff --git a/src/backend/access/nbtree/nbtinsert.c b/src/backend/access/nbtree/nbtinsert.c
index ef69290..1fb077e 100644
--- a/src/backend/access/nbtree/nbtinsert.c
+++ b/src/backend/access/nbtree/nbtinsert.c
@@ -19,11 +19,14 @@
 #include "access/nbtree.h"
 #include "access/transam.h"
 #include "access/xloginsert.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
 #include "miscadmin.h"
+#include "nodes/execnodes.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
 #include "utils/tqual.h"
-
+#include "utils/datum.h"
 
 typedef struct
 {
@@ -249,6 +252,9 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 	BTPageOpaque opaque;
 	Buffer		nbuf = InvalidBuffer;
 	bool		found = false;
+	Buffer		buffer;
+	HeapTupleData	heapTuple;
+	bool		recheck = false;
 
 	/* Assume unique until we find a duplicate */
 	*is_unique = true;
@@ -308,6 +314,8 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				curitup = (IndexTuple) PageGetItem(page, curitemid);
 				htid = curitup->t_tid;
 
+				recheck = false;
+
 				/*
 				 * If we are doing a recheck, we expect to find the tuple we
 				 * are rechecking.  It's not a duplicate, but we have to keep
@@ -322,115 +330,156 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				/*
 				 * We check the whole HOT-chain to see if there is any tuple
 				 * that satisfies SnapshotDirty.  This is necessary because we
-				 * have just a single index entry for the entire chain.
-				 */
-				else if (heap_hot_search(&htid, heapRel, &SnapshotDirty,
-										 &all_dead))
+				 * have just a single index entry for the entire chain.
+				 */
+				else if (heap_hot_search(&htid, heapRel, &SnapshotDirty,
+							&all_dead, &recheck, &buffer,
+							&heapTuple))
 				{
 					TransactionId xwait;
+					bool result = true;
 
 					/*
-					 * It is a duplicate. If we are only doing a partial
-					 * check, then don't bother checking if the tuple is being
-					 * updated in another transaction. Just return the fact
-					 * that it is a potential conflict and leave the full
-					 * check till later.
+					 * If the tuple was WARM updated, we may again see our own
+					 * tuple. Since WARM updates don't create new index
+					 * entries, our own tuple is only reachable via the old
+					 * index pointer.
 					 */
-					if (checkUnique == UNIQUE_CHECK_PARTIAL)
+					if (checkUnique == UNIQUE_CHECK_EXISTING &&
+							ItemPointerCompare(&htid, &itup->t_tid) == 0)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						*is_unique = false;
-						return InvalidTransactionId;
+						found = true;
+						result = false;
+						if (recheck)
+							UnlockReleaseBuffer(buffer);
 					}
-
-					/*
-					 * If this tuple is being updated by other transaction
-					 * then we have to wait for its commit/abort.
-					 */
-					xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
-						SnapshotDirty.xmin : SnapshotDirty.xmax;
-
-					if (TransactionIdIsValid(xwait))
+					else if (recheck)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						/* Tell _bt_doinsert to wait... */
-						*speculativeToken = SnapshotDirty.speculativeToken;
-						return xwait;
+						result = btrecheck(rel, curitup, heapRel, &heapTuple);
+						UnlockReleaseBuffer(buffer);
 					}
 
-					/*
-					 * Otherwise we have a definite conflict.  But before
-					 * complaining, look to see if the tuple we want to insert
-					 * is itself now committed dead --- if so, don't complain.
-					 * This is a waste of time in normal scenarios but we must
-					 * do it to support CREATE INDEX CONCURRENTLY.
-					 *
-					 * We must follow HOT-chains here because during
-					 * concurrent index build, we insert the root TID though
-					 * the actual tuple may be somewhere in the HOT-chain.
-					 * While following the chain we might not stop at the
-					 * exact tuple which triggered the insert, but that's OK
-					 * because if we find a live tuple anywhere in this chain,
-					 * we have a unique key conflict.  The other live tuple is
-					 * not part of this chain because it had a different index
-					 * entry.
-					 */
-					htid = itup->t_tid;
-					if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL))
-					{
-						/* Normal case --- it's still live */
-					}
-					else
+					if (result)
 					{
 						/*
-						 * It's been deleted, so no error, and no need to
-						 * continue searching
+						 * It is a duplicate. If we are only doing a partial
+						 * check, then don't bother checking whether the tuple is
+						 * being updated by another transaction. Just return the fact
+						 * that it is a potential conflict and leave the full
+						 * check till later.
 						 */
-						break;
-					}
+						if (checkUnique == UNIQUE_CHECK_PARTIAL)
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							*is_unique = false;
+							return InvalidTransactionId;
+						}
 
-					/*
-					 * Check for a conflict-in as we would if we were going to
-					 * write to this page.  We aren't actually going to write,
-					 * but we want a chance to report SSI conflicts that would
-					 * otherwise be masked by this unique constraint
-					 * violation.
-					 */
-					CheckForSerializableConflictIn(rel, NULL, buf);
+						/*
+						 * If this tuple is being updated by other transaction
+						 * then we have to wait for its commit/abort.
+						 */
+						xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
+							SnapshotDirty.xmin : SnapshotDirty.xmax;
+
+						if (TransactionIdIsValid(xwait))
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							/* Tell _bt_doinsert to wait... */
+							*speculativeToken = SnapshotDirty.speculativeToken;
+							return xwait;
+						}
 
-					/*
-					 * This is a definite conflict.  Break the tuple down into
-					 * datums and report the error.  But first, make sure we
-					 * release the buffer locks we're holding ---
-					 * BuildIndexValueDescription could make catalog accesses,
-					 * which in the worst case might touch this same index and
-					 * cause deadlocks.
-					 */
-					if (nbuf != InvalidBuffer)
-						_bt_relbuf(rel, nbuf);
-					_bt_relbuf(rel, buf);
+						/*
+						 * Otherwise we have a definite conflict.  But before
+						 * complaining, look to see if the tuple we want to insert
+						 * is itself now committed dead --- if so, don't complain.
+						 * This is a waste of time in normal scenarios but we must
+						 * do it to support CREATE INDEX CONCURRENTLY.
+						 *
+						 * We must follow HOT-chains here because during
+						 * concurrent index build, we insert the root TID though
+						 * the actual tuple may be somewhere in the HOT-chain.
+						 * While following the chain we might not stop at the
+						 * exact tuple which triggered the insert, but that's OK
+						 * because if we find a live tuple anywhere in this chain,
+						 * we have a unique key conflict.  The other live tuple is
+						 * not part of this chain because it had a different index
+						 * entry.
+						 */
+						recheck = false;
+						ItemPointerCopy(&itup->t_tid, &htid);
+						if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL,
+									&recheck, &buffer, &heapTuple))
+						{
+							bool result = true;
+							if (recheck)
+							{
+								/*
+								 * Recheck that the tuple actually satisfies the
+								 * index key. Otherwise, we might be following
+								 * a wrong index pointer and must not entertain
+								 * this tuple.
+								 */
+								result = btrecheck(rel, itup, heapRel, &heapTuple);
+								UnlockReleaseBuffer(buffer);
+							}
+							if (!result)
+								break;
+							/* Normal case --- it's still live */
+						}
+						else
+						{
+							/*
+							 * It's been deleted, so no error, and no need to
+							 * continue searching
+							 */
+							break;
+						}
 
-					{
-						Datum		values[INDEX_MAX_KEYS];
-						bool		isnull[INDEX_MAX_KEYS];
-						char	   *key_desc;
-
-						index_deform_tuple(itup, RelationGetDescr(rel),
-										   values, isnull);
-
-						key_desc = BuildIndexValueDescription(rel, values,
-															  isnull);
-
-						ereport(ERROR,
-								(errcode(ERRCODE_UNIQUE_VIOLATION),
-								 errmsg("duplicate key value violates unique constraint \"%s\"",
-										RelationGetRelationName(rel)),
-							   key_desc ? errdetail("Key %s already exists.",
-													key_desc) : 0,
-								 errtableconstraint(heapRel,
-											 RelationGetRelationName(rel))));
+						/*
+						 * Check for a conflict-in as we would if we were going to
+						 * write to this page.  We aren't actually going to write,
+						 * but we want a chance to report SSI conflicts that would
+						 * otherwise be masked by this unique constraint
+						 * violation.
+						 */
+						CheckForSerializableConflictIn(rel, NULL, buf);
+
+						/*
+						 * This is a definite conflict.  Break the tuple down into
+						 * datums and report the error.  But first, make sure we
+						 * release the buffer locks we're holding ---
+						 * BuildIndexValueDescription could make catalog accesses,
+						 * which in the worst case might touch this same index and
+						 * cause deadlocks.
+						 */
+						if (nbuf != InvalidBuffer)
+							_bt_relbuf(rel, nbuf);
+						_bt_relbuf(rel, buf);
+
+						{
+							Datum		values[INDEX_MAX_KEYS];
+							bool		isnull[INDEX_MAX_KEYS];
+							char	   *key_desc;
+
+							index_deform_tuple(itup, RelationGetDescr(rel),
+									values, isnull);
+
+							key_desc = BuildIndexValueDescription(rel, values,
+									isnull);
+
+							ereport(ERROR,
+									(errcode(ERRCODE_UNIQUE_VIOLATION),
+									 errmsg("duplicate key value violates unique constraint \"%s\"",
+										 RelationGetRelationName(rel)),
+									 key_desc ? errdetail("Key %s already exists.",
+										 key_desc) : 0,
+									 errtableconstraint(heapRel,
+										 RelationGetRelationName(rel))));
+						}
 					}
 				}
 				else if (all_dead)
diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c
index 4668c5e..7a59a7f 100644
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -23,6 +23,7 @@
 #include "access/xlog.h"
 #include "catalog/index.h"
 #include "commands/vacuum.h"
+#include "executor/nodeIndexscan.h"
 #include "storage/indexfsm.h"
 #include "storage/ipc.h"
 #include "storage/lmgr.h"
@@ -117,6 +118,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = btendscan;
 	amroutine->ammarkpos = btmarkpos;
 	amroutine->amrestrpos = btrestrpos;
+	amroutine->amrecheck = btrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -292,8 +294,9 @@ btgettuple(IndexScanDesc scan, ScanDirection dir)
 	BTScanOpaque so = (BTScanOpaque) scan->opaque;
 	bool		res;
 
-	/* btree indexes are never lossy */
-	scan->xs_recheck = false;
+	/* btree indexes are never lossy, except for WARM tuples */
+	scan->xs_recheck = indexscan_recheck;
+	scan->xs_tuple_recheck = indexscan_recheck;
 
 	/*
 	 * If we have any array keys, initialize them during first call for a
diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c
index 5d335c7..72b5750 100644
--- a/src/backend/access/nbtree/nbtutils.c
+++ b/src/backend/access/nbtree/nbtutils.c
@@ -20,11 +20,15 @@
 #include "access/nbtree.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
 #include "miscadmin.h"
+#include "nodes/execnodes.h"
 #include "utils/array.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 
 typedef struct BTSortArrayContext
@@ -2067,3 +2071,103 @@ btproperty(Oid index_oid, int attno,
 			return false;		/* punt to generic code */
 	}
 }
+
+/*
+ * Check if the index tuple's key matches the one computed from the given heap
+ * tuple's attributes.
+ */
+bool
+btrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	IndexInfo  *indexInfo;
+	EState	   *estate;
+	ExprContext *econtext;
+	TupleTableSlot *slot;
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	/* Get IndexInfo for this index */
+	indexInfo = BuildIndexInfo(indexRel);
+
+	/*
+	 * The heap tuple must be put into a slot for FormIndexDatum.
+	 */
+	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
+
+	ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
+
+	/*
+	 * Typically the index won't have expressions, but if it does we need an
+	 * EState to evaluate them.  We need it for exclusion constraints too,
+	 * even if they are just on simple columns.
+	 */
+	if (indexInfo->ii_Expressions != NIL ||
+			indexInfo->ii_ExclusionOps != NULL)
+	{
+		estate = CreateExecutorState();
+		econtext = GetPerTupleExprContext(estate);
+		econtext->ecxt_scantuple = slot;
+	}
+	else
+		estate = NULL;
+
+	/*
+	 * Form the index values and isnull flags for the index entry that we need
+	 * to check.
+	 *
+	 * Note: if the index uses functions that are not as immutable as they are
+	 * supposed to be, this could produce an index tuple different from the
+	 * original.  The index AM can catch such errors by verifying that it
+	 * finds a matching index entry with the tuple's TID.  For exclusion
+	 * constraints we check this in check_exclusion_constraint().
+	 */
+	FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL, then they are equal
+		 */
+		if (isnull[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If just one is NULL, then they are not equal
+		 */
+		if (isnull[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now just do a raw memory comparison. If the index tuple was formed
+		 * using this heap tuple, the computed index values must match
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	if (estate != NULL)
+		FreeExecutorState(estate);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	return equal;
+}
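The comparison loop in btrecheck above treats two NULLs as equal, exactly one NULL as unequal, and otherwise falls back to a raw datum comparison. A standalone sketch of that decision table (plain `long` values standing in for `Datum`, and `keys_equal` a hypothetical helper name, not part of the patch):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Simplified sketch of btrecheck's comparison loop.  The two pairs of
 * arrays play the roles of the index values recomputed from the heap
 * tuple and the values stored in the index tuple.
 */
static bool
keys_equal(const long *heap_vals, const bool *heap_nulls,
           const long *indx_vals, const bool *indx_nulls, int natts)
{
    for (int i = 0; i < natts; i++)
    {
        /* both NULL: treated as equal for recheck purposes */
        if (heap_nulls[i] && indx_nulls[i])
            continue;

        /* exactly one NULL: definitely not equal */
        if (heap_nulls[i] || indx_nulls[i])
            return false;

        /* both non-NULL: raw value comparison, like datumIsEqual() */
        if (heap_vals[i] != indx_vals[i])
            return false;
    }
    return true;
}
```

As in the patch, the comparison is purely physical: it asks whether the index entry could have been formed from this heap tuple, not whether the values are "equal" under any operator class semantics.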
diff --git a/src/backend/access/spgist/spgutils.c b/src/backend/access/spgist/spgutils.c
index d570ae5..813b5c3 100644
--- a/src/backend/access/spgist/spgutils.c
+++ b/src/backend/access/spgist/spgutils.c
@@ -67,6 +67,7 @@ spghandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = spgendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index b0b43cf..36467b2 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -54,6 +54,7 @@
 #include "nodes/makefuncs.h"
 #include "nodes/nodeFuncs.h"
 #include "optimizer/clauses.h"
+#include "optimizer/var.h"
 #include "parser/parser.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
@@ -1674,6 +1675,20 @@ BuildIndexInfo(Relation index)
 	ii->ii_Concurrent = false;
 	ii->ii_BrokenHotChain = false;
 
+	/* build a bitmap of all table attributes referred by this index */
+	for (i = 0; i < ii->ii_NumIndexAttrs; i++)
+	{
+		AttrNumber attr = ii->ii_KeyAttrNumbers[i];
+		ii->ii_indxattrs = bms_add_member(ii->ii_indxattrs, attr -
+				FirstLowInvalidHeapAttributeNumber);
+	}
+
+	/* Collect all attributes used in expressions, too */
+	pull_varattnos((Node *) ii->ii_Expressions, 1, &ii->ii_indxattrs);
+
+	/* Collect all attributes in the index predicate, too */
+	pull_varattnos((Node *) ii->ii_Predicate, 1, &ii->ii_indxattrs);
+
 	return ii;
 }
 
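The bitmap built in BuildIndexInfo above offsets each attribute number by FirstLowInvalidHeapAttributeNumber because system attributes carry negative attnos while bitmapset members must be non-negative. A minimal sketch of that mapping, assuming the value -8 that the constant has in PostgreSQL sources of this era:

```c
#include <assert.h>

/*
 * Sketch of the attno -> bitmapset-member mapping used above.  The
 * constant's value (-8) is an assumption taken from contemporary
 * PostgreSQL sources; the helper name is illustrative.
 */
#define FIRST_LOW_INVALID_HEAP_ATTNO (-8)

static int
attno_to_member(int attno)
{
    /* shifts user attnos (1..n) and system attnos (-1..-7) into 0..  */
    return attno - FIRST_LOW_INVALID_HEAP_ATTNO;
}
```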
diff --git a/src/backend/commands/constraint.c b/src/backend/commands/constraint.c
index 26f9114..997c8f5 100644
--- a/src/backend/commands/constraint.c
+++ b/src/backend/commands/constraint.c
@@ -40,6 +40,7 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	TriggerData *trigdata = (TriggerData *) fcinfo->context;
 	const char *funcname = "unique_key_recheck";
 	HeapTuple	new_row;
+	HeapTupleData heapTuple;
 	ItemPointerData tmptid;
 	Relation	indexRel;
 	IndexInfo  *indexInfo;
@@ -102,7 +103,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	 * removed.
 	 */
 	tmptid = new_row->t_self;
-	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL))
+	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL,
+				NULL, NULL, &heapTuple))
 	{
 		/*
 		 * All rows in the HOT chain are dead, so skip the check.
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index f45b330..392c102 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -2495,6 +2495,7 @@ CopyFrom(CopyState cstate)
 
 				if (resultRelInfo->ri_NumIndices > 0)
 					recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+														 &(tuple->t_self), NULL,
 														 estate, false, NULL,
 														   NIL);
 
@@ -2610,6 +2611,7 @@ CopyFromInsertBatch(CopyState cstate, EState *estate, CommandId mycid,
 			ExecStoreTuple(bufferedTuples[i], myslot, InvalidBuffer, false);
 			recheckIndexes =
 				ExecInsertIndexTuples(myslot, &(bufferedTuples[i]->t_self),
+									  &(bufferedTuples[i]->t_self), NULL,
 									  estate, false, NULL, NIL);
 			ExecARInsertTriggers(estate, resultRelInfo,
 								 bufferedTuples[i],
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 231e92d..ca40e1b 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -1468,6 +1468,7 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 
 		recptr = log_heap_clean(onerel, buffer,
 								NULL, 0, NULL, 0,
+								NULL, 0,
 								unused, uncnt,
 								vacrelstats->latestRemovedXid);
 		PageSetLSN(page, recptr);
@@ -2128,6 +2129,22 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
 						break;
 					}
 
+					/*
+					 * If this tuple was ever WARM updated or is a WARM tuple,
+					 * there could be multiple index entries pointing to the
+					 * root of this chain. We can't do index-only scans for
+					 * such tuples without rechecking the index keys. So mark
+					 * the page as !all_visible.
+					 *
+					 * XXX Should we look at the root line pointer and check
+					 * whether the WARM flag is set there, or is checking the
+					 * tuples in the chain good enough?
+					 */
+					if (HeapTupleHeaderIsHeapWarmTuple(tuple.t_data))
+					{
+						all_visible = false;
+					}
+
 					/* Track newest xmin on page. */
 					if (TransactionIdFollows(xmin, *visibility_cutoff_xid))
 						*visibility_cutoff_xid = xmin;
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 0e2d834..da27cf6 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -270,6 +270,8 @@ ExecCloseIndices(ResultRelInfo *resultRelInfo)
 List *
 ExecInsertIndexTuples(TupleTableSlot *slot,
 					  ItemPointer tupleid,
+					  ItemPointer root_tid,
+					  Bitmapset *updated_attrs,
 					  EState *estate,
 					  bool noDupErr,
 					  bool *specConflict,
@@ -324,6 +326,17 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 		if (!indexInfo->ii_ReadyForInserts)
 			continue;
 
+		/*
+		 * If updated_attrs is set, we only insert index entries for those
+		 * indexes whose columns have changed. All other indexes can use their
+		 * existing index pointers to look up the new tuple.
+		 */
+		if (updated_attrs)
+		{
+			if (!bms_overlap(updated_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
 		/* Check for partial index */
 		if (indexInfo->ii_Predicate != NIL)
 		{
@@ -389,7 +402,7 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 			index_insert(indexRelation, /* index relation */
 						 values,	/* array of index Datums */
 						 isnull,	/* null flags */
-						 tupleid,		/* tid of heap tuple */
+						 root_tid,		/* tid of heap or root tuple */
 						 heapRelation,	/* heap relation */
 						 checkUnique);	/* type of uniqueness check to do */
 
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index 449aacb..ff77349 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -37,6 +37,7 @@
 
 #include "access/relscan.h"
 #include "access/transam.h"
+#include "access/valid.h"
 #include "executor/execdebug.h"
 #include "executor/nodeBitmapHeapscan.h"
 #include "pgstat.h"
@@ -362,11 +363,23 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 			OffsetNumber offnum = tbmres->offsets[curslot];
 			ItemPointerData tid;
 			HeapTupleData heapTuple;
+			bool recheck = false;
 
 			ItemPointerSet(&tid, page, offnum);
 			if (heap_hot_search_buffer(&tid, scan->rs_rd, buffer, snapshot,
-									   &heapTuple, NULL, true))
-				scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+									   &heapTuple, NULL, true, &recheck))
+			{
+				bool valid = true;
+
+				if (scan->rs_key)
+					HeapKeyTest(&heapTuple, RelationGetDescr(scan->rs_rd),
+							scan->rs_nkeys, scan->rs_key, valid);
+				if (valid)
+					scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+
+				if (recheck)
+					tbmres->recheck = true;
+			}
 		}
 	}
 	else
diff --git a/src/backend/executor/nodeIndexonlyscan.c b/src/backend/executor/nodeIndexonlyscan.c
index 4f6f91c..49bda34 100644
--- a/src/backend/executor/nodeIndexonlyscan.c
+++ b/src/backend/executor/nodeIndexonlyscan.c
@@ -141,6 +141,26 @@ IndexOnlyNext(IndexOnlyScanState *node)
 			 * but it's not clear whether it's a win to do so.  The next index
 			 * entry might require a visit to the same heap page.
 			 */
+
+			/*
+			 * If the index was lossy or the tuple was WARM, we have to recheck
+			 * the index quals using the fetched tuple.
+			 */
+			if (scandesc->xs_tuple_recheck)
+			{
+				ExecStoreTuple(tuple,	/* tuple to store */
+						slot,	/* slot to store in */
+						scandesc->xs_cbuf,		/* buffer containing tuple */
+						false);	/* don't pfree */
+				econtext->ecxt_scantuple = slot;
+				ResetExprContext(econtext);
+				if (!ExecQual(node->indexqual, econtext, false))
+				{
+					/* Fails recheck, so drop it and loop back for another */
+					InstrCountFiltered2(node, 1);
+					continue;
+				}
+			}
 		}
 
 		/*
diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c
index 3143bd9..0b04bb8 100644
--- a/src/backend/executor/nodeIndexscan.c
+++ b/src/backend/executor/nodeIndexscan.c
@@ -39,6 +39,8 @@
 #include "utils/memutils.h"
 #include "utils/rel.h"
 
+bool indexscan_recheck = false;
+
 /*
  * When an ordering operator is used, tuples fetched from the index that
  * need to be reordered are queued in a pairing heap, as ReorderTuples.
@@ -115,10 +117,10 @@ IndexNext(IndexScanState *node)
 					   false);	/* don't pfree */
 
 		/*
-		 * If the index was lossy, we have to recheck the index quals using
-		 * the fetched tuple.
+		 * If the index was lossy or the tuple was WARM, we have to recheck
+		 * the index quals using the fetched tuple.
 		 */
-		if (scandesc->xs_recheck)
+		if (scandesc->xs_tuple_recheck)
 		{
 			econtext->ecxt_scantuple = slot;
 			ResetExprContext(econtext);
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index af7b26c..7367e9a 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -433,6 +433,7 @@ ExecInsert(ModifyTableState *mtstate,
 
 			/* insert index entries for tuple */
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												 &(tuple->t_self), NULL,
 												 estate, true, &specConflict,
 												   arbiterIndexes);
 
@@ -479,6 +480,7 @@ ExecInsert(ModifyTableState *mtstate,
 			/* insert index entries for tuple */
 			if (resultRelInfo->ri_NumIndices > 0)
 				recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+													   &(tuple->t_self), NULL,
 													   estate, false, NULL,
 													   arbiterIndexes);
 		}
@@ -809,6 +811,9 @@ ExecUpdate(ItemPointer tupleid,
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
 	List	   *recheckIndexes = NIL;
+	Bitmapset  *updated_attrs = NULL;
+	ItemPointerData	root_tid;
+	bool		warm_update;
 
 	/*
 	 * abort the operation if not running transactions
@@ -923,7 +928,7 @@ lreplace:;
 							 estate->es_output_cid,
 							 estate->es_crosscheck_snapshot,
 							 true /* wait for commit */ ,
-							 &hufd, &lockmode);
+							 &hufd, &lockmode, &updated_attrs, &warm_update);
 		switch (result)
 		{
 			case HeapTupleSelfUpdated:
@@ -1011,9 +1016,24 @@ lreplace:;
 		 *
 		 * If it's a HOT update, we mustn't insert new index entries.
 		 */
-		if (resultRelInfo->ri_NumIndices > 0 && !HeapTupleIsHeapOnly(tuple))
+		if (resultRelInfo->ri_NumIndices > 0 &&
+			(!HeapTupleIsHeapOnly(tuple) || warm_update))
+		{
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self, &root_tid);
+				bms_free(updated_attrs);
+				updated_attrs = NULL;
+			}
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   updated_attrs,
 												   estate, false, NULL, NIL);
+		}
 	}
 
 	if (canSetTag)
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 8d2ad01..7706a37 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -2038,6 +2038,7 @@ RelationDestroyRelation(Relation relation, bool remember_tupdesc)
 	list_free_deep(relation->rd_fkeylist);
 	list_free(relation->rd_indexlist);
 	bms_free(relation->rd_indexattr);
+	bms_free(relation->rd_exprindexattr);
 	bms_free(relation->rd_keyattr);
 	bms_free(relation->rd_idattr);
 	if (relation->rd_options)
@@ -4381,12 +4382,15 @@ Bitmapset *
 RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 {
 	Bitmapset  *indexattrs;		/* indexed columns */
+	Bitmapset  *exprindexattrs;	/* indexed columns in expression/predicate
+									 indexes */
 	Bitmapset  *uindexattrs;	/* columns in unique indexes */
 	Bitmapset  *idindexattrs;	/* columns in the replica identity */
 	List	   *indexoidlist;
 	Oid			relreplindex;
 	ListCell   *l;
 	MemoryContext oldcxt;
+	bool		supportswarm = true;	/* true if the table can be WARM updated */
 
 	/* Quick exit if we already computed the result. */
 	if (relation->rd_indexattr != NULL)
@@ -4399,6 +4403,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				return bms_copy(relation->rd_keyattr);
 			case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 				return bms_copy(relation->rd_idattr);
+			case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+				return bms_copy(relation->rd_exprindexattr);
 			default:
 				elog(ERROR, "unknown attrKind %u", attrKind);
 		}
@@ -4437,6 +4443,7 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 	 * won't be returned at all by RelationGetIndexList.
 	 */
 	indexattrs = NULL;
+	exprindexattrs = NULL;
 	uindexattrs = NULL;
 	idindexattrs = NULL;
 	foreach(l, indexoidlist)
@@ -4482,19 +4489,32 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 		}
 
 		/* Collect all attributes used in expressions, too */
-		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &exprindexattrs);
 
 		/* Collect all attributes in the index predicate, too */
-		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &exprindexattrs);
+
+		/*
+		 * Check if the index has an amrecheck method defined. If it does not,
+		 * the index does not support WARM updates, so WARM is completely
+		 * disabled for such tables.
+		 */
+		if (!indexDesc->rd_amroutine->amrecheck)
+			supportswarm = false;
 
 		index_close(indexDesc, AccessShareLock);
 	}
 
 	list_free(indexoidlist);
 
+	/* Remember if the table can do WARM updates */
+	relation->rd_supportswarm = supportswarm;
+
 	/* Don't leak the old values of these bitmaps, if any */
 	bms_free(relation->rd_indexattr);
 	relation->rd_indexattr = NULL;
+	bms_free(relation->rd_exprindexattr);
+	relation->rd_exprindexattr = NULL;
 	bms_free(relation->rd_keyattr);
 	relation->rd_keyattr = NULL;
 	bms_free(relation->rd_idattr);
@@ -4510,7 +4530,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 	oldcxt = MemoryContextSwitchTo(CacheMemoryContext);
 	relation->rd_keyattr = bms_copy(uindexattrs);
 	relation->rd_idattr = bms_copy(idindexattrs);
-	relation->rd_indexattr = bms_copy(indexattrs);
+	relation->rd_exprindexattr = bms_copy(exprindexattrs);
+	relation->rd_indexattr = bms_copy(bms_union(indexattrs, exprindexattrs));
 	MemoryContextSwitchTo(oldcxt);
 
 	/* We return our original working copy for caller to play with */
@@ -4522,6 +4543,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 			return uindexattrs;
 		case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 			return idindexattrs;
+		case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+			return exprindexattrs;
 		default:
 			elog(ERROR, "unknown attrKind %u", attrKind);
 			return NULL;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index c5178f7..aa7b265 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -111,6 +111,7 @@ extern char *default_tablespace;
 extern char *temp_tablespaces;
 extern bool ignore_checksum_failure;
 extern bool synchronize_seqscans;
+extern bool indexscan_recheck;
 
 #ifdef TRACE_SYNCSCAN
 extern bool trace_syncscan;
@@ -1271,6 +1272,16 @@ static struct config_bool ConfigureNamesBool[] =
 		NULL, NULL, NULL
 	},
 	{
+		{"indexscan_recheck", PGC_USERSET, DEVELOPER_OPTIONS,
+			gettext_noop("Recheck heap rows returned from an index scan."),
+			NULL,
+			GUC_NOT_IN_SAMPLE
+		},
+		&indexscan_recheck,
+		false,
+		NULL, NULL, NULL
+	},
+	{
 		{"debug_deadlocks", PGC_SUSET, DEVELOPER_OPTIONS,
 			gettext_noop("Dumps information about all current locks when a deadlock timeout occurs."),
 			NULL,
diff --git a/src/include/access/amapi.h b/src/include/access/amapi.h
index 1036cca..2031a76 100644
--- a/src/include/access/amapi.h
+++ b/src/include/access/amapi.h
@@ -13,6 +13,7 @@
 #define AMAPI_H
 
 #include "access/genam.h"
+#include "access/itup.h"
 
 /*
  * We don't wish to include planner header files here, since most of an index
@@ -137,6 +138,9 @@ typedef void (*ammarkpos_function) (IndexScanDesc scan);
 /* restore marked scan position */
 typedef void (*amrestrpos_function) (IndexScanDesc scan);
 
+/* recheck index tuple and heap tuple match */
+typedef bool (*amrecheck_function) (Relation indexRel, IndexTuple indexTuple,
+	   	Relation heapRel, HeapTuple heapTuple);
 
 /*
  * API struct for an index AM.  Note this must be stored in a single palloc'd
@@ -196,6 +200,7 @@ typedef struct IndexAmRoutine
 	amendscan_function amendscan;
 	ammarkpos_function ammarkpos;		/* can be NULL */
 	amrestrpos_function amrestrpos;		/* can be NULL */
+	amrecheck_function amrecheck;		/* can be NULL */
 } IndexAmRoutine;
 
 
diff --git a/src/include/access/hash.h b/src/include/access/hash.h
index ce31418..7950739 100644
--- a/src/include/access/hash.h
+++ b/src/include/access/hash.h
@@ -369,5 +369,7 @@ extern OffsetNumber _hash_binsearch_last(Page page, uint32 hash_value);
 extern void hash_redo(XLogReaderState *record);
 extern void hash_desc(StringInfo buf, XLogReaderState *record);
 extern const char *hash_identify(uint8 info);
+extern bool hashrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 #endif   /* HASH_H */
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 94b46b8..4c05947 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -137,9 +137,10 @@ extern bool heap_fetch(Relation relation, Snapshot snapshot,
 		   Relation stats_relation);
 extern bool heap_hot_search_buffer(ItemPointer tid, Relation relation,
 					   Buffer buffer, Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call);
+					   bool *all_dead, bool first_call, bool *recheck);
 extern bool heap_hot_search(ItemPointer tid, Relation relation,
-				Snapshot snapshot, bool *all_dead);
+				Snapshot snapshot, bool *all_dead,
+				bool *recheck, Buffer *buffer, HeapTuple heapTuple);
 
 extern void heap_get_latest_tid(Relation relation, Snapshot snapshot,
 					ItemPointer tid);
@@ -160,7 +161,8 @@ extern void heap_abort_speculative(Relation relation, HeapTuple tuple);
 extern HTSU_Result heap_update(Relation relation, ItemPointer otid,
 			HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode);
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **updated_attrs, bool *warm_update);
 extern HTSU_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
 				CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
 				bool follow_update,
@@ -186,6 +188,7 @@ extern int heap_page_prune(Relation relation, Buffer buffer,
 				bool report_stats, TransactionId *latestRemovedXid);
 extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
+						bool *warmchain,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
 extern void heap_get_root_tuple_one(Page page, OffsetNumber target_offnum,
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index 5a04561..ddc3a7a 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -80,6 +80,7 @@
 #define XLH_UPDATE_CONTAINS_NEW_TUPLE			(1<<4)
 #define XLH_UPDATE_PREFIX_FROM_OLD				(1<<5)
 #define XLH_UPDATE_SUFFIX_FROM_OLD				(1<<6)
+#define XLH_UPDATE_WARM_UPDATE					(1<<7)
 
 /* convenience macro for checking whether any form of old tuple was logged */
 #define XLH_UPDATE_CONTAINS_OLD						\
@@ -211,7 +212,9 @@ typedef struct xl_heap_update
  *	* for each redirected item: the item offset, then the offset redirected to
  *	* for each now-dead item: the item offset
  *	* for each now-unused item: the item offset
- * The total number of OffsetNumbers is therefore 2*nredirected+ndead+nunused.
+ *	* for each now-warm item: the item offset
+ * The total number of OffsetNumbers is therefore
+ * 2*nredirected+ndead+nunused+nwarm.
  * Note that nunused is not explicitly stored, but may be found by reference
  * to the total record length.
  */
@@ -220,10 +223,11 @@ typedef struct xl_heap_clean
 	TransactionId latestRemovedXid;
 	uint16		nredirected;
 	uint16		ndead;
+	uint16		nwarm;
 	/* OFFSET NUMBERS are in the block reference 0 */
 } xl_heap_clean;
 
-#define SizeOfHeapClean (offsetof(xl_heap_clean, ndead) + sizeof(uint16))
+#define SizeOfHeapClean (offsetof(xl_heap_clean, nwarm) + sizeof(uint16))
 
 /*
  * Cleanup_info is required in some cases during a lazy VACUUM.
@@ -384,6 +388,7 @@ extern XLogRecPtr log_heap_cleanup_info(RelFileNode rnode,
 					  TransactionId latestRemovedXid);
 extern XLogRecPtr log_heap_clean(Relation reln, Buffer buffer,
 			   OffsetNumber *redirected, int nredirected,
+			   OffsetNumber *warm, int nwarm,
 			   OffsetNumber *nowdead, int ndead,
 			   OffsetNumber *nowunused, int nunused,
 			   TransactionId latestRemovedXid);
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 23a330a..441dfac 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,7 +260,9 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x1000 are available */
+
+#define HEAP_WARM_TUPLE			0x0800	/* this tuple is part of a WARM
+										 * chain */
 #define HEAP_LATEST_TUPLE		0x1000	/* 
 										 * This is the last tuple in chain and
 										 * ip_posid points to the root line
@@ -271,7 +273,7 @@ struct HeapTupleHeaderData
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF800	/* visibility-related bits */
 
 
 /*
@@ -510,6 +512,21 @@ do { \
   (tup)->t_infomask2 & HEAP_ONLY_TUPLE \
 )
 
+#define HeapTupleHeaderSetHeapWarmTuple(tup) \
+do { \
+	(tup)->t_infomask2 |= HEAP_WARM_TUPLE; \
+} while (0)
+
+#define HeapTupleHeaderClearHeapWarmTuple(tup) \
+do { \
+	(tup)->t_infomask2 &= ~HEAP_WARM_TUPLE; \
+} while (0)
+
+#define HeapTupleHeaderIsHeapWarmTuple(tup) \
+( \
+  ((tup)->t_infomask2 & HEAP_WARM_TUPLE) \
+)
+
 #define HeapTupleHeaderSetHeapLatest(tup) \
 ( \
 	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE \
@@ -754,6 +771,15 @@ struct MinimalTupleData
 #define HeapTupleClearHeapOnly(tuple) \
 		HeapTupleHeaderClearHeapOnly((tuple)->t_data)
 
+#define HeapTupleIsHeapWarmTuple(tuple) \
+		HeapTupleHeaderIsHeapWarmTuple((tuple)->t_data)
+
+#define HeapTupleSetHeapWarmTuple(tuple) \
+		HeapTupleHeaderSetHeapWarmTuple((tuple)->t_data)
+
+#define HeapTupleClearHeapWarmTuple(tuple) \
+		HeapTupleHeaderClearHeapWarmTuple((tuple)->t_data)
+
 #define HeapTupleGetOid(tuple) \
 		HeapTupleHeaderGetOid((tuple)->t_data)
 
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index c580f51..83af072 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -751,6 +751,8 @@ extern bytea *btoptions(Datum reloptions, bool validate);
 extern bool btproperty(Oid index_oid, int attno,
 		   IndexAMProperty prop, const char *propname,
 		   bool *res, bool *isnull);
+extern bool btrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * prototypes for functions in nbtvalidate.c
diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h
index 49c2a6f..880e62e 100644
--- a/src/include/access/relscan.h
+++ b/src/include/access/relscan.h
@@ -110,7 +110,8 @@ typedef struct IndexScanDescData
 	HeapTupleData xs_ctup;		/* current heap tuple, if any */
 	Buffer		xs_cbuf;		/* current heap buffer in scan, if any */
 	/* NB: if xs_cbuf is not InvalidBuffer, we hold a pin on that buffer */
-	bool		xs_recheck;		/* T means scan keys must be rechecked */
+	bool		xs_recheck;		/* T means scan keys must be rechecked for each tuple */
+	bool		xs_tuple_recheck;	/* T means scan keys must be rechecked for current tuple */
 
 	/*
 	 * When fetching with an ordering operator, the values of the ORDER BY
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 39521ed..60a5445 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -366,6 +366,7 @@ extern void UnregisterExprContextCallback(ExprContext *econtext,
 extern void ExecOpenIndices(ResultRelInfo *resultRelInfo, bool speculative);
 extern void ExecCloseIndices(ResultRelInfo *resultRelInfo);
 extern List *ExecInsertIndexTuples(TupleTableSlot *slot, ItemPointer tupleid,
+					  ItemPointer root_tid, Bitmapset *updated_attrs,
 					  EState *estate, bool noDupErr, bool *specConflict,
 					  List *arbiterIndexes);
 extern bool ExecCheckIndexConstraints(TupleTableSlot *slot, EState *estate,
diff --git a/src/include/executor/nodeIndexscan.h b/src/include/executor/nodeIndexscan.h
index 194fadb..fe9c78e 100644
--- a/src/include/executor/nodeIndexscan.h
+++ b/src/include/executor/nodeIndexscan.h
@@ -38,4 +38,5 @@ extern bool ExecIndexEvalArrayKeys(ExprContext *econtext,
 					   IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 extern bool ExecIndexAdvanceArrayKeys(IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 
+extern bool indexscan_recheck;
 #endif   /* NODEINDEXSCAN_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index e7fd7bd..3b2c012 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -60,6 +60,7 @@ typedef struct IndexInfo
 	NodeTag		type;
 	int			ii_NumIndexAttrs;
 	AttrNumber	ii_KeyAttrNumbers[INDEX_MAX_KEYS];
+	Bitmapset  *ii_indxattrs;	/* bitmap of all columns used in this index */
 	List	   *ii_Expressions; /* list of Expr */
 	List	   *ii_ExpressionsState;	/* list of ExprState */
 	List	   *ii_Predicate;	/* list of Expr */
diff --git a/src/include/storage/itemid.h b/src/include/storage/itemid.h
index 509c577..166ef3b 100644
--- a/src/include/storage/itemid.h
+++ b/src/include/storage/itemid.h
@@ -46,6 +46,12 @@ typedef ItemIdData *ItemId;
 typedef uint16 ItemOffset;
 typedef uint16 ItemLength;
 
+/*
+ * Special value stored in lp_len to indicate that the chain starting at this
+ * line pointer may contain WARM tuples. It must only be interpreted together
+ * with the LP_REDIRECT flag.
+ */
+#define SpecHeapWarmLen	0x1ffb
 
 /* ----------------
  *		support macros
@@ -112,12 +118,15 @@ typedef uint16 ItemLength;
 #define ItemIdIsDead(itemId) \
 	((itemId)->lp_flags == LP_DEAD)
 
+#define ItemIdIsHeapWarm(itemId) \
+	(((itemId)->lp_flags == LP_REDIRECT) && \
+	 ((itemId)->lp_len == SpecHeapWarmLen))
 /*
  * ItemIdHasStorage
  *		True iff item identifier has associated storage.
  */
 #define ItemIdHasStorage(itemId) \
-	((itemId)->lp_len != 0)
+	(!ItemIdIsRedirected(itemId) && (itemId)->lp_len != 0)
 
 /*
  * ItemIdSetUnused
@@ -168,6 +177,26 @@ typedef uint16 ItemLength;
 )
 
 /*
+ * ItemIdSetHeapWarm
+ * 		Mark the line pointer as the start of a WARM chain
+ *
+ * Note: Since all bits in lp_flags are currently used, we store a special
+ * value in the lp_len field to indicate this state. This is required only
+ * for LP_REDIRECT line pointers, whose lp_len field is otherwise unused.
+ */
+#define ItemIdSetHeapWarm(itemId) \
+do { \
+  	AssertMacro((itemId)->lp_flags == LP_REDIRECT); \
+	(itemId)->lp_len = SpecHeapWarmLen; \
+} while (0)
+
+#define ItemIdClearHeapWarm(itemId) \
+do { \
+	AssertMacro((itemId)->lp_flags == LP_REDIRECT); \
+	(itemId)->lp_len = 0; \
+} while (0)
+
+/*
  * ItemIdMarkDead
  *		Set the item identifier to be DEAD, keeping its existing storage.
  *
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index ed14442..dac32b5 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -101,8 +101,11 @@ typedef struct RelationData
 
 	/* data managed by RelationGetIndexAttrBitmap: */
 	Bitmapset  *rd_indexattr;	/* identifies columns used in indexes */
+	Bitmapset  *rd_exprindexattr; /* identifies columns used in expression or
+									 predicate indexes */
 	Bitmapset  *rd_keyattr;		/* cols that can be ref'd by foreign keys */
 	Bitmapset  *rd_idattr;		/* included in replica identity index */
+	bool		rd_supportswarm;/* True if the table can be WARM updated */
 
 	/*
 	 * rd_options is set whenever rd_rel is loaded into the relcache entry.
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index 6ea7dd2..290e9b7 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -48,7 +48,8 @@ typedef enum IndexAttrBitmapKind
 {
 	INDEX_ATTR_BITMAP_ALL,
 	INDEX_ATTR_BITMAP_KEY,
-	INDEX_ATTR_BITMAP_IDENTITY_KEY
+	INDEX_ATTR_BITMAP_IDENTITY_KEY,
+	INDEX_ATTR_BITMAP_EXPR_PREDICATE
 } IndexAttrBitmapKind;
 
 extern Bitmapset *RelationGetIndexAttrBitmap(Relation relation,
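To make the lp_len overloading in the itemid.h hunk above concrete, here is a minimal self-contained sketch. These are simplified stand-ins, not the patch's actual definitions: ItemIdData is reduced to a plain bitfield struct and the LP_* constants are re-declared locally. It shows how a redirect line pointer's otherwise-unused lp_len field can double as a WARM-chain marker:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for PostgreSQL's 4-byte ItemIdData. */
#define LP_UNUSED   0
#define LP_NORMAL   1
#define LP_REDIRECT 2
#define LP_DEAD     3

/* Special lp_len value marking a WARM chain root (0x1ffb in the patch). */
#define SPEC_HEAP_WARM_LEN 0x1ffb

typedef struct
{
	uint32_t	lp_off:15,		/* tuple offset, or target offnum for LP_REDIRECT */
				lp_flags:2,		/* state of the line pointer, see LP_* above */
				lp_len:15;		/* tuple length; unused for LP_REDIRECT, so reusable */
} ItemIdData;

/* Mark a redirect line pointer as the root of a WARM chain. */
static void
item_id_set_heap_warm(ItemIdData *lp)
{
	assert(lp->lp_flags == LP_REDIRECT);
	lp->lp_len = SPEC_HEAP_WARM_LEN;
}

/* A line pointer is WARM only if it is a redirect carrying the marker. */
static int
item_id_is_heap_warm(const ItemIdData *lp)
{
	return lp->lp_flags == LP_REDIRECT && lp->lp_len == SPEC_HEAP_WARM_LEN;
}
```

Because lp_len has no meaning for LP_REDIRECT items, no on-disk format change is needed; the marker only has to be cleared before the line pointer is ever turned back into something that interprets lp_len as a length.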
#2Claudio Freire
klaussfreire@gmail.com
In reply to: Pavan Deolasee (#1)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Aug 31, 2016 at 1:45 PM, Pavan Deolasee <pavan.deolasee@gmail.com>
wrote:

We discussed a few ideas to address the "Duplicate Scan" problem. For
example, we can teach Index AMs to discard any duplicate (key, CTID) insert
requests. Or we could guarantee uniqueness by either only allowing updates
in one lexical order. While the former is a more complete solution to avoid
duplicate entries, searching through large number of keys for non-unique
indexes could be a drag on performance. The latter approach may not be
sufficient for many workloads. Also tracking increment/decrement for many
indexes will be non-trivial.

There is another problem with allowing many index entries pointing to the
same WARM chain. It will be non-trivial to know how many index entries are
currently pointing to the WARM chain and index/heap vacuum will throw up
more challenges.

Instead, what I would like to propose and the patch currently implements
is to restrict WARM update to once per chain. So the first non-HOT update
to a tuple or a HOT chain can be a WARM update. The chain can further be
HOT updated any number of times, but it cannot be WARM updated again.
This might look too restrictive, but it can still bring down the number of
regular updates by almost 50%. Further, if we devise a strategy to convert
a WARM chain back to a HOT chain, it can again be WARM updated. (This part
is currently not implemented.) A good side effect of this simple strategy is
that we know there can be at most two index entries pointing to any given
WARM chain.

We should probably think about coordinating with my btree patch.

From the description above, the strategy is quite readily "upgradable" to
one in which the indexam discards duplicate (key,ctid) pairs and that would
remove the limitation of only one WARM update... right?
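Illustratively, the duplicate (key, ctid) discard that such an upgrade relies on can be sketched as follows. This is a toy model with hypothetical names, not the actual btree patch; a real AM would confine the search to the matching key range rather than scanning every entry:

```c
#include <assert.h>
#include <stdint.h>

/* Toy index entry: an integer key plus the heap TID it points at. */
typedef struct
{
	int32_t		key;
	uint32_t	ctid;
} IndexEntry;

/*
 * Insert (key, ctid) unless an identical pair is already present.
 * Returns 1 if inserted, 0 if discarded as a duplicate.
 */
static int
index_insert_dedup(IndexEntry *entries, int *n, int32_t key, uint32_t ctid)
{
	for (int i = 0; i < *n; i++)
	{
		if (entries[i].key == key && entries[i].ctid == ctid)
			return 0;			/* duplicate (key, ctid): discard the insert */
	}
	entries[*n].key = key;
	entries[*n].ctid = ctid;
	(*n)++;
	return 1;
}
```

The correctness property is that an index scan never returns the same chain twice for the same key, because at most one (key, ctid) entry can exist per chain per key value.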

#3Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Claudio Freire (#2)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Aug 31, 2016 at 10:38 PM, Claudio Freire <klaussfreire@gmail.com>
wrote:

On Wed, Aug 31, 2016 at 1:45 PM, Pavan Deolasee <pavan.deolasee@gmail.com>
wrote:

We discussed a few ideas to address the "Duplicate Scan" problem. For
example, we can teach Index AMs to discard any duplicate (key, CTID) insert
requests. Or we could guarantee uniqueness by either only allowing updates
in one lexical order. While the former is a more complete solution to avoid
duplicate entries, searching through large number of keys for non-unique
indexes could be a drag on performance. The latter approach may not be
sufficient for many workloads. Also tracking increment/decrement for many
indexes will be non-trivial.

There is another problem with allowing many index entries pointing to the
same WARM chain. It will be non-trivial to know how many index entries are
currently pointing to the WARM chain and index/heap vacuum will throw up
more challenges.

Instead, what I would like to propose and the patch currently implements
is to restrict WARM update to once per chain. So the first non-HOT update
to a tuple or a HOT chain can be a WARM update. The chain can further be
HOT updated any number of times, but it cannot be WARM updated again.
This might look too restrictive, but it can still bring down the number of
regular updates by almost 50%. Further, if we devise a strategy to convert
a WARM chain back to a HOT chain, it can again be WARM updated. (This part
is currently not implemented.) A good side effect of this simple strategy is
that we know there can be at most two index entries pointing to any given
WARM chain.

We should probably think about coordinating with my btree patch.

From the description above, the strategy is quite readily "upgradable" to
one in which the indexam discards duplicate (key,ctid) pairs and that would
remove the limitation of only one WARM update... right?

Yes, we should be able to add further optimisations along the lines you're
working on, but what I like about the current approach is that a) it reduces
the complexity of the patch, and b) having thought about cleaning up WARM
chains, limiting the number of index entries per root chain to a small
number will simplify that aspect too.
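As an aside, the once-per-chain rule can be modelled with a tiny sketch (a toy model with made-up names, not the patch's code) that also shows why regular updates roughly halve when an index key changes on every update:

```c
#include <assert.h>

#define NINDEXES 4				/* hypothetical number of indexes on the table */

/* Per-chain state: has this chain already absorbed a WARM update? */
typedef struct
{
	int			is_warm;
} Chain;

/*
 * Returns the number of index entries an update must insert.  A HOT update
 * inserts none; the first key-changing update on a clean chain is WARM and
 * inserts entries only for the changed indexes; any later key-changing
 * update is a regular update that starts a fresh (clean) chain and inserts
 * into every index.
 */
static int
do_update(Chain *c, int nkeys_changed)
{
	if (nkeys_changed == 0)
		return 0;				/* HOT: no index inserts at all */
	if (!c->is_warm)
	{
		c->is_warm = 1;			/* first non-HOT update: WARM */
		return nkeys_changed;
	}
	c->is_warm = 0;				/* regular update: new, clean chain */
	return NINDEXES;
}
```

In a workload where every update changes one index key, updates alternate WARM/regular, so only half of them pay the full write amplification; a chain-conversion strategy would push that fraction down further.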

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#4Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Pavan Deolasee (#1)
2 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Aug 31, 2016 at 10:15 PM, Pavan Deolasee <pavan.deolasee@gmail.com>
wrote:

Hi All,

As previously discussed [1], WARM is a technique to reduce write
amplification when an indexed column of a table is updated. HOT fails to
handle such updates and ends up inserting a new index entry in all indexes
of the table, irrespective of whether the index key has changed or not for
a specific index. The problem was highlighted by Uber's blog post [2], but
it was a well known problem and affects many workloads.

I realised that the patches had bit-rotted because of 8e1e3f958fb. Rebased
patches on the current master are attached. I also took this opportunity to
correct some white space errors and improve formatting of the README.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0001_track_root_lp_v3.patchapplication/octet-stream; name=0001_track_root_lp_v3.patchDownload
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 6a27ef4..69cd066 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -93,7 +93,8 @@ static HeapTuple heap_prepare_insert(Relation relation, HeapTuple tup,
 					TransactionId xid, CommandId cid, int options);
 static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
-				HeapTuple newtup, HeapTuple old_key_tup,
+				HeapTuple newtup, OffsetNumber root_offnum,
+				HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
 static void HeapSatisfiesHOTandKeyUpdate(Relation relation,
 							 Bitmapset *hot_attrs,
@@ -2250,13 +2251,13 @@ heap_get_latest_tid(Relation relation,
 		 */
 		if ((tp.t_data->t_infomask & HEAP_XMAX_INVALID) ||
 			HeapTupleHeaderIsOnlyLocked(tp.t_data) ||
-			ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid))
+			HeapTupleHeaderIsHeapLatest(tp.t_data, ctid))
 		{
 			UnlockReleaseBuffer(buffer);
 			break;
 		}
 
-		ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(tp.t_data, &ctid, offnum);
 		priorXmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		UnlockReleaseBuffer(buffer);
 	}							/* end of loop */
@@ -2415,7 +2416,8 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	START_CRIT_SECTION();
 
 	RelationPutHeapTuple(relation, buffer, heaptup,
-						 (options & HEAP_INSERT_SPECULATIVE) != 0);
+						 (options & HEAP_INSERT_SPECULATIVE) != 0,
+						 InvalidOffsetNumber);
 
 	if (PageIsAllVisible(BufferGetPage(buffer)))
 	{
@@ -2713,7 +2715,8 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		 * RelationGetBufferForTuple has ensured that the first tuple fits.
 		 * Put that on the page, and then as many other tuples as fit.
 		 */
-		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false);
+		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false,
+				InvalidOffsetNumber);
 		for (nthispage = 1; ndone + nthispage < ntuples; nthispage++)
 		{
 			HeapTuple	heaptup = heaptuples[ndone + nthispage];
@@ -2721,7 +2724,8 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			if (PageGetHeapFreeSpace(page) < MAXALIGN(heaptup->t_len) + saveFreeSpace)
 				break;
 
-			RelationPutHeapTuple(relation, buffer, heaptup, false);
+			RelationPutHeapTuple(relation, buffer, heaptup, false,
+					InvalidOffsetNumber);
 
 			/*
 			 * We don't use heap_multi_insert for catalog tuples yet, but
@@ -2993,6 +2997,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	HeapTupleData tp;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	TransactionId new_xmax;
@@ -3044,7 +3049,8 @@ heap_delete(Relation relation, ItemPointer tid,
 		LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 	}
 
-	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid));
+	offnum = ItemPointerGetOffsetNumber(tid);
+	lp = PageGetItemId(page, offnum);
 	Assert(ItemIdIsNormal(lp));
 
 	tp.t_tableOid = RelationGetRelid(relation);
@@ -3174,7 +3180,7 @@ l1:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(tp.t_data, &hufd->ctid, offnum);
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tp.t_data);
@@ -3251,7 +3257,7 @@ l1:
 	HeapTupleHeaderSetXmax(tp.t_data, new_xmax);
 	HeapTupleHeaderSetCmax(tp.t_data, cid, iscombo);
 	/* Make sure there is no forward chain link in t_ctid */
-	tp.t_data->t_ctid = tp.t_self;
+	HeapTupleHeaderSetHeapLatest(tp.t_data);
 
 	MarkBufferDirty(buffer);
 
@@ -3450,6 +3456,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		old_key_copied = false;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
+	OffsetNumber	root_offnum;
 	MultiXactStatus mxact_status;
 	Buffer		buffer,
 				newbuf,
@@ -3506,6 +3514,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
 
 	block = ItemPointerGetBlockNumber(otid);
+	offnum = ItemPointerGetOffsetNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
@@ -3789,7 +3798,7 @@ l2:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(oldtup.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = oldtup.t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(oldtup.t_data, &hufd->ctid, offnum);
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(oldtup.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(oldtup.t_data);
@@ -3968,7 +3977,7 @@ l2:
 		HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 		/* temporarily make it look not-updated, but locked */
-		oldtup.t_data->t_ctid = oldtup.t_self;
+		HeapTupleHeaderSetHeapLatest(oldtup.t_data);
 
 		/*
 		 * Clear all-frozen bit on visibility map if needed. We could
@@ -4149,6 +4158,20 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+		/*
+		 * For HOT (or WARM) updated tuples, we store the offset of the root
+		 * line pointer of this chain in the ip_posid field of the new tuple.
+		 * Usually this information will be available in the corresponding
+		 * field of the old tuple. But for aborted updates or pg_upgraded
+		 * databases, we might be seeing the old-style CTID chains and hence
+		 * the information must be obtained by hard way
+		 * the information must be obtained the hard way.
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			heap_get_root_tuple_one(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)),
+					&root_offnum);
 	}
 	else
 	{
@@ -4156,10 +4179,29 @@ l2:
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		root_offnum = InvalidOffsetNumber;
 	}
 
-	RelationPutHeapTuple(relation, newbuf, heaptup, false);		/* insert new tuple */
+	/* insert new tuple */
+	RelationPutHeapTuple(relation, newbuf, heaptup, false, root_offnum);
+	HeapTupleHeaderSetHeapLatest(heaptup->t_data);
+	HeapTupleHeaderSetHeapLatest(newtup->t_data);
 
+	/*
+	 * Also update the in-memory copy with the root line pointer information
+	 */
+	if (OffsetNumberIsValid(root_offnum))
+	{
+		HeapTupleHeaderSetRootOffset(heaptup->t_data, root_offnum);
+		HeapTupleHeaderSetRootOffset(newtup->t_data, root_offnum);
+	}
+	else
+	{
+		HeapTupleHeaderSetRootOffset(heaptup->t_data,
+				ItemPointerGetOffsetNumber(&heaptup->t_self));
+		HeapTupleHeaderSetRootOffset(newtup->t_data,
+				ItemPointerGetOffsetNumber(&heaptup->t_self));
+	}
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
 	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
@@ -4172,7 +4214,9 @@ l2:
 	HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 	/* record address of new tuple in t_ctid of old one */
-	oldtup.t_data->t_ctid = heaptup->t_self;
+	HeapTupleHeaderSetNextCtid(oldtup.t_data,
+			ItemPointerGetBlockNumber(&(heaptup->t_self)),
+			ItemPointerGetOffsetNumber(&(heaptup->t_self)));
 
 	/* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */
 	if (PageIsAllVisible(BufferGetPage(buffer)))
@@ -4211,6 +4255,7 @@ l2:
 
 		recptr = log_heap_update(relation, buffer,
 								 newbuf, &oldtup, heaptup,
+								 root_offnum,
 								 old_key_tuple,
 								 all_visible_cleared,
 								 all_visible_cleared_new);
@@ -4573,7 +4618,8 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	ItemId		lp;
 	Page		page;
 	Buffer		vmbuffer = InvalidBuffer;
-	BlockNumber block;
+	BlockNumber	block;
+	OffsetNumber	offnum;
 	TransactionId xid,
 				xmax;
 	uint16		old_infomask,
@@ -4585,6 +4631,7 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 
 	*buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	block = ItemPointerGetBlockNumber(tid);
+	offnum = ItemPointerGetOffsetNumber(tid);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -4631,7 +4678,7 @@ l3:
 		xwait = HeapTupleHeaderGetRawXmax(tuple->t_data);
 		infomask = tuple->t_data->t_infomask;
 		infomask2 = tuple->t_data->t_infomask2;
-		ItemPointerCopy(&tuple->t_data->t_ctid, &t_ctid);
+		HeapTupleHeaderGetNextCtid(tuple->t_data, &t_ctid, offnum);
 
 		LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
 
@@ -5069,7 +5116,7 @@ failed:
 		Assert(result == HeapTupleSelfUpdated || result == HeapTupleUpdated ||
 			   result == HeapTupleWouldBlock);
 		Assert(!(tuple->t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tuple->t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(tuple->t_data, &hufd->ctid, offnum);
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tuple->t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tuple->t_data);
@@ -5145,7 +5192,7 @@ failed:
 	 * the tuple as well.
 	 */
 	if (HEAP_XMAX_IS_LOCKED_ONLY(new_infomask))
-		tuple->t_data->t_ctid = *tid;
+		HeapTupleHeaderSetHeapLatest(tuple->t_data);
 
 	/* Clear only the all-frozen bit on visibility map if needed */
 	if (PageIsAllVisible(page) &&
@@ -5659,6 +5706,7 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 	bool		cleared_all_frozen = false;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber block;
+	OffsetNumber offnum;
 
 	ItemPointerCopy(tid, &tupid);
 
@@ -5667,6 +5715,8 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 		new_infomask = 0;
 		new_xmax = InvalidTransactionId;
 		block = ItemPointerGetBlockNumber(&tupid);
+		offnum = ItemPointerGetOffsetNumber(&tupid);
+
 		ItemPointerCopy(&tupid, &(mytup.t_self));
 
 		if (!heap_fetch(rel, SnapshotAny, &mytup, &buf, false, NULL))
@@ -5885,7 +5935,7 @@ l4:
 
 		/* if we find the end of update chain, we're done. */
 		if (mytup.t_data->t_infomask & HEAP_XMAX_INVALID ||
-			ItemPointerEquals(&mytup.t_self, &mytup.t_data->t_ctid) ||
+			HeapTupleHeaderIsHeapLatest(mytup.t_data, mytup.t_self) ||
 			HeapTupleHeaderIsOnlyLocked(mytup.t_data))
 		{
 			result = HeapTupleMayBeUpdated;
@@ -5894,7 +5944,7 @@ l4:
 
 		/* tail recursion */
 		priorXmax = HeapTupleHeaderGetUpdateXid(mytup.t_data);
-		ItemPointerCopy(&(mytup.t_data->t_ctid), &tupid);
+		HeapTupleHeaderGetNextCtid(mytup.t_data, &tupid, offnum);
 		UnlockReleaseBuffer(buf);
 		if (vmbuffer != InvalidBuffer)
 			ReleaseBuffer(vmbuffer);
@@ -6011,7 +6061,8 @@ heap_finish_speculative(Relation relation, HeapTuple tuple)
 	 * Replace the speculative insertion token with a real t_ctid, pointing to
 	 * itself like it does on regular tuples.
 	 */
-	htup->t_ctid = tuple->t_self;
+	HeapTupleHeaderSetHeapLatest(htup);
+	HeapTupleHeaderSetRootOffset(htup, offnum);
 
 	/* XLOG stuff */
 	if (RelationNeedsWAL(relation))
@@ -6137,7 +6188,9 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	HeapTupleHeaderSetXmin(tp.t_data, InvalidTransactionId);
 
 	/* Clear the speculative insertion token too */
-	tp.t_data->t_ctid = tp.t_self;
+	HeapTupleHeaderSetNextCtid(tp.t_data,
+			ItemPointerGetBlockNumber(&tp.t_self),
+			ItemPointerGetOffsetNumber(&tp.t_self));
 
 	MarkBufferDirty(buffer);
 
@@ -7486,6 +7539,7 @@ log_heap_visible(RelFileNode rnode, Buffer heap_buffer, Buffer vm_buffer,
 static XLogRecPtr
 log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup, HeapTuple newtup,
+				OffsetNumber root_offnum,
 				HeapTuple old_key_tuple,
 				bool all_visible_cleared, bool new_all_visible_cleared)
 {
@@ -7605,6 +7659,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	/* Prepare WAL data for the new page */
 	xlrec.new_offnum = ItemPointerGetOffsetNumber(&newtup->t_self);
 	xlrec.new_xmax = HeapTupleHeaderGetRawXmax(newtup->t_data);
+	xlrec.root_offnum = root_offnum;
 
 	bufflags = REGBUF_STANDARD;
 	if (init)
@@ -8260,7 +8315,7 @@ heap_xlog_delete(XLogReaderState *record)
 			PageClearAllVisible(page);
 
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = target_tid;
+		HeapTupleHeaderSetHeapLatest(htup);
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8350,7 +8405,9 @@ heap_xlog_insert(XLogReaderState *record)
 		htup->t_hoff = xlhdr.t_hoff;
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
-		htup->t_ctid = target_tid;
+
+		HeapTupleHeaderSetHeapLatest(htup);
+		HeapTupleHeaderSetRootOffset(htup, xlrec->offnum);
 
 		if (PageAddItem(page, (Item) htup, newlen, xlrec->offnum,
 						true, true) == InvalidOffsetNumber)
@@ -8485,8 +8542,9 @@ heap_xlog_multi_insert(XLogReaderState *record)
 			htup->t_hoff = xlhdr->t_hoff;
 			HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 			HeapTupleHeaderSetCmin(htup, FirstCommandId);
-			ItemPointerSetBlockNumber(&htup->t_ctid, blkno);
-			ItemPointerSetOffsetNumber(&htup->t_ctid, offnum);
+
+			HeapTupleHeaderSetHeapLatest(htup);
+			HeapTupleHeaderSetRootOffset(htup, offnum);
 
 			offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 			if (offnum == InvalidOffsetNumber)
@@ -8622,7 +8680,8 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
 		/* Set forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetNextCtid(htup, ItemPointerGetBlockNumber(&newtid),
+				ItemPointerGetOffsetNumber(&newtid));
 
 		/* Mark the page as a candidate for pruning */
 		PageSetPrunable(page, XLogRecGetXid(record));
@@ -8756,12 +8815,17 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetHeapLatest(htup);
 
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
 
+		if (OffsetNumberIsValid(xlrec->root_offnum))
+			HeapTupleHeaderSetRootOffset(htup, xlrec->root_offnum);
+		else
+			HeapTupleHeaderSetRootOffset(htup, offnum);
+
 		if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED)
 			PageClearAllVisible(page);
 
@@ -8889,9 +8953,7 @@ heap_xlog_lock(XLogReaderState *record)
 		{
 			HeapTupleHeaderClearHotUpdated(htup);
 			/* Make sure there is no forward chain link in t_ctid */
-			ItemPointerSet(&htup->t_ctid,
-						   BufferGetBlockNumber(buffer),
-						   offnum);
+			HeapTupleHeaderSetHeapLatest(htup);
 		}
 		HeapTupleHeaderSetXmax(htup, xlrec->locking_xid);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index c90fb71..8183920 100644
--- a/src/backend/access/heap/hio.c
+++ b/src/backend/access/heap/hio.c
@@ -31,12 +31,17 @@
  * !!! EREPORT(ERROR) IS DISALLOWED HERE !!!  Must PANIC on failure!!!
  *
  * Note - caller must hold BUFFER_LOCK_EXCLUSIVE on the buffer.
+ *
+ * The caller can optionally tell us to set the root offset to the given value.
+ * Otherwise, the root offset is set to the offset of the new location once it
+ * is known.
  */
 void
 RelationPutHeapTuple(Relation relation,
 					 Buffer buffer,
 					 HeapTuple tuple,
-					 bool token)
+					 bool token,
+					 OffsetNumber root_offnum)
 {
 	Page		pageHeader;
 	OffsetNumber offnum;
@@ -69,7 +74,13 @@ RelationPutHeapTuple(Relation relation,
 		ItemId		itemId = PageGetItemId(pageHeader, offnum);
 		Item		item = PageGetItem(pageHeader, itemId);
 
-		((HeapTupleHeader) item)->t_ctid = tuple->t_self;
+		HeapTupleHeaderSetHeapLatest((HeapTupleHeader) item);
+		if (OffsetNumberIsValid(root_offnum))
+			HeapTupleHeaderSetRootOffset((HeapTupleHeader) item,
+					root_offnum);
+		else
+			HeapTupleHeaderSetRootOffset((HeapTupleHeader) item,
+					offnum);
 	}
 }
 
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index 6ff9251..7c2231a 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -55,6 +55,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
 
+static void heap_get_root_tuples_internal(Page page,
+				OffsetNumber target_offnum, OffsetNumber *root_offsets);
 
 /*
  * Optionally prune and repair fragmentation in the specified page.
@@ -740,8 +742,9 @@ heap_page_prune_execute(Buffer buffer,
  * holds a pin on the buffer. Once pin is released, a tuple might be pruned
  * and reused by a completely unrelated tuple.
  */
-void
-heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+static void
+heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offsets)
 {
 	OffsetNumber offnum,
 				maxoff;
@@ -820,6 +823,14 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 			/* Remember the root line pointer for this item */
 			root_offsets[nextoffnum - 1] = offnum;
 
+			/*
+			 * If the caller is interested in just one offset and we have
+			 * found it, we can return right away.
+			 */
+			if (OffsetNumberIsValid(target_offnum) &&
+					(nextoffnum == target_offnum))
+				return;
+
 			/* Advance to next chain member, if any */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				break;
@@ -829,3 +840,25 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 		}
 	}
 }
+
+/*
+ * Get root line pointer for the given tuple
+ */
+void
+heap_get_root_tuple_one(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offnum)
+{
+	OffsetNumber offsets[MaxHeapTuplesPerPage];
+	heap_get_root_tuples_internal(page, target_offnum, offsets);
+	*root_offnum = offsets[target_offnum - 1];
+}
+
+/*
+ * Get root line pointers for all tuples in the page
+ */
+void
+heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+{
+	heap_get_root_tuples_internal(page, InvalidOffsetNumber,
+			root_offsets);
+}
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index 17584ba..09a164c 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -419,14 +419,14 @@ rewrite_heap_tuple(RewriteState state,
 	 */
 	if (!((old_tuple->t_data->t_infomask & HEAP_XMAX_INVALID) ||
 		  HeapTupleHeaderIsOnlyLocked(old_tuple->t_data)) &&
-		!(ItemPointerEquals(&(old_tuple->t_self),
-							&(old_tuple->t_data->t_ctid))))
+		!(HeapTupleHeaderIsHeapLatest(old_tuple->t_data, old_tuple->t_self)))
 	{
 		OldToNewMapping mapping;
 
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
-		hashkey.tid = old_tuple->t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(old_tuple->t_data, &hashkey.tid,
+				ItemPointerGetOffsetNumber(&old_tuple->t_self));
 
 		mapping = (OldToNewMapping)
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -439,7 +439,9 @@ rewrite_heap_tuple(RewriteState state,
 			 * set the ctid of this tuple to point to the new location, and
 			 * insert it right away.
 			 */
-			new_tuple->t_data->t_ctid = mapping->new_tid;
+			HeapTupleHeaderSetNextCtid(new_tuple->t_data,
+					ItemPointerGetBlockNumber(&mapping->new_tid),
+					ItemPointerGetOffsetNumber(&mapping->new_tid));
 
 			/* We don't need the mapping entry anymore */
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -525,7 +527,9 @@ rewrite_heap_tuple(RewriteState state,
 				new_tuple = unresolved->tuple;
 				free_new = true;
 				old_tid = unresolved->old_tid;
-				new_tuple->t_data->t_ctid = new_tid;
+				HeapTupleHeaderSetNextCtid(new_tuple->t_data,
+						ItemPointerGetBlockNumber(&new_tid),
+						ItemPointerGetOffsetNumber(&new_tid));
 
 				/*
 				 * We don't need the hash entry anymore, but don't free its
@@ -731,7 +735,10 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		onpage_tup->t_ctid = tup->t_self;
+		HeapTupleHeaderSetNextCtid(onpage_tup,
+				ItemPointerGetBlockNumber(&tup->t_self),
+				ItemPointerGetOffsetNumber(&tup->t_self));
+		HeapTupleHeaderSetHeapLatest(onpage_tup);
 	}
 
 	/* If heaptup is a private copy, release it. */
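The rewriteheap.c hunks above route every t_ctid store through HeapTupleHeaderSetNextCtid()/HeapTupleHeaderGetNextCtid() because, once HEAP_LATEST_TUPLE is set, the offset half of t_ctid is reinterpreted as the root line pointer rather than a forward link. A minimal sketch of that dual use, with toy types and hypothetical names (not the real ItemPointer machinery):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy ctid: block number plus offset, as in ItemPointerData. */
typedef struct { uint32_t block; uint16_t offnum; } ToyCtid;
typedef struct { ToyCtid ctid; bool latest; } ToyTupleHeader;

/* Point at the next version in the chain; this clears the "latest" flag,
 * just as HeapTupleHeaderSetNextCtid() clears HEAP_LATEST_TUPLE. */
static void
toy_set_next_ctid(ToyTupleHeader *tup, uint32_t block, uint16_t offnum)
{
	tup->ctid.block = block;
	tup->ctid.offnum = offnum;
	tup->latest = false;
}

/* Fetch the next ctid.  For the latest tuple the offset slot holds the
 * root line pointer instead, so substitute the tuple's own offset. */
static ToyCtid
toy_get_next_ctid(const ToyTupleHeader *tup, uint16_t self_offnum)
{
	ToyCtid		next = tup->ctid;

	if (tup->latest)
		next.offnum = self_offnum;
	return next;
}
```

Funnelling all t_ctid writes through one setter is what keeps the flag and the reinterpreted field consistent.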
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 32bb3f9..079a77f 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -2443,7 +2443,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		 * As above, it should be safe to examine xmax and t_ctid without the
 		 * buffer content lock, because they can't be changing.
 		 */
-		if (ItemPointerEquals(&tuple.t_self, &tuple.t_data->t_ctid))
+		if (HeapTupleHeaderIsHeapLatest(tuple.t_data, tuple.t_self))
 		{
 			/* deleted, so forget about it */
 			ReleaseBuffer(buffer);
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index b3a595c..94b46b8 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -188,6 +188,8 @@ extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
+extern void heap_get_root_tuple_one(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offnum);
 extern void heap_get_root_tuples(Page page, OffsetNumber *root_offsets);
 
 /* in heap/syncscan.c */
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index 06a8242..5a04561 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -193,6 +193,8 @@ typedef struct xl_heap_update
 	uint8		flags;
 	TransactionId new_xmax;		/* xmax of the new tuple */
 	OffsetNumber new_offnum;	/* new tuple's offset */
+	OffsetNumber root_offnum;	/* offset of the root line pointer in case of
+								   HOT or WARM update */
 
 	/*
 	 * If XLOG_HEAP_CONTAINS_OLD_TUPLE or XLOG_HEAP_CONTAINS_OLD_KEY flags are
@@ -200,7 +202,7 @@ typedef struct xl_heap_update
 	 */
 } xl_heap_update;
 
-#define SizeOfHeapUpdate	(offsetof(xl_heap_update, new_offnum) + sizeof(OffsetNumber))
+#define SizeOfHeapUpdate	(offsetof(xl_heap_update, root_offnum) + sizeof(OffsetNumber))
 
 /*
  * This is what we need to know about vacuum page cleanup/redirect
diff --git a/src/include/access/hio.h b/src/include/access/hio.h
index a174b34..82e5b5f 100644
--- a/src/include/access/hio.h
+++ b/src/include/access/hio.h
@@ -36,7 +36,7 @@ typedef struct BulkInsertStateData
 
 
 extern void RelationPutHeapTuple(Relation relation, Buffer buffer,
-					 HeapTuple tuple, bool token);
+					 HeapTuple tuple, bool token, OffsetNumber root_offnum);
 extern Buffer RelationGetBufferForTuple(Relation relation, Size len,
 						  Buffer otherBuffer, int options,
 						  BulkInsertState bistate,
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index d7e5fad..76328ff 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,13 +260,19 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x1800 are available */
+/* bit 0x0800 is available */
+#define HEAP_LATEST_TUPLE		0x1000	/*
+										 * This is the last tuple in the
+										 * chain, and t_ctid.ip_posid points
+										 * to the root line pointer
+										 */
 #define HEAP_KEYS_UPDATED		0x2000	/* tuple was updated and key cols
 										 * modified, or tuple deleted */
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xE000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
 
 /*
  * HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins.  It is
@@ -504,6 +510,24 @@ do { \
   (tup)->t_infomask2 & HEAP_ONLY_TUPLE \
 )
 
+#define HeapTupleHeaderSetHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE \
+)
+
+#define HeapTupleHeaderClearHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 &= ~HEAP_LATEST_TUPLE \
+)
+
+#define HeapTupleHeaderIsHeapLatest(tup, tid) \
+( \
+  ((tup)->t_infomask2 & HEAP_LATEST_TUPLE) || \
+  ((ItemPointerGetBlockNumber(&(tup)->t_ctid) == ItemPointerGetBlockNumber(&tid)) && \
+   (ItemPointerGetOffsetNumber(&(tup)->t_ctid) == ItemPointerGetOffsetNumber(&tid))) \
+)
+
 #define HeapTupleHeaderSetHeapOnly(tup) \
 ( \
   (tup)->t_infomask2 |= HEAP_ONLY_TUPLE \
@@ -541,6 +565,43 @@ do { \
 		(((tup)->t_infomask & HEAP_HASEXTERNAL) != 0)
 
 
+#define HeapTupleHeaderSetNextCtid(tup, block, offset) \
+do { \
+		ItemPointerSetBlockNumber(&((tup)->t_ctid), (block)); \
+		ItemPointerSetOffsetNumber(&((tup)->t_ctid), (offset)); \
+		HeapTupleHeaderClearHeapLatest((tup)); \
+} while (0)
+
+#define HeapTupleHeaderGetNextCtid(tup, next_ctid, offnum) \
+do { \
+	if ((tup)->t_infomask2 & HEAP_LATEST_TUPLE) \
+	{ \
+		ItemPointerSet((next_ctid), ItemPointerGetBlockNumber(&(tup)->t_ctid), \
+				(offnum)); \
+	} \
+	else \
+	{ \
+		ItemPointerSet((next_ctid), ItemPointerGetBlockNumber(&(tup)->t_ctid), \
+				ItemPointerGetOffsetNumber(&(tup)->t_ctid)); \
+	} \
+} while (0)
+
+#define HeapTupleHeaderSetRootOffset(tup, offset) \
+do { \
+	AssertMacro(!HeapTupleHeaderIsHotUpdated(tup)); \
+	ItemPointerSetOffsetNumber(&(tup)->t_ctid, (offset)); \
+} while (0)
+
+#define HeapTupleHeaderGetRootOffset(tup) \
+( \
+  ItemPointerGetOffsetNumber(&(tup)->t_ctid) \
+)
+
+#define HeapTupleHeaderHasRootOffset(tup) \
+( \
+  (tup)->t_infomask2 & HEAP_LATEST_TUPLE \
+)
+
 /*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
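For reference, the t_infomask2 bit budget the first patch relies on can be checked mechanically: HEAP_NATTS_MASK occupies the low 11 bits, and the new HEAP_LATEST_TUPLE bit must not collide with it or with the existing flag bits. A small sketch (flag values copied from the patch; the macro names here are simplified stand-ins for the HeapTupleHeader macros):

```c
#include <assert.h>
#include <stdint.h>

/* Values as defined in the patched htup_details.h. */
#define NATTS_MASK		0x07FF	/* 11 bits: number of attributes */
#define LATEST_TUPLE	0x1000	/* chain end; ip_posid holds root offset */
#define KEYS_UPDATED	0x2000
#define HOT_UPDATED		0x4000
#define ONLY_TUPLE		0x8000

/* Stand-ins for HeapTupleHeaderSetHeapLatest() and friends, operating
 * directly on a bare infomask word. */
#define SetLatest(mask)		((mask) |= LATEST_TUPLE)
#define ClearLatest(mask)	((mask) &= ~LATEST_TUPLE)
#define IsLatest(mask)		(((mask) & LATEST_TUPLE) != 0)
```

Setting or clearing the flag must leave the attribute count (the NATTS_MASK bits) untouched, which the disjointness of the masks guarantees.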
Attachment: 0002_warm_updates_v3.patch (application/octet-stream)
diff --git a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c
index debf4f4..d49d179 100644
--- a/contrib/bloom/blutils.c
+++ b/contrib/bloom/blutils.c
@@ -138,6 +138,7 @@ blhandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = blendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index 1b45a4c..ba3fffb 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -111,6 +111,7 @@ brinhandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = brinendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c
index f7f44b4..813c2c3 100644
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -88,6 +88,7 @@ gisthandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = gistendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c
index e3b1eef..d7c50c1 100644
--- a/src/backend/access/hash/hash.c
+++ b/src/backend/access/hash/hash.c
@@ -85,6 +85,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = hashendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = hashrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -265,6 +266,8 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 	OffsetNumber offnum;
 	ItemPointer current;
 	bool		res;
+	IndexTuple	itup;
 
 	/* Hash indexes are always lossy since we store only the hash code */
 	scan->xs_recheck = true;
@@ -302,8 +305,6 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 			 offnum <= maxoffnum;
 			 offnum = OffsetNumberNext(offnum))
 		{
-			IndexTuple	itup;
-
 			itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 			if (ItemPointerEquals(&(so->hashso_heappos), &(itup->t_tid)))
 				break;
diff --git a/src/backend/access/hash/hashsearch.c b/src/backend/access/hash/hashsearch.c
index 4825558..cf44214 100644
--- a/src/backend/access/hash/hashsearch.c
+++ b/src/backend/access/hash/hashsearch.c
@@ -59,6 +59,8 @@ _hash_next(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
 	return true;
 }
 
@@ -263,6 +265,9 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
+
 	return true;
 }
 
diff --git a/src/backend/access/hash/hashutil.c b/src/backend/access/hash/hashutil.c
index 822862d..ebb9d6c 100644
--- a/src/backend/access/hash/hashutil.c
+++ b/src/backend/access/hash/hashutil.c
@@ -17,8 +17,12 @@
 #include "access/hash.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
+#include "nodes/execnodes.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 
 /*
@@ -352,3 +356,107 @@ _hash_binsearch_last(Page page, uint32 hash_value)
 
 	return lower;
 }
+
+bool
+hashrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	IndexInfo  *indexInfo;
+	EState	   *estate;
+	ExprContext *econtext;
+	TupleTableSlot *slot;
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	Datum		values2[INDEX_MAX_KEYS];
+	bool		isnull2[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	indexInfo = BuildIndexInfo(indexRel);
+
+	/*
+	 * The heap tuple must be put into a slot for FormIndexDatum.
+	 */
+	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
+
+	ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
+
+	/*
+	 * Typically the index won't have expressions, but if it does we need an
+	 * EState to evaluate them.  We need it for exclusion constraints too,
+	 * even if they are just on simple columns.
+	 */
+	if (indexInfo->ii_Expressions != NIL ||
+			indexInfo->ii_ExclusionOps != NULL)
+	{
+		estate = CreateExecutorState();
+		econtext = GetPerTupleExprContext(estate);
+		econtext->ecxt_scantuple = slot;
+	}
+	else
+		estate = NULL;
+
+	/*
+	 * Form the index values and isnull flags for the index entry that we need
+	 * to check.
+	 *
+	 * Note: if the index uses functions that are not as immutable as they are
+	 * supposed to be, this could produce an index tuple different from the
+	 * original.  The index AM can catch such errors by verifying that it
+	 * finds a matching index entry with the tuple's TID.  For exclusion
+	 * constraints we check this in check_exclusion_constraint().
+	 */
+	FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
+	/*
+	 * HASH indexes compute a hash value of the key and store that in the
+	 * index. So we must first compute the hash of the value fetched from
+	 * the heap and then do the comparison
+	 */
+	_hash_convert_tuple(indexRel, values, isnull, values2, isnull2);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL then they are equal
+		 */
+		if (isnull2[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If only one of them is NULL then they are not equal
+		 */
+		if (isnull2[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now do a raw memory comparison
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values2[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	if (estate != NULL)
+		FreeExecutorState(estate);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	return equal;
+}
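hashrecheck() above recomputes the index attributes from the heap tuple, converts them to their stored (hashed) form with _hash_convert_tuple(), and byte-compares the result against the index tuple. The same recompute-and-compare shape, reduced to a toy single-column case (toy_hash and both types are hypothetical; the real code uses the opclass hash support function and datumIsEqual()):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy stand-ins: the heap row carries the raw key, while the index entry
 * stores only a hash of it, as hash indexes do. */
typedef struct { int32_t key; bool isnull; } ToyHeapRow;
typedef struct { uint32_t hash; bool isnull; } ToyIndexEntry;

/* Hypothetical hash function standing in for the opclass hash proc. */
static uint32_t
toy_hash(int32_t v)
{
	return (uint32_t) v * 2654435761u;
}

/* Recheck: does this index entry still describe this heap row? */
static bool
toy_hash_recheck(const ToyIndexEntry *ind, const ToyHeapRow *row)
{
	if (ind->isnull && row->isnull)
		return true;			/* both NULL: treated as matching */
	if (ind->isnull || row->isnull)
		return false;			/* only one NULL: no match */
	/* Recompute the stored form from the heap value and compare. */
	return toy_hash(row->key) == ind->hash;
}
```

The two NULL branches mirror the isnull checks in the loop above; only when both sides are non-NULL does the raw comparison run.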
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 69cd066..e84f041 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -99,7 +99,10 @@ static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 static void HeapSatisfiesHOTandKeyUpdate(Relation relation,
 							 Bitmapset *hot_attrs,
 							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
+							 Bitmapset *exprindx_attrs,
+							 Bitmapset **updated_attrs,
+							 bool *satisfies_hot, bool *satisfies_warm,
+							 bool *satisfies_key,
 							 bool *satisfies_id,
 							 HeapTuple oldtup, HeapTuple newtup);
 static bool heap_acquire_tuplock(Relation relation, ItemPointer tid,
@@ -1960,6 +1963,76 @@ heap_fetch(Relation relation,
 }
 
 /*
+ * Check whether the HOT chain originating or continuing at tid ever became
+ * a WARM chain, even if the UPDATE that made it so eventually aborted.
+ */
+static void
+hot_check_warm_chain(Page dp, ItemPointer tid, bool *recheck)
+{
+	TransactionId prev_xmax = InvalidTransactionId;
+	OffsetNumber offnum;
+	HeapTupleData heapTuple;
+
+	if (*recheck == true)
+		return;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+			break;
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		/*
+		 * Presence of either a WARM or a WARM-updated tuple signals possible
+		 * breakage, and the caller must recheck any tuple returned from this
+		 * chain for index satisfaction
+		 */
+		if (HeapTupleHeaderIsHeapWarmTuple(heapTuple.t_data))
+		{
+			*recheck = true;
+			break;
+		}
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (HeapTupleIsHotUpdated(&heapTuple))
+		{
+			offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+			prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+		}
+		else
+			break;				/* end of chain */
+	}
+}
+
+/*
  *	heap_hot_search_buffer	- search HOT chain for tuple satisfying snapshot
  *
  * On entry, *tid is the TID of a tuple (either a simple tuple, or the root
@@ -1979,11 +2052,14 @@ heap_fetch(Relation relation,
  * Unlike heap_fetch, the caller must already have pin and (at least) share
  * lock on the buffer; it is still pinned/locked at exit.  Also unlike
  * heap_fetch, we do not report any pgstats count; caller may do so if wanted.
+ *
+ * If recheck is not NULL, the caller must set *recheck to false on entry;
+ * it will be set to true on exit if a WARM tuple is encountered.
  */
 bool
 heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 					   Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call)
+					   bool *all_dead, bool first_call, bool *recheck)
 {
 	Page		dp = (Page) BufferGetPage(buffer);
 	TransactionId prev_xmax = InvalidTransactionId;
@@ -2025,6 +2101,16 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 				/* Follow the redirect */
 				offnum = ItemIdGetRedirect(lp);
 				at_chain_start = false;
+
+				/* Check if it's a WARM chain */
+				if (recheck && *recheck == false)
+				{
+					if (ItemIdIsHeapWarm(lp))
+					{
+						*recheck = true;
+						Assert(!IsSystemRelation(relation));
+					}
+				}
 				continue;
 			}
 			/* else must be end of chain */
@@ -2039,7 +2125,8 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		/*
 		 * Shouldn't see a HEAP_ONLY tuple at chain start.
 		 */
-		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple))
+		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple) &&
+				!HeapTupleIsHeapWarmTuple(heapTuple))
 			break;
 
 		/*
@@ -2052,6 +2139,20 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 			break;
 
 		/*
+		 * Check if there exists a WARM tuple somewhere down the chain and set
+		 * recheck to TRUE.
+		 *
+		 * XXX This is not very efficient right now, and we should look for
+		 * possible improvements here
+		 */
+		if (recheck && *recheck == false)
+		{
+			hot_check_warm_chain(dp, &heapTuple->t_self, recheck);
+			if (recheck && *recheck == true)
+				Assert(!IsSystemRelation(relation));
+		}
+
+		/*
 		 * When first_call is true (and thus, skip is initially false) we'll
 		 * return the first tuple we find.  But on later passes, heapTuple
 		 * will initially be pointing to the tuple we returned last time.
@@ -2124,18 +2225,41 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
  */
 bool
 heap_hot_search(ItemPointer tid, Relation relation, Snapshot snapshot,
-				bool *all_dead)
+				bool *all_dead, bool *recheck, Buffer *cbuffer,
+				HeapTuple heapTuple)
 {
 	bool		result;
 	Buffer		buffer;
-	HeapTupleData heapTuple;
+	ItemPointerData ret_tid = *tid;
 
 	buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	LockBuffer(buffer, BUFFER_LOCK_SHARE);
-	result = heap_hot_search_buffer(tid, relation, buffer, snapshot,
-									&heapTuple, all_dead, true);
-	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-	ReleaseBuffer(buffer);
+	result = heap_hot_search_buffer(&ret_tid, relation, buffer, snapshot,
+									heapTuple, all_dead, true, recheck);
+
+	/*
+	 * If we are returning a potential candidate tuple from this chain and the
+	 * caller has requested the "recheck" hint, keep the buffer locked and
+	 * pinned. The caller must release the lock and pin on the buffer in all
+	 * such cases
+	 */
+	if (!result || !recheck || !(*recheck))
+	{
+		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+		ReleaseBuffer(buffer);
+	}
+
+	/*
+	 * Update the caller-supplied tid with the actual location of the tuple
+	 * being returned
+	 */
+	if (result)
+	{
+		*tid = ret_tid;
+		if (cbuffer)
+			*cbuffer = buffer;
+	}
+
 	return result;
 }
 
@@ -3442,13 +3566,15 @@ simple_heap_delete(Relation relation, ItemPointer tid)
 HTSU_Result
 heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode)
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **updated_attrs, bool *warm_update)
 {
 	HTSU_Result result;
 	TransactionId xid = GetCurrentTransactionId();
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *exprindx_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3469,9 +3595,11 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		have_tuple_lock = false;
 	bool		iscombo;
 	bool		satisfies_hot;
+	bool		satisfies_warm;
 	bool		satisfies_key;
 	bool		satisfies_id;
 	bool		use_hot_update = false;
+	bool		use_warm_update = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
 	bool		all_visible_cleared_new = false;
@@ -3496,6 +3624,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot update tuples during a parallel operation")));
 
+	/* Assume no-warm update */
+	if (warm_update)
+		*warm_update = false;
+
 	/*
 	 * Fetch the list of attributes to be checked for HOT update.  This is
 	 * wasted effort if we fail to update or have to put the new tuple on a
@@ -3512,6 +3644,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	exprindx_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_EXPR_PREDICATE);
 
 	block = ItemPointerGetBlockNumber(otid);
 	offnum = ItemPointerGetOffsetNumber(otid);
@@ -3571,7 +3705,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	 * serendipitiously arrive at the same key values.
 	 */
 	HeapSatisfiesHOTandKeyUpdate(relation, hot_attrs, key_attrs, id_attrs,
-								 &satisfies_hot, &satisfies_key,
+								 exprindx_attrs,
+								 updated_attrs,
+								 &satisfies_hot, &satisfies_warm,
+								 &satisfies_key,
 								 &satisfies_id, &oldtup, newtup);
 	if (satisfies_key)
 	{
@@ -4118,6 +4255,34 @@ l2:
 		 */
 		if (satisfies_hot)
 			use_hot_update = true;
+		else
+		{
+			/*
+			 * If no WARM updates yet on this chain, let this update be a WARM
+			 * update.
+			 *
+			 * We check for both warm and warm updated tuples since if the
+			 * previous WARM update aborted, we may still have added
+			 * another index entry for this HOT chain. In such situations, we
+			 * must not attempt a WARM update until the duplicate (key, CTID)
+			 * index entry issue is sorted out
+			 *
+			 * XXX Later we'll add more checks to ensure WARM chains can
+			 * further be WARM updated. This is probably good enough for a
+			 * first round of tests of the remaining functionality
+			 *
+			 * XXX Disable WARM updates on system tables. There is nothing in
+			 * principle that stops us from supporting this. But it would
+			 * require an API change to propagate the changed columns back to the
+			 * caller so that CatalogUpdateIndexes() can avoid adding new
+			 * entries to indexes that are not changed by update. This will be
+			 * fixed once basic patch is tested. !!FIXME
+			 */
+			if (satisfies_warm &&
+				!HeapTupleIsHeapWarmTuple(&oldtup) &&
+				!IsSystemRelation(relation))
+				use_warm_update = true;
+		}
 	}
 	else
 	{
@@ -4158,6 +4323,21 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+
+		/*
+		 * Even if we are doing a HOT update, we must carry forward the WARM
+		 * flag because we may have already inserted another index entry
+		 * pointing to our root and a third entry may create duplicates
+		 *
+		 * XXX This should be revisited if we get index (key, CTID) duplicate
+		 * detection mechanism in place
+		 */
+		if (HeapTupleIsHeapWarmTuple(&oldtup))
+		{
+			HeapTupleSetHeapWarmTuple(heaptup);
+			HeapTupleSetHeapWarmTuple(newtup);
+		}
+
 		/*
 		 * For HOT (or WARM) updated tuples, we store the offset of the root
 		 * line pointer of this chain in the ip_posid field of the new tuple.
@@ -4173,12 +4353,38 @@ l2:
 					ItemPointerGetOffsetNumber(&(oldtup.t_self)),
 					&root_offnum);
 	}
+	else if (use_warm_update)
+	{
+		Assert(!IsSystemRelation(relation));
+
+		/* Mark the old tuple as HOT-updated */
+		HeapTupleSetHotUpdated(&oldtup);
+		HeapTupleSetHeapWarmTuple(&oldtup);
+		/* And mark the new tuple as heap-only */
+		HeapTupleSetHeapOnly(heaptup);
+		HeapTupleSetHeapWarmTuple(heaptup);
+		/* Mark the caller's copy too, in case different from heaptup */
+		HeapTupleSetHeapOnly(newtup);
+		HeapTupleSetHeapWarmTuple(newtup);
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			heap_get_root_tuple_one(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)),
+					&root_offnum);
+
+		/* Let the caller know we did a WARM update */
+		if (warm_update)
+			*warm_update = true;
+	}
 	else
 	{
 		/* Make sure tuples are correctly marked as not-HOT */
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		HeapTupleClearHeapWarmTuple(heaptup);
+		HeapTupleClearHeapWarmTuple(newtup);
 		root_offnum = InvalidOffsetNumber;
 	}
 
@@ -4297,7 +4503,12 @@ l2:
 	if (have_tuple_lock)
 		UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode);
 
-	pgstat_count_heap_update(relation, use_hot_update);
+	/*
+	 * Even with WARM we still count these as HOT updates for statistics,
+	 * since we continue to use that term even though such updates are now
+	 * more frequent than previously.
+	 */
+	pgstat_count_heap_update(relation, use_hot_update || use_warm_update);
 
 	/*
 	 * If heaptup is a private copy, release it.  Don't forget to copy t_self
@@ -4405,6 +4616,13 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
  * will be checking very similar sets of columns, and doing the same tests on
  * them, it makes sense to optimize and do them together.
  *
+ * exprindx_attrs designates the set of attributes used in expression or
+ * predicate indexes. In this version, we don't allow a WARM update if any
+ * expression or predicate index column is updated.
+ *
+ * If updated_attrs is not NULL, the caller always wants the complete set
+ * of changed attributes reported
+ *
  * We receive three bitmapsets comprising the three sets of columns we're
  * interested in.  Note these are destructively modified; that is OK since
  * this is invoked at most once in heap_update.
@@ -4417,7 +4635,11 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 static void
 HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
+							 Bitmapset *exprindx_attrs,
+							 Bitmapset **updated_attrs,
+							 bool *satisfies_hot,
+							 bool *satisfies_warm,
+							 bool *satisfies_key,
 							 bool *satisfies_id,
 							 HeapTuple oldtup, HeapTuple newtup)
 {
@@ -4454,8 +4676,11 @@ HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 		 * Since the HOT attributes are a superset of the key attributes and
 		 * the key attributes are a superset of the id attributes, this logic
 		 * is guaranteed to identify the next column that needs to be checked.
+		 *
+		 * If the caller also wants to know the list of updated index
+		 * attributes, we must scan through all the attributes
 		 */
-		if (hot_result && next_hot_attnum > FirstLowInvalidHeapAttributeNumber)
+		if ((hot_result || updated_attrs) && next_hot_attnum > FirstLowInvalidHeapAttributeNumber)
 			check_now = next_hot_attnum;
 		else if (key_result && next_key_attnum > FirstLowInvalidHeapAttributeNumber)
 			check_now = next_key_attnum;
@@ -4476,8 +4701,12 @@ HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 			if (check_now == next_id_attnum)
 				id_result = false;
 
+			if (updated_attrs)
+				*updated_attrs = bms_add_member(*updated_attrs, check_now -
+						FirstLowInvalidHeapAttributeNumber);
+
 			/* if all are false now, we can stop checking */
-			if (!hot_result && !key_result && !id_result)
+			if (!hot_result && !key_result && !id_result && !updated_attrs)
 				break;
 		}
 
@@ -4488,7 +4717,7 @@ HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 		 * bms_first_member() will return -1 and the attribute number will end
 		 * up with a value less than FirstLowInvalidHeapAttributeNumber.
 		 */
-		if (hot_result && check_now == next_hot_attnum)
+		if ((hot_result || updated_attrs) && check_now == next_hot_attnum)
 		{
 			next_hot_attnum = bms_first_member(hot_attrs);
 			next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
@@ -4505,6 +4734,13 @@ HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 		}
 	}
 
+	if (updated_attrs && bms_overlap(*updated_attrs, exprindx_attrs))
+		*satisfies_warm = false;
+	else if (!relation->rd_supportswarm)
+		*satisfies_warm = false;
+	else
+		*satisfies_warm = true;
+
 	*satisfies_hot = hot_result;
 	*satisfies_key = key_result;
 	*satisfies_id = id_result;
@@ -4528,7 +4764,7 @@ simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
 	result = heap_update(relation, otid, tup,
 						 GetCurrentCommandId(true), InvalidSnapshot,
 						 true /* wait for commit */ ,
-						 &hufd, &lockmode);
+						 &hufd, &lockmode, NULL, NULL);
 	switch (result)
 	{
 		case HeapTupleSelfUpdated:
@@ -7415,6 +7651,7 @@ log_heap_cleanup_info(RelFileNode rnode, TransactionId latestRemovedXid)
 XLogRecPtr
 log_heap_clean(Relation reln, Buffer buffer,
 			   OffsetNumber *redirected, int nredirected,
+			   OffsetNumber *warm, int nwarm,
 			   OffsetNumber *nowdead, int ndead,
 			   OffsetNumber *nowunused, int nunused,
 			   TransactionId latestRemovedXid)
@@ -7428,6 +7665,7 @@ log_heap_clean(Relation reln, Buffer buffer,
 	xlrec.latestRemovedXid = latestRemovedXid;
 	xlrec.nredirected = nredirected;
 	xlrec.ndead = ndead;
+	xlrec.nwarm = nwarm;
 
 	XLogBeginInsert();
 	XLogRegisterData((char *) &xlrec, SizeOfHeapClean);
@@ -7450,6 +7688,10 @@ log_heap_clean(Relation reln, Buffer buffer,
 		XLogRegisterBufData(0, (char *) nowdead,
 							ndead * sizeof(OffsetNumber));
 
+	if (nwarm > 0)
+		XLogRegisterBufData(0, (char *) warm,
+							nwarm * sizeof(OffsetNumber));
+
 	if (nunused > 0)
 		XLogRegisterBufData(0, (char *) nowunused,
 							nunused * sizeof(OffsetNumber));
@@ -7555,6 +7797,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	bool		need_tuple_data = RelationIsLogicallyLogged(reln);
 	bool		init;
 	int			bufflags;
+	bool		warm_update = false;
 
 	/* Caller should not call me on a non-WAL-logged relation */
 	Assert(RelationNeedsWAL(reln));
@@ -7566,6 +7809,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	else
 		info = XLOG_HEAP_UPDATE;
 
+	if (HeapTupleIsHeapWarmTuple(newtup))
+		warm_update = true;
+
 	/*
 	 * If the old and new tuple are on the same page, we only need to log the
 	 * parts of the new tuple that were changed.  That saves on the amount of
@@ -7639,6 +7885,8 @@ log_heap_update(Relation reln, Buffer oldbuf,
 				xlrec.flags |= XLH_UPDATE_CONTAINS_OLD_KEY;
 		}
 	}
+	if (warm_update)
+		xlrec.flags |= XLH_UPDATE_WARM_UPDATE;
 
 	/* If new tuple is the single and first tuple on page... */
 	if (ItemPointerGetOffsetNumber(&(newtup->t_self)) == FirstOffsetNumber &&
@@ -8006,24 +8254,38 @@ heap_xlog_clean(XLogReaderState *record)
 		OffsetNumber *redirected;
 		OffsetNumber *nowdead;
 		OffsetNumber *nowunused;
+		OffsetNumber *warm;
 		int			nredirected;
 		int			ndead;
 		int			nunused;
+		int			nwarm;
+		int			i;
 		Size		datalen;
+		bool		warmchain[MaxHeapTuplesPerPage + 1];
 
 		redirected = (OffsetNumber *) XLogRecGetBlockData(record, 0, &datalen);
 
 		nredirected = xlrec->nredirected;
 		ndead = xlrec->ndead;
+		nwarm = xlrec->nwarm;
+
 		end = (OffsetNumber *) ((char *) redirected + datalen);
 		nowdead = redirected + (nredirected * 2);
-		nowunused = nowdead + ndead;
-		nunused = (end - nowunused);
+		warm = nowdead + ndead;
+		nowunused = warm + nwarm;
+
+		nunused = (end - nowunused);
 		Assert(nunused >= 0);
 
+		memset(warmchain, 0, sizeof(warmchain));
+		for (i = 0; i < nwarm; i++)
+			warmchain[warm[i]] = true;
+
 		/* Update all item pointers per the record, and repair fragmentation */
 		heap_page_prune_execute(buffer,
 								redirected, nredirected,
+								warmchain,
 								nowdead, ndead,
 								nowunused, nunused);
 
@@ -8610,16 +8872,22 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 	Size		freespace = 0;
 	XLogRedoAction oldaction;
 	XLogRedoAction newaction;
+	bool		warm_update = false;
 
 	/* initialize to keep the compiler quiet */
 	oldtup.t_data = NULL;
 	oldtup.t_len = 0;
 
+	if (xlrec->flags & XLH_UPDATE_WARM_UPDATE)
+		warm_update = true;
+
 	XLogRecGetBlockTag(record, 0, &rnode, NULL, &newblk);
 	if (XLogRecGetBlockTag(record, 1, NULL, NULL, &oldblk))
 	{
 		/* HOT updates are never done across pages */
 		Assert(!hot_update);
+		/* WARM updates are never done across pages */
+		Assert(!warm_update);
 	}
 	else
 		oldblk = newblk;
@@ -8679,6 +8947,10 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 								   &htup->t_infomask2);
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
+
+		if (warm_update)
+			HeapTupleHeaderSetHeapWarmTuple(htup);
+
 		/* Set forward chain link in t_ctid */
 		HeapTupleHeaderSetNextCtid(htup, ItemPointerGetBlockNumber(&newtid),
 				ItemPointerGetOffsetNumber(&newtid));
@@ -8814,6 +9086,10 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
+
+		if (warm_update)
+			HeapTupleHeaderSetHeapWarmTuple(htup);
+
 		/* Make sure there is no forward chain link in t_ctid */
 		HeapTupleHeaderSetHeapLatest(htup);
 
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index 7c2231a..d71a297 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -36,12 +36,19 @@ typedef struct
 	int			nredirected;	/* numbers of entries in arrays below */
 	int			ndead;
 	int			nunused;
+	int			nwarm;
 	/* arrays that accumulate indexes of items to be changed */
 	OffsetNumber redirected[MaxHeapTuplesPerPage * 2];
 	OffsetNumber nowdead[MaxHeapTuplesPerPage];
 	OffsetNumber nowunused[MaxHeapTuplesPerPage];
+	OffsetNumber warm[MaxHeapTuplesPerPage];
 	/* marked[i] is TRUE if item i is entered in one of the above arrays */
 	bool		marked[MaxHeapTuplesPerPage + 1];
+	/*
+	 * warmchain[i] is TRUE if item i is becoming a redirected lp that points
+	 * to a WARM chain
+	 */
+	bool		warmchain[MaxHeapTuplesPerPage + 1];
 } PruneState;
 
 /* Local functions */
@@ -54,6 +61,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 						   OffsetNumber offnum, OffsetNumber rdoffnum);
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
+static void heap_prune_record_warmupdate(PruneState *prstate,
+						   OffsetNumber offnum);
 
 static void heap_get_root_tuples_internal(Page page,
 				OffsetNumber target_offnum, OffsetNumber *root_offsets);
@@ -203,8 +212,9 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
 	 */
 	prstate.new_prune_xid = InvalidTransactionId;
 	prstate.latestRemovedXid = *latestRemovedXid;
-	prstate.nredirected = prstate.ndead = prstate.nunused = 0;
+	prstate.nredirected = prstate.ndead = prstate.nunused = prstate.nwarm = 0;
 	memset(prstate.marked, 0, sizeof(prstate.marked));
+	memset(prstate.warmchain, 0, sizeof(prstate.warmchain));
 
 	/* Scan the page */
 	maxoff = PageGetMaxOffsetNumber(page);
@@ -241,6 +251,7 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
 		 */
 		heap_page_prune_execute(buffer,
 								prstate.redirected, prstate.nredirected,
+								prstate.warmchain,
 								prstate.nowdead, prstate.ndead,
 								prstate.nowunused, prstate.nunused);
 
@@ -268,6 +279,7 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
 
 			recptr = log_heap_clean(relation, buffer,
 									prstate.redirected, prstate.nredirected,
+									prstate.warm, prstate.nwarm,
 									prstate.nowdead, prstate.ndead,
 									prstate.nowunused, prstate.nunused,
 									prstate.latestRemovedXid);
@@ -479,6 +491,12 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
 			!TransactionIdEquals(HeapTupleHeaderGetXmin(htup), priorXmax))
 			break;
 
+		if (HeapTupleHeaderIsHeapWarmTuple(htup))
+		{
+			Assert(!IsSystemRelation(relation));
+			heap_prune_record_warmupdate(prstate, rootoffnum);
+		}
+
 		/*
 		 * OK, this tuple is indeed a member of the chain.
 		 */
@@ -668,6 +686,18 @@ heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum)
 	prstate->marked[offnum] = true;
 }
 
+/* Record item pointer which is a root of a WARM chain */
+static void
+heap_prune_record_warmupdate(PruneState *prstate, OffsetNumber offnum)
+{
+	if (prstate->warmchain[offnum])
+		return;
+	Assert(prstate->nwarm < MaxHeapTuplesPerPage);
+	prstate->warm[prstate->nwarm] = offnum;
+	prstate->nwarm++;
+	prstate->warmchain[offnum] = true;
+}
+
 
 /*
  * Perform the actual page changes needed by heap_page_prune.
@@ -681,6 +711,7 @@ heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum)
 void
 heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
+						bool *warmchain,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused)
 {
@@ -697,6 +728,12 @@ heap_page_prune_execute(Buffer buffer,
 		ItemId		fromlp = PageGetItemId(page, fromoff);
 
 		ItemIdSetRedirect(fromlp, tooff);
+
+		/*
+		 * Save information about WARM chains in the item itself
+		 */
+		if (warmchain[fromoff])
+			ItemIdSetHeapWarm(fromlp);
 	}
 
 	/* Update all now-dead line pointers */
diff --git a/src/backend/access/index/genam.c b/src/backend/access/index/genam.c
index 65c941d..4f9fb12 100644
--- a/src/backend/access/index/genam.c
+++ b/src/backend/access/index/genam.c
@@ -99,7 +99,7 @@ RelationGetIndexScan(Relation indexRelation, int nkeys, int norderbys)
 	else
 		scan->orderByData = NULL;
 
-	scan->xs_want_itup = false; /* may be set later */
+	scan->xs_want_itup = true; /* hack for now to always get index tuple */
 
 	/*
 	 * During recovery we ignore killed tuples and don't bother to kill them
diff --git a/src/backend/access/index/indexam.c b/src/backend/access/index/indexam.c
index 54b71cb..6ca1d15 100644
--- a/src/backend/access/index/indexam.c
+++ b/src/backend/access/index/indexam.c
@@ -71,10 +71,12 @@
 #include "access/xlog.h"
 #include "catalog/catalog.h"
 #include "catalog/index.h"
+#include "executor/executor.h"
 #include "pgstat.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
+#include "utils/datum.h"
 #include "utils/snapmgr.h"
 #include "utils/tqual.h"
 
@@ -409,7 +411,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
 	/*
 	 * The AM's amgettuple proc finds the next index entry matching the scan
 	 * keys, and puts the TID into scan->xs_ctup.t_self.  It should also set
-	 * scan->xs_recheck and possibly scan->xs_itup, though we pay no attention
+	 * scan->xs_tuple_recheck and possibly scan->xs_itup, though we pay no attention
 	 * to those fields here.
 	 */
 	found = scan->indexRelation->rd_amroutine->amgettuple(scan, direction);
@@ -448,7 +450,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
  * dropped in a future index_getnext_tid, index_fetch_heap or index_endscan
  * call).
  *
- * Note: caller must check scan->xs_recheck, and perform rechecking of the
+ * Note: caller must check scan->xs_tuple_recheck, and perform rechecking of the
  * scan keys if required.  We do not do that here because we don't have
  * enough information to do it efficiently in the general case.
  * ----------------
@@ -475,6 +477,13 @@ index_fetch_heap(IndexScanDesc scan)
 		 */
 		if (prev_buf != scan->xs_cbuf)
 			heap_page_prune_opt(scan->heapRelation, scan->xs_cbuf);
+
+		/*
+		 * If we're not always re-checking, reset recheck for this tuple
+		 */
+		if (!scan->xs_recheck)
+			scan->xs_tuple_recheck = false;
+
 	}
 
 	/* Obtain share-lock on the buffer so we can examine visibility */
@@ -484,32 +493,50 @@ index_fetch_heap(IndexScanDesc scan)
 											scan->xs_snapshot,
 											&scan->xs_ctup,
 											&all_dead,
-											!scan->xs_continue_hot);
+											!scan->xs_continue_hot,
+											&scan->xs_tuple_recheck);
 	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
 
 	if (got_heap_tuple)
 	{
+		bool		res = true;
+		if (scan->xs_tuple_recheck &&
+			scan->indexRelation->rd_amroutine->amrecheck)
+		{
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_SHARE);
+			res = scan->indexRelation->rd_amroutine->amrecheck(
+						scan->indexRelation,
+						scan->xs_itup,
+						scan->heapRelation,
+						&scan->xs_ctup);
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+		}
+
 		/*
 		 * Only in a non-MVCC snapshot can more than one member of the HOT
 		 * chain be visible.
 		 */
 		scan->xs_continue_hot = !IsMVCCSnapshot(scan->xs_snapshot);
 		pgstat_count_heap_fetch(scan->indexRelation);
-		return &scan->xs_ctup;
-	}
 
-	/* We've reached the end of the HOT chain. */
-	scan->xs_continue_hot = false;
+		if (res)
+			return &scan->xs_ctup;
+	}
+	else
+	{
+		/* We've reached the end of the HOT chain. */
+		scan->xs_continue_hot = false;
 
-	/*
-	 * If we scanned a whole HOT chain and found only dead tuples, tell index
-	 * AM to kill its entry for that TID (this will take effect in the next
-	 * amgettuple call, in index_getnext_tid).  We do not do this when in
-	 * recovery because it may violate MVCC to do so.  See comments in
-	 * RelationGetIndexScan().
-	 */
-	if (!scan->xactStartedInRecovery)
-		scan->kill_prior_tuple = all_dead;
+		/*
+		 * If we scanned a whole HOT chain and found only dead tuples, tell index
+		 * AM to kill its entry for that TID (this will take effect in the next
+		 * amgettuple call, in index_getnext_tid).  We do not do this when in
+		 * recovery because it may violate MVCC to do so.  See comments in
+		 * RelationGetIndexScan().
+		 */
+		if (!scan->xactStartedInRecovery)
+			scan->kill_prior_tuple = all_dead;
+	}
 
 	return NULL;
 }
diff --git a/src/backend/access/nbtree/nbtinsert.c b/src/backend/access/nbtree/nbtinsert.c
index ef69290..e0afffd 100644
--- a/src/backend/access/nbtree/nbtinsert.c
+++ b/src/backend/access/nbtree/nbtinsert.c
@@ -19,11 +19,14 @@
 #include "access/nbtree.h"
 #include "access/transam.h"
 #include "access/xloginsert.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
 #include "miscadmin.h"
+#include "nodes/execnodes.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
 #include "utils/tqual.h"
-
+#include "utils/datum.h"
 
 typedef struct
 {
@@ -249,6 +252,9 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 	BTPageOpaque opaque;
 	Buffer		nbuf = InvalidBuffer;
 	bool		found = false;
+	Buffer		buffer;
+	HeapTupleData	heapTuple;
+	bool		recheck = false;
 
 	/* Assume unique until we find a duplicate */
 	*is_unique = true;
@@ -308,6 +314,8 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				curitup = (IndexTuple) PageGetItem(page, curitemid);
 				htid = curitup->t_tid;
 
+				recheck = false;
+
 				/*
 				 * If we are doing a recheck, we expect to find the tuple we
 				 * are rechecking.  It's not a duplicate, but we have to keep
@@ -325,112 +333,153 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				 * have just a single index entry for the entire chain.
 				 */
 				else if (heap_hot_search(&htid, heapRel, &SnapshotDirty,
-										 &all_dead))
+							&all_dead, &recheck, &buffer,
+							&heapTuple))
 				{
 					TransactionId xwait;
+					bool		result = true;
 
 					/*
-					 * It is a duplicate. If we are only doing a partial
-					 * check, then don't bother checking if the tuple is being
-					 * updated in another transaction. Just return the fact
-					 * that it is a potential conflict and leave the full
-					 * check till later.
+					 * If the tuple was WARM updated, we may see our own
+					 * tuple again. Since WARM updates don't create new index
+					 * entries, our own tuple is only reachable via the old
+					 * index pointer.
 					 */
-					if (checkUnique == UNIQUE_CHECK_PARTIAL)
+					if (checkUnique == UNIQUE_CHECK_EXISTING &&
+							ItemPointerCompare(&htid, &itup->t_tid) == 0)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						*is_unique = false;
-						return InvalidTransactionId;
+						found = true;
+						result = false;
+						if (recheck)
+							UnlockReleaseBuffer(buffer);
 					}
-
-					/*
-					 * If this tuple is being updated by other transaction
-					 * then we have to wait for its commit/abort.
-					 */
-					xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
-						SnapshotDirty.xmin : SnapshotDirty.xmax;
-
-					if (TransactionIdIsValid(xwait))
+					else if (recheck)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						/* Tell _bt_doinsert to wait... */
-						*speculativeToken = SnapshotDirty.speculativeToken;
-						return xwait;
+						result = btrecheck(rel, curitup, heapRel, &heapTuple);
+						UnlockReleaseBuffer(buffer);
 					}
 
-					/*
-					 * Otherwise we have a definite conflict.  But before
-					 * complaining, look to see if the tuple we want to insert
-					 * is itself now committed dead --- if so, don't complain.
-					 * This is a waste of time in normal scenarios but we must
-					 * do it to support CREATE INDEX CONCURRENTLY.
-					 *
-					 * We must follow HOT-chains here because during
-					 * concurrent index build, we insert the root TID though
-					 * the actual tuple may be somewhere in the HOT-chain.
-					 * While following the chain we might not stop at the
-					 * exact tuple which triggered the insert, but that's OK
-					 * because if we find a live tuple anywhere in this chain,
-					 * we have a unique key conflict.  The other live tuple is
-					 * not part of this chain because it had a different index
-					 * entry.
-					 */
-					htid = itup->t_tid;
-					if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL))
-					{
-						/* Normal case --- it's still live */
-					}
-					else
+					if (result)
 					{
 						/*
-						 * It's been deleted, so no error, and no need to
-						 * continue searching
+						 * It is a duplicate. If we are only doing a partial
+						 * check, then don't bother checking if the tuple is being
+						 * updated in another transaction. Just return the fact
+						 * that it is a potential conflict and leave the full
+						 * check till later.
 						 */
-						break;
-					}
+						if (checkUnique == UNIQUE_CHECK_PARTIAL)
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							*is_unique = false;
+							return InvalidTransactionId;
+						}
 
-					/*
-					 * Check for a conflict-in as we would if we were going to
-					 * write to this page.  We aren't actually going to write,
-					 * but we want a chance to report SSI conflicts that would
-					 * otherwise be masked by this unique constraint
-					 * violation.
-					 */
-					CheckForSerializableConflictIn(rel, NULL, buf);
+						/*
+						 * If this tuple is being updated by other transaction
+						 * then we have to wait for its commit/abort.
+						 */
+						xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
+							SnapshotDirty.xmin : SnapshotDirty.xmax;
+
+						if (TransactionIdIsValid(xwait))
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							/* Tell _bt_doinsert to wait... */
+							*speculativeToken = SnapshotDirty.speculativeToken;
+							return xwait;
+						}
 
-					/*
-					 * This is a definite conflict.  Break the tuple down into
-					 * datums and report the error.  But first, make sure we
-					 * release the buffer locks we're holding ---
-					 * BuildIndexValueDescription could make catalog accesses,
-					 * which in the worst case might touch this same index and
-					 * cause deadlocks.
-					 */
-					if (nbuf != InvalidBuffer)
-						_bt_relbuf(rel, nbuf);
-					_bt_relbuf(rel, buf);
+						/*
+						 * Otherwise we have a definite conflict.  But before
+						 * complaining, look to see if the tuple we want to insert
+						 * is itself now committed dead --- if so, don't complain.
+						 * This is a waste of time in normal scenarios but we must
+						 * do it to support CREATE INDEX CONCURRENTLY.
+						 *
+						 * We must follow HOT-chains here because during
+						 * concurrent index build, we insert the root TID though
+						 * the actual tuple may be somewhere in the HOT-chain.
+						 * While following the chain we might not stop at the
+						 * exact tuple which triggered the insert, but that's OK
+						 * because if we find a live tuple anywhere in this chain,
+						 * we have a unique key conflict.  The other live tuple is
+						 * not part of this chain because it had a different index
+						 * entry.
+						 */
+						recheck = false;
+						ItemPointerCopy(&itup->t_tid, &htid);
+						if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL,
+									&recheck, &buffer, &heapTuple))
+						{
+							bool		result = true;
+							if (recheck)
+							{
+								/*
+								 * Recheck if the tuple actually satisfies the
+								 * index key. Otherwise, we might be following
+								 * a wrong index pointer and mustn't entertain
+								 * this tuple
+								 */
+								result = btrecheck(rel, itup, heapRel, &heapTuple);
+								UnlockReleaseBuffer(buffer);
+							}
+							if (!result)
+								break;
+							/* Normal case --- it's still live */
+						}
+						else
+						{
+							/*
+							 * It's been deleted, so no error, and no need to
+							 * continue searching
+							 */
+							break;
+						}
 
-					{
-						Datum		values[INDEX_MAX_KEYS];
-						bool		isnull[INDEX_MAX_KEYS];
-						char	   *key_desc;
-
-						index_deform_tuple(itup, RelationGetDescr(rel),
-										   values, isnull);
-
-						key_desc = BuildIndexValueDescription(rel, values,
-															  isnull);
-
-						ereport(ERROR,
-								(errcode(ERRCODE_UNIQUE_VIOLATION),
-								 errmsg("duplicate key value violates unique constraint \"%s\"",
-										RelationGetRelationName(rel)),
-							   key_desc ? errdetail("Key %s already exists.",
-													key_desc) : 0,
-								 errtableconstraint(heapRel,
-											 RelationGetRelationName(rel))));
+						/*
+						 * Check for a conflict-in as we would if we were going to
+						 * write to this page.  We aren't actually going to write,
+						 * but we want a chance to report SSI conflicts that would
+						 * otherwise be masked by this unique constraint
+						 * violation.
+						 */
+						CheckForSerializableConflictIn(rel, NULL, buf);
+
+						/*
+						 * This is a definite conflict.  Break the tuple down into
+						 * datums and report the error.  But first, make sure we
+						 * release the buffer locks we're holding ---
+						 * BuildIndexValueDescription could make catalog accesses,
+						 * which in the worst case might touch this same index and
+						 * cause deadlocks.
+						 */
+						if (nbuf != InvalidBuffer)
+							_bt_relbuf(rel, nbuf);
+						_bt_relbuf(rel, buf);
+
+						{
+							Datum		values[INDEX_MAX_KEYS];
+							bool		isnull[INDEX_MAX_KEYS];
+							char	   *key_desc;
+
+							index_deform_tuple(itup, RelationGetDescr(rel),
+									values, isnull);
+
+							key_desc = BuildIndexValueDescription(rel, values,
+									isnull);
+
+							ereport(ERROR,
+									(errcode(ERRCODE_UNIQUE_VIOLATION),
+									 errmsg("duplicate key value violates unique constraint \"%s\"",
+										 RelationGetRelationName(rel)),
+									 key_desc ? errdetail("Key %s already exists.",
+										 key_desc) : 0,
+									 errtableconstraint(heapRel,
+										 RelationGetRelationName(rel))));
+						}
 					}
 				}
 				else if (all_dead)
diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c
index 128744c..6b1236a 100644
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -23,6 +23,7 @@
 #include "access/xlog.h"
 #include "catalog/index.h"
 #include "commands/vacuum.h"
+#include "executor/nodeIndexscan.h"
 #include "storage/indexfsm.h"
 #include "storage/ipc.h"
 #include "storage/lmgr.h"
@@ -117,6 +118,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = btendscan;
 	amroutine->ammarkpos = btmarkpos;
 	amroutine->amrestrpos = btrestrpos;
+	amroutine->amrecheck = btrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -292,8 +294,9 @@ btgettuple(IndexScanDesc scan, ScanDirection dir)
 	BTScanOpaque so = (BTScanOpaque) scan->opaque;
 	bool		res;
 
-	/* btree indexes are never lossy */
-	scan->xs_recheck = false;
+	/* btree indexes are never lossy, except for WARM tuples */
+	scan->xs_recheck = indexscan_recheck;
+	scan->xs_tuple_recheck = indexscan_recheck;
 
 	/*
 	 * If we have any array keys, initialize them during first call for a
diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c
index 063c988..c9c0501 100644
--- a/src/backend/access/nbtree/nbtutils.c
+++ b/src/backend/access/nbtree/nbtutils.c
@@ -20,11 +20,15 @@
 #include "access/nbtree.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
 #include "miscadmin.h"
+#include "nodes/execnodes.h"
 #include "utils/array.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 
 typedef struct BTSortArrayContext
@@ -2065,3 +2069,103 @@ btproperty(Oid index_oid, int attno,
 			return false;		/* punt to generic code */
 	}
 }
+
+/*
+ * Check if the index tuple's key matches the one computed from the given heap
+ * tuple's attribute
+ */
+bool
+btrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	IndexInfo  *indexInfo;
+	EState	   *estate;
+	ExprContext *econtext;
+	TupleTableSlot *slot;
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int			natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	/* Get IndexInfo for this index */
+	indexInfo = BuildIndexInfo(indexRel);
+
+	/*
+	 * The heap tuple must be put into a slot for FormIndexDatum.
+	 */
+	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
+
+	ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
+
+	/*
+	 * Typically the index won't have expressions, but if it does we need an
+	 * EState to evaluate them.  We need it for exclusion constraints too,
+	 * even if they are just on simple columns.
+	 */
+	if (indexInfo->ii_Expressions != NIL ||
+			indexInfo->ii_ExclusionOps != NULL)
+	{
+		estate = CreateExecutorState();
+		econtext = GetPerTupleExprContext(estate);
+		econtext->ecxt_scantuple = slot;
+	}
+	else
+		estate = NULL;
+
+	/*
+	 * Form the index values and isnull flags for the index entry that we need
+	 * to check.
+	 *
+	 * Note: if the index uses functions that are not as immutable as they are
+	 * supposed to be, this could produce an index tuple different from the
+	 * original.  The index AM can catch such errors by verifying that it
+	 * finds a matching index entry with the tuple's TID.  For exclusion
+	 * constraints we check this in check_exclusion_constraint().
+	 */
+	FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL, then they are equal
+		 */
+		if (isnull[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If just one is NULL, then they are not equal
+		 */
+		if (isnull[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now just do a raw memory comparison. If the index tuple was formed
+		 * using this heap tuple, the computed index values must match
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	if (estate != NULL)
+		FreeExecutorState(estate);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	return equal;
+}
diff --git a/src/backend/access/spgist/spgutils.c b/src/backend/access/spgist/spgutils.c
index d570ae5..813b5c3 100644
--- a/src/backend/access/spgist/spgutils.c
+++ b/src/backend/access/spgist/spgutils.c
@@ -67,6 +67,7 @@ spghandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = spgendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index b0b43cf..36467b2 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -54,6 +54,7 @@
 #include "nodes/makefuncs.h"
 #include "nodes/nodeFuncs.h"
 #include "optimizer/clauses.h"
+#include "optimizer/var.h"
 #include "parser/parser.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
@@ -1674,6 +1675,20 @@ BuildIndexInfo(Relation index)
 	ii->ii_Concurrent = false;
 	ii->ii_BrokenHotChain = false;
 
+	/* build a bitmap of all table attributes referred by this index */
+	for (i = 0; i < ii->ii_NumIndexAttrs; i++)
+	{
+		AttrNumber attr = ii->ii_KeyAttrNumbers[i];
+		ii->ii_indxattrs = bms_add_member(ii->ii_indxattrs, attr -
+				FirstLowInvalidHeapAttributeNumber);
+	}
+
+	/* Collect all attributes used in expressions, too */
+	pull_varattnos((Node *) ii->ii_Expressions, 1, &ii->ii_indxattrs);
+
+	/* Collect all attributes in the index predicate, too */
+	pull_varattnos((Node *) ii->ii_Predicate, 1, &ii->ii_indxattrs);
+
 	return ii;
 }
 
diff --git a/src/backend/commands/constraint.c b/src/backend/commands/constraint.c
index 26f9114..997c8f5 100644
--- a/src/backend/commands/constraint.c
+++ b/src/backend/commands/constraint.c
@@ -40,6 +40,7 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	TriggerData *trigdata = (TriggerData *) fcinfo->context;
 	const char *funcname = "unique_key_recheck";
 	HeapTuple	new_row;
+	HeapTupleData heapTuple;
 	ItemPointerData tmptid;
 	Relation	indexRel;
 	IndexInfo  *indexInfo;
@@ -102,7 +103,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	 * removed.
 	 */
 	tmptid = new_row->t_self;
-	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL))
+	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL,
+				NULL, NULL, &heapTuple))
 	{
 		/*
 		 * All rows in the HOT chain are dead, so skip the check.
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index 5947e72..75af34c 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -2491,6 +2491,7 @@ CopyFrom(CopyState cstate)
 
 				if (resultRelInfo->ri_NumIndices > 0)
 					recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+														 &(tuple->t_self), NULL,
 														 estate, false, NULL,
 														   NIL);
 
@@ -2606,6 +2607,7 @@ CopyFromInsertBatch(CopyState cstate, EState *estate, CommandId mycid,
 			ExecStoreTuple(bufferedTuples[i], myslot, InvalidBuffer, false);
 			recheckIndexes =
 				ExecInsertIndexTuples(myslot, &(bufferedTuples[i]->t_self),
+									  &(bufferedTuples[i]->t_self), NULL,
 									  estate, false, NULL, NIL);
 			ExecARInsertTriggers(estate, resultRelInfo,
 								 bufferedTuples[i],
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 231e92d..ca40e1b 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -1468,6 +1468,7 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 
 		recptr = log_heap_clean(onerel, buffer,
 								NULL, 0, NULL, 0,
+								NULL, 0,
 								unused, uncnt,
 								vacrelstats->latestRemovedXid);
 		PageSetLSN(page, recptr);
@@ -2128,6 +2129,22 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
 						break;
 					}
 
+					/*
+					 * If this tuple was ever WARM updated or is a WARM tuple,
+					 * there could be multiple index entries pointing to the
+					 * root of this chain. We can't do index-only scans for
+					 * such tuples without rechecking the index keys, so we
+					 * mark the page as !all_visible.
+					 *
+					 * XXX Should we look at the root line pointer and check
+					 * whether the WARM flag is set there, or is checking the
+					 * tuples in the chain good enough?
+					 */
+					if (HeapTupleHeaderIsHeapWarmTuple(tuple.t_data))
+					{
+						all_visible = false;
+					}
+
 					/* Track newest xmin on page. */
 					if (TransactionIdFollows(xmin, *visibility_cutoff_xid))
 						*visibility_cutoff_xid = xmin;
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 0e2d834..da27cf6 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -270,6 +270,8 @@ ExecCloseIndices(ResultRelInfo *resultRelInfo)
 List *
 ExecInsertIndexTuples(TupleTableSlot *slot,
 					  ItemPointer tupleid,
+					  ItemPointer root_tid,
+					  Bitmapset *updated_attrs,
 					  EState *estate,
 					  bool noDupErr,
 					  bool *specConflict,
@@ -324,6 +326,17 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 		if (!indexInfo->ii_ReadyForInserts)
 			continue;
 
+		/*
+		 * If updated_attrs is set, we only insert index entries into those
+		 * indexes whose columns have changed. All other indexes can use their
+		 * existing index pointers to look up the new tuple.
+		 */
+		if (updated_attrs)
+		{
+			if (!bms_overlap(updated_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
 		/* Check for partial index */
 		if (indexInfo->ii_Predicate != NIL)
 		{
@@ -389,7 +402,7 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 			index_insert(indexRelation, /* index relation */
 						 values,	/* array of index Datums */
 						 isnull,	/* null flags */
-						 tupleid,		/* tid of heap tuple */
+						 root_tid,		/* tid of heap or root tuple */
 						 heapRelation,	/* heap relation */
 						 checkUnique);	/* type of uniqueness check to do */
 
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index 449aacb..ff77349 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -37,6 +37,7 @@
 
 #include "access/relscan.h"
 #include "access/transam.h"
+#include "access/valid.h"
 #include "executor/execdebug.h"
 #include "executor/nodeBitmapHeapscan.h"
 #include "pgstat.h"
@@ -362,11 +363,23 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 			OffsetNumber offnum = tbmres->offsets[curslot];
 			ItemPointerData tid;
 			HeapTupleData heapTuple;
+			bool		recheck = false;
 
 			ItemPointerSet(&tid, page, offnum);
 			if (heap_hot_search_buffer(&tid, scan->rs_rd, buffer, snapshot,
-									   &heapTuple, NULL, true))
-				scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+									   &heapTuple, NULL, true, &recheck))
+			{
+				bool valid = true;
+
+				if (scan->rs_key)
+					HeapKeyTest(&heapTuple, RelationGetDescr(scan->rs_rd),
+							scan->rs_nkeys, scan->rs_key, valid);
+				if (valid)
+					scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+
+				if (recheck)
+					tbmres->recheck = true;
+			}
 		}
 	}
 	else
diff --git a/src/backend/executor/nodeIndexonlyscan.c b/src/backend/executor/nodeIndexonlyscan.c
index 4f6f91c..49bda34 100644
--- a/src/backend/executor/nodeIndexonlyscan.c
+++ b/src/backend/executor/nodeIndexonlyscan.c
@@ -141,6 +141,26 @@ IndexOnlyNext(IndexOnlyScanState *node)
 			 * but it's not clear whether it's a win to do so.  The next index
 			 * entry might require a visit to the same heap page.
 			 */
+
+			/*
+			 * If the index was lossy or the tuple was WARM, we have to recheck
+			 * the index quals using the fetched tuple.
+			 */
+			if (scandesc->xs_tuple_recheck)
+			{
+				ExecStoreTuple(tuple,	/* tuple to store */
+						slot,	/* slot to store in */
+						scandesc->xs_cbuf,		/* buffer containing tuple */
+						false);	/* don't pfree */
+				econtext->ecxt_scantuple = slot;
+				ResetExprContext(econtext);
+				if (!ExecQual(node->indexqual, econtext, false))
+				{
+					/* Fails recheck, so drop it and loop back for another */
+					InstrCountFiltered2(node, 1);
+					continue;
+				}
+			}
 		}
 
 		/*
diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c
index 3143bd9..0b04bb8 100644
--- a/src/backend/executor/nodeIndexscan.c
+++ b/src/backend/executor/nodeIndexscan.c
@@ -39,6 +39,8 @@
 #include "utils/memutils.h"
 #include "utils/rel.h"
 
+bool indexscan_recheck = false;
+
 /*
  * When an ordering operator is used, tuples fetched from the index that
  * need to be reordered are queued in a pairing heap, as ReorderTuples.
@@ -115,10 +117,10 @@ IndexNext(IndexScanState *node)
 					   false);	/* don't pfree */
 
 		/*
-		 * If the index was lossy, we have to recheck the index quals using
-		 * the fetched tuple.
+		 * If the index was lossy or the tuple was WARM, we have to recheck
+		 * the index quals using the fetched tuple.
 		 */
-		if (scandesc->xs_recheck)
+		if (scandesc->xs_tuple_recheck)
 		{
 			econtext->ecxt_scantuple = slot;
 			ResetExprContext(econtext);
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index af7b26c..7367e9a 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -433,6 +433,7 @@ ExecInsert(ModifyTableState *mtstate,
 
 			/* insert index entries for tuple */
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												 &(tuple->t_self), NULL,
 												 estate, true, &specConflict,
 												   arbiterIndexes);
 
@@ -479,6 +480,7 @@ ExecInsert(ModifyTableState *mtstate,
 			/* insert index entries for tuple */
 			if (resultRelInfo->ri_NumIndices > 0)
 				recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+													   &(tuple->t_self), NULL,
 													   estate, false, NULL,
 													   arbiterIndexes);
 		}
@@ -809,6 +811,9 @@ ExecUpdate(ItemPointer tupleid,
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
 	List	   *recheckIndexes = NIL;
+	Bitmapset  *updated_attrs = NULL;
+	ItemPointerData	root_tid;
+	bool		warm_update;
 
 	/*
 	 * abort the operation if not running transactions
@@ -923,7 +928,7 @@ lreplace:;
 							 estate->es_output_cid,
 							 estate->es_crosscheck_snapshot,
 							 true /* wait for commit */ ,
-							 &hufd, &lockmode);
+							 &hufd, &lockmode, &updated_attrs, &warm_update);
 		switch (result)
 		{
 			case HeapTupleSelfUpdated:
@@ -1011,9 +1016,24 @@ lreplace:;
 		 *
 		 * If it's a HOT update, we mustn't insert new index entries.
 		 */
-		if (resultRelInfo->ri_NumIndices > 0 && !HeapTupleIsHeapOnly(tuple))
+		if (resultRelInfo->ri_NumIndices > 0 &&
+			(!HeapTupleIsHeapOnly(tuple) || warm_update))
+		{
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self, &root_tid);
+				bms_free(updated_attrs);
+				updated_attrs = NULL;
+			}
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   updated_attrs,
 												   estate, false, NULL, NIL);
+		}
 	}
 
 	if (canSetTag)
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 79e0b1f..37874ca 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -2030,6 +2030,7 @@ RelationDestroyRelation(Relation relation, bool remember_tupdesc)
 	list_free_deep(relation->rd_fkeylist);
 	list_free(relation->rd_indexlist);
 	bms_free(relation->rd_indexattr);
+	bms_free(relation->rd_exprindexattr);
 	bms_free(relation->rd_keyattr);
 	bms_free(relation->rd_idattr);
 	if (relation->rd_options)
@@ -4373,12 +4374,15 @@ Bitmapset *
 RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 {
 	Bitmapset  *indexattrs;		/* indexed columns */
+	Bitmapset  *exprindexattrs;	/* indexed columns in expression/predicate
+									 indexes */
 	Bitmapset  *uindexattrs;	/* columns in unique indexes */
 	Bitmapset  *idindexattrs;	/* columns in the replica identity */
 	List	   *indexoidlist;
 	Oid			relreplindex;
 	ListCell   *l;
 	MemoryContext oldcxt;
+	bool		supportswarm = true;/* True if the table can be WARM updated */
 
 	/* Quick exit if we already computed the result. */
 	if (relation->rd_indexattr != NULL)
@@ -4391,6 +4395,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				return bms_copy(relation->rd_keyattr);
 			case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 				return bms_copy(relation->rd_idattr);
+			case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+				return bms_copy(relation->rd_exprindexattr);
 			default:
 				elog(ERROR, "unknown attrKind %u", attrKind);
 		}
@@ -4429,6 +4435,7 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 	 * won't be returned at all by RelationGetIndexList.
 	 */
 	indexattrs = NULL;
+	exprindexattrs = NULL;
 	uindexattrs = NULL;
 	idindexattrs = NULL;
 	foreach(l, indexoidlist)
@@ -4474,19 +4481,32 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 		}
 
 		/* Collect all attributes used in expressions, too */
-		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &exprindexattrs);
 
 		/* Collect all attributes in the index predicate, too */
-		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &exprindexattrs);
+
+		/*
+		 * Check whether the index AM provides an amrecheck method. If it
+		 * does not, the index cannot support WARM, so WARM updates are
+		 * completely disabled on such tables.
+		 */
+		if (!indexDesc->rd_amroutine->amrecheck)
+			supportswarm = false;
 
 		index_close(indexDesc, AccessShareLock);
 	}
 
 	list_free(indexoidlist);
 
+	/* Remember if the table can do WARM updates */
+	relation->rd_supportswarm = supportswarm;
+
 	/* Don't leak the old values of these bitmaps, if any */
 	bms_free(relation->rd_indexattr);
 	relation->rd_indexattr = NULL;
+	bms_free(relation->rd_exprindexattr);
+	relation->rd_exprindexattr = NULL;
 	bms_free(relation->rd_keyattr);
 	relation->rd_keyattr = NULL;
 	bms_free(relation->rd_idattr);
@@ -4502,7 +4522,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 	oldcxt = MemoryContextSwitchTo(CacheMemoryContext);
 	relation->rd_keyattr = bms_copy(uindexattrs);
 	relation->rd_idattr = bms_copy(idindexattrs);
-	relation->rd_indexattr = bms_copy(indexattrs);
+	relation->rd_exprindexattr = bms_copy(exprindexattrs);
+	relation->rd_indexattr = bms_copy(bms_union(indexattrs, exprindexattrs));
 	MemoryContextSwitchTo(oldcxt);
 
 	/* We return our original working copy for caller to play with */
@@ -4514,6 +4535,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 			return uindexattrs;
 		case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 			return idindexattrs;
+		case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+			return exprindexattrs;
 		default:
 			elog(ERROR, "unknown attrKind %u", attrKind);
 			return NULL;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index c5178f7..aa7b265 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -111,6 +111,7 @@ extern char *default_tablespace;
 extern char *temp_tablespaces;
 extern bool ignore_checksum_failure;
 extern bool synchronize_seqscans;
+extern bool indexscan_recheck;
 
 #ifdef TRACE_SYNCSCAN
 extern bool trace_syncscan;
@@ -1271,6 +1272,16 @@ static struct config_bool ConfigureNamesBool[] =
 		NULL, NULL, NULL
 	},
 	{
+		{"indexscan_recheck", PGC_USERSET, DEVELOPER_OPTIONS,
+			gettext_noop("Recheck heap rows returned from an index scan."),
+			NULL,
+			GUC_NOT_IN_SAMPLE
+		},
+		&indexscan_recheck,
+		false,
+		NULL, NULL, NULL
+	},
+	{
 		{"debug_deadlocks", PGC_SUSET, DEVELOPER_OPTIONS,
 			gettext_noop("Dumps information about all current locks when a deadlock timeout occurs."),
 			NULL,
diff --git a/src/include/access/amapi.h b/src/include/access/amapi.h
index 1036cca..37eaf76 100644
--- a/src/include/access/amapi.h
+++ b/src/include/access/amapi.h
@@ -13,6 +13,7 @@
 #define AMAPI_H
 
 #include "access/genam.h"
+#include "access/itup.h"
 
 /*
  * We don't wish to include planner header files here, since most of an index
@@ -137,6 +138,9 @@ typedef void (*ammarkpos_function) (IndexScanDesc scan);
 /* restore marked scan position */
 typedef void (*amrestrpos_function) (IndexScanDesc scan);
 
+/* recheck index tuple and heap tuple match */
+typedef bool (*amrecheck_function) (Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * API struct for an index AM.  Note this must be stored in a single palloc'd
@@ -196,6 +200,7 @@ typedef struct IndexAmRoutine
 	amendscan_function amendscan;
 	ammarkpos_function ammarkpos;		/* can be NULL */
 	amrestrpos_function amrestrpos;		/* can be NULL */
+	amrecheck_function amrecheck;		/* can be NULL */
 } IndexAmRoutine;
 
 
diff --git a/src/include/access/hash.h b/src/include/access/hash.h
index d9df904..a25ce5a 100644
--- a/src/include/access/hash.h
+++ b/src/include/access/hash.h
@@ -364,4 +364,8 @@ extern bool _hash_convert_tuple(Relation index,
 extern OffsetNumber _hash_binsearch(Page page, uint32 hash_value);
 extern OffsetNumber _hash_binsearch_last(Page page, uint32 hash_value);
 
+/* hash.c */
+extern bool hashrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+
 #endif   /* HASH_H */
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 94b46b8..4c05947 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -137,9 +137,10 @@ extern bool heap_fetch(Relation relation, Snapshot snapshot,
 		   Relation stats_relation);
 extern bool heap_hot_search_buffer(ItemPointer tid, Relation relation,
 					   Buffer buffer, Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call);
+					   bool *all_dead, bool first_call, bool *recheck);
 extern bool heap_hot_search(ItemPointer tid, Relation relation,
-				Snapshot snapshot, bool *all_dead);
+				Snapshot snapshot, bool *all_dead,
+				bool *recheck, Buffer *buffer, HeapTuple heapTuple);
 
 extern void heap_get_latest_tid(Relation relation, Snapshot snapshot,
 					ItemPointer tid);
@@ -160,7 +161,8 @@ extern void heap_abort_speculative(Relation relation, HeapTuple tuple);
 extern HTSU_Result heap_update(Relation relation, ItemPointer otid,
 			HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode);
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **updated_attrs, bool *warm_update);
 extern HTSU_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
 				CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
 				bool follow_update,
@@ -186,6 +188,7 @@ extern int heap_page_prune(Relation relation, Buffer buffer,
 				bool report_stats, TransactionId *latestRemovedXid);
 extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
+						bool *warmchain,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
 extern void heap_get_root_tuple_one(Page page, OffsetNumber target_offnum,
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index 5a04561..ddc3a7a 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -80,6 +80,7 @@
 #define XLH_UPDATE_CONTAINS_NEW_TUPLE			(1<<4)
 #define XLH_UPDATE_PREFIX_FROM_OLD				(1<<5)
 #define XLH_UPDATE_SUFFIX_FROM_OLD				(1<<6)
+#define XLH_UPDATE_WARM_UPDATE					(1<<7)
 
 /* convenience macro for checking whether any form of old tuple was logged */
 #define XLH_UPDATE_CONTAINS_OLD						\
@@ -211,7 +212,9 @@ typedef struct xl_heap_update
  *	* for each redirected item: the item offset, then the offset redirected to
  *	* for each now-dead item: the item offset
  *	* for each now-unused item: the item offset
- * The total number of OffsetNumbers is therefore 2*nredirected+ndead+nunused.
+ *	* for each now-warm item: the item offset
+ * The total number of OffsetNumbers is therefore
+ * 2*nredirected+ndead+nunused+nwarm.
  * Note that nunused is not explicitly stored, but may be found by reference
  * to the total record length.
  */
@@ -220,10 +223,11 @@ typedef struct xl_heap_clean
 	TransactionId latestRemovedXid;
 	uint16		nredirected;
 	uint16		ndead;
+	uint16		nwarm;
 	/* OFFSET NUMBERS are in the block reference 0 */
 } xl_heap_clean;
 
-#define SizeOfHeapClean (offsetof(xl_heap_clean, ndead) + sizeof(uint16))
+#define SizeOfHeapClean (offsetof(xl_heap_clean, nwarm) + sizeof(uint16))
 
 /*
  * Cleanup_info is required in some cases during a lazy VACUUM.
@@ -384,6 +388,7 @@ extern XLogRecPtr log_heap_cleanup_info(RelFileNode rnode,
 					  TransactionId latestRemovedXid);
 extern XLogRecPtr log_heap_clean(Relation reln, Buffer buffer,
 			   OffsetNumber *redirected, int nredirected,
+			   OffsetNumber *warm, int nwarm,
 			   OffsetNumber *nowdead, int ndead,
 			   OffsetNumber *nowunused, int nunused,
 			   TransactionId latestRemovedXid);
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 76328ff..b139bb2 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,7 +260,8 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x1000 are available */
+#define HEAP_WARM_TUPLE			0x0800	/* this tuple is part of a WARM
+										 * chain */
 #define HEAP_LATEST_TUPLE		0x1000	/*
 										 * This is the last tuple in chain and
 										 * ip_posid points to the root line
@@ -271,7 +272,7 @@ struct HeapTupleHeaderData
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF800	/* visibility-related bits */
 
 
 /*
@@ -510,6 +511,21 @@ do { \
   (tup)->t_infomask2 & HEAP_ONLY_TUPLE \
 )
 
+#define HeapTupleHeaderSetHeapWarmTuple(tup) \
+do { \
+	(tup)->t_infomask2 |= HEAP_WARM_TUPLE; \
+} while (0)
+
+#define HeapTupleHeaderClearHeapWarmTuple(tup) \
+do { \
+	(tup)->t_infomask2 &= ~HEAP_WARM_TUPLE; \
+} while (0)
+
+#define HeapTupleHeaderIsHeapWarmTuple(tup) \
+( \
+  ((tup)->t_infomask2 & HEAP_WARM_TUPLE) \
+)
+
 #define HeapTupleHeaderSetHeapLatest(tup) \
 ( \
 	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE \
@@ -753,6 +769,15 @@ struct MinimalTupleData
 #define HeapTupleClearHeapOnly(tuple) \
 		HeapTupleHeaderClearHeapOnly((tuple)->t_data)
 
+#define HeapTupleIsHeapWarmTuple(tuple) \
+		HeapTupleHeaderIsHeapWarmTuple((tuple)->t_data)
+
+#define HeapTupleSetHeapWarmTuple(tuple) \
+		HeapTupleHeaderSetHeapWarmTuple((tuple)->t_data)
+
+#define HeapTupleClearHeapWarmTuple(tuple) \
+		HeapTupleHeaderClearHeapWarmTuple((tuple)->t_data)
+
 #define HeapTupleGetOid(tuple) \
 		HeapTupleHeaderGetOid((tuple)->t_data)
 
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index c580f51..83af072 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -751,6 +751,8 @@ extern bytea *btoptions(Datum reloptions, bool validate);
 extern bool btproperty(Oid index_oid, int attno,
 		   IndexAMProperty prop, const char *propname,
 		   bool *res, bool *isnull);
+extern bool btrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * prototypes for functions in nbtvalidate.c
diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h
index 49c2a6f..880e62e 100644
--- a/src/include/access/relscan.h
+++ b/src/include/access/relscan.h
@@ -110,7 +110,8 @@ typedef struct IndexScanDescData
 	HeapTupleData xs_ctup;		/* current heap tuple, if any */
 	Buffer		xs_cbuf;		/* current heap buffer in scan, if any */
 	/* NB: if xs_cbuf is not InvalidBuffer, we hold a pin on that buffer */
-	bool		xs_recheck;		/* T means scan keys must be rechecked */
+	bool		xs_recheck;		/* T means scan keys must be rechecked for each tuple */
+	bool		xs_tuple_recheck;	/* T means scan keys must be rechecked for current tuple */
 
 	/*
 	 * When fetching with an ordering operator, the values of the ORDER BY
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 39521ed..60a5445 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -366,6 +366,7 @@ extern void UnregisterExprContextCallback(ExprContext *econtext,
 extern void ExecOpenIndices(ResultRelInfo *resultRelInfo, bool speculative);
 extern void ExecCloseIndices(ResultRelInfo *resultRelInfo);
 extern List *ExecInsertIndexTuples(TupleTableSlot *slot, ItemPointer tupleid,
+					  ItemPointer root_tid, Bitmapset *updated_attrs,
 					  EState *estate, bool noDupErr, bool *specConflict,
 					  List *arbiterIndexes);
 extern bool ExecCheckIndexConstraints(TupleTableSlot *slot, EState *estate,
diff --git a/src/include/executor/nodeIndexscan.h b/src/include/executor/nodeIndexscan.h
index 194fadb..fe9c78e 100644
--- a/src/include/executor/nodeIndexscan.h
+++ b/src/include/executor/nodeIndexscan.h
@@ -38,4 +38,5 @@ extern bool ExecIndexEvalArrayKeys(ExprContext *econtext,
 					   IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 extern bool ExecIndexAdvanceArrayKeys(IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 
+extern bool indexscan_recheck;
 #endif   /* NODEINDEXSCAN_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index a4ea1b9..42f8ecf 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -60,6 +60,7 @@ typedef struct IndexInfo
 	NodeTag		type;
 	int			ii_NumIndexAttrs;
 	AttrNumber	ii_KeyAttrNumbers[INDEX_MAX_KEYS];
+	Bitmapset  *ii_indxattrs;	/* bitmap of all columns used in this index */
 	List	   *ii_Expressions; /* list of Expr */
 	List	   *ii_ExpressionsState;	/* list of ExprState */
 	List	   *ii_Predicate;	/* list of Expr */
diff --git a/src/include/storage/itemid.h b/src/include/storage/itemid.h
index 509c577..8c9cc99 100644
--- a/src/include/storage/itemid.h
+++ b/src/include/storage/itemid.h
@@ -46,6 +46,12 @@ typedef ItemIdData *ItemId;
 typedef uint16 ItemOffset;
 typedef uint16 ItemLength;
 
+/*
+ * Special value used in lp_len to indicate that the chain starting at this
+ * line pointer may contain WARM tuples. It must only be interpreted in
+ * combination with the LP_REDIRECT flag.
+ */
+#define SpecHeapWarmLen	0x1ffb
 
 /* ----------------
  *		support macros
@@ -112,12 +118,15 @@ typedef uint16 ItemLength;
 #define ItemIdIsDead(itemId) \
 	((itemId)->lp_flags == LP_DEAD)
 
+#define ItemIdIsHeapWarm(itemId) \
+	(((itemId)->lp_flags == LP_REDIRECT) && \
+	 ((itemId)->lp_len == SpecHeapWarmLen))
 /*
  * ItemIdHasStorage
  *		True iff item identifier has associated storage.
  */
 #define ItemIdHasStorage(itemId) \
-	((itemId)->lp_len != 0)
+	(!ItemIdIsRedirected(itemId) && (itemId)->lp_len != 0)
 
 /*
  * ItemIdSetUnused
@@ -168,6 +177,26 @@ typedef uint16 ItemLength;
 )
 
 /*
+ * ItemIdSetHeapWarm
+ * 		Set the item identifier to identify as starting of a WARM chain
+ *
+ * Note: Since all bits in lp_flags are currently used, we store a special
+ * value in lp_len field to indicate this state. This is required only for
+ * LP_REDIRECT tuple and lp_len field is unused for such line pointers.
+ */
+#define ItemIdSetHeapWarm(itemId) \
+do { \
+	AssertMacro((itemId)->lp_flags == LP_REDIRECT); \
+	(itemId)->lp_len = SpecHeapWarmLen; \
+} while (0)
+
+#define ItemIdClearHeapWarm(itemId) \
+do { \
+	AssertMacro((itemId)->lp_flags == LP_REDIRECT); \
+	(itemId)->lp_len = 0; \
+} while (0)
+
+/*
  * ItemIdMarkDead
  *		Set the item identifier to be DEAD, keeping its existing storage.
  *
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index ed14442..dac32b5 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -101,8 +101,11 @@ typedef struct RelationData
 
 	/* data managed by RelationGetIndexAttrBitmap: */
 	Bitmapset  *rd_indexattr;	/* identifies columns used in indexes */
+	Bitmapset  *rd_exprindexattr; /* identifies columns used in expression
+									 or predicate indexes */
 	Bitmapset  *rd_keyattr;		/* cols that can be ref'd by foreign keys */
 	Bitmapset  *rd_idattr;		/* included in replica identity index */
+	bool		rd_supportswarm;/* True if the table can be WARM updated */
 
 	/*
 	 * rd_options is set whenever rd_rel is loaded into the relcache entry.
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index 6ea7dd2..290e9b7 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -48,7 +48,8 @@ typedef enum IndexAttrBitmapKind
 {
 	INDEX_ATTR_BITMAP_ALL,
 	INDEX_ATTR_BITMAP_KEY,
-	INDEX_ATTR_BITMAP_IDENTITY_KEY
+	INDEX_ATTR_BITMAP_IDENTITY_KEY,
+	INDEX_ATTR_BITMAP_EXPR_PREDICATE
 } IndexAttrBitmapKind;
 
 extern Bitmapset *RelationGetIndexAttrBitmap(Relation relation,
diff --git a/src/backend/access/heap/README.WARM b/src/backend/access/heap/README.WARM
new file mode 100644
index 0000000..f793570
--- /dev/null
+++ b/src/backend/access/heap/README.WARM
@@ -0,0 +1,271 @@
+src/backend/access/heap/README.WARM
+
+Write Amplification Reduction Method (WARM)
+===========================================
+
+The Heap Only Tuple (HOT) feature eliminated a great deal of redundant
+index entries and allowed re-use of the dead space occupied by
+previously updated or deleted tuples (see
+src/backend/access/heap/README.HOT).
+
+One of the necessary conditions for a HOT update is that the
+update must not change a column used in any of the indexes on the table.
+The condition is sometimes hard to meet, especially for complex
+workloads with several indexes on large yet frequently updated tables.
+Worse, even when only one or two indexed columns are updated, the
+resulting non-HOT update still inserts a new index entry in every
+index on the table, irrespective of whether the key pertaining to a
+given index changed or not.
+
+WARM is a technique devised to address these problems.
+
+
+Update Chains With Multiple Index Entries Pointing to the Root
+--------------------------------------------------------------
+
+When a non-HOT update is caused by an index key change, a new index
+entry must be inserted for the changed index. But if the index key
+hasn't changed for other indexes, we don't really need to insert a new
+entry. Even though the existing index entry is pointing to the old
+tuple, the new tuple is reachable via the t_ctid chain. To keep things
+simple, a WARM update requires that the heap block must have enough
+space to store the new version of the tuple. This is same as HOT
+updates.
+
+In WARM, we ensure that every index entry always points to the root of
+the WARM chain. In fact, a WARM chain looks exactly like a HOT chain
+except for the fact that there could be multiple index entries pointing
+to the root of the chain. So when new entry is inserted in an index for
+updated tuple, and if we are doing a WARM update, the new entry is made
+point to the root of the WARM chain.
+
+For example, if we have a table with two columns and two indexes on each
+of the column. When a tuple is first inserted the table, we have exactly
+one index entry pointing to the tuple from both indexes.
+
+	lp [1]
+	[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's entry (aaaa) also points to 1
+
+Now if the tuple's second column is updated and there is room on the
+page, we perform a WARM update: Index1 does not get any new entry, and
+Index2's new entry still points to the root tuple of the chain.
+
+	lp [1]  [2]
+	[1111, aaaa]->[1111, bbbb]
+
+	Index1's entry (1111) points to 1
+	Index2's old entry (aaaa) points to 1
+	Index2's new entry (bbbb) also points to 1
+
+"An update chain that has more than one index entry pointing to its
+root line pointer is called a WARM chain, and the action that creates a
+WARM chain is called a WARM update."
+
+Since all index entries always point to the root of the WARM chain,
+even when there is more than one entry per index, WARM chains can be
+pruned and dead tuples can be removed without any need for
+corresponding index cleanup.
+
+While this solves the problem of pruning dead tuples from a HOT/WARM
+chain, it also opens up a new technical challenge because now we have a
+situation where a heap tuple is reachable from multiple index entries,
+each having a different index key. While MVCC still ensures that only
+valid tuples are returned, a tuple with a wrong index key may be
+returned because of wrong index entries. In the above example, tuple
+[1111, bbbb] is reachable from both keys (aaaa) as well as (bbbb). For
+this reason, tuples returned from a WARM chain must always be rechecked
+for index key-match.
+
+Recheck Index Key Against Heap Tuple
+------------------------------------
+
+Since every index AM has its own notion of index tuples, each index AM
+must implement its own method to recheck heap tuples. For example, a
+hash index stores the hash value of the column, and hence the recheck
+routine for the hash AM must first compute the hash value of the heap
+attribute and then compare it against the value stored in the index
+tuple.
+
+The patch currently implements recheck routines for hash and btree
+indexes. If a table has an index whose AM does not provide a recheck
+routine, WARM updates are disabled on that table.
+
+Problem With Duplicate (key, ctid) Index Entries
+------------------------------------------------
+
+The index-key recheck logic works as long as there are no duplicate
+index entries with the same key pointing to the same WARM chain. If
+there were, the same valid tuple would be reachable via multiple index
+entries, each satisfying the index key check. In the above example, if
+the tuple [1111, bbbb] is
+again updated to [1111, aaaa] and if we insert a new index entry (aaaa)
+pointing to the root line pointer, we will end up with the following
+structure:
+
+	lp [1]  [2]  [3]
+	[1111, aaaa]->[1111, bbbb]->[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's oldest entry (aaaa) points to 1
+	Index2's old entry (bbbb) also points to 1
+	Index2's new entry (aaaa) also points to 1
+
+We must solve this problem to ensure that the same tuple is not
+reachable via multiple index pointers. There are a couple of ways to
+address this issue:
+
+1. Do not allow a WARM update to a tuple that is already in a WARM
+chain. This
+guarantees that there can never be duplicate index entries to the same
+root line pointer because we must have checked for old and new index
+keys while doing the first WARM update.
+
+2. Do not allow duplicate (key, ctid) index pointers. In the above
+example, since (aaaa, 1) already exists in the index, we must not insert
+a duplicate index entry.
+
+The patch currently implements option 1, i.e. it never WARM-updates a
+tuple that already belongs to a WARM chain. HOT updates are still fine
+because they do not add a new index entry.
+
+Even with this restriction, WARM is a significant improvement because
+the number of regular (index-inserting) updates is cut roughly in half.
+
+Expression and Partial Indexes
+------------------------------
+
+Expressions may evaluate to the same value even if the underlying column
+values have changed. A simple example is an index on "lower(col)",
+which returns the same value if the new heap value differs only in
+case. So we cannot rely solely on the heap column check to decide
+whether or not to insert a new index entry for expression indexes.
+Similarly, for partial indexes, the predicate expression must be
+evaluated to decide whether or not a new index entry is needed when
+columns referred to in the predicate change.
+
+(None of this is currently implemented; we simply disallow WARM updates
+if any column used in an expression index or an index predicate has
+changed.)
+
+
+Efficiently Finding the Root Line Pointer
+-----------------------------------------
+
+During a WARM update, we must be able to find the root line pointer of
+the tuple being updated. Note that the t_ctid field in the heap tuple
+header is normally used to find the next tuple in the update chain, but
+the tuple that we are updating must be the last tuple in the chain, and
+in that case t_ctid conventionally points to the tuple itself. So in
+theory we can use t_ctid to store additional information in the last
+tuple of the update chain, provided the fact that the tuple is the last
+one is recorded elsewhere.
+
+We now utilize another bit from t_infomask2 to explicitly identify that
+this is the last tuple in the update chain.
+
+HEAP_LATEST_TUPLE - When this bit is set, the tuple is the last tuple in
+the update chain. The OffsetNumber part of t_ctid points to the root
+line pointer of the chain when HEAP_LATEST_TUPLE flag is set.
+
+If the UPDATE operation is aborted, the last tuple in the update chain
+becomes dead. The tuple that remains the last valid one in the chain
+has meanwhile lost its root line pointer information (its t_ctid was
+overwritten to point to the aborted tuple). In such rare cases, the
+root line pointer must be found the hard way, by scanning the entire
+heap page.
+
+Tracking WARM Chains
+--------------------
+
+The old tuple and every subsequent tuple in the chain is marked with a
+special HEAP_WARM_TUPLE flag. We use the last remaining bit in
+t_infomask2 to store this information.
+
+When a tuple is returned from a WARM chain, the caller must do
+additional checks to ensure that the tuple matches the index key. Even
+if the tuple precedes the WARM update in the chain, it must still be
+rechecked for an index key match (the case where an old tuple is
+returned via the new index key). So we must follow the update chain to
+the end every time to check whether this is a WARM chain.
+
+When the old updated tuple is retired and the root line pointer is
+converted into a redirected line pointer, we can copy the information
+about the WARM chain to the redirected line pointer by storing a special
+value in the lp_len field of the line pointer. This will handle the most
+common case where a WARM chain is replaced by a redirect line pointer
+and a single tuple in the chain.
+
+Converting WARM chains back to HOT chains (VACUUM ?)
+----------------------------------------------------
+
+The current implementation of WARM allows only one WARM update per
+chain. This simplifies the design and addresses certain issues around
+duplicate scans. But this also implies that the benefit of WARM will be
+no more than 50%. That is still significant, but if we could return
+WARM chains to normal status, we could do far more WARM updates.
+
+A distinct property of a WARM chain is that at least one index has more
+than one live index entry pointing to the root of the chain. In other
+words, if we can remove the duplicate entries from every index, or
+conclusively prove that there are no duplicate index entries for the
+root line pointer, the chain can again be marked as HOT.
+
+Here is one idea:
+
+A WARM chain has two parts, separated by the tuple that caused the WARM
+update. All tuples within each part have matching index keys, but
+certain index keys may not match between the two parts. Let's say we
+mark heap
+tuples in each part with a special Red-Blue flag. The same flag is
+replicated in the index tuples. For example, when new rows are inserted
+in a table, they are marked with Blue flag and the index entries
+associated with those rows are also marked with Blue flag. When a row is
+WARM updated, the new version is marked with Red flag and the new index
+entry created by the update is also marked with Red flag.
+
+
+Heap chain: [1] [2] [3] [4]
+			[aaaa, 1111]B -> [aaaa, 1111]B -> [bbbb, 1111]R -> [bbbb, 1111]R
+
+Index1: 	(aaaa)B points to 1 (satisfies only tuples marked with B)
+			(bbbb)R points to 1 (satisfies only tuples marked with R)
+
+Index2:		(1111)B points to 1 (satisfies both B and R tuples)
+
+
+It's clear that for indexes with Red and Blue pointers, a heap tuple
+with Blue flag will be reachable from Blue pointer and that with Red
+flag will be reachable from Red pointer. But for indexes which did not
+create a new entry, both Blue and Red tuples will be reachable from Blue
+pointer (there is no Red pointer in such indexes). So, as a side note,
+matching Red and Blue flags is not enough from index scan perspective.
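The reachability rule described above can be written out as a small C predicate. This is a sketch with illustrative names, and it models only the color matching; as the text notes, the index-key recheck still applies on top of it.

```c
#include <assert.h>
#include <stdbool.h>

typedef enum { BLUE, RED } Color;

/*
 * A Red index pointer satisfies only Red heap tuples.  A Blue pointer
 * satisfies Blue tuples always, and Red tuples only in indexes that
 * never created their own Red pointer for this chain (Index2 above).
 */
static bool
satisfies_color(Color index_ptr, Color heap_tuple, bool index_has_red_ptr)
{
    if (index_ptr == RED)
        return heap_tuple == RED;
    if (heap_tuple == BLUE)
        return true;
    return !index_has_red_ptr;
}
```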
+
+During first heap scan of VACUUM, we look for tuples with
+HEAP_WARM_TUPLE set.  If all live tuples in the chain are either marked
+with Blue flag or Red flag (but no mix of Red and Blue), then the chain
+is a candidate for HOT conversion.  We remember the root line pointer
+and Red-Blue flag of the WARM chain in a separate array.
+
+If we have a Red WARM chain, then our goal is to remove Blue pointers
+and vice versa. But there is a catch. For Index2 above, there is only
+Blue pointer and that must not be removed. IOW we should remove Blue
+pointer iff a Red pointer exists. Since index vacuum may visit Red and
+Blue pointers in any order, I think we will need another index pass to
+remove dead index pointers. So in the first index pass we check which
+WARM candidates have 2 index pointers. In the second pass, we remove the
+dead pointer and reset the Red flag if the surviving index pointer is Red.
+
+During the second heap scan, we fix WARM chain by clearing
+HEAP_WARM_TUPLE flag and also reset Red flag to Blue.
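The two index passes can be modeled as a toy C routine. This is a sketch under the assumptions of the text above (one WARM update per chain, at most two pointers per index per chain); the names are illustrative. Pass one decides whether the chain has pointers of both colors; only then does pass two remove the off-color pointer and repaint a surviving Red pointer Blue. A lone Blue pointer, Index2's case, is left untouched.

```c
#include <assert.h>
#include <stdbool.h>

typedef enum { BLUE, RED } Color;

typedef struct
{
    Color color;
    bool  dead;     /* marked for removal in the second index pass */
} IndexPointer;

static void
convert_warm_chain(IndexPointer *ptrs, int n, Color chain_color)
{
    bool has_red = false, has_blue = false;
    int  i;

    for (i = 0; i < n; i++)         /* pass 1: count colors */
    {
        if (ptrs[i].color == RED)
            has_red = true;
        else
            has_blue = true;
    }

    if (!(has_red && has_blue))     /* only one color: nothing to remove */
        return;

    for (i = 0; i < n; i++)         /* pass 2: drop the duplicate */
    {
        if (ptrs[i].color != chain_color)
            ptrs[i].dead = true;    /* off-color duplicate goes away */
        else
            ptrs[i].color = BLUE;   /* reset a surviving Red to Blue */
    }
}
```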
+
+There are some more problems around aborted vacuums. For example, if
+vacuum aborts after changing Red index flag to Blue but before removing
+the other Blue pointer, we will end up with two Blue pointers to a Red
+WARM chain. But since the HEAP_WARM_TUPLE flag on the heap tuple is
+still set, further WARM updates to the chain will be blocked. I guess we
+will need some special handling for the case with multiple Blue
+pointers. We can either leave these WARM chains alone and let them die
+with a subsequent non-WARM update, or apply heap-recheck logic during
+index vacuum to find the dead pointer. Given that vacuum aborts are not
+common, I am inclined to leave this case unhandled. We must still check
+for the presence of multiple Blue pointers and ensure that we neither
+accidentally remove either of the Blue pointers nor clear the WARM
+chain flags.
#5Bruce Momjian
bruce@momjian.us
In reply to: Pavan Deolasee (#1)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Aug 31, 2016 at 10:15:33PM +0530, Pavan Deolasee wrote:

Instead, what I would like to propose and the patch currently implements is to
restrict WARM update to once per chain. So the first non-HOT update to a tuple
or a HOT chain can be a WARM update. The chain can further be HOT updated any
number of times. But it can no further be WARM updated. This might look too
restrictive, but it can still bring down the number of regular updates by
almost 50%. Further, if we devise a strategy to convert a WARM chain back to
HOT chain, it can again be WARM updated. (This part is currently not
implemented). A good side effect of this simple strategy is that we know there
can be a maximum of two index entries pointing to any given WARM chain.

I like the simplified approach, as long as it doesn't block further
improvements.

Headline TPS numbers:

Master:

transaction type: update.sql
scaling factor: 700
query mode: simple
number of clients: 16
number of threads: 8
duration: 57600 s
number of transactions actually processed: 65552986
latency average: 14.059 ms
tps = 1138.072117 (including connections establishing)
tps = 1138.072156 (excluding connections establishing)

WARM:

transaction type: update.sql
scaling factor: 700
query mode: simple
number of clients: 16
number of threads: 8
duration: 57600 s
number of transactions actually processed: 116168454
latency average: 7.933 ms
tps = 2016.812924 (including connections establishing)
tps = 2016.812997 (excluding connections establishing)

These are very impressive results.

Converting WARM chains back to HOT chains (VACUUM ?)
---------------------------------------------------------------------------------

The current implementation of WARM allows only one WARM update per chain. This
simplifies the design and addresses certain issues around duplicate scans. But
this also implies that the benefit of WARM will be no more than 50%, which is
still significant, but if we could return WARM chains back to normal status, we
could do far more WARM updates.

A distinct property of a WARM chain is that at least one index has more than
one live index entries pointing to the root of the chain. In other words, if we
can remove duplicate entry from every index or conclusively prove that there
are no duplicate index entries for the root line pointer, the chain can again
be marked as HOT.

I had not thought of how to convert from WARM to HOT yet.

Here is one idea, but more thoughts/suggestions are most welcome.

A WARM chain has two parts, separated by the tuple that caused WARM update. All
tuples in each part has matching index keys, but certain index keys may not
match between these two parts. Lets say we mark heap tuples in each part with a
special Red-Blue flag. The same flag is replicated in the index tuples. For
example, when new rows are inserted in a table, they are marked with Blue flag
and the index entries associated with those rows are also marked with Blue
flag. When a row is WARM updated, the new version is marked with Red flag and
the new index entry created by the update is also marked with Red flag.

Heap chain: lp [1] [2] [3] [4]
  [aaaa, 1111]B -> [aaaa, 1111]B -> [bbbb, 1111]R -> [bbbb, 1111]R

Index1: (aaaa)B points to 1 (satisfies only tuples marked with B)
(bbbb)R points to 1 (satisfies only tuples marked with R)

Index2: (1111)B points to 1 (satisfies both B and R tuples)

It's clear that for indexes with Red and Blue pointers, a heap tuple with Blue
flag will be reachable from Blue pointer and that with Red flag will be
reachable from Red pointer. But for indexes which did not create a new entry,
both Blue and Red tuples will be reachable from Blue pointer (there is no Red
pointer in such indexes). So, as a side note, matching Red and Blue flags is
not enough from index scan perspective.

During first heap scan of VACUUM, we look for tuples with HEAP_WARM_TUPLE set.
If all live tuples in the chain are either marked with Blue flag or Red flag
(but no mix of Red and Blue), then the chain is a candidate for HOT conversion.

Uh, if the chain is all blue, then there are no WARM entries so it is
already a HOT chain, so there is nothing to do, right?

We remember the root line pointer and Red-Blue flag of the WARM chain in a
separate array.

If we have a Red WARM chain, then our goal is to remove Blue pointers and vice
versa. But there is a catch. For Index2 above, there is only Blue pointer
and that must not be removed. IOW we should remove Blue pointer iff a Red
pointer exists. Since index vacuum may visit Red and Blue pointers in any
order, I think we will need another index pass to remove dead
index pointers. So in the first index pass we check which WARM candidates have
2 index pointers. In the second pass, we remove the dead pointer and reset Red
flag if the surviving index pointer is Red.

Why not just remember the tid of chains converted from WARM to HOT, then
use "amrecheck" on an index entry matching that tid to see if the index
matches one of the entries in the chain. (It will match all of them or
none of them, because they are all red.) I don't see a point in
coloring the index entries as red as later you would have to convert to
blue in the WARM-to-HOT conversion, and a vacuum crash could lead to
inconsistencies. Consider that you can just call "amrecheck" on the few
chains that have converted from WARM to HOT. I believe this is more
crash-safe too. However, if you have converted WARM to HOT in the heap,
but crash during the index entry removal, you could potentially have
duplicates in the index later, which is bad.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+                     Ancient Roman grave inscription +

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#6Bruce Momjian
bruce@momjian.us
In reply to: Bruce Momjian (#5)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Aug 31, 2016 at 04:03:29PM -0400, Bruce Momjian wrote:

Why not just remember the tid of chains converted from WARM to HOT, then
use "amrecheck" on an index entry matching that tid to see if the index
matches one of the entries in the chain. (It will match all of them or
none of them, because they are all red.) I don't see a point in
coloring the index entries as reds as later you would have to convert to
blue in the WARM-to-HOT conversion, and a vacuum crash could lead to
inconsistencies. Consider that you can just call "amrecheck" on the few
chains that have converted from WARM to HOT. I believe this is more
crash-safe too. However, if you have converted WARM to HOT in the heap,
but crash during the index entry removal, you could potentially have
duplicates in the index later, which is bad.

I think Pavan had the "crash during the index entry removal" fixed via:

During the second heap scan, we fix WARM chain by clearing HEAP_WARM_TUPLE flag
and also reset Red flag to Blue.

so the marking from WARM to HOT only happens after the index has been cleaned.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+                     Ancient Roman grave inscription +

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#7Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Bruce Momjian (#5)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Sep 1, 2016 at 1:33 AM, Bruce Momjian <bruce@momjian.us> wrote:

On Wed, Aug 31, 2016 at 10:15:33PM +0530, Pavan Deolasee wrote:

Instead, what I would like to propose and the patch currently implements
is to restrict WARM update to once per chain. So the first non-HOT update
to a tuple or a HOT chain can be a WARM update. The chain can further be
HOT updated any number of times. But it can no further be WARM updated.
This might look too restrictive, but it can still bring down the number
of regular updates by almost 50%. Further, if we devise a strategy to
convert a WARM chain back to HOT chain, it can again be WARM updated.
(This part is currently not implemented). A good side effect of this
simple strategy is that we know there can be a maximum of two index
entries pointing to any given WARM chain.

I like the simplified approach, as long as it doesn't block further
improvements.

Yes, the proposed approach is simple yet does not stop us from improving
things further. Moreover it has shown good performance characteristics and
I believe it's a good first step.

Master:
tps = 1138.072117 (including connections establishing)

WARM:
tps = 2016.812924 (including connections establishing)

These are very impressive results.

Thanks. What's also interesting and something that headline numbers don't
show is that WARM TPS is as much as 3 times of master TPS when the
percentage of WARM updates is very high. Notice the spike in TPS in the
comparison graph.

Results with non-default heap fill factor are even better. In both cases,
the improvement in TPS stays constant over long periods.

During first heap scan of VACUUM, we look for tuples with
HEAP_WARM_TUPLE set. If all live tuples in the chain are either marked
with Blue flag or Red flag (but no mix of Red and Blue), then the chain
is a candidate for HOT conversion.

Uh, if the chain is all blue, then there are no WARM entries so it is
already a HOT chain, so there is nothing to do, right?

For aborted WARM updates, the heap chain may be all blue, but there may
still be a red index pointer which must be cleared before we allow further
WARM updates to the chain.

We remember the root line pointer and Red-Blue flag of the WARM chain in
a separate array.

If we have a Red WARM chain, then our goal is to remove Blue pointers
and vice versa. But there is a catch. For Index2 above, there is only a
Blue pointer and that must not be removed. IOW we should remove the Blue
pointer iff a Red pointer exists. Since index vacuum may visit Red and
Blue pointers in any order, I think we will need another index pass to
remove dead index pointers. So in the first index pass we check which
WARM candidates have 2 index pointers. In the second pass, we remove the
dead pointer and reset the Red flag if the surviving index pointer is
Red.

Why not just remember the tid of chains converted from WARM to HOT, then
use "amrecheck" on an index entry matching that tid to see if the index
matches one of the entries in the chain.

That will require random access to heap during index vacuum phase,
something I would like to avoid. But we can have that as a fall back
solution for handling aborted vacuums.

(It will match all of them or
none of them, because they are all red.) I don't see a point in
coloring the index entries as reds as later you would have to convert to
blue in the WARM-to-HOT conversion, and a vacuum crash could lead to
inconsistencies.

Yes, that's a concern since the conversion of red to blue will also need to
WAL logged to ensure that a crash doesn't leave us in inconsistent state. I
still think that this will be an overall improvement as compared to
allowing one WARM update per chain.

Consider that you can just call "amrecheck" on the few
chains that have converted from WARM to HOT. I believe this is more
crash-safe too. However, if you have converted WARM to HOT in the heap,
but crash durin the index entry removal, you could potentially have
duplicates in the index later, which is bad.

As you probably already noted, we clear heap flags only after all indexes
are cleared of duplicate entries and hence a crash in between should not
cause any correctness issue. As long as heap tuples are marked as warm,
amrecheck will ensure that only valid tuples are returned to the caller.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#8Bruce Momjian
bruce@momjian.us
In reply to: Pavan Deolasee (#7)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Sep 1, 2016 at 02:37:40PM +0530, Pavan Deolasee wrote:

I like the simplified approach, as long as it doesn't block further
improvements.

Yes, the proposed approach is simple yet does not stop us from improving things
further. Moreover it has shown good performance characteristics and I believe
it's a good first step.

Agreed. This is BIG. Do you think it can be done for PG 10?

Thanks. What's also interesting and something that headline numbers don't show
is that WARM TPS is as much as 3 times of master TPS when the percentage of
WARM updates is very high. Notice the spike in TPS in the comparison graph.

Results with non-default heap fill factor are even better. In both cases, the
improvement in TPS stays constant over long periods.

Yes, I expect the benefits of this to show up in better long-term
performance.

During first heap scan of VACUUM, we look for tuples with
HEAP_WARM_TUPLE set. If all live tuples in the chain are either marked
with Blue flag or Red flag (but no mix of Red and Blue), then the chain
is a candidate for HOT conversion.

Uh, if the chain is all blue, then there are no WARM entries so it is
already a HOT chain, so there is nothing to do, right?

For aborted WARM updates, the heap chain may be all blue, but there may still
be a red index pointer which must be cleared before we allow further WARM
updates to the chain.

Ah, understood now. Thanks.

Why not just remember the tid of chains converted from WARM to HOT, then
use "amrecheck" on an index entry matching that tid to see if the index
matches one of the entries in the chain.

That will require random access to heap during index vacuum phase, something I
would like to avoid. But we can have that as a fall back solution for handling
aborted vacuums.

Yes, that is true. So the challenge is figuring out which of the index
entries pointing to the same tid is valid, and coloring helps with that?

(It will match all of them or
none of them, because they are all red.) I don't see a point in
coloring the index entries as red as later you would have to convert to
blue in the WARM-to-HOT conversion, and a vacuum crash could lead to
inconsistencies.

Yes, that's a concern since the conversion of red to blue will also need to WAL
logged to ensure that a crash doesn't leave us in inconsistent state. I still
think that this will be an overall improvement as compared to allowing one WARM
update per chain.

OK. I will think some more on this to see if I can come with another
approach.


Consider that you can just call "amrecheck" on the few
chains that have converted from WARM to HOT. I believe this is more
crash-safe too. However, if you have converted WARM to HOT in the heap,
but crash during the index entry removal, you could potentially have
duplicates in the index later, which is bad.

As you probably already noted, we clear heap flags only after all indexes are
cleared of duplicate entries and hence a crash in between should not cause any
correctness issue. As long as heap tuples are marked as warm, amrecheck will
ensure that only valid tuples are returned to the caller.

OK, got it.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I. As I am, so you will be. +
+                     Ancient Roman grave inscription +

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#9Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Bruce Momjian (#8)
2 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Sep 1, 2016 at 9:44 PM, Bruce Momjian <bruce@momjian.us> wrote:

On Thu, Sep 1, 2016 at 02:37:40PM +0530, Pavan Deolasee wrote:

I like the simplified approach, as long as it doesn't block further
improvements.

Yes, the proposed approach is simple yet does not stop us from improving

things

further. Moreover it has shown good performance characteristics and I

believe

it's a good first step.

Agreed. This is BIG. Do you think it can be done for PG 10?

I definitely think so. The patches as submitted are fully functional and
sufficient. Of course, there are XXX and TODOs that I hope to sort out
during the review process. There are also further tests needed to ensure
that the feature does not cause significant regression in the worst cases.
Again something I'm willing to do once I get some feedback on the broader
design and test cases. What I am looking for at this stage is to know if I've
missed something important in terms of design or if there is some show
stopper that I overlooked.

Latest patches rebased with current master are attached. I also added a few
more comments to the code. I forgot to give a brief about the patches, so
including that as well.

0001_track_root_lp_v4.patch: This patch uses a free t_infomask2 bit to
track latest tuple in an update chain. The t_ctid.ip_posid is used to track
the root line pointer of the update chain. We do this only in the latest
tuple in the chain because most often that tuple will be updated and we
need to quickly find the root only during update.
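Loosely, the 0001 idea can be modeled in C as follows. This is a sketch: the bit value and field names here are illustrative, not the patch's actual definitions. A free t_infomask2 bit marks the latest tuple in an update chain, and while that bit is set the offset part of t_ctid holds the chain's root line pointer instead of pointing at the tuple itself.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define HEAP_TUPLE_LATEST 0x4000    /* hypothetical t_infomask2 bit */

typedef struct
{
    uint16_t t_infomask2;
    uint16_t ctid_posid;    /* offset part of t_ctid */
} HeapHeaderModel;

static void
set_heap_latest(HeapHeaderModel *tup, uint16_t root_offnum)
{
    tup->t_infomask2 |= HEAP_TUPLE_LATEST;
    tup->ctid_posid = root_offnum;  /* reuse ip_posid for the root */
}

/*
 * Returns false for old-style chains (aborted updates, pg_upgraded data),
 * where the root must instead be found by scanning the whole page.
 */
static bool
get_root_offset(const HeapHeaderModel *tup, uint16_t *root_offnum)
{
    if ((tup->t_infomask2 & HEAP_TUPLE_LATEST) == 0)
        return false;
    *root_offnum = tup->ctid_posid;
    return true;
}
```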

0002_warm_updates_v4.patch: This patch implements the core of WARM logic.
During WARM update, we only insert new entries in the indexes whose key has
changed. But instead of indexing the real TID of the new tuple, we index
the root line pointer and then use additional recheck logic to ensure only
correct tuples are returned from such potentially broken HOT chains. Each
index AM must implement a amrecheck method to support WARM. The patch
currently implements this for hash and btree indexes.
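The recheck requirement in 0002 can be illustrated with a toy chain walk (all names here are made up for illustration, not the patch's amrecheck API): because two index entries with different keys can point at the same root, every tuple fetched through the chain must be re-verified against the scan key before being returned.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

typedef struct TupleVersion
{
    const char *indexed_key;        /* value this tuple has for the index */
    bool        visible;            /* visible to this snapshot? */
    struct TupleVersion *next;      /* next tuple in the update chain */
} TupleVersion;

/* Walk the chain from the root; return the first visible tuple whose
 * key actually matches the scan key, or NULL. */
static const TupleVersion *
fetch_from_warm_chain(const TupleVersion *root, const char *scankey)
{
    const TupleVersion *t;

    for (t = root; t != NULL; t = t->next)
        if (t->visible && strcmp(t->indexed_key, scankey) == 0)
            return t;               /* key recheck passed */
    return NULL;
}
```

Without the strcmp recheck, a scan arriving via the old key's index entry could wrongly return the new version from the same (potentially broken) HOT chain.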

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0001_track_root_lp_v4.patchapplication/octet-stream; name=0001_track_root_lp_v4.patchDownload
commit f33ee503463137aa1a2ae4c3ab04d1468ae1941c
Author: Pavan Deolasee <pavan.deolasee@gmail.com>
Date:   Sat Sep 3 14:51:00 2016 +0530

    Use HEAP_TUPLE_LATEST to mark a tuple as the latest tuple in an update chain
    and use OffsetNumber in t_ctid to store the root line pointer of the chain.
    
    t_ctid field in the tuple header is usually used to store TID of the next tuple
    in an update chain. But for the last tuple in the chain, t_ctid is made to
    point to itself. When t_ctid points to itself, that signals the end of the
    chain. With this patch, information about a tuple being the last tuple in the
    chain is stored in a separate HEAP_TUPLE_LATEST flag. This uses another free bit
    in t_infomask2. When HEAP_TUPLE_LATEST is set, the OffsetNumber field in the t_ctid
    stores the root line pointer of the chain. This will help us quickly find the
    root of an update chain.

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 6a27ef4..ccf84be 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -93,7 +93,8 @@ static HeapTuple heap_prepare_insert(Relation relation, HeapTuple tup,
 					TransactionId xid, CommandId cid, int options);
 static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
-				HeapTuple newtup, HeapTuple old_key_tup,
+				HeapTuple newtup, OffsetNumber root_offnum,
+				HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
 static void HeapSatisfiesHOTandKeyUpdate(Relation relation,
 							 Bitmapset *hot_attrs,
@@ -2250,13 +2251,13 @@ heap_get_latest_tid(Relation relation,
 		 */
 		if ((tp.t_data->t_infomask & HEAP_XMAX_INVALID) ||
 			HeapTupleHeaderIsOnlyLocked(tp.t_data) ||
-			ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid))
+			HeapTupleHeaderIsHeapLatest(tp.t_data, ctid))
 		{
 			UnlockReleaseBuffer(buffer);
 			break;
 		}
 
-		ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(tp.t_data, &ctid, offnum);
 		priorXmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		UnlockReleaseBuffer(buffer);
 	}							/* end of loop */
@@ -2415,7 +2416,8 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	START_CRIT_SECTION();
 
 	RelationPutHeapTuple(relation, buffer, heaptup,
-						 (options & HEAP_INSERT_SPECULATIVE) != 0);
+						 (options & HEAP_INSERT_SPECULATIVE) != 0,
+						 InvalidOffsetNumber);
 
 	if (PageIsAllVisible(BufferGetPage(buffer)))
 	{
@@ -2713,7 +2715,8 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		 * RelationGetBufferForTuple has ensured that the first tuple fits.
 		 * Put that on the page, and then as many other tuples as fit.
 		 */
-		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false);
+		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false,
+				InvalidOffsetNumber);
 		for (nthispage = 1; ndone + nthispage < ntuples; nthispage++)
 		{
 			HeapTuple	heaptup = heaptuples[ndone + nthispage];
@@ -2721,7 +2724,8 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			if (PageGetHeapFreeSpace(page) < MAXALIGN(heaptup->t_len) + saveFreeSpace)
 				break;
 
-			RelationPutHeapTuple(relation, buffer, heaptup, false);
+			RelationPutHeapTuple(relation, buffer, heaptup, false,
+					InvalidOffsetNumber);
 
 			/*
 			 * We don't use heap_multi_insert for catalog tuples yet, but
@@ -2993,6 +2997,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	HeapTupleData tp;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	TransactionId new_xmax;
@@ -3044,7 +3049,8 @@ heap_delete(Relation relation, ItemPointer tid,
 		LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 	}
 
-	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid));
+	offnum = ItemPointerGetOffsetNumber(tid);
+	lp = PageGetItemId(page, offnum);
 	Assert(ItemIdIsNormal(lp));
 
 	tp.t_tableOid = RelationGetRelid(relation);
@@ -3174,7 +3180,7 @@ l1:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(tp.t_data, &hufd->ctid, offnum);
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tp.t_data);
@@ -3250,8 +3256,8 @@ l1:
 	HeapTupleHeaderClearHotUpdated(tp.t_data);
 	HeapTupleHeaderSetXmax(tp.t_data, new_xmax);
 	HeapTupleHeaderSetCmax(tp.t_data, cid, iscombo);
-	/* Make sure there is no forward chain link in t_ctid */
-	tp.t_data->t_ctid = tp.t_self;
+	/* Mark this tuple as the latest tuple in the update chain */
+	HeapTupleHeaderSetHeapLatest(tp.t_data);
 
 	MarkBufferDirty(buffer);
 
@@ -3450,6 +3456,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		old_key_copied = false;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
+	OffsetNumber	root_offnum;
 	MultiXactStatus mxact_status;
 	Buffer		buffer,
 				newbuf,
@@ -3506,6 +3514,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
 
 	block = ItemPointerGetBlockNumber(otid);
+	offnum = ItemPointerGetOffsetNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
@@ -3789,7 +3798,7 @@ l2:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(oldtup.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = oldtup.t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(oldtup.t_data, &hufd->ctid, offnum);
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(oldtup.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(oldtup.t_data);
@@ -3968,7 +3977,7 @@ l2:
 		HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 		/* temporarily make it look not-updated, but locked */
-		oldtup.t_data->t_ctid = oldtup.t_self;
+		HeapTupleHeaderSetHeapLatest(oldtup.t_data);
 
 		/*
 		 * Clear all-frozen bit on visibility map if needed. We could
@@ -4149,6 +4158,20 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+		/*
+		 * For HOT (or WARM) updated tuples, we store the offset of the root
+		 * line pointer of this chain in the ip_posid field of the new tuple.
+		 * Usually this information will be available in the corresponding
+		 * field of the old tuple. But for aborted updates or pg_upgraded
+		 * databases, we might be seeing the old-style CTID chains and hence
+		 * the information must be obtained the hard way
+		 */
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			heap_get_root_tuple_one(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)),
+					&root_offnum);
 	}
 	else
 	{
@@ -4156,10 +4179,29 @@ l2:
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		root_offnum = InvalidOffsetNumber;
 	}
 
-	RelationPutHeapTuple(relation, newbuf, heaptup, false);		/* insert new tuple */
+	/* insert new tuple */
+	RelationPutHeapTuple(relation, newbuf, heaptup, false, root_offnum);
+	HeapTupleHeaderSetHeapLatest(heaptup->t_data);
+	HeapTupleHeaderSetHeapLatest(newtup->t_data);
 
+	/*
+	 * Also update the in-memory copy with the root line pointer information
+	 */
+	if (OffsetNumberIsValid(root_offnum))
+	{
+		HeapTupleHeaderSetRootOffset(heaptup->t_data, root_offnum);
+		HeapTupleHeaderSetRootOffset(newtup->t_data, root_offnum);
+	}
+	else
+	{
+		HeapTupleHeaderSetRootOffset(heaptup->t_data,
+				ItemPointerGetOffsetNumber(&heaptup->t_self));
+		HeapTupleHeaderSetRootOffset(newtup->t_data,
+				ItemPointerGetOffsetNumber(&heaptup->t_self));
+	}
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
 	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
@@ -4172,7 +4214,9 @@ l2:
 	HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 	/* record address of new tuple in t_ctid of old one */
-	oldtup.t_data->t_ctid = heaptup->t_self;
+	HeapTupleHeaderSetNextCtid(oldtup.t_data,
+			ItemPointerGetBlockNumber(&(heaptup->t_self)),
+			ItemPointerGetOffsetNumber(&(heaptup->t_self)));
 
 	/* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */
 	if (PageIsAllVisible(BufferGetPage(buffer)))
@@ -4211,6 +4255,7 @@ l2:
 
 		recptr = log_heap_update(relation, buffer,
 								 newbuf, &oldtup, heaptup,
+								 root_offnum,
 								 old_key_tuple,
 								 all_visible_cleared,
 								 all_visible_cleared_new);
@@ -4573,7 +4618,8 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	ItemId		lp;
 	Page		page;
 	Buffer		vmbuffer = InvalidBuffer;
-	BlockNumber block;
+	BlockNumber	block;
+	OffsetNumber	offnum;
 	TransactionId xid,
 				xmax;
 	uint16		old_infomask,
@@ -4585,6 +4631,7 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 
 	*buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	block = ItemPointerGetBlockNumber(tid);
+	offnum = ItemPointerGetOffsetNumber(tid);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -4631,7 +4678,7 @@ l3:
 		xwait = HeapTupleHeaderGetRawXmax(tuple->t_data);
 		infomask = tuple->t_data->t_infomask;
 		infomask2 = tuple->t_data->t_infomask2;
-		ItemPointerCopy(&tuple->t_data->t_ctid, &t_ctid);
+		HeapTupleHeaderGetNextCtid(tuple->t_data, &t_ctid, offnum);
 
 		LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
 
@@ -5069,7 +5116,7 @@ failed:
 		Assert(result == HeapTupleSelfUpdated || result == HeapTupleUpdated ||
 			   result == HeapTupleWouldBlock);
 		Assert(!(tuple->t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tuple->t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(tuple->t_data, &hufd->ctid, offnum);
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tuple->t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tuple->t_data);
@@ -5145,7 +5192,7 @@ failed:
 	 * the tuple as well.
 	 */
 	if (HEAP_XMAX_IS_LOCKED_ONLY(new_infomask))
-		tuple->t_data->t_ctid = *tid;
+		HeapTupleHeaderSetHeapLatest(tuple->t_data);
 
 	/* Clear only the all-frozen bit on visibility map if needed */
 	if (PageIsAllVisible(page) &&
@@ -5659,6 +5706,7 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 	bool		cleared_all_frozen = false;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber block;
+	OffsetNumber offnum;
 
 	ItemPointerCopy(tid, &tupid);
 
@@ -5667,6 +5715,8 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 		new_infomask = 0;
 		new_xmax = InvalidTransactionId;
 		block = ItemPointerGetBlockNumber(&tupid);
+		offnum = ItemPointerGetOffsetNumber(&tupid);
+
 		ItemPointerCopy(&tupid, &(mytup.t_self));
 
 		if (!heap_fetch(rel, SnapshotAny, &mytup, &buf, false, NULL))
@@ -5885,7 +5935,7 @@ l4:
 
 		/* if we find the end of update chain, we're done. */
 		if (mytup.t_data->t_infomask & HEAP_XMAX_INVALID ||
-			ItemPointerEquals(&mytup.t_self, &mytup.t_data->t_ctid) ||
+			HeapTupleHeaderIsHeapLatest(mytup.t_data, mytup.t_self) ||
 			HeapTupleHeaderIsOnlyLocked(mytup.t_data))
 		{
 			result = HeapTupleMayBeUpdated;
@@ -5894,7 +5944,7 @@ l4:
 
 		/* tail recursion */
 		priorXmax = HeapTupleHeaderGetUpdateXid(mytup.t_data);
-		ItemPointerCopy(&(mytup.t_data->t_ctid), &tupid);
+		HeapTupleHeaderGetNextCtid(mytup.t_data, &tupid, offnum);
 		UnlockReleaseBuffer(buf);
 		if (vmbuffer != InvalidBuffer)
 			ReleaseBuffer(vmbuffer);
@@ -6011,7 +6061,8 @@ heap_finish_speculative(Relation relation, HeapTuple tuple)
 	 * Replace the speculative insertion token with a real t_ctid, pointing to
 	 * itself like it does on regular tuples.
 	 */
-	htup->t_ctid = tuple->t_self;
+	HeapTupleHeaderSetHeapLatest(htup);
+	HeapTupleHeaderSetRootOffset(htup, offnum);
 
 	/* XLOG stuff */
 	if (RelationNeedsWAL(relation))
@@ -6137,7 +6188,9 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	HeapTupleHeaderSetXmin(tp.t_data, InvalidTransactionId);
 
 	/* Clear the speculative insertion token too */
-	tp.t_data->t_ctid = tp.t_self;
+	HeapTupleHeaderSetNextCtid(tp.t_data,
+			ItemPointerGetBlockNumber(&tp.t_self),
+			ItemPointerGetOffsetNumber(&tp.t_self));
 
 	MarkBufferDirty(buffer);
 
@@ -7486,6 +7539,7 @@ log_heap_visible(RelFileNode rnode, Buffer heap_buffer, Buffer vm_buffer,
 static XLogRecPtr
 log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup, HeapTuple newtup,
+				OffsetNumber root_offnum,
 				HeapTuple old_key_tuple,
 				bool all_visible_cleared, bool new_all_visible_cleared)
 {
@@ -7605,6 +7659,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	/* Prepare WAL data for the new page */
 	xlrec.new_offnum = ItemPointerGetOffsetNumber(&newtup->t_self);
 	xlrec.new_xmax = HeapTupleHeaderGetRawXmax(newtup->t_data);
+	xlrec.root_offnum = root_offnum;
 
 	bufflags = REGBUF_STANDARD;
 	if (init)
@@ -8260,7 +8315,7 @@ heap_xlog_delete(XLogReaderState *record)
 			PageClearAllVisible(page);
 
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = target_tid;
+		HeapTupleHeaderSetHeapLatest(htup);
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8350,7 +8405,9 @@ heap_xlog_insert(XLogReaderState *record)
 		htup->t_hoff = xlhdr.t_hoff;
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
-		htup->t_ctid = target_tid;
+
+		HeapTupleHeaderSetHeapLatest(htup);
+		HeapTupleHeaderSetRootOffset(htup, xlrec->offnum);
 
 		if (PageAddItem(page, (Item) htup, newlen, xlrec->offnum,
 						true, true) == InvalidOffsetNumber)
@@ -8485,8 +8542,9 @@ heap_xlog_multi_insert(XLogReaderState *record)
 			htup->t_hoff = xlhdr->t_hoff;
 			HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 			HeapTupleHeaderSetCmin(htup, FirstCommandId);
-			ItemPointerSetBlockNumber(&htup->t_ctid, blkno);
-			ItemPointerSetOffsetNumber(&htup->t_ctid, offnum);
+
+			HeapTupleHeaderSetHeapLatest(htup);
+			HeapTupleHeaderSetRootOffset(htup, offnum);
 
 			offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 			if (offnum == InvalidOffsetNumber)
@@ -8622,7 +8680,8 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
 		/* Set forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetNextCtid(htup, ItemPointerGetBlockNumber(&newtid),
+				ItemPointerGetOffsetNumber(&newtid));
 
 		/* Mark the page as a candidate for pruning */
 		PageSetPrunable(page, XLogRecGetXid(record));
@@ -8756,12 +8815,17 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetHeapLatest(htup);
 
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
 
+		if (OffsetNumberIsValid(xlrec->root_offnum))
+			HeapTupleHeaderSetRootOffset(htup, xlrec->root_offnum);
+		else
+			HeapTupleHeaderSetRootOffset(htup, offnum);
+
 		if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED)
 			PageClearAllVisible(page);
 
@@ -8889,9 +8953,7 @@ heap_xlog_lock(XLogReaderState *record)
 		{
 			HeapTupleHeaderClearHotUpdated(htup);
 			/* Make sure there is no forward chain link in t_ctid */
-			ItemPointerSet(&htup->t_ctid,
-						   BufferGetBlockNumber(buffer),
-						   offnum);
+			HeapTupleHeaderSetHeapLatest(htup);
 		}
 		HeapTupleHeaderSetXmax(htup, xlrec->locking_xid);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index c90fb71..e32deb1 100644
--- a/src/backend/access/heap/hio.c
+++ b/src/backend/access/heap/hio.c
@@ -31,12 +31,18 @@
  * !!! EREPORT(ERROR) IS DISALLOWED HERE !!!  Must PANIC on failure!!!
  *
  * Note - caller must hold BUFFER_LOCK_EXCLUSIVE on the buffer.
+ *
+ * The caller can optionally tell us to set the root offset to the given value.
+ * Otherwise, the root offset is set to the offset of the new location once it
+ * is known.  The former is used while updating an existing tuple, while the
+ * latter is used when inserting a new row.
  */
 void
 RelationPutHeapTuple(Relation relation,
 					 Buffer buffer,
 					 HeapTuple tuple,
-					 bool token)
+					 bool token,
+					 OffsetNumber root_offnum)
 {
 	Page		pageHeader;
 	OffsetNumber offnum;
@@ -69,7 +75,13 @@ RelationPutHeapTuple(Relation relation,
 		ItemId		itemId = PageGetItemId(pageHeader, offnum);
 		Item		item = PageGetItem(pageHeader, itemId);
 
-		((HeapTupleHeader) item)->t_ctid = tuple->t_self;
+		HeapTupleHeaderSetHeapLatest((HeapTupleHeader) item);
+		if (OffsetNumberIsValid(root_offnum))
+			HeapTupleHeaderSetRootOffset((HeapTupleHeader) item,
+					root_offnum);
+		else
+			HeapTupleHeaderSetRootOffset((HeapTupleHeader) item,
+					offnum);
 	}
 }
 
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index 6ff9251..7c2231a 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -55,6 +55,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
 
+static void heap_get_root_tuples_internal(Page page,
+				OffsetNumber target_offnum, OffsetNumber *root_offsets);
 
 /*
  * Optionally prune and repair fragmentation in the specified page.
@@ -740,8 +742,9 @@ heap_page_prune_execute(Buffer buffer,
  * holds a pin on the buffer. Once pin is released, a tuple might be pruned
  * and reused by a completely unrelated tuple.
  */
-void
-heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+static void
+heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offsets)
 {
 	OffsetNumber offnum,
 				maxoff;
@@ -820,6 +823,14 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 			/* Remember the root line pointer for this item */
 			root_offsets[nextoffnum - 1] = offnum;
 
+			/*
+			 * If the caller is interested in just one offset and we have
+			 * found it, we are done.
+			 */
+			if (OffsetNumberIsValid(target_offnum) &&
+					(nextoffnum == target_offnum))
+				return;
+
 			/* Advance to next chain member, if any */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				break;
@@ -829,3 +840,25 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 		}
 	}
 }
+
+/*
+ * Get root line pointer for the given tuple
+ */
+void
+heap_get_root_tuple_one(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offnum)
+{
+	OffsetNumber offsets[MaxHeapTuplesPerPage];
+
+	heap_get_root_tuples_internal(page, target_offnum, offsets);
+	*root_offnum = offsets[target_offnum - 1];
+}
+
+/*
+ * Get root line pointers for all tuples in the page
+ */
+void
+heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+{
+	heap_get_root_tuples_internal(page, InvalidOffsetNumber, root_offsets);
+}
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index 17584ba..09a164c 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -419,14 +419,14 @@ rewrite_heap_tuple(RewriteState state,
 	 */
 	if (!((old_tuple->t_data->t_infomask & HEAP_XMAX_INVALID) ||
 		  HeapTupleHeaderIsOnlyLocked(old_tuple->t_data)) &&
-		!(ItemPointerEquals(&(old_tuple->t_self),
-							&(old_tuple->t_data->t_ctid))))
+		!(HeapTupleHeaderIsHeapLatest(old_tuple->t_data, old_tuple->t_self)))
 	{
 		OldToNewMapping mapping;
 
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
-		hashkey.tid = old_tuple->t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(old_tuple->t_data, &hashkey.tid,
+				ItemPointerGetOffsetNumber(&old_tuple->t_self));
 
 		mapping = (OldToNewMapping)
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -439,7 +439,9 @@ rewrite_heap_tuple(RewriteState state,
 			 * set the ctid of this tuple to point to the new location, and
 			 * insert it right away.
 			 */
-			new_tuple->t_data->t_ctid = mapping->new_tid;
+			HeapTupleHeaderSetNextCtid(new_tuple->t_data,
+					ItemPointerGetBlockNumber(&mapping->new_tid),
+					ItemPointerGetOffsetNumber(&mapping->new_tid));
 
 			/* We don't need the mapping entry anymore */
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -525,7 +527,9 @@ rewrite_heap_tuple(RewriteState state,
 				new_tuple = unresolved->tuple;
 				free_new = true;
 				old_tid = unresolved->old_tid;
-				new_tuple->t_data->t_ctid = new_tid;
+				HeapTupleHeaderSetNextCtid(new_tuple->t_data,
+						ItemPointerGetBlockNumber(&new_tid),
+						ItemPointerGetOffsetNumber(&new_tid));
 
 				/*
 				 * We don't need the hash entry anymore, but don't free its
@@ -731,7 +735,10 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		onpage_tup->t_ctid = tup->t_self;
+		HeapTupleHeaderSetNextCtid(onpage_tup,
+				ItemPointerGetBlockNumber(&tup->t_self),
+				ItemPointerGetOffsetNumber(&tup->t_self));
+		HeapTupleHeaderSetHeapLatest(onpage_tup);
 	}
 
 	/* If heaptup is a private copy, release it. */
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 32bb3f9..079a77f 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -2443,7 +2443,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		 * As above, it should be safe to examine xmax and t_ctid without the
 		 * buffer content lock, because they can't be changing.
 		 */
-		if (ItemPointerEquals(&tuple.t_self, &tuple.t_data->t_ctid))
+		if (HeapTupleHeaderIsHeapLatest(tuple.t_data, tuple.t_self))
 		{
 			/* deleted, so forget about it */
 			ReleaseBuffer(buffer);
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index b3a595c..94b46b8 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -188,6 +188,8 @@ extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
+extern void heap_get_root_tuple_one(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offnum);
 extern void heap_get_root_tuples(Page page, OffsetNumber *root_offsets);
 
 /* in heap/syncscan.c */
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index 06a8242..5a04561 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -193,6 +193,8 @@ typedef struct xl_heap_update
 	uint8		flags;
 	TransactionId new_xmax;		/* xmax of the new tuple */
 	OffsetNumber new_offnum;	/* new tuple's offset */
+	OffsetNumber root_offnum;	/* offset of the root line pointer in case of
+								   HOT or WARM update */
 
 	/*
 	 * If XLOG_HEAP_CONTAINS_OLD_TUPLE or XLOG_HEAP_CONTAINS_OLD_KEY flags are
@@ -200,7 +202,7 @@ typedef struct xl_heap_update
 	 */
 } xl_heap_update;
 
-#define SizeOfHeapUpdate	(offsetof(xl_heap_update, new_offnum) + sizeof(OffsetNumber))
+#define SizeOfHeapUpdate	(offsetof(xl_heap_update, root_offnum) + sizeof(OffsetNumber))
 
 /*
  * This is what we need to know about vacuum page cleanup/redirect
diff --git a/src/include/access/hio.h b/src/include/access/hio.h
index a174b34..82e5b5f 100644
--- a/src/include/access/hio.h
+++ b/src/include/access/hio.h
@@ -36,7 +36,7 @@ typedef struct BulkInsertStateData
 
 
 extern void RelationPutHeapTuple(Relation relation, Buffer buffer,
-					 HeapTuple tuple, bool token);
+					 HeapTuple tuple, bool token, OffsetNumber root_offnum);
 extern Buffer RelationGetBufferForTuple(Relation relation, Size len,
 						  Buffer otherBuffer, int options,
 						  BulkInsertState bistate,
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index d7e5fad..d01e0d8 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,13 +260,19 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x1800 are available */
+/* bit 0x0800 is available */
+#define HEAP_LATEST_TUPLE		0x1000	/*
+										 * This is the last tuple in chain and
+										 * ip_posid points to the root line
+										 * pointer
+										 */
 #define HEAP_KEYS_UPDATED		0x2000	/* tuple was updated and key cols
 										 * modified, or tuple deleted */
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xE000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+
 
 /*
  * HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins.  It is
@@ -504,6 +510,30 @@ do { \
   (tup)->t_infomask2 & HEAP_ONLY_TUPLE \
 )
 
+#define HeapTupleHeaderSetHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE \
+)
+
+#define HeapTupleHeaderClearHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 &= ~HEAP_LATEST_TUPLE \
+)
+
+/*
+ * HEAP_LATEST_TUPLE is set on the last tuple in the update chain.  But for
+ * clusters upgraded from a pre-10.0 release, we still check whether t_ctid
+ * points to the tuple itself and, if so, treat that tuple as the latest in
+ * the chain.
+ */
+#define HeapTupleHeaderIsHeapLatest(tup, tid) \
+( \
+  ((tup)->t_infomask2 & HEAP_LATEST_TUPLE) || \
+  ((ItemPointerGetBlockNumber(&(tup)->t_ctid) == ItemPointerGetBlockNumber(&tid)) && \
+   (ItemPointerGetOffsetNumber(&(tup)->t_ctid) == ItemPointerGetOffsetNumber(&tid))) \
+)
+
+
 #define HeapTupleHeaderSetHeapOnly(tup) \
 ( \
   (tup)->t_infomask2 |= HEAP_ONLY_TUPLE \
@@ -542,6 +572,55 @@ do { \
 
 
 /*
+ * Set the t_ctid chain and also clear the HEAP_LATEST_TUPLE flag since we
+ * probably have a new tuple in the chain
+ */
+#define HeapTupleHeaderSetNextCtid(tup, block, offset) \
+do { \
+		ItemPointerSetBlockNumber(&((tup)->t_ctid), (block)); \
+		ItemPointerSetOffsetNumber(&((tup)->t_ctid), (offset)); \
+		HeapTupleHeaderClearHeapLatest((tup)); \
+} while (0)
+
+/*
+ * Get the TID of the next tuple in the update chain. Traditionally, we have
+ * stored the tuple's own TID in t_ctid when it is the last tuple in the
+ * chain; we preserve that behaviour by returning the self-TID if the
+ * HEAP_LATEST_TUPLE flag is set.
+ */
+#define HeapTupleHeaderGetNextCtid(tup, next_ctid, offnum) \
+do { \
+	if ((tup)->t_infomask2 & HEAP_LATEST_TUPLE) \
+	{ \
+		ItemPointerSet((next_ctid), ItemPointerGetBlockNumber(&(tup)->t_ctid), \
+				(offnum)); \
+	} \
+	else \
+	{ \
+		ItemPointerSet((next_ctid), ItemPointerGetBlockNumber(&(tup)->t_ctid), \
+				ItemPointerGetOffsetNumber(&(tup)->t_ctid)); \
+	} \
+} while (0)
+
+#define HeapTupleHeaderSetRootOffset(tup, offset) \
+do { \
+	AssertMacro(!HeapTupleHeaderIsHotUpdated(tup)); \
+	AssertMacro((tup)->t_infomask2 & HEAP_LATEST_TUPLE); \
+	ItemPointerSetOffsetNumber(&(tup)->t_ctid, (offset)); \
+} while (0)
+
+#define HeapTupleHeaderGetRootOffset(tup) \
+( \
+	AssertMacro((tup)->t_infomask2 & HEAP_LATEST_TUPLE), \
+	ItemPointerGetOffsetNumber(&(tup)->t_ctid) \
+)
+
+#define HeapTupleHeaderHasRootOffset(tup) \
+( \
+	(tup)->t_infomask2 & HEAP_LATEST_TUPLE \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
Attachment: 0002_warm_updates_v4.patch (application/octet-stream)
commit b0fa1d2aeadecbcba10ef90cf467c835aef693b1
Author: Pavan Deolasee <pavan.deolasee@gmail.com>
Date:   Mon Sep 5 09:26:11 2016 +0530

    Add support for WARM (Write Amplification Reduction Method)
    
    We have used HOT updates to handle the cases where an UPDATE does not
    change any index column. In such cases, we can avoid inserting duplicate
    index entries, which not only reduces index bloat but also makes it easier
    to clean up dead tuples without having to visit the indexes. But when an
    UPDATE changes an index column, we insert duplicate entries in all
    indexes. This results in write amplification, especially for tables with
    a large number of indexes.
    
    WARM takes this further by avoiding duplicate index entries for indexes
    whose columns are not being updated. We insert new entries only into
    those indexes whose keys have changed, and instead of indexing the actual
    TID of the new tuple, we index the root line pointer of the HOT chain. As
    a side effect, for correctness we must now verify that an index entry
    points to a tuple which really satisfies the index key.
    
    Each index AM must implement an amrecheck method which returns true iff the
    index key constructed from the given heap tuple matches the given index key.
    
    The patch currently works with several restrictions:
    
    1. WARM updates on system tables are disabled. While we disabled them for
    ease of development, there could be some issues with system tables because they
    apparently do not support lossy indexes.
    
    2. Only one WARM update per HOT chain is allowed. That seems very
    restrictive, but even that should reduce index bloat by 50%. Subsequently, we
    will optimise this by either allowing multiple WARM updates or by turning WARM
    chains to HOT chains as and when tuples retire.
    
    3. Expression and partial indexes don't work with WARM updates. For
    expression indexes, we will need to find a way to determine if the old and new
    tuple computes to the same index expression and avoid adding a duplicate index
    entry in such cases. This is not only required to avoid unnecessary index
    bloat, but also for correctness purposes. Similarly, for partial indexes, we
    must index the new entry if the old tuple did not satisfy the predicate but the
    new one does.
    
    4. If table has an index which does not support amrecheck method, WARM is
    disabled on such tables.

diff --git a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c
index debf4f4..d49d179 100644
--- a/contrib/bloom/blutils.c
+++ b/contrib/bloom/blutils.c
@@ -138,6 +138,7 @@ blhandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = blendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index 1b45a4c..ba3fffb 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -111,6 +111,7 @@ brinhandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = brinendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c
index f7f44b4..813c2c3 100644
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -88,6 +88,7 @@ gisthandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = gistendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c
index e3b1eef..d7c50c1 100644
--- a/src/backend/access/hash/hash.c
+++ b/src/backend/access/hash/hash.c
@@ -85,6 +85,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = hashendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = hashrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -265,6 +266,8 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 	OffsetNumber offnum;
 	ItemPointer current;
 	bool		res;
+	IndexTuple	itup;
+
 
 	/* Hash indexes are always lossy since we store only the hash code */
 	scan->xs_recheck = true;
@@ -302,8 +305,6 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 			 offnum <= maxoffnum;
 			 offnum = OffsetNumberNext(offnum))
 		{
-			IndexTuple	itup;
-
 			itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 			if (ItemPointerEquals(&(so->hashso_heappos), &(itup->t_tid)))
 				break;
diff --git a/src/backend/access/hash/hashsearch.c b/src/backend/access/hash/hashsearch.c
index 4825558..cf44214 100644
--- a/src/backend/access/hash/hashsearch.c
+++ b/src/backend/access/hash/hashsearch.c
@@ -59,6 +59,8 @@ _hash_next(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
 	return true;
 }
 
@@ -263,6 +265,9 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
+
 	return true;
 }
 
diff --git a/src/backend/access/hash/hashutil.c b/src/backend/access/hash/hashutil.c
index 822862d..71377ab 100644
--- a/src/backend/access/hash/hashutil.c
+++ b/src/backend/access/hash/hashutil.c
@@ -17,8 +17,12 @@
 #include "access/hash.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
+#include "nodes/execnodes.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 
 /*
@@ -352,3 +356,110 @@ _hash_binsearch_last(Page page, uint32 hash_value)
 
 	return lower;
 }
+
+/*
+ * Recheck if the heap tuple satisfies the key stored in the index tuple
+ */
+bool
+hashrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	IndexInfo  *indexInfo;
+	EState	   *estate;
+	ExprContext *econtext;
+	TupleTableSlot *slot;
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	Datum		values2[INDEX_MAX_KEYS];
+	bool		isnull2[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	indexInfo = BuildIndexInfo(indexRel);
+
+	/*
+	 * The heap tuple must be put into a slot for FormIndexDatum.
+	 */
+	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
+
+	ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
+
+	/*
+	 * Typically the index won't have expressions, but if it does we need an
+	 * EState to evaluate them.  We need it for exclusion constraints too,
+	 * even if they are just on simple columns.
+	 */
+	if (indexInfo->ii_Expressions != NIL ||
+			indexInfo->ii_ExclusionOps != NULL)
+	{
+		estate = CreateExecutorState();
+		econtext = GetPerTupleExprContext(estate);
+		econtext->ecxt_scantuple = slot;
+	}
+	else
+		estate = NULL;
+
+	/*
+	 * Form the index values and isnull flags for the index entry that we need
+	 * to check.
+	 *
+	 * Note: if the index uses functions that are not as immutable as they are
+	 * supposed to be, this could produce an index tuple different from the
+	 * original.  The index AM can catch such errors by verifying that it
+	 * finds a matching index entry with the tuple's TID.  For exclusion
+	 * constraints we check this in check_exclusion_constraint().
+	 */
+	FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
+	/*
+	 * HASH indexes compute a hash value of the key and store that in the
+	 * index. So we must first obtain the hash of the value obtained from the
+	 * heap and then do a comparison
+	 */
+	_hash_convert_tuple(indexRel, values, isnull, values2, isnull2);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL then they are equal
+		 */
+		if (isnull2[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If either is NULL then they are not equal
+		 */
+		if (isnull2[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now do a raw memory comparison
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values2[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	if (estate != NULL)
+		FreeExecutorState(estate);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	return equal;
+}
diff --git a/src/backend/access/heap/README.WARM b/src/backend/access/heap/README.WARM
new file mode 100644
index 0000000..f793570
--- /dev/null
+++ b/src/backend/access/heap/README.WARM
@@ -0,0 +1,271 @@
+src/backend/access/heap/README.WARM
+
+Write Amplification Reduction Method (WARM)
+===========================================
+
+The Heap Only Tuple (HOT) feature greatly reduced redundant index
+entries and allowed re-use of the dead space occupied by previously
+updated or deleted tuples (see src/backend/access/heap/README.HOT).
+
+One of the necessary conditions for a HOT update is that the
+update must not change a column used in any of the indexes on the table.
+The condition is sometimes hard to meet, especially for complex
+workloads with several indexes on large yet frequently updated tables.
+Worse, sometimes only one or two index columns may be updated, but the
+regular non-HOT update will still insert a new index entry in every
+index on the table, irrespective of whether the key pertaining to the
+index changed or not.
+
+WARM is a technique devised to address these problems.
+
+
+Update Chains With Multiple Index Entries Pointing to the Root
+--------------------------------------------------------------
+
+When a non-HOT update is caused by an index key change, a new index
+entry must be inserted for the changed index. But if the index key
+hasn't changed for other indexes, we don't really need to insert a new
+entry. Even though the existing index entry is pointing to the old
+tuple, the new tuple is reachable via the t_ctid chain. To keep things
+simple, a WARM update requires that the heap block must have enough
+space to store the new version of the tuple. This is the same
+requirement as for HOT updates.
+
+In WARM, we ensure that every index entry always points to the root of
+the WARM chain. In fact, a WARM chain looks exactly like a HOT chain
+except for the fact that there could be multiple index entries pointing
+to the root of the chain. So when a new entry is inserted in an index
+for the updated tuple during a WARM update, the new entry is made to
+point to the root of the WARM chain.
+
+For example, suppose we have a table with two columns and an index on
+each of them. When a tuple is first inserted into the table, each index
+has exactly one entry pointing to the tuple.
+
+	lp [1]
+	[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's entry (aaaa) also points to 1
+
+Now if the tuple's second column is updated and there is room on the
+page, we perform a WARM update: Index1 does not get any new entry, and
+Index2's new entry still points to the root of the chain.
+
+	lp [1]  [2]
+	[1111, aaaa]->[1111, bbbb]
+
+	Index1's entry (1111) points to 1
+	Index2's old entry (aaaa) points to 1
+	Index2's new entry (bbbb) also points to 1
+
+"An update chain which has more than one index entry pointing to its
+root line pointer is called a WARM chain, and the action that creates a
+WARM chain is called a WARM update."
+
+Since all indexes always point to the root of the WARM chain, even when
+there is more than one index entry, WARM chains can be pruned and dead
+tuples can be removed without any corresponding index cleanup.
+
+While this solves the problem of pruning dead tuples from a HOT/WARM
+chain, it also opens up a new technical challenge because now we have a
+situation where a heap tuple is reachable from multiple index entries,
+each having a different index key. While MVCC still ensures that only
+valid tuples are returned, a tuple may be returned via an index entry
+whose key it no longer matches. In the above example, tuple
+[1111, bbbb] is reachable from both keys (aaaa) as well as (bbbb). For
+this reason, tuples returned from a WARM chain must always be rechecked
+for index key-match.
+
+Recheck Index Key Against Heap Tuple
+------------------------------------
+
+Since every index AM has its own notion of index tuples, each AM must
+implement its own method to recheck heap tuples. For example, a hash
+index stores the hash value of the column, and hence the recheck routine
+for the hash AM must first compute the hash value of the heap attribute
+and then compare it against the value stored in the index tuple.
+
+The patch currently implements recheck routines for hash and btree
+indexes. If a table has an index which doesn't provide a recheck
+routine, WARM updates are disabled for that table.
+
+Problem With Duplicate (key, ctid) Index Entries
+------------------------------------------------
+
+The index-key recheck logic works as long as there are no duplicate
+index keys pointing to the same WARM chain. Otherwise, the same valid
+tuple would be reachable via multiple index entries, each of which
+satisfies the index key check. In the above example, if the tuple [1111, bbbb] is
+again updated to [1111, aaaa] and if we insert a new index entry (aaaa)
+pointing to the root line pointer, we will end up with the following
+structure:
+
+	lp [1]  [2]  [3]
+	[1111, aaaa]->[1111, bbbb]->[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's oldest entry (aaaa) points to 1
+	Index2's old entry (bbbb) also points to 1
+	Index2's new entry (aaaa) also points to 1
+
+We must solve this problem to ensure that the same tuple is not
+reachable via multiple index pointers. There are a couple of ways to
+address this issue:
+
+1. Do not allow a WARM update to a tuple in a WARM chain. This
+guarantees that there can never be duplicate index entries to the same
+root line pointer because we must have checked for old and new index
+keys while doing the first WARM update.
+
+2. Do not allow duplicate (key, ctid) index pointers. In the above
+example, since (aaaa, 1) already exists in the index, we must not insert
+a duplicate index entry.
+
+The patch currently implements option 1, i.e. it does not perform a WARM
+update on a tuple that is already part of a WARM chain. HOT updates are
+still fine because they do not add a new index entry.
+
+Even with this restriction, this is a significant improvement because
+the number of regular updates is cut roughly in half.
+
+Expression and Partial Indexes
+------------------------------
+
+Expressions may evaluate to the same value even if the underlying column
+values have changed. A simple example is an index on "lower(col)" which
+will return the same value if the new heap value differs only in letter
+case. So we cannot rely solely on a heap column check to decide whether
+or not to insert a new index entry for expression indexes. Similarly,
+for partial indexes, the predicate expression must be evaluated to
+decide whether or not a new index entry is needed when columns
+referenced in the predicate change.
+
+(Neither of these is currently implemented; we simply disallow a WARM
+update if a column used in an index expression or predicate has
+changed.)
+
+
+Efficiently Finding the Root Line Pointer
+-----------------------------------------
+
+During a WARM update, we must be able to find the root line pointer of
+the tuple being updated. Note that the t_ctid field in the heap tuple
+header is usually used to find the next tuple in the update chain. But
+the tuple that we are updating must be the last tuple in the update
+chain, and in that case the t_ctid field usually points to the tuple
+itself. So in theory, we could use t_ctid to store additional
+information in the last tuple of the update chain, provided the fact
+that the tuple is the last one is recorded elsewhere.
+
+We now utilize another bit from t_infomask2 to explicitly identify that
+this is the last tuple in the update chain.
+
+HEAP_LATEST_TUPLE - When this bit is set, the tuple is the last tuple in
+the update chain. The OffsetNumber part of t_ctid points to the root
+line pointer of the chain when HEAP_LATEST_TUPLE flag is set.
+
+If the UPDATE operation is aborted, the last tuple in the update chain
+becomes dead, and the root line pointer information stored in the tuple
+that remains the last valid tuple in the chain is lost. In such rare
+cases, the root line pointer must be found the hard way, by scanning the
+entire heap page.
+
+Tracking WARM Chains
+--------------------
+
+The old tuple and every subsequent tuple in the chain are marked with a
+special HEAP_WARM_TUPLE flag. We use the last remaining bit in
+t_infomask2 to store this information.
+
+When a tuple is returned from a WARM chain, the caller must do
+additional checks to ensure that the tuple matches the index key. Even
+if the tuple precedes the WARM update in the chain, it must still be
+rechecked for an index key match (the case when the old tuple is
+returned via the new index key). So we must follow the update chain to
+the end every time to check whether this is a WARM chain.
+
+When the old updated tuple is retired and the root line pointer is
+converted into a redirected line pointer, we can copy the information
+about the WARM chain to the redirected line pointer by storing a special
+value in the lp_len field of the line pointer. This handles the most
+common case, where a WARM chain is reduced to a redirect line pointer
+and a single tuple.
+
+Converting WARM chains back to HOT chains (VACUUM ?)
+----------------------------------------------------
+
+The current implementation of WARM allows only one WARM update per
+chain. This simplifies the design and addresses certain issues around
+duplicate scans. But it also implies that the benefit of WARM is capped
+at 50%, which is still significant. If we could return WARM chains back
+to normal status, we could do far more WARM updates.
+
+A distinct property of a WARM chain is that at least one index has more
+than one live index entry pointing to the root of the chain. In other
+words, if we can remove the duplicate entry from every index, or
+conclusively prove that there are no duplicate index entries for the
+root line pointer, the chain can again be marked as HOT.
+
+Here is one idea:
+
+A WARM chain has two parts, separated by the tuple that caused the WARM
+update. All tuples within each part have matching index keys, but
+certain index keys may not match between the two parts. Let's say we
+mark heap tuples in each part with a special Red-Blue flag. The same
+flag is replicated in the index tuples. For example, when new rows are
+inserted in a table, they are marked with the Blue flag and the index
+entries associated with those rows are also marked with the Blue flag.
+When a row is WARM updated, the new version is marked with the Red flag
+and the new index entry created by the update is also marked Red.
+
+
+Heap chain: [1] [2] [3] [4]
+			[aaaa, 1111]B -> [aaaa, 1111]B -> [bbbb, 1111]R -> [bbbb, 1111]R
+
+Index1: 	(aaaa)B points to 1 (satisfies only tuples marked with B)
+			(bbbb)R points to 1 (satisfies only tuples marked with R)
+
+Index2:		(1111)B points to 1 (satisfies both B and R tuples)
+
+
+It's clear that for indexes with both Red and Blue pointers, a heap
+tuple with the Blue flag will be reachable from the Blue pointer and one
+with the Red flag from the Red pointer. But for indexes which did not
+create a new entry, both Blue and Red tuples will be reachable from the
+Blue pointer (there is no Red pointer in such indexes). So, as a side
+note, matching Red and Blue flags alone is not enough for an index scan.
+
+During the first heap scan of VACUUM, we look for tuples with
+HEAP_WARM_TUPLE set.  If all live tuples in the chain are marked with
+either the Blue flag or the Red flag (but not a mix of Red and Blue),
+then the chain is a candidate for HOT conversion.  We remember the root
+line pointer and the Red-Blue flag of the WARM chain in a separate
+array.
+
+If we have a Red WARM chain, then our goal is to remove the Blue
+pointers, and vice versa. But there is a catch. For Index2 above, there
+is only a Blue pointer and it must not be removed. IOW we should remove
+a Blue pointer iff a Red pointer exists. Since index vacuum may visit
+Red and Blue pointers in any order, I think we will need another index
+pass to remove dead index pointers. So in the first index pass we check
+which WARM candidates have two index pointers. In the second pass, we
+remove the dead pointer and reset the Red flag if the surviving index
+pointer is Red.
+
+During the second heap scan, we fix the WARM chain by clearing the
+HEAP_WARM_TUPLE flag and also resetting Red flags back to Blue.
+
+There are some more problems around aborted vacuums. For example, if
+vacuum aborts after changing a Red index flag to Blue but before
+removing the other Blue pointer, we will end up with two Blue pointers
+to a Red WARM chain. But since the HEAP_WARM_TUPLE flag on the heap
+tuple is still set, further WARM updates to the chain will be blocked. I
+guess we will need some special handling for the case of multiple Blue
+pointers. We can either leave these WARM chains alone and let them die
+with a subsequent non-WARM update, or we must apply the heap-recheck
+logic during index vacuum to find the dead pointer. Given that
+vacuum-aborts are not common, I am inclined to leave this case
+unhandled. We must still check for the presence of multiple Blue
+pointers and ensure that we neither accidentally remove either of the
+Blue pointers nor clear the WARM chain flags.
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index ccf84be..800a7c0 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -99,7 +99,10 @@ static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 static void HeapSatisfiesHOTandKeyUpdate(Relation relation,
 							 Bitmapset *hot_attrs,
 							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
+							 Bitmapset *exprindx_attrs,
+							 Bitmapset **updated_attrs,
+							 bool *satisfies_hot, bool *satisfies_warm,
+							 bool *satisfies_key,
 							 bool *satisfies_id,
 							 HeapTuple oldtup, HeapTuple newtup);
 static bool heap_acquire_tuplock(Relation relation, ItemPointer tid,
@@ -1960,6 +1963,76 @@ heap_fetch(Relation relation,
 }
 
 /*
+ * Check whether the HOT chain originating or continuing at tid ever became a
+ * WARM chain, even if the UPDATE operation that made it so finally aborted.
+ */
+static void
+hot_check_warm_chain(Page dp, ItemPointer tid, bool *recheck)
+{
+	TransactionId prev_xmax = InvalidTransactionId;
+	OffsetNumber offnum;
+	HeapTupleData heapTuple;
+
+	if (*recheck == true)
+		return;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+			break;
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		/*
+		 * Presence of either WARM or WARM updated tuple signals possible
+		 * breakage and the caller must recheck the tuple returned from this chain
+		 * for index satisfaction
+		 */
+		if (HeapTupleHeaderIsHeapWarmTuple(heapTuple.t_data))
+		{
+			*recheck = true;
+			break;
+		}
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (HeapTupleIsHotUpdated(&heapTuple))
+		{
+			offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+			prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+		}
+		else
+			break;				/* end of chain */
+	}
+
+}
+
+/*
  *	heap_hot_search_buffer	- search HOT chain for tuple satisfying snapshot
  *
  * On entry, *tid is the TID of a tuple (either a simple tuple, or the root
@@ -1979,11 +2052,14 @@ heap_fetch(Relation relation,
  * Unlike heap_fetch, the caller must already have pin and (at least) share
  * lock on the buffer; it is still pinned/locked at exit.  Also unlike
  * heap_fetch, we do not report any pgstats count; caller may do so if wanted.
+ *
+ * recheck should be set false on entry by caller, will be set true on exit
+ * if a WARM tuple is encountered.
  */
 bool
 heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 					   Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call)
+					   bool *all_dead, bool first_call, bool *recheck)
 {
 	Page		dp = (Page) BufferGetPage(buffer);
 	TransactionId prev_xmax = InvalidTransactionId;
@@ -2025,6 +2101,16 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 				/* Follow the redirect */
 				offnum = ItemIdGetRedirect(lp);
 				at_chain_start = false;
+
+				/* Check if it's a WARM chain */
+				if (recheck && *recheck == false)
+				{
+					if (ItemIdIsHeapWarm(lp))
+					{
+						*recheck = true;
+						Assert(!IsSystemRelation(relation));
+					}
+				}
 				continue;
 			}
 			/* else must be end of chain */
@@ -2037,9 +2123,12 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		ItemPointerSetOffsetNumber(&heapTuple->t_self, offnum);
 
 		/*
-		 * Shouldn't see a HEAP_ONLY tuple at chain start.
+		 * Shouldn't see a HEAP_ONLY tuple at chain start, unless we are
+		 * dealing with a WARM updated tuple in which case deferred triggers
+		 * may request to fetch a WARM tuple from the middle of a chain.
 		 */
-		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple))
+		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple) &&
+				!HeapTupleIsHeapWarmTuple(heapTuple))
 			break;
 
 		/*
@@ -2052,6 +2141,22 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 			break;
 
 		/*
+		 * Check if there exists a WARM tuple somewhere down the chain and set
+		 * recheck to TRUE.
+		 *
+		 * XXX This is not very efficient right now, and we should look for
+		 * possible improvements here
+		 */
+		if (recheck && *recheck == false)
+		{
+			hot_check_warm_chain(dp, &heapTuple->t_self, recheck);
+
+			/* WARM is not supported on system tables yet */
+			if (*recheck == true)
+				Assert(!IsSystemRelation(relation));
+		}
+
+		/*
 		 * When first_call is true (and thus, skip is initially false) we'll
 		 * return the first tuple we find.  But on later passes, heapTuple
 		 * will initially be pointing to the tuple we returned last time.
@@ -2124,18 +2229,41 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
  */
 bool
 heap_hot_search(ItemPointer tid, Relation relation, Snapshot snapshot,
-				bool *all_dead)
+				bool *all_dead, bool *recheck, Buffer *cbuffer,
+				HeapTuple heapTuple)
 {
 	bool		result;
 	Buffer		buffer;
-	HeapTupleData heapTuple;
+	ItemPointerData ret_tid = *tid;
 
 	buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	LockBuffer(buffer, BUFFER_LOCK_SHARE);
-	result = heap_hot_search_buffer(tid, relation, buffer, snapshot,
-									&heapTuple, all_dead, true);
-	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-	ReleaseBuffer(buffer);
+	result = heap_hot_search_buffer(&ret_tid, relation, buffer, snapshot,
+									heapTuple, all_dead, true, recheck);
+
+	/*
+	 * If we are returning a potential candidate tuple from this chain and the
+	 * caller has requested the "recheck" hint, keep the buffer locked and
+	 * pinned. The caller must release the lock and pin on the buffer in all
+	 * such cases
+	 */
+	if (!result || !recheck || !(*recheck))
+	{
+		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+		ReleaseBuffer(buffer);
+	}
+
+	/*
+	 * Set the caller supplied tid with the actual location of the tuple being
+	 * returned
+	 */
+	if (result)
+	{
+		*tid = ret_tid;
+		if (cbuffer)
+			*cbuffer = buffer;
+	}
+
 	return result;
 }
 
@@ -3442,13 +3570,15 @@ simple_heap_delete(Relation relation, ItemPointer tid)
 HTSU_Result
 heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode)
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **updated_attrs, bool *warm_update)
 {
 	HTSU_Result result;
 	TransactionId xid = GetCurrentTransactionId();
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *exprindx_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3469,9 +3599,11 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		have_tuple_lock = false;
 	bool		iscombo;
 	bool		satisfies_hot;
+	bool		satisfies_warm;
 	bool		satisfies_key;
 	bool		satisfies_id;
 	bool		use_hot_update = false;
+	bool		use_warm_update = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
 	bool		all_visible_cleared_new = false;
@@ -3496,6 +3628,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot update tuples during a parallel operation")));
 
+	/* Assume no-warm update */
+	if (warm_update)
+		*warm_update = false;
+
 	/*
 	 * Fetch the list of attributes to be checked for HOT update.  This is
 	 * wasted effort if we fail to update or have to put the new tuple on a
@@ -3512,6 +3648,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	exprindx_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_EXPR_PREDICATE);
 
 	block = ItemPointerGetBlockNumber(otid);
 	offnum = ItemPointerGetOffsetNumber(otid);
@@ -3571,7 +3709,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	 * serendipitiously arrive at the same key values.
 	 */
 	HeapSatisfiesHOTandKeyUpdate(relation, hot_attrs, key_attrs, id_attrs,
-								 &satisfies_hot, &satisfies_key,
+								 exprindx_attrs,
+								 updated_attrs,
+								 &satisfies_hot, &satisfies_warm,
+								 &satisfies_key,
 								 &satisfies_id, &oldtup, newtup);
 	if (satisfies_key)
 	{
@@ -4118,6 +4259,34 @@ l2:
 		 */
 		if (satisfies_hot)
 			use_hot_update = true;
+		else
+		{
+			/*
+			 * If no WARM updates yet on this chain, let this update be a WARM
+			 * update.
+			 *
+			 * We check for both warm and warm updated tuples since if the
+			 * previous WARM update aborted, we may still have added
+			 * another index entry for this HOT chain. In such situations, we
+			 * must not attempt a WARM update until the duplicate (key, CTID)
+			 * index entry issue is sorted out
+			 *
+			 * XXX Later we'll add more checks to ensure WARM chains can
+			 * further be WARM updated. This is probably good enough for a first
+			 * round of tests of the remaining functionality
+			 *
+			 * XXX Disable WARM updates on system tables. There is nothing in
+			 * principle that stops us from supporting this. But it would
+			 * require an API change to propagate the changed columns back to the
+			 * caller so that CatalogUpdateIndexes() can avoid adding new
+			 * entries to indexes that are not changed by update. This will be
+			 * fixed once basic patch is tested. !!FIXME
+			 */
+			if (satisfies_warm &&
+				!HeapTupleIsHeapWarmTuple(&oldtup) &&
+				!IsSystemRelation(relation))
+				use_warm_update = true;
+		}
 	}
 	else
 	{
@@ -4158,6 +4327,21 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+
+		/*
+		 * Even if we are doing a HOT update, we must carry forward the WARM
+		 * flag because we may have already inserted another index entry
+		 * pointing to our root and a third entry may create duplicates
+		 *
+		 * XXX This should be revisited if we get index (key, CTID) duplicate
+		 * detection mechanism in place
+		 */
+		if (HeapTupleIsHeapWarmTuple(&oldtup))
+		{
+			HeapTupleSetHeapWarmTuple(heaptup);
+			HeapTupleSetHeapWarmTuple(newtup);
+		}
+
 		/*
 		 * For HOT (or WARM) updated tuples, we store the offset of the root
 		 * line pointer of this chain in the ip_posid field of the new tuple.
@@ -4173,12 +4357,38 @@ l2:
 					ItemPointerGetOffsetNumber(&(oldtup.t_self)),
 					&root_offnum);
 	}
+	else if (use_warm_update)
+	{
+		Assert(!IsSystemRelation(relation));
+
+		/* Mark the old tuple as HOT-updated */
+		HeapTupleSetHotUpdated(&oldtup);
+		HeapTupleSetHeapWarmTuple(&oldtup);
+		/* And mark the new tuple as heap-only */
+		HeapTupleSetHeapOnly(heaptup);
+		HeapTupleSetHeapWarmTuple(heaptup);
+		/* Mark the caller's copy too, in case different from heaptup */
+		HeapTupleSetHeapOnly(newtup);
+		HeapTupleSetHeapWarmTuple(newtup);
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			heap_get_root_tuple_one(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)),
+					&root_offnum);
+
+		/* Let the caller know we did a WARM update */
+		if (warm_update)
+			*warm_update = true;
+	}
 	else
 	{
 		/* Make sure tuples are correctly marked as not-HOT */
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		HeapTupleClearHeapWarmTuple(heaptup);
+		HeapTupleClearHeapWarmTuple(newtup);
 		root_offnum = InvalidOffsetNumber;
 	}
 
@@ -4297,7 +4507,12 @@ l2:
 	if (have_tuple_lock)
 		UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode);
 
-	pgstat_count_heap_update(relation, use_hot_update);
+	/*
+	 * Even with WARM we still count stats using use_hot_update,
+	 * since we continue to use that term even though such updates are
+	 * now more frequent than previously.
+	 */
+	pgstat_count_heap_update(relation, use_hot_update || use_warm_update);
 
 	/*
 	 * If heaptup is a private copy, release it.  Don't forget to copy t_self
@@ -4405,6 +4620,13 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
  * will be checking very similar sets of columns, and doing the same tests on
  * them, it makes sense to optimize and do them together.
  *
+ * The exprindx_attrs designates the set of attributes used in expression or
+ * predicate indexes. Currently, we don't allow WARM updates if a column
+ * used by an expression or predicate index is updated.
+ *
+ * If updated_attrs is not NULL, then the caller is always interested in
+ * knowing the list of changed attributes
+ *
  * We receive three bitmapsets comprising the three sets of columns we're
  * interested in.  Note these are destructively modified; that is OK since
  * this is invoked at most once in heap_update.
@@ -4417,7 +4639,11 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 static void
 HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
+							 Bitmapset *exprindx_attrs,
+							 Bitmapset **updated_attrs,
+							 bool *satisfies_hot,
+							 bool *satisfies_warm,
+							 bool *satisfies_key,
 							 bool *satisfies_id,
 							 HeapTuple oldtup, HeapTuple newtup)
 {
@@ -4454,8 +4680,11 @@ HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 		 * Since the HOT attributes are a superset of the key attributes and
 		 * the key attributes are a superset of the id attributes, this logic
 		 * is guaranteed to identify the next column that needs to be checked.
+		 *
+		 * If the caller also wants to know the list of updated index
+		 * attributes, we must scan through all the attributes
 		 */
-		if (hot_result && next_hot_attnum > FirstLowInvalidHeapAttributeNumber)
+		if ((hot_result || updated_attrs) && next_hot_attnum > FirstLowInvalidHeapAttributeNumber)
 			check_now = next_hot_attnum;
 		else if (key_result && next_key_attnum > FirstLowInvalidHeapAttributeNumber)
 			check_now = next_key_attnum;
@@ -4476,8 +4705,16 @@ HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 			if (check_now == next_id_attnum)
 				id_result = false;
 
+			/*
+			 * Add the changed attribute to updated_attrs if the caller has
+			 * asked for it
+			 */
+			if (updated_attrs)
+				*updated_attrs = bms_add_member(*updated_attrs, check_now -
+						FirstLowInvalidHeapAttributeNumber);
+
 			/* if all are false now, we can stop checking */
-			if (!hot_result && !key_result && !id_result)
+			if (!hot_result && !key_result && !id_result && !updated_attrs)
 				break;
 		}
 
@@ -4488,7 +4725,7 @@ HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 		 * bms_first_member() will return -1 and the attribute number will end
 		 * up with a value less than FirstLowInvalidHeapAttributeNumber.
 		 */
-		if (hot_result && check_now == next_hot_attnum)
+		if ((hot_result || updated_attrs) && check_now == next_hot_attnum)
 		{
 			next_hot_attnum = bms_first_member(hot_attrs);
 			next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
@@ -4505,6 +4742,23 @@ HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 		}
 	}
 
+	/*
+	 * If an attribute used in the expression of an expression index or
+	 * predicate of a predicate index has changed, we don't yet support WARM
+	 * update
+	 */
+	if (updated_attrs && bms_overlap(*updated_attrs, exprindx_attrs))
+		*satisfies_warm = false;
+	/* If the table does not support WARM update, honour that */
+	else if (!relation->rd_supportswarm)
+		*satisfies_warm = false;
+	/*
+	 * XXX Should we handle some more cases, such as when an update will result
+	 * in many or most indexes, should we fall back to a regular update?
+	 */
+	else
+		*satisfies_warm = true;
+
 	*satisfies_hot = hot_result;
 	*satisfies_key = key_result;
 	*satisfies_id = id_result;
@@ -4528,7 +4782,7 @@ simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
 	result = heap_update(relation, otid, tup,
 						 GetCurrentCommandId(true), InvalidSnapshot,
 						 true /* wait for commit */ ,
-						 &hufd, &lockmode);
+						 &hufd, &lockmode, NULL, NULL);
 	switch (result)
 	{
 		case HeapTupleSelfUpdated:
@@ -7415,6 +7669,7 @@ log_heap_cleanup_info(RelFileNode rnode, TransactionId latestRemovedXid)
 XLogRecPtr
 log_heap_clean(Relation reln, Buffer buffer,
 			   OffsetNumber *redirected, int nredirected,
+			   OffsetNumber *warm, int nwarm,
 			   OffsetNumber *nowdead, int ndead,
 			   OffsetNumber *nowunused, int nunused,
 			   TransactionId latestRemovedXid)
@@ -7428,6 +7683,7 @@ log_heap_clean(Relation reln, Buffer buffer,
 	xlrec.latestRemovedXid = latestRemovedXid;
 	xlrec.nredirected = nredirected;
 	xlrec.ndead = ndead;
+	xlrec.nwarm = nwarm;
 
 	XLogBeginInsert();
 	XLogRegisterData((char *) &xlrec, SizeOfHeapClean);
@@ -7450,6 +7706,10 @@ log_heap_clean(Relation reln, Buffer buffer,
 		XLogRegisterBufData(0, (char *) nowdead,
 							ndead * sizeof(OffsetNumber));
 
+	if (nwarm > 0)
+		XLogRegisterBufData(0, (char *) warm,
+							nwarm * sizeof(OffsetNumber));
+
 	if (nunused > 0)
 		XLogRegisterBufData(0, (char *) nowunused,
 							nunused * sizeof(OffsetNumber));
@@ -7555,6 +7815,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	bool		need_tuple_data = RelationIsLogicallyLogged(reln);
 	bool		init;
 	int			bufflags;
+	bool		warm_update = false;
 
 	/* Caller should not call me on a non-WAL-logged relation */
 	Assert(RelationNeedsWAL(reln));
@@ -7566,6 +7827,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	else
 		info = XLOG_HEAP_UPDATE;
 
+	if (HeapTupleIsHeapWarmTuple(newtup))
+		warm_update = true;
+
 	/*
 	 * If the old and new tuple are on the same page, we only need to log the
 	 * parts of the new tuple that were changed.  That saves on the amount of
@@ -7639,6 +7903,8 @@ log_heap_update(Relation reln, Buffer oldbuf,
 				xlrec.flags |= XLH_UPDATE_CONTAINS_OLD_KEY;
 		}
 	}
+	if (warm_update)
+		xlrec.flags |= XLH_UPDATE_WARM_UPDATE;
 
 	/* If new tuple is the single and first tuple on page... */
 	if (ItemPointerGetOffsetNumber(&(newtup->t_self)) == FirstOffsetNumber &&
@@ -8006,24 +8272,38 @@ heap_xlog_clean(XLogReaderState *record)
 		OffsetNumber *redirected;
 		OffsetNumber *nowdead;
 		OffsetNumber *nowunused;
+		OffsetNumber *warm;
 		int			nredirected;
 		int			ndead;
 		int			nunused;
+		int			nwarm;
+		int			i;
 		Size		datalen;
+		bool		warmchain[MaxHeapTuplesPerPage + 1];
 
 		redirected = (OffsetNumber *) XLogRecGetBlockData(record, 0, &datalen);
 
 		nredirected = xlrec->nredirected;
 		ndead = xlrec->ndead;
+		nwarm = xlrec->nwarm;
+
 		end = (OffsetNumber *) ((char *) redirected + datalen);
 		nowdead = redirected + (nredirected * 2);
-		nowunused = nowdead + ndead;
-		nunused = (end - nowunused);
+		warm = nowdead + ndead;
+		nowunused = warm + nwarm;
+
+		nunused = (end - nowunused);
 		Assert(nunused >= 0);
 
+		memset(warmchain, 0, sizeof (warmchain));
+		for (i = 0; i < nwarm; i++)
+			warmchain[warm[i]] = true;
+
+
 		/* Update all item pointers per the record, and repair fragmentation */
 		heap_page_prune_execute(buffer,
 								redirected, nredirected,
+								warmchain,
 								nowdead, ndead,
 								nowunused, nunused);
 
@@ -8610,16 +8890,22 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 	Size		freespace = 0;
 	XLogRedoAction oldaction;
 	XLogRedoAction newaction;
+	bool		warm_update = false;
 
 	/* initialize to keep the compiler quiet */
 	oldtup.t_data = NULL;
 	oldtup.t_len = 0;
 
+	if (xlrec->flags & XLH_UPDATE_WARM_UPDATE)
+		warm_update = true;
+
 	XLogRecGetBlockTag(record, 0, &rnode, NULL, &newblk);
 	if (XLogRecGetBlockTag(record, 1, NULL, NULL, &oldblk))
 	{
 		/* HOT updates are never done across pages */
 		Assert(!hot_update);
+		/* WARM updates are never done across pages */
+		Assert(!warm_update);
 	}
 	else
 		oldblk = newblk;
@@ -8679,6 +8965,11 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 								   &htup->t_infomask2);
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
+
+		/* Mark the old tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetHeapWarmTuple(htup);
+
 		/* Set forward chain link in t_ctid */
 		HeapTupleHeaderSetNextCtid(htup, ItemPointerGetBlockNumber(&newtid),
 				ItemPointerGetOffsetNumber(&newtid));
@@ -8814,6 +9105,11 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
+
+		/* Mark the new tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetHeapWarmTuple(htup);
+
 		/* Make sure there is no forward chain link in t_ctid */
 		HeapTupleHeaderSetHeapLatest(htup);
 
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index 7c2231a..d71a297 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -36,12 +36,19 @@ typedef struct
 	int			nredirected;	/* numbers of entries in arrays below */
 	int			ndead;
 	int			nunused;
+	int			nwarm;
 	/* arrays that accumulate indexes of items to be changed */
 	OffsetNumber redirected[MaxHeapTuplesPerPage * 2];
 	OffsetNumber nowdead[MaxHeapTuplesPerPage];
 	OffsetNumber nowunused[MaxHeapTuplesPerPage];
+	OffsetNumber warm[MaxHeapTuplesPerPage];
 	/* marked[i] is TRUE if item i is entered in one of the above arrays */
 	bool		marked[MaxHeapTuplesPerPage + 1];
+	/*
+	 * warmchain[i] is TRUE if item is becoming redirected lp and points a WARM
+	 * chain
+	 */
+	bool		warmchain[MaxHeapTuplesPerPage + 1];
 } PruneState;
 
 /* Local functions */
@@ -54,6 +61,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 						   OffsetNumber offnum, OffsetNumber rdoffnum);
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
+static void heap_prune_record_warmupdate(PruneState *prstate,
+						   OffsetNumber offnum);
 
 static void heap_get_root_tuples_internal(Page page,
 				OffsetNumber target_offnum, OffsetNumber *root_offsets);
@@ -203,8 +212,9 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
 	 */
 	prstate.new_prune_xid = InvalidTransactionId;
 	prstate.latestRemovedXid = *latestRemovedXid;
-	prstate.nredirected = prstate.ndead = prstate.nunused = 0;
+	prstate.nredirected = prstate.ndead = prstate.nunused = prstate.nwarm = 0;
 	memset(prstate.marked, 0, sizeof(prstate.marked));
+	memset(prstate.warmchain, 0, sizeof(prstate.warmchain));
 
 	/* Scan the page */
 	maxoff = PageGetMaxOffsetNumber(page);
@@ -241,6 +251,7 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
 		 */
 		heap_page_prune_execute(buffer,
 								prstate.redirected, prstate.nredirected,
+								prstate.warmchain,
 								prstate.nowdead, prstate.ndead,
 								prstate.nowunused, prstate.nunused);
 
@@ -268,6 +279,7 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
 
 			recptr = log_heap_clean(relation, buffer,
 									prstate.redirected, prstate.nredirected,
+									prstate.warm, prstate.nwarm,
 									prstate.nowdead, prstate.ndead,
 									prstate.nowunused, prstate.nunused,
 									prstate.latestRemovedXid);
@@ -479,6 +491,12 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
 			!TransactionIdEquals(HeapTupleHeaderGetXmin(htup), priorXmax))
 			break;
 
+		if (HeapTupleHeaderIsHeapWarmTuple(htup))
+		{
+			Assert(!IsSystemRelation(relation));
+			heap_prune_record_warmupdate(prstate, rootoffnum);
+		}
+
 		/*
 		 * OK, this tuple is indeed a member of the chain.
 		 */
@@ -668,6 +686,18 @@ heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum)
 	prstate->marked[offnum] = true;
 }
 
+/* Record item pointer which is a root of a WARM chain */
+static void
+heap_prune_record_warmupdate(PruneState *prstate, OffsetNumber offnum)
+{
+	Assert(prstate->nwarm < MaxHeapTuplesPerPage);
+	if (prstate->warmchain[offnum])
+		return;
+	prstate->warm[prstate->nwarm] = offnum;
+	prstate->nwarm++;
+	prstate->warmchain[offnum] = true;
+}
+
 
 /*
  * Perform the actual page changes needed by heap_page_prune.
@@ -681,6 +711,7 @@ heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum)
 void
 heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
+						bool *warmchain,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused)
 {
@@ -697,6 +728,12 @@ heap_page_prune_execute(Buffer buffer,
 		ItemId		fromlp = PageGetItemId(page, fromoff);
 
 		ItemIdSetRedirect(fromlp, tooff);
+
+		/*
+		 * Save information about WARM chains in the item itself
+		 */
+		if (warmchain[fromoff])
+			ItemIdSetHeapWarm(fromlp);
 	}
 
 	/* Update all now-dead line pointers */
diff --git a/src/backend/access/index/genam.c b/src/backend/access/index/genam.c
index 65c941d..4f9fb12 100644
--- a/src/backend/access/index/genam.c
+++ b/src/backend/access/index/genam.c
@@ -99,7 +99,7 @@ RelationGetIndexScan(Relation indexRelation, int nkeys, int norderbys)
 	else
 		scan->orderByData = NULL;
 
-	scan->xs_want_itup = false; /* may be set later */
+	scan->xs_want_itup = true; /* hack for now to always get index tuple */
 
 	/*
 	 * During recovery we ignore killed tuples and don't bother to kill them
diff --git a/src/backend/access/index/indexam.c b/src/backend/access/index/indexam.c
index 54b71cb..149a02d 100644
--- a/src/backend/access/index/indexam.c
+++ b/src/backend/access/index/indexam.c
@@ -71,10 +71,12 @@
 #include "access/xlog.h"
 #include "catalog/catalog.h"
 #include "catalog/index.h"
+#include "executor/executor.h"
 #include "pgstat.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
+#include "utils/datum.h"
 #include "utils/snapmgr.h"
 #include "utils/tqual.h"
 
@@ -409,7 +411,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
 	/*
 	 * The AM's amgettuple proc finds the next index entry matching the scan
 	 * keys, and puts the TID into scan->xs_ctup.t_self.  It should also set
-	 * scan->xs_recheck and possibly scan->xs_itup, though we pay no attention
+	 * scan->xs_tuple_recheck and possibly scan->xs_itup, though we pay no attention
 	 * to those fields here.
 	 */
 	found = scan->indexRelation->rd_amroutine->amgettuple(scan, direction);
@@ -448,7 +450,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
  * dropped in a future index_getnext_tid, index_fetch_heap or index_endscan
  * call).
  *
- * Note: caller must check scan->xs_recheck, and perform rechecking of the
+ * Note: caller must check scan->xs_tuple_recheck, and perform rechecking of the
  * scan keys if required.  We do not do that here because we don't have
  * enough information to do it efficiently in the general case.
  * ----------------
@@ -475,6 +477,13 @@ index_fetch_heap(IndexScanDesc scan)
 		 */
 		if (prev_buf != scan->xs_cbuf)
 			heap_page_prune_opt(scan->heapRelation, scan->xs_cbuf);
+
+		/*
+		 * If we're not always re-checking, reset recheck for this tuple
+		 */
+		if (!scan->xs_recheck)
+			scan->xs_tuple_recheck = false;
+
 	}
 
 	/* Obtain share-lock on the buffer so we can examine visibility */
@@ -484,32 +493,63 @@ index_fetch_heap(IndexScanDesc scan)
 											scan->xs_snapshot,
 											&scan->xs_ctup,
 											&all_dead,
-											!scan->xs_continue_hot);
+											!scan->xs_continue_hot,
+											&scan->xs_tuple_recheck);
 	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
 
 	if (got_heap_tuple)
 	{
+		bool res = true;
+
+		/*
+		 * OK, we got a tuple which satisfies the snapshot, but if it's part
+		 * of a WARM chain, we must do additional checks to ensure that we are
+		 * indeed returning a correct tuple. Note that if the index AM does
+		 * not implement the amrecheck method, then we don't do any additional
+		 * checks since WARM must have been disabled on such tables.
+		 *
+		 * XXX What happens when a new index which does not support amrecheck
+		 * is added to the table? Do we need to handle this case or is CREATE
+		 * INDEX and CREATE INDEX CONCURRENTLY smart enough to handle this
+		 * issue?
+		 */
+		if (scan->xs_tuple_recheck &&
+				scan->indexRelation->rd_amroutine->amrecheck)
+		{
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_SHARE);
+			res = scan->indexRelation->rd_amroutine->amrecheck(
+						scan->indexRelation,
+						scan->xs_itup,
+						scan->heapRelation,
+						&scan->xs_ctup);
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+		}
+
 		/*
 		 * Only in a non-MVCC snapshot can more than one member of the HOT
 		 * chain be visible.
 		 */
 		scan->xs_continue_hot = !IsMVCCSnapshot(scan->xs_snapshot);
 		pgstat_count_heap_fetch(scan->indexRelation);
-		return &scan->xs_ctup;
-	}
 
-	/* We've reached the end of the HOT chain. */
-	scan->xs_continue_hot = false;
+		if (res)
+			return &scan->xs_ctup;
+	}
+	else
+	{
+		/* We've reached the end of the HOT chain. */
+		scan->xs_continue_hot = false;
 
-	/*
-	 * If we scanned a whole HOT chain and found only dead tuples, tell index
-	 * AM to kill its entry for that TID (this will take effect in the next
-	 * amgettuple call, in index_getnext_tid).  We do not do this when in
-	 * recovery because it may violate MVCC to do so.  See comments in
-	 * RelationGetIndexScan().
-	 */
-	if (!scan->xactStartedInRecovery)
-		scan->kill_prior_tuple = all_dead;
+		/*
+		 * If we scanned a whole HOT chain and found only dead tuples, tell index
+		 * AM to kill its entry for that TID (this will take effect in the next
+		 * amgettuple call, in index_getnext_tid).  We do not do this when in
+		 * recovery because it may violate MVCC to do so.  See comments in
+		 * RelationGetIndexScan().
+		 */
+		if (!scan->xactStartedInRecovery)
+			scan->kill_prior_tuple = all_dead;
+	}
 
 	return NULL;
 }
diff --git a/src/backend/access/nbtree/nbtinsert.c b/src/backend/access/nbtree/nbtinsert.c
index ef69290..e0afffd 100644
--- a/src/backend/access/nbtree/nbtinsert.c
+++ b/src/backend/access/nbtree/nbtinsert.c
@@ -19,11 +19,14 @@
 #include "access/nbtree.h"
 #include "access/transam.h"
 #include "access/xloginsert.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
 #include "miscadmin.h"
+#include "nodes/execnodes.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
 #include "utils/tqual.h"
-
+#include "utils/datum.h"
 
 typedef struct
 {
@@ -249,6 +252,9 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 	BTPageOpaque opaque;
 	Buffer		nbuf = InvalidBuffer;
 	bool		found = false;
+	Buffer		buffer;
+	HeapTupleData	heapTuple;
+	bool		recheck = false;
 
 	/* Assume unique until we find a duplicate */
 	*is_unique = true;
@@ -308,6 +314,8 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				curitup = (IndexTuple) PageGetItem(page, curitemid);
 				htid = curitup->t_tid;
 
+				recheck = false;
+
 				/*
 				 * If we are doing a recheck, we expect to find the tuple we
 				 * are rechecking.  It's not a duplicate, but we have to keep
@@ -325,112 +333,153 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				 * have just a single index entry for the entire chain.
 				 */
 				else if (heap_hot_search(&htid, heapRel, &SnapshotDirty,
-										 &all_dead))
+							&all_dead, &recheck, &buffer,
+							&heapTuple))
 				{
 					TransactionId xwait;
+					bool result = true;
 
 					/*
-					 * It is a duplicate. If we are only doing a partial
-					 * check, then don't bother checking if the tuple is being
-					 * updated in another transaction. Just return the fact
-					 * that it is a potential conflict and leave the full
-					 * check till later.
+					 * If the tuple was WARM updated, we may again see our own
+					 * tuple. Since WARM updates don't create new index
+					 * entries, our own tuple is only reachable via the old
+					 * index pointer.
 					 */
-					if (checkUnique == UNIQUE_CHECK_PARTIAL)
+					if (checkUnique == UNIQUE_CHECK_EXISTING &&
+							ItemPointerCompare(&htid, &itup->t_tid) == 0)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						*is_unique = false;
-						return InvalidTransactionId;
+						found = true;
+						result = false;
+						if (recheck)
+							UnlockReleaseBuffer(buffer);
 					}
-
-					/*
-					 * If this tuple is being updated by other transaction
-					 * then we have to wait for its commit/abort.
-					 */
-					xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
-						SnapshotDirty.xmin : SnapshotDirty.xmax;
-
-					if (TransactionIdIsValid(xwait))
+					else if (recheck)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						/* Tell _bt_doinsert to wait... */
-						*speculativeToken = SnapshotDirty.speculativeToken;
-						return xwait;
+						result = btrecheck(rel, curitup, heapRel, &heapTuple);
+						UnlockReleaseBuffer(buffer);
 					}
 
-					/*
-					 * Otherwise we have a definite conflict.  But before
-					 * complaining, look to see if the tuple we want to insert
-					 * is itself now committed dead --- if so, don't complain.
-					 * This is a waste of time in normal scenarios but we must
-					 * do it to support CREATE INDEX CONCURRENTLY.
-					 *
-					 * We must follow HOT-chains here because during
-					 * concurrent index build, we insert the root TID though
-					 * the actual tuple may be somewhere in the HOT-chain.
-					 * While following the chain we might not stop at the
-					 * exact tuple which triggered the insert, but that's OK
-					 * because if we find a live tuple anywhere in this chain,
-					 * we have a unique key conflict.  The other live tuple is
-					 * not part of this chain because it had a different index
-					 * entry.
-					 */
-					htid = itup->t_tid;
-					if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL))
-					{
-						/* Normal case --- it's still live */
-					}
-					else
+					if (result)
 					{
 						/*
-						 * It's been deleted, so no error, and no need to
-						 * continue searching
+						 * It is a duplicate. If we are only doing a partial
+						 * check, then don't bother checking if the tuple is being
+						 * updated in another transaction. Just return the fact
+						 * that it is a potential conflict and leave the full
+						 * check till later.
 						 */
-						break;
-					}
+						if (checkUnique == UNIQUE_CHECK_PARTIAL)
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							*is_unique = false;
+							return InvalidTransactionId;
+						}
 
-					/*
-					 * Check for a conflict-in as we would if we were going to
-					 * write to this page.  We aren't actually going to write,
-					 * but we want a chance to report SSI conflicts that would
-					 * otherwise be masked by this unique constraint
-					 * violation.
-					 */
-					CheckForSerializableConflictIn(rel, NULL, buf);
+						/*
+						 * If this tuple is being updated by other transaction
+						 * then we have to wait for its commit/abort.
+						 */
+						xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
+							SnapshotDirty.xmin : SnapshotDirty.xmax;
+
+						if (TransactionIdIsValid(xwait))
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							/* Tell _bt_doinsert to wait... */
+							*speculativeToken = SnapshotDirty.speculativeToken;
+							return xwait;
+						}
 
-					/*
-					 * This is a definite conflict.  Break the tuple down into
-					 * datums and report the error.  But first, make sure we
-					 * release the buffer locks we're holding ---
-					 * BuildIndexValueDescription could make catalog accesses,
-					 * which in the worst case might touch this same index and
-					 * cause deadlocks.
-					 */
-					if (nbuf != InvalidBuffer)
-						_bt_relbuf(rel, nbuf);
-					_bt_relbuf(rel, buf);
+						/*
+						 * Otherwise we have a definite conflict.  But before
+						 * complaining, look to see if the tuple we want to insert
+						 * is itself now committed dead --- if so, don't complain.
+						 * This is a waste of time in normal scenarios but we must
+						 * do it to support CREATE INDEX CONCURRENTLY.
+						 *
+						 * We must follow HOT-chains here because during
+						 * concurrent index build, we insert the root TID though
+						 * the actual tuple may be somewhere in the HOT-chain.
+						 * While following the chain we might not stop at the
+						 * exact tuple which triggered the insert, but that's OK
+						 * because if we find a live tuple anywhere in this chain,
+						 * we have a unique key conflict.  The other live tuple is
+						 * not part of this chain because it had a different index
+						 * entry.
+						 */
+						recheck = false;
+						ItemPointerCopy(&itup->t_tid, &htid);
+						if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL,
+									&recheck, &buffer, &heapTuple))
+						{
+							bool result = true;
+							if (recheck)
+							{
+								/*
+								 * Recheck if the tuple actually satisfies the
+								 * index key. Otherwise, we might be following
+								 * a wrong index pointer and mustn't entertain
+								 * this tuple
+								 */
+								result = btrecheck(rel, itup, heapRel, &heapTuple);
+								UnlockReleaseBuffer(buffer);
+							}
+							if (!result)
+								break;
+							/* Normal case --- it's still live */
+						}
+						else
+						{
+							/*
+							 * It's been deleted, so no error, and no need to
+							 * continue searching
+							 */
+							break;
+						}
 
-					{
-						Datum		values[INDEX_MAX_KEYS];
-						bool		isnull[INDEX_MAX_KEYS];
-						char	   *key_desc;
-
-						index_deform_tuple(itup, RelationGetDescr(rel),
-										   values, isnull);
-
-						key_desc = BuildIndexValueDescription(rel, values,
-															  isnull);
-
-						ereport(ERROR,
-								(errcode(ERRCODE_UNIQUE_VIOLATION),
-								 errmsg("duplicate key value violates unique constraint \"%s\"",
-										RelationGetRelationName(rel)),
-							   key_desc ? errdetail("Key %s already exists.",
-													key_desc) : 0,
-								 errtableconstraint(heapRel,
-											 RelationGetRelationName(rel))));
+						/*
+						 * Check for a conflict-in as we would if we were going to
+						 * write to this page.  We aren't actually going to write,
+						 * but we want a chance to report SSI conflicts that would
+						 * otherwise be masked by this unique constraint
+						 * violation.
+						 */
+						CheckForSerializableConflictIn(rel, NULL, buf);
+
+						/*
+						 * This is a definite conflict.  Break the tuple down into
+						 * datums and report the error.  But first, make sure we
+						 * release the buffer locks we're holding ---
+						 * BuildIndexValueDescription could make catalog accesses,
+						 * which in the worst case might touch this same index and
+						 * cause deadlocks.
+						 */
+						if (nbuf != InvalidBuffer)
+							_bt_relbuf(rel, nbuf);
+						_bt_relbuf(rel, buf);
+
+						{
+							Datum		values[INDEX_MAX_KEYS];
+							bool		isnull[INDEX_MAX_KEYS];
+							char	   *key_desc;
+
+							index_deform_tuple(itup, RelationGetDescr(rel),
+									values, isnull);
+
+							key_desc = BuildIndexValueDescription(rel, values,
+									isnull);
+
+							ereport(ERROR,
+									(errcode(ERRCODE_UNIQUE_VIOLATION),
+									 errmsg("duplicate key value violates unique constraint \"%s\"",
+										 RelationGetRelationName(rel)),
+									 key_desc ? errdetail("Key %s already exists.",
+										 key_desc) : 0,
+									 errtableconstraint(heapRel,
+										 RelationGetRelationName(rel))));
+						}
 					}
 				}
 				else if (all_dead)
diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c
index 128744c..6b1236a 100644
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -23,6 +23,7 @@
 #include "access/xlog.h"
 #include "catalog/index.h"
 #include "commands/vacuum.h"
+#include "executor/nodeIndexscan.h"
 #include "storage/indexfsm.h"
 #include "storage/ipc.h"
 #include "storage/lmgr.h"
@@ -117,6 +118,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = btendscan;
 	amroutine->ammarkpos = btmarkpos;
 	amroutine->amrestrpos = btrestrpos;
+	amroutine->amrecheck = btrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -292,8 +294,9 @@ btgettuple(IndexScanDesc scan, ScanDirection dir)
 	BTScanOpaque so = (BTScanOpaque) scan->opaque;
 	bool		res;
 
-	/* btree indexes are never lossy */
-	scan->xs_recheck = false;
+	/* btree indexes are never lossy, except for WARM tuples */
+	scan->xs_recheck = indexscan_recheck;
+	scan->xs_tuple_recheck = indexscan_recheck;
 
 	/*
 	 * If we have any array keys, initialize them during first call for a
diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c
index 063c988..c9c0501 100644
--- a/src/backend/access/nbtree/nbtutils.c
+++ b/src/backend/access/nbtree/nbtutils.c
@@ -20,11 +20,15 @@
 #include "access/nbtree.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
 #include "miscadmin.h"
+#include "nodes/execnodes.h"
 #include "utils/array.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 
 typedef struct BTSortArrayContext
@@ -2065,3 +2069,103 @@ btproperty(Oid index_oid, int attno,
 			return false;		/* punt to generic code */
 	}
 }
+
+/*
+ * Check if the index tuple's key matches the one computed from the given heap
+ * tuple's attributes.
+ */
+bool
+btrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	IndexInfo  *indexInfo;
+	EState	   *estate;
+	ExprContext *econtext;
+	TupleTableSlot *slot;
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	/* Get IndexInfo for this index */
+	indexInfo = BuildIndexInfo(indexRel);
+
+	/*
+	 * The heap tuple must be put into a slot for FormIndexDatum.
+	 */
+	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
+
+	ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
+
+	/*
+	 * Typically the index won't have expressions, but if it does we need an
+	 * EState to evaluate them.  We need it for exclusion constraints too,
+	 * even if they are just on simple columns.
+	 */
+	if (indexInfo->ii_Expressions != NIL ||
+			indexInfo->ii_ExclusionOps != NULL)
+	{
+		estate = CreateExecutorState();
+		econtext = GetPerTupleExprContext(estate);
+		econtext->ecxt_scantuple = slot;
+	}
+	else
+		estate = NULL;
+
+	/*
+	 * Form the index values and isnull flags for the index entry that we need
+	 * to check.
+	 *
+	 * Note: if the index uses functions that are not as immutable as they are
+	 * supposed to be, this could produce an index tuple different from the
+	 * original.  The index AM can catch such errors by verifying that it
+	 * finds a matching index entry with the tuple's TID.  For exclusion
+	 * constraints we check this in check_exclusion_constraint().
+	 */
+	FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL, then they are equal
+		 */
+		if (isnull[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If just one is NULL, then they are not equal
+		 */
+		if (isnull[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now just do a raw memory comparison. If the index tuple was formed
+		 * using this heap tuple, the computed index values must match
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	if (estate != NULL)
+		FreeExecutorState(estate);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	return equal;
+}
diff --git a/src/backend/access/spgist/spgutils.c b/src/backend/access/spgist/spgutils.c
index d570ae5..813b5c3 100644
--- a/src/backend/access/spgist/spgutils.c
+++ b/src/backend/access/spgist/spgutils.c
@@ -67,6 +67,7 @@ spghandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = spgendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index b0b43cf..36467b2 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -54,6 +54,7 @@
 #include "nodes/makefuncs.h"
 #include "nodes/nodeFuncs.h"
 #include "optimizer/clauses.h"
+#include "optimizer/var.h"
 #include "parser/parser.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
@@ -1674,6 +1675,20 @@ BuildIndexInfo(Relation index)
 	ii->ii_Concurrent = false;
 	ii->ii_BrokenHotChain = false;
 
+	/* build a bitmap of all table attributes referred by this index */
+	for (i = 0; i < ii->ii_NumIndexAttrs; i++)
+	{
+		AttrNumber attr = ii->ii_KeyAttrNumbers[i];
+		ii->ii_indxattrs = bms_add_member(ii->ii_indxattrs, attr -
+				FirstLowInvalidHeapAttributeNumber);
+	}
+
+	/* Collect all attributes used in expressions, too */
+	pull_varattnos((Node *) ii->ii_Expressions, 1, &ii->ii_indxattrs);
+
+	/* Collect all attributes in the index predicate, too */
+	pull_varattnos((Node *) ii->ii_Predicate, 1, &ii->ii_indxattrs);
+
 	return ii;
 }
 
diff --git a/src/backend/commands/constraint.c b/src/backend/commands/constraint.c
index 26f9114..997c8f5 100644
--- a/src/backend/commands/constraint.c
+++ b/src/backend/commands/constraint.c
@@ -40,6 +40,7 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	TriggerData *trigdata = (TriggerData *) fcinfo->context;
 	const char *funcname = "unique_key_recheck";
 	HeapTuple	new_row;
+	HeapTupleData heapTuple;
 	ItemPointerData tmptid;
 	Relation	indexRel;
 	IndexInfo  *indexInfo;
@@ -102,7 +103,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	 * removed.
 	 */
 	tmptid = new_row->t_self;
-	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL))
+	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL,
+				NULL, NULL, &heapTuple))
 	{
 		/*
 		 * All rows in the HOT chain are dead, so skip the check.
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index 5947e72..75af34c 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -2491,6 +2491,7 @@ CopyFrom(CopyState cstate)
 
 				if (resultRelInfo->ri_NumIndices > 0)
 					recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+														 &(tuple->t_self), NULL,
 														 estate, false, NULL,
 														   NIL);
 
@@ -2606,6 +2607,7 @@ CopyFromInsertBatch(CopyState cstate, EState *estate, CommandId mycid,
 			ExecStoreTuple(bufferedTuples[i], myslot, InvalidBuffer, false);
 			recheckIndexes =
 				ExecInsertIndexTuples(myslot, &(bufferedTuples[i]->t_self),
+									  &(bufferedTuples[i]->t_self), NULL,
 									  estate, false, NULL, NIL);
 			ExecARInsertTriggers(estate, resultRelInfo,
 								 bufferedTuples[i],
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 231e92d..ca40e1b 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -1468,6 +1468,7 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 
 		recptr = log_heap_clean(onerel, buffer,
 								NULL, 0, NULL, 0,
+								NULL, 0,
 								unused, uncnt,
 								vacrelstats->latestRemovedXid);
 		PageSetLSN(page, recptr);
@@ -2128,6 +2129,22 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
 						break;
 					}
 
+					/*
+					 * If this tuple was ever WARM updated or is a WARM tuple,
+					 * there could be multiple index entries pointing to the
+					 * root of this chain. We can't do index-only scans for
+					 * such tuples without rechecking the index keys, so mark
+					 * the page as !all_visible.
+					 *
+					 * XXX Should we look at the root line pointer and check
+					 * if the WARM flag is set there, or is checking for
+					 * tuples in the chain good enough?
+					 */
+					if (HeapTupleHeaderIsHeapWarmTuple(tuple.t_data))
+					{
+						all_visible = false;
+					}
+
 					/* Track newest xmin on page. */
 					if (TransactionIdFollows(xmin, *visibility_cutoff_xid))
 						*visibility_cutoff_xid = xmin;
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 0e2d834..da27cf6 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -270,6 +270,8 @@ ExecCloseIndices(ResultRelInfo *resultRelInfo)
 List *
 ExecInsertIndexTuples(TupleTableSlot *slot,
 					  ItemPointer tupleid,
+					  ItemPointer root_tid,
+					  Bitmapset *updated_attrs,
 					  EState *estate,
 					  bool noDupErr,
 					  bool *specConflict,
@@ -324,6 +326,17 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 		if (!indexInfo->ii_ReadyForInserts)
 			continue;
 
+		/*
+		 * If updated_attrs is set, we only insert index entries for those
+		 * indexes whose columns have changed. All other indexes can use
+		 * their existing index pointers to look up the new tuple.
+		 */
+		if (updated_attrs)
+		{
+			if (!bms_overlap(updated_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
 		/* Check for partial index */
 		if (indexInfo->ii_Predicate != NIL)
 		{
@@ -389,7 +402,7 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 			index_insert(indexRelation, /* index relation */
 						 values,	/* array of index Datums */
 						 isnull,	/* null flags */
-						 tupleid,		/* tid of heap tuple */
+						 root_tid,		/* tid of heap or root tuple */
 						 heapRelation,	/* heap relation */
 						 checkUnique);	/* type of uniqueness check to do */
 
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index 449aacb..ff77349 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -37,6 +37,7 @@
 
 #include "access/relscan.h"
 #include "access/transam.h"
+#include "access/valid.h"
 #include "executor/execdebug.h"
 #include "executor/nodeBitmapHeapscan.h"
 #include "pgstat.h"
@@ -362,11 +363,23 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 			OffsetNumber offnum = tbmres->offsets[curslot];
 			ItemPointerData tid;
 			HeapTupleData heapTuple;
+			bool recheck = false;
 
 			ItemPointerSet(&tid, page, offnum);
 			if (heap_hot_search_buffer(&tid, scan->rs_rd, buffer, snapshot,
-									   &heapTuple, NULL, true))
-				scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+									   &heapTuple, NULL, true, &recheck))
+			{
+				bool valid = true;
+
+				if (scan->rs_key)
+					HeapKeyTest(&heapTuple, RelationGetDescr(scan->rs_rd),
+							scan->rs_nkeys, scan->rs_key, valid);
+				if (valid)
+					scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+
+				if (recheck)
+					tbmres->recheck = true;
+			}
 		}
 	}
 	else
diff --git a/src/backend/executor/nodeIndexonlyscan.c b/src/backend/executor/nodeIndexonlyscan.c
index 4f6f91c..49bda34 100644
--- a/src/backend/executor/nodeIndexonlyscan.c
+++ b/src/backend/executor/nodeIndexonlyscan.c
@@ -141,6 +141,26 @@ IndexOnlyNext(IndexOnlyScanState *node)
 			 * but it's not clear whether it's a win to do so.  The next index
 			 * entry might require a visit to the same heap page.
 			 */
+
+			/*
+			 * If the index was lossy or the tuple was WARM, we have to recheck
+			 * the index quals using the fetched tuple.
+			 */
+			if (scandesc->xs_tuple_recheck)
+			{
+				ExecStoreTuple(tuple,	/* tuple to store */
+						slot,	/* slot to store in */
+						scandesc->xs_cbuf,		/* buffer containing tuple */
+						false);	/* don't pfree */
+				econtext->ecxt_scantuple = slot;
+				ResetExprContext(econtext);
+				if (!ExecQual(node->indexqual, econtext, false))
+				{
+					/* Fails recheck, so drop it and loop back for another */
+					InstrCountFiltered2(node, 1);
+					continue;
+				}
+			}
 		}
 
 		/*
diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c
index 3143bd9..0b04bb8 100644
--- a/src/backend/executor/nodeIndexscan.c
+++ b/src/backend/executor/nodeIndexscan.c
@@ -39,6 +39,8 @@
 #include "utils/memutils.h"
 #include "utils/rel.h"
 
+bool indexscan_recheck = false;
+
 /*
  * When an ordering operator is used, tuples fetched from the index that
  * need to be reordered are queued in a pairing heap, as ReorderTuples.
@@ -115,10 +117,10 @@ IndexNext(IndexScanState *node)
 					   false);	/* don't pfree */
 
 		/*
-		 * If the index was lossy, we have to recheck the index quals using
-		 * the fetched tuple.
+		 * If the index was lossy or the tuple was WARM, we have to recheck
+		 * the index quals using the fetched tuple.
 		 */
-		if (scandesc->xs_recheck)
+		if (scandesc->xs_tuple_recheck)
 		{
 			econtext->ecxt_scantuple = slot;
 			ResetExprContext(econtext);
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index af7b26c..11bd3c0 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -433,6 +433,7 @@ ExecInsert(ModifyTableState *mtstate,
 
 			/* insert index entries for tuple */
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												 &(tuple->t_self), NULL,
 												 estate, true, &specConflict,
 												   arbiterIndexes);
 
@@ -479,6 +480,7 @@ ExecInsert(ModifyTableState *mtstate,
 			/* insert index entries for tuple */
 			if (resultRelInfo->ri_NumIndices > 0)
 				recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+													   &(tuple->t_self), NULL,
 													   estate, false, NULL,
 													   arbiterIndexes);
 		}
@@ -809,6 +811,9 @@ ExecUpdate(ItemPointer tupleid,
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
 	List	   *recheckIndexes = NIL;
+	Bitmapset  *updated_attrs = NULL;
+	ItemPointerData	root_tid;
+	bool		warm_update;
 
 	/*
 	 * abort the operation if not running transactions
@@ -923,7 +928,7 @@ lreplace:;
 							 estate->es_output_cid,
 							 estate->es_crosscheck_snapshot,
 							 true /* wait for commit */ ,
-							 &hufd, &lockmode);
+							 &hufd, &lockmode, &updated_attrs, &warm_update);
 		switch (result)
 		{
 			case HeapTupleSelfUpdated:
@@ -1010,10 +1015,28 @@ lreplace:;
 		 * the t_self field.
 		 *
 		 * If it's a HOT update, we mustn't insert new index entries.
+		 *
+		 * If it's a WARM update, then we must insert new entries with TID
+		 * pointing to the root of the WARM chain.
 		 */
-		if (resultRelInfo->ri_NumIndices > 0 && !HeapTupleIsHeapOnly(tuple))
+		if (resultRelInfo->ri_NumIndices > 0 &&
+			(!HeapTupleIsHeapOnly(tuple) || warm_update))
+		{
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self, &root_tid);
+				bms_free(updated_attrs);
+				updated_attrs = NULL;
+			}
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   updated_attrs,
 												   estate, false, NULL, NIL);
+		}
 	}
 
 	if (canSetTag)
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 79e0b1f..37874ca 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -2030,6 +2030,7 @@ RelationDestroyRelation(Relation relation, bool remember_tupdesc)
 	list_free_deep(relation->rd_fkeylist);
 	list_free(relation->rd_indexlist);
 	bms_free(relation->rd_indexattr);
+	bms_free(relation->rd_exprindexattr);
 	bms_free(relation->rd_keyattr);
 	bms_free(relation->rd_idattr);
 	if (relation->rd_options)
@@ -4373,12 +4374,15 @@ Bitmapset *
 RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 {
 	Bitmapset  *indexattrs;		/* indexed columns */
+	Bitmapset  *exprindexattrs;	/* indexed columns in expression/predicate
+									 indexes */
 	Bitmapset  *uindexattrs;	/* columns in unique indexes */
 	Bitmapset  *idindexattrs;	/* columns in the replica identity */
 	List	   *indexoidlist;
 	Oid			relreplindex;
 	ListCell   *l;
 	MemoryContext oldcxt;
+	bool		supportswarm = true;	/* true if the table can be WARM updated */
 
 	/* Quick exit if we already computed the result. */
 	if (relation->rd_indexattr != NULL)
@@ -4391,6 +4395,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				return bms_copy(relation->rd_keyattr);
 			case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 				return bms_copy(relation->rd_idattr);
+			case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+				return bms_copy(relation->rd_exprindexattr);
 			default:
 				elog(ERROR, "unknown attrKind %u", attrKind);
 		}
@@ -4429,6 +4435,7 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 	 * won't be returned at all by RelationGetIndexList.
 	 */
 	indexattrs = NULL;
+	exprindexattrs = NULL;
 	uindexattrs = NULL;
 	idindexattrs = NULL;
 	foreach(l, indexoidlist)
@@ -4474,19 +4481,32 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 		}
 
 		/* Collect all attributes used in expressions, too */
-		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &exprindexattrs);
 
 		/* Collect all attributes in the index predicate, too */
-		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &exprindexattrs);
+
+		/*
+		 * Check whether the index AM provides an amrecheck method.  If it
+		 * does not, the index cannot support WARM updates, so disable WARM
+		 * updates for the table entirely.
+		 */
+		if (!indexDesc->rd_amroutine->amrecheck)
+			supportswarm = false;
 
 		index_close(indexDesc, AccessShareLock);
 	}
 
 	list_free(indexoidlist);
 
+	/* Remember if the table can do WARM updates */
+	relation->rd_supportswarm = supportswarm;
+
 	/* Don't leak the old values of these bitmaps, if any */
 	bms_free(relation->rd_indexattr);
 	relation->rd_indexattr = NULL;
+	bms_free(relation->rd_exprindexattr);
+	relation->rd_exprindexattr = NULL;
 	bms_free(relation->rd_keyattr);
 	relation->rd_keyattr = NULL;
 	bms_free(relation->rd_idattr);
@@ -4502,7 +4522,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 	oldcxt = MemoryContextSwitchTo(CacheMemoryContext);
 	relation->rd_keyattr = bms_copy(uindexattrs);
 	relation->rd_idattr = bms_copy(idindexattrs);
-	relation->rd_indexattr = bms_copy(indexattrs);
+	relation->rd_exprindexattr = bms_copy(exprindexattrs);
+	relation->rd_indexattr = bms_copy(bms_union(indexattrs, exprindexattrs));
 	MemoryContextSwitchTo(oldcxt);
 
 	/* We return our original working copy for caller to play with */
@@ -4514,6 +4535,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 			return uindexattrs;
 		case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 			return idindexattrs;
+		case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+			return exprindexattrs;
 		default:
 			elog(ERROR, "unknown attrKind %u", attrKind);
 			return NULL;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index c5178f7..aa7b265 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -111,6 +111,7 @@ extern char *default_tablespace;
 extern char *temp_tablespaces;
 extern bool ignore_checksum_failure;
 extern bool synchronize_seqscans;
+extern bool indexscan_recheck;
 
 #ifdef TRACE_SYNCSCAN
 extern bool trace_syncscan;
@@ -1271,6 +1272,16 @@ static struct config_bool ConfigureNamesBool[] =
 		NULL, NULL, NULL
 	},
 	{
+		{"indexscan_recheck", PGC_USERSET, DEVELOPER_OPTIONS,
+			gettext_noop("Recheck heap rows returned from an index scan."),
+			NULL,
+			GUC_NOT_IN_SAMPLE
+		},
+		&indexscan_recheck,
+		false,
+		NULL, NULL, NULL
+	},
+	{
 		{"debug_deadlocks", PGC_SUSET, DEVELOPER_OPTIONS,
 			gettext_noop("Dumps information about all current locks when a deadlock timeout occurs."),
 			NULL,
diff --git a/src/include/access/amapi.h b/src/include/access/amapi.h
index 1036cca..37eaf76 100644
--- a/src/include/access/amapi.h
+++ b/src/include/access/amapi.h
@@ -13,6 +13,7 @@
 #define AMAPI_H
 
 #include "access/genam.h"
+#include "access/itup.h"
 
 /*
  * We don't wish to include planner header files here, since most of an index
@@ -137,6 +138,9 @@ typedef void (*ammarkpos_function) (IndexScanDesc scan);
 /* restore marked scan position */
 typedef void (*amrestrpos_function) (IndexScanDesc scan);
 
+/* recheck index tuple and heap tuple match */
+typedef bool (*amrecheck_function) (Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * API struct for an index AM.  Note this must be stored in a single palloc'd
@@ -196,6 +200,7 @@ typedef struct IndexAmRoutine
 	amendscan_function amendscan;
 	ammarkpos_function ammarkpos;		/* can be NULL */
 	amrestrpos_function amrestrpos;		/* can be NULL */
+	amrecheck_function amrecheck;		/* can be NULL */
 } IndexAmRoutine;
 
 
diff --git a/src/include/access/hash.h b/src/include/access/hash.h
index d9df904..a25ce5a 100644
--- a/src/include/access/hash.h
+++ b/src/include/access/hash.h
@@ -364,4 +364,8 @@ extern bool _hash_convert_tuple(Relation index,
 extern OffsetNumber _hash_binsearch(Page page, uint32 hash_value);
 extern OffsetNumber _hash_binsearch_last(Page page, uint32 hash_value);
 
+/* hash.c */
+extern bool hashrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+
 #endif   /* HASH_H */
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 94b46b8..4c05947 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -137,9 +137,10 @@ extern bool heap_fetch(Relation relation, Snapshot snapshot,
 		   Relation stats_relation);
 extern bool heap_hot_search_buffer(ItemPointer tid, Relation relation,
 					   Buffer buffer, Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call);
+					   bool *all_dead, bool first_call, bool *recheck);
 extern bool heap_hot_search(ItemPointer tid, Relation relation,
-				Snapshot snapshot, bool *all_dead);
+				Snapshot snapshot, bool *all_dead,
+				bool *recheck, Buffer *buffer, HeapTuple heapTuple);
 
 extern void heap_get_latest_tid(Relation relation, Snapshot snapshot,
 					ItemPointer tid);
@@ -160,7 +161,8 @@ extern void heap_abort_speculative(Relation relation, HeapTuple tuple);
 extern HTSU_Result heap_update(Relation relation, ItemPointer otid,
 			HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode);
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **updated_attrs, bool *warm_update);
 extern HTSU_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
 				CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
 				bool follow_update,
@@ -186,6 +188,7 @@ extern int heap_page_prune(Relation relation, Buffer buffer,
 				bool report_stats, TransactionId *latestRemovedXid);
 extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
+						bool *warmchain,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
 extern void heap_get_root_tuple_one(Page page, OffsetNumber target_offnum,
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index 5a04561..ddc3a7a 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -80,6 +80,7 @@
 #define XLH_UPDATE_CONTAINS_NEW_TUPLE			(1<<4)
 #define XLH_UPDATE_PREFIX_FROM_OLD				(1<<5)
 #define XLH_UPDATE_SUFFIX_FROM_OLD				(1<<6)
+#define XLH_UPDATE_WARM_UPDATE					(1<<7)
 
 /* convenience macro for checking whether any form of old tuple was logged */
 #define XLH_UPDATE_CONTAINS_OLD						\
@@ -211,7 +212,9 @@ typedef struct xl_heap_update
  *	* for each redirected item: the item offset, then the offset redirected to
  *	* for each now-dead item: the item offset
  *	* for each now-unused item: the item offset
- * The total number of OffsetNumbers is therefore 2*nredirected+ndead+nunused.
+ *	* for each now-warm item: the item offset
+ * The total number of OffsetNumbers is therefore
+ * 2*nredirected+ndead+nunused+nwarm.
  * Note that nunused is not explicitly stored, but may be found by reference
  * to the total record length.
  */
@@ -220,10 +223,11 @@ typedef struct xl_heap_clean
 	TransactionId latestRemovedXid;
 	uint16		nredirected;
 	uint16		ndead;
+	uint16		nwarm;
 	/* OFFSET NUMBERS are in the block reference 0 */
 } xl_heap_clean;
 
-#define SizeOfHeapClean (offsetof(xl_heap_clean, ndead) + sizeof(uint16))
+#define SizeOfHeapClean (offsetof(xl_heap_clean, nwarm) + sizeof(uint16))
 
 /*
  * Cleanup_info is required in some cases during a lazy VACUUM.
@@ -384,6 +388,7 @@ extern XLogRecPtr log_heap_cleanup_info(RelFileNode rnode,
 					  TransactionId latestRemovedXid);
 extern XLogRecPtr log_heap_clean(Relation reln, Buffer buffer,
 			   OffsetNumber *redirected, int nredirected,
+			   OffsetNumber *warm, int nwarm,
 			   OffsetNumber *nowdead, int ndead,
 			   OffsetNumber *nowunused, int nunused,
 			   TransactionId latestRemovedXid);
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index d01e0d8..3a51681 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,7 +260,8 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x0800 are available */
+#define HEAP_WARM_TUPLE			0x0800	/* this tuple is part of a WARM
+										 * chain */
 #define HEAP_LATEST_TUPLE		0x1000	/*
 										 * This is the last tuple in chain and
 										 * ip_posid points to the root line
@@ -271,7 +272,7 @@ struct HeapTupleHeaderData
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF800	/* visibility-related bits */
 
 
 /*
@@ -510,6 +511,21 @@ do { \
   (tup)->t_infomask2 & HEAP_ONLY_TUPLE \
 )
 
+#define HeapTupleHeaderSetHeapWarmTuple(tup) \
+do { \
+	(tup)->t_infomask2 |= HEAP_WARM_TUPLE; \
+} while (0)
+
+#define HeapTupleHeaderClearHeapWarmTuple(tup) \
+do { \
+	(tup)->t_infomask2 &= ~HEAP_WARM_TUPLE; \
+} while (0)
+
+#define HeapTupleHeaderIsHeapWarmTuple(tup) \
+( \
+  ((tup)->t_infomask2 & HEAP_WARM_TUPLE) \
+)
+
 #define HeapTupleHeaderSetHeapLatest(tup) \
 ( \
 	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE \
@@ -771,6 +787,15 @@ struct MinimalTupleData
 #define HeapTupleClearHeapOnly(tuple) \
 		HeapTupleHeaderClearHeapOnly((tuple)->t_data)
 
+#define HeapTupleIsHeapWarmTuple(tuple) \
+		HeapTupleHeaderIsHeapWarmTuple((tuple)->t_data)
+
+#define HeapTupleSetHeapWarmTuple(tuple) \
+		HeapTupleHeaderSetHeapWarmTuple((tuple)->t_data)
+
+#define HeapTupleClearHeapWarmTuple(tuple) \
+		HeapTupleHeaderClearHeapWarmTuple((tuple)->t_data)
+
 #define HeapTupleGetOid(tuple) \
 		HeapTupleHeaderGetOid((tuple)->t_data)
 
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index c580f51..83af072 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -751,6 +751,8 @@ extern bytea *btoptions(Datum reloptions, bool validate);
 extern bool btproperty(Oid index_oid, int attno,
 		   IndexAMProperty prop, const char *propname,
 		   bool *res, bool *isnull);
+extern bool btrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * prototypes for functions in nbtvalidate.c
diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h
index 49c2a6f..880e62e 100644
--- a/src/include/access/relscan.h
+++ b/src/include/access/relscan.h
@@ -110,7 +110,8 @@ typedef struct IndexScanDescData
 	HeapTupleData xs_ctup;		/* current heap tuple, if any */
 	Buffer		xs_cbuf;		/* current heap buffer in scan, if any */
 	/* NB: if xs_cbuf is not InvalidBuffer, we hold a pin on that buffer */
-	bool		xs_recheck;		/* T means scan keys must be rechecked */
+	bool		xs_recheck;		/* T means scan keys must be rechecked for each tuple */
+	bool		xs_tuple_recheck;	/* T means scan keys must be rechecked for current tuple */
 
 	/*
 	 * When fetching with an ordering operator, the values of the ORDER BY
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 39521ed..60a5445 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -366,6 +366,7 @@ extern void UnregisterExprContextCallback(ExprContext *econtext,
 extern void ExecOpenIndices(ResultRelInfo *resultRelInfo, bool speculative);
 extern void ExecCloseIndices(ResultRelInfo *resultRelInfo);
 extern List *ExecInsertIndexTuples(TupleTableSlot *slot, ItemPointer tupleid,
+					  ItemPointer root_tid, Bitmapset *updated_attrs,
 					  EState *estate, bool noDupErr, bool *specConflict,
 					  List *arbiterIndexes);
 extern bool ExecCheckIndexConstraints(TupleTableSlot *slot, EState *estate,
diff --git a/src/include/executor/nodeIndexscan.h b/src/include/executor/nodeIndexscan.h
index 194fadb..fe9c78e 100644
--- a/src/include/executor/nodeIndexscan.h
+++ b/src/include/executor/nodeIndexscan.h
@@ -38,4 +38,5 @@ extern bool ExecIndexEvalArrayKeys(ExprContext *econtext,
 					   IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 extern bool ExecIndexAdvanceArrayKeys(IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 
+extern bool indexscan_recheck;
 #endif   /* NODEINDEXSCAN_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index a4ea1b9..42f8ecf 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -60,6 +60,7 @@ typedef struct IndexInfo
 	NodeTag		type;
 	int			ii_NumIndexAttrs;
 	AttrNumber	ii_KeyAttrNumbers[INDEX_MAX_KEYS];
+	Bitmapset  *ii_indxattrs;	/* bitmap of all columns used in this index */
 	List	   *ii_Expressions; /* list of Expr */
 	List	   *ii_ExpressionsState;	/* list of ExprState */
 	List	   *ii_Predicate;	/* list of Expr */
diff --git a/src/include/storage/itemid.h b/src/include/storage/itemid.h
index 509c577..8c9cc99 100644
--- a/src/include/storage/itemid.h
+++ b/src/include/storage/itemid.h
@@ -46,6 +46,12 @@ typedef ItemIdData *ItemId;
 typedef uint16 ItemOffset;
 typedef uint16 ItemLength;
 
+/*
+ * Special value used in lp_len to indicate that the chain starting at this
+ * line pointer may contain WARM tuples.  This value is meaningful only when
+ * the LP_REDIRECT flag is also set.
+ */
+#define SpecHeapWarmLen	0x1ffb
 
 /* ----------------
  *		support macros
@@ -112,12 +118,15 @@ typedef uint16 ItemLength;
 #define ItemIdIsDead(itemId) \
 	((itemId)->lp_flags == LP_DEAD)
 
+#define ItemIdIsHeapWarm(itemId) \
+	(((itemId)->lp_flags == LP_REDIRECT) && \
+	 ((itemId)->lp_len == SpecHeapWarmLen))
 /*
  * ItemIdHasStorage
  *		True iff item identifier has associated storage.
  */
 #define ItemIdHasStorage(itemId) \
-	((itemId)->lp_len != 0)
+	(!ItemIdIsRedirected(itemId) && (itemId)->lp_len != 0)
 
 /*
  * ItemIdSetUnused
@@ -168,6 +177,26 @@ typedef uint16 ItemLength;
 )
 
 /*
+ * ItemIdSetHeapWarm
+ * 		Mark the item identifier as the start of a WARM chain
+ *
+ * Note: Since all bits in lp_flags are currently used, we store a special
+ * value in the lp_len field to indicate this state.  This is done only for
+ * LP_REDIRECT line pointers, whose lp_len field is otherwise unused.
+ */
+#define ItemIdSetHeapWarm(itemId) \
+do { \
+	AssertMacro((itemId)->lp_flags == LP_REDIRECT); \
+	(itemId)->lp_len = SpecHeapWarmLen; \
+} while (0)
+
+#define ItemIdClearHeapWarm(itemId) \
+do { \
+	AssertMacro((itemId)->lp_flags == LP_REDIRECT); \
+	(itemId)->lp_len = 0; \
+} while (0)
+
+/*
  * ItemIdMarkDead
  *		Set the item identifier to be DEAD, keeping its existing storage.
  *
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index ed14442..dac32b5 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -101,8 +101,11 @@ typedef struct RelationData
 
 	/* data managed by RelationGetIndexAttrBitmap: */
 	Bitmapset  *rd_indexattr;	/* identifies columns used in indexes */
+	Bitmapset  *rd_exprindexattr; /* identifies columns used in expression or
+									 predicate indexes */
 	Bitmapset  *rd_keyattr;		/* cols that can be ref'd by foreign keys */
 	Bitmapset  *rd_idattr;		/* included in replica identity index */
+	bool		rd_supportswarm;/* True if the table can be WARM updated */
 
 	/*
 	 * rd_options is set whenever rd_rel is loaded into the relcache entry.
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index 6ea7dd2..290e9b7 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -48,7 +48,8 @@ typedef enum IndexAttrBitmapKind
 {
 	INDEX_ATTR_BITMAP_ALL,
 	INDEX_ATTR_BITMAP_KEY,
-	INDEX_ATTR_BITMAP_IDENTITY_KEY
+	INDEX_ATTR_BITMAP_IDENTITY_KEY,
+	INDEX_ATTR_BITMAP_EXPR_PREDICATE
 } IndexAttrBitmapKind;
 
 extern Bitmapset *RelationGetIndexAttrBitmap(Relation relation,
#10Michael Paquier
michael.paquier@gmail.com
In reply to: Pavan Deolasee (#9)
Re: Patch: Write Amplification Reduction Method (WARM)

On Mon, Sep 5, 2016 at 1:53 PM, Pavan Deolasee <pavan.deolasee@gmail.com> wrote:

0001_track_root_lp_v4.patch: This patch uses a free t_infomask2 bit to track
latest tuple in an update chain. The t_ctid.ip_posid is used to track the
root line pointer of the update chain. We do this only in the latest tuple
in the chain because most often that tuple will be updated and we need to
quickly find the root only during update.

0002_warm_updates_v4.patch: This patch implements the core of WARM logic.
During WARM update, we only insert new entries in the indexes whose key has
changed. But instead of indexing the real TID of the new tuple, we index the
root line pointer and then use additional recheck logic to ensure only
correct tuples are returned from such potentially broken HOT chains. Each
index AM must implement a amrecheck method to support WARM. The patch
currently implements this for hash and btree indexes.

Moved to next CF, I was surprised to see that it is not *that* large:
43 files changed, 1539 insertions(+), 199 deletions(-)
--
Michael

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#11Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Pavan Deolasee (#9)
Re: Patch: Write Amplification Reduction Method (WARM)

On 09/05/2016 06:53 AM, Pavan Deolasee wrote:

On Thu, Sep 1, 2016 at 9:44 PM, Bruce Momjian <bruce@momjian.us
<mailto:bruce@momjian.us>> wrote:

On Thu, Sep 1, 2016 at 02:37:40PM +0530, Pavan Deolasee wrote:

I like the simplified approach, as long as it doesn't block further
improvements.

Yes, the proposed approach is simple yet does not stop us from improving things
further. Moreover it has shown good performance characteristics and I believe
it's a good first step.

Agreed. This is BIG. Do you think it can be done for PG 10?

I definitely think so. The patches as submitted are fully functional and
sufficient. Of course, there are XXX and TODOs that I hope to sort out
during the review process. There are also further tests needed to ensure
that the feature does not cause significant regression in the worst
cases. Again something I'm willing to do once I get some feedback on the
broader design and test cases. What I am looking at this stage is to
know if I've missed something important in terms of design or if there
is some show stopper that I overlooked.

Latest patches rebased with current master are attached. I also added a
few more comments to the code. I forgot to give a brief about the
patches, so including that as well.

0001_track_root_lp_v4.patch: This patch uses a free t_infomask2 bit to
track latest tuple in an update chain. The t_ctid.ip_posid is used to
track the root line pointer of the update chain. We do this only in the
latest tuple in the chain because most often that tuple will be updated
and we need to quickly find the root only during update.

0002_warm_updates_v4.patch: This patch implements the core of WARM
logic. During WARM update, we only insert new entries in the indexes
whose key has changed. But instead of indexing the real TID of the new
tuple, we index the root line pointer and then use additional recheck
logic to ensure only correct tuples are returned from such potentially
broken HOT chains. Each index AM must implement a amrecheck method to
support WARM. The patch currently implements this for hash and btree
indexes.

Hi,

I've been looking at the patch over the past few days, running a bunch
of benchmarks etc. I can confirm the significant speedup, often by more
than 75% (depending on number of indexes, whether the data set fits into
RAM, etc.). Similarly for the amount of WAL generated, although that's a
bit more difficult to evaluate due to full_page_writes.

I'm not going to send detailed results, as that probably does not make
much sense at this stage of the development - I can repeat the tests
once the open questions get resolved.

There's a lot of useful and important feedback in the thread(s) so far,
particularly the descriptions of various failure cases. I think it'd be
very useful to collect those examples and turn them into regression
tests - that's something the patch should include anyway.

I don't really have much comments regarding the code, but during the
testing I noticed a bit strange behavior when updating statistics.
Consider a table like this:

create table t (a int, b int, c int) with (fillfactor = 10);
insert into t select i, i from generate_series(1,1000) s(i);
create index on t(a);
create index on t(b);

and update:

update t set a = a+1, b=b+1;

which has to update all indexes on the table, but:

select n_tup_upd, n_tup_hot_upd from pg_stat_user_tables

n_tup_upd | n_tup_hot_upd
-----------+---------------
1000 | 1000

So it's still counted as "WARM" - does it make sense? I mean, we're
creating a WARM chain on the page, yet we have to add pointers into all
indexes (so not really saving anything). Doesn't this waste the one WARM
update per HOT chain without actually getting anything in return?

The way this is piggy-backed on the current HOT statistics seems a bit
strange for another reason, although WARM is a relaxed version of HOT.
Until now, HOT was "all or nothing" - we've either added index entries
to all indexes or none of them. So the n_tup_hot_upd was fine.

But WARM changes that - it allows adding index entries only to a subset
of indexes, which means the "per row" n_tup_hot_upd counter is not
sufficient. When you have a table with 10 indexes, and the counter
increases by 1, does that mean the update added index tuple to 1 index
or 9 of them?

So I think we'll need two counters to track WARM - number of index
tuples we've added, and number of index tuples we've skipped. So
something like blks_hit and blks_read. I'm not sure whether we should
replace the n_tup_hot_upd entirely, or keep it for backwards
compatibility (and to track perfectly HOT updates).

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#12Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Tomas Vondra (#11)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Oct 5, 2016 at 1:43 PM, Tomas Vondra <tomas.vondra@2ndquadrant.com>
wrote:

I've been looking at the patch over the past few days, running a bunch of
benchmarks etc.

Thanks for doing that.

I can confirm the significant speedup, often by more than 75% (depending
on number of indexes, whether the data set fits into RAM, etc.). Similarly
for the amount of WAL generated, although that's a bit more difficult to
evaluate due to full_page_writes.

I'm not going to send detailed results, as that probably does not make
much sense at this stage of the development - I can repeat the tests once
the open questions get resolved.

Sure. Anything that stands out? Any regression that you see? I'm not sure
if your benchmarks exercise the paths which might show overheads without
any tangible benefits. For example, I wonder if a test with many indexes
where most of them get updated and then querying the table via those
updated indexes could be one such test case.

There's a lot of useful and important feedback in the thread(s) so far,
particularly the descriptions of various failure cases. I think it'd be
very useful to collect those examples and turn them into regression tests -
that's something the patch should include anyway.

Sure. I added only a handful test cases which I knew regression isn't
covering. But I'll write more of them. One good thing is that the code gets
heavily exercised even during regression. I caught and fixed multiple bugs
running regression. I'm not saying that's enough, but it certainly gives
some confidence.

and update:

update t set a = a+1, b=b+1;

which has to update all indexes on the table, but:

select n_tup_upd, n_tup_hot_upd from pg_stat_user_tables

n_tup_upd | n_tup_hot_upd
-----------+---------------
1000 | 1000

So it's still counted as "WARM" - does it make sense?

No, it does not. The code currently just marks any update as a WARM update
if the table supports it and there is enough free space in the page. And
yes, you're right. It's worth fixing that because of one-WARM update per
chain limitation. Will fix.

The way this is piggy-backed on the current HOT statistics seems a bit
strange for another reason,

Agree. We could add a similar n_tup_warm_upd counter.

But WARM changes that - it allows adding index entries only to a subset of
indexes, which means the "per row" n_tup_hot_upd counter is not sufficient.
When you have a table with 10 indexes, and the counter increases by 1, does
that mean the update added index tuple to 1 index or 9 of them?

How about having counters similar to n_tup_ins/n_tup_del for indexes as
well? Today it does not make sense because every index gets the same number
of inserts, but WARM will change that.

For example, we could have idx_tup_insert and idx_tup_delete that shows up
in pg_stat_user_indexes. I don't know if idx_tup_delete adds any value, but
one can then look at idx_tup_insert for various indexes to get a sense
which indexes receives more inserts than others. The indexes which receive
more inserts are the ones being frequently updated as compared to other
indexes.

This also relates to vacuuming strategies. Today HOT updates do not count
for triggering vacuum (or to be more precise, HOT pruned tuples are
discounted while counting dead tuples). WARM tuples get the same treatment
as far as pruning is concerned, but since they cause fresh index inserts, I
wonder if we need some mechanism to cleanup the dead line pointers and dead
index entries. This will become more important if we do something to
convert WARM chains into HOT chains, something that only VACUUM can do in
the design I've proposed so far.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#13Tomas Vondra
tomas.vondra@2ndquadrant.com
In reply to: Pavan Deolasee (#12)
Re: Patch: Write Amplification Reduction Method (WARM)

On 10/06/2016 07:36 AM, Pavan Deolasee wrote:

On Wed, Oct 5, 2016 at 1:43 PM, Tomas Vondra
<tomas.vondra@2ndquadrant.com <mailto:tomas.vondra@2ndquadrant.com>> wrote:

...

I can confirm the significant speedup, often by more than 75%
(depending on number of indexes, whether the data set fits into RAM,
etc.). Similarly for the amount of WAL generated, although that's a
bit more difficult to evaluate due to full_page_writes.

I'm not going to send detailed results, as that probably does not
make much sense at this stage of the development - I can repeat the
tests once the open questions get resolved.

Sure. Anything that stands out? Any regression that you see? I'm not
sure if your benchmarks exercise the paths which might show overheads
without any tangible benefits. For example, I wonder if a test with many
indexes where most of them get updated and then querying the table via
those updated indexes could be one such test case.

No, nothing that would stand out. Let me explain what benchmark(s) I've
done. I've made some minor mistakes when running the benchmarks, so I
plan to rerun them and post the results after that. So let's take the
data with a grain of salt.

My goal was to compare current non-HOT behavior (updating all indexes)
with the WARM (updating only indexes on modified columns), and I've
taken two approaches:

1) fixed number of indexes, update variable number of columns

Create a table with 8 secondary indexes and then run a bunch of
benchmarks updating increasing number of columns. So the first run did

UPDATE t SET c1 = c1+1 WHERE id = :id;

while the second did

UPDATE t SET c1 = c1+1, c2 = c2+1 WHERE id = :id;

and so on, up to updating all the columns in the last run. I've used
multiple scripts to update all the columns / indexes uniformly
(essentially using multiple "-f" flags with pgbench). The runs were
fairly long (2h, enough to get stable behavior).

For a small data set (fits into RAM), the results look like this:

  cols   master   patched    diff
     1     5994      8490    +42%
     2     4347      7903    +81%
     3     4340      7400    +70%
     4     4324      6929    +60%
     5     4256      6495    +52%
     6     4253      5059    +19%
     7     4235      4534     +7%
     8     4194      4237     +1%

and the amount of WAL generated (after correction for tps difference)
looks like this (numbers are MBs)

  cols   master   patched    diff
     1    27257     18508    -32%
     2    21753     14599    -33%
     3    21912     15864    -28%
     4    22021     17135    -22%
     5    21819     18258    -16%
     6    21929     20659     -6%
     7    21994     22234     +1%
     8    21851     23267     +6%

So this is quite significant difference. I'm pretty sure the minor WAL
increase for the last two runs is due to full page writes (which also
affects the preceding runs, making the WAL reduction smaller than the
tps increase).

I do have results for larger data sets (>RAM), the results are very
similar although the speedup seems a bit smaller. But I need to rerun those.

2) single-row update, adding indexes between runs

This is kinda the opposite of the previous approach, i.e. transactions
always update a single column (multiple scripts to update the columns
uniformly), but there are new indexes added between runs. The results
(for a large data set, exceeding RAM) look like this:

  idxs   master   patched    diff
     0      954      1404    +47%
     1      701      1045    +49%
     2      484       816    +70%
     3      346       683    +97%
     4      248       608   +145%
     5      190       525   +176%
     6      152       397   +161%
     7      123       315   +156%
     8      123       270   +119%

So this looks really interesting.

There's a lot of useful and important feedback in the thread(s) so
far, particularly the descriptions of various failure cases. I think
it'd be very useful to collect those examples and turn them into
regression tests - that's something the patch should include anyway.

Sure. I added only a handful test cases which I knew regression isn't
covering. But I'll write more of them. One good thing is that the code
gets heavily exercised even during regression. I caught and fixed
multiple bugs running regression. I'm not saying that's enough, but it
certainly gives some confidence.

I don't see any changes to src/test in the patch, so I'm not sure what
you mean when you say you added a handful of test cases?

and update:

update t set a = a+1, b=b+1;

which has to update all indexes on the table, but:

select n_tup_upd, n_tup_hot_upd from pg_stat_user_tables

n_tup_upd | n_tup_hot_upd
-----------+---------------
1000 | 1000

So it's still counted as "WARM" - does it make sense?

No, it does not. The code currently just marks any update as a WARM
update if the table supports it and there is enough free space in the
page. And yes, you're right. It's worth fixing that because of one-WARM
update per chain limitation. Will fix.

Hmmm, so this makes monitoring of %WARM during benchmarks less reliable
than I hoped for :-(

The way this is piggy-backed on the current HOT statistics seems a
bit strange for another reason.

Agree. We could add a similar n_tup_warm_upd counter.

Yes, although HOT is a special case of WARM. But it probably makes sense
to differentiate them, I guess.

But WARM changes that - it allows adding index entries only to a
subset of indexes, which means the "per row" n_tup_hot_upd counter
is not sufficient. When you have a table with 10 indexes, and the
counter increases by 1, does that mean the update added index tuple
to 1 index or 9 of them?

How about having counters similar to n_tup_ins/n_tup_del for indexes
as well? Today it does not make sense because every index gets the
same number of inserts, but WARM will change that.

For example, we could have idx_tup_insert and idx_tup_delete that show
up in pg_stat_user_indexes. I don't know if idx_tup_delete adds any
value, but one can then look at idx_tup_insert for the various indexes to
get a sense of which indexes receive more inserts than others. The indexes
that receive more inserts are the ones whose keys are updated more
frequently than the others'.

Hmmm, I'm not sure that'll work. I mean, those metrics would be useful
(although I can't think of a use case for idx_tup_delete), but I'm not
sure they're enough to measure WARM. We need to compute

index_tuples_inserted / index_tuples_total

where (index_tuples_total - index_tuples_inserted) is the number of
index tuples we've been able to skip thanks to WARM. So we'd also need
to track the number of index tuples that we skipped for the index, and
I'm not sure that's a good idea.

Also, we really don't care about inserted tuples - what matters for WARM
are updates, so idx_tup_insert is either useless (because it also
includes non-UPDATE entries) or the naming is misleading.

This also relates to vacuuming strategies. Today HOT updates do not
count for triggering vacuum (or to be more precise, HOT pruned tuples
are discounted while counting dead tuples). WARM tuples get the same
treatment as far as pruning is concerned, but since they cause fresh
index inserts, I wonder if we need some mechanism to clean up the dead
line pointers and dead index entries. This will become more important if
we do something to convert WARM chains into HOT chains, something that
only VACUUM can do in the design I've proposed so far.

True.

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#14Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Tomas Vondra (#13)
Re: Patch: Write Amplification Reduction Method (WARM)

Thanks for the patch. This shows a very good performance improvement.

I started reviewing the patch, and during this process I ran the regression
tests on the WARM patch. I observed a failure in the create_index test.
This may be a bug in the code, or an expected difference for which the
test output needs to be corrected.

Regards,
Hari Babu
Fujitsu Australia

#15Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Haribabu Kommi (#14)
2 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Nov 8, 2016 at 9:13 AM, Haribabu Kommi <kommi.haribabu@gmail.com>
wrote:

Thanks for the patch. This shows a very good performance improvement.

Thank you. Can you please share the benchmark you ran, results and
observations?

I started reviewing the patch, and during this process I ran the regression
tests on the WARM patch. I observed a failure in the create_index test.
This may be a bug in the code, or an expected difference for which the
test output needs to be corrected.

Can you please share the diff? I ran the regression tests after applying the
patch on current master and did not see any failures. Does it happen
consistently?

I'm also attaching fresh set of patches. The first patch hasn't changed at
all (though I changed the name to v5 to keep it consistent with the other
patch). The second patch has the following changes:

1. WARM updates are now tracked separately. We still don't count number of
index inserts separately as suggested by Tomas.
2. We don't do a WARM update if all columns referenced by all indexes have
changed. Ideally, we should check whether every index will actually require
an update and avoid WARM in that case, so there is still some room for
improvement here.
3. I added a very minimal regression test case. But really, it just
contains one test case which I specifically wanted to test.

So not a whole lot of changes since the last version. I'm still waiting for
some serious review of the design/code before I spend a lot more time on
the patch. I hope the patch receives some attention in this CF.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0001_track_root_lp_v5.patch (application/octet-stream)
commit f33ee503463137aa1a2ae4c3ab04d1468ae1941c
Author: Pavan Deolasee <pavan.deolasee@gmail.com>
Date:   Sat Sep 3 14:51:00 2016 +0530

    Use HEAP_LATEST_TUPLE to mark a tuple as the latest tuple in an update chain
    and use the OffsetNumber in t_ctid to store the root line pointer of the chain.

    The t_ctid field in the tuple header usually stores the TID of the next tuple
    in an update chain, but for the last tuple in the chain, t_ctid is made to
    point to itself; the self-reference signals the end of the chain. With this
    patch, the fact that a tuple is the last in its chain is instead recorded in
    a separate HEAP_LATEST_TUPLE flag, which uses another free bit in
    t_infomask2. When HEAP_LATEST_TUPLE is set, the OffsetNumber field of t_ctid
    stores the root line pointer of the chain. This will help us quickly find the
    root of an update chain.

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 6a27ef4..ccf84be 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -93,7 +93,8 @@ static HeapTuple heap_prepare_insert(Relation relation, HeapTuple tup,
 					TransactionId xid, CommandId cid, int options);
 static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
-				HeapTuple newtup, HeapTuple old_key_tup,
+				HeapTuple newtup, OffsetNumber root_offnum,
+				HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
 static void HeapSatisfiesHOTandKeyUpdate(Relation relation,
 							 Bitmapset *hot_attrs,
@@ -2250,13 +2251,13 @@ heap_get_latest_tid(Relation relation,
 		 */
 		if ((tp.t_data->t_infomask & HEAP_XMAX_INVALID) ||
 			HeapTupleHeaderIsOnlyLocked(tp.t_data) ||
-			ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid))
+			HeapTupleHeaderIsHeapLatest(tp.t_data, ctid))
 		{
 			UnlockReleaseBuffer(buffer);
 			break;
 		}
 
-		ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(tp.t_data, &ctid, offnum);
 		priorXmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		UnlockReleaseBuffer(buffer);
 	}							/* end of loop */
@@ -2415,7 +2416,8 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	START_CRIT_SECTION();
 
 	RelationPutHeapTuple(relation, buffer, heaptup,
-						 (options & HEAP_INSERT_SPECULATIVE) != 0);
+						 (options & HEAP_INSERT_SPECULATIVE) != 0,
+						 InvalidOffsetNumber);
 
 	if (PageIsAllVisible(BufferGetPage(buffer)))
 	{
@@ -2713,7 +2715,8 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		 * RelationGetBufferForTuple has ensured that the first tuple fits.
 		 * Put that on the page, and then as many other tuples as fit.
 		 */
-		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false);
+		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false,
+				InvalidOffsetNumber);
 		for (nthispage = 1; ndone + nthispage < ntuples; nthispage++)
 		{
 			HeapTuple	heaptup = heaptuples[ndone + nthispage];
@@ -2721,7 +2724,8 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			if (PageGetHeapFreeSpace(page) < MAXALIGN(heaptup->t_len) + saveFreeSpace)
 				break;
 
-			RelationPutHeapTuple(relation, buffer, heaptup, false);
+			RelationPutHeapTuple(relation, buffer, heaptup, false,
+					InvalidOffsetNumber);
 
 			/*
 			 * We don't use heap_multi_insert for catalog tuples yet, but
@@ -2993,6 +2997,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	HeapTupleData tp;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	TransactionId new_xmax;
@@ -3044,7 +3049,8 @@ heap_delete(Relation relation, ItemPointer tid,
 		LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 	}
 
-	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid));
+	offnum = ItemPointerGetOffsetNumber(tid);
+	lp = PageGetItemId(page, offnum);
 	Assert(ItemIdIsNormal(lp));
 
 	tp.t_tableOid = RelationGetRelid(relation);
@@ -3174,7 +3180,7 @@ l1:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(tp.t_data, &hufd->ctid, offnum);
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tp.t_data);
@@ -3250,8 +3256,8 @@ l1:
 	HeapTupleHeaderClearHotUpdated(tp.t_data);
 	HeapTupleHeaderSetXmax(tp.t_data, new_xmax);
 	HeapTupleHeaderSetCmax(tp.t_data, cid, iscombo);
-	/* Make sure there is no forward chain link in t_ctid */
-	tp.t_data->t_ctid = tp.t_self;
+	/* Mark this tuple as the latest tuple in the update chain */
+	HeapTupleHeaderSetHeapLatest(tp.t_data);
 
 	MarkBufferDirty(buffer);
 
@@ -3450,6 +3456,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		old_key_copied = false;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
+	OffsetNumber	root_offnum;
 	MultiXactStatus mxact_status;
 	Buffer		buffer,
 				newbuf,
@@ -3506,6 +3514,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
 
 	block = ItemPointerGetBlockNumber(otid);
+	offnum = ItemPointerGetOffsetNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
@@ -3789,7 +3798,7 @@ l2:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(oldtup.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = oldtup.t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(oldtup.t_data, &hufd->ctid, offnum);
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(oldtup.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(oldtup.t_data);
@@ -3968,7 +3977,7 @@ l2:
 		HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 		/* temporarily make it look not-updated, but locked */
-		oldtup.t_data->t_ctid = oldtup.t_self;
+		HeapTupleHeaderSetHeapLatest(oldtup.t_data);
 
 		/*
 		 * Clear all-frozen bit on visibility map if needed. We could
@@ -4149,6 +4158,20 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+		/*
+		 * For HOT (or WARM) updated tuples, we store the offset of the root
+		 * line pointer of this chain in the ip_posid field of the new tuple.
+		 * Usually this information will be available in the corresponding
+		 * field of the old tuple. But for aborted updates or pg_upgraded
+		 * databases, we might be seeing the old-style CTID chains and hence
+		 * the information must be obtained the hard way
+		 */
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			heap_get_root_tuple_one(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)),
+					&root_offnum);
 	}
 	else
 	{
@@ -4156,10 +4179,29 @@ l2:
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		root_offnum = InvalidOffsetNumber;
 	}
 
-	RelationPutHeapTuple(relation, newbuf, heaptup, false);		/* insert new tuple */
+	/* insert new tuple */
+	RelationPutHeapTuple(relation, newbuf, heaptup, false, root_offnum);
+	HeapTupleHeaderSetHeapLatest(heaptup->t_data);
+	HeapTupleHeaderSetHeapLatest(newtup->t_data);
 
+	/*
+	 * Also update the in-memory copy with the root line pointer information
+	 */
+	if (OffsetNumberIsValid(root_offnum))
+	{
+		HeapTupleHeaderSetRootOffset(heaptup->t_data, root_offnum);
+		HeapTupleHeaderSetRootOffset(newtup->t_data, root_offnum);
+	}
+	else
+	{
+		HeapTupleHeaderSetRootOffset(heaptup->t_data,
+				ItemPointerGetOffsetNumber(&heaptup->t_self));
+		HeapTupleHeaderSetRootOffset(newtup->t_data,
+				ItemPointerGetOffsetNumber(&heaptup->t_self));
+	}
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
 	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
@@ -4172,7 +4214,9 @@ l2:
 	HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 	/* record address of new tuple in t_ctid of old one */
-	oldtup.t_data->t_ctid = heaptup->t_self;
+	HeapTupleHeaderSetNextCtid(oldtup.t_data,
+			ItemPointerGetBlockNumber(&(heaptup->t_self)),
+			ItemPointerGetOffsetNumber(&(heaptup->t_self)));
 
 	/* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */
 	if (PageIsAllVisible(BufferGetPage(buffer)))
@@ -4211,6 +4255,7 @@ l2:
 
 		recptr = log_heap_update(relation, buffer,
 								 newbuf, &oldtup, heaptup,
+								 root_offnum,
 								 old_key_tuple,
 								 all_visible_cleared,
 								 all_visible_cleared_new);
@@ -4573,7 +4618,8 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	ItemId		lp;
 	Page		page;
 	Buffer		vmbuffer = InvalidBuffer;
-	BlockNumber block;
+	BlockNumber	block;
+	OffsetNumber	offnum;
 	TransactionId xid,
 				xmax;
 	uint16		old_infomask,
@@ -4585,6 +4631,7 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 
 	*buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	block = ItemPointerGetBlockNumber(tid);
+	offnum = ItemPointerGetOffsetNumber(tid);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -4631,7 +4678,7 @@ l3:
 		xwait = HeapTupleHeaderGetRawXmax(tuple->t_data);
 		infomask = tuple->t_data->t_infomask;
 		infomask2 = tuple->t_data->t_infomask2;
-		ItemPointerCopy(&tuple->t_data->t_ctid, &t_ctid);
+		HeapTupleHeaderGetNextCtid(tuple->t_data, &t_ctid, offnum);
 
 		LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
 
@@ -5069,7 +5116,7 @@ failed:
 		Assert(result == HeapTupleSelfUpdated || result == HeapTupleUpdated ||
 			   result == HeapTupleWouldBlock);
 		Assert(!(tuple->t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tuple->t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(tuple->t_data, &hufd->ctid, offnum);
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tuple->t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tuple->t_data);
@@ -5145,7 +5192,7 @@ failed:
 	 * the tuple as well.
 	 */
 	if (HEAP_XMAX_IS_LOCKED_ONLY(new_infomask))
-		tuple->t_data->t_ctid = *tid;
+		HeapTupleHeaderSetHeapLatest(tuple->t_data);
 
 	/* Clear only the all-frozen bit on visibility map if needed */
 	if (PageIsAllVisible(page) &&
@@ -5659,6 +5706,7 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 	bool		cleared_all_frozen = false;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber block;
+	OffsetNumber offnum;
 
 	ItemPointerCopy(tid, &tupid);
 
@@ -5667,6 +5715,8 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 		new_infomask = 0;
 		new_xmax = InvalidTransactionId;
 		block = ItemPointerGetBlockNumber(&tupid);
+		offnum = ItemPointerGetOffsetNumber(&tupid);
+
 		ItemPointerCopy(&tupid, &(mytup.t_self));
 
 		if (!heap_fetch(rel, SnapshotAny, &mytup, &buf, false, NULL))
@@ -5885,7 +5935,7 @@ l4:
 
 		/* if we find the end of update chain, we're done. */
 		if (mytup.t_data->t_infomask & HEAP_XMAX_INVALID ||
-			ItemPointerEquals(&mytup.t_self, &mytup.t_data->t_ctid) ||
+			HeapTupleHeaderIsHeapLatest(mytup.t_data, mytup.t_self) ||
 			HeapTupleHeaderIsOnlyLocked(mytup.t_data))
 		{
 			result = HeapTupleMayBeUpdated;
@@ -5894,7 +5944,7 @@ l4:
 
 		/* tail recursion */
 		priorXmax = HeapTupleHeaderGetUpdateXid(mytup.t_data);
-		ItemPointerCopy(&(mytup.t_data->t_ctid), &tupid);
+		HeapTupleHeaderGetNextCtid(mytup.t_data, &tupid, offnum);
 		UnlockReleaseBuffer(buf);
 		if (vmbuffer != InvalidBuffer)
 			ReleaseBuffer(vmbuffer);
@@ -6011,7 +6061,8 @@ heap_finish_speculative(Relation relation, HeapTuple tuple)
 	 * Replace the speculative insertion token with a real t_ctid, pointing to
 	 * itself like it does on regular tuples.
 	 */
-	htup->t_ctid = tuple->t_self;
+	HeapTupleHeaderSetHeapLatest(htup);
+	HeapTupleHeaderSetRootOffset(htup, offnum);
 
 	/* XLOG stuff */
 	if (RelationNeedsWAL(relation))
@@ -6137,7 +6188,9 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	HeapTupleHeaderSetXmin(tp.t_data, InvalidTransactionId);
 
 	/* Clear the speculative insertion token too */
-	tp.t_data->t_ctid = tp.t_self;
+	HeapTupleHeaderSetNextCtid(tp.t_data,
+			ItemPointerGetBlockNumber(&tp.t_self),
+			ItemPointerGetOffsetNumber(&tp.t_self));
 
 	MarkBufferDirty(buffer);
 
@@ -7486,6 +7539,7 @@ log_heap_visible(RelFileNode rnode, Buffer heap_buffer, Buffer vm_buffer,
 static XLogRecPtr
 log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup, HeapTuple newtup,
+				OffsetNumber root_offnum,
 				HeapTuple old_key_tuple,
 				bool all_visible_cleared, bool new_all_visible_cleared)
 {
@@ -7605,6 +7659,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	/* Prepare WAL data for the new page */
 	xlrec.new_offnum = ItemPointerGetOffsetNumber(&newtup->t_self);
 	xlrec.new_xmax = HeapTupleHeaderGetRawXmax(newtup->t_data);
+	xlrec.root_offnum = root_offnum;
 
 	bufflags = REGBUF_STANDARD;
 	if (init)
@@ -8260,7 +8315,7 @@ heap_xlog_delete(XLogReaderState *record)
 			PageClearAllVisible(page);
 
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = target_tid;
+		HeapTupleHeaderSetHeapLatest(htup);
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8350,7 +8405,9 @@ heap_xlog_insert(XLogReaderState *record)
 		htup->t_hoff = xlhdr.t_hoff;
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
-		htup->t_ctid = target_tid;
+
+		HeapTupleHeaderSetHeapLatest(htup);
+		HeapTupleHeaderSetRootOffset(htup, xlrec->offnum);
 
 		if (PageAddItem(page, (Item) htup, newlen, xlrec->offnum,
 						true, true) == InvalidOffsetNumber)
@@ -8485,8 +8542,9 @@ heap_xlog_multi_insert(XLogReaderState *record)
 			htup->t_hoff = xlhdr->t_hoff;
 			HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 			HeapTupleHeaderSetCmin(htup, FirstCommandId);
-			ItemPointerSetBlockNumber(&htup->t_ctid, blkno);
-			ItemPointerSetOffsetNumber(&htup->t_ctid, offnum);
+
+			HeapTupleHeaderSetHeapLatest(htup);
+			HeapTupleHeaderSetRootOffset(htup, offnum);
 
 			offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 			if (offnum == InvalidOffsetNumber)
@@ -8622,7 +8680,8 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
 		/* Set forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetNextCtid(htup, ItemPointerGetBlockNumber(&newtid),
+				ItemPointerGetOffsetNumber(&newtid));
 
 		/* Mark the page as a candidate for pruning */
 		PageSetPrunable(page, XLogRecGetXid(record));
@@ -8756,12 +8815,17 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetHeapLatest(htup);
 
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
 
+		if (OffsetNumberIsValid(xlrec->root_offnum))
+			HeapTupleHeaderSetRootOffset(htup, xlrec->root_offnum);
+		else
+			HeapTupleHeaderSetRootOffset(htup, offnum);
+
 		if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED)
 			PageClearAllVisible(page);
 
@@ -8889,9 +8953,7 @@ heap_xlog_lock(XLogReaderState *record)
 		{
 			HeapTupleHeaderClearHotUpdated(htup);
 			/* Make sure there is no forward chain link in t_ctid */
-			ItemPointerSet(&htup->t_ctid,
-						   BufferGetBlockNumber(buffer),
-						   offnum);
+			HeapTupleHeaderSetHeapLatest(htup);
 		}
 		HeapTupleHeaderSetXmax(htup, xlrec->locking_xid);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index c90fb71..e32deb1 100644
--- a/src/backend/access/heap/hio.c
+++ b/src/backend/access/heap/hio.c
@@ -31,12 +31,18 @@
  * !!! EREPORT(ERROR) IS DISALLOWED HERE !!!  Must PANIC on failure!!!
  *
  * Note - caller must hold BUFFER_LOCK_EXCLUSIVE on the buffer.
+ *
+ * The caller can optionally tell us to set the root offset to the given value.
+ * Otherwise, the root offset is set to the offset of the new location once it
+ * is known. The former is used while updating an existing tuple, while the
+ * latter is used during insertion of a new row.
  */
 void
 RelationPutHeapTuple(Relation relation,
 					 Buffer buffer,
 					 HeapTuple tuple,
-					 bool token)
+					 bool token,
+					 OffsetNumber root_offnum)
 {
 	Page		pageHeader;
 	OffsetNumber offnum;
@@ -69,7 +75,13 @@ RelationPutHeapTuple(Relation relation,
 		ItemId		itemId = PageGetItemId(pageHeader, offnum);
 		Item		item = PageGetItem(pageHeader, itemId);
 
-		((HeapTupleHeader) item)->t_ctid = tuple->t_self;
+		HeapTupleHeaderSetHeapLatest((HeapTupleHeader) item);
+		if (OffsetNumberIsValid(root_offnum))
+			HeapTupleHeaderSetRootOffset((HeapTupleHeader) item,
+					root_offnum);
+		else
+			HeapTupleHeaderSetRootOffset((HeapTupleHeader) item,
+					offnum);
 	}
 }
 
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index 6ff9251..7c2231a 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -55,6 +55,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
 
+static void heap_get_root_tuples_internal(Page page,
+				OffsetNumber target_offnum, OffsetNumber *root_offsets);
 
 /*
  * Optionally prune and repair fragmentation in the specified page.
@@ -740,8 +742,9 @@ heap_page_prune_execute(Buffer buffer,
  * holds a pin on the buffer. Once pin is released, a tuple might be pruned
  * and reused by a completely unrelated tuple.
  */
-void
-heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+static void
+heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offsets)
 {
 	OffsetNumber offnum,
 				maxoff;
@@ -820,6 +823,14 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 			/* Remember the root line pointer for this item */
 			root_offsets[nextoffnum - 1] = offnum;
 
+			/*
+			 * If the caller is interested in just one offset and we found
+			 * that, just return
+			 */
+			if (OffsetNumberIsValid(target_offnum) &&
+					(nextoffnum == target_offnum))
+				return;
+
 			/* Advance to next chain member, if any */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				break;
@@ -829,3 +840,25 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 		}
 	}
 }
+
+/*
+ * Get root line pointer for the given tuple
+ */
+void
+heap_get_root_tuple_one(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offnum)
+{
+	OffsetNumber offsets[MaxHeapTuplesPerPage];
+	heap_get_root_tuples_internal(page, target_offnum, offsets);
+	*root_offnum = offsets[target_offnum - 1];
+}
+
+/*
+ * Get root line pointers for all tuples in the page
+ */
+void
+heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+{
+	return heap_get_root_tuples_internal(page, InvalidOffsetNumber,
+			root_offsets);
+}
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index 17584ba..09a164c 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -419,14 +419,14 @@ rewrite_heap_tuple(RewriteState state,
 	 */
 	if (!((old_tuple->t_data->t_infomask & HEAP_XMAX_INVALID) ||
 		  HeapTupleHeaderIsOnlyLocked(old_tuple->t_data)) &&
-		!(ItemPointerEquals(&(old_tuple->t_self),
-							&(old_tuple->t_data->t_ctid))))
+		!(HeapTupleHeaderIsHeapLatest(old_tuple->t_data, old_tuple->t_self)))
 	{
 		OldToNewMapping mapping;
 
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
-		hashkey.tid = old_tuple->t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(old_tuple->t_data, &hashkey.tid,
+				ItemPointerGetOffsetNumber(&old_tuple->t_self));
 
 		mapping = (OldToNewMapping)
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -439,7 +439,9 @@ rewrite_heap_tuple(RewriteState state,
 			 * set the ctid of this tuple to point to the new location, and
 			 * insert it right away.
 			 */
-			new_tuple->t_data->t_ctid = mapping->new_tid;
+			HeapTupleHeaderSetNextCtid(new_tuple->t_data,
+					ItemPointerGetBlockNumber(&mapping->new_tid),
+					ItemPointerGetOffsetNumber(&mapping->new_tid));
 
 			/* We don't need the mapping entry anymore */
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -525,7 +527,9 @@ rewrite_heap_tuple(RewriteState state,
 				new_tuple = unresolved->tuple;
 				free_new = true;
 				old_tid = unresolved->old_tid;
-				new_tuple->t_data->t_ctid = new_tid;
+				HeapTupleHeaderSetNextCtid(new_tuple->t_data,
+						ItemPointerGetBlockNumber(&new_tid),
+						ItemPointerGetOffsetNumber(&new_tid));
 
 				/*
 				 * We don't need the hash entry anymore, but don't free its
@@ -731,7 +735,10 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		onpage_tup->t_ctid = tup->t_self;
+		HeapTupleHeaderSetNextCtid(onpage_tup,
+				ItemPointerGetBlockNumber(&tup->t_self),
+				ItemPointerGetOffsetNumber(&tup->t_self));
+		HeapTupleHeaderSetHeapLatest(onpage_tup);
 	}
 
 	/* If heaptup is a private copy, release it. */
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 32bb3f9..079a77f 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -2443,7 +2443,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		 * As above, it should be safe to examine xmax and t_ctid without the
 		 * buffer content lock, because they can't be changing.
 		 */
-		if (ItemPointerEquals(&tuple.t_self, &tuple.t_data->t_ctid))
+		if (HeapTupleHeaderIsHeapLatest(tuple.t_data, tuple.t_self))
 		{
 			/* deleted, so forget about it */
 			ReleaseBuffer(buffer);
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index b3a595c..94b46b8 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -188,6 +188,8 @@ extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
+extern void heap_get_root_tuple_one(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offnum);
 extern void heap_get_root_tuples(Page page, OffsetNumber *root_offsets);
 
 /* in heap/syncscan.c */
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index 06a8242..5a04561 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -193,6 +193,8 @@ typedef struct xl_heap_update
 	uint8		flags;
 	TransactionId new_xmax;		/* xmax of the new tuple */
 	OffsetNumber new_offnum;	/* new tuple's offset */
+	OffsetNumber root_offnum;	/* offset of the root line pointer in case of
+								   HOT or WARM update */
 
 	/*
 	 * If XLOG_HEAP_CONTAINS_OLD_TUPLE or XLOG_HEAP_CONTAINS_OLD_KEY flags are
@@ -200,7 +202,7 @@ typedef struct xl_heap_update
 	 */
 } xl_heap_update;
 
-#define SizeOfHeapUpdate	(offsetof(xl_heap_update, new_offnum) + sizeof(OffsetNumber))
+#define SizeOfHeapUpdate	(offsetof(xl_heap_update, root_offnum) + sizeof(OffsetNumber))
 
 /*
  * This is what we need to know about vacuum page cleanup/redirect
diff --git a/src/include/access/hio.h b/src/include/access/hio.h
index a174b34..82e5b5f 100644
--- a/src/include/access/hio.h
+++ b/src/include/access/hio.h
@@ -36,7 +36,7 @@ typedef struct BulkInsertStateData
 
 
 extern void RelationPutHeapTuple(Relation relation, Buffer buffer,
-					 HeapTuple tuple, bool token);
+					 HeapTuple tuple, bool token, OffsetNumber root_offnum);
 extern Buffer RelationGetBufferForTuple(Relation relation, Size len,
 						  Buffer otherBuffer, int options,
 						  BulkInsertState bistate,
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index d7e5fad..d01e0d8 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,13 +260,19 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x1800 are available */
+/* bits 0x0800 are available */
+#define HEAP_LATEST_TUPLE		0x1000	/*
+										 * This is the last tuple in chain and
+										 * ip_posid points to the root line
+										 * pointer
+										 */
 #define HEAP_KEYS_UPDATED		0x2000	/* tuple was updated and key cols
 										 * modified, or tuple deleted */
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xE000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+
 
 /*
  * HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins.  It is
@@ -504,6 +510,30 @@ do { \
   (tup)->t_infomask2 & HEAP_ONLY_TUPLE \
 )
 
+#define HeapTupleHeaderSetHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE \
+)
+
+#define HeapTupleHeaderClearHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 &= ~HEAP_LATEST_TUPLE \
+)
+
+/*
+ * HEAP_LATEST_TUPLE is set in the last tuple of the update chain. But for
+ * clusters which are upgraded from a pre-10.0 release, we also check whether
+ * t_ctid points to itself and declare such a tuple the latest tuple in the
+ * chain
+ */
+#define HeapTupleHeaderIsHeapLatest(tup, tid) \
+( \
+  ((tup)->t_infomask2 & HEAP_LATEST_TUPLE) || \
+  ((ItemPointerGetBlockNumber(&(tup)->t_ctid) == ItemPointerGetBlockNumber(&tid)) && \
+   (ItemPointerGetOffsetNumber(&(tup)->t_ctid) == ItemPointerGetOffsetNumber(&tid))) \
+)
+
+
 #define HeapTupleHeaderSetHeapOnly(tup) \
 ( \
   (tup)->t_infomask2 |= HEAP_ONLY_TUPLE \
@@ -542,6 +572,55 @@ do { \
 
 
 /*
+ * Set the t_ctid chain and also clear the HEAP_LATEST_TUPLE flag since we
+ * probably have a new tuple in the chain
+ */
+#define HeapTupleHeaderSetNextCtid(tup, block, offset) \
+do { \
+		ItemPointerSetBlockNumber(&((tup)->t_ctid), (block)); \
+		ItemPointerSetOffsetNumber(&((tup)->t_ctid), (offset)); \
+		HeapTupleHeaderClearHeapLatest((tup)); \
+} while (0)
+
+/*
+ * Get TID of next tuple in the update chain. Traditionally, we have stored
+ * self TID in the t_ctid field if the tuple is the last tuple in the chain. We
+ * try to preserve that behaviour by returning self-TID if HEAP_LATEST_TUPLE
+ * flag is set.
+ */
+#define HeapTupleHeaderGetNextCtid(tup, next_ctid, offnum) \
+do { \
+	if ((tup)->t_infomask2 & HEAP_LATEST_TUPLE) \
+	{ \
+		ItemPointerSet((next_ctid), ItemPointerGetBlockNumber(&(tup)->t_ctid), \
+				(offnum)); \
+	} \
+	else \
+	{ \
+		ItemPointerSet((next_ctid), ItemPointerGetBlockNumber(&(tup)->t_ctid), \
+				ItemPointerGetOffsetNumber(&(tup)->t_ctid)); \
+	} \
+} while (0)
+
+#define HeapTupleHeaderSetRootOffset(tup, offset) \
+do { \
+	AssertMacro(!HeapTupleHeaderIsHotUpdated(tup)); \
+	AssertMacro((tup)->t_infomask2 & HEAP_LATEST_TUPLE); \
+	ItemPointerSetOffsetNumber(&(tup)->t_ctid, (offset)); \
+} while (0)
+
+#define HeapTupleHeaderGetRootOffset(tup) \
+( \
+	AssertMacro((tup)->t_infomask2 & HEAP_LATEST_TUPLE), \
+	ItemPointerGetOffsetNumber(&(tup)->t_ctid) \
+)
+
+#define HeapTupleHeaderHasRootOffset(tup) \
+( \
+	(tup)->t_infomask2 & HEAP_LATEST_TUPLE \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
Attachment: 0002_warm_updates_v5.patch (application/octet-stream)
diff --git a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c
index b68a0d1..b95275f 100644
--- a/contrib/bloom/blutils.c
+++ b/contrib/bloom/blutils.c
@@ -138,6 +138,7 @@ blhandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = blendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index 1b45a4c..ba3fffb 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -111,6 +111,7 @@ brinhandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = brinendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c
index b8aa9bc..491e411 100644
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -88,6 +88,7 @@ gisthandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = gistendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c
index e3b1eef..d7c50c1 100644
--- a/src/backend/access/hash/hash.c
+++ b/src/backend/access/hash/hash.c
@@ -85,6 +85,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = hashendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = hashrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -265,6 +266,8 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 	OffsetNumber offnum;
 	ItemPointer current;
 	bool		res;
+	IndexTuple	itup;
+
 
 	/* Hash indexes are always lossy since we store only the hash code */
 	scan->xs_recheck = true;
@@ -302,8 +305,6 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 			 offnum <= maxoffnum;
 			 offnum = OffsetNumberNext(offnum))
 		{
-			IndexTuple	itup;
-
 			itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 			if (ItemPointerEquals(&(so->hashso_heappos), &(itup->t_tid)))
 				break;
diff --git a/src/backend/access/hash/hashsearch.c b/src/backend/access/hash/hashsearch.c
index 4825558..cf44214 100644
--- a/src/backend/access/hash/hashsearch.c
+++ b/src/backend/access/hash/hashsearch.c
@@ -59,6 +59,8 @@ _hash_next(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
 	return true;
 }
 
@@ -263,6 +265,9 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
+
 	return true;
 }
 
diff --git a/src/backend/access/hash/hashutil.c b/src/backend/access/hash/hashutil.c
index 822862d..71377ab 100644
--- a/src/backend/access/hash/hashutil.c
+++ b/src/backend/access/hash/hashutil.c
@@ -17,8 +17,12 @@
 #include "access/hash.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
+#include "nodes/execnodes.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 
 /*
@@ -352,3 +356,110 @@ _hash_binsearch_last(Page page, uint32 hash_value)
 
 	return lower;
 }
+
+/*
+ * Recheck if the heap tuple satisfies the key stored in the index tuple
+ */
+bool
+hashrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	IndexInfo  *indexInfo;
+	EState	   *estate;
+	ExprContext *econtext;
+	TupleTableSlot *slot;
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	Datum		values2[INDEX_MAX_KEYS];
+	bool		isnull2[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	indexInfo = BuildIndexInfo(indexRel);
+
+	/*
+	 * The heap tuple must be put into a slot for FormIndexDatum.
+	 */
+	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
+
+	ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
+
+	/*
+	 * Typically the index won't have expressions, but if it does we need an
+	 * EState to evaluate them.  We need it for exclusion constraints too,
+	 * even if they are just on simple columns.
+	 */
+	if (indexInfo->ii_Expressions != NIL ||
+			indexInfo->ii_ExclusionOps != NULL)
+	{
+		estate = CreateExecutorState();
+		econtext = GetPerTupleExprContext(estate);
+		econtext->ecxt_scantuple = slot;
+	}
+	else
+		estate = NULL;
+
+	/*
+	 * Form the index values and isnull flags for the index entry that we need
+	 * to check.
+	 *
+	 * Note: if the index uses functions that are not as immutable as they are
+	 * supposed to be, this could produce an index tuple different from the
+	 * original.  The index AM can catch such errors by verifying that it
+	 * finds a matching index entry with the tuple's TID.  For exclusion
+	 * constraints we check this in check_exclusion_constraint().
+	 */
+	FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
+	/*
+	 * HASH indexes compute a hash value of the key and store that in the
+	 * index. So we must first obtain the hash of the value obtained from the
+	 * heap and then do a comparison
+	 */
+	_hash_convert_tuple(indexRel, values, isnull, values2, isnull2);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL then they are equal
+		 */
+		if (isnull2[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If either is NULL then they are not equal
+		 */
+		if (isnull2[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now do a raw memory comparison
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values2[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	if (estate != NULL)
+		FreeExecutorState(estate);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	return equal;
+
+}
diff --git a/src/backend/access/heap/README.WARM b/src/backend/access/heap/README.WARM
new file mode 100644
index 0000000..f793570
--- /dev/null
+++ b/src/backend/access/heap/README.WARM
@@ -0,0 +1,271 @@
+src/backend/access/heap/README.WARM
+
+Write Amplification Reduction Method (WARM)
+===========================================
+
+The Heap Only Tuple (HOT) feature greatly reduced redundant index
+entries and allowed re-use of the dead space occupied by previously
+updated or deleted tuples (see src/backend/access/heap/README.HOT).
+
+One of the necessary conditions for a HOT update is that the
+update must not change a column used in any of the indexes on the table.
+The condition is sometimes hard to meet, especially for complex
+workloads with several indexes on large yet frequently updated tables.
+Worse, sometimes only one or two index columns may be updated, but the
+regular non-HOT update will still insert a new index entry in every
+index on the table, irrespective of whether the key pertaining to the
+index changed or not.
+
+WARM is a technique devised to address these problems.
+
+
+Update Chains With Multiple Index Entries Pointing to the Root
+--------------------------------------------------------------
+
+When a non-HOT update is caused by an index key change, a new index
+entry must be inserted for the changed index. But if the index key
+hasn't changed for other indexes, we don't really need to insert a new
+entry. Even though the existing index entry is pointing to the old
+tuple, the new tuple is reachable via the t_ctid chain. To keep things
+simple, a WARM update requires that the heap block must have enough
+space to store the new version of the tuple. This is the same as for
+HOT updates.
+
+In WARM, we ensure that every index entry always points to the root of
+the WARM chain. In fact, a WARM chain looks exactly like a HOT chain
+except for the fact that there could be multiple index entries pointing
+to the root of the chain. So when a new entry is inserted in an index
+for the updated tuple during a WARM update, the new entry is made to
+point to the root of the WARM chain.
+
+For example, consider a table with two columns and an index on each of
+them. When a tuple is first inserted into the table, each index has
+exactly one entry pointing to the tuple.
+
+	lp [1]
+	[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's entry (aaaa) also points to 1
+
+Now if the tuple's second column is updated and if there is room on the
+page, we perform a WARM update. Index1 does not get any new entry,
+and Index2's new entry still points to the root line pointer of the
+chain.
+
+	lp [1]  [2]
+	[1111, aaaa]->[1111, bbbb]
+
+	Index1's entry (1111) points to 1
+	Index2's old entry (aaaa) points to 1
+	Index2's new entry (bbbb) also points to 1
+
+"An update chain that has more than one index entry pointing to its
+root line pointer is called a WARM chain, and the action that creates
+a WARM chain is called a WARM update."
+
+Since all indexes always point to the root of the WARM chain, even when
+there is more than one index entry, WARM chains can be pruned and
+dead tuples can be removed without any need for corresponding index
+cleanup.
+
+While this solves the problem of pruning dead tuples from a HOT/WARM
+chain, it also opens up a new technical challenge because now we have a
+situation where a heap tuple is reachable from multiple index entries,
+each having a different index key. While MVCC still ensures that only
+valid tuples are returned, a tuple with a wrong index key may be
+returned because of wrong index entries. In the above example, tuple
+[1111, bbbb] is reachable from both keys (aaaa) as well as (bbbb). For
+this reason, tuples returned from a WARM chain must always be rechecked
+for index key-match.
+
+Recheck Index Key Against Heap Tuple
+-----------------------------------
+
+Since every Index AM has its own notion of index tuples, each Index AM
+must implement its own method to recheck heap tuples. For example, a
+hash index stores the hash value of the column, so the recheck routine
+for the hash AM must first compute the hash value of the heap attribute
+and then compare it against the value stored in the index tuple.
+
+The patch currently implements recheck routines for hash and btree
+indexes. If the table has an index that does not provide a recheck
+routine, WARM updates are disabled on that table.
+
+Problem With Duplicate (key, ctid) Index Entries
+------------------------------------------------
+
+The index-key recheck logic works as long as there are no duplicate
+index keys pointing to the same WARM chain. Otherwise, the same valid
+tuple would be reachable via multiple index entries, each satisfying
+the index key check. In the above example, if the tuple [1111, bbbb] is
+again updated to [1111, aaaa] and if we insert a new index entry (aaaa)
+pointing to the root line pointer, we will end up with the following
+structure:
+
+	lp [1]  [2]  [3]
+	[1111, aaaa]->[1111, bbbb]->[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's oldest entry (aaaa) points to 1
+	Index2's old entry (bbbb) also points to 1
+	Index2's new entry (aaaa) also points to 1
+
+We must solve this problem to ensure that the same tuple is not
+reachable via multiple index pointers. There are a couple of ways to
+address this issue:
+
+1. Do not allow WARM update to a tuple from a WARM chain. This
+guarantees that there can never be duplicate index entries to the same
+root line pointer because we must have checked for old and new index
+keys while doing the first WARM update.
+
+2. Do not allow duplicate (key, ctid) index pointers. In the above
+example, since (aaaa, 1) already exists in the index, we must not insert
+a duplicate index entry.
+
+The patch currently implements option 1, i.e. it does not WARM-update a
+tuple from a WARM chain. HOT updates are fine because they do not add a
+new index entry.
+
+Even with this restriction, this is a significant improvement because
+the number of regular, index-inserting updates can be cut roughly in half.
+
+Expression and Partial Indexes
+------------------------------
+
+Expressions may evaluate to the same value even if the underlying column
+values have changed. A simple example is an index on "lower(col)" which
+will return the same value if the new heap value only differs in the
+case sensitivity. So we can not solely rely on the heap column check to
+decide whether or not to insert a new index entry for expression
+indexes. Similarly, for partial indexes, the predicate expression must
+be evaluated to decide whether or not to insert a new index entry when
+columns referred in the predicate expressions change.
+
+(None of this is currently implemented; we simply disallow a WARM
+update if a column used in an expression index or an index predicate
+has changed.)
+
+
+Efficiently Finding the Root Line Pointer
+-----------------------------------------
+
+During WARM update, we must be able to find the root line pointer of the
+tuple being updated. It must be noted that the t_ctid field in the heap
+tuple header is usually used to find the next tuple in the update chain.
+But the tuple that we are updating must be the last tuple in the update
+chain, and in that case the t_ctid field usually points to the tuple itself.
+So in theory, we could use the t_ctid to store additional information in
+the last tuple of the update chain, if the information about the tuple
+being the last tuple is stored elsewhere.
+
+We now utilize another bit from t_infomask2 to explicitly identify that
+this is the last tuple in the update chain.
+
+HEAP_LATEST_TUPLE - When this bit is set, the tuple is the last tuple in
+the update chain. The OffsetNumber part of t_ctid points to the root
+line pointer of the chain when HEAP_LATEST_TUPLE flag is set.
+
+If the UPDATE operation is aborted, the last tuple in the update chain
+becomes dead. The root line pointer information is then lost, since it
+was cleared from the tuple that now remains the last valid tuple in the
+chain. In such rare cases, the root line pointer must be found the hard
+way, by scanning the entire heap page.
+
+Tracking WARM Chains
+--------------------
+
+The old and every subsequent tuple in the chain is marked with a special
+HEAP_WARM_TUPLE flag. We use the last remaining bit in t_infomask2 to
+store this information.
+
+When a tuple is returned from a WARM chain, the caller must do
+additional checks to ensure that the tuple matches the index key. Even
+if the tuple precedes the WARM update in the chain, it must still be
+rechecked for an index key match (the case where the old tuple is
+returned via the new index key). So we must follow the update chain all
+the way to the end every time to check whether this is a WARM chain.
+
+When the old updated tuple is retired and the root line pointer is
+converted into a redirected line pointer, we can copy the information
+about WARM chain to the redirected line pointer by storing a special
+value in the lp_len field of the line pointer. This will handle the most
+common case where a WARM chain is replaced by a redirect line pointer
+and a single tuple in the chain.
+
+Converting WARM chains back to HOT chains (VACUUM ?)
+----------------------------------------------------
+
+The current implementation of WARM allows only one WARM update per
+chain. This simplifies the design and addresses certain issues around
+duplicate scans. But this also implies that the benefit of WARM will be
+no more than 50%, which is still significant, but if we could return
+WARM chains back to normal status, we could do far more WARM updates.
+
+A distinct property of a WARM chain is that at least one index has more
+than one live index entry pointing to the root of the chain. In other
+words, if we can remove the duplicate entry from every index, or
+conclusively prove that there are no duplicate index entries for the
+root line pointer, the chain can again be marked as HOT.
+
+Here is one idea:
+
+A WARM chain has two parts, separated by the tuple that caused the WARM
+update. All tuples in each part have matching index keys, but certain
+index keys may not match between the two parts. Let's say we mark heap
+tuples in each part with a special Red-Blue flag. The same flag is
+replicated in the index tuples. For example, when new rows are inserted
+in a table, they are marked with Blue flag and the index entries
+associated with those rows are also marked with Blue flag. When a row is
+WARM updated, the new version is marked with Red flag and the new index
+entry created by the update is also marked with Red flag.
+
+
+Heap chain: [1] [2] [3] [4]
+			[aaaa, 1111]B -> [aaaa, 1111]B -> [bbbb, 1111]R -> [bbbb, 1111]R
+
+Index1: 	(aaaa)B points to 1 (satisfies only tuples marked with B)
+			(bbbb)R points to 1 (satisfies only tuples marked with R)
+
+Index2:		(1111)B points to 1 (satisfies both B and R tuples)
+
+
+It's clear that for indexes with Red and Blue pointers, a heap tuple
+with Blue flag will be reachable from Blue pointer and that with Red
+flag will be reachable from Red pointer. But for indexes which did not
+create a new entry, both Blue and Red tuples will be reachable from Blue
+pointer (there is no Red pointer in such indexes). So, as a side note,
+matching Red and Blue flags alone is not enough from an index scan perspective.
+
+During first heap scan of VACUUM, we look for tuples with
+HEAP_WARM_TUPLE set.  If all live tuples in the chain are either marked
+with Blue flag or Red flag (but no mix of Red and Blue), then the chain
+is a candidate for HOT conversion.  We remember the root line pointer
+and Red-Blue flag of the WARM chain in a separate array.
+
+If we have a Red WARM chain, then our goal is to remove Blue pointers
+and vice versa. But there is a catch. For Index2 above, there is only a
+Blue pointer and it must not be removed. IOW, we should remove a Blue
+pointer iff a Red pointer exists. Since index vacuum may visit Red and
+Blue pointers in any order, I think we will need another index pass to
+remove dead index pointers. So in the first index pass we check which
+WARM candidates have two index pointers. In the second pass, we remove the
+dead pointer and reset the Red flag if the surviving index pointer is Red.
+
+During the second heap scan, we fix WARM chain by clearing
+HEAP_WARM_TUPLE flag and also reset Red flag to Blue.
+
+There are some more problems around aborted vacuums. For example, if
+vacuum aborts after changing Red index flag to Blue but before removing
+the other Blue pointer, we will end up with two Blue pointers to a Red
+WARM chain. But since the HEAP_WARM_TUPLE flag on the heap tuple is
+still set, further WARM updates to the chain will be blocked. I guess we
+will need some special handling for the case of multiple Blue pointers.
+We can either leave these WARM chains alone and let them die with a
+subsequent non-WARM update, or apply heap-recheck logic during index
+vacuum to find the dead pointer. Given that vacuum aborts are not
+common, I am inclined to leave this case unhandled. We must still check
+for the presence of multiple Blue pointers and ensure that we neither
+accidentally remove either of the Blue pointers nor clear the WARM
+chains.
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index bef9c84..b3de79c 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -99,7 +99,10 @@ static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 static void HeapSatisfiesHOTandKeyUpdate(Relation relation,
 							 Bitmapset *hot_attrs,
 							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
+							 Bitmapset *exprindx_attrs,
+							 Bitmapset **updated_attrs,
+							 bool *satisfies_hot, bool *satisfies_warm,
+							 bool *satisfies_key,
 							 bool *satisfies_id,
 							 HeapTuple oldtup, HeapTuple newtup);
 static bool heap_acquire_tuplock(Relation relation, ItemPointer tid,
@@ -1960,6 +1963,76 @@ heap_fetch(Relation relation,
 }
 
 /*
+ * Check if the HOT chain originating or continuing at tid ever became a
+ * WARM chain, even if the UPDATE operation that did so finally aborted.
+ */
+static void
+hot_check_warm_chain(Page dp, ItemPointer tid, bool *recheck)
+{
+	TransactionId prev_xmax = InvalidTransactionId;
+	OffsetNumber offnum;
+	HeapTupleData heapTuple;
+
+	if (*recheck == true)
+		return;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+			break;
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		/*
+		 * Presence of either WARM or WARM updated tuple signals possible
+		 * breakage and the caller must recheck tuple returned from this chain
+		 * for index satisfaction
+		 */
+		if (HeapTupleHeaderIsHeapWarmTuple(heapTuple.t_data))
+		{
+			*recheck = true;
+			break;
+		}
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (HeapTupleIsHotUpdated(&heapTuple))
+		{
+			offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+			prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+		}
+		else
+			break;				/* end of chain */
+	}
+
+}
+
+/*
  *	heap_hot_search_buffer	- search HOT chain for tuple satisfying snapshot
  *
  * On entry, *tid is the TID of a tuple (either a simple tuple, or the root
@@ -1979,11 +2052,14 @@ heap_fetch(Relation relation,
  * Unlike heap_fetch, the caller must already have pin and (at least) share
  * lock on the buffer; it is still pinned/locked at exit.  Also unlike
  * heap_fetch, we do not report any pgstats count; caller may do so if wanted.
+ *
+ * recheck should be set false on entry by caller, will be set true on exit
+ * if a WARM tuple is encountered.
  */
 bool
 heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 					   Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call)
+					   bool *all_dead, bool first_call, bool *recheck)
 {
 	Page		dp = (Page) BufferGetPage(buffer);
 	TransactionId prev_xmax = InvalidTransactionId;
@@ -2025,6 +2101,16 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 				/* Follow the redirect */
 				offnum = ItemIdGetRedirect(lp);
 				at_chain_start = false;
+
+				/* Check if it's a WARM chain */
+				if (recheck && *recheck == false)
+				{
+					if (ItemIdIsHeapWarm(lp))
+					{
+						*recheck = true;
+						Assert(!IsSystemRelation(relation));
+					}
+				}
 				continue;
 			}
 			/* else must be end of chain */
@@ -2037,9 +2123,12 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		ItemPointerSetOffsetNumber(&heapTuple->t_self, offnum);
 
 		/*
-		 * Shouldn't see a HEAP_ONLY tuple at chain start.
+		 * Shouldn't see a HEAP_ONLY tuple at chain start, unless we are
+		 * dealing with a WARM updated tuple in which case deferred triggers
+		 * may request to fetch a WARM tuple from the middle of a chain.
 		 */
-		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple))
+		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple) &&
+				!HeapTupleIsHeapWarmTuple(heapTuple))
 			break;
 
 		/*
@@ -2052,6 +2141,22 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 			break;
 
 		/*
+		 * Check if there exists a WARM tuple somewhere down the chain and set
+		 * recheck to TRUE.
+		 *
+		 * XXX This is not very efficient right now, and we should look for
+		 * possible improvements here
+		 */
+		if (recheck && *recheck == false)
+		{
+			hot_check_warm_chain(dp, &heapTuple->t_self, recheck);
+
+			/* WARM is not supported on system tables yet */
+			if (*recheck == true)
+				Assert(!IsSystemRelation(relation));
+		}
+
+		/*
 		 * When first_call is true (and thus, skip is initially false) we'll
 		 * return the first tuple we find.  But on later passes, heapTuple
 		 * will initially be pointing to the tuple we returned last time.
@@ -2124,18 +2229,41 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
  */
 bool
 heap_hot_search(ItemPointer tid, Relation relation, Snapshot snapshot,
-				bool *all_dead)
+				bool *all_dead, bool *recheck, Buffer *cbuffer,
+				HeapTuple heapTuple)
 {
 	bool		result;
 	Buffer		buffer;
-	HeapTupleData heapTuple;
+	ItemPointerData ret_tid = *tid;
 
 	buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	LockBuffer(buffer, BUFFER_LOCK_SHARE);
-	result = heap_hot_search_buffer(tid, relation, buffer, snapshot,
-									&heapTuple, all_dead, true);
-	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-	ReleaseBuffer(buffer);
+	result = heap_hot_search_buffer(&ret_tid, relation, buffer, snapshot,
+									heapTuple, all_dead, true, recheck);
+
+	/*
+	 * If we are returning a potential candidate tuple from this chain and the
+	 * caller has requested the "recheck" hint, keep the buffer locked and
+	 * pinned. The caller must release the lock and pin on the buffer in all
+	 * such cases
+	 */
+	if (!result || !recheck || !(*recheck))
+	{
+		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+		ReleaseBuffer(buffer);
+	}
+
+	/*
+	 * Set the caller-supplied tid to the actual location of the tuple
+	 * being returned.
+	 */
+	if (result)
+	{
+		*tid = ret_tid;
+		if (cbuffer)
+			*cbuffer = buffer;
+	}
+
 	return result;
 }
 
@@ -3442,13 +3570,15 @@ simple_heap_delete(Relation relation, ItemPointer tid)
 HTSU_Result
 heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode)
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **updated_attrs, bool *warm_update)
 {
 	HTSU_Result result;
 	TransactionId xid = GetCurrentTransactionId();
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *exprindx_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3469,9 +3599,11 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		have_tuple_lock = false;
 	bool		iscombo;
 	bool		satisfies_hot;
+	bool		satisfies_warm;
 	bool		satisfies_key;
 	bool		satisfies_id;
 	bool		use_hot_update = false;
+	bool		use_warm_update = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
 	bool		all_visible_cleared_new = false;
@@ -3496,6 +3628,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot update tuples during a parallel operation")));
 
+	/* Assume no-warm update */
+	if (warm_update)
+		*warm_update = false;
+
 	/*
 	 * Fetch the list of attributes to be checked for HOT update.  This is
 	 * wasted effort if we fail to update or have to put the new tuple on a
@@ -3512,6 +3648,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	exprindx_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_EXPR_PREDICATE);
 
 	block = ItemPointerGetBlockNumber(otid);
 	offnum = ItemPointerGetOffsetNumber(otid);
@@ -3571,7 +3709,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	 * serendipitiously arrive at the same key values.
 	 */
 	HeapSatisfiesHOTandKeyUpdate(relation, hot_attrs, key_attrs, id_attrs,
-								 &satisfies_hot, &satisfies_key,
+								 exprindx_attrs,
+								 updated_attrs,
+								 &satisfies_hot, &satisfies_warm,
+								 &satisfies_key,
 								 &satisfies_id, &oldtup, newtup);
 	if (satisfies_key)
 	{
@@ -4118,6 +4259,34 @@ l2:
 		 */
 		if (satisfies_hot)
 			use_hot_update = true;
+		else
+		{
+			/*
+			 * If no WARM updates yet on this chain, let this update be a WARM
+			 * update.
+			 *
+			 * We check for both warm and warm updated tuples since if the
+			 * previous WARM update aborted, we may still have added
+			 * another index entry for this HOT chain. In such situations, we
+			 * must not attempt a WARM update until the duplicate (key, CTID)
+			 * index entry issue is sorted out.
+			 *
+			 * XXX Later we'll add more checks to ensure WARM chains can
+			 * further be WARM updated. This is probably good enough for a
+			 * first round of tests of the remaining functionality.
+			 *
+			 * XXX Disable WARM updates on system tables. There is nothing in
+			 * principle that stops us from supporting this. But it would
+			 * require an API change to propagate the changed columns back to the
+			 * caller so that CatalogUpdateIndexes() can avoid adding new
+			 * entries to indexes that are not changed by update. This will be
+			 * fixed once basic patch is tested. !!FIXME
+			 */
+			if (satisfies_warm &&
+				!HeapTupleIsHeapWarmTuple(&oldtup) &&
+				!IsSystemRelation(relation))
+				use_warm_update = true;
+		}
 	}
 	else
 	{
@@ -4158,6 +4327,21 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+
+		/*
+		 * Even if we are doing a HOT update, we must carry forward the WARM
+		 * flag because we may have already inserted another index entry
+		 * pointing to our root and a third entry may create duplicates
+		 *
+		 * XXX This should be revisited if we get index (key, CTID) duplicate
+		 * detection mechanism in place
+		 */
+		if (HeapTupleIsHeapWarmTuple(&oldtup))
+		{
+			HeapTupleSetHeapWarmTuple(heaptup);
+			HeapTupleSetHeapWarmTuple(newtup);
+		}
+
 		/*
 		 * For HOT (or WARM) updated tuples, we store the offset of the root
 		 * line pointer of this chain in the ip_posid field of the new tuple.
@@ -4173,12 +4357,38 @@ l2:
 					ItemPointerGetOffsetNumber(&(oldtup.t_self)),
 					&root_offnum);
 	}
+	else if (use_warm_update)
+	{
+		Assert(!IsSystemRelation(relation));
+
+		/* Mark the old tuple as HOT-updated */
+		HeapTupleSetHotUpdated(&oldtup);
+		HeapTupleSetHeapWarmTuple(&oldtup);
+		/* And mark the new tuple as heap-only */
+		HeapTupleSetHeapOnly(heaptup);
+		HeapTupleSetHeapWarmTuple(heaptup);
+		/* Mark the caller's copy too, in case different from heaptup */
+		HeapTupleSetHeapOnly(newtup);
+		HeapTupleSetHeapWarmTuple(newtup);
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			heap_get_root_tuple_one(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)),
+					&root_offnum);
+
+		/* Let the caller know we did a WARM update */
+		if (warm_update)
+			*warm_update = true;
+	}
 	else
 	{
 		/* Make sure tuples are correctly marked as not-HOT */
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		HeapTupleClearHeapWarmTuple(heaptup);
+		HeapTupleClearHeapWarmTuple(newtup);
 		root_offnum = InvalidOffsetNumber;
 	}
 
@@ -4297,7 +4507,10 @@ l2:
 	if (have_tuple_lock)
 		UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode);
 
-	pgstat_count_heap_update(relation, use_hot_update);
+	/*
+	 * Count HOT and WARM updates separately
+	 */
+	pgstat_count_heap_update(relation, use_hot_update, use_warm_update);
 
 	/*
 	 * If heaptup is a private copy, release it.  Don't forget to copy t_self
@@ -4405,6 +4618,13 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
  * will be checking very similar sets of columns, and doing the same tests on
  * them, it makes sense to optimize and do them together.
  *
+ * exprindx_attrs designates the set of attributes used in expression or
+ * predicate indexes. Currently, we don't allow WARM updates if a column used
+ * by an expression or predicate index is updated.
+ *
+ * If updated_attrs is not NULL, the caller wants the full set of changed
+ * attributes, so we must check every attribute.
+ *
  * We receive three bitmapsets comprising the three sets of columns we're
  * interested in.  Note these are destructively modified; that is OK since
  * this is invoked at most once in heap_update.
@@ -4417,7 +4637,11 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 static void
 HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
+							 Bitmapset *exprindx_attrs,
+							 Bitmapset **updated_attrs,
+							 bool *satisfies_hot,
+							 bool *satisfies_warm,
+							 bool *satisfies_key,
 							 bool *satisfies_id,
 							 HeapTuple oldtup, HeapTuple newtup)
 {
@@ -4427,6 +4651,7 @@ HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 	bool		hot_result = true;
 	bool		key_result = true;
 	bool		id_result = true;
+	Bitmapset	*hot_attrs_copy = bms_copy(hot_attrs);
 
 	/* If REPLICA IDENTITY is set to FULL, id_attrs will be empty. */
 	Assert(bms_is_subset(id_attrs, key_attrs));
@@ -4454,8 +4679,11 @@ HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 		 * Since the HOT attributes are a superset of the key attributes and
 		 * the key attributes are a superset of the id attributes, this logic
 		 * is guaranteed to identify the next column that needs to be checked.
+		 *
+		 * If the caller also wants to know the list of updated index
+		 * attributes, we must scan through all the attributes.
 		 */
-		if (hot_result && next_hot_attnum > FirstLowInvalidHeapAttributeNumber)
+		if ((hot_result || updated_attrs) && next_hot_attnum > FirstLowInvalidHeapAttributeNumber)
 			check_now = next_hot_attnum;
 		else if (key_result && next_key_attnum > FirstLowInvalidHeapAttributeNumber)
 			check_now = next_key_attnum;
@@ -4476,8 +4704,16 @@ HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 			if (check_now == next_id_attnum)
 				id_result = false;
 
+			/*
+			 * Add the changed attribute to updated_attrs if the caller has
+			 * asked for it
+			 */
+			if (updated_attrs)
+				*updated_attrs = bms_add_member(*updated_attrs, check_now -
+						FirstLowInvalidHeapAttributeNumber);
+
 			/* if all are false now, we can stop checking */
-			if (!hot_result && !key_result && !id_result)
+			if (!hot_result && !key_result && !id_result && !updated_attrs)
 				break;
 		}
 
@@ -4488,7 +4724,7 @@ HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 		 * bms_first_member() will return -1 and the attribute number will end
 		 * up with a value less than FirstLowInvalidHeapAttributeNumber.
 		 */
-		if (hot_result && check_now == next_hot_attnum)
+		if ((hot_result || updated_attrs) && check_now == next_hot_attnum)
 		{
 			next_hot_attnum = bms_first_member(hot_attrs);
 			next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
@@ -4505,6 +4741,29 @@ HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 		}
 	}
 
+	/*
+	 * If an attribute used in the expression of an expression index or in the
+	 * predicate of a predicate index has changed, we don't yet support a WARM
+	 * update.
+	 */
+	if (updated_attrs && bms_overlap(*updated_attrs, exprindx_attrs))
+		*satisfies_warm = false;
+	/* If the table does not support WARM update, honour that */
+	else if (!relation->rd_supportswarm)
+		*satisfies_warm = false;
+	/*
+	 * If all index keys are being updated, there is hardly any point in doing
+	 * a WARM update.
+	 */
+	else if (updated_attrs && bms_is_subset(hot_attrs_copy, *updated_attrs))
+		*satisfies_warm = false;
+	/*
+	 * XXX Should we handle more cases? E.g. if an update inserts new entries
+	 * into many or most indexes, should we fall back to a regular update?
+	 */
+	else
+		*satisfies_warm = true;
+
 	*satisfies_hot = hot_result;
 	*satisfies_key = key_result;
 	*satisfies_id = id_result;
@@ -4528,7 +4787,7 @@ simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
 	result = heap_update(relation, otid, tup,
 						 GetCurrentCommandId(true), InvalidSnapshot,
 						 true /* wait for commit */ ,
-						 &hufd, &lockmode);
+						 &hufd, &lockmode, NULL, NULL);
 	switch (result)
 	{
 		case HeapTupleSelfUpdated:
@@ -7426,6 +7685,7 @@ log_heap_cleanup_info(RelFileNode rnode, TransactionId latestRemovedXid)
 XLogRecPtr
 log_heap_clean(Relation reln, Buffer buffer,
 			   OffsetNumber *redirected, int nredirected,
+			   OffsetNumber *warm, int nwarm,
 			   OffsetNumber *nowdead, int ndead,
 			   OffsetNumber *nowunused, int nunused,
 			   TransactionId latestRemovedXid)
@@ -7439,6 +7699,7 @@ log_heap_clean(Relation reln, Buffer buffer,
 	xlrec.latestRemovedXid = latestRemovedXid;
 	xlrec.nredirected = nredirected;
 	xlrec.ndead = ndead;
+	xlrec.nwarm = nwarm;
 
 	XLogBeginInsert();
 	XLogRegisterData((char *) &xlrec, SizeOfHeapClean);
@@ -7461,6 +7722,10 @@ log_heap_clean(Relation reln, Buffer buffer,
 		XLogRegisterBufData(0, (char *) nowdead,
 							ndead * sizeof(OffsetNumber));
 
+	if (nwarm > 0)
+		XLogRegisterBufData(0, (char *) warm,
+							nwarm * sizeof(OffsetNumber));
+
 	if (nunused > 0)
 		XLogRegisterBufData(0, (char *) nowunused,
 							nunused * sizeof(OffsetNumber));
@@ -7566,6 +7831,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	bool		need_tuple_data = RelationIsLogicallyLogged(reln);
 	bool		init;
 	int			bufflags;
+	bool		warm_update = false;
 
 	/* Caller should not call me on a non-WAL-logged relation */
 	Assert(RelationNeedsWAL(reln));
@@ -7577,6 +7843,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	else
 		info = XLOG_HEAP_UPDATE;
 
+	if (HeapTupleIsHeapWarmTuple(newtup))
+		warm_update = true;
+
 	/*
 	 * If the old and new tuple are on the same page, we only need to log the
 	 * parts of the new tuple that were changed.  That saves on the amount of
@@ -7650,6 +7919,8 @@ log_heap_update(Relation reln, Buffer oldbuf,
 				xlrec.flags |= XLH_UPDATE_CONTAINS_OLD_KEY;
 		}
 	}
+	if (warm_update)
+		xlrec.flags |= XLH_UPDATE_WARM_UPDATE;
 
 	/* If new tuple is the single and first tuple on page... */
 	if (ItemPointerGetOffsetNumber(&(newtup->t_self)) == FirstOffsetNumber &&
@@ -8017,24 +8288,38 @@ heap_xlog_clean(XLogReaderState *record)
 		OffsetNumber *redirected;
 		OffsetNumber *nowdead;
 		OffsetNumber *nowunused;
+		OffsetNumber *warm;
 		int			nredirected;
 		int			ndead;
 		int			nunused;
+		int			nwarm;
+		int			i;
 		Size		datalen;
+		bool		warmchain[MaxHeapTuplesPerPage + 1];
 
 		redirected = (OffsetNumber *) XLogRecGetBlockData(record, 0, &datalen);
 
 		nredirected = xlrec->nredirected;
 		ndead = xlrec->ndead;
+		nwarm = xlrec->nwarm;
+
 		end = (OffsetNumber *) ((char *) redirected + datalen);
 		nowdead = redirected + (nredirected * 2);
-		nowunused = nowdead + ndead;
-		nunused = (end - nowunused);
+		warm = nowdead + ndead;
+		nowunused = warm + nwarm;
+
+		nunused = (end - nowunused);
 		Assert(nunused >= 0);
 
+		memset(warmchain, 0, sizeof (warmchain));
+		for (i = 0; i < nwarm; i++)
+			warmchain[warm[i]] = true;
+
+
 		/* Update all item pointers per the record, and repair fragmentation */
 		heap_page_prune_execute(buffer,
 								redirected, nredirected,
+								warmchain,
 								nowdead, ndead,
 								nowunused, nunused);
 
@@ -8621,16 +8906,22 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 	Size		freespace = 0;
 	XLogRedoAction oldaction;
 	XLogRedoAction newaction;
+	bool		warm_update = false;
 
 	/* initialize to keep the compiler quiet */
 	oldtup.t_data = NULL;
 	oldtup.t_len = 0;
 
+	if (xlrec->flags & XLH_UPDATE_WARM_UPDATE)
+		warm_update = true;
+
 	XLogRecGetBlockTag(record, 0, &rnode, NULL, &newblk);
 	if (XLogRecGetBlockTag(record, 1, NULL, NULL, &oldblk))
 	{
 		/* HOT updates are never done across pages */
 		Assert(!hot_update);
+		/* WARM updates are never done across pages */
+		Assert(!warm_update);
 	}
 	else
 		oldblk = newblk;
@@ -8690,6 +8981,11 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 								   &htup->t_infomask2);
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
+
+		/* Mark the old tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetHeapWarmTuple(htup);
+
 		/* Set forward chain link in t_ctid */
 		HeapTupleHeaderSetNextCtid(htup, ItemPointerGetBlockNumber(&newtid),
 				ItemPointerGetOffsetNumber(&newtid));
@@ -8825,6 +9121,11 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
+
+		/* Mark the new tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetHeapWarmTuple(htup);
+
 		/* Make sure there is no forward chain link in t_ctid */
 		HeapTupleHeaderSetHeapLatest(htup);
 
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index 7c2231a..d71a297 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -36,12 +36,19 @@ typedef struct
 	int			nredirected;	/* numbers of entries in arrays below */
 	int			ndead;
 	int			nunused;
+	int			nwarm;
 	/* arrays that accumulate indexes of items to be changed */
 	OffsetNumber redirected[MaxHeapTuplesPerPage * 2];
 	OffsetNumber nowdead[MaxHeapTuplesPerPage];
 	OffsetNumber nowunused[MaxHeapTuplesPerPage];
+	OffsetNumber warm[MaxHeapTuplesPerPage];
 	/* marked[i] is TRUE if item i is entered in one of the above arrays */
 	bool		marked[MaxHeapTuplesPerPage + 1];
+	/*
+	 * warmchain[i] is TRUE if item i is becoming a redirected lp and points
+	 * to a WARM chain
+	 */
+	bool		warmchain[MaxHeapTuplesPerPage + 1];
 } PruneState;
 
 /* Local functions */
@@ -54,6 +61,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 						   OffsetNumber offnum, OffsetNumber rdoffnum);
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
+static void heap_prune_record_warmupdate(PruneState *prstate,
+						   OffsetNumber offnum);
 
 static void heap_get_root_tuples_internal(Page page,
 				OffsetNumber target_offnum, OffsetNumber *root_offsets);
@@ -203,8 +212,9 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
 	 */
 	prstate.new_prune_xid = InvalidTransactionId;
 	prstate.latestRemovedXid = *latestRemovedXid;
-	prstate.nredirected = prstate.ndead = prstate.nunused = 0;
+	prstate.nredirected = prstate.ndead = prstate.nunused = prstate.nwarm = 0;
 	memset(prstate.marked, 0, sizeof(prstate.marked));
+	memset(prstate.warmchain, 0, sizeof(prstate.warmchain));
 
 	/* Scan the page */
 	maxoff = PageGetMaxOffsetNumber(page);
@@ -241,6 +251,7 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
 		 */
 		heap_page_prune_execute(buffer,
 								prstate.redirected, prstate.nredirected,
+								prstate.warmchain,
 								prstate.nowdead, prstate.ndead,
 								prstate.nowunused, prstate.nunused);
 
@@ -268,6 +279,7 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
 
 			recptr = log_heap_clean(relation, buffer,
 									prstate.redirected, prstate.nredirected,
+									prstate.warm, prstate.nwarm,
 									prstate.nowdead, prstate.ndead,
 									prstate.nowunused, prstate.nunused,
 									prstate.latestRemovedXid);
@@ -479,6 +491,12 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
 			!TransactionIdEquals(HeapTupleHeaderGetXmin(htup), priorXmax))
 			break;
 
+		if (HeapTupleHeaderIsHeapWarmTuple(htup))
+		{
+			Assert(!IsSystemRelation(relation));
+			heap_prune_record_warmupdate(prstate, rootoffnum);
+		}
+
 		/*
 		 * OK, this tuple is indeed a member of the chain.
 		 */
@@ -668,6 +686,18 @@ heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum)
 	prstate->marked[offnum] = true;
 }
 
+/* Record item pointer which is a root of a WARM chain */
+static void
+heap_prune_record_warmupdate(PruneState *prstate, OffsetNumber offnum)
+{
+	Assert(prstate->nwarm < MaxHeapTuplesPerPage);
+	if (prstate->warmchain[offnum])
+		return;
+	prstate->warm[prstate->nwarm] = offnum;
+	prstate->nwarm++;
+	prstate->warmchain[offnum] = true;
+}
+
 
 /*
  * Perform the actual page changes needed by heap_page_prune.
@@ -681,6 +711,7 @@ heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum)
 void
 heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
+						bool *warmchain,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused)
 {
@@ -697,6 +728,12 @@ heap_page_prune_execute(Buffer buffer,
 		ItemId		fromlp = PageGetItemId(page, fromoff);
 
 		ItemIdSetRedirect(fromlp, tooff);
+
+		/*
+		 * Save information about WARM chains in the item itself
+		 */
+		if (warmchain[fromoff])
+			ItemIdSetHeapWarm(fromlp);
 	}
 
 	/* Update all now-dead line pointers */
diff --git a/src/backend/access/index/genam.c b/src/backend/access/index/genam.c
index 65c941d..4f9fb12 100644
--- a/src/backend/access/index/genam.c
+++ b/src/backend/access/index/genam.c
@@ -99,7 +99,7 @@ RelationGetIndexScan(Relation indexRelation, int nkeys, int norderbys)
 	else
 		scan->orderByData = NULL;
 
-	scan->xs_want_itup = false; /* may be set later */
+	scan->xs_want_itup = true; /* hack for now to always get index tuple */
 
 	/*
 	 * During recovery we ignore killed tuples and don't bother to kill them
diff --git a/src/backend/access/index/indexam.c b/src/backend/access/index/indexam.c
index 54b71cb..149a02d 100644
--- a/src/backend/access/index/indexam.c
+++ b/src/backend/access/index/indexam.c
@@ -71,10 +71,12 @@
 #include "access/xlog.h"
 #include "catalog/catalog.h"
 #include "catalog/index.h"
+#include "executor/executor.h"
 #include "pgstat.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
+#include "utils/datum.h"
 #include "utils/snapmgr.h"
 #include "utils/tqual.h"
 
@@ -409,7 +411,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
 	/*
 	 * The AM's amgettuple proc finds the next index entry matching the scan
 	 * keys, and puts the TID into scan->xs_ctup.t_self.  It should also set
-	 * scan->xs_recheck and possibly scan->xs_itup, though we pay no attention
+	 * scan->xs_tuple_recheck and possibly scan->xs_itup, though we pay no attention
 	 * to those fields here.
 	 */
 	found = scan->indexRelation->rd_amroutine->amgettuple(scan, direction);
@@ -448,7 +450,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
  * dropped in a future index_getnext_tid, index_fetch_heap or index_endscan
  * call).
  *
- * Note: caller must check scan->xs_recheck, and perform rechecking of the
+ * Note: caller must check scan->xs_tuple_recheck, and perform rechecking of the
  * scan keys if required.  We do not do that here because we don't have
  * enough information to do it efficiently in the general case.
  * ----------------
@@ -475,6 +477,13 @@ index_fetch_heap(IndexScanDesc scan)
 		 */
 		if (prev_buf != scan->xs_cbuf)
 			heap_page_prune_opt(scan->heapRelation, scan->xs_cbuf);
+
+		/*
+		 * If we're not always re-checking, reset recheck for this tuple
+		 */
+		if (!scan->xs_recheck)
+			scan->xs_tuple_recheck = false;
+
 	}
 
 	/* Obtain share-lock on the buffer so we can examine visibility */
@@ -484,32 +493,63 @@ index_fetch_heap(IndexScanDesc scan)
 											scan->xs_snapshot,
 											&scan->xs_ctup,
 											&all_dead,
-											!scan->xs_continue_hot);
+											!scan->xs_continue_hot,
+											&scan->xs_tuple_recheck);
 	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
 
 	if (got_heap_tuple)
 	{
+		bool res = true;
+
+		/*
+		 * OK, we got a tuple which satisfies the snapshot, but if it's part
+		 * of a WARM chain, we must do additional checks to ensure that we are
+		 * indeed returning the correct tuple. Note that if the index AM does
+		 * not implement the amrecheck method, we skip the additional checks,
+		 * since WARM must have been disabled on such tables.
+		 *
+		 * XXX What happens when a new index which does not support amrecheck
+		 * is added to the table? Do we need to handle this case, or are
+		 * CREATE INDEX and CREATE INDEX CONCURRENTLY smart enough to handle
+		 * this issue?
+		 */
+		if (scan->xs_tuple_recheck &&
+				scan->indexRelation->rd_amroutine->amrecheck)
+		{
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_SHARE);
+			res = scan->indexRelation->rd_amroutine->amrecheck(
+						scan->indexRelation,
+						scan->xs_itup,
+						scan->heapRelation,
+						&scan->xs_ctup);
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+		}
+
 		/*
 		 * Only in a non-MVCC snapshot can more than one member of the HOT
 		 * chain be visible.
 		 */
 		scan->xs_continue_hot = !IsMVCCSnapshot(scan->xs_snapshot);
 		pgstat_count_heap_fetch(scan->indexRelation);
-		return &scan->xs_ctup;
-	}
 
-	/* We've reached the end of the HOT chain. */
-	scan->xs_continue_hot = false;
+		if (res)
+			return &scan->xs_ctup;
+	}
+	else
+	{
+		/* We've reached the end of the HOT chain. */
+		scan->xs_continue_hot = false;
 
-	/*
-	 * If we scanned a whole HOT chain and found only dead tuples, tell index
-	 * AM to kill its entry for that TID (this will take effect in the next
-	 * amgettuple call, in index_getnext_tid).  We do not do this when in
-	 * recovery because it may violate MVCC to do so.  See comments in
-	 * RelationGetIndexScan().
-	 */
-	if (!scan->xactStartedInRecovery)
-		scan->kill_prior_tuple = all_dead;
+		/*
+		 * If we scanned a whole HOT chain and found only dead tuples, tell index
+		 * AM to kill its entry for that TID (this will take effect in the next
+		 * amgettuple call, in index_getnext_tid).  We do not do this when in
+		 * recovery because it may violate MVCC to do so.  See comments in
+		 * RelationGetIndexScan().
+		 */
+		if (!scan->xactStartedInRecovery)
+			scan->kill_prior_tuple = all_dead;
+	}
 
 	return NULL;
 }
diff --git a/src/backend/access/nbtree/nbtinsert.c b/src/backend/access/nbtree/nbtinsert.c
index ef69290..e0afffd 100644
--- a/src/backend/access/nbtree/nbtinsert.c
+++ b/src/backend/access/nbtree/nbtinsert.c
@@ -19,11 +19,14 @@
 #include "access/nbtree.h"
 #include "access/transam.h"
 #include "access/xloginsert.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
 #include "miscadmin.h"
+#include "nodes/execnodes.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
 #include "utils/tqual.h"
-
+#include "utils/datum.h"
 
 typedef struct
 {
@@ -249,6 +252,9 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 	BTPageOpaque opaque;
 	Buffer		nbuf = InvalidBuffer;
 	bool		found = false;
+	Buffer		buffer;
+	HeapTupleData	heapTuple;
+	bool		recheck = false;
 
 	/* Assume unique until we find a duplicate */
 	*is_unique = true;
@@ -308,6 +314,8 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				curitup = (IndexTuple) PageGetItem(page, curitemid);
 				htid = curitup->t_tid;
 
+				recheck = false;
+
 				/*
 				 * If we are doing a recheck, we expect to find the tuple we
 				 * are rechecking.  It's not a duplicate, but we have to keep
@@ -325,112 +333,153 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				 * have just a single index entry for the entire chain.
 				 */
 				else if (heap_hot_search(&htid, heapRel, &SnapshotDirty,
-										 &all_dead))
+							&all_dead, &recheck, &buffer,
+							&heapTuple))
 				{
 					TransactionId xwait;
+					bool result = true;
 
 					/*
-					 * It is a duplicate. If we are only doing a partial
-					 * check, then don't bother checking if the tuple is being
-					 * updated in another transaction. Just return the fact
-					 * that it is a potential conflict and leave the full
-					 * check till later.
+					 * If the tuple was WARM updated, we may again see our own
+					 * tuple. Since WARM updates don't create new index
+					 * entries, our own tuple is only reachable via the old
+					 * index pointer.
 					 */
-					if (checkUnique == UNIQUE_CHECK_PARTIAL)
+					if (checkUnique == UNIQUE_CHECK_EXISTING &&
+							ItemPointerCompare(&htid, &itup->t_tid) == 0)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						*is_unique = false;
-						return InvalidTransactionId;
+						found = true;
+						result = false;
+						if (recheck)
+							UnlockReleaseBuffer(buffer);
 					}
-
-					/*
-					 * If this tuple is being updated by other transaction
-					 * then we have to wait for its commit/abort.
-					 */
-					xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
-						SnapshotDirty.xmin : SnapshotDirty.xmax;
-
-					if (TransactionIdIsValid(xwait))
+					else if (recheck)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						/* Tell _bt_doinsert to wait... */
-						*speculativeToken = SnapshotDirty.speculativeToken;
-						return xwait;
+						result = btrecheck(rel, curitup, heapRel, &heapTuple);
+						UnlockReleaseBuffer(buffer);
 					}
 
-					/*
-					 * Otherwise we have a definite conflict.  But before
-					 * complaining, look to see if the tuple we want to insert
-					 * is itself now committed dead --- if so, don't complain.
-					 * This is a waste of time in normal scenarios but we must
-					 * do it to support CREATE INDEX CONCURRENTLY.
-					 *
-					 * We must follow HOT-chains here because during
-					 * concurrent index build, we insert the root TID though
-					 * the actual tuple may be somewhere in the HOT-chain.
-					 * While following the chain we might not stop at the
-					 * exact tuple which triggered the insert, but that's OK
-					 * because if we find a live tuple anywhere in this chain,
-					 * we have a unique key conflict.  The other live tuple is
-					 * not part of this chain because it had a different index
-					 * entry.
-					 */
-					htid = itup->t_tid;
-					if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL))
-					{
-						/* Normal case --- it's still live */
-					}
-					else
+					if (result)
 					{
 						/*
-						 * It's been deleted, so no error, and no need to
-						 * continue searching
+						 * It is a duplicate. If we are only doing a partial
+						 * check, then don't bother checking if the tuple is being
+						 * updated in another transaction. Just return the fact
+						 * that it is a potential conflict and leave the full
+						 * check till later.
 						 */
-						break;
-					}
+						if (checkUnique == UNIQUE_CHECK_PARTIAL)
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							*is_unique = false;
+							return InvalidTransactionId;
+						}
 
-					/*
-					 * Check for a conflict-in as we would if we were going to
-					 * write to this page.  We aren't actually going to write,
-					 * but we want a chance to report SSI conflicts that would
-					 * otherwise be masked by this unique constraint
-					 * violation.
-					 */
-					CheckForSerializableConflictIn(rel, NULL, buf);
+						/*
+						 * If this tuple is being updated by other transaction
+						 * then we have to wait for its commit/abort.
+						 */
+						xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
+							SnapshotDirty.xmin : SnapshotDirty.xmax;
+
+						if (TransactionIdIsValid(xwait))
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							/* Tell _bt_doinsert to wait... */
+							*speculativeToken = SnapshotDirty.speculativeToken;
+							return xwait;
+						}
 
-					/*
-					 * This is a definite conflict.  Break the tuple down into
-					 * datums and report the error.  But first, make sure we
-					 * release the buffer locks we're holding ---
-					 * BuildIndexValueDescription could make catalog accesses,
-					 * which in the worst case might touch this same index and
-					 * cause deadlocks.
-					 */
-					if (nbuf != InvalidBuffer)
-						_bt_relbuf(rel, nbuf);
-					_bt_relbuf(rel, buf);
+						/*
+						 * Otherwise we have a definite conflict.  But before
+						 * complaining, look to see if the tuple we want to insert
+						 * is itself now committed dead --- if so, don't complain.
+						 * This is a waste of time in normal scenarios but we must
+						 * do it to support CREATE INDEX CONCURRENTLY.
+						 *
+						 * We must follow HOT-chains here because during
+						 * concurrent index build, we insert the root TID though
+						 * the actual tuple may be somewhere in the HOT-chain.
+						 * While following the chain we might not stop at the
+						 * exact tuple which triggered the insert, but that's OK
+						 * because if we find a live tuple anywhere in this chain,
+						 * we have a unique key conflict.  The other live tuple is
+						 * not part of this chain because it had a different index
+						 * entry.
+						 */
+						recheck = false;
+						ItemPointerCopy(&itup->t_tid, &htid);
+						if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL,
+									&recheck, &buffer, &heapTuple))
+						{
+							bool result = true;
+							if (recheck)
+							{
+								/*
+								 * Recheck that the tuple actually satisfies
+								 * the index key. Otherwise, we might be
+								 * following a wrong index pointer and must
+								 * not return this tuple.
+								 */
+								result = btrecheck(rel, itup, heapRel, &heapTuple);
+								UnlockReleaseBuffer(buffer);
+							}
+							if (!result)
+								break;
+							/* Normal case --- it's still live */
+						}
+						else
+						{
+							/*
+							 * It's been deleted, so no error, and no need to
+							 * continue searching
+							 */
+							break;
+						}
 
-					{
-						Datum		values[INDEX_MAX_KEYS];
-						bool		isnull[INDEX_MAX_KEYS];
-						char	   *key_desc;
-
-						index_deform_tuple(itup, RelationGetDescr(rel),
-										   values, isnull);
-
-						key_desc = BuildIndexValueDescription(rel, values,
-															  isnull);
-
-						ereport(ERROR,
-								(errcode(ERRCODE_UNIQUE_VIOLATION),
-								 errmsg("duplicate key value violates unique constraint \"%s\"",
-										RelationGetRelationName(rel)),
-							   key_desc ? errdetail("Key %s already exists.",
-													key_desc) : 0,
-								 errtableconstraint(heapRel,
-											 RelationGetRelationName(rel))));
+						/*
+						 * Check for a conflict-in as we would if we were going to
+						 * write to this page.  We aren't actually going to write,
+						 * but we want a chance to report SSI conflicts that would
+						 * otherwise be masked by this unique constraint
+						 * violation.
+						 */
+						CheckForSerializableConflictIn(rel, NULL, buf);
+
+						/*
+						 * This is a definite conflict.  Break the tuple down into
+						 * datums and report the error.  But first, make sure we
+						 * release the buffer locks we're holding ---
+						 * BuildIndexValueDescription could make catalog accesses,
+						 * which in the worst case might touch this same index and
+						 * cause deadlocks.
+						 */
+						if (nbuf != InvalidBuffer)
+							_bt_relbuf(rel, nbuf);
+						_bt_relbuf(rel, buf);
+
+						{
+							Datum		values[INDEX_MAX_KEYS];
+							bool		isnull[INDEX_MAX_KEYS];
+							char	   *key_desc;
+
+							index_deform_tuple(itup, RelationGetDescr(rel),
+									values, isnull);
+
+							key_desc = BuildIndexValueDescription(rel, values,
+									isnull);
+
+							ereport(ERROR,
+									(errcode(ERRCODE_UNIQUE_VIOLATION),
+									 errmsg("duplicate key value violates unique constraint \"%s\"",
+										 RelationGetRelationName(rel)),
+									 key_desc ? errdetail("Key %s already exists.",
+										 key_desc) : 0,
+									 errtableconstraint(heapRel,
+										 RelationGetRelationName(rel))));
+						}
 					}
 				}
 				else if (all_dead)
diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c
index 128744c..6b1236a 100644
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -23,6 +23,7 @@
 #include "access/xlog.h"
 #include "catalog/index.h"
 #include "commands/vacuum.h"
+#include "executor/nodeIndexscan.h"
 #include "storage/indexfsm.h"
 #include "storage/ipc.h"
 #include "storage/lmgr.h"
@@ -117,6 +118,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = btendscan;
 	amroutine->ammarkpos = btmarkpos;
 	amroutine->amrestrpos = btrestrpos;
+	amroutine->amrecheck = btrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -292,8 +294,9 @@ btgettuple(IndexScanDesc scan, ScanDirection dir)
 	BTScanOpaque so = (BTScanOpaque) scan->opaque;
 	bool		res;
 
-	/* btree indexes are never lossy */
-	scan->xs_recheck = false;
+	/* btree indexes are never lossy, except for WARM tuples */
+	scan->xs_recheck = indexscan_recheck;
+	scan->xs_tuple_recheck = indexscan_recheck;
 
 	/*
 	 * If we have any array keys, initialize them during first call for a
diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c
index 063c988..c9c0501 100644
--- a/src/backend/access/nbtree/nbtutils.c
+++ b/src/backend/access/nbtree/nbtutils.c
@@ -20,11 +20,15 @@
 #include "access/nbtree.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
 #include "miscadmin.h"
+#include "nodes/execnodes.h"
 #include "utils/array.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 
 typedef struct BTSortArrayContext
@@ -2065,3 +2069,103 @@ btproperty(Oid index_oid, int attno,
 			return false;		/* punt to generic code */
 	}
 }
+
+/*
+ * Check if the index tuple's key matches the one computed from the given heap
+ * tuple's attributes
+ */
+bool
+btrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	IndexInfo  *indexInfo;
+	EState	   *estate;
+	ExprContext *econtext;
+	TupleTableSlot *slot;
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	/* Get IndexInfo for this index */
+	indexInfo = BuildIndexInfo(indexRel);
+
+	/*
+	 * The heap tuple must be put into a slot for FormIndexDatum.
+	 */
+	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
+
+	ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
+
+	/*
+	 * Typically the index won't have expressions, but if it does we need an
+	 * EState to evaluate them.  We need it for exclusion constraints too,
+	 * even if they are just on simple columns.
+	 */
+	if (indexInfo->ii_Expressions != NIL ||
+			indexInfo->ii_ExclusionOps != NULL)
+	{
+		estate = CreateExecutorState();
+		econtext = GetPerTupleExprContext(estate);
+		econtext->ecxt_scantuple = slot;
+	}
+	else
+		estate = NULL;
+
+	/*
+	 * Form the index values and isnull flags for the index entry that we need
+	 * to check.
+	 *
+	 * Note: if the index uses functions that are not as immutable as they are
+	 * supposed to be, this could produce an index tuple different from the
+	 * original.  The index AM can catch such errors by verifying that it
+	 * finds a matching index entry with the tuple's TID.  For exclusion
+	 * constraints we check this in check_exclusion_constraint().
+	 */
+	FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL, then they are equal
+		 */
+		if (isnull[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If just one is NULL, then they are not equal
+		 */
+		if (isnull[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now just do a raw memory comparison. If the index tuple was formed
+		 * using this heap tuple, the computed index values must match.
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	if (estate != NULL)
+		FreeExecutorState(estate);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	return equal;
+}
diff --git a/src/backend/access/spgist/spgutils.c b/src/backend/access/spgist/spgutils.c
index d570ae5..813b5c3 100644
--- a/src/backend/access/spgist/spgutils.c
+++ b/src/backend/access/spgist/spgutils.c
@@ -67,6 +67,7 @@ spghandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = spgendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 08b646d..e76e928 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -54,6 +54,7 @@
 #include "nodes/makefuncs.h"
 #include "nodes/nodeFuncs.h"
 #include "optimizer/clauses.h"
+#include "optimizer/var.h"
 #include "parser/parser.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
@@ -1691,6 +1692,20 @@ BuildIndexInfo(Relation index)
 	ii->ii_Concurrent = false;
 	ii->ii_BrokenHotChain = false;
 
+	/* build a bitmap of all table attributes referenced by this index */
+	for (i = 0; i < ii->ii_NumIndexAttrs; i++)
+	{
+		AttrNumber attr = ii->ii_KeyAttrNumbers[i];
+		ii->ii_indxattrs = bms_add_member(ii->ii_indxattrs, attr -
+				FirstLowInvalidHeapAttributeNumber);
+	}
+
+	/* Collect all attributes used in expressions, too */
+	pull_varattnos((Node *) ii->ii_Expressions, 1, &ii->ii_indxattrs);
+
+	/* Collect all attributes in the index predicate, too */
+	pull_varattnos((Node *) ii->ii_Predicate, 1, &ii->ii_indxattrs);
+
 	return ii;
 }
 
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index ada2142..b3db673 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -455,6 +455,7 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_tuples_deleted(C.oid) AS n_tup_del,
             pg_stat_get_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_tuples_warm_updated(C.oid) AS n_tup_warm_upd,
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
@@ -485,7 +486,8 @@ CREATE VIEW pg_stat_xact_all_tables AS
             pg_stat_get_xact_tuples_inserted(C.oid) AS n_tup_ins,
             pg_stat_get_xact_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_xact_tuples_deleted(C.oid) AS n_tup_del,
-            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd
+            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_xact_tuples_warm_updated(C.oid) AS n_tup_warm_upd
     FROM pg_class C LEFT JOIN
          pg_index I ON C.oid = I.indrelid
          LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
diff --git a/src/backend/commands/constraint.c b/src/backend/commands/constraint.c
index 26f9114..997c8f5 100644
--- a/src/backend/commands/constraint.c
+++ b/src/backend/commands/constraint.c
@@ -40,6 +40,7 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	TriggerData *trigdata = (TriggerData *) fcinfo->context;
 	const char *funcname = "unique_key_recheck";
 	HeapTuple	new_row;
+	HeapTupleData heapTuple;
 	ItemPointerData tmptid;
 	Relation	indexRel;
 	IndexInfo  *indexInfo;
@@ -102,7 +103,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	 * removed.
 	 */
 	tmptid = new_row->t_self;
-	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL))
+	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL,
+				NULL, NULL, &heapTuple))
 	{
 		/*
 		 * All rows in the HOT chain are dead, so skip the check.
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index b4140eb..2126c61 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -2534,6 +2534,7 @@ CopyFrom(CopyState cstate)
 
 				if (resultRelInfo->ri_NumIndices > 0)
 					recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+														 &(tuple->t_self), NULL,
 														 estate, false, NULL,
 														   NIL);
 
@@ -2649,6 +2650,7 @@ CopyFromInsertBatch(CopyState cstate, EState *estate, CommandId mycid,
 			ExecStoreTuple(bufferedTuples[i], myslot, InvalidBuffer, false);
 			recheckIndexes =
 				ExecInsertIndexTuples(myslot, &(bufferedTuples[i]->t_self),
+									  &(bufferedTuples[i]->t_self), NULL,
 									  estate, false, NULL, NIL);
 			ExecARInsertTriggers(estate, resultRelInfo,
 								 bufferedTuples[i],
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index b5fb325..cd9b9a7 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -1468,6 +1468,7 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 
 		recptr = log_heap_clean(onerel, buffer,
 								NULL, 0, NULL, 0,
+								NULL, 0,
 								unused, uncnt,
 								vacrelstats->latestRemovedXid);
 		PageSetLSN(page, recptr);
@@ -2128,6 +2129,22 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
 						break;
 					}
 
+					/*
+					 * If this tuple was ever WARM updated or is a WARM tuple,
+					 * there could be multiple index entries pointing to the
+					 * root of this chain. We can't do index-only scans for
+					 * such tuples without rechecking the index keys. So mark
+					 * the page as !all_visible.
+					 *
+					 * XXX Should we look at the root line pointer and check if
+					 * the WARM flag is set there, or is checking the tuples in
+					 * the chain good enough?
+					 */
+					if (HeapTupleHeaderIsHeapWarmTuple(tuple.t_data))
+					{
+						all_visible = false;
+					}
+
 					/* Track newest xmin on page. */
 					if (TransactionIdFollows(xmin, *visibility_cutoff_xid))
 						*visibility_cutoff_xid = xmin;
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 009c1b7..03c6b62 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -270,6 +270,8 @@ ExecCloseIndices(ResultRelInfo *resultRelInfo)
 List *
 ExecInsertIndexTuples(TupleTableSlot *slot,
 					  ItemPointer tupleid,
+					  ItemPointer root_tid,
+					  Bitmapset *updated_attrs,
 					  EState *estate,
 					  bool noDupErr,
 					  bool *specConflict,
@@ -324,6 +326,17 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 		if (!indexInfo->ii_ReadyForInserts)
 			continue;
 
+		/*
+		 * If updated_attrs is set, we only insert index entries for those
+		 * indexes whose columns have changed. All other indexes can use their
+		 * existing index pointers to look up the new tuple.
+		 */
+		if (updated_attrs)
+		{
+			if (!bms_overlap(updated_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
 		/* Check for partial index */
 		if (indexInfo->ii_Predicate != NIL)
 		{
@@ -389,7 +402,7 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 			index_insert(indexRelation, /* index relation */
 						 values,	/* array of index Datums */
 						 isnull,	/* null flags */
-						 tupleid,		/* tid of heap tuple */
+						 root_tid,		/* tid of heap or root tuple */
 						 heapRelation,	/* heap relation */
 						 checkUnique);	/* type of uniqueness check to do */
 
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index 449aacb..ff77349 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -37,6 +37,7 @@
 
 #include "access/relscan.h"
 #include "access/transam.h"
+#include "access/valid.h"
 #include "executor/execdebug.h"
 #include "executor/nodeBitmapHeapscan.h"
 #include "pgstat.h"
@@ -362,11 +363,23 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 			OffsetNumber offnum = tbmres->offsets[curslot];
 			ItemPointerData tid;
 			HeapTupleData heapTuple;
+			bool recheck = false;
 
 			ItemPointerSet(&tid, page, offnum);
 			if (heap_hot_search_buffer(&tid, scan->rs_rd, buffer, snapshot,
-									   &heapTuple, NULL, true))
-				scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+									   &heapTuple, NULL, true, &recheck))
+			{
+				bool valid = true;
+
+				if (scan->rs_key)
+					HeapKeyTest(&heapTuple, RelationGetDescr(scan->rs_rd),
+							scan->rs_nkeys, scan->rs_key, valid);
+				if (valid)
+					scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+
+				if (recheck)
+					tbmres->recheck = true;
+			}
 		}
 	}
 	else
diff --git a/src/backend/executor/nodeIndexonlyscan.c b/src/backend/executor/nodeIndexonlyscan.c
index 4f6f91c..49bda34 100644
--- a/src/backend/executor/nodeIndexonlyscan.c
+++ b/src/backend/executor/nodeIndexonlyscan.c
@@ -141,6 +141,26 @@ IndexOnlyNext(IndexOnlyScanState *node)
 			 * but it's not clear whether it's a win to do so.  The next index
 			 * entry might require a visit to the same heap page.
 			 */
+
+			/*
+			 * If the index was lossy or the tuple was WARM, we have to recheck
+			 * the index quals using the fetched tuple.
+			 */
+			if (scandesc->xs_tuple_recheck)
+			{
+				ExecStoreTuple(tuple,	/* tuple to store */
+						slot,	/* slot to store in */
+						scandesc->xs_cbuf,		/* buffer containing tuple */
+						false);	/* don't pfree */
+				econtext->ecxt_scantuple = slot;
+				ResetExprContext(econtext);
+				if (!ExecQual(node->indexqual, econtext, false))
+				{
+					/* Fails recheck, so drop it and loop back for another */
+					InstrCountFiltered2(node, 1);
+					continue;
+				}
+			}
 		}
 
 		/*
diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c
index 3143bd9..0b04bb8 100644
--- a/src/backend/executor/nodeIndexscan.c
+++ b/src/backend/executor/nodeIndexscan.c
@@ -39,6 +39,8 @@
 #include "utils/memutils.h"
 #include "utils/rel.h"
 
+bool indexscan_recheck = false;
+
 /*
  * When an ordering operator is used, tuples fetched from the index that
  * need to be reordered are queued in a pairing heap, as ReorderTuples.
@@ -115,10 +117,10 @@ IndexNext(IndexScanState *node)
 					   false);	/* don't pfree */
 
 		/*
-		 * If the index was lossy, we have to recheck the index quals using
-		 * the fetched tuple.
+		 * If the index was lossy or the tuple was WARM, we have to recheck
+		 * the index quals using the fetched tuple.
 		 */
-		if (scandesc->xs_recheck)
+		if (scandesc->xs_tuple_recheck)
 		{
 			econtext->ecxt_scantuple = slot;
 			ResetExprContext(econtext);
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index efb0c5e..0ba71a3 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -448,6 +448,7 @@ ExecInsert(ModifyTableState *mtstate,
 
 			/* insert index entries for tuple */
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												 &(tuple->t_self), NULL,
 												 estate, true, &specConflict,
 												   arbiterIndexes);
 
@@ -494,6 +495,7 @@ ExecInsert(ModifyTableState *mtstate,
 			/* insert index entries for tuple */
 			if (resultRelInfo->ri_NumIndices > 0)
 				recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+													   &(tuple->t_self), NULL,
 													   estate, false, NULL,
 													   arbiterIndexes);
 		}
@@ -824,6 +826,9 @@ ExecUpdate(ItemPointer tupleid,
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
 	List	   *recheckIndexes = NIL;
+	Bitmapset  *updated_attrs = NULL;
+	ItemPointerData	root_tid;
+	bool		warm_update;
 
 	/*
 	 * abort the operation if not running transactions
@@ -938,7 +943,7 @@ lreplace:;
 							 estate->es_output_cid,
 							 estate->es_crosscheck_snapshot,
 							 true /* wait for commit */ ,
-							 &hufd, &lockmode);
+							 &hufd, &lockmode, &updated_attrs, &warm_update);
 		switch (result)
 		{
 			case HeapTupleSelfUpdated:
@@ -1025,10 +1030,28 @@ lreplace:;
 		 * the t_self field.
 		 *
 		 * If it's a HOT update, we mustn't insert new index entries.
+		 *
+		 * If it's a WARM update, then we must insert new entries with TID
+		 * pointing to the root of the WARM chain.
 		 */
-		if (resultRelInfo->ri_NumIndices > 0 && !HeapTupleIsHeapOnly(tuple))
+		if (resultRelInfo->ri_NumIndices > 0 &&
+			(!HeapTupleIsHeapOnly(tuple) || warm_update))
+		{
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self, &root_tid);
+				bms_free(updated_attrs);
+				updated_attrs = NULL;
+			}
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   updated_attrs,
 												   estate, false, NULL, NIL);
+		}
 	}
 
 	if (canSetTag)
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index a392197..86f803a 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -1823,7 +1823,7 @@ pgstat_count_heap_insert(Relation rel, int n)
  * pgstat_count_heap_update - count a tuple update
  */
 void
-pgstat_count_heap_update(Relation rel, bool hot)
+pgstat_count_heap_update(Relation rel, bool hot, bool warm)
 {
 	PgStat_TableStatus *pgstat_info = rel->pgstat_info;
 
@@ -1841,6 +1841,8 @@ pgstat_count_heap_update(Relation rel, bool hot)
 		/* t_tuples_hot_updated is nontransactional, so just advance it */
 		if (hot)
 			pgstat_info->t_counts.t_tuples_hot_updated++;
+		else if (warm)
+			pgstat_info->t_counts.t_tuples_warm_updated++;
 	}
 }
 
@@ -4080,6 +4082,7 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->tuples_updated = 0;
 		result->tuples_deleted = 0;
 		result->tuples_hot_updated = 0;
+		result->tuples_warm_updated = 0;
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
 		result->changes_since_analyze = 0;
@@ -5189,6 +5192,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated = tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted = tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated = tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated = tabmsg->t_counts.t_tuples_warm_updated;
 			tabentry->n_live_tuples = tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples = tabmsg->t_counts.t_delta_dead_tuples;
 			tabentry->changes_since_analyze = tabmsg->t_counts.t_changed_tuples;
@@ -5216,6 +5220,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated += tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted += tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated += tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated += tabmsg->t_counts.t_tuples_warm_updated;
 			/* If table was truncated, first reset the live/dead counters */
 			if (tabmsg->t_counts.t_truncated)
 			{
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 2d3cf9e..25752b0 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -115,6 +115,7 @@ extern Datum pg_stat_get_xact_tuples_inserted(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_xact_tuples_updated(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_xact_tuples_deleted(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_xact_tuples_hot_updated(PG_FUNCTION_ARGS);
+extern Datum pg_stat_get_xact_tuples_warm_updated(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_xact_blocks_fetched(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_xact_blocks_hit(PG_FUNCTION_ARGS);
 
@@ -245,6 +246,22 @@ pg_stat_get_tuples_hot_updated(PG_FUNCTION_ARGS)
 
 
 Datum
+pg_stat_get_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_StatTabEntry *tabentry;
+
+	if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+
+Datum
 pg_stat_get_live_tuples(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
@@ -1744,6 +1761,21 @@ pg_stat_get_xact_tuples_hot_updated(PG_FUNCTION_ARGS)
 }
 
 Datum
+pg_stat_get_xact_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_TableStatus *tabentry;
+
+	if ((tabentry = find_tabstat_entry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->t_counts.t_tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+Datum
 pg_stat_get_xact_blocks_fetched(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 79e0b1f..37874ca 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -2030,6 +2030,7 @@ RelationDestroyRelation(Relation relation, bool remember_tupdesc)
 	list_free_deep(relation->rd_fkeylist);
 	list_free(relation->rd_indexlist);
 	bms_free(relation->rd_indexattr);
+	bms_free(relation->rd_exprindexattr);
 	bms_free(relation->rd_keyattr);
 	bms_free(relation->rd_idattr);
 	if (relation->rd_options)
@@ -4373,12 +4374,15 @@ Bitmapset *
 RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 {
 	Bitmapset  *indexattrs;		/* indexed columns */
+	Bitmapset  *exprindexattrs; /* indexed columns in expression/predicate
+									 indexes */
 	Bitmapset  *uindexattrs;	/* columns in unique indexes */
 	Bitmapset  *idindexattrs;	/* columns in the replica identity */
 	List	   *indexoidlist;
 	Oid			relreplindex;
 	ListCell   *l;
 	MemoryContext oldcxt;
+	bool		supportswarm = true;	/* true if the table can be WARM updated */
 
 	/* Quick exit if we already computed the result. */
 	if (relation->rd_indexattr != NULL)
@@ -4391,6 +4395,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				return bms_copy(relation->rd_keyattr);
 			case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 				return bms_copy(relation->rd_idattr);
+			case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+				return bms_copy(relation->rd_exprindexattr);
 			default:
 				elog(ERROR, "unknown attrKind %u", attrKind);
 		}
@@ -4429,6 +4435,7 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 	 * won't be returned at all by RelationGetIndexList.
 	 */
 	indexattrs = NULL;
+	exprindexattrs = NULL;
 	uindexattrs = NULL;
 	idindexattrs = NULL;
 	foreach(l, indexoidlist)
@@ -4474,19 +4481,32 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 		}
 
 		/* Collect all attributes used in expressions, too */
-		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &exprindexattrs);
 
 		/* Collect all attributes in the index predicate, too */
-		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &exprindexattrs);
+
+		/*
+		 * Check if the index has an amrecheck method defined. If it does
+		 * not, the index cannot support WARM updates, so completely
+		 * disable WARM updates on such tables.
+		 */
+		if (!indexDesc->rd_amroutine->amrecheck)
+			supportswarm = false;
 
 		index_close(indexDesc, AccessShareLock);
 	}
 
 	list_free(indexoidlist);
 
+	/* Remember if the table can do WARM updates */
+	relation->rd_supportswarm = supportswarm;
+
 	/* Don't leak the old values of these bitmaps, if any */
 	bms_free(relation->rd_indexattr);
 	relation->rd_indexattr = NULL;
+	bms_free(relation->rd_exprindexattr);
+	relation->rd_exprindexattr = NULL;
 	bms_free(relation->rd_keyattr);
 	relation->rd_keyattr = NULL;
 	bms_free(relation->rd_idattr);
@@ -4502,7 +4522,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 	oldcxt = MemoryContextSwitchTo(CacheMemoryContext);
 	relation->rd_keyattr = bms_copy(uindexattrs);
 	relation->rd_idattr = bms_copy(idindexattrs);
-	relation->rd_indexattr = bms_copy(indexattrs);
+	relation->rd_exprindexattr = bms_copy(exprindexattrs);
+	relation->rd_indexattr = bms_union(indexattrs, exprindexattrs);
 	MemoryContextSwitchTo(oldcxt);
 
 	/* We return our original working copy for caller to play with */
@@ -4514,6 +4535,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 			return uindexattrs;
 		case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 			return idindexattrs;
+		case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+			return exprindexattrs;
 		default:
 			elog(ERROR, "unknown attrKind %u", attrKind);
 			return NULL;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 65660c1..e7bf734 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -112,6 +112,7 @@ extern char *default_tablespace;
 extern char *temp_tablespaces;
 extern bool ignore_checksum_failure;
 extern bool synchronize_seqscans;
+extern bool indexscan_recheck;
 
 #ifdef TRACE_SYNCSCAN
 extern bool trace_syncscan;
@@ -1288,6 +1289,16 @@ static struct config_bool ConfigureNamesBool[] =
 		NULL, NULL, NULL
 	},
 	{
+		{"indexscan_recheck", PGC_USERSET, DEVELOPER_OPTIONS,
+			gettext_noop("Recheck heap rows returned from an index scan."),
+			NULL,
+			GUC_NOT_IN_SAMPLE
+		},
+		&indexscan_recheck,
+		false,
+		NULL, NULL, NULL
+	},
+	{
 		{"debug_deadlocks", PGC_SUSET, DEVELOPER_OPTIONS,
 			gettext_noop("Dumps information about all current locks when a deadlock timeout occurs."),
 			NULL,
diff --git a/src/include/access/amapi.h b/src/include/access/amapi.h
index 1036cca..37eaf76 100644
--- a/src/include/access/amapi.h
+++ b/src/include/access/amapi.h
@@ -13,6 +13,7 @@
 #define AMAPI_H
 
 #include "access/genam.h"
+#include "access/itup.h"
 
 /*
  * We don't wish to include planner header files here, since most of an index
@@ -137,6 +138,9 @@ typedef void (*ammarkpos_function) (IndexScanDesc scan);
 /* restore marked scan position */
 typedef void (*amrestrpos_function) (IndexScanDesc scan);
 
+/* recheck that the index tuple and heap tuple match */
+typedef bool (*amrecheck_function) (Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * API struct for an index AM.  Note this must be stored in a single palloc'd
@@ -196,6 +200,7 @@ typedef struct IndexAmRoutine
 	amendscan_function amendscan;
 	ammarkpos_function ammarkpos;		/* can be NULL */
 	amrestrpos_function amrestrpos;		/* can be NULL */
+	amrecheck_function amrecheck;		/* can be NULL */
 } IndexAmRoutine;
 
 
diff --git a/src/include/access/hash.h b/src/include/access/hash.h
index 725e2f2..2f5ef36 100644
--- a/src/include/access/hash.h
+++ b/src/include/access/hash.h
@@ -363,4 +363,8 @@ extern bool _hash_convert_tuple(Relation index,
 extern OffsetNumber _hash_binsearch(Page page, uint32 hash_value);
 extern OffsetNumber _hash_binsearch_last(Page page, uint32 hash_value);
 
+/* hash.c */
+extern bool hashrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+
 #endif   /* HASH_H */
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 81f7982..78e16a9 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -137,9 +137,10 @@ extern bool heap_fetch(Relation relation, Snapshot snapshot,
 		   Relation stats_relation);
 extern bool heap_hot_search_buffer(ItemPointer tid, Relation relation,
 					   Buffer buffer, Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call);
+					   bool *all_dead, bool first_call, bool *recheck);
 extern bool heap_hot_search(ItemPointer tid, Relation relation,
-				Snapshot snapshot, bool *all_dead);
+				Snapshot snapshot, bool *all_dead,
+				bool *recheck, Buffer *buffer, HeapTuple heapTuple);
 
 extern void heap_get_latest_tid(Relation relation, Snapshot snapshot,
 					ItemPointer tid);
@@ -160,7 +161,8 @@ extern void heap_abort_speculative(Relation relation, HeapTuple tuple);
 extern HTSU_Result heap_update(Relation relation, ItemPointer otid,
 			HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode);
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **updated_attrs, bool *warm_update);
 extern HTSU_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
 				CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
 				bool follow_update,
@@ -186,6 +188,7 @@ extern int heap_page_prune(Relation relation, Buffer buffer,
 				bool report_stats, TransactionId *latestRemovedXid);
 extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
+						bool *warmchain,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
 extern void heap_get_root_tuple_one(Page page, OffsetNumber target_offnum,
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index 5a04561..ddc3a7a 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -80,6 +80,7 @@
 #define XLH_UPDATE_CONTAINS_NEW_TUPLE			(1<<4)
 #define XLH_UPDATE_PREFIX_FROM_OLD				(1<<5)
 #define XLH_UPDATE_SUFFIX_FROM_OLD				(1<<6)
+#define XLH_UPDATE_WARM_UPDATE					(1<<7)
 
 /* convenience macro for checking whether any form of old tuple was logged */
 #define XLH_UPDATE_CONTAINS_OLD						\
@@ -211,7 +212,9 @@ typedef struct xl_heap_update
  *	* for each redirected item: the item offset, then the offset redirected to
  *	* for each now-dead item: the item offset
  *	* for each now-unused item: the item offset
- * The total number of OffsetNumbers is therefore 2*nredirected+ndead+nunused.
+ *	* for each now-warm item: the item offset
+ * The total number of OffsetNumbers is therefore
+ * 2*nredirected+ndead+nunused+nwarm.
  * Note that nunused is not explicitly stored, but may be found by reference
  * to the total record length.
  */
@@ -220,10 +223,11 @@ typedef struct xl_heap_clean
 	TransactionId latestRemovedXid;
 	uint16		nredirected;
 	uint16		ndead;
+	uint16		nwarm;
 	/* OFFSET NUMBERS are in the block reference 0 */
 } xl_heap_clean;
 
-#define SizeOfHeapClean (offsetof(xl_heap_clean, ndead) + sizeof(uint16))
+#define SizeOfHeapClean (offsetof(xl_heap_clean, nwarm) + sizeof(uint16))
 
 /*
  * Cleanup_info is required in some cases during a lazy VACUUM.
@@ -384,6 +388,7 @@ extern XLogRecPtr log_heap_cleanup_info(RelFileNode rnode,
 					  TransactionId latestRemovedXid);
 extern XLogRecPtr log_heap_clean(Relation reln, Buffer buffer,
 			   OffsetNumber *redirected, int nredirected,
+			   OffsetNumber *warm, int nwarm,
 			   OffsetNumber *nowdead, int ndead,
 			   OffsetNumber *nowunused, int nunused,
 			   TransactionId latestRemovedXid);
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index d01e0d8..3a51681 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,7 +260,8 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x0800 are available */
+#define HEAP_WARM_TUPLE			0x0800	/* tuple is part of a WARM
+										 * chain */
 #define HEAP_LATEST_TUPLE		0x1000	/*
 										 * This is the last tuple in chain and
 										 * ip_posid points to the root line
@@ -271,7 +272,7 @@ struct HeapTupleHeaderData
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF800	/* visibility-related bits */
 
 
 /*
@@ -510,6 +511,21 @@ do { \
   (tup)->t_infomask2 & HEAP_ONLY_TUPLE \
 )
 
+#define HeapTupleHeaderSetHeapWarmTuple(tup) \
+do { \
+	(tup)->t_infomask2 |= HEAP_WARM_TUPLE; \
+} while (0)
+
+#define HeapTupleHeaderClearHeapWarmTuple(tup) \
+do { \
+	(tup)->t_infomask2 &= ~HEAP_WARM_TUPLE; \
+} while (0)
+
+#define HeapTupleHeaderIsHeapWarmTuple(tup) \
+( \
+  ((tup)->t_infomask2 & HEAP_WARM_TUPLE) \
+)
+
 #define HeapTupleHeaderSetHeapLatest(tup) \
 ( \
 	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE \
@@ -771,6 +787,15 @@ struct MinimalTupleData
 #define HeapTupleClearHeapOnly(tuple) \
 		HeapTupleHeaderClearHeapOnly((tuple)->t_data)
 
+#define HeapTupleIsHeapWarmTuple(tuple) \
+		HeapTupleHeaderIsHeapWarmTuple((tuple)->t_data)
+
+#define HeapTupleSetHeapWarmTuple(tuple) \
+		HeapTupleHeaderSetHeapWarmTuple((tuple)->t_data)
+
+#define HeapTupleClearHeapWarmTuple(tuple) \
+		HeapTupleHeaderClearHeapWarmTuple((tuple)->t_data)
+
 #define HeapTupleGetOid(tuple) \
 		HeapTupleHeaderGetOid((tuple)->t_data)
 
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index c580f51..83af072 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -751,6 +751,8 @@ extern bytea *btoptions(Datum reloptions, bool validate);
 extern bool btproperty(Oid index_oid, int attno,
 		   IndexAMProperty prop, const char *propname,
 		   bool *res, bool *isnull);
+extern bool btrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * prototypes for functions in nbtvalidate.c
diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h
index de98dd6..da7ec84 100644
--- a/src/include/access/relscan.h
+++ b/src/include/access/relscan.h
@@ -111,7 +111,8 @@ typedef struct IndexScanDescData
 	HeapTupleData xs_ctup;		/* current heap tuple, if any */
 	Buffer		xs_cbuf;		/* current heap buffer in scan, if any */
 	/* NB: if xs_cbuf is not InvalidBuffer, we hold a pin on that buffer */
-	bool		xs_recheck;		/* T means scan keys must be rechecked */
+	bool		xs_recheck;		/* T means scan keys must be rechecked for each tuple */
+	bool		xs_tuple_recheck;	/* T means scan keys must be rechecked for current tuple */
 
 	/*
 	 * When fetching with an ordering operator, the values of the ORDER BY
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index 17ec71d..0c4b160 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -2732,6 +2732,8 @@ DATA(insert OID = 1933 (  pg_stat_get_tuples_deleted	PGNSP PGUID 12 1 0 0 0 f f
 DESCR("statistics: number of tuples deleted");
 DATA(insert OID = 1972 (  pg_stat_get_tuples_hot_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated");
+DATA(insert OID = 3344 (  pg_stat_get_tuples_warm_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated");
 DATA(insert OID = 2878 (  pg_stat_get_live_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_live_tuples _null_ _null_ _null_ ));
 DESCR("statistics: number of live tuples");
 DATA(insert OID = 2879 (  pg_stat_get_dead_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_dead_tuples _null_ _null_ _null_ ));
@@ -2882,6 +2884,8 @@ DATA(insert OID = 3042 (  pg_stat_get_xact_tuples_deleted		PGNSP PGUID 12 1 0 0
 DESCR("statistics: number of tuples deleted in current transaction");
 DATA(insert OID = 3043 (  pg_stat_get_xact_tuples_hot_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated in current transaction");
+DATA(insert OID = 3343 (  pg_stat_get_xact_tuples_warm_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated in current transaction");
 DATA(insert OID = 3044 (  pg_stat_get_xact_blocks_fetched		PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_fetched _null_ _null_ _null_ ));
 DESCR("statistics: number of blocks fetched in current transaction");
 DATA(insert OID = 3045 (  pg_stat_get_xact_blocks_hit			PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_hit _null_ _null_ _null_ ));
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 136276b..f463014 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -366,6 +366,7 @@ extern void UnregisterExprContextCallback(ExprContext *econtext,
 extern void ExecOpenIndices(ResultRelInfo *resultRelInfo, bool speculative);
 extern void ExecCloseIndices(ResultRelInfo *resultRelInfo);
 extern List *ExecInsertIndexTuples(TupleTableSlot *slot, ItemPointer tupleid,
+					  ItemPointer root_tid, Bitmapset *updated_attrs,
 					  EState *estate, bool noDupErr, bool *specConflict,
 					  List *arbiterIndexes);
 extern bool ExecCheckIndexConstraints(TupleTableSlot *slot, EState *estate,
diff --git a/src/include/executor/nodeIndexscan.h b/src/include/executor/nodeIndexscan.h
index 194fadb..fe9c78e 100644
--- a/src/include/executor/nodeIndexscan.h
+++ b/src/include/executor/nodeIndexscan.h
@@ -38,4 +38,5 @@ extern bool ExecIndexEvalArrayKeys(ExprContext *econtext,
 					   IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 extern bool ExecIndexAdvanceArrayKeys(IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 
+extern bool indexscan_recheck;
 #endif   /* NODEINDEXSCAN_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index f6f73f3..b0bdc46 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -61,6 +61,7 @@ typedef struct IndexInfo
 	NodeTag		type;
 	int			ii_NumIndexAttrs;
 	AttrNumber	ii_KeyAttrNumbers[INDEX_MAX_KEYS];
+	Bitmapset  *ii_indxattrs;	/* bitmap of all columns used in this index */
 	List	   *ii_Expressions; /* list of Expr */
 	List	   *ii_ExpressionsState;	/* list of ExprState */
 	List	   *ii_Predicate;	/* list of Expr */
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 4e8dac6..8e18c16 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -105,6 +105,7 @@ typedef struct PgStat_TableCounts
 	PgStat_Counter t_tuples_updated;
 	PgStat_Counter t_tuples_deleted;
 	PgStat_Counter t_tuples_hot_updated;
+	PgStat_Counter t_tuples_warm_updated;
 	bool		t_truncated;
 
 	PgStat_Counter t_delta_live_tuples;
@@ -625,6 +626,7 @@ typedef struct PgStat_StatTabEntry
 	PgStat_Counter tuples_updated;
 	PgStat_Counter tuples_deleted;
 	PgStat_Counter tuples_hot_updated;
+	PgStat_Counter tuples_warm_updated;
 
 	PgStat_Counter n_live_tuples;
 	PgStat_Counter n_dead_tuples;
@@ -1175,7 +1177,7 @@ pgstat_report_wait_end(void)
 	(pgStatBlockWriteTime += (n))
 
 extern void pgstat_count_heap_insert(Relation rel, int n);
-extern void pgstat_count_heap_update(Relation rel, bool hot);
+extern void pgstat_count_heap_update(Relation rel, bool hot, bool warm);
 extern void pgstat_count_heap_delete(Relation rel);
 extern void pgstat_count_truncate(Relation rel);
 extern void pgstat_update_heap_dead_tuples(Relation rel, int delta);
diff --git a/src/include/storage/itemid.h b/src/include/storage/itemid.h
index 509c577..8c9cc99 100644
--- a/src/include/storage/itemid.h
+++ b/src/include/storage/itemid.h
@@ -46,6 +46,12 @@ typedef ItemIdData *ItemId;
 typedef uint16 ItemOffset;
 typedef uint16 ItemLength;
 
+/*
+ * Special value used in lp_len to indicate that the chain starting at this
+ * line pointer may contain WARM tuples. This must only be interpreted
+ * together with the LP_REDIRECT flag.
+ */
+#define SpecHeapWarmLen	0x1ffb
 
 /* ----------------
  *		support macros
@@ -112,12 +118,15 @@ typedef uint16 ItemLength;
 #define ItemIdIsDead(itemId) \
 	((itemId)->lp_flags == LP_DEAD)
 
+#define ItemIdIsHeapWarm(itemId) \
+	(((itemId)->lp_flags == LP_REDIRECT) && \
+	 ((itemId)->lp_len == SpecHeapWarmLen))
 /*
  * ItemIdHasStorage
  *		True iff item identifier has associated storage.
  */
 #define ItemIdHasStorage(itemId) \
-	((itemId)->lp_len != 0)
+	(!ItemIdIsRedirected(itemId) && (itemId)->lp_len != 0)
 
 /*
  * ItemIdSetUnused
@@ -168,6 +177,26 @@ typedef uint16 ItemLength;
 )
 
 /*
+ * ItemIdSetHeapWarm
+ * 		Mark the item identifier as the start of a WARM chain
+ *
+ * Note: Since all bits in lp_flags are currently used, we store a special
+ * value in the lp_len field to indicate this state. This is needed only for
+ * LP_REDIRECT line pointers, whose lp_len field is otherwise unused.
+ */
+#define ItemIdSetHeapWarm(itemId) \
+do { \
+	AssertMacro((itemId)->lp_flags == LP_REDIRECT); \
+	(itemId)->lp_len = SpecHeapWarmLen; \
+} while (0)
+
+#define ItemIdClearHeapWarm(itemId) \
+( \
+	AssertMacro((itemId)->lp_flags == LP_REDIRECT); \
+	(itemId)->lp_len = 0; \
+)
+
+/*
  * ItemIdMarkDead
  *		Set the item identifier to be DEAD, keeping its existing storage.
  *
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index c867ebb..af25f44 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -101,8 +101,11 @@ typedef struct RelationData
 
 	/* data managed by RelationGetIndexAttrBitmap: */
 	Bitmapset  *rd_indexattr;	/* identifies columns used in indexes */
+	Bitmapset  *rd_exprindexattr; /* identifies columns used in expression or
+									 predicate indexes */
 	Bitmapset  *rd_keyattr;		/* cols that can be ref'd by foreign keys */
 	Bitmapset  *rd_idattr;		/* included in replica identity index */
+	bool		rd_supportswarm;	/* true if the table can be WARM updated */
 
 	/*
 	 * rd_options is set whenever rd_rel is loaded into the relcache entry.
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index 6ea7dd2..290e9b7 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -48,7 +48,8 @@ typedef enum IndexAttrBitmapKind
 {
 	INDEX_ATTR_BITMAP_ALL,
 	INDEX_ATTR_BITMAP_KEY,
-	INDEX_ATTR_BITMAP_IDENTITY_KEY
+	INDEX_ATTR_BITMAP_IDENTITY_KEY,
+	INDEX_ATTR_BITMAP_EXPR_PREDICATE
 } IndexAttrBitmapKind;
 
 extern Bitmapset *RelationGetIndexAttrBitmap(Relation relation,
diff --git a/src/test/regress/expected/warm.out b/src/test/regress/expected/warm.out
new file mode 100644
index 0000000..0aa1b83
--- /dev/null
+++ b/src/test/regress/expected/warm.out
@@ -0,0 +1,51 @@
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+explain select * from test_warm where lower(a) = 'test';
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on test_warm  (cost=4.18..12.65 rows=4 width=64)
+   Recheck Cond: (lower(a) = 'test'::text)
+   ->  Bitmap Index Scan on test_warmindx  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (lower(a) = 'test'::text)
+(4 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+select *, ctid from test_warm where a = 'test';
+ a | b | ctid 
+---+---+------
+(0 rows)
+
+select *, ctid from test_warm where a = 'TEST';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+                                   QUERY PLAN                                    
+---------------------------------------------------------------------------------
+ Index Scan using test_warmindx on test_warm  (cost=0.15..20.22 rows=4 width=64)
+   Index Cond: (lower(a) = 'test'::text)
+(2 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+DROP TABLE test_warm;
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index 8641769..a610039 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -42,6 +42,8 @@ test: create_type
 test: create_table
 test: create_function_2
 
+test: warm
+
 # ----------
 # Load huge amounts of data
 # We should split the data files into single files and then
diff --git a/src/test/regress/sql/warm.sql b/src/test/regress/sql/warm.sql
new file mode 100644
index 0000000..166ea37
--- /dev/null
+++ b/src/test/regress/sql/warm.sql
@@ -0,0 +1,15 @@
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where a = 'test';
+select *, ctid from test_warm where a = 'TEST';
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+DROP TABLE test_warm;
+
+
#16Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Pavan Deolasee (#15)
1 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Sat, Nov 12, 2016 at 10:12 PM, Pavan Deolasee <pavan.deolasee@gmail.com>
wrote:

On Tue, Nov 8, 2016 at 9:13 AM, Haribabu Kommi <kommi.haribabu@gmail.com>
wrote:

Thanks for the patch. This shows a very good performance improvement.

Thank you. Can you please share the benchmark you ran, results and
observations?

I just ran a performance test on my laptop with a minimal configuration; it
didn't show much improvement. Currently I don't have access to a big machine
to test the performance.

I started reviewing the patch, and during this process I ran the regression
tests on the WARM patch. I observed a failure in the create_index test.
This may be a bug in the code, or an expected change that needs to be
reflected in the expected output.

Can you please share the diff? I ran the regression tests after applying the
patch on the current master and did not find any change. Does it happen
consistently?

Yes, it is happening consistently. I ran make installcheck. Attached
the regression.diffs file with the failed test.
I applied the previous warm patch on this commit -
e3e66d8a9813d22c2aa027d8f373a96d4d4c1b15

Regards,
Hari Babu
Fujitsu Australia

Attachments:

regression.diffs (application/octet-stream)
*** /media/sf_code/fast/fujitsu-oss-postgres/src/test/regress/expected/create_index.out	2016-11-09 12:50:55.017043300 +1100
--- /media/sf_code/fast/fujitsu-oss-postgres/src/test/regress/results/create_index.out	2016-11-15 11:16:39.341650900 +1100
***************
*** 473,479 ****
           f1          
  ---------------------
   ((2,0),(2,4),(0,0))
! (1 row)
  
  EXPLAIN (COSTS OFF)
  SELECT * FROM circle_tbl WHERE f1 && circle(point(1,-2), 1)
--- 473,480 ----
           f1          
  ---------------------
   ((2,0),(2,4),(0,0))
!  ((3,1),(3,3),(1,0))
! (2 rows)
  
  EXPLAIN (COSTS OFF)
  SELECT * FROM circle_tbl WHERE f1 && circle(point(1,-2), 1)
***************
*** 508,514 ****
  SELECT count(*) FROM gpolygon_tbl WHERE f1 && '(1000,1000,0,0)'::polygon;
   count 
  -------
!      2
  (1 row)
  
  EXPLAIN (COSTS OFF)
--- 509,515 ----
  SELECT count(*) FROM gpolygon_tbl WHERE f1 && '(1000,1000,0,0)'::polygon;
   count 
  -------
!      4
  (1 row)
  
  EXPLAIN (COSTS OFF)

======================================================================

#17Haribabu Kommi
kommi.haribabu@gmail.com
In reply to: Haribabu Kommi (#16)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Nov 15, 2016 at 5:58 PM, Haribabu Kommi <kommi.haribabu@gmail.com>
wrote:

On Sat, Nov 12, 2016 at 10:12 PM, Pavan Deolasee <pavan.deolasee@gmail.com

wrote:

On Tue, Nov 8, 2016 at 9:13 AM, Haribabu Kommi <kommi.haribabu@gmail.com>
wrote:

Thanks for the patch. This shows a very good performance improvement.

Thank you. Can you please share the benchmark you ran, results and
observations?

I just ran a performance test on my laptop with a minimal configuration; it
didn't show much improvement. Currently I don't have access to a big machine
to test the performance.

I started reviewing the patch, and during this process I ran the regression
tests on the WARM patch. I observed a failure in the create_index test.
This may be a bug in the code, or an expected change that needs to be
reflected in the expected output.

Can you please share the diff? I ran the regression tests after applying the
patch on the current master and did not find any change. Does it happen
consistently?

Yes, it is happening consistently. I ran make installcheck. Attached
the regression.diffs file with the failed test.
I applied the previous warm patch on this commit
- e3e66d8a9813d22c2aa027d8f373a96d4d4c1b15

Are you able to reproduce the issue?

Currently the patch is moved to next CF with "needs review" state.

Regards,
Hari Babu
Fujitsu Australia

#18Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Haribabu Kommi (#17)
2 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Fri, Dec 2, 2016 at 8:34 AM, Haribabu Kommi <kommi.haribabu@gmail.com>
wrote:

On Tue, Nov 15, 2016 at 5:58 PM, Haribabu Kommi <kommi.haribabu@gmail.com>
wrote:

Yes, it is happening consistently. I ran the make installcheck. Attached
the regression.diffs file with the failed test.
I applied the previous warm patch on this commit
- e3e66d8a9813d22c2aa027d8f373a96d4d4c1b15

Are you able to reproduce the issue?

Apologies for the delay. I could reproduce this in a different environment.
It was a case of an uninitialised variable, hence the inconsistent results.

I've updated the patches after fixing the issue. Multiple rounds of
regression tests pass for me without any issue. Please let me know if it
works for you.

Currently the patch is moved to next CF with "needs review" state.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0001_track_root_lp_v6.patch (application/octet-stream)
commit f33ee503463137aa1a2ae4c3ab04d1468ae1941c
Author: Pavan Deolasee <pavan.deolasee@gmail.com>
Date:   Sat Sep 3 14:51:00 2016 +0530

    Use HEAP_TUPLE_LATEST to mark a tuple as the latest tuple in an update chain
    and use OffsetNumber in t_ctid to store the root line pointer of the chain.
    
    The t_ctid field in the tuple header is usually used to store the TID of the
    next tuple in an update chain. But for the last tuple in the chain, t_ctid is
    made to point to itself, which signals the end of the chain. With this patch,
    the fact that a tuple is the last tuple in the chain is instead stored in a
    separate HEAP_TUPLE_LATEST flag, which uses another free bit in t_infomask2.
    When HEAP_TUPLE_LATEST is set, the OffsetNumber field in t_ctid stores the
    root line pointer of the chain. This helps us quickly find the root of an
    update chain.

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 6a27ef4..ccf84be 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -93,7 +93,8 @@ static HeapTuple heap_prepare_insert(Relation relation, HeapTuple tup,
 					TransactionId xid, CommandId cid, int options);
 static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
-				HeapTuple newtup, HeapTuple old_key_tup,
+				HeapTuple newtup, OffsetNumber root_offnum,
+				HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
 static void HeapSatisfiesHOTandKeyUpdate(Relation relation,
 							 Bitmapset *hot_attrs,
@@ -2250,13 +2251,13 @@ heap_get_latest_tid(Relation relation,
 		 */
 		if ((tp.t_data->t_infomask & HEAP_XMAX_INVALID) ||
 			HeapTupleHeaderIsOnlyLocked(tp.t_data) ||
-			ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid))
+			HeapTupleHeaderIsHeapLatest(tp.t_data, ctid))
 		{
 			UnlockReleaseBuffer(buffer);
 			break;
 		}
 
-		ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(tp.t_data, &ctid, offnum);
 		priorXmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		UnlockReleaseBuffer(buffer);
 	}							/* end of loop */
@@ -2415,7 +2416,8 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	START_CRIT_SECTION();
 
 	RelationPutHeapTuple(relation, buffer, heaptup,
-						 (options & HEAP_INSERT_SPECULATIVE) != 0);
+						 (options & HEAP_INSERT_SPECULATIVE) != 0,
+						 InvalidOffsetNumber);
 
 	if (PageIsAllVisible(BufferGetPage(buffer)))
 	{
@@ -2713,7 +2715,8 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		 * RelationGetBufferForTuple has ensured that the first tuple fits.
 		 * Put that on the page, and then as many other tuples as fit.
 		 */
-		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false);
+		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false,
+				InvalidOffsetNumber);
 		for (nthispage = 1; ndone + nthispage < ntuples; nthispage++)
 		{
 			HeapTuple	heaptup = heaptuples[ndone + nthispage];
@@ -2721,7 +2724,8 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			if (PageGetHeapFreeSpace(page) < MAXALIGN(heaptup->t_len) + saveFreeSpace)
 				break;
 
-			RelationPutHeapTuple(relation, buffer, heaptup, false);
+			RelationPutHeapTuple(relation, buffer, heaptup, false,
+					InvalidOffsetNumber);
 
 			/*
 			 * We don't use heap_multi_insert for catalog tuples yet, but
@@ -2993,6 +2997,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	HeapTupleData tp;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	TransactionId new_xmax;
@@ -3044,7 +3049,8 @@ heap_delete(Relation relation, ItemPointer tid,
 		LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 	}
 
-	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid));
+	offnum = ItemPointerGetOffsetNumber(tid);
+	lp = PageGetItemId(page, offnum);
 	Assert(ItemIdIsNormal(lp));
 
 	tp.t_tableOid = RelationGetRelid(relation);
@@ -3174,7 +3180,7 @@ l1:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(tp.t_data, &hufd->ctid, offnum);
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tp.t_data);
@@ -3250,8 +3256,8 @@ l1:
 	HeapTupleHeaderClearHotUpdated(tp.t_data);
 	HeapTupleHeaderSetXmax(tp.t_data, new_xmax);
 	HeapTupleHeaderSetCmax(tp.t_data, cid, iscombo);
-	/* Make sure there is no forward chain link in t_ctid */
-	tp.t_data->t_ctid = tp.t_self;
+	/* Mark this tuple as the latest tuple in the update chain */
+	HeapTupleHeaderSetHeapLatest(tp.t_data);
 
 	MarkBufferDirty(buffer);
 
@@ -3450,6 +3456,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		old_key_copied = false;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
+	OffsetNumber	root_offnum;
 	MultiXactStatus mxact_status;
 	Buffer		buffer,
 				newbuf,
@@ -3506,6 +3514,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
 
 	block = ItemPointerGetBlockNumber(otid);
+	offnum = ItemPointerGetOffsetNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
@@ -3789,7 +3798,7 @@ l2:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(oldtup.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = oldtup.t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(oldtup.t_data, &hufd->ctid, offnum);
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(oldtup.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(oldtup.t_data);
@@ -3968,7 +3977,7 @@ l2:
 		HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 		/* temporarily make it look not-updated, but locked */
-		oldtup.t_data->t_ctid = oldtup.t_self;
+		HeapTupleHeaderSetHeapLatest(oldtup.t_data);
 
 		/*
 		 * Clear all-frozen bit on visibility map if needed. We could
@@ -4149,6 +4158,20 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+		/*
+		 * For HOT (or WARM) updated tuples, we store the offset of the root
+		 * line pointer of this chain in the ip_posid field of the new tuple.
+		 * Usually this information will be available in the corresponding
+		 * field of the old tuple. But for aborted updates or pg_upgraded
+		 * databases, we might be seeing old-style CTID chains, and hence
+		 * the information must be obtained the hard way.
+		 */
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			heap_get_root_tuple_one(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)),
+					&root_offnum);
 	}
 	else
 	{
@@ -4156,10 +4179,29 @@ l2:
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		root_offnum = InvalidOffsetNumber;
 	}
 
-	RelationPutHeapTuple(relation, newbuf, heaptup, false);		/* insert new tuple */
+	/* insert new tuple */
+	RelationPutHeapTuple(relation, newbuf, heaptup, false, root_offnum);
+	HeapTupleHeaderSetHeapLatest(heaptup->t_data);
+	HeapTupleHeaderSetHeapLatest(newtup->t_data);
 
+	/*
+	 * Also update the in-memory copy with the root line pointer information
+	 */
+	if (OffsetNumberIsValid(root_offnum))
+	{
+		HeapTupleHeaderSetRootOffset(heaptup->t_data, root_offnum);
+		HeapTupleHeaderSetRootOffset(newtup->t_data, root_offnum);
+	}
+	else
+	{
+		HeapTupleHeaderSetRootOffset(heaptup->t_data,
+				ItemPointerGetOffsetNumber(&heaptup->t_self));
+		HeapTupleHeaderSetRootOffset(newtup->t_data,
+				ItemPointerGetOffsetNumber(&heaptup->t_self));
+	}
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
 	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
@@ -4172,7 +4214,9 @@ l2:
 	HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 	/* record address of new tuple in t_ctid of old one */
-	oldtup.t_data->t_ctid = heaptup->t_self;
+	HeapTupleHeaderSetNextCtid(oldtup.t_data,
+			ItemPointerGetBlockNumber(&(heaptup->t_self)),
+			ItemPointerGetOffsetNumber(&(heaptup->t_self)));
 
 	/* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */
 	if (PageIsAllVisible(BufferGetPage(buffer)))
@@ -4211,6 +4255,7 @@ l2:
 
 		recptr = log_heap_update(relation, buffer,
 								 newbuf, &oldtup, heaptup,
+								 root_offnum,
 								 old_key_tuple,
 								 all_visible_cleared,
 								 all_visible_cleared_new);
@@ -4573,7 +4618,8 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	ItemId		lp;
 	Page		page;
 	Buffer		vmbuffer = InvalidBuffer;
-	BlockNumber block;
+	BlockNumber	block;
+	OffsetNumber	offnum;
 	TransactionId xid,
 				xmax;
 	uint16		old_infomask,
@@ -4585,6 +4631,7 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 
 	*buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	block = ItemPointerGetBlockNumber(tid);
+	offnum = ItemPointerGetOffsetNumber(tid);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -4631,7 +4678,7 @@ l3:
 		xwait = HeapTupleHeaderGetRawXmax(tuple->t_data);
 		infomask = tuple->t_data->t_infomask;
 		infomask2 = tuple->t_data->t_infomask2;
-		ItemPointerCopy(&tuple->t_data->t_ctid, &t_ctid);
+		HeapTupleHeaderGetNextCtid(tuple->t_data, &t_ctid, offnum);
 
 		LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
 
@@ -5069,7 +5116,7 @@ failed:
 		Assert(result == HeapTupleSelfUpdated || result == HeapTupleUpdated ||
 			   result == HeapTupleWouldBlock);
 		Assert(!(tuple->t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tuple->t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(tuple->t_data, &hufd->ctid, offnum);
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tuple->t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tuple->t_data);
@@ -5145,7 +5192,7 @@ failed:
 	 * the tuple as well.
 	 */
 	if (HEAP_XMAX_IS_LOCKED_ONLY(new_infomask))
-		tuple->t_data->t_ctid = *tid;
+		HeapTupleHeaderSetHeapLatest(tuple->t_data);
 
 	/* Clear only the all-frozen bit on visibility map if needed */
 	if (PageIsAllVisible(page) &&
@@ -5659,6 +5706,7 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 	bool		cleared_all_frozen = false;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber block;
+	OffsetNumber offnum;
 
 	ItemPointerCopy(tid, &tupid);
 
@@ -5667,6 +5715,8 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 		new_infomask = 0;
 		new_xmax = InvalidTransactionId;
 		block = ItemPointerGetBlockNumber(&tupid);
+		offnum = ItemPointerGetOffsetNumber(&tupid);
+
 		ItemPointerCopy(&tupid, &(mytup.t_self));
 
 		if (!heap_fetch(rel, SnapshotAny, &mytup, &buf, false, NULL))
@@ -5885,7 +5935,7 @@ l4:
 
 		/* if we find the end of update chain, we're done. */
 		if (mytup.t_data->t_infomask & HEAP_XMAX_INVALID ||
-			ItemPointerEquals(&mytup.t_self, &mytup.t_data->t_ctid) ||
+			HeapTupleHeaderIsHeapLatest(mytup.t_data, mytup.t_self) ||
 			HeapTupleHeaderIsOnlyLocked(mytup.t_data))
 		{
 			result = HeapTupleMayBeUpdated;
@@ -5894,7 +5944,7 @@ l4:
 
 		/* tail recursion */
 		priorXmax = HeapTupleHeaderGetUpdateXid(mytup.t_data);
-		ItemPointerCopy(&(mytup.t_data->t_ctid), &tupid);
+		HeapTupleHeaderGetNextCtid(mytup.t_data, &tupid, offnum);
 		UnlockReleaseBuffer(buf);
 		if (vmbuffer != InvalidBuffer)
 			ReleaseBuffer(vmbuffer);
@@ -6011,7 +6061,8 @@ heap_finish_speculative(Relation relation, HeapTuple tuple)
 	 * Replace the speculative insertion token with a real t_ctid, pointing to
 	 * itself like it does on regular tuples.
 	 */
-	htup->t_ctid = tuple->t_self;
+	HeapTupleHeaderSetHeapLatest(htup);
+	HeapTupleHeaderSetRootOffset(htup, offnum);
 
 	/* XLOG stuff */
 	if (RelationNeedsWAL(relation))
@@ -6137,7 +6188,9 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	HeapTupleHeaderSetXmin(tp.t_data, InvalidTransactionId);
 
 	/* Clear the speculative insertion token too */
-	tp.t_data->t_ctid = tp.t_self;
+	HeapTupleHeaderSetNextCtid(tp.t_data,
+			ItemPointerGetBlockNumber(&tp.t_self),
+			ItemPointerGetOffsetNumber(&tp.t_self));
 
 	MarkBufferDirty(buffer);
 
@@ -7486,6 +7539,7 @@ log_heap_visible(RelFileNode rnode, Buffer heap_buffer, Buffer vm_buffer,
 static XLogRecPtr
 log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup, HeapTuple newtup,
+				OffsetNumber root_offnum,
 				HeapTuple old_key_tuple,
 				bool all_visible_cleared, bool new_all_visible_cleared)
 {
@@ -7605,6 +7659,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	/* Prepare WAL data for the new page */
 	xlrec.new_offnum = ItemPointerGetOffsetNumber(&newtup->t_self);
 	xlrec.new_xmax = HeapTupleHeaderGetRawXmax(newtup->t_data);
+	xlrec.root_offnum = root_offnum;
 
 	bufflags = REGBUF_STANDARD;
 	if (init)
@@ -8260,7 +8315,7 @@ heap_xlog_delete(XLogReaderState *record)
 			PageClearAllVisible(page);
 
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = target_tid;
+		HeapTupleHeaderSetHeapLatest(htup);
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8350,7 +8405,9 @@ heap_xlog_insert(XLogReaderState *record)
 		htup->t_hoff = xlhdr.t_hoff;
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
-		htup->t_ctid = target_tid;
+
+		HeapTupleHeaderSetHeapLatest(htup);
+		HeapTupleHeaderSetRootOffset(htup, xlrec->offnum);
 
 		if (PageAddItem(page, (Item) htup, newlen, xlrec->offnum,
 						true, true) == InvalidOffsetNumber)
@@ -8485,8 +8542,9 @@ heap_xlog_multi_insert(XLogReaderState *record)
 			htup->t_hoff = xlhdr->t_hoff;
 			HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 			HeapTupleHeaderSetCmin(htup, FirstCommandId);
-			ItemPointerSetBlockNumber(&htup->t_ctid, blkno);
-			ItemPointerSetOffsetNumber(&htup->t_ctid, offnum);
+
+			HeapTupleHeaderSetHeapLatest(htup);
+			HeapTupleHeaderSetRootOffset(htup, offnum);
 
 			offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 			if (offnum == InvalidOffsetNumber)
@@ -8622,7 +8680,8 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
 		/* Set forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetNextCtid(htup, ItemPointerGetBlockNumber(&newtid),
+				ItemPointerGetOffsetNumber(&newtid));
 
 		/* Mark the page as a candidate for pruning */
 		PageSetPrunable(page, XLogRecGetXid(record));
@@ -8756,12 +8815,17 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetHeapLatest(htup);
 
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
 
+		if (OffsetNumberIsValid(xlrec->root_offnum))
+			HeapTupleHeaderSetRootOffset(htup, xlrec->root_offnum);
+		else
+			HeapTupleHeaderSetRootOffset(htup, offnum);
+
 		if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED)
 			PageClearAllVisible(page);
 
@@ -8889,9 +8953,7 @@ heap_xlog_lock(XLogReaderState *record)
 		{
 			HeapTupleHeaderClearHotUpdated(htup);
 			/* Make sure there is no forward chain link in t_ctid */
-			ItemPointerSet(&htup->t_ctid,
-						   BufferGetBlockNumber(buffer),
-						   offnum);
+			HeapTupleHeaderSetHeapLatest(htup);
 		}
 		HeapTupleHeaderSetXmax(htup, xlrec->locking_xid);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index c90fb71..e32deb1 100644
--- a/src/backend/access/heap/hio.c
+++ b/src/backend/access/heap/hio.c
@@ -31,12 +31,18 @@
  * !!! EREPORT(ERROR) IS DISALLOWED HERE !!!  Must PANIC on failure!!!
  *
  * Note - caller must hold BUFFER_LOCK_EXCLUSIVE on the buffer.
+ *
+ * The caller can optionally tell us to set the root offset to the given value.
+ * Otherwise, the root offset is set to the offset of the new location once it
+ * is known. The former is used while updating an existing tuple, while the
+ * latter is used during insertion of a new row.
  */
 void
 RelationPutHeapTuple(Relation relation,
 					 Buffer buffer,
 					 HeapTuple tuple,
-					 bool token)
+					 bool token,
+					 OffsetNumber root_offnum)
 {
 	Page		pageHeader;
 	OffsetNumber offnum;
@@ -69,7 +75,13 @@ RelationPutHeapTuple(Relation relation,
 		ItemId		itemId = PageGetItemId(pageHeader, offnum);
 		Item		item = PageGetItem(pageHeader, itemId);
 
-		((HeapTupleHeader) item)->t_ctid = tuple->t_self;
+		HeapTupleHeaderSetHeapLatest((HeapTupleHeader) item);
+		if (OffsetNumberIsValid(root_offnum))
+			HeapTupleHeaderSetRootOffset((HeapTupleHeader) item,
+					root_offnum);
+		else
+			HeapTupleHeaderSetRootOffset((HeapTupleHeader) item,
+					offnum);
 	}
 }
 
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index 6ff9251..7c2231a 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -55,6 +55,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
 
+static void heap_get_root_tuples_internal(Page page,
+				OffsetNumber target_offnum, OffsetNumber *root_offsets);
 
 /*
  * Optionally prune and repair fragmentation in the specified page.
@@ -740,8 +742,9 @@ heap_page_prune_execute(Buffer buffer,
  * holds a pin on the buffer. Once pin is released, a tuple might be pruned
  * and reused by a completely unrelated tuple.
  */
-void
-heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+static void
+heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offsets)
 {
 	OffsetNumber offnum,
 				maxoff;
@@ -820,6 +823,14 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 			/* Remember the root line pointer for this item */
 			root_offsets[nextoffnum - 1] = offnum;
 
+			/*
			 * If the caller is interested in just one offset and we have
			 * found it, return early
+			 */
+			if (OffsetNumberIsValid(target_offnum) &&
+					(nextoffnum == target_offnum))
+				return;
+
 			/* Advance to next chain member, if any */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				break;
@@ -829,3 +840,25 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 		}
 	}
 }
+
+/*
+ * Get root line pointer for the given tuple
+ */
+void
+heap_get_root_tuple_one(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offnum)
+{
+	OffsetNumber offsets[MaxHeapTuplesPerPage];
+	heap_get_root_tuples_internal(page, target_offnum, offsets);
+	*root_offnum = offsets[target_offnum - 1];
+}
+
+/*
+ * Get root line pointers for all tuples in the page
+ */
+void
+heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+{
+	return heap_get_root_tuples_internal(page, InvalidOffsetNumber,
+			root_offsets);
+}
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index 17584ba..09a164c 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -419,14 +419,14 @@ rewrite_heap_tuple(RewriteState state,
 	 */
 	if (!((old_tuple->t_data->t_infomask & HEAP_XMAX_INVALID) ||
 		  HeapTupleHeaderIsOnlyLocked(old_tuple->t_data)) &&
-		!(ItemPointerEquals(&(old_tuple->t_self),
-							&(old_tuple->t_data->t_ctid))))
+		!(HeapTupleHeaderIsHeapLatest(old_tuple->t_data, old_tuple->t_self)))
 	{
 		OldToNewMapping mapping;
 
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
-		hashkey.tid = old_tuple->t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(old_tuple->t_data, &hashkey.tid,
+				ItemPointerGetOffsetNumber(&old_tuple->t_self));
 
 		mapping = (OldToNewMapping)
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -439,7 +439,9 @@ rewrite_heap_tuple(RewriteState state,
 			 * set the ctid of this tuple to point to the new location, and
 			 * insert it right away.
 			 */
-			new_tuple->t_data->t_ctid = mapping->new_tid;
+			HeapTupleHeaderSetNextCtid(new_tuple->t_data,
+					ItemPointerGetBlockNumber(&mapping->new_tid),
+					ItemPointerGetOffsetNumber(&mapping->new_tid));
 
 			/* We don't need the mapping entry anymore */
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -525,7 +527,9 @@ rewrite_heap_tuple(RewriteState state,
 				new_tuple = unresolved->tuple;
 				free_new = true;
 				old_tid = unresolved->old_tid;
-				new_tuple->t_data->t_ctid = new_tid;
+				HeapTupleHeaderSetNextCtid(new_tuple->t_data,
+						ItemPointerGetBlockNumber(&new_tid),
+						ItemPointerGetOffsetNumber(&new_tid));
 
 				/*
 				 * We don't need the hash entry anymore, but don't free its
@@ -731,7 +735,10 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		onpage_tup->t_ctid = tup->t_self;
+		HeapTupleHeaderSetNextCtid(onpage_tup,
+				ItemPointerGetBlockNumber(&tup->t_self),
+				ItemPointerGetOffsetNumber(&tup->t_self));
+		HeapTupleHeaderSetHeapLatest(onpage_tup);
 	}
 
 	/* If heaptup is a private copy, release it. */
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 32bb3f9..079a77f 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -2443,7 +2443,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		 * As above, it should be safe to examine xmax and t_ctid without the
 		 * buffer content lock, because they can't be changing.
 		 */
-		if (ItemPointerEquals(&tuple.t_self, &tuple.t_data->t_ctid))
+		if (HeapTupleHeaderIsHeapLatest(tuple.t_data, tuple.t_self))
 		{
 			/* deleted, so forget about it */
 			ReleaseBuffer(buffer);
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index b3a595c..94b46b8 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -188,6 +188,8 @@ extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
+extern void heap_get_root_tuple_one(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offnum);
 extern void heap_get_root_tuples(Page page, OffsetNumber *root_offsets);
 
 /* in heap/syncscan.c */
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index 06a8242..5a04561 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -193,6 +193,8 @@ typedef struct xl_heap_update
 	uint8		flags;
 	TransactionId new_xmax;		/* xmax of the new tuple */
 	OffsetNumber new_offnum;	/* new tuple's offset */
+	OffsetNumber root_offnum;	/* offset of the root line pointer in case of
+								   HOT or WARM update */
 
 	/*
 	 * If XLOG_HEAP_CONTAINS_OLD_TUPLE or XLOG_HEAP_CONTAINS_OLD_KEY flags are
@@ -200,7 +202,7 @@ typedef struct xl_heap_update
 	 */
 } xl_heap_update;
 
-#define SizeOfHeapUpdate	(offsetof(xl_heap_update, new_offnum) + sizeof(OffsetNumber))
+#define SizeOfHeapUpdate	(offsetof(xl_heap_update, root_offnum) + sizeof(OffsetNumber))
 
 /*
  * This is what we need to know about vacuum page cleanup/redirect
diff --git a/src/include/access/hio.h b/src/include/access/hio.h
index a174b34..82e5b5f 100644
--- a/src/include/access/hio.h
+++ b/src/include/access/hio.h
@@ -36,7 +36,7 @@ typedef struct BulkInsertStateData
 
 
 extern void RelationPutHeapTuple(Relation relation, Buffer buffer,
-					 HeapTuple tuple, bool token);
+					 HeapTuple tuple, bool token, OffsetNumber root_offnum);
 extern Buffer RelationGetBufferForTuple(Relation relation, Size len,
 						  Buffer otherBuffer, int options,
 						  BulkInsertState bistate,
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index d7e5fad..d01e0d8 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,13 +260,19 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x1800 are available */
+/* bit 0x0800 is available */
+#define HEAP_LATEST_TUPLE		0x1000	/*
+										 * This is the last tuple in chain and
+										 * ip_posid points to the root line
+										 * pointer
+										 */
 #define HEAP_KEYS_UPDATED		0x2000	/* tuple was updated and key cols
 										 * modified, or tuple deleted */
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xE000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+
 
 /*
  * HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins.  It is
@@ -504,6 +510,30 @@ do { \
   (tup)->t_infomask2 & HEAP_ONLY_TUPLE \
 )
 
+#define HeapTupleHeaderSetHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE \
+)
+
+#define HeapTupleHeaderClearHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 &= ~HEAP_LATEST_TUPLE \
+)
+
+/*
+ * HEAP_LATEST_TUPLE is set on the last tuple in the update chain.  For
+ * clusters upgraded from a pre-10.0 release, we also check whether t_ctid
+ * points to the tuple itself, and declare such a tuple the latest in the
+ * chain
+ */
+#define HeapTupleHeaderIsHeapLatest(tup, tid) \
+( \
+  ((tup)->t_infomask2 & HEAP_LATEST_TUPLE) || \
+  ((ItemPointerGetBlockNumber(&(tup)->t_ctid) == ItemPointerGetBlockNumber(&tid)) && \
+   (ItemPointerGetOffsetNumber(&(tup)->t_ctid) == ItemPointerGetOffsetNumber(&tid))) \
+)
+
+
 #define HeapTupleHeaderSetHeapOnly(tup) \
 ( \
   (tup)->t_infomask2 |= HEAP_ONLY_TUPLE \
@@ -542,6 +572,55 @@ do { \
 
 
 /*
+ * Set the forward chain link in t_ctid and clear the HEAP_LATEST_TUPLE
+ * flag, since the chain now has a newer tuple
+ */
+#define HeapTupleHeaderSetNextCtid(tup, block, offset) \
+do { \
+		ItemPointerSetBlockNumber(&((tup)->t_ctid), (block)); \
+		ItemPointerSetOffsetNumber(&((tup)->t_ctid), (offset)); \
+		HeapTupleHeaderClearHeapLatest((tup)); \
+} while (0)
+
+/*
+ * Get the TID of the next tuple in the update chain.  Traditionally, we have
+ * stored the tuple's own TID in the t_ctid field when it is the last tuple in
+ * the chain.  We preserve that behaviour by returning the self-TID when the
+ * HEAP_LATEST_TUPLE flag is set.
+ */
+#define HeapTupleHeaderGetNextCtid(tup, next_ctid, offnum) \
+do { \
+	if ((tup)->t_infomask2 & HEAP_LATEST_TUPLE) \
+	{ \
+		ItemPointerSet((next_ctid), ItemPointerGetBlockNumber(&(tup)->t_ctid), \
+				(offnum)); \
+	} \
+	else \
+	{ \
+		ItemPointerSet((next_ctid), ItemPointerGetBlockNumber(&(tup)->t_ctid), \
+				ItemPointerGetOffsetNumber(&(tup)->t_ctid)); \
+	} \
+} while (0)
+
+#define HeapTupleHeaderSetRootOffset(tup, offset) \
+do { \
+	AssertMacro(!HeapTupleHeaderIsHotUpdated(tup)); \
+	AssertMacro((tup)->t_infomask2 & HEAP_LATEST_TUPLE); \
+	ItemPointerSetOffsetNumber(&(tup)->t_ctid, (offset)); \
+} while (0)
+
+#define HeapTupleHeaderGetRootOffset(tup) \
+( \
+	AssertMacro((tup)->t_infomask2 & HEAP_LATEST_TUPLE), \
+	ItemPointerGetOffsetNumber(&(tup)->t_ctid) \
+)
+
+#define HeapTupleHeaderHasRootOffset(tup) \
+( \
+	(tup)->t_infomask2 & HEAP_LATEST_TUPLE \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
0002_warm_updates_v6.patch (application/octet-stream)
diff --git a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c
index b68a0d1..b95275f 100644
--- a/contrib/bloom/blutils.c
+++ b/contrib/bloom/blutils.c
@@ -138,6 +138,7 @@ blhandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = blendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index 1b45a4c..ba3fffb 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -111,6 +111,7 @@ brinhandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = brinendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c
index b8aa9bc..491e411 100644
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -88,6 +88,7 @@ gisthandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = gistendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c
index 6806e32..2026004 100644
--- a/src/backend/access/hash/hash.c
+++ b/src/backend/access/hash/hash.c
@@ -85,6 +85,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = hashendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = hashrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -265,6 +266,8 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 	OffsetNumber offnum;
 	ItemPointer current;
 	bool		res;
+	IndexTuple	itup;
+
 
 	/* Hash indexes are always lossy since we store only the hash code */
 	scan->xs_recheck = true;
@@ -302,8 +305,6 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 			 offnum <= maxoffnum;
 			 offnum = OffsetNumberNext(offnum))
 		{
-			IndexTuple	itup;
-
 			itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 			if (ItemPointerEquals(&(so->hashso_heappos), &(itup->t_tid)))
 				break;
diff --git a/src/backend/access/hash/hashsearch.c b/src/backend/access/hash/hashsearch.c
index 8d43b38..05b078f 100644
--- a/src/backend/access/hash/hashsearch.c
+++ b/src/backend/access/hash/hashsearch.c
@@ -59,6 +59,8 @@ _hash_next(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
 	return true;
 }
 
@@ -407,6 +409,9 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
+
 	return true;
 }
 
diff --git a/src/backend/access/hash/hashutil.c b/src/backend/access/hash/hashutil.c
index fa9cbdc..6897985 100644
--- a/src/backend/access/hash/hashutil.c
+++ b/src/backend/access/hash/hashutil.c
@@ -17,8 +17,12 @@
 #include "access/hash.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
+#include "nodes/execnodes.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 #define CALC_NEW_BUCKET(old_bucket, lowmask) \
 			old_bucket | (lowmask + 1)
@@ -446,3 +450,109 @@ _hash_get_newbucket_from_oldbucket(Relation rel, Bucket old_bucket,
 
 	return new_bucket;
 }
+
+/*
+ * Recheck if the heap tuple satisfies the key stored in the index tuple
+ */
+bool
+hashrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	IndexInfo  *indexInfo;
+	EState	   *estate;
+	ExprContext *econtext;
+	TupleTableSlot *slot;
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	Datum		values2[INDEX_MAX_KEYS];
+	bool		isnull2[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	indexInfo = BuildIndexInfo(indexRel);
+
+	/*
+	 * The heap tuple must be put into a slot for FormIndexDatum.
+	 */
+	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
+
+	ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
+
+	/*
+	 * Typically the index won't have expressions, but if it does we need an
+	 * EState to evaluate them.  We need it for exclusion constraints too,
+	 * even if they are just on simple columns.
+	 */
+	if (indexInfo->ii_Expressions != NIL ||
+			indexInfo->ii_ExclusionOps != NULL)
+	{
+		estate = CreateExecutorState();
+		econtext = GetPerTupleExprContext(estate);
+		econtext->ecxt_scantuple = slot;
+	}
+	else
+		estate = NULL;
+
+	/*
+	 * Form the index values and isnull flags for the index entry that we need
+	 * to check.
+	 *
+	 * Note: if the index uses functions that are not as immutable as they are
+	 * supposed to be, this could produce an index tuple different from the
+	 * original.  The index AM can catch such errors by verifying that it
+	 * finds a matching index entry with the tuple's TID.  For exclusion
+	 * constraints we check this in check_exclusion_constraint().
+	 */
+	FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
+	/*
+	 * HASH indexes compute a hash value of the key and store that in the
+	 * index. So we must first obtain the hash of the value obtained from the
+	 * heap and then do a comparison
+	 */
+	_hash_convert_tuple(indexRel, values, isnull, values2, isnull2);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL then they are equal
+		 */
+		if (isnull2[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If either is NULL then they are not equal
+		 */
+		if (isnull2[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now do a raw memory comparison
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values2[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	if (estate != NULL)
+		FreeExecutorState(estate);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	return equal;
+}
diff --git a/src/backend/access/heap/README.WARM b/src/backend/access/heap/README.WARM
new file mode 100644
index 0000000..f793570
--- /dev/null
+++ b/src/backend/access/heap/README.WARM
@@ -0,0 +1,271 @@
+src/backend/access/heap/README.WARM
+
+Write Amplification Reduction Method (WARM)
+===========================================
+
+The Heap Only Tuple (HOT) feature greatly reduced redundant index
+entries and allowed re-use of the dead space occupied by previously
+updated or deleted tuples (see src/backend/access/heap/README.HOT).
+
+One of the necessary conditions for satisfying HOT update is that the
+update must not change a column used in any of the indexes on the table.
+The condition is sometimes hard to meet, especially for complex
+workloads with several indexes on large yet frequently updated tables.
+Worse, sometimes only one or two index columns may be updated, but the
+regular non-HOT update will still insert a new index entry in every
+index on the table, irrespective of whether the key pertaining to the
+index changed or not.
+
+WARM is a technique devised to address these problems.
+
+
+Update Chains With Multiple Index Entries Pointing to the Root
+--------------------------------------------------------------
+
+When a non-HOT update is caused by an index key change, a new index
+entry must be inserted for the changed index. But if the index key
+hasn't changed for other indexes, we don't really need to insert a new
+entry. Even though the existing index entry is pointing to the old
+tuple, the new tuple is reachable via the t_ctid chain. To keep things
+simple, a WARM update requires that the heap block must have enough
+space to store the new version of the tuple.  This is the same
+requirement as for HOT updates.
+
+In WARM, we ensure that every index entry always points to the root of
+the WARM chain. In fact, a WARM chain looks exactly like a HOT chain
+except for the fact that there could be multiple index entries pointing
+to the root of the chain. So when new entry is inserted in an index for
+updated tuple, and if we are doing a WARM update, the new entry is made
+point to the root of the WARM chain.
+
+For example, consider a table with two columns and an index on each
+column. When a tuple is first inserted into the table, each index has
+exactly one entry pointing to the tuple.
+
+	lp [1]
+	[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's entry (aaaa) also points to 1
+
+Now if the tuple's second column is updated and there is room on the
+page, we perform a WARM update. In that case, Index1 does not get any
+new entry, and Index2's new entry still points to the root tuple of the
+chain.
+
+	lp [1]  [2]
+	[1111, aaaa]->[1111, bbbb]
+
+	Index1's entry (1111) points to 1
+	Index2's old entry (aaaa) points to 1
+	Index2's new entry (bbbb) also points to 1
+
+"An update chain that has more than one index entry pointing to its
+root line pointer is called a WARM chain, and the action that creates a
+WARM chain is called a WARM update."
+
+Since all index entries always point to the root of the WARM chain, even
+when there is more than one entry per index, WARM chains can be pruned
+and dead tuples can be removed without any corresponding index cleanup.
+
+While this solves the problem of pruning dead tuples from a HOT/WARM
+chain, it also opens up a new technical challenge because now we have a
+situation where a heap tuple is reachable from multiple index entries,
+each having a different index key. While MVCC still ensures that only
+valid tuples are returned, a tuple with a wrong index key may be
+returned because of wrong index entries. In the above example, tuple
+[1111, bbbb] is reachable from both keys (aaaa) as well as (bbbb). For
+this reason, tuples returned from a WARM chain must always be rechecked
+for an index key match.
+
+Recheck Index Key Against Heap Tuple
+------------------------------------
+
+Since every Index AM has its own notion of index tuples, each Index AM
+must implement its own method to recheck heap tuples. For example, a
+hash index stores the hash value of the column and hence recheck routine
+for hash AM must first compute the hash value of the heap attribute and
+then compare it against the value stored in the index tuple.
+
+The patch currently implements recheck routines for hash and btree
+indexes. If the table has an index whose AM doesn't provide a recheck
+routine, WARM updates are disabled on that table.
+
+Problem With Duplicate (key, ctid) Index Entries
+------------------------------------------------
+
+The index-key recheck logic works only as long as no duplicate index
+keys point to the same WARM chain.  If duplicates exist, the same valid
+tuple is reachable via multiple index entries, each of which satisfies
+the index key check. In the above example, if the tuple [1111, bbbb] is
+again updated to [1111, aaaa] and if we insert a new index entry (aaaa)
+pointing to the root line pointer, we will end up with the following
+structure:
+
+	lp [1]  [2]  [3]
+	[1111, aaaa]->[1111, bbbb]->[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's oldest entry (aaaa) points to 1
+	Index2's old entry (bbbb) also points to 1
+	Index2's new entry (aaaa) also points to 1
+
+We must solve this problem to ensure that the same tuple is not
+reachable via multiple index pointers. There are a couple of ways to
+address this issue:
+
+1. Do not allow WARM update to a tuple from a WARM chain. This
+guarantees that there can never be duplicate index entries to the same
+root line pointer because we must have checked for old and new index
+keys while doing the first WARM update.
+
+2. Do not allow duplicate (key, ctid) index pointers. In the above
+example, since (aaaa, 1) already exists in the index, we must not insert
+a duplicate index entry.
+
+The patch currently implements option 1, i.e. it does not allow a WARM
+update to a tuple that is already part of a WARM chain.  HOT updates are
+still fine because they do not add any new index entries.
+
+Even with this restriction, WARM is a significant improvement because
+the number of regular (non-HOT) updates can be cut in half.
+
+Expression and Partial Indexes
+------------------------------
+
+Expressions may evaluate to the same value even if the underlying column
+values have changed. A simple example is an index on "lower(col)", which
+will return the same value if the new heap value differs only in case.
+So we cannot rely solely on the heap column check to
+decide whether or not to insert a new index entry for expression
+indexes. Similarly, for partial indexes, the predicate expression must
+be evaluated to decide whether or not a new index entry is needed when
+columns referred to in the predicate change.
+
+(Neither of these is currently implemented; we simply disallow a WARM
+update if a column used by an expression index or an index predicate
+has changed.)
+
+
+Efficiently Finding the Root Line Pointer
+-----------------------------------------
+
+During a WARM update, we must be able to find the root line pointer of
+the tuple being updated. The t_ctid field in the heap tuple header is
+normally used to find the next tuple in the update chain, but the tuple
+being updated must be the last tuple in its chain, and in that case
+t_ctid traditionally points to the tuple itself. So we can use t_ctid to
+store additional information in the last tuple of the update chain,
+provided the fact that the tuple is the last one is recorded elsewhere.
+
+We now utilize another bit from t_infomask2 to explicitly identify that
+this is the last tuple in the update chain.
+
+HEAP_LATEST_TUPLE - When this bit is set, the tuple is the last tuple in
+the update chain. The OffsetNumber part of t_ctid points to the root
+line pointer of the chain when HEAP_LATEST_TUPLE flag is set.
+
+If the UPDATE operation aborts, the last tuple in the update chain
+becomes dead, and the root line pointer information stored in the tuple
+that remains the last valid tuple in the chain is lost. In such rare
+cases, the root line pointer must be found the hard way, by scanning
+the entire heap page.
+
+Tracking WARM Chains
+--------------------
+
+The old tuple and every subsequent tuple in the chain are marked with a
+special HEAP_WARM_TUPLE flag. We use the last remaining bit in
+t_infomask2 to store this information.
+
+When a tuple is returned from a WARM chain, the caller must do
+additional checks to ensure that the tuple matches the index key. Even
+if the tuple precedes the WARM update in the chain, it must still be
+rechecked for an index key match (this covers the case where the old
+tuple is returned via the new index key). So we must follow the update
+chain to the end every time to check whether this is a WARM chain.
+
+When the old updated tuple is retired and the root line pointer is
+converted into a redirected line pointer, we can copy the information
+about the WARM chain to the redirected line pointer by storing a special
+value in the lp_len field of the line pointer. This handles the most
+common case, where a WARM chain is reduced to a redirect line pointer
+and a single tuple.
+
+Converting WARM chains back to HOT chains (VACUUM ?)
+----------------------------------------------------
+
+The current implementation of WARM allows only one WARM update per
+chain. This simplifies the design and addresses certain issues around
+duplicate scans. But this also implies that the benefit of WARM will be
+no more than 50%, which is still significant, but if we could return
+WARM chains back to normal status, we could do far more WARM updates.
+
+A distinct property of a WARM chain is that at least one index has more
+than one live index entry pointing to the root of the chain. In other
+words, if we can remove the duplicate entry from every index, or
+conclusively prove that there are no duplicate index entries for the
+root line pointer, the chain can again be marked as HOT.
+
+Here is one idea:
+
+A WARM chain has two parts, separated by the tuple that caused the WARM
+update. All tuples within each part have matching index keys, but
+certain index keys may not match between the two parts. Let's say we
+mark heap tuples in each part with a special Red-Blue flag. The same
+flag is
+replicated in the index tuples. For example, when new rows are inserted
+in a table, they are marked with Blue flag and the index entries
+associated with those rows are also marked with Blue flag. When a row is
+WARM updated, the new version is marked with Red flag and the new index
+entry created by the update is also marked with Red flag.
+
+
+Heap chain: [1] [2] [3] [4]
+			[aaaa, 1111]B -> [aaaa, 1111]B -> [bbbb, 1111]R -> [bbbb, 1111]R
+
+Index1: 	(aaaa)B points to 1 (satisfies only tuples marked with B)
+			(bbbb)R points to 1 (satisfies only tuples marked with R)
+
+Index2:		(1111)B points to 1 (satisfies both B and R tuples)
+
+
+It's clear that for indexes with both Red and Blue pointers, a heap
+tuple with the Blue flag will be reachable from the Blue pointer and one
+with the Red flag from the Red pointer. But for indexes which did not
+create a new entry, both Blue and Red tuples will be reachable from the
+Blue pointer (there is no Red pointer in such indexes). So, as a side
+note, matching Red and Blue flags alone is not enough from an index scan
+perspective.
+
+During the first heap scan of VACUUM, we look for tuples with
+HEAP_WARM_TUPLE set.  If all live tuples in the chain are either marked
+with Blue flag or Red flag (but no mix of Red and Blue), then the chain
+is a candidate for HOT conversion.  We remember the root line pointer
+and Red-Blue flag of the WARM chain in a separate array.
+
+If we have a Red WARM chain, then our goal is to remove the Blue
+pointers, and vice versa. But there is a catch. For Index2 above, there
+is only a Blue pointer and it must not be removed. In other words, we
+should remove a Blue pointer iff a Red pointer exists. Since index
+vacuum may visit Red and Blue pointers in any order, I think we will
+need another index pass to remove dead index pointers. So in the first
+index pass we check which WARM candidates have two index pointers. In
+the second pass, we remove the dead pointer and reset the Red flag if
+the surviving index pointer is Red.
+
+During the second heap scan, we fix the WARM chain by clearing the
+HEAP_WARM_TUPLE flag and resetting the Red flag to Blue.
+
+There are some more problems around aborted vacuums. For example, if
+vacuum aborts after changing a Red index flag to Blue but before
+removing the other Blue pointer, we will end up with two Blue pointers
+to a Red WARM chain. But since the HEAP_WARM_TUPLE flag on the heap
+tuple is still set, further WARM updates to the chain will be blocked. I
+guess we will need some special handling for the case with multiple Blue
+pointers. We can either leave these WARM chains alone and let them die
+with a subsequent non-WARM update, or apply heap-recheck logic during
+index vacuum to find the dead pointer. Given that vacuum aborts are not
+common, I am inclined to leave this case unhandled. We must still check
+for the presence of multiple Blue pointers and ensure that we don't
+accidentally remove either of the Blue pointers, and that we don't clear
+the WARM chain either.
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index bef9c84..b3de79c 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -99,7 +99,10 @@ static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 static void HeapSatisfiesHOTandKeyUpdate(Relation relation,
 							 Bitmapset *hot_attrs,
 							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
+							 Bitmapset *exprindx_attrs,
+							 Bitmapset **updated_attrs,
+							 bool *satisfies_hot, bool *satisfies_warm,
+							 bool *satisfies_key,
 							 bool *satisfies_id,
 							 HeapTuple oldtup, HeapTuple newtup);
 static bool heap_acquire_tuplock(Relation relation, ItemPointer tid,
@@ -1960,6 +1963,76 @@ heap_fetch(Relation relation,
 }
 
 /*
+ * Check whether the HOT chain originating or continuing at tid ever became
+ * a WARM chain, even if the actual UPDATE operation finally aborted.
+ */
+static void
+hot_check_warm_chain(Page dp, ItemPointer tid, bool *recheck)
+{
+	TransactionId prev_xmax = InvalidTransactionId;
+	OffsetNumber offnum;
+	HeapTupleData heapTuple;
+
+	if (*recheck == true)
+		return;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+			break;
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		/*
+		 * Presence of either a WARM or a WARM-updated tuple signals possible
+		 * breakage, and the caller must recheck any tuple returned from this
+		 * chain for index satisfaction.
+		 */
+		if (HeapTupleHeaderIsHeapWarmTuple(heapTuple.t_data))
+		{
+			*recheck = true;
+			break;
+		}
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (HeapTupleIsHotUpdated(&heapTuple))
+		{
+			offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+			prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+		}
+		else
+			break;				/* end of chain */
+	}
+
+}
+
+/*
  *	heap_hot_search_buffer	- search HOT chain for tuple satisfying snapshot
  *
  * On entry, *tid is the TID of a tuple (either a simple tuple, or the root
@@ -1979,11 +2052,14 @@ heap_fetch(Relation relation,
  * Unlike heap_fetch, the caller must already have pin and (at least) share
  * lock on the buffer; it is still pinned/locked at exit.  Also unlike
  * heap_fetch, we do not report any pgstats count; caller may do so if wanted.
+ *
+ * recheck should be set false on entry by caller, will be set true on exit
+ * if a WARM tuple is encountered.
  */
 bool
 heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 					   Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call)
+					   bool *all_dead, bool first_call, bool *recheck)
 {
 	Page		dp = (Page) BufferGetPage(buffer);
 	TransactionId prev_xmax = InvalidTransactionId;
@@ -2025,6 +2101,16 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 				/* Follow the redirect */
 				offnum = ItemIdGetRedirect(lp);
 				at_chain_start = false;
+
+				/* Check if it's a WARM chain */
+				if (recheck && *recheck == false)
+				{
+					if (ItemIdIsHeapWarm(lp))
+					{
+						*recheck = true;
+						Assert(!IsSystemRelation(relation));
+					}
+				}
 				continue;
 			}
 			/* else must be end of chain */
@@ -2037,9 +2123,12 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		ItemPointerSetOffsetNumber(&heapTuple->t_self, offnum);
 
 		/*
-		 * Shouldn't see a HEAP_ONLY tuple at chain start.
+		 * Shouldn't see a HEAP_ONLY tuple at chain start, unless we are
+		 * dealing with a WARM updated tuple, in which case deferred triggers
+		 * may request a WARM tuple from the middle of a chain.
 		 */
-		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple))
+		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple) &&
+				!HeapTupleIsHeapWarmTuple(heapTuple))
 			break;
 
 		/*
@@ -2052,6 +2141,22 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 			break;
 
 		/*
+		 * Check if there exists a WARM tuple somewhere down the chain and set
+		 * recheck to TRUE.
+		 *
+		 * XXX This is not very efficient right now, and we should look for
+		 * possible improvements here
+		 */
+		if (recheck && *recheck == false)
+		{
+			hot_check_warm_chain(dp, &heapTuple->t_self, recheck);
+
+			/* WARM is not supported on system tables yet */
+			if (*recheck == true)
+				Assert(!IsSystemRelation(relation));
+		}
+
+		/*
 		 * When first_call is true (and thus, skip is initially false) we'll
 		 * return the first tuple we find.  But on later passes, heapTuple
 		 * will initially be pointing to the tuple we returned last time.
@@ -2124,18 +2229,41 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
  */
 bool
 heap_hot_search(ItemPointer tid, Relation relation, Snapshot snapshot,
-				bool *all_dead)
+				bool *all_dead, bool *recheck, Buffer *cbuffer,
+				HeapTuple heapTuple)
 {
 	bool		result;
 	Buffer		buffer;
-	HeapTupleData heapTuple;
+	ItemPointerData ret_tid = *tid;
 
 	buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	LockBuffer(buffer, BUFFER_LOCK_SHARE);
-	result = heap_hot_search_buffer(tid, relation, buffer, snapshot,
-									&heapTuple, all_dead, true);
-	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-	ReleaseBuffer(buffer);
+	result = heap_hot_search_buffer(&ret_tid, relation, buffer, snapshot,
+									heapTuple, all_dead, true, recheck);
+
+	/*
+	 * If we are returning a potential candidate tuple from this chain and the
+	 * caller has requested the "recheck" hint, keep the buffer locked and
+	 * pinned. The caller must release the lock and pin on the buffer in all
+	 * such cases.
+	 */
+	if (!result || !recheck || !(*recheck))
+	{
+		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+		ReleaseBuffer(buffer);
+	}
+
+	/*
+	 * Set the caller supplied tid with the actual location of the tuple being
+	 * returned
+	 */
+	if (result)
+	{
+		*tid = ret_tid;
+		if (cbuffer)
+			*cbuffer = buffer;
+	}
+
 	return result;
 }
 
@@ -3442,13 +3570,15 @@ simple_heap_delete(Relation relation, ItemPointer tid)
 HTSU_Result
 heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode)
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **updated_attrs, bool *warm_update)
 {
 	HTSU_Result result;
 	TransactionId xid = GetCurrentTransactionId();
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *exprindx_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3469,9 +3599,11 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		have_tuple_lock = false;
 	bool		iscombo;
 	bool		satisfies_hot;
+	bool		satisfies_warm;
 	bool		satisfies_key;
 	bool		satisfies_id;
 	bool		use_hot_update = false;
+	bool		use_warm_update = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
 	bool		all_visible_cleared_new = false;
@@ -3496,6 +3628,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot update tuples during a parallel operation")));
 
+	/* Assume no-warm update */
+	if (warm_update)
+		*warm_update = false;
+
 	/*
 	 * Fetch the list of attributes to be checked for HOT update.  This is
 	 * wasted effort if we fail to update or have to put the new tuple on a
@@ -3512,6 +3648,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	exprindx_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_EXPR_PREDICATE);
 
 	block = ItemPointerGetBlockNumber(otid);
 	offnum = ItemPointerGetOffsetNumber(otid);
@@ -3571,7 +3709,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	 * serendipitiously arrive at the same key values.
 	 */
 	HeapSatisfiesHOTandKeyUpdate(relation, hot_attrs, key_attrs, id_attrs,
-								 &satisfies_hot, &satisfies_key,
+								 exprindx_attrs,
+								 updated_attrs,
+								 &satisfies_hot, &satisfies_warm,
+								 &satisfies_key,
 								 &satisfies_id, &oldtup, newtup);
 	if (satisfies_key)
 	{
@@ -4118,6 +4259,34 @@ l2:
 		 */
 		if (satisfies_hot)
 			use_hot_update = true;
+		else
+		{
+			/*
+			 * If no WARM updates yet on this chain, let this update be a WARM
+			 * update.
+			 *
+			 * We check for both WARM and WARM-updated tuples since, if the
+			 * previous WARM update aborted, we may still have added
+			 * another index entry for this HOT chain. In such situations we
+			 * must not attempt a WARM update until the duplicate (key, CTID)
+			 * index entry issue is sorted out.
+			 *
+			 * XXX Later we'll add more checks to ensure WARM chains can be
+			 * further WARM updated. This is probably good enough for a first
+			 * round of tests of the remaining functionality.
+			 *
+			 * XXX Disable WARM updates on system tables. There is nothing in
+			 * principle that stops us from supporting this, but it would
+			 * require an API change to propagate the changed columns back to
+			 * the caller so that CatalogUpdateIndexes() can avoid adding new
+			 * entries to indexes that are not changed by the update. This
+			 * will be fixed once the basic patch is tested. !!FIXME
+			 */
+			if (satisfies_warm &&
+				!HeapTupleIsHeapWarmTuple(&oldtup) &&
+				!IsSystemRelation(relation))
+				use_warm_update = true;
+		}
 	}
 	else
 	{
@@ -4158,6 +4327,21 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+
+		/*
+		 * Even if we are doing a HOT update, we must carry forward the WARM
+		 * flag because we may have already inserted another index entry
+		 * pointing to our root, and a third entry may create duplicates.
+		 *
+		 * XXX This should be revisited if we get an index (key, CTID)
+		 * duplicate detection mechanism in place.
+		 */
+		if (HeapTupleIsHeapWarmTuple(&oldtup))
+		{
+			HeapTupleSetHeapWarmTuple(heaptup);
+			HeapTupleSetHeapWarmTuple(newtup);
+		}
+
 		/*
 		 * For HOT (or WARM) updated tuples, we store the offset of the root
 		 * line pointer of this chain in the ip_posid field of the new tuple.
@@ -4173,12 +4357,38 @@ l2:
 					ItemPointerGetOffsetNumber(&(oldtup.t_self)),
 					&root_offnum);
 	}
+	else if (use_warm_update)
+	{
+		Assert(!IsSystemRelation(relation));
+
+		/* Mark the old tuple as HOT-updated */
+		HeapTupleSetHotUpdated(&oldtup);
+		HeapTupleSetHeapWarmTuple(&oldtup);
+		/* And mark the new tuple as heap-only */
+		HeapTupleSetHeapOnly(heaptup);
+		HeapTupleSetHeapWarmTuple(heaptup);
+		/* Mark the caller's copy too, in case different from heaptup */
+		HeapTupleSetHeapOnly(newtup);
+		HeapTupleSetHeapWarmTuple(newtup);
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			heap_get_root_tuple_one(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)),
+					&root_offnum);
+
+		/* Let the caller know we did a WARM update */
+		if (warm_update)
+			*warm_update = true;
+	}
 	else
 	{
 		/* Make sure tuples are correctly marked as not-HOT */
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		HeapTupleClearHeapWarmTuple(heaptup);
+		HeapTupleClearHeapWarmTuple(newtup);
 		root_offnum = InvalidOffsetNumber;
 	}
 
@@ -4297,7 +4507,10 @@ l2:
 	if (have_tuple_lock)
 		UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode);
 
-	pgstat_count_heap_update(relation, use_hot_update);
+	/*
+	 * Count HOT and WARM updates separately
+	 */
+	pgstat_count_heap_update(relation, use_hot_update, use_warm_update);
 
 	/*
 	 * If heaptup is a private copy, release it.  Don't forget to copy t_self
@@ -4405,6 +4618,13 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
  * will be checking very similar sets of columns, and doing the same tests on
  * them, it makes sense to optimize and do them together.
  *
+ * The exprindx_attrs designates the set of attributes used in expression or
+ * predicate indexes. Currently, we don't allow WARM updates if a column used
+ * by an expression or predicate index is updated.
+ *
+ * If updated_attrs is not NULL, then the caller is always interested in
+ * knowing the list of changed attributes.
+ *
  * We receive three bitmapsets comprising the three sets of columns we're
  * interested in.  Note these are destructively modified; that is OK since
  * this is invoked at most once in heap_update.
@@ -4417,7 +4637,11 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 static void
 HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
+							 Bitmapset *exprindx_attrs,
+							 Bitmapset **updated_attrs,
+							 bool *satisfies_hot,
+							 bool *satisfies_warm,
+							 bool *satisfies_key,
 							 bool *satisfies_id,
 							 HeapTuple oldtup, HeapTuple newtup)
 {
@@ -4427,6 +4651,7 @@ HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 	bool		hot_result = true;
 	bool		key_result = true;
 	bool		id_result = true;
+	Bitmapset	*hot_attrs_copy = bms_copy(hot_attrs);
 
 	/* If REPLICA IDENTITY is set to FULL, id_attrs will be empty. */
 	Assert(bms_is_subset(id_attrs, key_attrs));
@@ -4454,8 +4679,11 @@ HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 		 * Since the HOT attributes are a superset of the key attributes and
 		 * the key attributes are a superset of the id attributes, this logic
 		 * is guaranteed to identify the next column that needs to be checked.
+		 *
+		 * If the caller also wants to know the list of updated index
+		 * attributes, we must scan through all the attributes.
 		 */
-		if (hot_result && next_hot_attnum > FirstLowInvalidHeapAttributeNumber)
+		if ((hot_result || updated_attrs) && next_hot_attnum > FirstLowInvalidHeapAttributeNumber)
 			check_now = next_hot_attnum;
 		else if (key_result && next_key_attnum > FirstLowInvalidHeapAttributeNumber)
 			check_now = next_key_attnum;
@@ -4476,8 +4704,16 @@ HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 			if (check_now == next_id_attnum)
 				id_result = false;
 
+			/*
+			 * Add the changed attribute to updated_attrs if the caller has
+			 * asked for it
+			 */
+			if (updated_attrs)
+				*updated_attrs = bms_add_member(*updated_attrs, check_now -
+						FirstLowInvalidHeapAttributeNumber);
+
 			/* if all are false now, we can stop checking */
-			if (!hot_result && !key_result && !id_result)
+			if (!hot_result && !key_result && !id_result && !updated_attrs)
 				break;
 		}
 
@@ -4488,7 +4724,7 @@ HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 		 * bms_first_member() will return -1 and the attribute number will end
 		 * up with a value less than FirstLowInvalidHeapAttributeNumber.
 		 */
-		if (hot_result && check_now == next_hot_attnum)
+		if ((hot_result || updated_attrs) && check_now == next_hot_attnum)
 		{
 			next_hot_attnum = bms_first_member(hot_attrs);
 			next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
@@ -4505,6 +4741,29 @@ HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
 		}
 	}
 
+	/*
+	 * If an attribute used in the expression of an expression index or the
+	 * predicate of a predicate index has changed, we don't yet support a
+	 * WARM update.
+	 */
+	if (updated_attrs && bms_overlap(*updated_attrs, exprindx_attrs))
+		*satisfies_warm = false;
+	/* If the table does not support WARM update, honour that */
+	else if (!relation->rd_supportswarm)
+		*satisfies_warm = false;
+	/*
+	 * If all index keys are being updated, there is hardly any point in doing
+	 * a WARM update.
+	 */
+	else if (updated_attrs && bms_is_subset(hot_attrs_copy, *updated_attrs))
+		*satisfies_warm = false;
+	/*
+	 * XXX Should we handle some more cases? For example, when an update
+	 * touches many or most indexes, should we fall back to a regular update?
+	 */
+	else
+		*satisfies_warm = true;
+
 	*satisfies_hot = hot_result;
 	*satisfies_key = key_result;
 	*satisfies_id = id_result;
@@ -4528,7 +4787,7 @@ simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
 	result = heap_update(relation, otid, tup,
 						 GetCurrentCommandId(true), InvalidSnapshot,
 						 true /* wait for commit */ ,
-						 &hufd, &lockmode);
+						 &hufd, &lockmode, NULL, NULL);
 	switch (result)
 	{
 		case HeapTupleSelfUpdated:
@@ -7426,6 +7685,7 @@ log_heap_cleanup_info(RelFileNode rnode, TransactionId latestRemovedXid)
 XLogRecPtr
 log_heap_clean(Relation reln, Buffer buffer,
 			   OffsetNumber *redirected, int nredirected,
+			   OffsetNumber *warm, int nwarm,
 			   OffsetNumber *nowdead, int ndead,
 			   OffsetNumber *nowunused, int nunused,
 			   TransactionId latestRemovedXid)
@@ -7439,6 +7699,7 @@ log_heap_clean(Relation reln, Buffer buffer,
 	xlrec.latestRemovedXid = latestRemovedXid;
 	xlrec.nredirected = nredirected;
 	xlrec.ndead = ndead;
+	xlrec.nwarm = nwarm;
 
 	XLogBeginInsert();
 	XLogRegisterData((char *) &xlrec, SizeOfHeapClean);
@@ -7461,6 +7722,10 @@ log_heap_clean(Relation reln, Buffer buffer,
 		XLogRegisterBufData(0, (char *) nowdead,
 							ndead * sizeof(OffsetNumber));
 
+	if (nwarm > 0)
+		XLogRegisterBufData(0, (char *) warm,
+							nwarm * sizeof(OffsetNumber));
+
 	if (nunused > 0)
 		XLogRegisterBufData(0, (char *) nowunused,
 							nunused * sizeof(OffsetNumber));
@@ -7566,6 +7831,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	bool		need_tuple_data = RelationIsLogicallyLogged(reln);
 	bool		init;
 	int			bufflags;
+	bool		warm_update = false;
 
 	/* Caller should not call me on a non-WAL-logged relation */
 	Assert(RelationNeedsWAL(reln));
@@ -7577,6 +7843,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	else
 		info = XLOG_HEAP_UPDATE;
 
+	if (HeapTupleIsHeapWarmTuple(newtup))
+		warm_update = true;
+
 	/*
 	 * If the old and new tuple are on the same page, we only need to log the
 	 * parts of the new tuple that were changed.  That saves on the amount of
@@ -7650,6 +7919,8 @@ log_heap_update(Relation reln, Buffer oldbuf,
 				xlrec.flags |= XLH_UPDATE_CONTAINS_OLD_KEY;
 		}
 	}
+	if (warm_update)
+		xlrec.flags |= XLH_UPDATE_WARM_UPDATE;
 
 	/* If new tuple is the single and first tuple on page... */
 	if (ItemPointerGetOffsetNumber(&(newtup->t_self)) == FirstOffsetNumber &&
@@ -8017,24 +8288,38 @@ heap_xlog_clean(XLogReaderState *record)
 		OffsetNumber *redirected;
 		OffsetNumber *nowdead;
 		OffsetNumber *nowunused;
+		OffsetNumber *warm;
 		int			nredirected;
 		int			ndead;
 		int			nunused;
+		int			nwarm;
+		int			i;
 		Size		datalen;
+		bool		warmchain[MaxHeapTuplesPerPage + 1];
 
 		redirected = (OffsetNumber *) XLogRecGetBlockData(record, 0, &datalen);
 
 		nredirected = xlrec->nredirected;
 		ndead = xlrec->ndead;
+		nwarm = xlrec->nwarm;
+
 		end = (OffsetNumber *) ((char *) redirected + datalen);
 		nowdead = redirected + (nredirected * 2);
-		nowunused = nowdead + ndead;
-		nunused = (end - nowunused);
+		warm = nowdead + ndead;
+		nowunused = warm + nwarm;
+
+		nunused = (end - nowunused);
 		Assert(nunused >= 0);
 
+		memset(warmchain, 0, sizeof (warmchain));
+		for (i = 0; i < nwarm; i++)
+			warmchain[warm[i]] = true;
+
+
 		/* Update all item pointers per the record, and repair fragmentation */
 		heap_page_prune_execute(buffer,
 								redirected, nredirected,
+								warmchain,
 								nowdead, ndead,
 								nowunused, nunused);
 
@@ -8621,16 +8906,22 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 	Size		freespace = 0;
 	XLogRedoAction oldaction;
 	XLogRedoAction newaction;
+	bool		warm_update = false;
 
 	/* initialize to keep the compiler quiet */
 	oldtup.t_data = NULL;
 	oldtup.t_len = 0;
 
+	if (xlrec->flags & XLH_UPDATE_WARM_UPDATE)
+		warm_update = true;
+
 	XLogRecGetBlockTag(record, 0, &rnode, NULL, &newblk);
 	if (XLogRecGetBlockTag(record, 1, NULL, NULL, &oldblk))
 	{
 		/* HOT updates are never done across pages */
 		Assert(!hot_update);
+		/* WARM updates are never done across pages */
+		Assert(!warm_update);
 	}
 	else
 		oldblk = newblk;
@@ -8690,6 +8981,11 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 								   &htup->t_infomask2);
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
+
+		/* Mark the old tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetHeapWarmTuple(htup);
+
 		/* Set forward chain link in t_ctid */
 		HeapTupleHeaderSetNextCtid(htup, ItemPointerGetBlockNumber(&newtid),
 				ItemPointerGetOffsetNumber(&newtid));
@@ -8825,6 +9121,11 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
+
+		/* Mark the new tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetHeapWarmTuple(htup);
+
 		/* Make sure there is no forward chain link in t_ctid */
 		HeapTupleHeaderSetHeapLatest(htup);
 
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index 7c2231a..d71a297 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -36,12 +36,19 @@ typedef struct
 	int			nredirected;	/* numbers of entries in arrays below */
 	int			ndead;
 	int			nunused;
+	int			nwarm;
 	/* arrays that accumulate indexes of items to be changed */
 	OffsetNumber redirected[MaxHeapTuplesPerPage * 2];
 	OffsetNumber nowdead[MaxHeapTuplesPerPage];
 	OffsetNumber nowunused[MaxHeapTuplesPerPage];
+	OffsetNumber warm[MaxHeapTuplesPerPage];
 	/* marked[i] is TRUE if item i is entered in one of the above arrays */
 	bool		marked[MaxHeapTuplesPerPage + 1];
+	/*
+	 * warmchain[i] is TRUE if item i is becoming a redirected lp and points
+	 * to a WARM chain
+	 */
+	bool		warmchain[MaxHeapTuplesPerPage + 1];
 } PruneState;
 
 /* Local functions */
@@ -54,6 +61,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 						   OffsetNumber offnum, OffsetNumber rdoffnum);
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
+static void heap_prune_record_warmupdate(PruneState *prstate,
+						   OffsetNumber offnum);
 
 static void heap_get_root_tuples_internal(Page page,
 				OffsetNumber target_offnum, OffsetNumber *root_offsets);
@@ -203,8 +212,9 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
 	 */
 	prstate.new_prune_xid = InvalidTransactionId;
 	prstate.latestRemovedXid = *latestRemovedXid;
-	prstate.nredirected = prstate.ndead = prstate.nunused = 0;
+	prstate.nredirected = prstate.ndead = prstate.nunused = prstate.nwarm = 0;
 	memset(prstate.marked, 0, sizeof(prstate.marked));
+	memset(prstate.warmchain, 0, sizeof(prstate.warmchain));
 
 	/* Scan the page */
 	maxoff = PageGetMaxOffsetNumber(page);
@@ -241,6 +251,7 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
 		 */
 		heap_page_prune_execute(buffer,
 								prstate.redirected, prstate.nredirected,
+								prstate.warmchain,
 								prstate.nowdead, prstate.ndead,
 								prstate.nowunused, prstate.nunused);
 
@@ -268,6 +279,7 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
 
 			recptr = log_heap_clean(relation, buffer,
 									prstate.redirected, prstate.nredirected,
+									prstate.warm, prstate.nwarm,
 									prstate.nowdead, prstate.ndead,
 									prstate.nowunused, prstate.nunused,
 									prstate.latestRemovedXid);
@@ -479,6 +491,12 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
 			!TransactionIdEquals(HeapTupleHeaderGetXmin(htup), priorXmax))
 			break;
 
+		if (HeapTupleHeaderIsHeapWarmTuple(htup))
+		{
+			Assert(!IsSystemRelation(relation));
+			heap_prune_record_warmupdate(prstate, rootoffnum);
+		}
+
 		/*
 		 * OK, this tuple is indeed a member of the chain.
 		 */
@@ -668,6 +686,18 @@ heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum)
 	prstate->marked[offnum] = true;
 }
 
+/* Record item pointer which is a root of a WARM chain */
+static void
+heap_prune_record_warmupdate(PruneState *prstate, OffsetNumber offnum)
+{
+	Assert(prstate->nwarm < MaxHeapTuplesPerPage);
+	if (prstate->warmchain[offnum])
+		return;
+	prstate->warm[prstate->nwarm] = offnum;
+	prstate->nwarm++;
+	prstate->warmchain[offnum] = true;
+}
+
 
 /*
  * Perform the actual page changes needed by heap_page_prune.
@@ -681,6 +711,7 @@ heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum)
 void
 heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
+						bool *warmchain,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused)
 {
@@ -697,6 +728,12 @@ heap_page_prune_execute(Buffer buffer,
 		ItemId		fromlp = PageGetItemId(page, fromoff);
 
 		ItemIdSetRedirect(fromlp, tooff);
+
+		/*
+		 * Save information about WARM chains in the item itself
+		 */
+		if (warmchain[fromoff])
+			ItemIdSetHeapWarm(fromlp);
 	}
 
 	/* Update all now-dead line pointers */
diff --git a/src/backend/access/index/genam.c b/src/backend/access/index/genam.c
index 65c941d..4f9fb12 100644
--- a/src/backend/access/index/genam.c
+++ b/src/backend/access/index/genam.c
@@ -99,7 +99,7 @@ RelationGetIndexScan(Relation indexRelation, int nkeys, int norderbys)
 	else
 		scan->orderByData = NULL;
 
-	scan->xs_want_itup = false; /* may be set later */
+	scan->xs_want_itup = true; /* hack for now to always get index tuple */
 
 	/*
 	 * During recovery we ignore killed tuples and don't bother to kill them
diff --git a/src/backend/access/index/indexam.c b/src/backend/access/index/indexam.c
index 54b71cb..7632573 100644
--- a/src/backend/access/index/indexam.c
+++ b/src/backend/access/index/indexam.c
@@ -71,10 +71,12 @@
 #include "access/xlog.h"
 #include "catalog/catalog.h"
 #include "catalog/index.h"
+#include "executor/executor.h"
 #include "pgstat.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
+#include "utils/datum.h"
 #include "utils/snapmgr.h"
 #include "utils/tqual.h"
 
@@ -409,7 +411,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
 	/*
 	 * The AM's amgettuple proc finds the next index entry matching the scan
 	 * keys, and puts the TID into scan->xs_ctup.t_self.  It should also set
-	 * scan->xs_recheck and possibly scan->xs_itup, though we pay no attention
+	 * scan->xs_tuple_recheck and possibly scan->xs_itup, though we pay no attention
 	 * to those fields here.
 	 */
 	found = scan->indexRelation->rd_amroutine->amgettuple(scan, direction);
@@ -448,7 +450,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
  * dropped in a future index_getnext_tid, index_fetch_heap or index_endscan
  * call).
  *
- * Note: caller must check scan->xs_recheck, and perform rechecking of the
+ * Note: caller must check scan->xs_tuple_recheck, and perform rechecking of the
  * scan keys if required.  We do not do that here because we don't have
  * enough information to do it efficiently in the general case.
  * ----------------
@@ -475,6 +477,15 @@ index_fetch_heap(IndexScanDesc scan)
 		 */
 		if (prev_buf != scan->xs_cbuf)
 			heap_page_prune_opt(scan->heapRelation, scan->xs_cbuf);
+
+		/*
+		 * If we're not always re-checking, reset recheck for this tuple
+		 */
+		if (!scan->xs_recheck)
+			scan->xs_tuple_recheck = false;
+		else
+			scan->xs_tuple_recheck = true;
+
 	}
 
 	/* Obtain share-lock on the buffer so we can examine visibility */
@@ -484,32 +495,63 @@ index_fetch_heap(IndexScanDesc scan)
 											scan->xs_snapshot,
 											&scan->xs_ctup,
 											&all_dead,
-											!scan->xs_continue_hot);
+											!scan->xs_continue_hot,
+											&scan->xs_tuple_recheck);
 	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
 
 	if (got_heap_tuple)
 	{
+		bool res = true;
+
+		/*
+		 * OK, we got a tuple which satisfies the snapshot, but if it's part
+		 * of a WARM chain, we must do additional checks to ensure that we are
+		 * indeed returning a correct tuple. Note that if the index AM does
+		 * not implement the amrecheck method, we don't do any additional
+		 * checks, since WARM must have been disabled on such tables.
+		 *
+		 * XXX What happens when a new index which does not support amrecheck
+		 * is added to the table? Do we need to handle this case, or are
+		 * CREATE INDEX and CREATE INDEX CONCURRENTLY smart enough to handle
+		 * this issue?
+		 */
+		if (scan->xs_tuple_recheck &&
+				scan->indexRelation->rd_amroutine->amrecheck)
+		{
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_SHARE);
+			res = scan->indexRelation->rd_amroutine->amrecheck(
+						scan->indexRelation,
+						scan->xs_itup,
+						scan->heapRelation,
+						&scan->xs_ctup);
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+		}
+
 		/*
 		 * Only in a non-MVCC snapshot can more than one member of the HOT
 		 * chain be visible.
 		 */
 		scan->xs_continue_hot = !IsMVCCSnapshot(scan->xs_snapshot);
 		pgstat_count_heap_fetch(scan->indexRelation);
-		return &scan->xs_ctup;
-	}
 
-	/* We've reached the end of the HOT chain. */
-	scan->xs_continue_hot = false;
+		if (res)
+			return &scan->xs_ctup;
+	}
+	else
+	{
+		/* We've reached the end of the HOT chain. */
+		scan->xs_continue_hot = false;
 
-	/*
-	 * If we scanned a whole HOT chain and found only dead tuples, tell index
-	 * AM to kill its entry for that TID (this will take effect in the next
-	 * amgettuple call, in index_getnext_tid).  We do not do this when in
-	 * recovery because it may violate MVCC to do so.  See comments in
-	 * RelationGetIndexScan().
-	 */
-	if (!scan->xactStartedInRecovery)
-		scan->kill_prior_tuple = all_dead;
+		/*
+		 * If we scanned a whole HOT chain and found only dead tuples, tell index
+		 * AM to kill its entry for that TID (this will take effect in the next
+		 * amgettuple call, in index_getnext_tid).  We do not do this when in
+		 * recovery because it may violate MVCC to do so.  See comments in
+		 * RelationGetIndexScan().
+		 */
+		if (!scan->xactStartedInRecovery)
+			scan->kill_prior_tuple = all_dead;
+	}
 
 	return NULL;
 }
diff --git a/src/backend/access/nbtree/nbtinsert.c b/src/backend/access/nbtree/nbtinsert.c
index ef69290..e0afffd 100644
--- a/src/backend/access/nbtree/nbtinsert.c
+++ b/src/backend/access/nbtree/nbtinsert.c
@@ -19,11 +19,14 @@
 #include "access/nbtree.h"
 #include "access/transam.h"
 #include "access/xloginsert.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
 #include "miscadmin.h"
+#include "nodes/execnodes.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
 #include "utils/tqual.h"
-
+#include "utils/datum.h"
 
 typedef struct
 {
@@ -249,6 +252,9 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 	BTPageOpaque opaque;
 	Buffer		nbuf = InvalidBuffer;
 	bool		found = false;
+	Buffer		buffer;
+	HeapTupleData	heapTuple;
+	bool		recheck = false;
 
 	/* Assume unique until we find a duplicate */
 	*is_unique = true;
@@ -308,6 +314,8 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				curitup = (IndexTuple) PageGetItem(page, curitemid);
 				htid = curitup->t_tid;
 
+				recheck = false;
+
 				/*
 				 * If we are doing a recheck, we expect to find the tuple we
 				 * are rechecking.  It's not a duplicate, but we have to keep
@@ -325,112 +333,153 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				 * have just a single index entry for the entire chain.
 				 */
 				else if (heap_hot_search(&htid, heapRel, &SnapshotDirty,
-										 &all_dead))
+							&all_dead, &recheck, &buffer,
+							&heapTuple))
 				{
 					TransactionId xwait;
+					bool result = true;
 
 					/*
-					 * It is a duplicate. If we are only doing a partial
-					 * check, then don't bother checking if the tuple is being
-					 * updated in another transaction. Just return the fact
-					 * that it is a potential conflict and leave the full
-					 * check till later.
+					 * If the tuple was WARM updated, we may again see our own
+					 * tuple. Since WARM updates don't create new index
+					 * entries, our own tuple is only reachable via the old
+					 * index pointer.
 					 */
-					if (checkUnique == UNIQUE_CHECK_PARTIAL)
+					if (checkUnique == UNIQUE_CHECK_EXISTING &&
+							ItemPointerCompare(&htid, &itup->t_tid) == 0)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						*is_unique = false;
-						return InvalidTransactionId;
+						found = true;
+						result = false;
+						if (recheck)
+							UnlockReleaseBuffer(buffer);
 					}
-
-					/*
-					 * If this tuple is being updated by other transaction
-					 * then we have to wait for its commit/abort.
-					 */
-					xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
-						SnapshotDirty.xmin : SnapshotDirty.xmax;
-
-					if (TransactionIdIsValid(xwait))
+					else if (recheck)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						/* Tell _bt_doinsert to wait... */
-						*speculativeToken = SnapshotDirty.speculativeToken;
-						return xwait;
+						result = btrecheck(rel, curitup, heapRel, &heapTuple);
+						UnlockReleaseBuffer(buffer);
 					}
 
-					/*
-					 * Otherwise we have a definite conflict.  But before
-					 * complaining, look to see if the tuple we want to insert
-					 * is itself now committed dead --- if so, don't complain.
-					 * This is a waste of time in normal scenarios but we must
-					 * do it to support CREATE INDEX CONCURRENTLY.
-					 *
-					 * We must follow HOT-chains here because during
-					 * concurrent index build, we insert the root TID though
-					 * the actual tuple may be somewhere in the HOT-chain.
-					 * While following the chain we might not stop at the
-					 * exact tuple which triggered the insert, but that's OK
-					 * because if we find a live tuple anywhere in this chain,
-					 * we have a unique key conflict.  The other live tuple is
-					 * not part of this chain because it had a different index
-					 * entry.
-					 */
-					htid = itup->t_tid;
-					if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL))
-					{
-						/* Normal case --- it's still live */
-					}
-					else
+					if (result)
 					{
 						/*
-						 * It's been deleted, so no error, and no need to
-						 * continue searching
+						 * It is a duplicate. If we are only doing a partial
+						 * check, then don't bother checking if the tuple is being
+						 * updated in another transaction. Just return the fact
+						 * that it is a potential conflict and leave the full
+						 * check till later.
 						 */
-						break;
-					}
+						if (checkUnique == UNIQUE_CHECK_PARTIAL)
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							*is_unique = false;
+							return InvalidTransactionId;
+						}
 
-					/*
-					 * Check for a conflict-in as we would if we were going to
-					 * write to this page.  We aren't actually going to write,
-					 * but we want a chance to report SSI conflicts that would
-					 * otherwise be masked by this unique constraint
-					 * violation.
-					 */
-					CheckForSerializableConflictIn(rel, NULL, buf);
+						/*
+						 * If this tuple is being updated by other transaction
+						 * then we have to wait for its commit/abort.
+						 */
+						xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
+							SnapshotDirty.xmin : SnapshotDirty.xmax;
+
+						if (TransactionIdIsValid(xwait))
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							/* Tell _bt_doinsert to wait... */
+							*speculativeToken = SnapshotDirty.speculativeToken;
+							return xwait;
+						}
 
-					/*
-					 * This is a definite conflict.  Break the tuple down into
-					 * datums and report the error.  But first, make sure we
-					 * release the buffer locks we're holding ---
-					 * BuildIndexValueDescription could make catalog accesses,
-					 * which in the worst case might touch this same index and
-					 * cause deadlocks.
-					 */
-					if (nbuf != InvalidBuffer)
-						_bt_relbuf(rel, nbuf);
-					_bt_relbuf(rel, buf);
+						/*
+						 * Otherwise we have a definite conflict.  But before
+						 * complaining, look to see if the tuple we want to insert
+						 * is itself now committed dead --- if so, don't complain.
+						 * This is a waste of time in normal scenarios but we must
+						 * do it to support CREATE INDEX CONCURRENTLY.
+						 *
+						 * We must follow HOT-chains here because during
+						 * concurrent index build, we insert the root TID though
+						 * the actual tuple may be somewhere in the HOT-chain.
+						 * While following the chain we might not stop at the
+						 * exact tuple which triggered the insert, but that's OK
+						 * because if we find a live tuple anywhere in this chain,
+						 * we have a unique key conflict.  The other live tuple is
+						 * not part of this chain because it had a different index
+						 * entry.
+						 */
+						recheck = false;
+						ItemPointerCopy(&itup->t_tid, &htid);
+						if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL,
+									&recheck, &buffer, &heapTuple))
+						{
+							bool result = true;
+							if (recheck)
+							{
+								/*
+								 * Recheck whether the tuple actually
+								 * satisfies the index key. Otherwise, we
+								 * might be following a wrong index pointer
+								 * and must not count this tuple.
+								 */
+								result = btrecheck(rel, itup, heapRel, &heapTuple);
+								UnlockReleaseBuffer(buffer);
+							}
+							if (!result)
+								break;
+							/* Normal case --- it's still live */
+						}
+						else
+						{
+							/*
+							 * It's been deleted, so no error, and no need to
+							 * continue searching
+							 */
+							break;
+						}
 
-					{
-						Datum		values[INDEX_MAX_KEYS];
-						bool		isnull[INDEX_MAX_KEYS];
-						char	   *key_desc;
-
-						index_deform_tuple(itup, RelationGetDescr(rel),
-										   values, isnull);
-
-						key_desc = BuildIndexValueDescription(rel, values,
-															  isnull);
-
-						ereport(ERROR,
-								(errcode(ERRCODE_UNIQUE_VIOLATION),
-								 errmsg("duplicate key value violates unique constraint \"%s\"",
-										RelationGetRelationName(rel)),
-							   key_desc ? errdetail("Key %s already exists.",
-													key_desc) : 0,
-								 errtableconstraint(heapRel,
-											 RelationGetRelationName(rel))));
+						/*
+						 * Check for a conflict-in as we would if we were going to
+						 * write to this page.  We aren't actually going to write,
+						 * but we want a chance to report SSI conflicts that would
+						 * otherwise be masked by this unique constraint
+						 * violation.
+						 */
+						CheckForSerializableConflictIn(rel, NULL, buf);
+
+						/*
+						 * This is a definite conflict.  Break the tuple down into
+						 * datums and report the error.  But first, make sure we
+						 * release the buffer locks we're holding ---
+						 * BuildIndexValueDescription could make catalog accesses,
+						 * which in the worst case might touch this same index and
+						 * cause deadlocks.
+						 */
+						if (nbuf != InvalidBuffer)
+							_bt_relbuf(rel, nbuf);
+						_bt_relbuf(rel, buf);
+
+						{
+							Datum		values[INDEX_MAX_KEYS];
+							bool		isnull[INDEX_MAX_KEYS];
+							char	   *key_desc;
+
+							index_deform_tuple(itup, RelationGetDescr(rel),
+									values, isnull);
+
+							key_desc = BuildIndexValueDescription(rel, values,
+									isnull);
+
+							ereport(ERROR,
+									(errcode(ERRCODE_UNIQUE_VIOLATION),
+									 errmsg("duplicate key value violates unique constraint \"%s\"",
+										 RelationGetRelationName(rel)),
+									 key_desc ? errdetail("Key %s already exists.",
+										 key_desc) : 0,
+									 errtableconstraint(heapRel,
+										 RelationGetRelationName(rel))));
+						}
 					}
 				}
 				else if (all_dead)
diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c
index 128744c..6b1236a 100644
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -23,6 +23,7 @@
 #include "access/xlog.h"
 #include "catalog/index.h"
 #include "commands/vacuum.h"
+#include "executor/nodeIndexscan.h"
 #include "storage/indexfsm.h"
 #include "storage/ipc.h"
 #include "storage/lmgr.h"
@@ -117,6 +118,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = btendscan;
 	amroutine->ammarkpos = btmarkpos;
 	amroutine->amrestrpos = btrestrpos;
+	amroutine->amrecheck = btrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -292,8 +294,9 @@ btgettuple(IndexScanDesc scan, ScanDirection dir)
 	BTScanOpaque so = (BTScanOpaque) scan->opaque;
 	bool		res;
 
-	/* btree indexes are never lossy */
-	scan->xs_recheck = false;
+	/* btree indexes are never lossy, but WARM tuples may require recheck */
+	scan->xs_recheck = indexscan_recheck;
+	scan->xs_tuple_recheck = indexscan_recheck;
 
 	/*
 	 * If we have any array keys, initialize them during first call for a
diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c
index 063c988..c9c0501 100644
--- a/src/backend/access/nbtree/nbtutils.c
+++ b/src/backend/access/nbtree/nbtutils.c
@@ -20,11 +20,15 @@
 #include "access/nbtree.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
 #include "miscadmin.h"
+#include "nodes/execnodes.h"
 #include "utils/array.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 
 typedef struct BTSortArrayContext
@@ -2065,3 +2069,103 @@ btproperty(Oid index_oid, int attno,
 			return false;		/* punt to generic code */
 	}
 }
+
+/*
+ * Check whether the index tuple's key matches the one computed from the given
+ * heap tuple's attributes
+ */
+bool
+btrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	IndexInfo  *indexInfo;
+	EState	   *estate;
+	ExprContext *econtext;
+	TupleTableSlot *slot;
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	/* Get IndexInfo for this index */
+	indexInfo = BuildIndexInfo(indexRel);
+
+	/*
+	 * The heap tuple must be put into a slot for FormIndexDatum.
+	 */
+	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
+
+	ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
+
+	/*
+	 * Typically the index won't have expressions, but if it does we need an
+	 * EState to evaluate them.  We need it for exclusion constraints too,
+	 * even if they are just on simple columns.
+	 */
+	if (indexInfo->ii_Expressions != NIL ||
+			indexInfo->ii_ExclusionOps != NULL)
+	{
+		estate = CreateExecutorState();
+		econtext = GetPerTupleExprContext(estate);
+		econtext->ecxt_scantuple = slot;
+	}
+	else
+		estate = NULL;
+
+	/*
+	 * Form the index values and isnull flags for the index entry that we need
+	 * to check.
+	 *
+	 * Note: if the index uses functions that are not as immutable as they are
+	 * supposed to be, this could produce an index tuple different from the
+	 * original.  The index AM can catch such errors by verifying that it
+	 * finds a matching index entry with the tuple's TID.  For exclusion
+	 * constraints we check this in check_exclusion_constraint().
+	 */
+	FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL, then they are equal
+		 */
+		if (isnull[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If just one is NULL, then they are not equal
+		 */
+		if (isnull[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now just do a raw memory comparison. If the index tuple was formed
+		 * using this heap tuple, the computed index values must match
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	if (estate != NULL)
+		FreeExecutorState(estate);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	return equal;
+}
diff --git a/src/backend/access/spgist/spgutils.c b/src/backend/access/spgist/spgutils.c
index d570ae5..813b5c3 100644
--- a/src/backend/access/spgist/spgutils.c
+++ b/src/backend/access/spgist/spgutils.c
@@ -67,6 +67,7 @@ spghandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = spgendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 08b646d..e76e928 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -54,6 +54,7 @@
 #include "nodes/makefuncs.h"
 #include "nodes/nodeFuncs.h"
 #include "optimizer/clauses.h"
+#include "optimizer/var.h"
 #include "parser/parser.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
@@ -1691,6 +1692,20 @@ BuildIndexInfo(Relation index)
 	ii->ii_Concurrent = false;
 	ii->ii_BrokenHotChain = false;
 
+	/* build a bitmap of all table attributes referred by this index */
+	for (i = 0; i < ii->ii_NumIndexAttrs; i++)
+	{
+		AttrNumber attr = ii->ii_KeyAttrNumbers[i];
+		ii->ii_indxattrs = bms_add_member(ii->ii_indxattrs, attr -
+				FirstLowInvalidHeapAttributeNumber);
+	}
+
+	/* Collect all attributes used in expressions, too */
+	pull_varattnos((Node *) ii->ii_Expressions, 1, &ii->ii_indxattrs);
+
+	/* Collect all attributes in the index predicate, too */
+	pull_varattnos((Node *) ii->ii_Predicate, 1, &ii->ii_indxattrs);
+
 	return ii;
 }
 
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index e011af1..97672a9 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -472,6 +472,7 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_tuples_deleted(C.oid) AS n_tup_del,
             pg_stat_get_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_tuples_warm_updated(C.oid) AS n_tup_warm_upd,
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
@@ -502,7 +503,8 @@ CREATE VIEW pg_stat_xact_all_tables AS
             pg_stat_get_xact_tuples_inserted(C.oid) AS n_tup_ins,
             pg_stat_get_xact_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_xact_tuples_deleted(C.oid) AS n_tup_del,
-            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd
+            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_xact_tuples_warm_updated(C.oid) AS n_tup_warm_upd
     FROM pg_class C LEFT JOIN
          pg_index I ON C.oid = I.indrelid
          LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
diff --git a/src/backend/commands/constraint.c b/src/backend/commands/constraint.c
index 26f9114..997c8f5 100644
--- a/src/backend/commands/constraint.c
+++ b/src/backend/commands/constraint.c
@@ -40,6 +40,7 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	TriggerData *trigdata = (TriggerData *) fcinfo->context;
 	const char *funcname = "unique_key_recheck";
 	HeapTuple	new_row;
+	HeapTupleData heapTuple;
 	ItemPointerData tmptid;
 	Relation	indexRel;
 	IndexInfo  *indexInfo;
@@ -102,7 +103,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	 * removed.
 	 */
 	tmptid = new_row->t_self;
-	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL))
+	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL,
+				NULL, NULL, &heapTuple))
 	{
 		/*
 		 * All rows in the HOT chain are dead, so skip the check.
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index ec5d6f1..5e57cc9 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -2551,6 +2551,8 @@ CopyFrom(CopyState cstate)
 					if (resultRelInfo->ri_NumIndices > 0)
 						recheckIndexes = ExecInsertIndexTuples(slot,
 															&(tuple->t_self),
+															&(tuple->t_self),
+															NULL,
 															   estate,
 															   false,
 															   NULL,
@@ -2669,6 +2671,7 @@ CopyFromInsertBatch(CopyState cstate, EState *estate, CommandId mycid,
 			ExecStoreTuple(bufferedTuples[i], myslot, InvalidBuffer, false);
 			recheckIndexes =
 				ExecInsertIndexTuples(myslot, &(bufferedTuples[i]->t_self),
+									  &(bufferedTuples[i]->t_self), NULL,
 									  estate, false, NULL, NIL);
 			ExecARInsertTriggers(estate, resultRelInfo,
 								 bufferedTuples[i],
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index b5fb325..cd9b9a7 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -1468,6 +1468,7 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 
 		recptr = log_heap_clean(onerel, buffer,
 								NULL, 0, NULL, 0,
+								NULL, 0,
 								unused, uncnt,
 								vacrelstats->latestRemovedXid);
 		PageSetLSN(page, recptr);
@@ -2128,6 +2129,22 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
 						break;
 					}
 
+					/*
+					 * If this tuple was ever WARM updated or is a WARM tuple,
+					 * there could be multiple index entries pointing to the
+					 * root of this chain. We can't do index-only scans for
+					 * such tuples without rechecking the index keys. So mark
+					 * the page as !all_visible.
+					 *
+					 * XXX Should we look at the root line pointer and check
+					 * whether the WARM flag is set there, or is checking the
+					 * tuples in the chain good enough?
+					 */
+					if (HeapTupleHeaderIsHeapWarmTuple(tuple.t_data))
+					{
+						all_visible = false;
+					}
+
 					/* Track newest xmin on page. */
 					if (TransactionIdFollows(xmin, *visibility_cutoff_xid))
 						*visibility_cutoff_xid = xmin;
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 009c1b7..03c6b62 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -270,6 +270,8 @@ ExecCloseIndices(ResultRelInfo *resultRelInfo)
 List *
 ExecInsertIndexTuples(TupleTableSlot *slot,
 					  ItemPointer tupleid,
+					  ItemPointer root_tid,
+					  Bitmapset *updated_attrs,
 					  EState *estate,
 					  bool noDupErr,
 					  bool *specConflict,
@@ -324,6 +326,17 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 		if (!indexInfo->ii_ReadyForInserts)
 			continue;
 
+		/*
+		 * If updated_attrs is set, we only insert index entries for those
+		 * indexes whose columns have changed. All other indexes can use their
+		 * existing index pointers to look up the new tuple.
+		 */
+		if (updated_attrs)
+		{
+			if (!bms_overlap(updated_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
 		/* Check for partial index */
 		if (indexInfo->ii_Predicate != NIL)
 		{
@@ -389,7 +402,7 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 			index_insert(indexRelation, /* index relation */
 						 values,	/* array of index Datums */
 						 isnull,	/* null flags */
-						 tupleid,		/* tid of heap tuple */
+						 root_tid,		/* tid of heap or root tuple */
 						 heapRelation,	/* heap relation */
 						 checkUnique);	/* type of uniqueness check to do */
 
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index 449aacb..ff77349 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -37,6 +37,7 @@
 
 #include "access/relscan.h"
 #include "access/transam.h"
+#include "access/valid.h"
 #include "executor/execdebug.h"
 #include "executor/nodeBitmapHeapscan.h"
 #include "pgstat.h"
@@ -362,11 +363,23 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 			OffsetNumber offnum = tbmres->offsets[curslot];
 			ItemPointerData tid;
 			HeapTupleData heapTuple;
+			bool recheck = false;
 
 			ItemPointerSet(&tid, page, offnum);
 			if (heap_hot_search_buffer(&tid, scan->rs_rd, buffer, snapshot,
-									   &heapTuple, NULL, true))
-				scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+									   &heapTuple, NULL, true, &recheck))
+			{
+				bool valid = true;
+
+				if (scan->rs_key)
+					HeapKeyTest(&heapTuple, RelationGetDescr(scan->rs_rd),
+							scan->rs_nkeys, scan->rs_key, valid);
+				if (valid)
+					scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+
+				if (recheck)
+					tbmres->recheck = true;
+			}
 		}
 	}
 	else
diff --git a/src/backend/executor/nodeIndexonlyscan.c b/src/backend/executor/nodeIndexonlyscan.c
index 4f6f91c..49bda34 100644
--- a/src/backend/executor/nodeIndexonlyscan.c
+++ b/src/backend/executor/nodeIndexonlyscan.c
@@ -141,6 +141,26 @@ IndexOnlyNext(IndexOnlyScanState *node)
 			 * but it's not clear whether it's a win to do so.  The next index
 			 * entry might require a visit to the same heap page.
 			 */
+
+			/*
+			 * If the index was lossy or the tuple was WARM, we have to recheck
+			 * the index quals using the fetched tuple.
+			 */
+			if (scandesc->xs_tuple_recheck)
+			{
+				ExecStoreTuple(tuple,	/* tuple to store */
+						slot,	/* slot to store in */
+						scandesc->xs_cbuf,		/* buffer containing tuple */
+						false);	/* don't pfree */
+				econtext->ecxt_scantuple = slot;
+				ResetExprContext(econtext);
+				if (!ExecQual(node->indexqual, econtext, false))
+				{
+					/* Fails recheck, so drop it and loop back for another */
+					InstrCountFiltered2(node, 1);
+					continue;
+				}
+			}
 		}
 
 		/*
diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c
index 3143bd9..daa0826 100644
--- a/src/backend/executor/nodeIndexscan.c
+++ b/src/backend/executor/nodeIndexscan.c
@@ -39,6 +39,8 @@
 #include "utils/memutils.h"
 #include "utils/rel.h"
 
+bool indexscan_recheck = false;
+
 /*
  * When an ordering operator is used, tuples fetched from the index that
  * need to be reordered are queued in a pairing heap, as ReorderTuples.
@@ -115,10 +117,10 @@ IndexNext(IndexScanState *node)
 					   false);	/* don't pfree */
 
 		/*
-		 * If the index was lossy, we have to recheck the index quals using
-		 * the fetched tuple.
+		 * If the index was lossy or the tuple was WARM, we have to recheck
+		 * the index quals using the fetched tuple.
 		 */
-		if (scandesc->xs_recheck)
+		if (scandesc->xs_recheck || scandesc->xs_tuple_recheck)
 		{
 			econtext->ecxt_scantuple = slot;
 			ResetExprContext(econtext);
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index efb0c5e..0ba71a3 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -448,6 +448,7 @@ ExecInsert(ModifyTableState *mtstate,
 
 			/* insert index entries for tuple */
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												 &(tuple->t_self), NULL,
 												 estate, true, &specConflict,
 												   arbiterIndexes);
 
@@ -494,6 +495,7 @@ ExecInsert(ModifyTableState *mtstate,
 			/* insert index entries for tuple */
 			if (resultRelInfo->ri_NumIndices > 0)
 				recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+													   &(tuple->t_self), NULL,
 													   estate, false, NULL,
 													   arbiterIndexes);
 		}
@@ -824,6 +826,9 @@ ExecUpdate(ItemPointer tupleid,
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
 	List	   *recheckIndexes = NIL;
+	Bitmapset  *updated_attrs = NULL;
+	ItemPointerData	root_tid;
+	bool		warm_update;
 
 	/*
 	 * abort the operation if not running transactions
@@ -938,7 +943,7 @@ lreplace:;
 							 estate->es_output_cid,
 							 estate->es_crosscheck_snapshot,
 							 true /* wait for commit */ ,
-							 &hufd, &lockmode);
+							 &hufd, &lockmode, &updated_attrs, &warm_update);
 		switch (result)
 		{
 			case HeapTupleSelfUpdated:
@@ -1025,10 +1030,28 @@ lreplace:;
 		 * the t_self field.
 		 *
 		 * If it's a HOT update, we mustn't insert new index entries.
+		 *
+		 * If it's a WARM update, then we must insert new entries with TID
+		 * pointing to the root of the WARM chain.
 		 */
-		if (resultRelInfo->ri_NumIndices > 0 && !HeapTupleIsHeapOnly(tuple))
+		if (resultRelInfo->ri_NumIndices > 0 &&
+			(!HeapTupleIsHeapOnly(tuple) || warm_update))
+		{
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self, &root_tid);
+				bms_free(updated_attrs);
+				updated_attrs = NULL;
+			}
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   updated_attrs,
 												   estate, false, NULL, NIL);
+		}
 	}
 
 	if (canSetTag)
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index c7584cb..d89d37b 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -1823,7 +1823,7 @@ pgstat_count_heap_insert(Relation rel, int n)
  * pgstat_count_heap_update - count a tuple update
  */
 void
-pgstat_count_heap_update(Relation rel, bool hot)
+pgstat_count_heap_update(Relation rel, bool hot, bool warm)
 {
 	PgStat_TableStatus *pgstat_info = rel->pgstat_info;
 
@@ -1841,6 +1841,8 @@ pgstat_count_heap_update(Relation rel, bool hot)
 		/* t_tuples_hot_updated is nontransactional, so just advance it */
 		if (hot)
 			pgstat_info->t_counts.t_tuples_hot_updated++;
+		else if (warm)
+			pgstat_info->t_counts.t_tuples_warm_updated++;
 	}
 }
 
@@ -4083,6 +4085,7 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->tuples_updated = 0;
 		result->tuples_deleted = 0;
 		result->tuples_hot_updated = 0;
+		result->tuples_warm_updated = 0;
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
 		result->changes_since_analyze = 0;
@@ -5192,6 +5195,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated = tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted = tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated = tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated = tabmsg->t_counts.t_tuples_warm_updated;
 			tabentry->n_live_tuples = tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples = tabmsg->t_counts.t_delta_dead_tuples;
 			tabentry->changes_since_analyze = tabmsg->t_counts.t_changed_tuples;
@@ -5219,6 +5223,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated += tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted += tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated += tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated += tabmsg->t_counts.t_tuples_warm_updated;
 			/* If table was truncated, first reset the live/dead counters */
 			if (tabmsg->t_counts.t_truncated)
 			{
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 2d3cf9e..25752b0 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -115,6 +115,7 @@ extern Datum pg_stat_get_xact_tuples_inserted(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_xact_tuples_updated(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_xact_tuples_deleted(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_xact_tuples_hot_updated(PG_FUNCTION_ARGS);
+extern Datum pg_stat_get_xact_tuples_warm_updated(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_xact_blocks_fetched(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_xact_blocks_hit(PG_FUNCTION_ARGS);
 
@@ -245,6 +246,22 @@ pg_stat_get_tuples_hot_updated(PG_FUNCTION_ARGS)
 
 
 Datum
+pg_stat_get_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_StatTabEntry *tabentry;
+
+	if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+
+Datum
 pg_stat_get_live_tuples(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
@@ -1744,6 +1761,21 @@ pg_stat_get_xact_tuples_hot_updated(PG_FUNCTION_ARGS)
 }
 
 Datum
+pg_stat_get_xact_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_TableStatus *tabentry;
+
+	if ((tabentry = find_tabstat_entry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->t_counts.t_tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+Datum
 pg_stat_get_xact_blocks_fetched(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 79e0b1f..37874ca 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -2030,6 +2030,7 @@ RelationDestroyRelation(Relation relation, bool remember_tupdesc)
 	list_free_deep(relation->rd_fkeylist);
 	list_free(relation->rd_indexlist);
 	bms_free(relation->rd_indexattr);
+	bms_free(relation->rd_exprindexattr);
 	bms_free(relation->rd_keyattr);
 	bms_free(relation->rd_idattr);
 	if (relation->rd_options)
@@ -4373,12 +4374,15 @@ Bitmapset *
 RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 {
 	Bitmapset  *indexattrs;		/* indexed columns */
+	Bitmapset  *exprindexattrs;	/* indexed columns in expression/predicate
+									 indexes */
 	Bitmapset  *uindexattrs;	/* columns in unique indexes */
 	Bitmapset  *idindexattrs;	/* columns in the replica identity */
 	List	   *indexoidlist;
 	Oid			relreplindex;
 	ListCell   *l;
 	MemoryContext oldcxt;
+	bool		supportswarm = true;	/* true if the table can be WARM updated */
 
 	/* Quick exit if we already computed the result. */
 	if (relation->rd_indexattr != NULL)
@@ -4391,6 +4395,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				return bms_copy(relation->rd_keyattr);
 			case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 				return bms_copy(relation->rd_idattr);
+			case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+				return bms_copy(relation->rd_exprindexattr);
 			default:
 				elog(ERROR, "unknown attrKind %u", attrKind);
 		}
@@ -4429,6 +4435,7 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 	 * won't be returned at all by RelationGetIndexList.
 	 */
 	indexattrs = NULL;
+	exprindexattrs = NULL;
 	uindexattrs = NULL;
 	idindexattrs = NULL;
 	foreach(l, indexoidlist)
@@ -4474,19 +4481,32 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 		}
 
 		/* Collect all attributes used in expressions, too */
-		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &exprindexattrs);
 
 		/* Collect all attributes in the index predicate, too */
-		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &exprindexattrs);
+
+		/*
+		 * Check if the index has an amrecheck method defined. If not, the
+		 * index does not support WARM updates, so completely disable WARM
+		 * updates on such tables.
+		 */
+		if (!indexDesc->rd_amroutine->amrecheck)
+			supportswarm = false;
 
 		index_close(indexDesc, AccessShareLock);
 	}
 
 	list_free(indexoidlist);
 
+	/* Remember if the table can do WARM updates */
+	relation->rd_supportswarm = supportswarm;
+
 	/* Don't leak the old values of these bitmaps, if any */
 	bms_free(relation->rd_indexattr);
 	relation->rd_indexattr = NULL;
+	bms_free(relation->rd_exprindexattr);
+	relation->rd_exprindexattr = NULL;
 	bms_free(relation->rd_keyattr);
 	relation->rd_keyattr = NULL;
 	bms_free(relation->rd_idattr);
@@ -4502,7 +4522,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 	oldcxt = MemoryContextSwitchTo(CacheMemoryContext);
 	relation->rd_keyattr = bms_copy(uindexattrs);
 	relation->rd_idattr = bms_copy(idindexattrs);
-	relation->rd_indexattr = bms_copy(indexattrs);
+	relation->rd_exprindexattr = bms_copy(exprindexattrs);
+	relation->rd_indexattr = bms_copy(bms_union(indexattrs, exprindexattrs));
 	MemoryContextSwitchTo(oldcxt);
 
 	/* We return our original working copy for caller to play with */
@@ -4514,6 +4535,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 			return uindexattrs;
 		case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 			return idindexattrs;
+		case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+			return exprindexattrs;
 		default:
 			elog(ERROR, "unknown attrKind %u", attrKind);
 			return NULL;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 28ebcb6..2241ffb 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -112,6 +112,7 @@ extern char *default_tablespace;
 extern char *temp_tablespaces;
 extern bool ignore_checksum_failure;
 extern bool synchronize_seqscans;
+extern bool indexscan_recheck;
 
 #ifdef TRACE_SYNCSCAN
 extern bool trace_syncscan;
@@ -1288,6 +1289,16 @@ static struct config_bool ConfigureNamesBool[] =
 		NULL, NULL, NULL
 	},
 	{
+		{"indexscan_recheck", PGC_USERSET, DEVELOPER_OPTIONS,
+			gettext_noop("Recheck heap rows returned from an index scan."),
+			NULL,
+			GUC_NOT_IN_SAMPLE
+		},
+		&indexscan_recheck,
+		false,
+		NULL, NULL, NULL
+	},
+	{
 		{"debug_deadlocks", PGC_SUSET, DEVELOPER_OPTIONS,
 			gettext_noop("Dumps information about all current locks when a deadlock timeout occurs."),
 			NULL,
diff --git a/src/include/access/amapi.h b/src/include/access/amapi.h
index 1036cca..37eaf76 100644
--- a/src/include/access/amapi.h
+++ b/src/include/access/amapi.h
@@ -13,6 +13,7 @@
 #define AMAPI_H
 
 #include "access/genam.h"
+#include "access/itup.h"
 
 /*
  * We don't wish to include planner header files here, since most of an index
@@ -137,6 +138,9 @@ typedef void (*ammarkpos_function) (IndexScanDesc scan);
 /* restore marked scan position */
 typedef void (*amrestrpos_function) (IndexScanDesc scan);
 
+/* recheck index tuple and heap tuple match */
+typedef bool (*amrecheck_function) (Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * API struct for an index AM.  Note this must be stored in a single palloc'd
@@ -196,6 +200,7 @@ typedef struct IndexAmRoutine
 	amendscan_function amendscan;
 	ammarkpos_function ammarkpos;		/* can be NULL */
 	amrestrpos_function amrestrpos;		/* can be NULL */
+	amrecheck_function amrecheck;		/* can be NULL */
 } IndexAmRoutine;
 
 
diff --git a/src/include/access/hash.h b/src/include/access/hash.h
index 6dfc41f..f1c73a0 100644
--- a/src/include/access/hash.h
+++ b/src/include/access/hash.h
@@ -389,4 +389,8 @@ extern void hashbucketcleanup(Relation rel, Bucket cur_bucket,
 				  bool bucket_has_garbage,
 				  IndexBulkDeleteCallback callback, void *callback_state);
 
+/* hash.c */
+extern bool hashrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+
 #endif   /* HASH_H */
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 81f7982..78e16a9 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -137,9 +137,10 @@ extern bool heap_fetch(Relation relation, Snapshot snapshot,
 		   Relation stats_relation);
 extern bool heap_hot_search_buffer(ItemPointer tid, Relation relation,
 					   Buffer buffer, Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call);
+					   bool *all_dead, bool first_call, bool *recheck);
 extern bool heap_hot_search(ItemPointer tid, Relation relation,
-				Snapshot snapshot, bool *all_dead);
+				Snapshot snapshot, bool *all_dead,
+				bool *recheck, Buffer *buffer, HeapTuple heapTuple);
 
 extern void heap_get_latest_tid(Relation relation, Snapshot snapshot,
 					ItemPointer tid);
@@ -160,7 +161,8 @@ extern void heap_abort_speculative(Relation relation, HeapTuple tuple);
 extern HTSU_Result heap_update(Relation relation, ItemPointer otid,
 			HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode);
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **updated_attrs, bool *warm_update);
 extern HTSU_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
 				CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
 				bool follow_update,
@@ -186,6 +188,7 @@ extern int heap_page_prune(Relation relation, Buffer buffer,
 				bool report_stats, TransactionId *latestRemovedXid);
 extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
+						bool *warmchain,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
 extern void heap_get_root_tuple_one(Page page, OffsetNumber target_offnum,
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index 5a04561..ddc3a7a 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -80,6 +80,7 @@
 #define XLH_UPDATE_CONTAINS_NEW_TUPLE			(1<<4)
 #define XLH_UPDATE_PREFIX_FROM_OLD				(1<<5)
 #define XLH_UPDATE_SUFFIX_FROM_OLD				(1<<6)
+#define XLH_UPDATE_WARM_UPDATE					(1<<7)
 
 /* convenience macro for checking whether any form of old tuple was logged */
 #define XLH_UPDATE_CONTAINS_OLD						\
@@ -211,7 +212,9 @@ typedef struct xl_heap_update
  *	* for each redirected item: the item offset, then the offset redirected to
  *	* for each now-dead item: the item offset
  *	* for each now-unused item: the item offset
- * The total number of OffsetNumbers is therefore 2*nredirected+ndead+nunused.
+ *	* for each now-warm item: the item offset
+ * The total number of OffsetNumbers is therefore
+ * 2*nredirected+ndead+nunused+nwarm.
  * Note that nunused is not explicitly stored, but may be found by reference
  * to the total record length.
  */
@@ -220,10 +223,11 @@ typedef struct xl_heap_clean
 	TransactionId latestRemovedXid;
 	uint16		nredirected;
 	uint16		ndead;
+	uint16		nwarm;
 	/* OFFSET NUMBERS are in the block reference 0 */
 } xl_heap_clean;
 
-#define SizeOfHeapClean (offsetof(xl_heap_clean, ndead) + sizeof(uint16))
+#define SizeOfHeapClean (offsetof(xl_heap_clean, nwarm) + sizeof(uint16))
 
 /*
  * Cleanup_info is required in some cases during a lazy VACUUM.
@@ -384,6 +388,7 @@ extern XLogRecPtr log_heap_cleanup_info(RelFileNode rnode,
 					  TransactionId latestRemovedXid);
 extern XLogRecPtr log_heap_clean(Relation reln, Buffer buffer,
 			   OffsetNumber *redirected, int nredirected,
+			   OffsetNumber *warm, int nwarm,
 			   OffsetNumber *nowdead, int ndead,
 			   OffsetNumber *nowunused, int nunused,
 			   TransactionId latestRemovedXid);
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 4313eb9..09246b2 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,7 +260,8 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x0800 are available */
+#define HEAP_WARM_TUPLE			0x0800	/* tuple is part of a WARM chain */
 #define HEAP_LATEST_TUPLE		0x1000	/*
 										 * This is the last tuple in chain and
 										 * ip_posid points to the root line
@@ -271,7 +272,7 @@ struct HeapTupleHeaderData
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF800	/* visibility-related bits */
 
 
 /*
@@ -510,6 +511,21 @@ do { \
   (tup)->t_infomask2 & HEAP_ONLY_TUPLE \
 )
 
+#define HeapTupleHeaderSetHeapWarmTuple(tup) \
+do { \
+	(tup)->t_infomask2 |= HEAP_WARM_TUPLE; \
+} while (0)
+
+#define HeapTupleHeaderClearHeapWarmTuple(tup) \
+do { \
+	(tup)->t_infomask2 &= ~HEAP_WARM_TUPLE; \
+} while (0)
+
+#define HeapTupleHeaderIsHeapWarmTuple(tup) \
+( \
+  ((tup)->t_infomask2 & HEAP_WARM_TUPLE) \
+)
+
 #define HeapTupleHeaderSetHeapLatest(tup) \
 ( \
 	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE \
@@ -771,6 +787,15 @@ struct MinimalTupleData
 #define HeapTupleClearHeapOnly(tuple) \
 		HeapTupleHeaderClearHeapOnly((tuple)->t_data)
 
+#define HeapTupleIsHeapWarmTuple(tuple) \
+		HeapTupleHeaderIsHeapWarmTuple((tuple)->t_data)
+
+#define HeapTupleSetHeapWarmTuple(tuple) \
+		HeapTupleHeaderSetHeapWarmTuple((tuple)->t_data)
+
+#define HeapTupleClearHeapWarmTuple(tuple) \
+		HeapTupleHeaderClearHeapWarmTuple((tuple)->t_data)
+
 #define HeapTupleGetOid(tuple) \
 		HeapTupleHeaderGetOid((tuple)->t_data)
 
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index c580f51..83af072 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -751,6 +751,8 @@ extern bytea *btoptions(Datum reloptions, bool validate);
 extern bool btproperty(Oid index_oid, int attno,
 		   IndexAMProperty prop, const char *propname,
 		   bool *res, bool *isnull);
+extern bool btrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * prototypes for functions in nbtvalidate.c
diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h
index de98dd6..da7ec84 100644
--- a/src/include/access/relscan.h
+++ b/src/include/access/relscan.h
@@ -111,7 +111,8 @@ typedef struct IndexScanDescData
 	HeapTupleData xs_ctup;		/* current heap tuple, if any */
 	Buffer		xs_cbuf;		/* current heap buffer in scan, if any */
 	/* NB: if xs_cbuf is not InvalidBuffer, we hold a pin on that buffer */
-	bool		xs_recheck;		/* T means scan keys must be rechecked */
+	bool		xs_recheck;		/* T means scan keys must be rechecked for each tuple */
+	bool		xs_tuple_recheck;	/* T means scan keys must be rechecked for current tuple */
 
 	/*
 	 * When fetching with an ordering operator, the values of the ORDER BY
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index 047a1ce..31f295f 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -2734,6 +2734,8 @@ DATA(insert OID = 1933 (  pg_stat_get_tuples_deleted	PGNSP PGUID 12 1 0 0 0 f f
 DESCR("statistics: number of tuples deleted");
 DATA(insert OID = 1972 (  pg_stat_get_tuples_hot_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated");
+DATA(insert OID = 3344 (  pg_stat_get_tuples_warm_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated");
 DATA(insert OID = 2878 (  pg_stat_get_live_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_live_tuples _null_ _null_ _null_ ));
 DESCR("statistics: number of live tuples");
 DATA(insert OID = 2879 (  pg_stat_get_dead_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_dead_tuples _null_ _null_ _null_ ));
@@ -2884,6 +2886,8 @@ DATA(insert OID = 3042 (  pg_stat_get_xact_tuples_deleted		PGNSP PGUID 12 1 0 0
 DESCR("statistics: number of tuples deleted in current transaction");
 DATA(insert OID = 3043 (  pg_stat_get_xact_tuples_hot_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated in current transaction");
+DATA(insert OID = 3343 (  pg_stat_get_xact_tuples_warm_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated in current transaction");
 DATA(insert OID = 3044 (  pg_stat_get_xact_blocks_fetched		PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_fetched _null_ _null_ _null_ ));
 DESCR("statistics: number of blocks fetched in current transaction");
 DATA(insert OID = 3045 (  pg_stat_get_xact_blocks_hit			PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_hit _null_ _null_ _null_ ));
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 136276b..f463014 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -366,6 +366,7 @@ extern void UnregisterExprContextCallback(ExprContext *econtext,
 extern void ExecOpenIndices(ResultRelInfo *resultRelInfo, bool speculative);
 extern void ExecCloseIndices(ResultRelInfo *resultRelInfo);
 extern List *ExecInsertIndexTuples(TupleTableSlot *slot, ItemPointer tupleid,
+					  ItemPointer root_tid, Bitmapset *updated_attrs,
 					  EState *estate, bool noDupErr, bool *specConflict,
 					  List *arbiterIndexes);
 extern bool ExecCheckIndexConstraints(TupleTableSlot *slot, EState *estate,
diff --git a/src/include/executor/nodeIndexscan.h b/src/include/executor/nodeIndexscan.h
index 194fadb..fe9c78e 100644
--- a/src/include/executor/nodeIndexscan.h
+++ b/src/include/executor/nodeIndexscan.h
@@ -38,4 +38,5 @@ extern bool ExecIndexEvalArrayKeys(ExprContext *econtext,
 					   IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 extern bool ExecIndexAdvanceArrayKeys(IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 
+extern bool indexscan_recheck;
 #endif   /* NODEINDEXSCAN_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 8004d85..3bf4b5f 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -61,6 +61,7 @@ typedef struct IndexInfo
 	NodeTag		type;
 	int			ii_NumIndexAttrs;
 	AttrNumber	ii_KeyAttrNumbers[INDEX_MAX_KEYS];
+	Bitmapset  *ii_indxattrs;	/* bitmap of all columns used in this index */
 	List	   *ii_Expressions; /* list of Expr */
 	List	   *ii_ExpressionsState;	/* list of ExprState */
 	List	   *ii_Predicate;	/* list of Expr */
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 152ff06..e0c8a90 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -105,6 +105,7 @@ typedef struct PgStat_TableCounts
 	PgStat_Counter t_tuples_updated;
 	PgStat_Counter t_tuples_deleted;
 	PgStat_Counter t_tuples_hot_updated;
+	PgStat_Counter t_tuples_warm_updated;
 	bool		t_truncated;
 
 	PgStat_Counter t_delta_live_tuples;
@@ -625,6 +626,7 @@ typedef struct PgStat_StatTabEntry
 	PgStat_Counter tuples_updated;
 	PgStat_Counter tuples_deleted;
 	PgStat_Counter tuples_hot_updated;
+	PgStat_Counter tuples_warm_updated;
 
 	PgStat_Counter n_live_tuples;
 	PgStat_Counter n_dead_tuples;
@@ -1176,7 +1178,7 @@ pgstat_report_wait_end(void)
 	(pgStatBlockWriteTime += (n))
 
 extern void pgstat_count_heap_insert(Relation rel, int n);
-extern void pgstat_count_heap_update(Relation rel, bool hot);
+extern void pgstat_count_heap_update(Relation rel, bool hot, bool warm);
 extern void pgstat_count_heap_delete(Relation rel);
 extern void pgstat_count_truncate(Relation rel);
 extern void pgstat_update_heap_dead_tuples(Relation rel, int delta);
diff --git a/src/include/storage/itemid.h b/src/include/storage/itemid.h
index 509c577..8c9cc99 100644
--- a/src/include/storage/itemid.h
+++ b/src/include/storage/itemid.h
@@ -46,6 +46,12 @@ typedef ItemIdData *ItemId;
 typedef uint16 ItemOffset;
 typedef uint16 ItemLength;
 
+/*
+ * Special value stored in lp_len to indicate that the chain starting at this
+ * line pointer may contain WARM tuples.  It must be interpreted only in
+ * combination with the LP_REDIRECT flag.
+ */
+#define SpecHeapWarmLen	0x1ffb
 
 /* ----------------
  *		support macros
@@ -112,12 +118,15 @@ typedef uint16 ItemLength;
 #define ItemIdIsDead(itemId) \
 	((itemId)->lp_flags == LP_DEAD)
 
+#define ItemIdIsHeapWarm(itemId) \
+	(((itemId)->lp_flags == LP_REDIRECT) && \
+	 ((itemId)->lp_len == SpecHeapWarmLen))
 /*
  * ItemIdHasStorage
  *		True iff item identifier has associated storage.
  */
 #define ItemIdHasStorage(itemId) \
-	((itemId)->lp_len != 0)
+	(!ItemIdIsRedirected(itemId) && (itemId)->lp_len != 0)
 
 /*
  * ItemIdSetUnused
@@ -168,6 +177,26 @@ typedef uint16 ItemLength;
 )
 
 /*
+ * ItemIdSetHeapWarm
+ * 		Mark the item identifier as the start of a WARM chain
+ *
+ * Note: since all bits in lp_flags are currently used, we store a special
+ * value in the lp_len field to indicate this state.  This is needed only for
+ * LP_REDIRECT items, whose lp_len field is otherwise unused.
+ */
+#define ItemIdSetHeapWarm(itemId) \
+do { \
+	AssertMacro((itemId)->lp_flags == LP_REDIRECT); \
+	(itemId)->lp_len = SpecHeapWarmLen; \
+} while (0)
+
+#define ItemIdClearHeapWarm(itemId) \
+do { \
+	AssertMacro((itemId)->lp_flags == LP_REDIRECT); \
+	(itemId)->lp_len = 0; \
+} while (0)
+
+/*
  * ItemIdMarkDead
  *		Set the item identifier to be DEAD, keeping its existing storage.
  *
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index fa15f28..982bf4c 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -101,8 +101,11 @@ typedef struct RelationData
 
 	/* data managed by RelationGetIndexAttrBitmap: */
 	Bitmapset  *rd_indexattr;	/* identifies columns used in indexes */
+	Bitmapset  *rd_exprindexattr; /* identifies columns used in index
+									 expressions or predicates */
 	Bitmapset  *rd_keyattr;		/* cols that can be ref'd by foreign keys */
 	Bitmapset  *rd_idattr;		/* included in replica identity index */
+	bool		rd_supportswarm;/* True if the table can be WARM updated */
 
 	/*
 	 * rd_options is set whenever rd_rel is loaded into the relcache entry.
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index 6ea7dd2..290e9b7 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -48,7 +48,8 @@ typedef enum IndexAttrBitmapKind
 {
 	INDEX_ATTR_BITMAP_ALL,
 	INDEX_ATTR_BITMAP_KEY,
-	INDEX_ATTR_BITMAP_IDENTITY_KEY
+	INDEX_ATTR_BITMAP_IDENTITY_KEY,
+	INDEX_ATTR_BITMAP_EXPR_PREDICATE
 } IndexAttrBitmapKind;
 
 extern Bitmapset *RelationGetIndexAttrBitmap(Relation relation,
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 031e8c2..c416fe6 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1705,6 +1705,7 @@ pg_stat_all_tables| SELECT c.oid AS relid,
     pg_stat_get_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_tuples_deleted(c.oid) AS n_tup_del,
     pg_stat_get_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_tuples_warm_updated(c.oid) AS n_tup_warm_upd,
     pg_stat_get_live_tuples(c.oid) AS n_live_tup,
     pg_stat_get_dead_tuples(c.oid) AS n_dead_tup,
     pg_stat_get_mod_since_analyze(c.oid) AS n_mod_since_analyze,
@@ -1838,6 +1839,7 @@ pg_stat_sys_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1881,6 +1883,7 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1918,7 +1921,8 @@ pg_stat_xact_all_tables| SELECT c.oid AS relid,
     pg_stat_get_xact_tuples_inserted(c.oid) AS n_tup_ins,
     pg_stat_get_xact_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_xact_tuples_deleted(c.oid) AS n_tup_del,
-    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd
+    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_xact_tuples_warm_updated(c.oid) AS n_tup_warm_upd
    FROM ((pg_class c
      LEFT JOIN pg_index i ON ((c.oid = i.indrelid)))
      LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
@@ -1934,7 +1938,8 @@ pg_stat_xact_sys_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname = ANY (ARRAY['pg_catalog'::name, 'information_schema'::name])) OR (pg_stat_xact_all_tables.schemaname ~ '^pg_toast'::text));
 pg_stat_xact_user_functions| SELECT p.oid AS funcid,
@@ -1956,7 +1961,8 @@ pg_stat_xact_user_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname <> ALL (ARRAY['pg_catalog'::name, 'information_schema'::name])) AND (pg_stat_xact_all_tables.schemaname !~ '^pg_toast'::text));
 pg_statio_all_indexes| SELECT c.oid AS relid,
diff --git a/src/test/regress/expected/warm.out b/src/test/regress/expected/warm.out
new file mode 100644
index 0000000..0aa1b83
--- /dev/null
+++ b/src/test/regress/expected/warm.out
@@ -0,0 +1,51 @@
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+explain select * from test_warm where lower(a) = 'test';
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on test_warm  (cost=4.18..12.65 rows=4 width=64)
+   Recheck Cond: (lower(a) = 'test'::text)
+   ->  Bitmap Index Scan on test_warmindx  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (lower(a) = 'test'::text)
+(4 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+select *, ctid from test_warm where a = 'test';
+ a | b | ctid 
+---+---+------
+(0 rows)
+
+select *, ctid from test_warm where a = 'TEST';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+                                   QUERY PLAN                                    
+---------------------------------------------------------------------------------
+ Index Scan using test_warmindx on test_warm  (cost=0.15..20.22 rows=4 width=64)
+   Index Cond: (lower(a) = 'test'::text)
+(2 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+DROP TABLE test_warm;
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index 8641769..a610039 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -42,6 +42,8 @@ test: create_type
 test: create_table
 test: create_function_2
 
+test: warm
+
 # ----------
 # Load huge amounts of data
 # We should split the data files into single files and then
diff --git a/src/test/regress/sql/warm.sql b/src/test/regress/sql/warm.sql
new file mode 100644
index 0000000..166ea37
--- /dev/null
+++ b/src/test/regress/sql/warm.sql
@@ -0,0 +1,15 @@
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where a = 'test';
+select *, ctid from test_warm where a = 'TEST';
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+DROP TABLE test_warm;
+
+
#19Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Pavan Deolasee (#18)
Re: Patch: Write Amplification Reduction Method (WARM)

I noticed that this patch changes HeapSatisfiesHOTAndKeyUpdate() by
adding one more set of attributes to check, and one more output boolean
flag. My patch to add indirect indexes also modifies that routine to
add the same set of things. I think after committing both these
patches, the API is going to be fairly ridiculous. I propose to use a
different approach.

With your WARM and my indirect indexes, plus the additions for for-key
locks, plus identity columns, there is no longer a real expectation that
we can exit early from the function. In your patch, as well as mine,
there is a semblance of optimization that tries to avoid computing the
updated_attrs output bitmapset if the pointer is not passed in, but it's
effectively pointless because the only interesting use case is from
ExecUpdate() which always activates the feature. Can we just agree to
drop that?

If we do drop that, then the function can become much simpler: compare
all columns in new vs. old, return output bitmapset of changed columns.
Then "satisfies_hot" and all the other boolean output flags can be
computed simply in the caller by doing bms_overlap().

Thoughts?

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#20Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Alvaro Herrera (#19)
Re: Patch: Write Amplification Reduction Method (WARM)

Alvaro Herrera wrote:

With your WARM and my indirect indexes, plus the additions for for-key
locks, plus identity columns, there is no longer a real expectation that
we can exit early from the function. In your patch, as well as mine,
there is a semblance of optimization that tries to avoid computing the
updated_attrs output bitmapset if the pointer is not passed in, but it's
effectively pointless because the only interesting use case is from
ExecUpdate() which always activates the feature. Can we just agree to
drop that?

I think the only case that gets worse is the path that does
simple_heap_update, which is used for DDL. I would be very surprised if
a change there is noticeable, when compared to the rest of the stuff
that goes on for DDL commands.

Now, after saying that, I think that a table with a very large number of
columns is going to be affected by this. But we don't really need to
compute the output bits for every single column -- we only care about
those that are covered by some index. So we should pass an input
bitmapset comprising all such columns, and the output bitmapset only
considers those columns, and ignores columns not indexed. My patch for
indirect indexes already does something similar (though it passes a
bitmapset of columns indexed by indirect indexes only, so it needs a
tweak there.)

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#21Jaime Casanova
jaime.casanova@2ndquadrant.com
In reply to: Pavan Deolasee (#18)
6 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On 2 December 2016 at 07:36, Pavan Deolasee <pavan.deolasee@gmail.com> wrote:

I've updated the patches after fixing the issue. Multiple rounds of
regression passes for me without any issue. Please let me know if it works
for you.

Hi Pavan,

Today I was playing with your patch, running some tests, and found
some problems I wanted to report before I forget them ;)

* You need to add a prototype in src/backend/utils/adt/pgstatfuncs.c:
extern Datum pg_stat_get_tuples_warm_updated(PG_FUNCTION_ARGS);

* The isolation test for partial_index fails (attached the regression.diffs)

* running a home-made test i have at hand i got this assertion:
"""
TRAP: FailedAssertion("!(buf_state & (1U << 24))", File: "bufmgr.c", Line: 837)
LOG: server process (PID 18986) was terminated by signal 6: Aborted
"""
To reproduce:
1) run prepare_test.sql
2) then run the following pgbench command (sql scripts attached):
pgbench -c 24 -j 24 -T 600 -n -f inserts.sql@15 -f updates_1.sql@20 -f
updates_2.sql@20 -f deletes.sql@45 db_test

* sometimes when i have made the server crash the attempt to recovery
fails with this assertion:
"""
LOG: database system was not properly shut down; automatic recovery in progress
LOG: redo starts at 0/157F970
TRAP: FailedAssertion("!(!warm_update)", File: "heapam.c", Line: 8924)
LOG: startup process (PID 14031) was terminated by signal 6: Aborted
LOG: aborting startup due to startup process failure
"""
still cannot reproduce this one consistently but happens often enough

will continue playing with it...

--
Jaime Casanova www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachments:

regression.diffs (application/octet-stream)
*** /home/jcasanov/Documentos/postgres/postgresql/src/test/isolation/expected/partial-index.out	2016-11-19 11:25:53.839629689 -0500
--- /home/jcasanov/Documentos/postgres/postgresql/src/test/isolation/results/partial-index.out	2016-12-26 01:05:09.594369943 -0500
***************
*** 30,35 ****
--- 30,37 ----
  6              a              1              
  7              a              1              
  8              a              1              
+ 9              a              2              
+ 10             a              2              
  step c2: COMMIT;
  
  starting permutation: rxy1 wx1 wy2 c1 rxy2 c2
***************
*** 83,88 ****
--- 85,91 ----
  6              a              1              
  7              a              1              
  8              a              1              
+ 9              a              2              
  10             a              1              
  step c1: COMMIT;
  step c2: COMMIT;
***************
*** 117,122 ****
--- 120,126 ----
  6              a              1              
  7              a              1              
  8              a              1              
+ 9              a              2              
  10             a              1              
  step c2: COMMIT;
  step c1: COMMIT;
***************
*** 173,178 ****
--- 177,183 ----
  6              a              1              
  7              a              1              
  8              a              1              
+ 9              a              2              
  10             a              1              
  step c1: COMMIT;
  step c2: COMMIT;
***************
*** 207,212 ****
--- 212,218 ----
  6              a              1              
  7              a              1              
  8              a              1              
+ 9              a              2              
  10             a              1              
  step c2: COMMIT;
  step c1: COMMIT;
***************
*** 240,245 ****
--- 246,252 ----
  6              a              1              
  7              a              1              
  8              a              1              
+ 9              a              2              
  10             a              1              
  step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
  step c1: COMMIT;
***************
*** 274,279 ****
--- 281,287 ----
  6              a              1              
  7              a              1              
  8              a              1              
+ 9              a              2              
  10             a              1              
  step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
  step c2: COMMIT;
***************
*** 308,313 ****
--- 316,322 ----
  6              a              1              
  7              a              1              
  8              a              1              
+ 9              a              2              
  10             a              1              
  step c2: COMMIT;
  step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
***************
*** 365,370 ****
--- 374,380 ----
  6              a              1              
  7              a              1              
  8              a              1              
+ 9              a              2              
  10             a              1              
  step c1: COMMIT;
  step c2: COMMIT;
***************
*** 399,404 ****
--- 409,415 ----
  6              a              1              
  7              a              1              
  8              a              1              
+ 9              a              2              
  10             a              1              
  step c2: COMMIT;
  step c1: COMMIT;
***************
*** 432,437 ****
--- 443,449 ----
  6              a              1              
  7              a              1              
  8              a              1              
+ 9              a              2              
  10             a              1              
  step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
  step c1: COMMIT;
***************
*** 466,471 ****
--- 478,484 ----
  6              a              1              
  7              a              1              
  8              a              1              
+ 9              a              2              
  10             a              1              
  step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
  step c2: COMMIT;
***************
*** 500,505 ****
--- 513,519 ----
  6              a              1              
  7              a              1              
  8              a              1              
+ 9              a              2              
  10             a              1              
  step c2: COMMIT;
  step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
***************
*** 520,525 ****
--- 534,540 ----
  6              a              1              
  7              a              1              
  8              a              1              
+ 9              a              2              
  10             a              1              
  step rxy1: select * from test_t where val2 = 1;
  id             val1           val2           
***************
*** 554,559 ****
--- 569,575 ----
  6              a              1              
  7              a              1              
  8              a              1              
+ 9              a              2              
  10             a              1              
  step rxy1: select * from test_t where val2 = 1;
  id             val1           val2           
***************
*** 588,593 ****
--- 604,610 ----
  6              a              1              
  7              a              1              
  8              a              1              
+ 9              a              2              
  10             a              1              
  step rxy1: select * from test_t where val2 = 1;
  id             val1           val2           
***************
*** 622,627 ****
--- 639,645 ----
  6              a              1              
  7              a              1              
  8              a              1              
+ 9              a              2              
  10             a              1              
  step c2: COMMIT;
  step rxy1: select * from test_t where val2 = 1;
***************
*** 636,641 ****
--- 654,660 ----
  6              a              1              
  7              a              1              
  8              a              1              
+ 9              a              2              
  10             a              1              
  step wx1: update test_t set val2 = 2 where val2 = 1 and id = 10;
  step c1: COMMIT;

======================================================================

deletes.sql (application/sql)
inserts.sql (application/sql)
prepare_test.sql (application/sql)
updates_1.sql (application/sql)
updates_2.sql (application/sql)
#22Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Jaime Casanova (#21)
Re: Patch: Write Amplification Reduction Method (WARM)

Jaime Casanova wrote:

* The isolation test for partial_index fails (attached the regression.diffs)

Hmm, I had a very similar (if not identical) failure with indirect
indexes; in my case it was a bug in RelationGetIndexAttrBitmap() -- I
was missing to have HOT considerate the columns in index predicate, that
is, the second pull_varattnos() call.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#23Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Alvaro Herrera (#22)
Re: Patch: Write Amplification Reduction Method (WARM)

Alvaro Herrera wrote:

Jaime Casanova wrote:

* The isolation test for partial_index fails (attached the regression.diffs)

Hmm, I had a very similar (if not identical) failure with indirect
indexes; in my case it was a bug in RelationGetIndexAttrBitmap() -- I
was missing to have HOT considerate the columns in index predicate, that
is, the second pull_varattnos() call.

Sorry, I meant:

Hmm, I had a very similar (if not identical) failure with indirect
indexes; in my case it was a bug in RelationGetIndexAttrBitmap() -- I
was missing to have HOT [take into account] the columns in index predicate, that
is, the second pull_varattnos() call.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#24Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Alvaro Herrera (#20)
Re: Patch: Write Amplification Reduction Method (WARM)

On Sat, Dec 24, 2016 at 1:18 AM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

Alvaro Herrera wrote:

With your WARM and my indirect indexes, plus the additions for for-key
locks, plus identity columns, there is no longer a real expectation that
we can exit early from the function. In your patch, as well as mine,
there is a semblance of optimization that tries to avoid computing the
updated_attrs output bitmapset if the pointer is not passed in, but it's
effectively pointless because the only interesting use case is from
ExecUpdate() which always activates the feature. Can we just agree to
drop that?

Yes, I agree. As you noted below, the only case that may be affected is
simple_heap_update(), which does a lot more work anyway, so this function
will be the least of the worries.

I think the only case that gets worse is the path that does
simple_heap_update, which is used for DDL. I would be very surprised if
a change there is noticeable, when compared to the rest of the stuff
that goes on for DDL commands.

Now, after saying that, I think that a table with a very large number of
columns is going to be affected by this. But we don't really need to
compute the output bits for every single column -- we only care about
those that are covered by some index. So we should pass an input
bitmapset comprising all such columns, and the output bitmapset only
considers those columns, and ignores columns not indexed. My patch for
indirect indexes already does something similar (though it passes a
bitmapset of columns indexed by indirect indexes only, so it needs a
tweak there.)

Yes, that looks like a good compromise. This would require us to compare
only those columns that any caller of the function might be interested in.
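The compromise described above, comparing only columns that some index actually covers and returning the set that changed, can be sketched with plain bitmasks (a toy model with hypothetical names; the real code uses PostgreSQL's Bitmapset API):

```c
#include <assert.h>
#include <stdint.h>

/* Toy model: a uint64 mask stands in for PostgreSQL's Bitmapset, and
 * integer arrays stand in for tuple column values.  Only columns present
 * in "interesting" (those covered by some index) are compared at all,
 * so tables with many non-indexed columns pay nothing for them. */
typedef uint64_t colset;

static colset
determine_modified_columns(colset interesting,
                           const int *oldvals, const int *newvals)
{
    colset      modified = 0;
    int         col;

    for (col = 0; col < 64; col++)
    {
        if (!(interesting & ((colset) 1 << col)))
            continue;           /* no caller cares about this column */
        if (oldvals[col] != newvals[col])
            modified |= (colset) 1 << col;
    }
    return modified;
}
```

A HOT update then remains possible when the result does not overlap the indexed columns, e.g. `(modified & hot_attrs) == 0`, mirroring the `bms_overlap()` tests in the refactoring patch.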

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#25Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Jaime Casanova (#21)
1 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Mon, Dec 26, 2016 at 11:49 AM, Jaime Casanova <
jaime.casanova@2ndquadrant.com> wrote:

On 2 December 2016 at 07:36, Pavan Deolasee <pavan.deolasee@gmail.com>
wrote:

I've updated the patches after fixing the issue. Multiple rounds of
regression passes for me without any issue. Please let me know if it works
for you.

Hi Pavan,

Today i was playing with your patch and running some tests and found
some problems i wanted to report before i forget them ;)

Thanks Jaime for the tests and bug reports. I'm attaching an add-on patch
which fixes these issues for me. I'm deliberately not sending a fresh
revision because the changes are still minor.

* You need to add a prototype in src/backend/utils/adt/pgstatfuncs.c:
extern Datum pg_stat_get_tuples_warm_updated(PG_FUNCTION_ARGS);

Added.

* The isolation test for partial_index fails (attached the
regression.diffs)

Fixed. Looks like I forgot to include attributes from predicates and
expressions in the list of index attributes (as pointed out by Alvaro).

* running a home-made test i have at hand i got this assertion:
"""
TRAP: FailedAssertion("!(buf_state & (1U << 24))", File: "bufmgr.c", Line:
837)
LOG: server process (PID 18986) was terminated by signal 6: Aborted
"""
To reproduce:
1) run prepare_test.sql
2) then run the following pgbench command (sql scripts attached):
pgbench -c 24 -j 24 -T 600 -n -f inserts.sql@15 -f updates_1.sql@20 -f
updates_2.sql@20 -f deletes.sql@45 db_test

Looks like the patch was failing to set the block number correctly in the
t_ctid field, leading to these strange failures. There were also a couple of
instances where the t_ctid field was being accessed directly instead of
through the newly added macro. I think we need some better mechanism to
ensure that we don't miss out on such things, but I don't have a very good
idea about how to do that right now.
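One possible mechanism, purely a hypothetical sketch and not something the patch does, is to rename the raw field so that any leftover direct access fails to compile, leaving accessor macros as the only sanctioned spelling:

```c
#include <assert.h>

/* Toy model only: names here are illustrative, not the real PostgreSQL
 * definitions.  After the rename, a stray "tup->t_ctid" no longer
 * compiles, so missed call sites surface at build time instead of as
 * runtime corruption. */
typedef struct ItemPointerData
{
    unsigned    ip_blkid;
    unsigned    ip_posid;
} ItemPointerData;

typedef struct HeapTupleHeaderData
{
    ItemPointerData t_ctid_use_accessors;   /* was: t_ctid */
} HeapTupleHeaderData;

#define HeapTupleHeaderGetRawCtid(tup, out) \
    (*(out) = (tup)->t_ctid_use_accessors)
#define HeapTupleHeaderSetRawCtid(tup, val) \
    ((tup)->t_ctid_use_accessors = (val))
```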

* sometimes when i have made the server crash the attempt to recovery
fails with this assertion:
"""
LOG: database system was not properly shut down; automatic recovery in
progress
LOG: redo starts at 0/157F970
TRAP: FailedAssertion("!(!warm_update)", File: "heapam.c", Line: 8924)
LOG: startup process (PID 14031) was terminated by signal 6: Aborted
LOG: aborting startup due to startup process failure
"""
still cannot reproduce this one consistently but happens often enough

This could be a case of an uninitialised variable in log_heap_update(). What
surprises me, though, is that none of the compilers I tried so far could
catch it. In the following code snippet, if the condition evaluates to false
then "warm_update" may remain uninitialised, leading to a wrong xlog entry,
which may later result in an assertion failure during redo recovery.

    if (HeapTupleIsHeapWarmTuple(newtup))
        warm_update = true;
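The hazard is easy to reproduce in isolation. A minimal sketch (hypothetical function name) of the pattern and the one-line fix:

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the bug pattern described above.  Without the "= false"
 * initializer, the flag is indeterminate whenever the condition is
 * false, and whatever garbage it holds would end up in the WAL record.
 * Initializing at declaration makes both paths well-defined. */
static bool
classify_warm_update(bool tuple_is_warm)
{
    bool        warm_update = false;    /* the fix: explicit initializer */

    if (tuple_is_warm)
        warm_update = true;

    return warm_update;
}
```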

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0003_warm_fixes_v6.patch (application/octet-stream)
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index b3de79c..9353175 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -7831,7 +7831,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	bool		need_tuple_data = RelationIsLogicallyLogged(reln);
 	bool		init;
 	int			bufflags;
-	bool		warm_update;
+	bool		warm_update = false;
 
 	/* Caller should not call me on a non-WAL-logged relation */
 	Assert(RelationNeedsWAL(reln));
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index e32deb1..39ee6ac 100644
--- a/src/backend/access/heap/hio.c
+++ b/src/backend/access/heap/hio.c
@@ -75,6 +75,9 @@ RelationPutHeapTuple(Relation relation,
 		ItemId		itemId = PageGetItemId(pageHeader, offnum);
 		Item		item = PageGetItem(pageHeader, itemId);
 
+		/* Copy t_ctid to set the correct block number */
+		((HeapTupleHeader) item)->t_ctid = tuple->t_self;
+
 		HeapTupleHeaderSetHeapLatest((HeapTupleHeader) item);
 		if (OffsetNumberIsValid(root_offnum))
 			HeapTupleHeaderSetRootOffset((HeapTupleHeader) item,
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 03c6b62..c24e486 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -801,7 +801,8 @@ retry:
 			  DirtySnapshot.speculativeToken &&
 			  TransactionIdPrecedes(GetCurrentTransactionId(), xwait))))
 		{
-			ctid_wait = tup->t_data->t_ctid;
+			HeapTupleHeaderGetNextCtid(tup->t_data, &ctid_wait,
+					ItemPointerGetOffsetNumber(&tup->t_self));
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 079a77f..466609c 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -2451,7 +2451,8 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		}
 
 		/* updated, so look at the updated row */
-		tuple.t_self = tuple.t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(tuple.t_data, &tuple.t_self,
+				ItemPointerGetOffsetNumber(&tuple.t_self));
 		/* updated row should have xmin matching this xmax */
 		priorXmax = HeapTupleHeaderGetUpdateXid(tuple.t_data);
 		ReleaseBuffer(buffer);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 25752b0..ef4f5b4 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -37,6 +37,7 @@ extern Datum pg_stat_get_tuples_inserted(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_tuples_updated(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_tuples_deleted(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_tuples_hot_updated(PG_FUNCTION_ARGS);
+extern Datum pg_stat_get_tuples_warm_updated(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_live_tuples(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_dead_tuples(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_mod_since_analyze(PG_FUNCTION_ARGS);
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 37874ca..c6ef4e2 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -4487,6 +4487,12 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &exprindexattrs);
 
 		/*
+		 * indexattrs should include attributes referenced in index expressions
+		 * and predicates too
+		 */
+		indexattrs = bms_add_members(indexattrs, exprindexattrs);
+
+		/*
 		 * Check if the index has amrecheck method defined. If the method is
 		 * not defined, the index does not support WARM update. Completely
 		 * disable WARM updates on such tables
#26Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Pavan Deolasee (#25)
3 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Dec 27, 2016 at 6:51 PM, Pavan Deolasee <pavan.deolasee@gmail.com>
wrote:

Thanks Jaime for the tests and bug reports. I'm attaching an add-on patch
which fixes these issues for me. I'm deliberately not sending a fresh
revision because the changes are still minor.

Per Alvaro's request in another thread, I've rebased these patches on his
patch to refactor HeapSatisfiesHOTandKeyUpdate(). I've also attached that
patch here for easy reference.

The fixes based on bug reports by Jaime are also included in this patch
set. Other than that there are not any significant changes. The patch still
disables WARM on system tables, something I would like to fix. But I've
been delaying that because it will require changes at several places since
indexes on system tables are managed separately. In addition to that, the
patch only works with btree and hash indexes. We must implement the recheck
method for other index types so as to support them.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

interesting-attrs-2.patch (application/octet-stream)
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index ea579a0..19edbdf 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -95,11 +95,8 @@ static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
 				HeapTuple newtup, HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
-static void HeapSatisfiesHOTandKeyUpdate(Relation relation,
-							 Bitmapset *hot_attrs,
-							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
-							 bool *satisfies_id,
+static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
+							 Bitmapset *interesting_cols,
 							 HeapTuple oldtup, HeapTuple newtup);
 static bool heap_acquire_tuplock(Relation relation, ItemPointer tid,
 					 LockTupleMode mode, LockWaitPolicy wait_policy,
@@ -3443,6 +3440,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *interesting_attrs;
+	Bitmapset  *modified_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3460,9 +3459,6 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				pagefree;
 	bool		have_tuple_lock = false;
 	bool		iscombo;
-	bool		satisfies_hot;
-	bool		satisfies_key;
-	bool		satisfies_id;
 	bool		use_hot_update = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
@@ -3489,21 +3485,30 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				 errmsg("cannot update tuples during a parallel operation")));
 
 	/*
-	 * Fetch the list of attributes to be checked for HOT update.  This is
-	 * wasted effort if we fail to update or have to put the new tuple on a
-	 * different page.  But we must compute the list before obtaining buffer
-	 * lock --- in the worst case, if we are doing an update on one of the
-	 * relevant system catalogs, we could deadlock if we try to fetch the list
-	 * later.  In any case, the relcache caches the data so this is usually
-	 * pretty cheap.
+	 * Fetch the list of attributes to be checked for various operations.
 	 *
-	 * Note that we get a copy here, so we need not worry about relcache flush
-	 * happening midway through.
+	 * For HOT considerations, this is wasted effort if we fail to update or
+	 * have to put the new tuple on a different page.  But we must compute the
+	 * list before obtaining buffer lock --- in the worst case, if we are doing
+	 * an update on one of the relevant system catalogs, we could deadlock if
+	 * we try to fetch the list later.  In any case, the relcache caches the
+	 * data so this is usually pretty cheap.
+	 *
+	 * We also need columns used by the replica identity, the columns that
+	 * are considered the "key" of rows in the table, and columns that are
+	 * part of indirect indexes.
+	 *
+	 * Note that we get copies of each bitmap, so we need not worry about
+	 * relcache flush happening midway through.
 	 */
 	hot_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_ALL);
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	interesting_attrs = bms_add_members(NULL, hot_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
+
 
 	block = ItemPointerGetBlockNumber(otid);
 	buffer = ReadBuffer(relation, block);
@@ -3524,7 +3529,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Assert(ItemIdIsNormal(lp));
 
 	/*
-	 * Fill in enough data in oldtup for HeapSatisfiesHOTandKeyUpdate to work
+	 * Fill in enough data in oldtup for HeapDetermineModifiedColumns to work
 	 * properly.
 	 */
 	oldtup.t_tableOid = RelationGetRelid(relation);
@@ -3550,6 +3555,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 		Assert(!(newtup->t_data->t_infomask & HEAP_HASOID));
 	}
 
+	/* Determine columns modified by the update. */
+	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
+												  &oldtup, newtup);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3561,10 +3570,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	 * is updates that don't manipulate key columns, not those that
 	 * serendipitiously arrive at the same key values.
 	 */
-	HeapSatisfiesHOTandKeyUpdate(relation, hot_attrs, key_attrs, id_attrs,
-								 &satisfies_hot, &satisfies_key,
-								 &satisfies_id, &oldtup, newtup);
-	if (satisfies_key)
+	if (!bms_overlap(modified_attrs, key_attrs))
 	{
 		*lockmode = LockTupleNoKeyExclusive;
 		mxact_status = MultiXactStatusNoKeyUpdate;
@@ -3803,6 +3809,8 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(modified_attrs);
+		bms_free(interesting_attrs);
 		return result;
 	}
 
@@ -4107,7 +4115,7 @@ l2:
 		 * to do a HOT update.  Check if any of the index columns have been
 		 * changed.  If not, then HOT update is possible.
 		 */
-		if (satisfies_hot)
+		if (!bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
 	}
 	else
@@ -4122,7 +4130,9 @@ l2:
 	 * ExtractReplicaIdentity() will return NULL if nothing needs to be
 	 * logged.
 	 */
-	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup, !satisfies_id, &old_key_copied);
+	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup,
+										   bms_overlap(modified_attrs, id_attrs),
+										   &old_key_copied);
 
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
@@ -4270,13 +4280,15 @@ l2:
 	bms_free(hot_attrs);
 	bms_free(key_attrs);
 	bms_free(id_attrs);
+	bms_free(modified_attrs);
+	bms_free(interesting_attrs);
 
 	return HeapTupleMayBeUpdated;
 }
 
 /*
  * Check if the specified attribute's value is same in both given tuples.
- * Subroutine for HeapSatisfiesHOTandKeyUpdate.
+ * Subroutine for HeapDetermineModifiedColumns.
  */
 static bool
 heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
@@ -4310,7 +4322,7 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 
 	/*
 	 * Extract the corresponding values.  XXX this is pretty inefficient if
-	 * there are many indexed columns.  Should HeapSatisfiesHOTandKeyUpdate do
+	 * there are many indexed columns.  Should HeapDetermineModifiedColumns do
 	 * a single heap_deform_tuple call on each tuple, instead?	But that
 	 * doesn't work for system columns ...
 	 */
@@ -4355,114 +4367,30 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 /*
  * Check which columns are being updated.
  *
- * This simultaneously checks conditions for HOT updates, for FOR KEY
- * SHARE updates, and REPLICA IDENTITY concerns.  Since much of the time they
- * will be checking very similar sets of columns, and doing the same tests on
- * them, it makes sense to optimize and do them together.
+ * Given an updated tuple, determine (and return into the output bitmapset),
+ * from those listed as interesting, the set of columns that changed.
  *
- * We receive three bitmapsets comprising the three sets of columns we're
- * interested in.  Note these are destructively modified; that is OK since
- * this is invoked at most once in heap_update.
- *
- * hot_result is set to TRUE if it's okay to do a HOT update (i.e. it does not
- * modified indexed columns); key_result is set to TRUE if the update does not
- * modify columns used in the key; id_result is set to TRUE if the update does
- * not modify columns in any index marked as the REPLICA IDENTITY.
+ * The input bitmapset is destructively modified; that is OK since this is
+ * invoked at most once in heap_update.
  */
-static void
-HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
-							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
-							 bool *satisfies_id,
+static Bitmapset *
+HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
 							 HeapTuple oldtup, HeapTuple newtup)
 {
-	int			next_hot_attnum;
-	int			next_key_attnum;
-	int			next_id_attnum;
-	bool		hot_result = true;
-	bool		key_result = true;
-	bool		id_result = true;
+	int		attnum;
+	Bitmapset *modified = NULL;
 
-	/* If REPLICA IDENTITY is set to FULL, id_attrs will be empty. */
-	Assert(bms_is_subset(id_attrs, key_attrs));
-	Assert(bms_is_subset(key_attrs, hot_attrs));
-
-	/*
-	 * If one of these sets contains no remaining bits, bms_first_member will
-	 * return -1, and after adding FirstLowInvalidHeapAttributeNumber (which
-	 * is negative!)  we'll get an attribute number that can't possibly be
-	 * real, and thus won't match any actual attribute number.
-	 */
-	next_hot_attnum = bms_first_member(hot_attrs);
-	next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
-	next_key_attnum = bms_first_member(key_attrs);
-	next_key_attnum += FirstLowInvalidHeapAttributeNumber;
-	next_id_attnum = bms_first_member(id_attrs);
-	next_id_attnum += FirstLowInvalidHeapAttributeNumber;
-
-	for (;;)
+	while ((attnum = bms_first_member(interesting_cols)) >= 0)
 	{
-		bool		changed;
-		int			check_now;
+		attnum += FirstLowInvalidHeapAttributeNumber;
 
-		/*
-		 * Since the HOT attributes are a superset of the key attributes and
-		 * the key attributes are a superset of the id attributes, this logic
-		 * is guaranteed to identify the next column that needs to be checked.
-		 */
-		if (hot_result && next_hot_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_hot_attnum;
-		else if (key_result && next_key_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_key_attnum;
-		else if (id_result && next_id_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_id_attnum;
-		else
-			break;
-
-		/* See whether it changed. */
-		changed = !heap_tuple_attr_equals(RelationGetDescr(relation),
-										  check_now, oldtup, newtup);
-		if (changed)
-		{
-			if (check_now == next_hot_attnum)
-				hot_result = false;
-			if (check_now == next_key_attnum)
-				key_result = false;
-			if (check_now == next_id_attnum)
-				id_result = false;
-
-			/* if all are false now, we can stop checking */
-			if (!hot_result && !key_result && !id_result)
-				break;
-		}
-
-		/*
-		 * Advance the next attribute numbers for the sets that contain the
-		 * attribute we just checked.  As we work our way through the columns,
-		 * the next_attnum values will rise; but when each set becomes empty,
-		 * bms_first_member() will return -1 and the attribute number will end
-		 * up with a value less than FirstLowInvalidHeapAttributeNumber.
-		 */
-		if (hot_result && check_now == next_hot_attnum)
-		{
-			next_hot_attnum = bms_first_member(hot_attrs);
-			next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
-		if (key_result && check_now == next_key_attnum)
-		{
-			next_key_attnum = bms_first_member(key_attrs);
-			next_key_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
-		if (id_result && check_now == next_id_attnum)
-		{
-			next_id_attnum = bms_first_member(id_attrs);
-			next_id_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
+		if (!heap_tuple_attr_equals(RelationGetDescr(relation),
+								   attnum, oldtup, newtup))
+			modified = bms_add_member(modified,
+									  attnum - FirstLowInvalidHeapAttributeNumber);
 	}
 
-	*satisfies_hot = hot_result;
-	*satisfies_key = key_result;
-	*satisfies_id = id_result;
+	return modified;
 }
 
 /*
0001_track_root_lp_v7.patch (application/octet-stream)
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index f1b4602..a22aae7 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -93,7 +93,8 @@ static HeapTuple heap_prepare_insert(Relation relation, HeapTuple tup,
 					TransactionId xid, CommandId cid, int options);
 static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
-				HeapTuple newtup, HeapTuple old_key_tup,
+				HeapTuple newtup, OffsetNumber root_offnum,
+				HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
 static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
 							 Bitmapset *interesting_cols,
@@ -2247,13 +2248,13 @@ heap_get_latest_tid(Relation relation,
 		 */
 		if ((tp.t_data->t_infomask & HEAP_XMAX_INVALID) ||
 			HeapTupleHeaderIsOnlyLocked(tp.t_data) ||
-			ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid))
+			HeapTupleHeaderIsHeapLatest(tp.t_data, ctid))
 		{
 			UnlockReleaseBuffer(buffer);
 			break;
 		}
 
-		ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(tp.t_data, &ctid, offnum);
 		priorXmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		UnlockReleaseBuffer(buffer);
 	}							/* end of loop */
@@ -2412,7 +2413,8 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	START_CRIT_SECTION();
 
 	RelationPutHeapTuple(relation, buffer, heaptup,
-						 (options & HEAP_INSERT_SPECULATIVE) != 0);
+						 (options & HEAP_INSERT_SPECULATIVE) != 0,
+						 InvalidOffsetNumber);
 
 	if (PageIsAllVisible(BufferGetPage(buffer)))
 	{
@@ -2710,7 +2712,8 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		 * RelationGetBufferForTuple has ensured that the first tuple fits.
 		 * Put that on the page, and then as many other tuples as fit.
 		 */
-		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false);
+		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false,
+				InvalidOffsetNumber);
 		for (nthispage = 1; ndone + nthispage < ntuples; nthispage++)
 		{
 			HeapTuple	heaptup = heaptuples[ndone + nthispage];
@@ -2718,7 +2721,8 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			if (PageGetHeapFreeSpace(page) < MAXALIGN(heaptup->t_len) + saveFreeSpace)
 				break;
 
-			RelationPutHeapTuple(relation, buffer, heaptup, false);
+			RelationPutHeapTuple(relation, buffer, heaptup, false,
+					InvalidOffsetNumber);
 
 			/*
 			 * We don't use heap_multi_insert for catalog tuples yet, but
@@ -2990,6 +2994,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	HeapTupleData tp;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	TransactionId new_xmax;
@@ -3041,7 +3046,8 @@ heap_delete(Relation relation, ItemPointer tid,
 		LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 	}
 
-	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid));
+	offnum = ItemPointerGetOffsetNumber(tid);
+	lp = PageGetItemId(page, offnum);
 	Assert(ItemIdIsNormal(lp));
 
 	tp.t_tableOid = RelationGetRelid(relation);
@@ -3171,7 +3177,7 @@ l1:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(tp.t_data, &hufd->ctid, offnum);
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tp.t_data);
@@ -3247,8 +3253,8 @@ l1:
 	HeapTupleHeaderClearHotUpdated(tp.t_data);
 	HeapTupleHeaderSetXmax(tp.t_data, new_xmax);
 	HeapTupleHeaderSetCmax(tp.t_data, cid, iscombo);
-	/* Make sure there is no forward chain link in t_ctid */
-	tp.t_data->t_ctid = tp.t_self;
+	/* Mark this tuple as the latest tuple in the update chain */
+	HeapTupleHeaderSetHeapLatest(tp.t_data);
 
 	MarkBufferDirty(buffer);
 
@@ -3449,6 +3455,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		old_key_copied = false;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
+	OffsetNumber	root_offnum;
 	MultiXactStatus mxact_status;
 	Buffer		buffer,
 				newbuf,
@@ -3511,6 +3519,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 
 
 	block = ItemPointerGetBlockNumber(otid);
+	offnum = ItemPointerGetOffsetNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
@@ -3795,7 +3804,7 @@ l2:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(oldtup.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = oldtup.t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(oldtup.t_data, &hufd->ctid, offnum);
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(oldtup.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(oldtup.t_data);
@@ -3976,7 +3985,7 @@ l2:
 		HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 		/* temporarily make it look not-updated, but locked */
-		oldtup.t_data->t_ctid = oldtup.t_self;
+		HeapTupleHeaderSetHeapLatest(oldtup.t_data);
 
 		/*
 		 * Clear all-frozen bit on visibility map if needed. We could
@@ -4159,6 +4168,20 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+		/*
+		 * For HOT (or WARM) updated tuples, we store the offset of the root
+		 * line pointer of this chain in the ip_posid field of the new tuple.
+		 * Usually this information will be available in the corresponding
+		 * field of the old tuple. But for aborted updates or pg_upgraded
+		 * databases, we might be seeing old-style CTID chains, in which case
+		 * the information must be obtained the hard way.
+		 */
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			heap_get_root_tuple_one(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)),
+					&root_offnum);
 	}
 	else
 	{
@@ -4166,10 +4189,29 @@ l2:
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		root_offnum = InvalidOffsetNumber;
 	}
 
-	RelationPutHeapTuple(relation, newbuf, heaptup, false);		/* insert new tuple */
+	/* insert new tuple */
+	RelationPutHeapTuple(relation, newbuf, heaptup, false, root_offnum);
+	HeapTupleHeaderSetHeapLatest(heaptup->t_data);
+	HeapTupleHeaderSetHeapLatest(newtup->t_data);
 
+	/*
+	 * Also update the in-memory copy with the root line pointer information
+	 */
+	if (OffsetNumberIsValid(root_offnum))
+	{
+		HeapTupleHeaderSetRootOffset(heaptup->t_data, root_offnum);
+		HeapTupleHeaderSetRootOffset(newtup->t_data, root_offnum);
+	}
+	else
+	{
+		HeapTupleHeaderSetRootOffset(heaptup->t_data,
+				ItemPointerGetOffsetNumber(&heaptup->t_self));
+		HeapTupleHeaderSetRootOffset(newtup->t_data,
+				ItemPointerGetOffsetNumber(&heaptup->t_self));
+	}
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
 	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
@@ -4182,7 +4224,9 @@ l2:
 	HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 	/* record address of new tuple in t_ctid of old one */
-	oldtup.t_data->t_ctid = heaptup->t_self;
+	HeapTupleHeaderSetNextCtid(oldtup.t_data,
+			ItemPointerGetBlockNumber(&(heaptup->t_self)),
+			ItemPointerGetOffsetNumber(&(heaptup->t_self)));
 
 	/* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */
 	if (PageIsAllVisible(BufferGetPage(buffer)))
@@ -4221,6 +4265,7 @@ l2:
 
 		recptr = log_heap_update(relation, buffer,
 								 newbuf, &oldtup, heaptup,
+								 root_offnum,
 								 old_key_tuple,
 								 all_visible_cleared,
 								 all_visible_cleared_new);
@@ -4501,7 +4546,8 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	ItemId		lp;
 	Page		page;
 	Buffer		vmbuffer = InvalidBuffer;
-	BlockNumber block;
+	BlockNumber	block;
+	OffsetNumber	offnum;
 	TransactionId xid,
 				xmax;
 	uint16		old_infomask,
@@ -4513,6 +4559,7 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 
 	*buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	block = ItemPointerGetBlockNumber(tid);
+	offnum = ItemPointerGetOffsetNumber(tid);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -4559,7 +4606,7 @@ l3:
 		xwait = HeapTupleHeaderGetRawXmax(tuple->t_data);
 		infomask = tuple->t_data->t_infomask;
 		infomask2 = tuple->t_data->t_infomask2;
-		ItemPointerCopy(&tuple->t_data->t_ctid, &t_ctid);
+		HeapTupleHeaderGetNextCtid(tuple->t_data, &t_ctid, offnum);
 
 		LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
 
@@ -4997,7 +5044,7 @@ failed:
 		Assert(result == HeapTupleSelfUpdated || result == HeapTupleUpdated ||
 			   result == HeapTupleWouldBlock);
 		Assert(!(tuple->t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tuple->t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(tuple->t_data, &hufd->ctid, offnum);
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tuple->t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tuple->t_data);
@@ -5073,7 +5120,7 @@ failed:
 	 * the tuple as well.
 	 */
 	if (HEAP_XMAX_IS_LOCKED_ONLY(new_infomask))
-		tuple->t_data->t_ctid = *tid;
+		HeapTupleHeaderSetHeapLatest(tuple->t_data);
 
 	/* Clear only the all-frozen bit on visibility map if needed */
 	if (PageIsAllVisible(page) &&
@@ -5587,6 +5634,7 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 	bool		cleared_all_frozen = false;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber block;
+	OffsetNumber offnum;
 
 	ItemPointerCopy(tid, &tupid);
 
@@ -5595,6 +5643,8 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 		new_infomask = 0;
 		new_xmax = InvalidTransactionId;
 		block = ItemPointerGetBlockNumber(&tupid);
+		offnum = ItemPointerGetOffsetNumber(&tupid);
+
 		ItemPointerCopy(&tupid, &(mytup.t_self));
 
 		if (!heap_fetch(rel, SnapshotAny, &mytup, &buf, false, NULL))
@@ -5824,7 +5874,7 @@ l4:
 
 		/* if we find the end of update chain, we're done. */
 		if (mytup.t_data->t_infomask & HEAP_XMAX_INVALID ||
-			ItemPointerEquals(&mytup.t_self, &mytup.t_data->t_ctid) ||
+			HeapTupleHeaderIsHeapLatest(mytup.t_data, mytup.t_self) ||
 			HeapTupleHeaderIsOnlyLocked(mytup.t_data))
 		{
 			result = HeapTupleMayBeUpdated;
@@ -5833,7 +5883,7 @@ l4:
 
 		/* tail recursion */
 		priorXmax = HeapTupleHeaderGetUpdateXid(mytup.t_data);
-		ItemPointerCopy(&(mytup.t_data->t_ctid), &tupid);
+		HeapTupleHeaderGetNextCtid(mytup.t_data, &tupid, offnum);
 		UnlockReleaseBuffer(buf);
 		if (vmbuffer != InvalidBuffer)
 			ReleaseBuffer(vmbuffer);
@@ -5950,7 +6000,8 @@ heap_finish_speculative(Relation relation, HeapTuple tuple)
 	 * Replace the speculative insertion token with a real t_ctid, pointing to
 	 * itself like it does on regular tuples.
 	 */
-	htup->t_ctid = tuple->t_self;
+	HeapTupleHeaderSetHeapLatest(htup);
+	HeapTupleHeaderSetRootOffset(htup, offnum);
 
 	/* XLOG stuff */
 	if (RelationNeedsWAL(relation))
@@ -6076,7 +6127,9 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	HeapTupleHeaderSetXmin(tp.t_data, InvalidTransactionId);
 
 	/* Clear the speculative insertion token too */
-	tp.t_data->t_ctid = tp.t_self;
+	HeapTupleHeaderSetNextCtid(tp.t_data,
+			ItemPointerGetBlockNumber(&tp.t_self),
+			ItemPointerGetOffsetNumber(&tp.t_self));
 
 	MarkBufferDirty(buffer);
 
@@ -7425,6 +7478,7 @@ log_heap_visible(RelFileNode rnode, Buffer heap_buffer, Buffer vm_buffer,
 static XLogRecPtr
 log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup, HeapTuple newtup,
+				OffsetNumber root_offnum,
 				HeapTuple old_key_tuple,
 				bool all_visible_cleared, bool new_all_visible_cleared)
 {
@@ -7544,6 +7598,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	/* Prepare WAL data for the new page */
 	xlrec.new_offnum = ItemPointerGetOffsetNumber(&newtup->t_self);
 	xlrec.new_xmax = HeapTupleHeaderGetRawXmax(newtup->t_data);
+	xlrec.root_offnum = root_offnum;
 
 	bufflags = REGBUF_STANDARD;
 	if (init)
@@ -8199,7 +8254,7 @@ heap_xlog_delete(XLogReaderState *record)
 			PageClearAllVisible(page);
 
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = target_tid;
+		HeapTupleHeaderSetHeapLatest(htup);
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8289,7 +8344,9 @@ heap_xlog_insert(XLogReaderState *record)
 		htup->t_hoff = xlhdr.t_hoff;
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
-		htup->t_ctid = target_tid;
+
+		HeapTupleHeaderSetHeapLatest(htup);
+		HeapTupleHeaderSetRootOffset(htup, xlrec->offnum);
 
 		if (PageAddItem(page, (Item) htup, newlen, xlrec->offnum,
 						true, true) == InvalidOffsetNumber)
@@ -8424,8 +8481,9 @@ heap_xlog_multi_insert(XLogReaderState *record)
 			htup->t_hoff = xlhdr->t_hoff;
 			HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 			HeapTupleHeaderSetCmin(htup, FirstCommandId);
-			ItemPointerSetBlockNumber(&htup->t_ctid, blkno);
-			ItemPointerSetOffsetNumber(&htup->t_ctid, offnum);
+
+			HeapTupleHeaderSetHeapLatest(htup);
+			HeapTupleHeaderSetRootOffset(htup, offnum);
 
 			offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 			if (offnum == InvalidOffsetNumber)
@@ -8561,7 +8619,8 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
 		/* Set forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetNextCtid(htup, ItemPointerGetBlockNumber(&newtid),
+				ItemPointerGetOffsetNumber(&newtid));
 
 		/* Mark the page as a candidate for pruning */
 		PageSetPrunable(page, XLogRecGetXid(record));
@@ -8695,12 +8754,17 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetHeapLatest(htup);
 
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
 
+		if (OffsetNumberIsValid(xlrec->root_offnum))
+			HeapTupleHeaderSetRootOffset(htup, xlrec->root_offnum);
+		else
+			HeapTupleHeaderSetRootOffset(htup, offnum);
+
 		if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED)
 			PageClearAllVisible(page);
 
@@ -8828,9 +8892,7 @@ heap_xlog_lock(XLogReaderState *record)
 		{
 			HeapTupleHeaderClearHotUpdated(htup);
 			/* Make sure there is no forward chain link in t_ctid */
-			ItemPointerSet(&htup->t_ctid,
-						   BufferGetBlockNumber(buffer),
-						   offnum);
+			HeapTupleHeaderSetHeapLatest(htup);
 		}
 		HeapTupleHeaderSetXmax(htup, xlrec->locking_xid);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index c90fb71..39ee6ac 100644
--- a/src/backend/access/heap/hio.c
+++ b/src/backend/access/heap/hio.c
@@ -31,12 +31,18 @@
  * !!! EREPORT(ERROR) IS DISALLOWED HERE !!!  Must PANIC on failure!!!
  *
  * Note - caller must hold BUFFER_LOCK_EXCLUSIVE on the buffer.
+ *
+ * The caller can optionally tell us to set the root offset to the given value.
+ * Otherwise, the root offset is set to the offset of the new location once it
+ * is known. The former is used while updating an existing tuple, while the
+ * latter is used during insertion of a new row.
  */
 void
 RelationPutHeapTuple(Relation relation,
 					 Buffer buffer,
 					 HeapTuple tuple,
-					 bool token)
+					 bool token,
+					 OffsetNumber root_offnum)
 {
 	Page		pageHeader;
 	OffsetNumber offnum;
@@ -69,7 +75,16 @@ RelationPutHeapTuple(Relation relation,
 		ItemId		itemId = PageGetItemId(pageHeader, offnum);
 		Item		item = PageGetItem(pageHeader, itemId);
 
+		/* Copy t_ctid to set the correct block number */
 		((HeapTupleHeader) item)->t_ctid = tuple->t_self;
+
+		HeapTupleHeaderSetHeapLatest((HeapTupleHeader) item);
+		if (OffsetNumberIsValid(root_offnum))
+			HeapTupleHeaderSetRootOffset((HeapTupleHeader) item,
+					root_offnum);
+		else
+			HeapTupleHeaderSetRootOffset((HeapTupleHeader) item,
+					offnum);
 	}
 }
 
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index 6ff9251..7c2231a 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -55,6 +55,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
 
+static void heap_get_root_tuples_internal(Page page,
+				OffsetNumber target_offnum, OffsetNumber *root_offsets);
 
 /*
  * Optionally prune and repair fragmentation in the specified page.
@@ -740,8 +742,9 @@ heap_page_prune_execute(Buffer buffer,
  * holds a pin on the buffer. Once pin is released, a tuple might be pruned
  * and reused by a completely unrelated tuple.
  */
-void
-heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+static void
+heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offsets)
 {
 	OffsetNumber offnum,
 				maxoff;
@@ -820,6 +823,14 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 			/* Remember the root line pointer for this item */
 			root_offsets[nextoffnum - 1] = offnum;
 
+			/*
+			 * If the caller is interested in just one offset and we have
+			 * found it, return immediately.
+			 */
+			if (OffsetNumberIsValid(target_offnum) &&
+					(nextoffnum == target_offnum))
+				return;
+
 			/* Advance to next chain member, if any */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				break;
@@ -829,3 +840,25 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 		}
 	}
 }
+
+/*
+ * Get root line pointer for the given tuple
+ */
+void
+heap_get_root_tuple_one(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offnum)
+{
+	OffsetNumber offsets[MaxHeapTuplesPerPage];
+	heap_get_root_tuples_internal(page, target_offnum, offsets);
+	*root_offnum = offsets[target_offnum - 1];
+}
+
+/*
+ * Get root line pointers for all tuples in the page
+ */
+void
+heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+{
+	heap_get_root_tuples_internal(page, InvalidOffsetNumber,
+			root_offsets);
+}
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index 17584ba..09a164c 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -419,14 +419,14 @@ rewrite_heap_tuple(RewriteState state,
 	 */
 	if (!((old_tuple->t_data->t_infomask & HEAP_XMAX_INVALID) ||
 		  HeapTupleHeaderIsOnlyLocked(old_tuple->t_data)) &&
-		!(ItemPointerEquals(&(old_tuple->t_self),
-							&(old_tuple->t_data->t_ctid))))
+		!(HeapTupleHeaderIsHeapLatest(old_tuple->t_data, old_tuple->t_self)))
 	{
 		OldToNewMapping mapping;
 
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
-		hashkey.tid = old_tuple->t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(old_tuple->t_data, &hashkey.tid,
+				ItemPointerGetOffsetNumber(&old_tuple->t_self));
 
 		mapping = (OldToNewMapping)
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -439,7 +439,9 @@ rewrite_heap_tuple(RewriteState state,
 			 * set the ctid of this tuple to point to the new location, and
 			 * insert it right away.
 			 */
-			new_tuple->t_data->t_ctid = mapping->new_tid;
+			HeapTupleHeaderSetNextCtid(new_tuple->t_data,
+					ItemPointerGetBlockNumber(&mapping->new_tid),
+					ItemPointerGetOffsetNumber(&mapping->new_tid));
 
 			/* We don't need the mapping entry anymore */
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -525,7 +527,9 @@ rewrite_heap_tuple(RewriteState state,
 				new_tuple = unresolved->tuple;
 				free_new = true;
 				old_tid = unresolved->old_tid;
-				new_tuple->t_data->t_ctid = new_tid;
+				HeapTupleHeaderSetNextCtid(new_tuple->t_data,
+						ItemPointerGetBlockNumber(&new_tid),
+						ItemPointerGetOffsetNumber(&new_tid));
 
 				/*
 				 * We don't need the hash entry anymore, but don't free its
@@ -731,7 +735,10 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		onpage_tup->t_ctid = tup->t_self;
+		HeapTupleHeaderSetNextCtid(onpage_tup,
+				ItemPointerGetBlockNumber(&tup->t_self),
+				ItemPointerGetOffsetNumber(&tup->t_self));
+		HeapTupleHeaderSetHeapLatest(onpage_tup);
 	}
 
 	/* If heaptup is a private copy, release it. */
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 009c1b7..882ce18 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -788,7 +788,8 @@ retry:
 			  DirtySnapshot.speculativeToken &&
 			  TransactionIdPrecedes(GetCurrentTransactionId(), xwait))))
 		{
-			ctid_wait = tup->t_data->t_ctid;
+			HeapTupleHeaderGetNextCtid(tup->t_data, &ctid_wait,
+					ItemPointerGetOffsetNumber(&tup->t_self));
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 32bb3f9..466609c 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -2443,7 +2443,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		 * As above, it should be safe to examine xmax and t_ctid without the
 		 * buffer content lock, because they can't be changing.
 		 */
-		if (ItemPointerEquals(&tuple.t_self, &tuple.t_data->t_ctid))
+		if (HeapTupleHeaderIsHeapLatest(tuple.t_data, tuple.t_self))
 		{
 			/* deleted, so forget about it */
 			ReleaseBuffer(buffer);
@@ -2451,7 +2451,8 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		}
 
 		/* updated, so look at the updated row */
-		tuple.t_self = tuple.t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(tuple.t_data, &tuple.t_self,
+				ItemPointerGetOffsetNumber(&tuple.t_self));
 		/* updated row should have xmin matching this xmax */
 		priorXmax = HeapTupleHeaderGetUpdateXid(tuple.t_data);
 		ReleaseBuffer(buffer);
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 0d12bbb..81f7982 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -188,6 +188,8 @@ extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
+extern void heap_get_root_tuple_one(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offnum);
 extern void heap_get_root_tuples(Page page, OffsetNumber *root_offsets);
 
 /* in heap/syncscan.c */
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index 06a8242..5a04561 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -193,6 +193,8 @@ typedef struct xl_heap_update
 	uint8		flags;
 	TransactionId new_xmax;		/* xmax of the new tuple */
 	OffsetNumber new_offnum;	/* new tuple's offset */
+	OffsetNumber root_offnum;	/* offset of the root line pointer in case of
+								   HOT or WARM update */
 
 	/*
 	 * If XLOG_HEAP_CONTAINS_OLD_TUPLE or XLOG_HEAP_CONTAINS_OLD_KEY flags are
@@ -200,7 +202,7 @@ typedef struct xl_heap_update
 	 */
 } xl_heap_update;
 
-#define SizeOfHeapUpdate	(offsetof(xl_heap_update, new_offnum) + sizeof(OffsetNumber))
+#define SizeOfHeapUpdate	(offsetof(xl_heap_update, root_offnum) + sizeof(OffsetNumber))
 
 /*
  * This is what we need to know about vacuum page cleanup/redirect
diff --git a/src/include/access/hio.h b/src/include/access/hio.h
index a174b34..82e5b5f 100644
--- a/src/include/access/hio.h
+++ b/src/include/access/hio.h
@@ -36,7 +36,7 @@ typedef struct BulkInsertStateData
 
 
 extern void RelationPutHeapTuple(Relation relation, Buffer buffer,
-					 HeapTuple tuple, bool token);
+					 HeapTuple tuple, bool token, OffsetNumber root_offnum);
 extern Buffer RelationGetBufferForTuple(Relation relation, Size len,
 						  Buffer otherBuffer, int options,
 						  BulkInsertState bistate,
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 8fb1f6d..4313eb9 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,13 +260,19 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x1800 are available */
+/* bit 0x0800 is available */
+#define HEAP_LATEST_TUPLE		0x1000	/*
+										 * This is the last tuple in chain and
+										 * ip_posid points to the root line
+										 * pointer
+										 */
 #define HEAP_KEYS_UPDATED		0x2000	/* tuple was updated and key cols
 										 * modified, or tuple deleted */
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xE000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+
 
 /*
  * HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins.  It is
@@ -504,6 +510,30 @@ do { \
   (tup)->t_infomask2 & HEAP_ONLY_TUPLE \
 )
 
+#define HeapTupleHeaderSetHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE \
+)
+
+#define HeapTupleHeaderClearHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 &= ~HEAP_LATEST_TUPLE \
+)
+
+/*
+ * HEAP_LATEST_TUPLE is set on the last tuple in the update chain. But for
+ * clusters which are upgraded from a pre-10.0 release, we also check whether
+ * t_ctid points to itself and declare such a tuple to be the latest tuple in
+ * the chain.
+ */
+#define HeapTupleHeaderIsHeapLatest(tup, tid) \
+( \
+  ((tup)->t_infomask2 & HEAP_LATEST_TUPLE) || \
+  ((ItemPointerGetBlockNumber(&(tup)->t_ctid) == ItemPointerGetBlockNumber(&tid)) && \
+   (ItemPointerGetOffsetNumber(&(tup)->t_ctid) == ItemPointerGetOffsetNumber(&tid))) \
+)
+
+
 #define HeapTupleHeaderSetHeapOnly(tup) \
 ( \
   (tup)->t_infomask2 |= HEAP_ONLY_TUPLE \
@@ -542,6 +572,55 @@ do { \
 
 
 /*
+ * Set the t_ctid chain and also clear the HEAP_LATEST_TUPLE flag since we
+ * probably have a new tuple in the chain
+ */
+#define HeapTupleHeaderSetNextCtid(tup, block, offset) \
+do { \
+		ItemPointerSetBlockNumber(&((tup)->t_ctid), (block)); \
+		ItemPointerSetOffsetNumber(&((tup)->t_ctid), (offset)); \
+		HeapTupleHeaderClearHeapLatest((tup)); \
+} while (0)
+
+/*
+ * Get the TID of the next tuple in the update chain. Traditionally, we have
+ * stored the tuple's own TID in the t_ctid field if it is the last tuple in
+ * the chain. We preserve that behaviour by returning the self-TID when the
+ * HEAP_LATEST_TUPLE flag is set.
+ */
+#define HeapTupleHeaderGetNextCtid(tup, next_ctid, offnum) \
+do { \
+	if ((tup)->t_infomask2 & HEAP_LATEST_TUPLE) \
+	{ \
+		ItemPointerSet((next_ctid), ItemPointerGetBlockNumber(&(tup)->t_ctid), \
+				(offnum)); \
+	} \
+	else \
+	{ \
+		ItemPointerSet((next_ctid), ItemPointerGetBlockNumber(&(tup)->t_ctid), \
+				ItemPointerGetOffsetNumber(&(tup)->t_ctid)); \
+	} \
+} while (0)
+
+#define HeapTupleHeaderSetRootOffset(tup, offset) \
+do { \
+	AssertMacro(!HeapTupleHeaderIsHotUpdated(tup)); \
+	AssertMacro((tup)->t_infomask2 & HEAP_LATEST_TUPLE); \
+	ItemPointerSetOffsetNumber(&(tup)->t_ctid, (offset)); \
+} while (0)
+
+#define HeapTupleHeaderGetRootOffset(tup) \
+( \
+	AssertMacro((tup)->t_infomask2 & HEAP_LATEST_TUPLE), \
+	ItemPointerGetOffsetNumber(&(tup)->t_ctid) \
+)
+
+#define HeapTupleHeaderHasRootOffset(tup) \
+( \
+	(tup)->t_infomask2 & HEAP_LATEST_TUPLE \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
Attachment: 0002_warm_updates_v7.patch (application/octet-stream)
diff --git a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c
index b68a0d1..b95275f 100644
--- a/contrib/bloom/blutils.c
+++ b/contrib/bloom/blutils.c
@@ -138,6 +138,7 @@ blhandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = blendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index 1b45a4c..ba3fffb 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -111,6 +111,7 @@ brinhandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = brinendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c
index b8aa9bc..491e411 100644
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -88,6 +88,7 @@ gisthandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = gistendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c
index 6806e32..2026004 100644
--- a/src/backend/access/hash/hash.c
+++ b/src/backend/access/hash/hash.c
@@ -85,6 +85,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = hashendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = hashrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -265,6 +266,8 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 	OffsetNumber offnum;
 	ItemPointer current;
 	bool		res;
+	IndexTuple	itup;
+
 
 	/* Hash indexes are always lossy since we store only the hash code */
 	scan->xs_recheck = true;
@@ -302,8 +305,6 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 			 offnum <= maxoffnum;
 			 offnum = OffsetNumberNext(offnum))
 		{
-			IndexTuple	itup;
-
 			itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 			if (ItemPointerEquals(&(so->hashso_heappos), &(itup->t_tid)))
 				break;
diff --git a/src/backend/access/hash/hashsearch.c b/src/backend/access/hash/hashsearch.c
index 8d43b38..05b078f 100644
--- a/src/backend/access/hash/hashsearch.c
+++ b/src/backend/access/hash/hashsearch.c
@@ -59,6 +59,8 @@ _hash_next(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
 	return true;
 }
 
@@ -407,6 +409,9 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
+
 	return true;
 }
 
diff --git a/src/backend/access/hash/hashutil.c b/src/backend/access/hash/hashutil.c
index fa9cbdc..6897985 100644
--- a/src/backend/access/hash/hashutil.c
+++ b/src/backend/access/hash/hashutil.c
@@ -17,8 +17,12 @@
 #include "access/hash.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
+#include "nodes/execnodes.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 #define CALC_NEW_BUCKET(old_bucket, lowmask) \
 			old_bucket | (lowmask + 1)
@@ -446,3 +450,109 @@ _hash_get_newbucket_from_oldbucket(Relation rel, Bucket old_bucket,
 
 	return new_bucket;
 }
+
+/*
+ * Recheck if the heap tuple satisfies the key stored in the index tuple
+ */
+bool
+hashrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	IndexInfo  *indexInfo;
+	EState	   *estate;
+	ExprContext *econtext;
+	TupleTableSlot *slot;
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	Datum		values2[INDEX_MAX_KEYS];
+	bool		isnull2[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	indexInfo = BuildIndexInfo(indexRel);
+
+	/*
+	 * The heap tuple must be put into a slot for FormIndexDatum.
+	 */
+	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
+
+	ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
+
+	/*
+	 * Typically the index won't have expressions, but if it does we need an
+	 * EState to evaluate them.  We need it for exclusion constraints too,
+	 * even if they are just on simple columns.
+	 */
+	if (indexInfo->ii_Expressions != NIL ||
+			indexInfo->ii_ExclusionOps != NULL)
+	{
+		estate = CreateExecutorState();
+		econtext = GetPerTupleExprContext(estate);
+		econtext->ecxt_scantuple = slot;
+	}
+	else
+		estate = NULL;
+
+	/*
+	 * Form the index values and isnull flags for the index entry that we need
+	 * to check.
+	 *
+	 * Note: if the index uses functions that are not as immutable as they are
+	 * supposed to be, this could produce an index tuple different from the
+	 * original.  The index AM can catch such errors by verifying that it
+	 * finds a matching index entry with the tuple's TID.  For exclusion
+	 * constraints we check this in check_exclusion_constraint().
+	 */
+	FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
+	/*
+	 * HASH indexes compute a hash value of the key and store that in the
+	 * index. So we must first obtain the hash of the value obtained from the
+	 * heap and then do a comparison
+	 */
+	_hash_convert_tuple(indexRel, values, isnull, values2, isnull2);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL then they are equal
+		 */
+		if (isnull2[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If either is NULL then they are not equal
+		 */
+		if (isnull2[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now do a raw memory comparison
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values2[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	if (estate != NULL)
+		FreeExecutorState(estate);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	return equal;
+}
diff --git a/src/backend/access/heap/README.WARM b/src/backend/access/heap/README.WARM
new file mode 100644
index 0000000..f793570
--- /dev/null
+++ b/src/backend/access/heap/README.WARM
@@ -0,0 +1,271 @@
+src/backend/access/heap/README.WARM
+
+Write Amplification Reduction Method (WARM)
+===========================================
+
+The Heap Only Tuple (HOT) feature greatly reduced redundant index
+entries and allowed re-use of the dead space occupied by previously
+updated or deleted tuples (see src/backend/access/heap/README.HOT).
+
+One of the necessary conditions for a HOT update is that the
+update must not change a column used in any of the indexes on the table.
+The condition is sometimes hard to meet, especially for complex
+workloads with several indexes on large yet frequently updated tables.
+Worse, sometimes only one or two index columns may be updated, but the
+regular non-HOT update will still insert a new index entry in every
+index on the table, irrespective of whether the key pertaining to the
+index changed or not.
+
+WARM is a technique devised to address these problems.
+
+
+Update Chains With Multiple Index Entries Pointing to the Root
+--------------------------------------------------------------
+
+When a non-HOT update is caused by an index key change, a new index
+entry must be inserted for the changed index. But if the index key
+hasn't changed for other indexes, we don't really need to insert a new
+entry. Even though the existing index entry is pointing to the old
+tuple, the new tuple is reachable via the t_ctid chain. To keep things
+simple, a WARM update requires that the heap block must have enough
+space to store the new version of the tuple. This is the same
+requirement as for HOT updates.
+
+In WARM, we ensure that every index entry always points to the root of
+the WARM chain. In fact, a WARM chain looks exactly like a HOT chain
+except for the fact that there could be multiple index entries pointing
+to the root of the chain. So when a new entry is inserted in an index
+for the updated tuple during a WARM update, the new entry is made to
+point to the root of the WARM chain.
+
+For example, consider a table with two columns and an index on each of
+them. When a tuple is first inserted into the table, we have exactly
+one index entry pointing to it from each index.
+
+	lp [1]
+	[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's entry (aaaa) also points to 1
+
+Now if the tuple's second column is updated and there is room on the
+page, we perform a WARM update. In that case, Index1 does not get any
+new entry, and Index2's new entry will still point to the root of the
+chain.
+
+	lp [1]  [2]
+	[1111, aaaa]->[1111, bbbb]
+
+	Index1's entry (1111) points to 1
+	Index2's old entry (aaaa) points to 1
+	Index2's new entry (bbbb) also points to 1
+
+"An update chain which has more than one index entry pointing to its
+root line pointer is called a WARM chain, and the action that creates
+a WARM chain is called a WARM update."
+
+Since all indexes always point to the root of the WARM chain, even
+when there is more than one index entry, WARM chains can be pruned and
+dead tuples can be removed without needing a corresponding index
+cleanup.
+
+While this solves the problem of pruning dead tuples from a HOT/WARM
+chain, it also opens up a new technical challenge because now we have a
+situation where a heap tuple is reachable from multiple index entries,
+each having a different index key. While MVCC still ensures that only
+valid tuples are returned, a tuple with a non-matching index key may
+be returned via a stale index entry. In the above example, tuple
+[1111, bbbb] is reachable from both key (aaaa) and key (bbbb). For
+this reason, tuples returned from a WARM chain must always be
+rechecked for an index key match.
+
+Recheck Index Key Against Heap Tuple
+------------------------------------
+
+Since every Index AM has its own notion of index tuples, each Index AM
+must implement its own method to recheck heap tuples. For example, a
+hash index stores the hash value of the column, so the recheck routine
+for the hash AM must first compute the hash value of the heap attribute
+and then compare it against the value stored in the index tuple.
+
+The patch currently implements recheck routines for hash and btree
+indexes. If the table has an index that doesn't provide a recheck
+routine, WARM updates are disabled for that table.
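The NULL-aware, per-attribute comparison such a recheck routine performs can be sketched in plain C. `SimpleDatum` and `recheck_equal` below are hypothetical simplifications; the real routines operate on `IndexTuple`s and `Datum`s through the executor machinery:

```c
#include <stdbool.h>

/* Hypothetical stand-in for one fixed-width attribute value. */
typedef struct
{
	bool	isnull;
	long	value;
} SimpleDatum;

/*
 * Sketch of the recheck rule: an index tuple matches the heap tuple only
 * if every recomputed heap attribute equals the stored index attribute.
 * Two NULLs compare equal; NULL vs non-NULL compares unequal; otherwise
 * the raw values are compared.
 */
static bool
recheck_equal(const SimpleDatum *heapvals, const SimpleDatum *indexvals,
			  int natts)
{
	int			i;

	for (i = 0; i < natts; i++)
	{
		if (heapvals[i].isnull && indexvals[i].isnull)
			continue;			/* both NULL: treated as equal */
		if (heapvals[i].isnull || indexvals[i].isnull)
			return false;		/* exactly one NULL: not equal */
		if (heapvals[i].value != indexvals[i].value)
			return false;		/* raw value mismatch */
	}
	return true;
}
```

For a hash index, `heapvals` would hold the hash of the heap column rather than the column itself, mirroring the `_hash_convert_tuple` step above.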
+
+Problem With Duplicate (key, ctid) Index Entries
+------------------------------------------------
+
+The index-key recheck logic works only as long as there are no
+duplicate index entries with the same key pointing to the same WARM
+chain. Otherwise, the same valid tuple would be reachable via multiple
+index entries, each of which satisfies the key recheck. In the above
+example, if the tuple [1111, bbbb] is again updated to [1111, aaaa]
+and we insert a new index entry (aaaa) pointing to the root line
+pointer, we end up with the following structure:
+
+	lp [1]  [2]  [3]
+	[1111, aaaa]->[1111, bbbb]->[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's oldest entry (aaaa) points to 1
+	Index2's old entry (bbbb) also points to 1
+	Index2's new entry (aaaa) also points to 1
+
+We must solve this problem to ensure that the same tuple is not
+reachable via multiple index pointers. There are a couple of ways to
+address this issue:
+
+1. Do not allow WARM update to a tuple from a WARM chain. This
+guarantees that there can never be duplicate index entries to the same
+root line pointer because we must have checked for old and new index
+keys while doing the first WARM update.
+
+2. Do not allow duplicate (key, ctid) index pointers. In the above
+example, since (aaaa, 1) already exists in the index, we must not insert
+a duplicate index entry.
+
+The patch currently implements option 1, i.e. it does not WARM update
+a tuple that already belongs to a WARM chain. HOT updates are still
+fine because they do not add a new index entry.
+
+Even with this restriction, WARM is a significant improvement because
+the number of regular (non-HOT) updates can be cut roughly in half.
+
+Expression and Partial Indexes
+------------------------------
+
+Expressions may evaluate to the same value even if the underlying
+column values have changed. A simple example is an index on
+"lower(col)", which returns the same value if the new heap value
+differs only in letter case. So we cannot rely solely on the heap
+column check to decide whether or not to insert a new index entry for
+expression indexes. Similarly, for partial indexes, the predicate
+expression must be evaluated to decide whether or not a new index
+entry is needed when columns referenced in the predicate change.
+
+(None of this is currently implemented: we simply disallow a WARM
+update if a column used in an expression index or in an index
+predicate has changed.)
+
+
+Efficiently Finding the Root Line Pointer
+-----------------------------------------
+
+During a WARM update, we must be able to find the root line pointer
+of the tuple being updated. Note that the t_ctid field in the heap
+tuple header is normally used to find the next tuple in the update
+chain. But the tuple being updated must be the last tuple in the
+chain, and in that case t_ctid normally points to the tuple itself.
+So, in theory, we can use t_ctid to store additional information in
+the last tuple of the update chain, provided the fact that the tuple
+is the last one is recorded elsewhere.
+
+We now utilize another bit from t_infomask2 to explicitly identify that
+this is the last tuple in the update chain.
+
+HEAP_LATEST_TUPLE - When this bit is set, the tuple is the last tuple in
+the update chain. The OffsetNumber part of t_ctid points to the root
+line pointer of the chain when HEAP_LATEST_TUPLE flag is set.
+
+If the UPDATE operation aborts, the last tuple in the update chain
+becomes dead, and the tuple which remains the last valid one in the
+chain does not carry the root line pointer information. In such rare
+cases, the root line pointer must be found the hard way, by scanning
+the entire heap page.
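A minimal sketch of the flag-and-offset scheme, assuming a simplified two-field header (`MiniTupleHeader` is hypothetical, and the bit chosen for `HEAP_LATEST_TUPLE` here is illustrative only):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative flag bit; the real bit lives in t_infomask2. */
#define HEAP_LATEST_TUPLE 0x8000

/* Hypothetical, reduced tuple header: a flags word plus the
 * OffsetNumber part of t_ctid. */
typedef struct
{
	uint16_t	infomask2;
	uint16_t	ctid_offset;	/* OffsetNumber part of t_ctid */
} MiniTupleHeader;

/* Mark a tuple as the last in its chain and record the root offset. */
static void
set_latest_and_root(MiniTupleHeader *tup, uint16_t root_offset)
{
	tup->infomask2 |= HEAP_LATEST_TUPLE;
	tup->ctid_offset = root_offset;
}

/*
 * The root offset stored in ctid_offset is only meaningful when
 * HEAP_LATEST_TUPLE is set; otherwise the caller must fall back to
 * scanning the whole page.
 */
static bool
get_root_offset(const MiniTupleHeader *tup, uint16_t *root_offset)
{
	if ((tup->infomask2 & HEAP_LATEST_TUPLE) == 0)
		return false;
	*root_offset = tup->ctid_offset;
	return true;
}
```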
+
+Tracking WARM Chains
+--------------------
+
+The old and every subsequent tuple in the chain is marked with a special
+HEAP_WARM_TUPLE flag. We use the last remaining bit in t_infomask2 to
+store this information.
+
+When a tuple is returned from a WARM chain, the caller must do
+additional checks to ensure that the tuple matches the index key.
+Even if the tuple precedes the WARM update in the chain, it must
+still be rechecked for an index key match (to cover the case where
+the old tuple is returned via the new index key). So we must follow
+the update chain to the end every time to check if it is a WARM chain.
+
+When the old updated tuple is retired and the root line pointer is
+converted into a redirect line pointer, we copy the information about
+the WARM chain to the redirect line pointer by storing a special
+value in its lp_len field. This handles the most common case, where a
+WARM chain is reduced to a redirect line pointer and a single
+remaining tuple.
+
+Converting WARM chains back to HOT chains (VACUUM ?)
+----------------------------------------------------
+
+The current implementation of WARM allows only one WARM update per
+chain. This simplifies the design and addresses certain issues around
+duplicate scans. But this also implies that the benefit of WARM will be
+no more than 50%. That is still significant, but if we could convert
+WARM chains back to normal status, we could do far more WARM updates.
+
+A distinct property of a WARM chain is that at least one index has
+more than one live index entry pointing to the root of the chain. In
+other words, if we can remove the duplicate entry from every index, or
+conclusively prove that there are no duplicate index entries for the
+root line pointer, the chain can again be marked as HOT.
+
+Here is one idea:
+
+A WARM chain has two parts, separated by the tuple that caused the
+WARM update. All tuples in each part have matching index keys, but
+certain index keys may not match between the two parts. Let's say we
+mark the heap tuples in each part with a special Red-Blue flag, and
+replicate the same flag in the index tuples. For example, when new
+rows are inserted into a table, they are marked with the Blue flag,
+and the index entries associated with those rows are also marked
+Blue. When a row is WARM updated, the new version is marked with the
+Red flag and the new index entry created by the update is marked Red.
+
+
+Heap chain: [1] [2] [3] [4]
+			[aaaa, 1111]B -> [aaaa, 1111]B -> [bbbb, 1111]R -> [bbbb, 1111]R
+
+Index1: 	(aaaa)B points to 1 (satisfies only tuples marked with B)
+			(bbbb)R points to 1 (satisfies only tuples marked with R)
+
+Index2:		(1111)B points to 1 (satisfies both B and R tuples)
+
+
+It's clear that in an index with both Red and Blue pointers, a heap
+tuple with the Blue flag is reachable from the Blue pointer and one
+with the Red flag from the Red pointer. But in an index which did not
+create a new entry, both Blue and Red tuples are reachable from the
+Blue pointer (there is no Red pointer there). So, as a side note,
+simply matching Red and Blue flags is not enough for index scans.
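The reachability rule just described can be written down as a small predicate. `pointer_satisfies_tuple` is a hypothetical sketch for illustration, not part of the patch:

```c
#include <stdbool.h>

typedef enum { FLAG_BLUE, FLAG_RED } ChainColor;

/*
 * Sketch of the Red-Blue matching rule: a Red index pointer satisfies
 * only Red heap tuples.  A Blue pointer satisfies Blue tuples always,
 * and Red tuples too when the index never got a Red pointer for this
 * chain (i.e. its key did not change in the WARM update).
 */
static bool
pointer_satisfies_tuple(ChainColor ptr, ChainColor tuple,
						bool index_has_red_pointer)
{
	if (ptr == FLAG_RED)
		return tuple == FLAG_RED;

	/* Blue pointer */
	if (tuple == FLAG_BLUE)
		return true;
	return !index_has_red_pointer;
}
```

In the example above, Index2 has no Red pointer, so its single Blue entry must satisfy both the Blue and the Red halves of the chain.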
+
+During the first heap scan of VACUUM, we look for tuples with
+HEAP_WARM_TUPLE set.  If all live tuples in the chain are marked with
+either the Blue flag or the Red flag (but not a mix of the two), then
+the chain is a candidate for HOT conversion.  We remember the root
+line pointer and Red-Blue flag of the WARM chain in a separate array.
+
+If we have a Red WARM chain, then our goal is to remove the Blue
+pointers, and vice versa. But there is a catch: for Index2 above,
+there is only a Blue pointer, and it must not be removed. IOW we
+should remove a Blue pointer iff a Red pointer exists. Since index
+vacuum may visit Red and Blue pointers in any order, we need another
+index pass to remove dead index pointers. In the first index pass we
+check which WARM candidates have two index pointers; in the second, we
+remove the dead pointer and reset the Red flag if the survivor is Red.
+
+During the second heap scan, we fix the WARM chains by clearing the
+HEAP_WARM_TUPLE flag and resetting Red flags to Blue.
+
+There are some remaining problems around aborted vacuums. For example,
+if vacuum aborts after changing a Red index flag to Blue but before
+removing the other Blue pointer, we end up with two Blue pointers to a
+Red WARM chain. But since the HEAP_WARM_TUPLE flag on the heap tuple
+is still set, further WARM updates to the chain will be blocked. We
+will need some special handling for the case of multiple Blue
+pointers: we can either leave these WARM chains alone and let them die
+with a subsequent non-WARM update, or apply the heap-recheck logic
+during index vacuum to find the dead pointer. Given that vacuum aborts
+are not common, I am inclined to leave this case unhandled. We must
+still check for the presence of multiple Blue pointers and ensure that
+we neither accidentally remove either of the Blue pointers nor clear
+the WARM chain flags.
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index a22aae7..082bd1f 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -1957,6 +1957,76 @@ heap_fetch(Relation relation,
 }
 
 /*
+ * Check if the HOT chain originating at or continuing through tid ever
+ * became a WARM chain, even if the actual UPDATE operation finally aborted.
+ */
+static void
+hot_check_warm_chain(Page dp, ItemPointer tid, bool *recheck)
+{
+	TransactionId prev_xmax = InvalidTransactionId;
+	OffsetNumber offnum;
+	HeapTupleData heapTuple;
+
+	if (*recheck == true)
+		return;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+			break;
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		/*
+		 * Presence of either a WARM or a WARM-updated tuple signals possible
+		 * breakage, and the caller must recheck any tuple returned from this
+		 * chain for index satisfaction.
+		 */
+		if (HeapTupleHeaderIsHeapWarmTuple(heapTuple.t_data))
+		{
+			*recheck = true;
+			break;
+		}
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (HeapTupleIsHotUpdated(&heapTuple))
+		{
+			offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+			prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+		}
+		else
+			break;				/* end of chain */
+	}
+
+}
+
+/*
  *	heap_hot_search_buffer	- search HOT chain for tuple satisfying snapshot
  *
  * On entry, *tid is the TID of a tuple (either a simple tuple, or the root
@@ -1976,11 +2046,14 @@ heap_fetch(Relation relation,
  * Unlike heap_fetch, the caller must already have pin and (at least) share
  * lock on the buffer; it is still pinned/locked at exit.  Also unlike
  * heap_fetch, we do not report any pgstats count; caller may do so if wanted.
+ *
+ * recheck should be set to false by the caller on entry; it will be set to
+ * true on exit if a WARM tuple is encountered.
  */
 bool
 heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 					   Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call)
+					   bool *all_dead, bool first_call, bool *recheck)
 {
 	Page		dp = (Page) BufferGetPage(buffer);
 	TransactionId prev_xmax = InvalidTransactionId;
@@ -2022,6 +2095,16 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 				/* Follow the redirect */
 				offnum = ItemIdGetRedirect(lp);
 				at_chain_start = false;
+
+				/* Check if it's a WARM chain */
+				if (recheck && *recheck == false)
+				{
+					if (ItemIdIsHeapWarm(lp))
+					{
+						*recheck = true;
+						Assert(!IsSystemRelation(relation));
+					}
+				}
 				continue;
 			}
 			/* else must be end of chain */
@@ -2034,9 +2117,12 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		ItemPointerSetOffsetNumber(&heapTuple->t_self, offnum);
 
 		/*
-		 * Shouldn't see a HEAP_ONLY tuple at chain start.
+		 * Shouldn't see a HEAP_ONLY tuple at chain start, unless we are
+		 * dealing with a WARM updated tuple, in which case deferred triggers
+		 * may request to fetch a WARM tuple from the middle of a chain.
 		 */
-		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple))
+		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple) &&
+				!HeapTupleIsHeapWarmTuple(heapTuple))
 			break;
 
 		/*
@@ -2049,6 +2135,22 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 			break;
 
 		/*
+		 * Check if there exists a WARM tuple somewhere down the chain and set
+		 * recheck to TRUE.
+		 *
+		 * XXX This is not very efficient right now, and we should look for
+		 * possible improvements here
+		 */
+		if (recheck && *recheck == false)
+		{
+			hot_check_warm_chain(dp, &heapTuple->t_self, recheck);
+
+			/* WARM is not supported on system tables yet */
+			if (*recheck == true)
+				Assert(!IsSystemRelation(relation));
+		}
+
+		/*
 		 * When first_call is true (and thus, skip is initially false) we'll
 		 * return the first tuple we find.  But on later passes, heapTuple
 		 * will initially be pointing to the tuple we returned last time.
@@ -2121,18 +2223,41 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
  */
 bool
 heap_hot_search(ItemPointer tid, Relation relation, Snapshot snapshot,
-				bool *all_dead)
+				bool *all_dead, bool *recheck, Buffer *cbuffer,
+				HeapTuple heapTuple)
 {
 	bool		result;
 	Buffer		buffer;
-	HeapTupleData heapTuple;
+	ItemPointerData ret_tid = *tid;
 
 	buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	LockBuffer(buffer, BUFFER_LOCK_SHARE);
-	result = heap_hot_search_buffer(tid, relation, buffer, snapshot,
-									&heapTuple, all_dead, true);
-	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-	ReleaseBuffer(buffer);
+	result = heap_hot_search_buffer(&ret_tid, relation, buffer, snapshot,
+									heapTuple, all_dead, true, recheck);
+
+	/*
+	 * If we are returning a potential candidate tuple from this chain and the
+	 * caller has requested the "recheck" hint, keep the buffer locked and
+	 * pinned. The caller must release the lock and pin on the buffer in all
+	 * such cases.
+	 */
+	if (!result || !recheck || !(*recheck))
+	{
+		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+		ReleaseBuffer(buffer);
+	}
+
+	/*
+	 * Set the caller-supplied tid to the actual location of the tuple being
+	 * returned.
+	 */
+	if (result)
+	{
+		*tid = ret_tid;
+		if (cbuffer)
+			*cbuffer = buffer;
+	}
+
 	return result;
 }
 
@@ -3439,13 +3564,15 @@ simple_heap_delete(Relation relation, ItemPointer tid)
 HTSU_Result
 heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode)
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update)
 {
 	HTSU_Result result;
 	TransactionId xid = GetCurrentTransactionId();
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *exprindx_attrs;
 	Bitmapset  *interesting_attrs;
 	Bitmapset  *modified_attrs;
 	ItemId		lp;
@@ -3468,6 +3595,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		have_tuple_lock = false;
 	bool		iscombo;
 	bool		use_hot_update = false;
+	bool		use_warm_update = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
 	bool		all_visible_cleared_new = false;
@@ -3492,6 +3620,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot update tuples during a parallel operation")));
 
+	/* Assume a non-WARM update */
+	if (warm_update)
+		*warm_update = false;
+
 	/*
 	 * Fetch the list of attributes to be checked for various operations.
 	 *
@@ -3513,10 +3645,13 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	exprindx_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_EXPR_PREDICATE);
+
 	interesting_attrs = bms_add_members(NULL, hot_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
-
+	interesting_attrs = bms_add_members(interesting_attrs, exprindx_attrs);
 
 	block = ItemPointerGetBlockNumber(otid);
 	offnum = ItemPointerGetOffsetNumber(otid);
@@ -3568,6 +3703,9 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
 												  &oldtup, newtup);
 
+	if (modified_attrsp)
+		*modified_attrsp = bms_copy(modified_attrs);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3818,6 +3956,7 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(exprindx_attrs);
 		bms_free(modified_attrs);
 		bms_free(interesting_attrs);
 		return result;
@@ -4126,6 +4265,36 @@ l2:
 		 */
 		if (!bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
+		else
+		{
+			/*
+			 * If no WARM updates yet on this chain, let this update be a WARM
+			 * update.
+			 *
+			 * We check for both warm and warm updated tuples since if the
+			 * previous WARM update aborted, we may still have added
+			 * another index entry for this HOT chain. In such situations, we
+			 * must not attempt a WARM update until duplicate (key, CTID) index
+			 * entry issue is sorted out
+			 *
+			 * XXX Later we'll add more checks to allow WARM chains to be
+			 * further WARM updated. This is probably good to do after a first
+			 * round of tests of the remaining functionality.
+			 *
+			 * XXX Disable WARM updates on system tables. There is nothing in
+			 * principle that stops us from supporting this. But it would
+			 * require an API change to propagate the changed columns back to the
+			 * caller so that CatalogUpdateIndexes() can avoid adding new
+			 * entries to indexes that are not changed by update. This will be
+			 * fixed once basic patch is tested. !!FIXME
+			 */
+			if (relation->rd_supportswarm &&
+				!bms_overlap(modified_attrs, exprindx_attrs) &&
+				!bms_is_subset(hot_attrs, modified_attrs) &&
+				!HeapTupleIsHeapWarmTuple(&oldtup) &&
+				!IsSystemRelation(relation))
+				use_warm_update = true;
+		}
 	}
 	else
 	{
@@ -4168,6 +4337,21 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+
+		/*
+		 * Even if we are doing a HOT update, we must carry forward the WARM
+		 * flag because we may have already inserted another index entry
+		 * pointing to our root, and a third entry may create duplicates.
+		 *
+		 * XXX This should be revisited if we get index (key, CTID) duplicate
+		 * detection mechanism in place
+		 */
+		if (HeapTupleIsHeapWarmTuple(&oldtup))
+		{
+			HeapTupleSetHeapWarmTuple(heaptup);
+			HeapTupleSetHeapWarmTuple(newtup);
+		}
+
 		/*
 		 * For HOT (or WARM) updated tuples, we store the offset of the root
 		 * line pointer of this chain in the ip_posid field of the new tuple.
@@ -4183,12 +4367,38 @@ l2:
 					ItemPointerGetOffsetNumber(&(oldtup.t_self)),
 					&root_offnum);
 	}
+	else if (use_warm_update)
+	{
+		Assert(!IsSystemRelation(relation));
+
+		/* Mark the old tuple as HOT-updated */
+		HeapTupleSetHotUpdated(&oldtup);
+		HeapTupleSetHeapWarmTuple(&oldtup);
+		/* And mark the new tuple as heap-only */
+		HeapTupleSetHeapOnly(heaptup);
+		HeapTupleSetHeapWarmTuple(heaptup);
+		/* Mark the caller's copy too, in case different from heaptup */
+		HeapTupleSetHeapOnly(newtup);
+		HeapTupleSetHeapWarmTuple(newtup);
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			heap_get_root_tuple_one(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)),
+					&root_offnum);
+
+		/* Let the caller know we did a WARM update */
+		if (warm_update)
+			*warm_update = true;
+	}
 	else
 	{
 		/* Make sure tuples are correctly marked as not-HOT */
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		HeapTupleClearHeapWarmTuple(heaptup);
+		HeapTupleClearHeapWarmTuple(newtup);
 		root_offnum = InvalidOffsetNumber;
 	}
 
@@ -4307,7 +4517,10 @@ l2:
 	if (have_tuple_lock)
 		UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode);
 
-	pgstat_count_heap_update(relation, use_hot_update);
+	/*
+	 * Count HOT and WARM updates separately
+	 */
+	pgstat_count_heap_update(relation, use_hot_update, use_warm_update);
 
 	/*
 	 * If heaptup is a private copy, release it.  Don't forget to copy t_self
@@ -4456,7 +4669,7 @@ simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
 	result = heap_update(relation, otid, tup,
 						 GetCurrentCommandId(true), InvalidSnapshot,
 						 true /* wait for commit */ ,
-						 &hufd, &lockmode);
+						 &hufd, &lockmode, NULL, NULL);
 	switch (result)
 	{
 		case HeapTupleSelfUpdated:
@@ -7354,6 +7567,7 @@ log_heap_cleanup_info(RelFileNode rnode, TransactionId latestRemovedXid)
 XLogRecPtr
 log_heap_clean(Relation reln, Buffer buffer,
 			   OffsetNumber *redirected, int nredirected,
+			   OffsetNumber *warm, int nwarm,
 			   OffsetNumber *nowdead, int ndead,
 			   OffsetNumber *nowunused, int nunused,
 			   TransactionId latestRemovedXid)
@@ -7367,6 +7581,7 @@ log_heap_clean(Relation reln, Buffer buffer,
 	xlrec.latestRemovedXid = latestRemovedXid;
 	xlrec.nredirected = nredirected;
 	xlrec.ndead = ndead;
+	xlrec.nwarm = nwarm;
 
 	XLogBeginInsert();
 	XLogRegisterData((char *) &xlrec, SizeOfHeapClean);
@@ -7389,6 +7604,10 @@ log_heap_clean(Relation reln, Buffer buffer,
 		XLogRegisterBufData(0, (char *) nowdead,
 							ndead * sizeof(OffsetNumber));
 
+	if (nwarm > 0)
+		XLogRegisterBufData(0, (char *) warm,
+							nwarm * sizeof(OffsetNumber));
+
 	if (nunused > 0)
 		XLogRegisterBufData(0, (char *) nowunused,
 							nunused * sizeof(OffsetNumber));
@@ -7494,6 +7713,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	bool		need_tuple_data = RelationIsLogicallyLogged(reln);
 	bool		init;
 	int			bufflags;
+	bool		warm_update = false;
 
 	/* Caller should not call me on a non-WAL-logged relation */
 	Assert(RelationNeedsWAL(reln));
@@ -7505,6 +7725,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	else
 		info = XLOG_HEAP_UPDATE;
 
+	if (HeapTupleIsHeapWarmTuple(newtup))
+		warm_update = true;
+
 	/*
 	 * If the old and new tuple are on the same page, we only need to log the
 	 * parts of the new tuple that were changed.  That saves on the amount of
@@ -7578,6 +7801,8 @@ log_heap_update(Relation reln, Buffer oldbuf,
 				xlrec.flags |= XLH_UPDATE_CONTAINS_OLD_KEY;
 		}
 	}
+	if (warm_update)
+		xlrec.flags |= XLH_UPDATE_WARM_UPDATE;
 
 	/* If new tuple is the single and first tuple on page... */
 	if (ItemPointerGetOffsetNumber(&(newtup->t_self)) == FirstOffsetNumber &&
@@ -7945,24 +8170,38 @@ heap_xlog_clean(XLogReaderState *record)
 		OffsetNumber *redirected;
 		OffsetNumber *nowdead;
 		OffsetNumber *nowunused;
+		OffsetNumber *warm;
 		int			nredirected;
 		int			ndead;
 		int			nunused;
+		int			nwarm;
+		int			i;
 		Size		datalen;
+		bool		warmchain[MaxHeapTuplesPerPage + 1];
 
 		redirected = (OffsetNumber *) XLogRecGetBlockData(record, 0, &datalen);
 
 		nredirected = xlrec->nredirected;
 		ndead = xlrec->ndead;
+		nwarm = xlrec->nwarm;
+
 		end = (OffsetNumber *) ((char *) redirected + datalen);
 		nowdead = redirected + (nredirected * 2);
-		nowunused = nowdead + ndead;
-		nunused = (end - nowunused);
+		warm = nowdead + ndead;
+		nowunused = warm + nwarm;
+
+		nunused = (end - nowunused);
 		Assert(nunused >= 0);
 
+		memset(warmchain, 0, sizeof (warmchain));
+		for (i = 0; i < nwarm; i++)
+			warmchain[warm[i]] = true;
+
+
 		/* Update all item pointers per the record, and repair fragmentation */
 		heap_page_prune_execute(buffer,
 								redirected, nredirected,
+								warmchain,
 								nowdead, ndead,
 								nowunused, nunused);
 
@@ -8549,16 +8788,22 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 	Size		freespace = 0;
 	XLogRedoAction oldaction;
 	XLogRedoAction newaction;
+	bool		warm_update = false;
 
 	/* initialize to keep the compiler quiet */
 	oldtup.t_data = NULL;
 	oldtup.t_len = 0;
 
+	if (xlrec->flags & XLH_UPDATE_WARM_UPDATE)
+		warm_update = true;
+
 	XLogRecGetBlockTag(record, 0, &rnode, NULL, &newblk);
 	if (XLogRecGetBlockTag(record, 1, NULL, NULL, &oldblk))
 	{
 		/* HOT updates are never done across pages */
 		Assert(!hot_update);
+		/* WARM updates are never done across pages */
+		Assert(!warm_update);
 	}
 	else
 		oldblk = newblk;
@@ -8618,6 +8863,11 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 								   &htup->t_infomask2);
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
+
+		/* Mark the old tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetHeapWarmTuple(htup);
+
 		/* Set forward chain link in t_ctid */
 		HeapTupleHeaderSetNextCtid(htup, ItemPointerGetBlockNumber(&newtid),
 				ItemPointerGetOffsetNumber(&newtid));
@@ -8753,6 +9003,11 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
+
+		/* Mark the new tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetHeapWarmTuple(htup);
+
 		/* Make sure there is no forward chain link in t_ctid */
 		HeapTupleHeaderSetHeapLatest(htup);
 
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index 7c2231a..d71a297 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -36,12 +36,19 @@ typedef struct
 	int			nredirected;	/* numbers of entries in arrays below */
 	int			ndead;
 	int			nunused;
+	int			nwarm;
 	/* arrays that accumulate indexes of items to be changed */
 	OffsetNumber redirected[MaxHeapTuplesPerPage * 2];
 	OffsetNumber nowdead[MaxHeapTuplesPerPage];
 	OffsetNumber nowunused[MaxHeapTuplesPerPage];
+	OffsetNumber warm[MaxHeapTuplesPerPage];
 	/* marked[i] is TRUE if item i is entered in one of the above arrays */
 	bool		marked[MaxHeapTuplesPerPage + 1];
+	/*
+	 * warmchain[i] is TRUE if item i is becoming a redirected lp and
+	 * points to a WARM chain
+	 */
+	bool		warmchain[MaxHeapTuplesPerPage + 1];
 } PruneState;
 
 /* Local functions */
@@ -54,6 +61,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 						   OffsetNumber offnum, OffsetNumber rdoffnum);
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
+static void heap_prune_record_warmupdate(PruneState *prstate,
+						   OffsetNumber offnum);
 
 static void heap_get_root_tuples_internal(Page page,
 				OffsetNumber target_offnum, OffsetNumber *root_offsets);
@@ -203,8 +212,9 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
 	 */
 	prstate.new_prune_xid = InvalidTransactionId;
 	prstate.latestRemovedXid = *latestRemovedXid;
-	prstate.nredirected = prstate.ndead = prstate.nunused = 0;
+	prstate.nredirected = prstate.ndead = prstate.nunused = prstate.nwarm = 0;
 	memset(prstate.marked, 0, sizeof(prstate.marked));
+	memset(prstate.warmchain, 0, sizeof(prstate.warmchain));
 
 	/* Scan the page */
 	maxoff = PageGetMaxOffsetNumber(page);
@@ -241,6 +251,7 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
 		 */
 		heap_page_prune_execute(buffer,
 								prstate.redirected, prstate.nredirected,
+								prstate.warmchain,
 								prstate.nowdead, prstate.ndead,
 								prstate.nowunused, prstate.nunused);
 
@@ -268,6 +279,7 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
 
 			recptr = log_heap_clean(relation, buffer,
 									prstate.redirected, prstate.nredirected,
+									prstate.warm, prstate.nwarm,
 									prstate.nowdead, prstate.ndead,
 									prstate.nowunused, prstate.nunused,
 									prstate.latestRemovedXid);
@@ -479,6 +491,12 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
 			!TransactionIdEquals(HeapTupleHeaderGetXmin(htup), priorXmax))
 			break;
 
+		if (HeapTupleHeaderIsHeapWarmTuple(htup))
+		{
+			Assert(!IsSystemRelation(relation));
+			heap_prune_record_warmupdate(prstate, rootoffnum);
+		}
+
 		/*
 		 * OK, this tuple is indeed a member of the chain.
 		 */
@@ -668,6 +686,18 @@ heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum)
 	prstate->marked[offnum] = true;
 }
 
+/* Record a line pointer which is the root of a WARM chain */
+static void
+heap_prune_record_warmupdate(PruneState *prstate, OffsetNumber offnum)
+{
+	Assert(prstate->nwarm < MaxHeapTuplesPerPage);
+	if (prstate->warmchain[offnum])
+		return;
+	prstate->warm[prstate->nwarm] = offnum;
+	prstate->nwarm++;
+	prstate->warmchain[offnum] = true;
+}
+
 
 /*
  * Perform the actual page changes needed by heap_page_prune.
@@ -681,6 +711,7 @@ heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum)
 void
 heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
+						bool *warmchain,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused)
 {
@@ -697,6 +728,12 @@ heap_page_prune_execute(Buffer buffer,
 		ItemId		fromlp = PageGetItemId(page, fromoff);
 
 		ItemIdSetRedirect(fromlp, tooff);
+
+		/*
+		 * Save the WARM chain information in the line pointer itself
+		 */
+		if (warmchain[fromoff])
+			ItemIdSetHeapWarm(fromlp);
 	}
 
 	/* Update all now-dead line pointers */
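As an aside for reviewers, the dedup accumulation performed by heap_prune_record_warmupdate() can be sketched outside the patch roughly as follows (all names and sizes here are illustrative, not from the tree):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins; the real code uses MaxHeapTuplesPerPage-sized
 * arrays inside PruneState. */
#define MAX_ITEMS 32

typedef struct
{
	int		warm[MAX_ITEMS];			/* offsets of WARM chain roots */
	int		nwarm;						/* number of entries in warm[] */
	bool	warmchain[MAX_ITEMS + 1];	/* warmchain[off] set once recorded */
} SketchPruneState;

/* Record a root offset at most once per page, mirroring the early-return
 * dedup in heap_prune_record_warmupdate(). */
static void
sketch_record_warm(SketchPruneState *s, int offnum)
{
	assert(s->nwarm < MAX_ITEMS);
	if (s->warmchain[offnum])
		return;					/* already recorded for this page */
	s->warm[s->nwarm++] = offnum;
	s->warmchain[offnum] = true;
}
```

The warmchain[] bitmap keeps the warm[] array free of duplicates even when several tuples of the same chain are visited during a single prune pass.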
diff --git a/src/backend/access/index/genam.c b/src/backend/access/index/genam.c
index 65c941d..4f9fb12 100644
--- a/src/backend/access/index/genam.c
+++ b/src/backend/access/index/genam.c
@@ -99,7 +99,7 @@ RelationGetIndexScan(Relation indexRelation, int nkeys, int norderbys)
 	else
 		scan->orderByData = NULL;
 
-	scan->xs_want_itup = false; /* may be set later */
+	scan->xs_want_itup = true; /* hack for now to always get index tuple */
 
 	/*
 	 * During recovery we ignore killed tuples and don't bother to kill them
diff --git a/src/backend/access/index/indexam.c b/src/backend/access/index/indexam.c
index 54b71cb..3f9a0cf 100644
--- a/src/backend/access/index/indexam.c
+++ b/src/backend/access/index/indexam.c
@@ -71,10 +71,12 @@
 #include "access/xlog.h"
 #include "catalog/catalog.h"
 #include "catalog/index.h"
+#include "executor/executor.h"
 #include "pgstat.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
+#include "utils/datum.h"
 #include "utils/snapmgr.h"
 #include "utils/tqual.h"
 
@@ -228,6 +230,20 @@ index_beginscan(Relation heapRelation,
 	scan->heapRelation = heapRelation;
 	scan->xs_snapshot = snapshot;
 
+	/*
+	 * If the index supports recheck, make sure that index tuple is saved
+	 * during index scans.
+	 *
+	 * XXX Ideally, we should look at all indexes on the table and check if
+	 * WARM is at all supported on the base table. If WARM is not supported
+	 * then we don't need to do any recheck. RelationGetIndexAttrBitmap() does
+	 * do that and sets rd_supportswarm after looking at all indexes, but we
+	 * can't be sure if the function was called at this point and we can't call
+	 * it now for the risk of deadlock.
+	 */
+	if (indexRelation->rd_amroutine->amrecheck)
+		scan->xs_want_itup = true;
+
 	return scan;
 }
 
@@ -409,7 +425,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
 	/*
 	 * The AM's amgettuple proc finds the next index entry matching the scan
 	 * keys, and puts the TID into scan->xs_ctup.t_self.  It should also set
-	 * scan->xs_recheck and possibly scan->xs_itup, though we pay no attention
+	 * scan->xs_tuple_recheck and possibly scan->xs_itup, though we pay no attention
 	 * to those fields here.
 	 */
 	found = scan->indexRelation->rd_amroutine->amgettuple(scan, direction);
@@ -448,7 +464,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
  * dropped in a future index_getnext_tid, index_fetch_heap or index_endscan
  * call).
  *
- * Note: caller must check scan->xs_recheck, and perform rechecking of the
+ * Note: caller must check scan->xs_tuple_recheck, and perform rechecking of the
  * scan keys if required.  We do not do that here because we don't have
  * enough information to do it efficiently in the general case.
  * ----------------
@@ -475,6 +491,15 @@ index_fetch_heap(IndexScanDesc scan)
 		 */
 		if (prev_buf != scan->xs_cbuf)
 			heap_page_prune_opt(scan->heapRelation, scan->xs_cbuf);
+
+		/*
+		 * If we're not always re-checking, reset recheck for this tuple
+		 */
+		if (!scan->xs_recheck)
+			scan->xs_tuple_recheck = false;
+		else
+			scan->xs_tuple_recheck = true;
+
 	}
 
 	/* Obtain share-lock on the buffer so we can examine visibility */
@@ -484,32 +509,64 @@ index_fetch_heap(IndexScanDesc scan)
 											scan->xs_snapshot,
 											&scan->xs_ctup,
 											&all_dead,
-											!scan->xs_continue_hot);
+											!scan->xs_continue_hot,
+											&scan->xs_tuple_recheck);
 	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
 
 	if (got_heap_tuple)
 	{
+		bool res = true;
+
+		/*
+		 * OK, we got a tuple which satisfies the snapshot, but if it's part of
+		 * a WARM chain, we must do additional checks to ensure that we are
+		 * indeed returning a correct tuple. Note that if the index AM does not
+		 * implement the amrecheck method, then we don't do any additional
+		 * checks, since WARM must have been disabled on such tables.
+		 *
+		 * XXX What happens when a new index which does not support amrecheck
+		 * is added to the table? Do we need to handle this case or are CREATE
+		 * INDEX and CREATE INDEX CONCURRENTLY smart enough to handle this
+		 * issue?
+		 */
+		if (scan->xs_tuple_recheck &&
+				scan->xs_itup &&
+				scan->indexRelation->rd_amroutine->amrecheck)
+		{
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_SHARE);
+			res = scan->indexRelation->rd_amroutine->amrecheck(
+						scan->indexRelation,
+						scan->xs_itup,
+						scan->heapRelation,
+						&scan->xs_ctup);
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+		}
+
 		/*
 		 * Only in a non-MVCC snapshot can more than one member of the HOT
 		 * chain be visible.
 		 */
 		scan->xs_continue_hot = !IsMVCCSnapshot(scan->xs_snapshot);
 		pgstat_count_heap_fetch(scan->indexRelation);
-		return &scan->xs_ctup;
-	}
 
-	/* We've reached the end of the HOT chain. */
-	scan->xs_continue_hot = false;
+		if (res)
+			return &scan->xs_ctup;
+	}
+	else
+	{
+		/* We've reached the end of the HOT chain. */
+		scan->xs_continue_hot = false;
 
-	/*
-	 * If we scanned a whole HOT chain and found only dead tuples, tell index
-	 * AM to kill its entry for that TID (this will take effect in the next
-	 * amgettuple call, in index_getnext_tid).  We do not do this when in
-	 * recovery because it may violate MVCC to do so.  See comments in
-	 * RelationGetIndexScan().
-	 */
-	if (!scan->xactStartedInRecovery)
-		scan->kill_prior_tuple = all_dead;
+		/*
+		 * If we scanned a whole HOT chain and found only dead tuples, tell index
+		 * AM to kill its entry for that TID (this will take effect in the next
+		 * amgettuple call, in index_getnext_tid).  We do not do this when in
+		 * recovery because it may violate MVCC to do so.  See comments in
+		 * RelationGetIndexScan().
+		 */
+		if (!scan->xactStartedInRecovery)
+			scan->kill_prior_tuple = all_dead;
+	}
 
 	return NULL;
 }
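To summarize the control flow added to index_fetch_heap() above: a visible tuple from a WARM chain is returned only if it also passes the index AM's recheck; otherwise the scan keeps walking the chain, which is what prevents the "Duplicate Scan" problem from surfacing as wrong results. A toy model of that filter (plain ints standing in for Datums and TIDs, every name invented for the sketch):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the recheck filter: out[] receives the chain positions of
 * tuples that are both visible and still match the index key that led us
 * here.  With WARM, a chain reached via one index pointer may contain
 * tuples whose recomputed key no longer matches that pointer. */
static int
sketch_fetch_matching(const int *chain_keys, const bool *visible,
					  int chain_len, int index_key, int *out)
{
	int		nfound = 0;
	int		i;

	for (i = 0; i < chain_len; i++)
	{
		if (!visible[i])
			continue;			/* fails the snapshot check */
		if (chain_keys[i] != index_key)
			continue;			/* fails "amrecheck": wrong index pointer */
		out[nfound++] = i;
	}
	return nfound;
}
```

Each index pointer into the chain thus yields at most the tuples carrying its own key, so a tuple reachable via two pointers is still returned once per matching key, not twice.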
diff --git a/src/backend/access/nbtree/nbtinsert.c b/src/backend/access/nbtree/nbtinsert.c
index ef69290..e0afffd 100644
--- a/src/backend/access/nbtree/nbtinsert.c
+++ b/src/backend/access/nbtree/nbtinsert.c
@@ -19,11 +19,14 @@
 #include "access/nbtree.h"
 #include "access/transam.h"
 #include "access/xloginsert.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
 #include "miscadmin.h"
+#include "nodes/execnodes.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
 #include "utils/tqual.h"
-
+#include "utils/datum.h"
 
 typedef struct
 {
@@ -249,6 +252,9 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 	BTPageOpaque opaque;
 	Buffer		nbuf = InvalidBuffer;
 	bool		found = false;
+	Buffer		buffer;
+	HeapTupleData	heapTuple;
+	bool		recheck = false;
 
 	/* Assume unique until we find a duplicate */
 	*is_unique = true;
@@ -308,6 +314,8 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				curitup = (IndexTuple) PageGetItem(page, curitemid);
 				htid = curitup->t_tid;
 
+				recheck = false;
+
 				/*
 				 * If we are doing a recheck, we expect to find the tuple we
 				 * are rechecking.  It's not a duplicate, but we have to keep
@@ -325,112 +333,153 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				 * have just a single index entry for the entire chain.
 				 */
 				else if (heap_hot_search(&htid, heapRel, &SnapshotDirty,
-										 &all_dead))
+							&all_dead, &recheck, &buffer,
+							&heapTuple))
 				{
 					TransactionId xwait;
+					bool result = true;
 
 					/*
-					 * It is a duplicate. If we are only doing a partial
-					 * check, then don't bother checking if the tuple is being
-					 * updated in another transaction. Just return the fact
-					 * that it is a potential conflict and leave the full
-					 * check till later.
+					 * If the tuple was WARM updated, we may again see our own
+					 * tuple. Since WARM updates don't create new index
+					 * entries, our own tuple is only reachable via the old
+					 * index pointer.
 					 */
-					if (checkUnique == UNIQUE_CHECK_PARTIAL)
+					if (checkUnique == UNIQUE_CHECK_EXISTING &&
+							ItemPointerCompare(&htid, &itup->t_tid) == 0)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						*is_unique = false;
-						return InvalidTransactionId;
+						found = true;
+						result = false;
+						if (recheck)
+							UnlockReleaseBuffer(buffer);
 					}
-
-					/*
-					 * If this tuple is being updated by other transaction
-					 * then we have to wait for its commit/abort.
-					 */
-					xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
-						SnapshotDirty.xmin : SnapshotDirty.xmax;
-
-					if (TransactionIdIsValid(xwait))
+					else if (recheck)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						/* Tell _bt_doinsert to wait... */
-						*speculativeToken = SnapshotDirty.speculativeToken;
-						return xwait;
+						result = btrecheck(rel, curitup, heapRel, &heapTuple);
+						UnlockReleaseBuffer(buffer);
 					}
 
-					/*
-					 * Otherwise we have a definite conflict.  But before
-					 * complaining, look to see if the tuple we want to insert
-					 * is itself now committed dead --- if so, don't complain.
-					 * This is a waste of time in normal scenarios but we must
-					 * do it to support CREATE INDEX CONCURRENTLY.
-					 *
-					 * We must follow HOT-chains here because during
-					 * concurrent index build, we insert the root TID though
-					 * the actual tuple may be somewhere in the HOT-chain.
-					 * While following the chain we might not stop at the
-					 * exact tuple which triggered the insert, but that's OK
-					 * because if we find a live tuple anywhere in this chain,
-					 * we have a unique key conflict.  The other live tuple is
-					 * not part of this chain because it had a different index
-					 * entry.
-					 */
-					htid = itup->t_tid;
-					if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL))
-					{
-						/* Normal case --- it's still live */
-					}
-					else
+					if (result)
 					{
 						/*
-						 * It's been deleted, so no error, and no need to
-						 * continue searching
+						 * It is a duplicate. If we are only doing a partial
+						 * check, then don't bother checking if the tuple is being
+						 * updated in another transaction. Just return the fact
+						 * that it is a potential conflict and leave the full
+						 * check till later.
 						 */
-						break;
-					}
+						if (checkUnique == UNIQUE_CHECK_PARTIAL)
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							*is_unique = false;
+							return InvalidTransactionId;
+						}
 
-					/*
-					 * Check for a conflict-in as we would if we were going to
-					 * write to this page.  We aren't actually going to write,
-					 * but we want a chance to report SSI conflicts that would
-					 * otherwise be masked by this unique constraint
-					 * violation.
-					 */
-					CheckForSerializableConflictIn(rel, NULL, buf);
+						/*
+						 * If this tuple is being updated by other transaction
+						 * then we have to wait for its commit/abort.
+						 */
+						xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
+							SnapshotDirty.xmin : SnapshotDirty.xmax;
+
+						if (TransactionIdIsValid(xwait))
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							/* Tell _bt_doinsert to wait... */
+							*speculativeToken = SnapshotDirty.speculativeToken;
+							return xwait;
+						}
 
-					/*
-					 * This is a definite conflict.  Break the tuple down into
-					 * datums and report the error.  But first, make sure we
-					 * release the buffer locks we're holding ---
-					 * BuildIndexValueDescription could make catalog accesses,
-					 * which in the worst case might touch this same index and
-					 * cause deadlocks.
-					 */
-					if (nbuf != InvalidBuffer)
-						_bt_relbuf(rel, nbuf);
-					_bt_relbuf(rel, buf);
+						/*
+						 * Otherwise we have a definite conflict.  But before
+						 * complaining, look to see if the tuple we want to insert
+						 * is itself now committed dead --- if so, don't complain.
+						 * This is a waste of time in normal scenarios but we must
+						 * do it to support CREATE INDEX CONCURRENTLY.
+						 *
+						 * We must follow HOT-chains here because during
+						 * concurrent index build, we insert the root TID though
+						 * the actual tuple may be somewhere in the HOT-chain.
+						 * While following the chain we might not stop at the
+						 * exact tuple which triggered the insert, but that's OK
+						 * because if we find a live tuple anywhere in this chain,
+						 * we have a unique key conflict.  The other live tuple is
+						 * not part of this chain because it had a different index
+						 * entry.
+						 */
+						recheck = false;
+						ItemPointerCopy(&itup->t_tid, &htid);
+						if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL,
+									&recheck, &buffer, &heapTuple))
+						{
+							bool result = true;
+							if (recheck)
+							{
+								/*
+								 * Recheck if the tuple actually satisfies the
+								 * index key. Otherwise, we might be following
+								 * a wrong index pointer and must not consider
+								 * this tuple.
+								 */
+								result = btrecheck(rel, itup, heapRel, &heapTuple);
+								UnlockReleaseBuffer(buffer);
+							}
+							if (!result)
+								break;
+							/* Normal case --- it's still live */
+						}
+						else
+						{
+							/*
+							 * It's been deleted, so no error, and no need to
+							 * continue searching
+							 */
+							break;
+						}
 
-					{
-						Datum		values[INDEX_MAX_KEYS];
-						bool		isnull[INDEX_MAX_KEYS];
-						char	   *key_desc;
-
-						index_deform_tuple(itup, RelationGetDescr(rel),
-										   values, isnull);
-
-						key_desc = BuildIndexValueDescription(rel, values,
-															  isnull);
-
-						ereport(ERROR,
-								(errcode(ERRCODE_UNIQUE_VIOLATION),
-								 errmsg("duplicate key value violates unique constraint \"%s\"",
-										RelationGetRelationName(rel)),
-							   key_desc ? errdetail("Key %s already exists.",
-													key_desc) : 0,
-								 errtableconstraint(heapRel,
-											 RelationGetRelationName(rel))));
+						/*
+						 * Check for a conflict-in as we would if we were going to
+						 * write to this page.  We aren't actually going to write,
+						 * but we want a chance to report SSI conflicts that would
+						 * otherwise be masked by this unique constraint
+						 * violation.
+						 */
+						CheckForSerializableConflictIn(rel, NULL, buf);
+
+						/*
+						 * This is a definite conflict.  Break the tuple down into
+						 * datums and report the error.  But first, make sure we
+						 * release the buffer locks we're holding ---
+						 * BuildIndexValueDescription could make catalog accesses,
+						 * which in the worst case might touch this same index and
+						 * cause deadlocks.
+						 */
+						if (nbuf != InvalidBuffer)
+							_bt_relbuf(rel, nbuf);
+						_bt_relbuf(rel, buf);
+
+						{
+							Datum		values[INDEX_MAX_KEYS];
+							bool		isnull[INDEX_MAX_KEYS];
+							char	   *key_desc;
+
+							index_deform_tuple(itup, RelationGetDescr(rel),
+									values, isnull);
+
+							key_desc = BuildIndexValueDescription(rel, values,
+									isnull);
+
+							ereport(ERROR,
+									(errcode(ERRCODE_UNIQUE_VIOLATION),
+									 errmsg("duplicate key value violates unique constraint \"%s\"",
+										 RelationGetRelationName(rel)),
+									 key_desc ? errdetail("Key %s already exists.",
+										 key_desc) : 0,
+									 errtableconstraint(heapRel,
+										 RelationGetRelationName(rel))));
+						}
 					}
 				}
 				else if (all_dead)
diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c
index 128744c..6b1236a 100644
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -23,6 +23,7 @@
 #include "access/xlog.h"
 #include "catalog/index.h"
 #include "commands/vacuum.h"
+#include "executor/nodeIndexscan.h"
 #include "storage/indexfsm.h"
 #include "storage/ipc.h"
 #include "storage/lmgr.h"
@@ -117,6 +118,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = btendscan;
 	amroutine->ammarkpos = btmarkpos;
 	amroutine->amrestrpos = btrestrpos;
+	amroutine->amrecheck = btrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -292,8 +294,9 @@ btgettuple(IndexScanDesc scan, ScanDirection dir)
 	BTScanOpaque so = (BTScanOpaque) scan->opaque;
 	bool		res;
 
-	/* btree indexes are never lossy */
-	scan->xs_recheck = false;
+	/* btree indexes are never lossy, except for WARM tuples */
+	scan->xs_recheck = indexscan_recheck;
+	scan->xs_tuple_recheck = indexscan_recheck;
 
 	/*
 	 * If we have any array keys, initialize them during first call for a
diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c
index 063c988..c9c0501 100644
--- a/src/backend/access/nbtree/nbtutils.c
+++ b/src/backend/access/nbtree/nbtutils.c
@@ -20,11 +20,15 @@
 #include "access/nbtree.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
 #include "miscadmin.h"
+#include "nodes/execnodes.h"
 #include "utils/array.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 
 typedef struct BTSortArrayContext
@@ -2065,3 +2069,103 @@ btproperty(Oid index_oid, int attno,
 			return false;		/* punt to generic code */
 	}
 }
+
+/*
+ * Check if the index tuple's key matches the one computed from the given heap
+ * tuple's attributes
+ */
+bool
+btrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	IndexInfo  *indexInfo;
+	EState	   *estate;
+	ExprContext *econtext;
+	TupleTableSlot *slot;
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int			natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	/* Get IndexInfo for this index */
+	indexInfo = BuildIndexInfo(indexRel);
+
+	/*
+	 * The heap tuple must be put into a slot for FormIndexDatum.
+	 */
+	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
+
+	ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
+
+	/*
+	 * Typically the index won't have expressions, but if it does we need an
+	 * EState to evaluate them.  We need it for exclusion constraints too,
+	 * even if they are just on simple columns.
+	 */
+	if (indexInfo->ii_Expressions != NIL ||
+			indexInfo->ii_ExclusionOps != NULL)
+	{
+		estate = CreateExecutorState();
+		econtext = GetPerTupleExprContext(estate);
+		econtext->ecxt_scantuple = slot;
+	}
+	else
+		estate = NULL;
+
+	/*
+	 * Form the index values and isnull flags for the index entry that we need
+	 * to check.
+	 *
+	 * Note: if the index uses functions that are not as immutable as they are
+	 * supposed to be, this could produce an index tuple different from the
+	 * original.  The index AM can catch such errors by verifying that it
+	 * finds a matching index entry with the tuple's TID.  For exclusion
+	 * constraints we check this in check_exclusion_constraint().
+	 */
+	FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL, then they are equal
+		 */
+		if (isnull[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If just one is NULL, then they are not equal
+		 */
+		if (isnull[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now just do a raw memory comparison. If the index tuple was formed
+		 * using this heap tuple, the computed index values must match
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	if (estate != NULL)
+		FreeExecutorState(estate);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	return equal;
+}
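The NULL handling in btrecheck()'s comparison loop above is worth spelling out; a minimal standalone model (ints in place of Datums, datumIsEqual() reduced to `==`, names invented for the sketch):

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal model of btrecheck()'s per-attribute comparison: both NULL means
 * equal, exactly one NULL means unequal, otherwise compare the values. */
static bool
sketch_keys_match(const int *heap_vals, const bool *heap_nulls,
				  const int *index_vals, const bool *index_nulls, int natts)
{
	int		i;

	for (i = 0; i < natts; i++)
	{
		if (heap_nulls[i] && index_nulls[i])
			continue;			/* both NULL: treated as equal */
		if (heap_nulls[i] || index_nulls[i])
			return false;		/* only one is NULL: not equal */
		if (heap_vals[i] != index_vals[i])
			return false;		/* stands in for datumIsEqual() */
	}
	return true;
}
```

A false return here means the index pointer we followed belongs to an older key of the chain, so the caller must skip the tuple rather than return it.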
diff --git a/src/backend/access/spgist/spgutils.c b/src/backend/access/spgist/spgutils.c
index d570ae5..813b5c3 100644
--- a/src/backend/access/spgist/spgutils.c
+++ b/src/backend/access/spgist/spgutils.c
@@ -67,6 +67,7 @@ spghandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = spgendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 08b646d..e76e928 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -54,6 +54,7 @@
 #include "nodes/makefuncs.h"
 #include "nodes/nodeFuncs.h"
 #include "optimizer/clauses.h"
+#include "optimizer/var.h"
 #include "parser/parser.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
@@ -1691,6 +1692,20 @@ BuildIndexInfo(Relation index)
 	ii->ii_Concurrent = false;
 	ii->ii_BrokenHotChain = false;
 
+	/* build a bitmap of all table attributes referred by this index */
+	for (i = 0; i < ii->ii_NumIndexAttrs; i++)
+	{
+		AttrNumber attr = ii->ii_KeyAttrNumbers[i];
+		ii->ii_indxattrs = bms_add_member(ii->ii_indxattrs, attr -
+				FirstLowInvalidHeapAttributeNumber);
+	}
+
+	/* Collect all attributes used in expressions, too */
+	pull_varattnos((Node *) ii->ii_Expressions, 1, &ii->ii_indxattrs);
+
+	/* Collect all attributes in the index predicate, too */
+	pull_varattnos((Node *) ii->ii_Predicate, 1, &ii->ii_indxattrs);
+
 	return ii;
 }
 
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index e011af1..97672a9 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -472,6 +472,7 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_tuples_deleted(C.oid) AS n_tup_del,
             pg_stat_get_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_tuples_warm_updated(C.oid) AS n_tup_warm_upd,
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
@@ -502,7 +503,8 @@ CREATE VIEW pg_stat_xact_all_tables AS
             pg_stat_get_xact_tuples_inserted(C.oid) AS n_tup_ins,
             pg_stat_get_xact_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_xact_tuples_deleted(C.oid) AS n_tup_del,
-            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd
+            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_xact_tuples_warm_updated(C.oid) AS n_tup_warm_upd
     FROM pg_class C LEFT JOIN
          pg_index I ON C.oid = I.indrelid
          LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
diff --git a/src/backend/commands/constraint.c b/src/backend/commands/constraint.c
index 26f9114..997c8f5 100644
--- a/src/backend/commands/constraint.c
+++ b/src/backend/commands/constraint.c
@@ -40,6 +40,7 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	TriggerData *trigdata = (TriggerData *) fcinfo->context;
 	const char *funcname = "unique_key_recheck";
 	HeapTuple	new_row;
+	HeapTupleData heapTuple;
 	ItemPointerData tmptid;
 	Relation	indexRel;
 	IndexInfo  *indexInfo;
@@ -102,7 +103,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	 * removed.
 	 */
 	tmptid = new_row->t_self;
-	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL))
+	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL,
+				NULL, NULL, &heapTuple))
 	{
 		/*
 		 * All rows in the HOT chain are dead, so skip the check.
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index ec5d6f1..5e57cc9 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -2551,6 +2551,8 @@ CopyFrom(CopyState cstate)
 					if (resultRelInfo->ri_NumIndices > 0)
 						recheckIndexes = ExecInsertIndexTuples(slot,
 															&(tuple->t_self),
+															&(tuple->t_self),
+															NULL,
 															   estate,
 															   false,
 															   NULL,
@@ -2669,6 +2671,7 @@ CopyFromInsertBatch(CopyState cstate, EState *estate, CommandId mycid,
 			ExecStoreTuple(bufferedTuples[i], myslot, InvalidBuffer, false);
 			recheckIndexes =
 				ExecInsertIndexTuples(myslot, &(bufferedTuples[i]->t_self),
+									  &(bufferedTuples[i]->t_self), NULL,
 									  estate, false, NULL, NIL);
 			ExecARInsertTriggers(estate, resultRelInfo,
 								 bufferedTuples[i],
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index b5fb325..cd9b9a7 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -1468,6 +1468,7 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 
 		recptr = log_heap_clean(onerel, buffer,
 								NULL, 0, NULL, 0,
+								NULL, 0,
 								unused, uncnt,
 								vacrelstats->latestRemovedXid);
 		PageSetLSN(page, recptr);
@@ -2128,6 +2129,22 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
 						break;
 					}
 
+					/*
+					 * If this tuple was ever WARM updated or is a WARM tuple,
+					 * there could be multiple index entries pointing to the
+					 * root of this chain. We can't do index-only scans for
+					 * such tuples without rechecking the index keys, so mark
+					 * the page as !all_visible.
+					 *
+					 * XXX Should we look at the root line pointer and check if
+					 * the WARM flag is set there, or is checking the tuples in
+					 * the chain good enough?
+					 */
+					if (HeapTupleHeaderIsHeapWarmTuple(tuple.t_data))
+					{
+						all_visible = false;
+					}
+
 					/* Track newest xmin on page. */
 					if (TransactionIdFollows(xmin, *visibility_cutoff_xid))
 						*visibility_cutoff_xid = xmin;
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 882ce18..5fe6182 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -270,6 +270,8 @@ ExecCloseIndices(ResultRelInfo *resultRelInfo)
 List *
 ExecInsertIndexTuples(TupleTableSlot *slot,
 					  ItemPointer tupleid,
+					  ItemPointer root_tid,
+					  Bitmapset *modified_attrs,
 					  EState *estate,
 					  bool noDupErr,
 					  bool *specConflict,
@@ -324,6 +326,17 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 		if (!indexInfo->ii_ReadyForInserts)
 			continue;
 
+		/*
+		 * If modified_attrs is set, we only insert index entries for those
+		 * indexes whose columns have changed. All other indexes can use their
+		 * existing index pointers to look up the new tuple.
+		 */
+		if (modified_attrs)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
 		/* Check for partial index */
 		if (indexInfo->ii_Predicate != NIL)
 		{
@@ -389,7 +402,7 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 			index_insert(indexRelation, /* index relation */
 						 values,	/* array of index Datums */
 						 isnull,	/* null flags */
-						 tupleid,		/* tid of heap tuple */
+						 root_tid,		/* tid of heap or root tuple */
 						 heapRelation,	/* heap relation */
 						 checkUnique);	/* type of uniqueness check to do */
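The skip logic added to ExecInsertIndexTuples() boils down to a bitmap-overlap test; sketched standalone with a plain integer bitmask standing in for Bitmapset (all names here are made up for the sketch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t AttrMask;		/* toy stand-in for Bitmapset * */

/* A regular (non-WARM) update passes modified_attrs == 0 and inserts into
 * every index; a WARM update inserts only into indexes whose attribute set
 * overlaps the modified columns, mirroring the bms_overlap() test above. */
static bool
sketch_index_needs_entry(AttrMask modified_attrs, AttrMask index_attrs)
{
	if (modified_attrs == 0)
		return true;			/* NULL modified_attrs: insert everywhere */
	return (modified_attrs & index_attrs) != 0;
}
```

This is where the write-amplification saving comes from: indexes whose keys did not change keep their existing pointers to the chain root and receive no new entry.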
 
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index 449aacb..ff77349 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -37,6 +37,7 @@
 
 #include "access/relscan.h"
 #include "access/transam.h"
+#include "access/valid.h"
 #include "executor/execdebug.h"
 #include "executor/nodeBitmapHeapscan.h"
 #include "pgstat.h"
@@ -362,11 +363,23 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 			OffsetNumber offnum = tbmres->offsets[curslot];
 			ItemPointerData tid;
 			HeapTupleData heapTuple;
+			bool recheck = false;
 
 			ItemPointerSet(&tid, page, offnum);
 			if (heap_hot_search_buffer(&tid, scan->rs_rd, buffer, snapshot,
-									   &heapTuple, NULL, true))
-				scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+									   &heapTuple, NULL, true, &recheck))
+			{
+				bool valid = true;
+
+				if (scan->rs_key)
+					HeapKeyTest(&heapTuple, RelationGetDescr(scan->rs_rd),
+							scan->rs_nkeys, scan->rs_key, valid);
+				if (valid)
+					scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+
+				if (recheck)
+					tbmres->recheck = true;
+			}
 		}
 	}
 	else
diff --git a/src/backend/executor/nodeIndexonlyscan.c b/src/backend/executor/nodeIndexonlyscan.c
index 4f6f91c..49bda34 100644
--- a/src/backend/executor/nodeIndexonlyscan.c
+++ b/src/backend/executor/nodeIndexonlyscan.c
@@ -141,6 +141,26 @@ IndexOnlyNext(IndexOnlyScanState *node)
 			 * but it's not clear whether it's a win to do so.  The next index
 			 * entry might require a visit to the same heap page.
 			 */
+
+			/*
+			 * If the index was lossy or the tuple was WARM, we have to recheck
+			 * the index quals using the fetched tuple.
+			 */
+			if (scandesc->xs_tuple_recheck)
+			{
+				ExecStoreTuple(tuple,	/* tuple to store */
+						slot,	/* slot to store in */
+						scandesc->xs_cbuf,		/* buffer containing tuple */
+						false);	/* don't pfree */
+				econtext->ecxt_scantuple = slot;
+				ResetExprContext(econtext);
+				if (!ExecQual(node->indexqual, econtext, false))
+				{
+					/* Fails recheck, so drop it and loop back for another */
+					InstrCountFiltered2(node, 1);
+					continue;
+				}
+			}
 		}
 
 		/*
diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c
index 3143bd9..daa0826 100644
--- a/src/backend/executor/nodeIndexscan.c
+++ b/src/backend/executor/nodeIndexscan.c
@@ -39,6 +39,8 @@
 #include "utils/memutils.h"
 #include "utils/rel.h"
 
+bool indexscan_recheck = false;
+
 /*
  * When an ordering operator is used, tuples fetched from the index that
  * need to be reordered are queued in a pairing heap, as ReorderTuples.
@@ -115,10 +117,10 @@ IndexNext(IndexScanState *node)
 					   false);	/* don't pfree */
 
 		/*
-		 * If the index was lossy, we have to recheck the index quals using
-		 * the fetched tuple.
+		 * If the index was lossy or the tuple was WARM, we have to recheck
+		 * the index quals using the fetched tuple.
 		 */
-		if (scandesc->xs_recheck)
+		if (scandesc->xs_recheck || scandesc->xs_tuple_recheck)
 		{
 			econtext->ecxt_scantuple = slot;
 			ResetExprContext(econtext);
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index efb0c5e..3183db4 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -448,6 +448,7 @@ ExecInsert(ModifyTableState *mtstate,
 
 			/* insert index entries for tuple */
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												 &(tuple->t_self), NULL,
 												 estate, true, &specConflict,
 												   arbiterIndexes);
 
@@ -494,6 +495,7 @@ ExecInsert(ModifyTableState *mtstate,
 			/* insert index entries for tuple */
 			if (resultRelInfo->ri_NumIndices > 0)
 				recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+													   &(tuple->t_self), NULL,
 													   estate, false, NULL,
 													   arbiterIndexes);
 		}
@@ -824,6 +826,9 @@ ExecUpdate(ItemPointer tupleid,
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
 	List	   *recheckIndexes = NIL;
+	Bitmapset  *modified_attrs = NULL;
+	ItemPointerData	root_tid;
+	bool		warm_update;
 
 	/*
 	 * abort the operation if not running transactions
@@ -938,7 +943,7 @@ lreplace:;
 							 estate->es_output_cid,
 							 estate->es_crosscheck_snapshot,
 							 true /* wait for commit */ ,
-							 &hufd, &lockmode);
+							 &hufd, &lockmode, &modified_attrs, &warm_update);
 		switch (result)
 		{
 			case HeapTupleSelfUpdated:
@@ -1025,10 +1030,28 @@ lreplace:;
 		 * the t_self field.
 		 *
 		 * If it's a HOT update, we mustn't insert new index entries.
+		 *
+		 * If it's a WARM update, then we must insert new entries with TID
+		 * pointing to the root of the WARM chain.
 		 */
-		if (resultRelInfo->ri_NumIndices > 0 && !HeapTupleIsHeapOnly(tuple))
+		if (resultRelInfo->ri_NumIndices > 0 &&
+			(!HeapTupleIsHeapOnly(tuple) || warm_update))
+		{
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self, &root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL, NIL);
+		}
 	}
 
 	if (canSetTag)
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index c7584cb..d89d37b 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -1823,7 +1823,7 @@ pgstat_count_heap_insert(Relation rel, int n)
  * pgstat_count_heap_update - count a tuple update
  */
 void
-pgstat_count_heap_update(Relation rel, bool hot)
+pgstat_count_heap_update(Relation rel, bool hot, bool warm)
 {
 	PgStat_TableStatus *pgstat_info = rel->pgstat_info;
 
@@ -1841,6 +1841,8 @@ pgstat_count_heap_update(Relation rel, bool hot)
 		/* t_tuples_hot_updated is nontransactional, so just advance it */
 		if (hot)
 			pgstat_info->t_counts.t_tuples_hot_updated++;
+		else if (warm)
+			pgstat_info->t_counts.t_tuples_warm_updated++;
 	}
 }
 
@@ -4083,6 +4085,7 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->tuples_updated = 0;
 		result->tuples_deleted = 0;
 		result->tuples_hot_updated = 0;
+		result->tuples_warm_updated = 0;
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
 		result->changes_since_analyze = 0;
@@ -5192,6 +5195,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated = tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted = tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated = tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated = tabmsg->t_counts.t_tuples_warm_updated;
 			tabentry->n_live_tuples = tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples = tabmsg->t_counts.t_delta_dead_tuples;
 			tabentry->changes_since_analyze = tabmsg->t_counts.t_changed_tuples;
@@ -5219,6 +5223,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated += tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted += tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated += tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated += tabmsg->t_counts.t_tuples_warm_updated;
 			/* If table was truncated, first reset the live/dead counters */
 			if (tabmsg->t_counts.t_truncated)
 			{
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 2d3cf9e..ef4f5b4 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -37,6 +37,7 @@ extern Datum pg_stat_get_tuples_inserted(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_tuples_updated(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_tuples_deleted(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_tuples_hot_updated(PG_FUNCTION_ARGS);
+extern Datum pg_stat_get_tuples_warm_updated(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_live_tuples(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_dead_tuples(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_mod_since_analyze(PG_FUNCTION_ARGS);
@@ -115,6 +116,7 @@ extern Datum pg_stat_get_xact_tuples_inserted(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_xact_tuples_updated(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_xact_tuples_deleted(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_xact_tuples_hot_updated(PG_FUNCTION_ARGS);
+extern Datum pg_stat_get_xact_tuples_warm_updated(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_xact_blocks_fetched(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_xact_blocks_hit(PG_FUNCTION_ARGS);
 
@@ -245,6 +247,22 @@ pg_stat_get_tuples_hot_updated(PG_FUNCTION_ARGS)
 
 
 Datum
+pg_stat_get_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_StatTabEntry *tabentry;
+
+	if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+
+Datum
 pg_stat_get_live_tuples(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
@@ -1744,6 +1762,21 @@ pg_stat_get_xact_tuples_hot_updated(PG_FUNCTION_ARGS)
 }
 
 Datum
+pg_stat_get_xact_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_TableStatus *tabentry;
+
+	if ((tabentry = find_tabstat_entry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->t_counts.t_tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+Datum
 pg_stat_get_xact_blocks_fetched(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 79e0b1f..c6ef4e2 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -2030,6 +2030,7 @@ RelationDestroyRelation(Relation relation, bool remember_tupdesc)
 	list_free_deep(relation->rd_fkeylist);
 	list_free(relation->rd_indexlist);
 	bms_free(relation->rd_indexattr);
+	bms_free(relation->rd_exprindexattr);
 	bms_free(relation->rd_keyattr);
 	bms_free(relation->rd_idattr);
 	if (relation->rd_options)
@@ -4373,12 +4374,15 @@ Bitmapset *
 RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 {
 	Bitmapset  *indexattrs;		/* indexed columns */
+	Bitmapset  *exprindexattrs;	/* indexed columns in expression/predicate
+									 indexes */
 	Bitmapset  *uindexattrs;	/* columns in unique indexes */
 	Bitmapset  *idindexattrs;	/* columns in the replica identity */
 	List	   *indexoidlist;
 	Oid			relreplindex;
 	ListCell   *l;
 	MemoryContext oldcxt;
+	bool		supportswarm = true;	/* true if the table can be WARM updated */
 
 	/* Quick exit if we already computed the result. */
 	if (relation->rd_indexattr != NULL)
@@ -4391,6 +4395,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				return bms_copy(relation->rd_keyattr);
 			case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 				return bms_copy(relation->rd_idattr);
+			case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+				return bms_copy(relation->rd_exprindexattr);
 			default:
 				elog(ERROR, "unknown attrKind %u", attrKind);
 		}
@@ -4429,6 +4435,7 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 	 * won't be returned at all by RelationGetIndexList.
 	 */
 	indexattrs = NULL;
+	exprindexattrs = NULL;
 	uindexattrs = NULL;
 	idindexattrs = NULL;
 	foreach(l, indexoidlist)
@@ -4474,19 +4481,38 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 		}
 
 		/* Collect all attributes used in expressions, too */
-		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &exprindexattrs);
 
 		/* Collect all attributes in the index predicate, too */
-		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &exprindexattrs);
+
+		/*
+		 * indexattrs should include attributes referenced in index expressions
+		 * and predicates too
+		 */
+		indexattrs = bms_add_members(indexattrs, exprindexattrs);
+
+		/*
+		 * Check if the index AM defines an amrecheck method. If it does not,
+		 * the index cannot support WARM updates, so completely disable WARM
+		 * updates on such tables.
+		 */
+		if (!indexDesc->rd_amroutine->amrecheck)
+			supportswarm = false;
 
 		index_close(indexDesc, AccessShareLock);
 	}
 
 	list_free(indexoidlist);
 
+	/* Remember if the table can do WARM updates */
+	relation->rd_supportswarm = supportswarm;
+
 	/* Don't leak the old values of these bitmaps, if any */
 	bms_free(relation->rd_indexattr);
 	relation->rd_indexattr = NULL;
+	bms_free(relation->rd_exprindexattr);
+	relation->rd_exprindexattr = NULL;
 	bms_free(relation->rd_keyattr);
 	relation->rd_keyattr = NULL;
 	bms_free(relation->rd_idattr);
@@ -4502,7 +4528,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 	oldcxt = MemoryContextSwitchTo(CacheMemoryContext);
 	relation->rd_keyattr = bms_copy(uindexattrs);
 	relation->rd_idattr = bms_copy(idindexattrs);
-	relation->rd_indexattr = bms_copy(indexattrs);
+	relation->rd_exprindexattr = bms_copy(exprindexattrs);
+	relation->rd_indexattr = bms_copy(bms_union(indexattrs, exprindexattrs));
 	MemoryContextSwitchTo(oldcxt);
 
 	/* We return our original working copy for caller to play with */
@@ -4514,6 +4541,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 			return uindexattrs;
 		case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 			return idindexattrs;
+		case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+			return exprindexattrs;
 		default:
 			elog(ERROR, "unknown attrKind %u", attrKind);
 			return NULL;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 28ebcb6..2241ffb 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -112,6 +112,7 @@ extern char *default_tablespace;
 extern char *temp_tablespaces;
 extern bool ignore_checksum_failure;
 extern bool synchronize_seqscans;
+extern bool indexscan_recheck;
 
 #ifdef TRACE_SYNCSCAN
 extern bool trace_syncscan;
@@ -1288,6 +1289,16 @@ static struct config_bool ConfigureNamesBool[] =
 		NULL, NULL, NULL
 	},
 	{
+		{"indexscan_recheck", PGC_USERSET, DEVELOPER_OPTIONS,
+			gettext_noop("Recheck heap rows returned from an index scan."),
+			NULL,
+			GUC_NOT_IN_SAMPLE
+		},
+		&indexscan_recheck,
+		false,
+		NULL, NULL, NULL
+	},
+	{
 		{"debug_deadlocks", PGC_SUSET, DEVELOPER_OPTIONS,
 			gettext_noop("Dumps information about all current locks when a deadlock timeout occurs."),
 			NULL,
diff --git a/src/include/access/amapi.h b/src/include/access/amapi.h
index 1036cca..37eaf76 100644
--- a/src/include/access/amapi.h
+++ b/src/include/access/amapi.h
@@ -13,6 +13,7 @@
 #define AMAPI_H
 
 #include "access/genam.h"
+#include "access/itup.h"
 
 /*
  * We don't wish to include planner header files here, since most of an index
@@ -137,6 +138,9 @@ typedef void (*ammarkpos_function) (IndexScanDesc scan);
 /* restore marked scan position */
 typedef void (*amrestrpos_function) (IndexScanDesc scan);
 
+/* recheck index tuple and heap tuple match */
+typedef bool (*amrecheck_function) (Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * API struct for an index AM.  Note this must be stored in a single palloc'd
@@ -196,6 +200,7 @@ typedef struct IndexAmRoutine
 	amendscan_function amendscan;
 	ammarkpos_function ammarkpos;		/* can be NULL */
 	amrestrpos_function amrestrpos;		/* can be NULL */
+	amrecheck_function amrecheck;		/* can be NULL */
 } IndexAmRoutine;
 
 
diff --git a/src/include/access/hash.h b/src/include/access/hash.h
index 6dfc41f..f1c73a0 100644
--- a/src/include/access/hash.h
+++ b/src/include/access/hash.h
@@ -389,4 +389,8 @@ extern void hashbucketcleanup(Relation rel, Bucket cur_bucket,
 				  bool bucket_has_garbage,
 				  IndexBulkDeleteCallback callback, void *callback_state);
 
+/* hash.c */
+extern bool hashrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+
 #endif   /* HASH_H */
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 81f7982..04ffd67 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -137,9 +137,10 @@ extern bool heap_fetch(Relation relation, Snapshot snapshot,
 		   Relation stats_relation);
 extern bool heap_hot_search_buffer(ItemPointer tid, Relation relation,
 					   Buffer buffer, Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call);
+					   bool *all_dead, bool first_call, bool *recheck);
 extern bool heap_hot_search(ItemPointer tid, Relation relation,
-				Snapshot snapshot, bool *all_dead);
+				Snapshot snapshot, bool *all_dead,
+				bool *recheck, Buffer *buffer, HeapTuple heapTuple);
 
 extern void heap_get_latest_tid(Relation relation, Snapshot snapshot,
 					ItemPointer tid);
@@ -160,7 +161,8 @@ extern void heap_abort_speculative(Relation relation, HeapTuple tuple);
 extern HTSU_Result heap_update(Relation relation, ItemPointer otid,
 			HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode);
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update);
 extern HTSU_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
 				CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
 				bool follow_update,
@@ -186,6 +188,7 @@ extern int heap_page_prune(Relation relation, Buffer buffer,
 				bool report_stats, TransactionId *latestRemovedXid);
 extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
+						bool *warmchain,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
 extern void heap_get_root_tuple_one(Page page, OffsetNumber target_offnum,
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index 5a04561..ddc3a7a 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -80,6 +80,7 @@
 #define XLH_UPDATE_CONTAINS_NEW_TUPLE			(1<<4)
 #define XLH_UPDATE_PREFIX_FROM_OLD				(1<<5)
 #define XLH_UPDATE_SUFFIX_FROM_OLD				(1<<6)
+#define XLH_UPDATE_WARM_UPDATE					(1<<7)
 
 /* convenience macro for checking whether any form of old tuple was logged */
 #define XLH_UPDATE_CONTAINS_OLD						\
@@ -211,7 +212,9 @@ typedef struct xl_heap_update
  *	* for each redirected item: the item offset, then the offset redirected to
  *	* for each now-dead item: the item offset
  *	* for each now-unused item: the item offset
- * The total number of OffsetNumbers is therefore 2*nredirected+ndead+nunused.
+ *	* for each now-warm item: the item offset
+ * The total number of OffsetNumbers is therefore
+ * 2*nredirected+ndead+nunused+nwarm.
  * Note that nunused is not explicitly stored, but may be found by reference
  * to the total record length.
  */
@@ -220,10 +223,11 @@ typedef struct xl_heap_clean
 	TransactionId latestRemovedXid;
 	uint16		nredirected;
 	uint16		ndead;
+	uint16		nwarm;
 	/* OFFSET NUMBERS are in the block reference 0 */
 } xl_heap_clean;
 
-#define SizeOfHeapClean (offsetof(xl_heap_clean, ndead) + sizeof(uint16))
+#define SizeOfHeapClean (offsetof(xl_heap_clean, nwarm) + sizeof(uint16))
 
 /*
  * Cleanup_info is required in some cases during a lazy VACUUM.
@@ -384,6 +388,7 @@ extern XLogRecPtr log_heap_cleanup_info(RelFileNode rnode,
 					  TransactionId latestRemovedXid);
 extern XLogRecPtr log_heap_clean(Relation reln, Buffer buffer,
 			   OffsetNumber *redirected, int nredirected,
+			   OffsetNumber *warm, int nwarm,
 			   OffsetNumber *nowdead, int ndead,
 			   OffsetNumber *nowunused, int nunused,
 			   TransactionId latestRemovedXid);
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 4313eb9..09246b2 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,7 +260,8 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x0800 are available */
+#define HEAP_WARM_TUPLE			0x0800	/* this tuple is part of a WARM
+										 * chain */
 #define HEAP_LATEST_TUPLE		0x1000	/*
 										 * This is the last tuple in chain and
 										 * ip_posid points to the root line
@@ -271,7 +272,7 @@ struct HeapTupleHeaderData
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF800	/* visibility-related bits */
 
 
 /*
@@ -510,6 +511,21 @@ do { \
   (tup)->t_infomask2 & HEAP_ONLY_TUPLE \
 )
 
+#define HeapTupleHeaderSetHeapWarmTuple(tup) \
+do { \
+	(tup)->t_infomask2 |= HEAP_WARM_TUPLE; \
+} while (0)
+
+#define HeapTupleHeaderClearHeapWarmTuple(tup) \
+do { \
+	(tup)->t_infomask2 &= ~HEAP_WARM_TUPLE; \
+} while (0)
+
+#define HeapTupleHeaderIsHeapWarmTuple(tup) \
+( \
+  ((tup)->t_infomask2 & HEAP_WARM_TUPLE) \
+)
+
 #define HeapTupleHeaderSetHeapLatest(tup) \
 ( \
 	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE \
@@ -771,6 +787,15 @@ struct MinimalTupleData
 #define HeapTupleClearHeapOnly(tuple) \
 		HeapTupleHeaderClearHeapOnly((tuple)->t_data)
 
+#define HeapTupleIsHeapWarmTuple(tuple) \
+		HeapTupleHeaderIsHeapWarmTuple((tuple)->t_data)
+
+#define HeapTupleSetHeapWarmTuple(tuple) \
+		HeapTupleHeaderSetHeapWarmTuple((tuple)->t_data)
+
+#define HeapTupleClearHeapWarmTuple(tuple) \
+		HeapTupleHeaderClearHeapWarmTuple((tuple)->t_data)
+
 #define HeapTupleGetOid(tuple) \
 		HeapTupleHeaderGetOid((tuple)->t_data)
 
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index c580f51..83af072 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -751,6 +751,8 @@ extern bytea *btoptions(Datum reloptions, bool validate);
 extern bool btproperty(Oid index_oid, int attno,
 		   IndexAMProperty prop, const char *propname,
 		   bool *res, bool *isnull);
+extern bool btrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * prototypes for functions in nbtvalidate.c
diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h
index de98dd6..da7ec84 100644
--- a/src/include/access/relscan.h
+++ b/src/include/access/relscan.h
@@ -111,7 +111,8 @@ typedef struct IndexScanDescData
 	HeapTupleData xs_ctup;		/* current heap tuple, if any */
 	Buffer		xs_cbuf;		/* current heap buffer in scan, if any */
 	/* NB: if xs_cbuf is not InvalidBuffer, we hold a pin on that buffer */
-	bool		xs_recheck;		/* T means scan keys must be rechecked */
+	bool		xs_recheck;		/* T means scan keys must be rechecked for each tuple */
+	bool		xs_tuple_recheck;	/* T means scan keys must be rechecked for current tuple */
 
 	/*
 	 * When fetching with an ordering operator, the values of the ORDER BY
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index 047a1ce..31f295f 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -2734,6 +2734,8 @@ DATA(insert OID = 1933 (  pg_stat_get_tuples_deleted	PGNSP PGUID 12 1 0 0 0 f f
 DESCR("statistics: number of tuples deleted");
 DATA(insert OID = 1972 (  pg_stat_get_tuples_hot_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated");
+DATA(insert OID = 3344 (  pg_stat_get_tuples_warm_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated");
 DATA(insert OID = 2878 (  pg_stat_get_live_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_live_tuples _null_ _null_ _null_ ));
 DESCR("statistics: number of live tuples");
 DATA(insert OID = 2879 (  pg_stat_get_dead_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_dead_tuples _null_ _null_ _null_ ));
@@ -2884,6 +2886,8 @@ DATA(insert OID = 3042 (  pg_stat_get_xact_tuples_deleted		PGNSP PGUID 12 1 0 0
 DESCR("statistics: number of tuples deleted in current transaction");
 DATA(insert OID = 3043 (  pg_stat_get_xact_tuples_hot_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated in current transaction");
+DATA(insert OID = 3343 (  pg_stat_get_xact_tuples_warm_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated in current transaction");
 DATA(insert OID = 3044 (  pg_stat_get_xact_blocks_fetched		PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_fetched _null_ _null_ _null_ ));
 DESCR("statistics: number of blocks fetched in current transaction");
 DATA(insert OID = 3045 (  pg_stat_get_xact_blocks_hit			PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_hit _null_ _null_ _null_ ));
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 136276b..e324deb 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -366,6 +366,7 @@ extern void UnregisterExprContextCallback(ExprContext *econtext,
 extern void ExecOpenIndices(ResultRelInfo *resultRelInfo, bool speculative);
 extern void ExecCloseIndices(ResultRelInfo *resultRelInfo);
 extern List *ExecInsertIndexTuples(TupleTableSlot *slot, ItemPointer tupleid,
+					  ItemPointer root_tid, Bitmapset *modified_attrs,
 					  EState *estate, bool noDupErr, bool *specConflict,
 					  List *arbiterIndexes);
 extern bool ExecCheckIndexConstraints(TupleTableSlot *slot, EState *estate,
diff --git a/src/include/executor/nodeIndexscan.h b/src/include/executor/nodeIndexscan.h
index 194fadb..fe9c78e 100644
--- a/src/include/executor/nodeIndexscan.h
+++ b/src/include/executor/nodeIndexscan.h
@@ -38,4 +38,5 @@ extern bool ExecIndexEvalArrayKeys(ExprContext *econtext,
 					   IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 extern bool ExecIndexAdvanceArrayKeys(IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 
+extern bool indexscan_recheck;
 #endif   /* NODEINDEXSCAN_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 8004d85..3bf4b5f 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -61,6 +61,7 @@ typedef struct IndexInfo
 	NodeTag		type;
 	int			ii_NumIndexAttrs;
 	AttrNumber	ii_KeyAttrNumbers[INDEX_MAX_KEYS];
+	Bitmapset  *ii_indxattrs;	/* bitmap of all columns used in this index */
 	List	   *ii_Expressions; /* list of Expr */
 	List	   *ii_ExpressionsState;	/* list of ExprState */
 	List	   *ii_Predicate;	/* list of Expr */
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 152ff06..e0c8a90 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -105,6 +105,7 @@ typedef struct PgStat_TableCounts
 	PgStat_Counter t_tuples_updated;
 	PgStat_Counter t_tuples_deleted;
 	PgStat_Counter t_tuples_hot_updated;
+	PgStat_Counter t_tuples_warm_updated;
 	bool		t_truncated;
 
 	PgStat_Counter t_delta_live_tuples;
@@ -625,6 +626,7 @@ typedef struct PgStat_StatTabEntry
 	PgStat_Counter tuples_updated;
 	PgStat_Counter tuples_deleted;
 	PgStat_Counter tuples_hot_updated;
+	PgStat_Counter tuples_warm_updated;
 
 	PgStat_Counter n_live_tuples;
 	PgStat_Counter n_dead_tuples;
@@ -1176,7 +1178,7 @@ pgstat_report_wait_end(void)
 	(pgStatBlockWriteTime += (n))
 
 extern void pgstat_count_heap_insert(Relation rel, int n);
-extern void pgstat_count_heap_update(Relation rel, bool hot);
+extern void pgstat_count_heap_update(Relation rel, bool hot, bool warm);
 extern void pgstat_count_heap_delete(Relation rel);
 extern void pgstat_count_truncate(Relation rel);
 extern void pgstat_update_heap_dead_tuples(Relation rel, int delta);
diff --git a/src/include/storage/itemid.h b/src/include/storage/itemid.h
index 509c577..8c9cc99 100644
--- a/src/include/storage/itemid.h
+++ b/src/include/storage/itemid.h
@@ -46,6 +46,12 @@ typedef ItemIdData *ItemId;
 typedef uint16 ItemOffset;
 typedef uint16 ItemLength;
 
+/*
+ * Special value used in lp_len to indicate that the chain starting at this
+ * line pointer may contain WARM tuples. This must only be interpreted
+ * together with the LP_REDIRECT flag.
+ */
+#define SpecHeapWarmLen	0x1ffb
 
 /* ----------------
  *		support macros
@@ -112,12 +118,15 @@ typedef uint16 ItemLength;
 #define ItemIdIsDead(itemId) \
 	((itemId)->lp_flags == LP_DEAD)
 
+#define ItemIdIsHeapWarm(itemId) \
+	(((itemId)->lp_flags == LP_REDIRECT) && \
+	 ((itemId)->lp_len == SpecHeapWarmLen))
 /*
  * ItemIdHasStorage
  *		True iff item identifier has associated storage.
  */
 #define ItemIdHasStorage(itemId) \
-	((itemId)->lp_len != 0)
+	(!ItemIdIsRedirected(itemId) && (itemId)->lp_len != 0)
 
 /*
  * ItemIdSetUnused
@@ -168,6 +177,26 @@ typedef uint16 ItemLength;
 )
 
 /*
+ * ItemIdSetHeapWarm
+ * 		Mark the item identifier as the start of a WARM chain
+ *
+ * Note: Since all bits in lp_flags are currently used, we store a special
+ * value in the lp_len field to indicate this state. This is done only for
+ * LP_REDIRECT line pointers, whose lp_len field is otherwise unused.
+ */
+#define ItemIdSetHeapWarm(itemId) \
+do { \
+	AssertMacro((itemId)->lp_flags == LP_REDIRECT); \
+	(itemId)->lp_len = SpecHeapWarmLen; \
+} while (0)
+
+#define ItemIdClearHeapWarm(itemId) \
+do { \
+	AssertMacro((itemId)->lp_flags == LP_REDIRECT); \
+	(itemId)->lp_len = 0; \
+} while (0)
+
+/*
  * ItemIdMarkDead
  *		Set the item identifier to be DEAD, keeping its existing storage.
  *
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index fa15f28..982bf4c 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -101,8 +101,11 @@ typedef struct RelationData
 
 	/* data managed by RelationGetIndexAttrBitmap: */
 	Bitmapset  *rd_indexattr;	/* identifies columns used in indexes */
+	Bitmapset  *rd_exprindexattr; /* identifies columns used in expression
+									 or predicate indexes */
 	Bitmapset  *rd_keyattr;		/* cols that can be ref'd by foreign keys */
 	Bitmapset  *rd_idattr;		/* included in replica identity index */
+	bool		rd_supportswarm;/* True if the table can be WARM updated */
 
 	/*
 	 * rd_options is set whenever rd_rel is loaded into the relcache entry.
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index 6ea7dd2..290e9b7 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -48,7 +48,8 @@ typedef enum IndexAttrBitmapKind
 {
 	INDEX_ATTR_BITMAP_ALL,
 	INDEX_ATTR_BITMAP_KEY,
-	INDEX_ATTR_BITMAP_IDENTITY_KEY
+	INDEX_ATTR_BITMAP_IDENTITY_KEY,
+	INDEX_ATTR_BITMAP_EXPR_PREDICATE
 } IndexAttrBitmapKind;
 
 extern Bitmapset *RelationGetIndexAttrBitmap(Relation relation,
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 031e8c2..c416fe6 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1705,6 +1705,7 @@ pg_stat_all_tables| SELECT c.oid AS relid,
     pg_stat_get_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_tuples_deleted(c.oid) AS n_tup_del,
     pg_stat_get_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_tuples_warm_updated(c.oid) AS n_tup_warm_upd,
     pg_stat_get_live_tuples(c.oid) AS n_live_tup,
     pg_stat_get_dead_tuples(c.oid) AS n_dead_tup,
     pg_stat_get_mod_since_analyze(c.oid) AS n_mod_since_analyze,
@@ -1838,6 +1839,7 @@ pg_stat_sys_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1881,6 +1883,7 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1918,7 +1921,8 @@ pg_stat_xact_all_tables| SELECT c.oid AS relid,
     pg_stat_get_xact_tuples_inserted(c.oid) AS n_tup_ins,
     pg_stat_get_xact_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_xact_tuples_deleted(c.oid) AS n_tup_del,
-    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd
+    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_xact_tuples_warm_updated(c.oid) AS n_tup_warm_upd
    FROM ((pg_class c
      LEFT JOIN pg_index i ON ((c.oid = i.indrelid)))
      LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
@@ -1934,7 +1938,8 @@ pg_stat_xact_sys_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname = ANY (ARRAY['pg_catalog'::name, 'information_schema'::name])) OR (pg_stat_xact_all_tables.schemaname ~ '^pg_toast'::text));
 pg_stat_xact_user_functions| SELECT p.oid AS funcid,
@@ -1956,7 +1961,8 @@ pg_stat_xact_user_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname <> ALL (ARRAY['pg_catalog'::name, 'information_schema'::name])) AND (pg_stat_xact_all_tables.schemaname !~ '^pg_toast'::text));
 pg_statio_all_indexes| SELECT c.oid AS relid,
diff --git a/src/test/regress/expected/warm.out b/src/test/regress/expected/warm.out
new file mode 100644
index 0000000..0aa1b83
--- /dev/null
+++ b/src/test/regress/expected/warm.out
@@ -0,0 +1,51 @@
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+explain select * from test_warm where lower(a) = 'test';
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on test_warm  (cost=4.18..12.65 rows=4 width=64)
+   Recheck Cond: (lower(a) = 'test'::text)
+   ->  Bitmap Index Scan on test_warmindx  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (lower(a) = 'test'::text)
+(4 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+select *, ctid from test_warm where a = 'test';
+ a | b | ctid 
+---+---+------
+(0 rows)
+
+select *, ctid from test_warm where a = 'TEST';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+                                   QUERY PLAN                                    
+---------------------------------------------------------------------------------
+ Index Scan using test_warmindx on test_warm  (cost=0.15..20.22 rows=4 width=64)
+   Index Cond: (lower(a) = 'test'::text)
+(2 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+DROP TABLE test_warm;
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index 8641769..a610039 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -42,6 +42,8 @@ test: create_type
 test: create_table
 test: create_function_2
 
+test: warm
+
 # ----------
 # Load huge amounts of data
 # We should split the data files into single files and then
diff --git a/src/test/regress/sql/warm.sql b/src/test/regress/sql/warm.sql
new file mode 100644
index 0000000..166ea37
--- /dev/null
+++ b/src/test/regress/sql/warm.sql
@@ -0,0 +1,15 @@
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where a = 'test';
+select *, ctid from test_warm where a = 'TEST';
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+DROP TABLE test_warm;
+
+
#27Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Pavan Deolasee (#26)
4 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Jan 3, 2017 at 9:43 AM, Pavan Deolasee <pavan.deolasee@gmail.com>
wrote:

The patch still disables WARM on system tables, something I would like to
fix. But I've been delaying that because it will require changes at several
places since indexes on system tables are managed separately.

Here is another version which fixes a bug that I discovered while adding
support for system tables. The patch set now also includes a patch to
enable WARM on system tables. I'm attaching that as a separate patch
because while the changes to support WARM on system tables are many, almost
all of them are purely mechanical. We need to pass additional information
to CatalogUpdateIndexes()/CatalogIndexInsert(). We need to tell these
routines whether the update leading to them was a WARM update and which
columns were modified so that they can correctly avoid adding new index
tuples for indexes whose keys haven't changed.

I wish I could find another way of passing this information instead of
making changes at so many places, but the only other way I could think of
was tracking that information as part of the HeapTuple itself, which
doesn't seem nice and may also require changes at many call sites where
tuples are constructed. One minor improvement could be that instead of two,
we could just pass "modified_attrs" and a NULL value may imply non-WARM
update. Other suggestions are welcome though.

I'm quite happy that all tests pass even after adding support for system
tables. One reason for supporting system tables was to ensure that some
more code paths get exercised. As before, I've included Alvaro's
refactoring patch too.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0001_track_root_lp_v8.patch (application/octet-stream)
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index f1b4602..a22aae7 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -93,7 +93,8 @@ static HeapTuple heap_prepare_insert(Relation relation, HeapTuple tup,
 					TransactionId xid, CommandId cid, int options);
 static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
-				HeapTuple newtup, HeapTuple old_key_tup,
+				HeapTuple newtup, OffsetNumber root_offnum,
+				HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
 static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
 							 Bitmapset *interesting_cols,
@@ -2247,13 +2248,13 @@ heap_get_latest_tid(Relation relation,
 		 */
 		if ((tp.t_data->t_infomask & HEAP_XMAX_INVALID) ||
 			HeapTupleHeaderIsOnlyLocked(tp.t_data) ||
-			ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid))
+			HeapTupleHeaderIsHeapLatest(tp.t_data, ctid))
 		{
 			UnlockReleaseBuffer(buffer);
 			break;
 		}
 
-		ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(tp.t_data, &ctid, offnum);
 		priorXmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		UnlockReleaseBuffer(buffer);
 	}							/* end of loop */
@@ -2412,7 +2413,8 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	START_CRIT_SECTION();
 
 	RelationPutHeapTuple(relation, buffer, heaptup,
-						 (options & HEAP_INSERT_SPECULATIVE) != 0);
+						 (options & HEAP_INSERT_SPECULATIVE) != 0,
+						 InvalidOffsetNumber);
 
 	if (PageIsAllVisible(BufferGetPage(buffer)))
 	{
@@ -2710,7 +2712,8 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		 * RelationGetBufferForTuple has ensured that the first tuple fits.
 		 * Put that on the page, and then as many other tuples as fit.
 		 */
-		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false);
+		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false,
+				InvalidOffsetNumber);
 		for (nthispage = 1; ndone + nthispage < ntuples; nthispage++)
 		{
 			HeapTuple	heaptup = heaptuples[ndone + nthispage];
@@ -2718,7 +2721,8 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			if (PageGetHeapFreeSpace(page) < MAXALIGN(heaptup->t_len) + saveFreeSpace)
 				break;
 
-			RelationPutHeapTuple(relation, buffer, heaptup, false);
+			RelationPutHeapTuple(relation, buffer, heaptup, false,
+					InvalidOffsetNumber);
 
 			/*
 			 * We don't use heap_multi_insert for catalog tuples yet, but
@@ -2990,6 +2994,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	HeapTupleData tp;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	TransactionId new_xmax;
@@ -3041,7 +3046,8 @@ heap_delete(Relation relation, ItemPointer tid,
 		LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 	}
 
-	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid));
+	offnum = ItemPointerGetOffsetNumber(tid);
+	lp = PageGetItemId(page, offnum);
 	Assert(ItemIdIsNormal(lp));
 
 	tp.t_tableOid = RelationGetRelid(relation);
@@ -3171,7 +3177,7 @@ l1:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(tp.t_data, &hufd->ctid, offnum);
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tp.t_data);
@@ -3247,8 +3253,8 @@ l1:
 	HeapTupleHeaderClearHotUpdated(tp.t_data);
 	HeapTupleHeaderSetXmax(tp.t_data, new_xmax);
 	HeapTupleHeaderSetCmax(tp.t_data, cid, iscombo);
-	/* Make sure there is no forward chain link in t_ctid */
-	tp.t_data->t_ctid = tp.t_self;
+	/* Mark this tuple as the latest tuple in the update chain */
+	HeapTupleHeaderSetHeapLatest(tp.t_data);
 
 	MarkBufferDirty(buffer);
 
@@ -3449,6 +3455,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		old_key_copied = false;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
+	OffsetNumber	root_offnum;
 	MultiXactStatus mxact_status;
 	Buffer		buffer,
 				newbuf,
@@ -3511,6 +3519,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 
 
 	block = ItemPointerGetBlockNumber(otid);
+	offnum = ItemPointerGetOffsetNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
@@ -3795,7 +3804,7 @@ l2:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(oldtup.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = oldtup.t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(oldtup.t_data, &hufd->ctid, offnum);
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(oldtup.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(oldtup.t_data);
@@ -3976,7 +3985,7 @@ l2:
 		HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 		/* temporarily make it look not-updated, but locked */
-		oldtup.t_data->t_ctid = oldtup.t_self;
+		HeapTupleHeaderSetHeapLatest(oldtup.t_data);
 
 		/*
 		 * Clear all-frozen bit on visibility map if needed. We could
@@ -4159,6 +4168,20 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+		/*
+		 * For HOT (or WARM) updated tuples, we store the offset of the root
+		 * line pointer of this chain in the ip_posid field of the new tuple.
+		 * Usually this information will be available in the corresponding
+		 * field of the old tuple. But for aborted updates or pg_upgraded
+		 * databases, we might be seeing the old-style CTID chains and hence
+		 * the information must be obtained by hard way
+		 */
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			heap_get_root_tuple_one(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)),
+					&root_offnum);
 	}
 	else
 	{
@@ -4166,10 +4189,29 @@ l2:
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		root_offnum = InvalidOffsetNumber;
 	}
 
-	RelationPutHeapTuple(relation, newbuf, heaptup, false);		/* insert new tuple */
+	/* insert new tuple */
+	RelationPutHeapTuple(relation, newbuf, heaptup, false, root_offnum);
+	HeapTupleHeaderSetHeapLatest(heaptup->t_data);
+	HeapTupleHeaderSetHeapLatest(newtup->t_data);
 
+	/*
+	 * Also update the in-memory copy with the root line pointer information
+	 */
+	if (OffsetNumberIsValid(root_offnum))
+	{
+		HeapTupleHeaderSetRootOffset(heaptup->t_data, root_offnum);
+		HeapTupleHeaderSetRootOffset(newtup->t_data, root_offnum);
+	}
+	else
+	{
+		HeapTupleHeaderSetRootOffset(heaptup->t_data,
+				ItemPointerGetOffsetNumber(&heaptup->t_self));
+		HeapTupleHeaderSetRootOffset(newtup->t_data,
+				ItemPointerGetOffsetNumber(&heaptup->t_self));
+	}
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
 	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
@@ -4182,7 +4224,9 @@ l2:
 	HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 	/* record address of new tuple in t_ctid of old one */
-	oldtup.t_data->t_ctid = heaptup->t_self;
+	HeapTupleHeaderSetNextCtid(oldtup.t_data,
+			ItemPointerGetBlockNumber(&(heaptup->t_self)),
+			ItemPointerGetOffsetNumber(&(heaptup->t_self)));
 
 	/* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */
 	if (PageIsAllVisible(BufferGetPage(buffer)))
@@ -4221,6 +4265,7 @@ l2:
 
 		recptr = log_heap_update(relation, buffer,
 								 newbuf, &oldtup, heaptup,
+								 root_offnum,
 								 old_key_tuple,
 								 all_visible_cleared,
 								 all_visible_cleared_new);
@@ -4501,7 +4546,8 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	ItemId		lp;
 	Page		page;
 	Buffer		vmbuffer = InvalidBuffer;
-	BlockNumber block;
+	BlockNumber	block;
+	OffsetNumber	offnum;
 	TransactionId xid,
 				xmax;
 	uint16		old_infomask,
@@ -4513,6 +4559,7 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 
 	*buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	block = ItemPointerGetBlockNumber(tid);
+	offnum = ItemPointerGetOffsetNumber(tid);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -4559,7 +4606,7 @@ l3:
 		xwait = HeapTupleHeaderGetRawXmax(tuple->t_data);
 		infomask = tuple->t_data->t_infomask;
 		infomask2 = tuple->t_data->t_infomask2;
-		ItemPointerCopy(&tuple->t_data->t_ctid, &t_ctid);
+		HeapTupleHeaderGetNextCtid(tuple->t_data, &t_ctid, offnum);
 
 		LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
 
@@ -4997,7 +5044,7 @@ failed:
 		Assert(result == HeapTupleSelfUpdated || result == HeapTupleUpdated ||
 			   result == HeapTupleWouldBlock);
 		Assert(!(tuple->t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tuple->t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(tuple->t_data, &hufd->ctid, offnum);
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tuple->t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tuple->t_data);
@@ -5073,7 +5120,7 @@ failed:
 	 * the tuple as well.
 	 */
 	if (HEAP_XMAX_IS_LOCKED_ONLY(new_infomask))
-		tuple->t_data->t_ctid = *tid;
+		HeapTupleHeaderSetHeapLatest(tuple->t_data);
 
 	/* Clear only the all-frozen bit on visibility map if needed */
 	if (PageIsAllVisible(page) &&
@@ -5587,6 +5634,7 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 	bool		cleared_all_frozen = false;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber block;
+	OffsetNumber offnum;
 
 	ItemPointerCopy(tid, &tupid);
 
@@ -5595,6 +5643,8 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 		new_infomask = 0;
 		new_xmax = InvalidTransactionId;
 		block = ItemPointerGetBlockNumber(&tupid);
+		offnum = ItemPointerGetOffsetNumber(&tupid);
+
 		ItemPointerCopy(&tupid, &(mytup.t_self));
 
 		if (!heap_fetch(rel, SnapshotAny, &mytup, &buf, false, NULL))
@@ -5824,7 +5874,7 @@ l4:
 
 		/* if we find the end of update chain, we're done. */
 		if (mytup.t_data->t_infomask & HEAP_XMAX_INVALID ||
-			ItemPointerEquals(&mytup.t_self, &mytup.t_data->t_ctid) ||
+			HeapTupleHeaderIsHeapLatest(mytup.t_data, mytup.t_self) ||
 			HeapTupleHeaderIsOnlyLocked(mytup.t_data))
 		{
 			result = HeapTupleMayBeUpdated;
@@ -5833,7 +5883,7 @@ l4:
 
 		/* tail recursion */
 		priorXmax = HeapTupleHeaderGetUpdateXid(mytup.t_data);
-		ItemPointerCopy(&(mytup.t_data->t_ctid), &tupid);
+		HeapTupleHeaderGetNextCtid(mytup.t_data, &tupid, offnum);
 		UnlockReleaseBuffer(buf);
 		if (vmbuffer != InvalidBuffer)
 			ReleaseBuffer(vmbuffer);
@@ -5950,7 +6000,8 @@ heap_finish_speculative(Relation relation, HeapTuple tuple)
 	 * Replace the speculative insertion token with a real t_ctid, pointing to
 	 * itself like it does on regular tuples.
 	 */
-	htup->t_ctid = tuple->t_self;
+	HeapTupleHeaderSetHeapLatest(htup);
+	HeapTupleHeaderSetRootOffset(htup, offnum);
 
 	/* XLOG stuff */
 	if (RelationNeedsWAL(relation))
@@ -6076,7 +6127,9 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	HeapTupleHeaderSetXmin(tp.t_data, InvalidTransactionId);
 
 	/* Clear the speculative insertion token too */
-	tp.t_data->t_ctid = tp.t_self;
+	HeapTupleHeaderSetNextCtid(tp.t_data,
+			ItemPointerGetBlockNumber(&tp.t_self),
+			ItemPointerGetOffsetNumber(&tp.t_self));
 
 	MarkBufferDirty(buffer);
 
@@ -7425,6 +7478,7 @@ log_heap_visible(RelFileNode rnode, Buffer heap_buffer, Buffer vm_buffer,
 static XLogRecPtr
 log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup, HeapTuple newtup,
+				OffsetNumber root_offnum,
 				HeapTuple old_key_tuple,
 				bool all_visible_cleared, bool new_all_visible_cleared)
 {
@@ -7544,6 +7598,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	/* Prepare WAL data for the new page */
 	xlrec.new_offnum = ItemPointerGetOffsetNumber(&newtup->t_self);
 	xlrec.new_xmax = HeapTupleHeaderGetRawXmax(newtup->t_data);
+	xlrec.root_offnum = root_offnum;
 
 	bufflags = REGBUF_STANDARD;
 	if (init)
@@ -8199,7 +8254,7 @@ heap_xlog_delete(XLogReaderState *record)
 			PageClearAllVisible(page);
 
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = target_tid;
+		HeapTupleHeaderSetHeapLatest(htup);
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8289,7 +8344,9 @@ heap_xlog_insert(XLogReaderState *record)
 		htup->t_hoff = xlhdr.t_hoff;
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
-		htup->t_ctid = target_tid;
+
+		HeapTupleHeaderSetHeapLatest(htup);
+		HeapTupleHeaderSetRootOffset(htup, xlrec->offnum);
 
 		if (PageAddItem(page, (Item) htup, newlen, xlrec->offnum,
 						true, true) == InvalidOffsetNumber)
@@ -8424,8 +8481,9 @@ heap_xlog_multi_insert(XLogReaderState *record)
 			htup->t_hoff = xlhdr->t_hoff;
 			HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 			HeapTupleHeaderSetCmin(htup, FirstCommandId);
-			ItemPointerSetBlockNumber(&htup->t_ctid, blkno);
-			ItemPointerSetOffsetNumber(&htup->t_ctid, offnum);
+
+			HeapTupleHeaderSetHeapLatest(htup);
+			HeapTupleHeaderSetRootOffset(htup, offnum);
 
 			offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 			if (offnum == InvalidOffsetNumber)
@@ -8561,7 +8619,8 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
 		/* Set forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetNextCtid(htup, ItemPointerGetBlockNumber(&newtid),
+				ItemPointerGetOffsetNumber(&newtid));
 
 		/* Mark the page as a candidate for pruning */
 		PageSetPrunable(page, XLogRecGetXid(record));
@@ -8695,12 +8754,17 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetHeapLatest(htup);
 
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
 
+		if (OffsetNumberIsValid(xlrec->root_offnum))
+			HeapTupleHeaderSetRootOffset(htup, xlrec->root_offnum);
+		else
+			HeapTupleHeaderSetRootOffset(htup, offnum);
+
 		if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED)
 			PageClearAllVisible(page);
 
@@ -8828,9 +8892,7 @@ heap_xlog_lock(XLogReaderState *record)
 		{
 			HeapTupleHeaderClearHotUpdated(htup);
 			/* Make sure there is no forward chain link in t_ctid */
-			ItemPointerSet(&htup->t_ctid,
-						   BufferGetBlockNumber(buffer),
-						   offnum);
+			HeapTupleHeaderSetHeapLatest(htup);
 		}
 		HeapTupleHeaderSetXmax(htup, xlrec->locking_xid);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index c90fb71..39ee6ac 100644
--- a/src/backend/access/heap/hio.c
+++ b/src/backend/access/heap/hio.c
@@ -31,12 +31,18 @@
  * !!! EREPORT(ERROR) IS DISALLOWED HERE !!!  Must PANIC on failure!!!
  *
  * Note - caller must hold BUFFER_LOCK_EXCLUSIVE on the buffer.
+ *
+ * The caller can optionally tell us to set the root offset to the given value.
+ * Otherwise, the root offset is set to the offset of the new location once its
+ * known. The former is used while updating an existing tuple while latter is
+ * used during insertion of a new row.
  */
 void
 RelationPutHeapTuple(Relation relation,
 					 Buffer buffer,
 					 HeapTuple tuple,
-					 bool token)
+					 bool token,
+					 OffsetNumber root_offnum)
 {
 	Page		pageHeader;
 	OffsetNumber offnum;
@@ -69,7 +75,16 @@ RelationPutHeapTuple(Relation relation,
 		ItemId		itemId = PageGetItemId(pageHeader, offnum);
 		Item		item = PageGetItem(pageHeader, itemId);
 
+		/* Copy t_ctid to set the correct block number */
 		((HeapTupleHeader) item)->t_ctid = tuple->t_self;
+
+		HeapTupleHeaderSetHeapLatest((HeapTupleHeader) item);
+		if (OffsetNumberIsValid(root_offnum))
+			HeapTupleHeaderSetRootOffset((HeapTupleHeader) item,
+					root_offnum);
+		else
+			HeapTupleHeaderSetRootOffset((HeapTupleHeader) item,
+					offnum);
 	}
 }
 
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index 6ff9251..7c2231a 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -55,6 +55,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
 
+static void heap_get_root_tuples_internal(Page page,
+				OffsetNumber target_offnum, OffsetNumber *root_offsets);
 
 /*
  * Optionally prune and repair fragmentation in the specified page.
@@ -740,8 +742,9 @@ heap_page_prune_execute(Buffer buffer,
  * holds a pin on the buffer. Once pin is released, a tuple might be pruned
  * and reused by a completely unrelated tuple.
  */
-void
-heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+static void
+heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offsets)
 {
 	OffsetNumber offnum,
 				maxoff;
@@ -820,6 +823,14 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 			/* Remember the root line pointer for this item */
 			root_offsets[nextoffnum - 1] = offnum;
 
+			/*
+			 * If the caller is interested in just one offset and we found
+			 * that, just return
+			 */
+			if (OffsetNumberIsValid(target_offnum) &&
+					(nextoffnum == target_offnum))
+				return;
+
 			/* Advance to next chain member, if any */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				break;
@@ -829,3 +840,25 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 		}
 	}
 }
+
+/*
+ * Get root line pointer for the given tuple
+ */
+void
+heap_get_root_tuple_one(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offnum)
+{
+	OffsetNumber offsets[MaxHeapTuplesPerPage];
+	heap_get_root_tuples_internal(page, target_offnum, offsets);
+	*root_offnum = offsets[target_offnum - 1];
+}
+
+/*
+ * Get root line pointers for all tuples in the page
+ */
+void
+heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+{
+	return heap_get_root_tuples_internal(page, InvalidOffsetNumber,
+			root_offsets);
+}
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index 17584ba..09a164c 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -419,14 +419,14 @@ rewrite_heap_tuple(RewriteState state,
 	 */
 	if (!((old_tuple->t_data->t_infomask & HEAP_XMAX_INVALID) ||
 		  HeapTupleHeaderIsOnlyLocked(old_tuple->t_data)) &&
-		!(ItemPointerEquals(&(old_tuple->t_self),
-							&(old_tuple->t_data->t_ctid))))
+		!(HeapTupleHeaderIsHeapLatest(old_tuple->t_data, old_tuple->t_self)))
 	{
 		OldToNewMapping mapping;
 
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
-		hashkey.tid = old_tuple->t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(old_tuple->t_data, &hashkey.tid,
+				ItemPointerGetOffsetNumber(&old_tuple->t_self));
 
 		mapping = (OldToNewMapping)
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -439,7 +439,9 @@ rewrite_heap_tuple(RewriteState state,
 			 * set the ctid of this tuple to point to the new location, and
 			 * insert it right away.
 			 */
-			new_tuple->t_data->t_ctid = mapping->new_tid;
+			HeapTupleHeaderSetNextCtid(new_tuple->t_data,
+					ItemPointerGetBlockNumber(&mapping->new_tid),
+					ItemPointerGetOffsetNumber(&mapping->new_tid));
 
 			/* We don't need the mapping entry anymore */
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -525,7 +527,9 @@ rewrite_heap_tuple(RewriteState state,
 				new_tuple = unresolved->tuple;
 				free_new = true;
 				old_tid = unresolved->old_tid;
-				new_tuple->t_data->t_ctid = new_tid;
+				HeapTupleHeaderSetNextCtid(new_tuple->t_data,
+						ItemPointerGetBlockNumber(&new_tid),
+						ItemPointerGetOffsetNumber(&new_tid));
 
 				/*
 				 * We don't need the hash entry anymore, but don't free its
@@ -731,7 +735,10 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		onpage_tup->t_ctid = tup->t_self;
+		HeapTupleHeaderSetNextCtid(onpage_tup,
+				ItemPointerGetBlockNumber(&tup->t_self),
+				ItemPointerGetOffsetNumber(&tup->t_self));
+		HeapTupleHeaderSetHeapLatest(onpage_tup);
 	}
 
 	/* If heaptup is a private copy, release it. */
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 009c1b7..882ce18 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -788,7 +788,8 @@ retry:
 			  DirtySnapshot.speculativeToken &&
 			  TransactionIdPrecedes(GetCurrentTransactionId(), xwait))))
 		{
-			ctid_wait = tup->t_data->t_ctid;
+			HeapTupleHeaderGetNextCtid(tup->t_data, &ctid_wait,
+					ItemPointerGetOffsetNumber(&tup->t_self));
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 32bb3f9..466609c 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -2443,7 +2443,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		 * As above, it should be safe to examine xmax and t_ctid without the
 		 * buffer content lock, because they can't be changing.
 		 */
-		if (ItemPointerEquals(&tuple.t_self, &tuple.t_data->t_ctid))
+		if (HeapTupleHeaderIsHeapLatest(tuple.t_data, tuple.t_self))
 		{
 			/* deleted, so forget about it */
 			ReleaseBuffer(buffer);
@@ -2451,7 +2451,8 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		}
 
 		/* updated, so look at the updated row */
-		tuple.t_self = tuple.t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(tuple.t_data, &tuple.t_self,
+				ItemPointerGetOffsetNumber(&tuple.t_self));
 		/* updated row should have xmin matching this xmax */
 		priorXmax = HeapTupleHeaderGetUpdateXid(tuple.t_data);
 		ReleaseBuffer(buffer);
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 0d12bbb..81f7982 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -188,6 +188,8 @@ extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
+extern void heap_get_root_tuple_one(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offnum);
 extern void heap_get_root_tuples(Page page, OffsetNumber *root_offsets);
 
 /* in heap/syncscan.c */
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index 06a8242..5a04561 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -193,6 +193,8 @@ typedef struct xl_heap_update
 	uint8		flags;
 	TransactionId new_xmax;		/* xmax of the new tuple */
 	OffsetNumber new_offnum;	/* new tuple's offset */
+	OffsetNumber root_offnum;	/* offset of the root line pointer in case of
+								   HOT or WARM update */
 
 	/*
 	 * If XLOG_HEAP_CONTAINS_OLD_TUPLE or XLOG_HEAP_CONTAINS_OLD_KEY flags are
@@ -200,7 +202,7 @@ typedef struct xl_heap_update
 	 */
 } xl_heap_update;
 
-#define SizeOfHeapUpdate	(offsetof(xl_heap_update, new_offnum) + sizeof(OffsetNumber))
+#define SizeOfHeapUpdate	(offsetof(xl_heap_update, root_offnum) + sizeof(OffsetNumber))
 
 /*
  * This is what we need to know about vacuum page cleanup/redirect
diff --git a/src/include/access/hio.h b/src/include/access/hio.h
index a174b34..82e5b5f 100644
--- a/src/include/access/hio.h
+++ b/src/include/access/hio.h
@@ -36,7 +36,7 @@ typedef struct BulkInsertStateData
 
 
 extern void RelationPutHeapTuple(Relation relation, Buffer buffer,
-					 HeapTuple tuple, bool token);
+					 HeapTuple tuple, bool token, OffsetNumber root_offnum);
 extern Buffer RelationGetBufferForTuple(Relation relation, Size len,
 						  Buffer otherBuffer, int options,
 						  BulkInsertState bistate,
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 8fb1f6d..4313eb9 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,13 +260,19 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x1800 are available */
+/* bits 0x0800 are available */
+#define HEAP_LATEST_TUPLE		0x1000	/*
+										 * This is the last tuple in chain and
+										 * ip_posid points to the root line
+										 * pointer
+										 */
 #define HEAP_KEYS_UPDATED		0x2000	/* tuple was updated and key cols
 										 * modified, or tuple deleted */
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xE000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+
 
 /*
  * HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins.  It is
@@ -504,6 +510,30 @@ do { \
   (tup)->t_infomask2 & HEAP_ONLY_TUPLE \
 )
 
+#define HeapTupleHeaderSetHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE \
+)
+
+#define HeapTupleHeaderClearHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 &= ~HEAP_LATEST_TUPLE \
+)
+
+/*
+ * If HEAP_LATEST_TUPLE is set in the last tuple in the update chain. But for
+ * clusters which are upgraded from pre-10.0 release, we still check if c_tid
+ * is pointing to itself and declare such tuple as the latest tuple in the
+ * chain
+ */
+#define HeapTupleHeaderIsHeapLatest(tup, tid) \
+( \
+  ((tup)->t_infomask2 & HEAP_LATEST_TUPLE) || \
+  ((ItemPointerGetBlockNumber(&(tup)->t_ctid) == ItemPointerGetBlockNumber(&tid)) && \
+   (ItemPointerGetOffsetNumber(&(tup)->t_ctid) == ItemPointerGetOffsetNumber(&tid))) \
+)
+
+
 #define HeapTupleHeaderSetHeapOnly(tup) \
 ( \
   (tup)->t_infomask2 |= HEAP_ONLY_TUPLE \
@@ -542,6 +572,55 @@ do { \
 
 
 /*
+ * Set the t_ctid chain and also clear the HEAP_LATEST_TUPLE flag since we
+ * probably have a new tuple in the chain
+ */
+#define HeapTupleHeaderSetNextCtid(tup, block, offset) \
+do { \
+		ItemPointerSetBlockNumber(&((tup)->t_ctid), (block)); \
+		ItemPointerSetOffsetNumber(&((tup)->t_ctid), (offset)); \
+		HeapTupleHeaderClearHeapLatest((tup)); \
+} while (0)
+
+/*
+ * Get TID of next tuple in the update chain. Traditionally, we have stored
+ * self TID in the t_ctid field if the tuple is the last tuple in the chain. We
+ * try to preserve that behaviour by returning self-TID if HEAP_LATEST_TUPLE
+ * flag is set.
+ */
+#define HeapTupleHeaderGetNextCtid(tup, next_ctid, offnum) \
+do { \
+	if ((tup)->t_infomask2 & HEAP_LATEST_TUPLE) \
+	{ \
+		ItemPointerSet((next_ctid), ItemPointerGetBlockNumber(&(tup)->t_ctid), \
+				(offnum)); \
+	} \
+	else \
+	{ \
+		ItemPointerSet((next_ctid), ItemPointerGetBlockNumber(&(tup)->t_ctid), \
+				ItemPointerGetOffsetNumber(&(tup)->t_ctid)); \
+	} \
+} while (0)
+
+#define HeapTupleHeaderSetRootOffset(tup, offset) \
+do { \
+	AssertMacro(!HeapTupleHeaderIsHotUpdated(tup)); \
+	AssertMacro((tup)->t_infomask2 & HEAP_LATEST_TUPLE); \
+	ItemPointerSetOffsetNumber(&(tup)->t_ctid, (offset)); \
+} while (0)
+
+#define HeapTupleHeaderGetRootOffset(tup) \
+( \
+	AssertMacro((tup)->t_infomask2 & HEAP_LATEST_TUPLE), \
+	ItemPointerGetOffsetNumber(&(tup)->t_ctid) \
+)
+
+#define HeapTupleHeaderHasRootOffset(tup) \
+( \
+	(tup)->t_infomask2 & HEAP_LATEST_TUPLE \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
Attachment: 0002_warm_updates_v8.patch (application/octet-stream)
diff --git b/contrib/bloom/blutils.c a/contrib/bloom/blutils.c
index b68a0d1..b95275f 100644
--- b/contrib/bloom/blutils.c
+++ a/contrib/bloom/blutils.c
@@ -138,6 +138,7 @@ blhandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = blendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git b/src/backend/access/brin/brin.c a/src/backend/access/brin/brin.c
index 1b45a4c..ba3fffb 100644
--- b/src/backend/access/brin/brin.c
+++ a/src/backend/access/brin/brin.c
@@ -111,6 +111,7 @@ brinhandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = brinendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git b/src/backend/access/gist/gist.c a/src/backend/access/gist/gist.c
index b8aa9bc..491e411 100644
--- b/src/backend/access/gist/gist.c
+++ a/src/backend/access/gist/gist.c
@@ -88,6 +88,7 @@ gisthandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = gistendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git b/src/backend/access/hash/hash.c a/src/backend/access/hash/hash.c
index 6806e32..2026004 100644
--- b/src/backend/access/hash/hash.c
+++ a/src/backend/access/hash/hash.c
@@ -85,6 +85,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = hashendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = hashrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -265,6 +266,8 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 	OffsetNumber offnum;
 	ItemPointer current;
 	bool		res;
+	IndexTuple	itup;
+
 
 	/* Hash indexes are always lossy since we store only the hash code */
 	scan->xs_recheck = true;
@@ -302,8 +305,6 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 			 offnum <= maxoffnum;
 			 offnum = OffsetNumberNext(offnum))
 		{
-			IndexTuple	itup;
-
 			itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 			if (ItemPointerEquals(&(so->hashso_heappos), &(itup->t_tid)))
 				break;
diff --git b/src/backend/access/hash/hashsearch.c a/src/backend/access/hash/hashsearch.c
index 8d43b38..05b078f 100644
--- b/src/backend/access/hash/hashsearch.c
+++ a/src/backend/access/hash/hashsearch.c
@@ -59,6 +59,8 @@ _hash_next(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
 	return true;
 }
 
@@ -407,6 +409,9 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
+
 	return true;
 }
 
diff --git b/src/backend/access/hash/hashutil.c a/src/backend/access/hash/hashutil.c
index fa9cbdc..6897985 100644
--- b/src/backend/access/hash/hashutil.c
+++ a/src/backend/access/hash/hashutil.c
@@ -17,8 +17,12 @@
 #include "access/hash.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
+#include "nodes/execnodes.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 #define CALC_NEW_BUCKET(old_bucket, lowmask) \
 			old_bucket | (lowmask + 1)
@@ -446,3 +450,109 @@ _hash_get_newbucket_from_oldbucket(Relation rel, Bucket old_bucket,
 
 	return new_bucket;
 }
+
+/*
+ * Recheck if the heap tuple satisfies the key stored in the index tuple
+ */
+bool
+hashrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	IndexInfo  *indexInfo;
+	EState	   *estate;
+	ExprContext *econtext;
+	TupleTableSlot *slot;
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	Datum		values2[INDEX_MAX_KEYS];
+	bool		isnull2[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	indexInfo = BuildIndexInfo(indexRel);
+
+	/*
+	 * The heap tuple must be put into a slot for FormIndexDatum.
+	 */
+	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
+
+	ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
+
+	/*
+	 * Typically the index won't have expressions, but if it does we need an
+	 * EState to evaluate them.  We need it for exclusion constraints too,
+	 * even if they are just on simple columns.
+	 */
+	if (indexInfo->ii_Expressions != NIL ||
+			indexInfo->ii_ExclusionOps != NULL)
+	{
+		estate = CreateExecutorState();
+		econtext = GetPerTupleExprContext(estate);
+		econtext->ecxt_scantuple = slot;
+	}
+	else
+		estate = NULL;
+
+	/*
+	 * Form the index values and isnull flags for the index entry that we need
+	 * to check.
+	 *
+	 * Note: if the index uses functions that are not as immutable as they are
+	 * supposed to be, this could produce an index tuple different from the
+	 * original.  The index AM can catch such errors by verifying that it
+	 * finds a matching index entry with the tuple's TID.  For exclusion
+	 * constraints we check this in check_exclusion_constraint().
+	 */
+	FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
+	/*
+	 * HASH indexes compute a hash value of the key and store that in the
+	 * index. So we must first obtain the hash of the value obtained from the
+	 * heap and then do a comparison
+	 */
+	_hash_convert_tuple(indexRel, values, isnull, values2, isnull2);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL then they are equal
+		 */
+		if (isnull2[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If either is NULL then they are not equal
+		 */
+		if (isnull2[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now do a raw memory comparison
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values2[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	if (estate != NULL)
+		FreeExecutorState(estate);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	return equal;
+}
diff --git b/src/backend/access/heap/README.WARM a/src/backend/access/heap/README.WARM
new file mode 100644
index 0000000..f793570
--- /dev/null
+++ a/src/backend/access/heap/README.WARM
@@ -0,0 +1,271 @@
+src/backend/access/heap/README.WARM
+
+Write Amplification Reduction Method (WARM)
+===========================================
+
+The Heap Only Tuple (HOT) feature greatly reduced redundant index
+entries and allowed re-use of the dead space occupied by previously
+updated or deleted tuples (see src/backend/access/heap/README.HOT).
+
+One of the necessary conditions for satisfying HOT update is that the
+update must not change a column used in any of the indexes on the table.
+The condition is sometimes hard to meet, especially for complex
+workloads with several indexes on large yet frequently updated tables.
+Worse, sometimes only one or two index columns may be updated, but the
+regular non-HOT update will still insert a new index entry in every
+index on the table, irrespective of whether the key pertaining to the
+index changed or not.
+
+WARM is a technique devised to address these problems.
+
+
+Update Chains With Multiple Index Entries Pointing to the Root
+--------------------------------------------------------------
+
+When a non-HOT update is caused by an index key change, a new index
+entry must be inserted for the changed index. But if the index key
+hasn't changed for other indexes, we don't really need to insert a new
+entry. Even though the existing index entry is pointing to the old
+tuple, the new tuple is reachable via the t_ctid chain. To keep things
+simple, a WARM update requires that the heap block have enough
+space to store the new version of the tuple. This is the same
+requirement as for HOT updates.
+
+In WARM, we ensure that every index entry always points to the root of
+the WARM chain. In fact, a WARM chain looks exactly like a HOT chain
+except for the fact that there could be multiple index entries pointing
+to the root of the chain. So when a new entry is inserted into an index
+for the updated tuple during a WARM update, the new entry is made to
+point to the root of the WARM chain.
+
+For example, suppose we have a table with two columns and an index on
+each column. When a tuple is first inserted into the table, each index
+has exactly one entry pointing to the tuple.
+
+	lp [1]
+	[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's entry (aaaa) also points to 1
+
+Now if the tuple's second column is updated and if there is room on the
+page, we perform a WARM update. In that case, Index1 does not get any
+new entry and Index2's new entry will still point to the root tuple of
+the chain.
+
+	lp [1]  [2]
+	[1111, aaaa]->[1111, bbbb]
+
+	Index1's entry (1111) points to 1
+	Index2's old entry (aaaa) points to 1
+	Index2's new entry (bbbb) also points to 1
+
+"An update chain which has more than one index entry pointing to its
+root line pointer is called a WARM chain, and the action that creates
+a WARM chain is called a WARM update."
+
+Since all indexes always point to the root of the WARM chain, even when
+there is more than one index entry, WARM chains can be pruned and
+dead tuples can be removed without a need to do corresponding index
+cleanup.
+
+While this solves the problem of pruning dead tuples from a HOT/WARM
+chain, it also opens up a new technical challenge because now we have a
+situation where a heap tuple is reachable from multiple index entries,
+each having a different index key. While MVCC still ensures that only
+valid tuples are returned, a tuple with a wrong index key may be
+returned because of wrong index entries. In the above example, tuple
+[1111, bbbb] is reachable from both keys (aaaa) as well as (bbbb). For
+this reason, tuples returned from a WARM chain must always be rechecked
+for index key-match.
+
+Recheck Index Key Against Heap Tuple
+------------------------------------
+
+Since every Index AM has its own notion of index tuples, each Index AM
+must implement its own method to recheck heap tuples. For example, a
+hash index stores the hash value of the column and hence recheck routine
+for hash AM must first compute the hash value of the heap attribute and
+then compare it against the value stored in the index tuple.
+
+The patch currently implements recheck routines for hash and btree
+indexes. If a table has an index whose AM does not provide a recheck
+routine, WARM updates are disabled on that table.
+
+Problem With Duplicate (key, ctid) Index Entries
+------------------------------------------------
+
+The index-key recheck logic works only as long as there are no duplicate
+index entries pointing to the same WARM chain. With duplicates, the same
+valid tuple is reachable via multiple index entries, each satisfying
+the index key check. In the above example, if the tuple [1111, bbbb] is
+again updated to [1111, aaaa] and if we insert a new index entry (aaaa)
+pointing to the root line pointer, we will end up with the following
+structure:
+
+	lp [1]  [2]  [3]
+	[1111, aaaa]->[1111, bbbb]->[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's oldest entry (aaaa) points to 1
+	Index2's old entry (bbbb) also points to 1
+	Index2's new entry (aaaa) also points to 1
+
+We must solve this problem to ensure that the same tuple is not
+reachable via multiple index pointers. There are a couple of ways to
+address this issue:
+
+1. Do not allow WARM update to a tuple from a WARM chain. This
+guarantees that there can never be duplicate index entries to the same
+root line pointer because we must have checked for old and new index
+keys while doing the first WARM update.
+
+2. Do not allow duplicate (key, ctid) index pointers. In the above
+example, since (aaaa, 1) already exists in the index, we must not insert
+a duplicate index entry.
+
+The patch currently implements option 1, i.e. it does not allow a WARM
+update to a tuple from a WARM chain. HOT updates are fine because they
+do not add a new index entry.
+
+Even with the restriction, this is a significant improvement because the
+number of regular (non-HOT) updates can be cut roughly in half.
+
+Expression and Partial Indexes
+------------------------------
+
+Expressions may evaluate to the same value even if the underlying column
+values have changed. A simple example is an index on "lower(col)" which
+will return the same value if the new heap value differs only in
+case. So we cannot rely solely on the heap column check to
+decide whether or not to insert a new index entry for expression
+indexes. Similarly, for partial indexes, the predicate expression must
+be evaluated to decide whether or not to cause a new index entry when
+columns referred in the predicate expressions change.
+
+(None of these things are currently implemented and we squarely disallow
+WARM update if a column from expression indexes or predicate has
+changed).
+
+
+Efficiently Finding the Root Line Pointer
+-----------------------------------------
+
+During WARM update, we must be able to find the root line pointer of the
+tuple being updated. It must be noted that the t_ctid field in the heap
+tuple header is usually used to find the next tuple in the update chain.
+But the tuple that we are updating must be the last tuple in the update
+chain, and in that case t_ctid traditionally points to the tuple itself.
+So in theory, we could use the t_ctid to store additional information in
+the last tuple of the update chain, if the information about the tuple
+being the last tuple is stored elsewhere.
+
+We now utilize another bit from t_infomask2 to explicitly identify that
+this is the last tuple in the update chain.
+
+HEAP_LATEST_TUPLE - When this bit is set, the tuple is the last tuple in
+the update chain. The OffsetNumber part of t_ctid points to the root
+line pointer of the chain when HEAP_LATEST_TUPLE flag is set.
+
+If UPDATE operation is aborted, the last tuple in the update chain
+becomes dead. The root line pointer information stored in the tuple
+which remains the last valid tuple in the chain is also lost. In such
+rare cases, the root line pointer must be found in a hard way by
+scanning the entire heap page.
+
+Tracking WARM Chains
+--------------------
+
+The old and every subsequent tuple in the chain is marked with a special
+HEAP_WARM_TUPLE flag. We use the last remaining bit in t_infomask2 to
+store this information.
+
+When a tuple is returned from a WARM chain, the caller must do
+additional checks to ensure that the tuple matches the index key. Even
+if the tuple precedes the WARM update in the chain, it must still be
+rechecked for an index key match (the case when the old tuple is reached
+via the new index key). So we must always follow the update chain to the
+end to check whether this is a WARM chain.
+
+When the old updated tuple is retired and the root line pointer is
+converted into a redirected line pointer, we can copy the information
+about WARM chain to the redirected line pointer by storing a special
+value in the lp_len field of the line pointer. This will handle the most
+common case where a WARM chain is replaced by a redirect line pointer
+and a single tuple in the chain.
+
+Converting WARM chains back to HOT chains (VACUUM ?)
+----------------------------------------------------
+
+The current implementation of WARM allows only one WARM update per
+chain. This simplifies the design and addresses certain issues around
+duplicate scans. But this also implies that the benefit of WARM will be
+no more than 50%. That is still significant, but if we could return
+WARM chains to normal status, we could do far more WARM updates.
+
+A distinct property of a WARM chain is that at least one index has more
+than one live index entry pointing to the root of the chain. In other
+words, if we can remove duplicate entry from every index or conclusively
+prove that there are no duplicate index entries for the root line
+pointer, the chain can again be marked as HOT.
+
+Here is one idea:
+
+A WARM chain has two parts, separated by the tuple that caused the WARM
+update. All tuples in each part have matching index keys, but certain
+index keys may not match between the two parts. Let's say we mark heap
+tuples in each part with a special Red-Blue flag. The same flag is
+replicated in the index tuples. For example, when new rows are inserted
+in a table, they are marked with Blue flag and the index entries
+associated with those rows are also marked with Blue flag. When a row is
+WARM updated, the new version is marked with Red flag and the new index
+entry created by the update is also marked with Red flag.
+
+
+Heap chain: [1] [2] [3] [4]
+			[aaaa, 1111]B -> [aaaa, 1111]B -> [bbbb, 1111]R -> [bbbb, 1111]R
+
+Index1: 	(aaaa)B points to 1 (satisfies only tuples marked with B)
+			(bbbb)R points to 1 (satisfies only tuples marked with R)
+
+Index2:		(1111)B points to 1 (satisfies both B and R tuples)
+
+
+It's clear that for indexes with Red and Blue pointers, a heap tuple
+with Blue flag will be reachable from Blue pointer and that with Red
+flag will be reachable from Red pointer. But for indexes which did not
+create a new entry, both Blue and Red tuples will be reachable from Blue
+pointer (there is no Red pointer in such indexes). So, as a side note,
+matching Red and Blue flags is not enough from index scan perspective.
+
+During first heap scan of VACUUM, we look for tuples with
+HEAP_WARM_TUPLE set.  If all live tuples in the chain are either marked
+with Blue flag or Red flag (but no mix of Red and Blue), then the chain
+is a candidate for HOT conversion.  We remember the root line pointer
+and Red-Blue flag of the WARM chain in a separate array.
+
+If we have a Red WARM chain, then our goal is to remove Blue pointers
+and vice versa. But there is a catch. For Index2 above, there is only a
+Blue pointer, and that must not be removed. IOW we should remove a Blue
+pointer iff a Red pointer exists. Since index vacuum may visit Red and
+Blue pointers in any order, I think we will need another index pass to
+remove dead index pointers. So in the first index pass we check which
+WARM candidates have 2 index pointers. In the second pass, we remove the
+dead pointer and reset the Red flag if the surviving index pointer is Red.
+
+During the second heap scan, we fix the WARM chain by clearing the
+HEAP_WARM_TUPLE flag and also resetting the Red flag to Blue.
+
+There are some more problems around aborted vacuums. For example, if
+vacuum aborts after changing Red index flag to Blue but before removing
+the other Blue pointer, we will end up with two Blue pointers to a Red
+WARM chain. But since the HEAP_WARM_TUPLE flag on the heap tuple is
+still set, further WARM updates to the chain will be blocked. I guess we
+will need some special handling for case with multiple Blue pointers. We
+can either leave these WARM chains alone and let them die with a
+subsequent non-WARM update or must apply heap-recheck logic during index
+vacuum to find the dead pointer. Given that vacuum-aborts are not
+common, I am inclined to leave this case unhandled. We must still check
+for the presence of multiple Blue pointers and ensure that we neither
+accidentally remove either of the Blue pointers nor clear the WARM
+chain flags.
diff --git b/src/backend/access/heap/heapam.c a/src/backend/access/heap/heapam.c
index a22aae7..082bd1f 100644
--- b/src/backend/access/heap/heapam.c
+++ a/src/backend/access/heap/heapam.c
@@ -1957,6 +1957,76 @@ heap_fetch(Relation relation,
 }
 
 /*
+ * Check if the HOT chain originating or continuing at tid ever became a
+ * WARM chain, even if the actual UPDATE operation finally aborted.
+ */
+static void
+hot_check_warm_chain(Page dp, ItemPointer tid, bool *recheck)
+{
+	TransactionId prev_xmax = InvalidTransactionId;
+	OffsetNumber offnum;
+	HeapTupleData heapTuple;
+
+	if (*recheck == true)
+		return;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+			break;
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		/*
+		 * Presence of either WARM or WARM updated tuple signals possible
+		 * breakage and the caller must recheck tuple returned from this chain
+		 * for index satisfaction
+		 */
+		if (HeapTupleHeaderIsHeapWarmTuple(heapTuple.t_data))
+		{
+			*recheck = true;
+			break;
+		}
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (HeapTupleIsHotUpdated(&heapTuple))
+		{
+			offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+			prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+		}
+		else
+			break;				/* end of chain */
+	}
+
+}
+
+/*
  *	heap_hot_search_buffer	- search HOT chain for tuple satisfying snapshot
  *
  * On entry, *tid is the TID of a tuple (either a simple tuple, or the root
@@ -1976,11 +2046,14 @@ heap_fetch(Relation relation,
  * Unlike heap_fetch, the caller must already have pin and (at least) share
  * lock on the buffer; it is still pinned/locked at exit.  Also unlike
  * heap_fetch, we do not report any pgstats count; caller may do so if wanted.
+ *
+ * recheck should be set false on entry by caller, will be set true on exit
+ * if a WARM tuple is encountered.
  */
 bool
 heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 					   Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call)
+					   bool *all_dead, bool first_call, bool *recheck)
 {
 	Page		dp = (Page) BufferGetPage(buffer);
 	TransactionId prev_xmax = InvalidTransactionId;
@@ -2022,6 +2095,16 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 				/* Follow the redirect */
 				offnum = ItemIdGetRedirect(lp);
 				at_chain_start = false;
+
+				/* Check if it's a WARM chain */
+				if (recheck && *recheck == false)
+				{
+					if (ItemIdIsHeapWarm(lp))
+					{
+						*recheck = true;
+						Assert(!IsSystemRelation(relation));
+					}
+				}
 				continue;
 			}
 			/* else must be end of chain */
@@ -2034,9 +2117,12 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		ItemPointerSetOffsetNumber(&heapTuple->t_self, offnum);
 
 		/*
-		 * Shouldn't see a HEAP_ONLY tuple at chain start.
+		 * Shouldn't see a HEAP_ONLY tuple at chain start, unless we are
+		 * dealing with a WARM updated tuple in which case deferred triggers
+		 * may request to fetch a WARM tuple from the middle of a chain.
 		 */
-		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple))
+		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple) &&
+				!HeapTupleIsHeapWarmTuple(heapTuple))
 			break;
 
 		/*
@@ -2049,6 +2135,22 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 			break;
 
 		/*
+		 * Check if there exists a WARM tuple somewhere down the chain and set
+		 * recheck to TRUE.
+		 *
+		 * XXX This is not very efficient right now, and we should look for
+		 * possible improvements here
+		 */
+		if (recheck && *recheck == false)
+		{
+			hot_check_warm_chain(dp, &heapTuple->t_self, recheck);
+
+			/* WARM is not supported on system tables yet */
+			if (*recheck == true)
+				Assert(!IsSystemRelation(relation));
+		}
+
+		/*
 		 * When first_call is true (and thus, skip is initially false) we'll
 		 * return the first tuple we find.  But on later passes, heapTuple
 		 * will initially be pointing to the tuple we returned last time.
@@ -2121,18 +2223,41 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
  */
 bool
 heap_hot_search(ItemPointer tid, Relation relation, Snapshot snapshot,
-				bool *all_dead)
+				bool *all_dead, bool *recheck, Buffer *cbuffer,
+				HeapTuple heapTuple)
 {
 	bool		result;
 	Buffer		buffer;
-	HeapTupleData heapTuple;
+	ItemPointerData ret_tid = *tid;
 
 	buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	LockBuffer(buffer, BUFFER_LOCK_SHARE);
-	result = heap_hot_search_buffer(tid, relation, buffer, snapshot,
-									&heapTuple, all_dead, true);
-	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-	ReleaseBuffer(buffer);
+	result = heap_hot_search_buffer(&ret_tid, relation, buffer, snapshot,
+									heapTuple, all_dead, true, recheck);
+
+	/*
+	 * If we are returning a potential candidate tuple from this chain and the
+	 * caller has requested the "recheck" hint, keep the buffer locked and
+	 * pinned. The caller must release the lock and pin on the buffer in
+	 * such cases.
+	 */
+	if (!result || !recheck || !(*recheck))
+	{
+		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+		ReleaseBuffer(buffer);
+	}
+
+	/*
+	 * Set the caller supplied tid with the actual location of the tuple being
+	 * returned
+	 */
+	if (result)
+	{
+		*tid = ret_tid;
+		if (cbuffer)
+			*cbuffer = buffer;
+	}
+
 	return result;
 }
 
@@ -3439,13 +3564,15 @@ simple_heap_delete(Relation relation, ItemPointer tid)
 HTSU_Result
 heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode)
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update)
 {
 	HTSU_Result result;
 	TransactionId xid = GetCurrentTransactionId();
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *exprindx_attrs;
 	Bitmapset  *interesting_attrs;
 	Bitmapset  *modified_attrs;
 	ItemId		lp;
@@ -3468,6 +3595,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		have_tuple_lock = false;
 	bool		iscombo;
 	bool		use_hot_update = false;
+	bool		use_warm_update = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
 	bool		all_visible_cleared_new = false;
@@ -3492,6 +3620,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot update tuples during a parallel operation")));
 
+	/* Assume no WARM update */
+	if (warm_update)
+		*warm_update = false;
+
 	/*
 	 * Fetch the list of attributes to be checked for various operations.
 	 *
@@ -3513,10 +3645,13 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	exprindx_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_EXPR_PREDICATE);
+
 	interesting_attrs = bms_add_members(NULL, hot_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
-
+	interesting_attrs = bms_add_members(interesting_attrs, exprindx_attrs);
 
 	block = ItemPointerGetBlockNumber(otid);
 	offnum = ItemPointerGetOffsetNumber(otid);
@@ -3568,6 +3703,9 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
 												  &oldtup, newtup);
 
+	if (modified_attrsp)
+		*modified_attrsp = bms_copy(modified_attrs);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3818,6 +3956,7 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(exprindx_attrs);
 		bms_free(modified_attrs);
 		bms_free(interesting_attrs);
 		return result;
@@ -4126,6 +4265,36 @@ l2:
 		 */
 		if (!bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
+		else
+		{
+			/*
+			 * If no WARM updates yet on this chain, let this update be a WARM
+			 * update.
+			 *
+			 * We check for both warm and warm updated tuples since if the
+			 * previous WARM update aborted, we may still have added
+			 * another index entry for this HOT chain. In such situations, we
+			 * must not attempt a WARM update until duplicate (key, CTID) index
+			 * entry issue is sorted out
+			 *
+			 * XXX Later we'll add more checks to ensure WARM chains can
+			 * further be WARM updated. This is probably good enough for a first
+			 * round of tests of the remaining functionality.
+			 *
+			 * XXX Disable WARM updates on system tables. There is nothing in
+			 * principle that stops us from supporting this. But it would
+			 * require an API change to propagate the changed columns back to the
+			 * caller so that CatalogUpdateIndexes() can avoid adding new
+			 * entries to indexes that are not changed by update. This will be
+			 * fixed once basic patch is tested. !!FIXME
+			 */
+			if (relation->rd_supportswarm &&
+				!bms_overlap(modified_attrs, exprindx_attrs) &&
+				!bms_is_subset(hot_attrs, modified_attrs) &&
+				!HeapTupleIsHeapWarmTuple(&oldtup) &&
+				!IsSystemRelation(relation))
+				use_warm_update = true;
+		}
 	}
 	else
 	{
@@ -4168,6 +4337,21 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+
+		/*
+		 * Even if we are doing a HOT update, we must carry forward the WARM
+		 * flag because we may have already inserted another index entry
+		 * pointing to our root and a third entry may create duplicates
+		 *
+		 * XXX This should be revisited if we get index (key, CTID) duplicate
+		 * detection mechanism in place
+		 */
+		if (HeapTupleIsHeapWarmTuple(&oldtup))
+		{
+			HeapTupleSetHeapWarmTuple(heaptup);
+			HeapTupleSetHeapWarmTuple(newtup);
+		}
+
 		/*
 		 * For HOT (or WARM) updated tuples, we store the offset of the root
 		 * line pointer of this chain in the ip_posid field of the new tuple.
@@ -4183,12 +4367,38 @@ l2:
 					ItemPointerGetOffsetNumber(&(oldtup.t_self)),
 					&root_offnum);
 	}
+	else if (use_warm_update)
+	{
+		Assert(!IsSystemRelation(relation));
+
+		/* Mark the old tuple as HOT-updated */
+		HeapTupleSetHotUpdated(&oldtup);
+		HeapTupleSetHeapWarmTuple(&oldtup);
+		/* And mark the new tuple as heap-only */
+		HeapTupleSetHeapOnly(heaptup);
+		HeapTupleSetHeapWarmTuple(heaptup);
+		/* Mark the caller's copy too, in case different from heaptup */
+		HeapTupleSetHeapOnly(newtup);
+		HeapTupleSetHeapWarmTuple(newtup);
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			heap_get_root_tuple_one(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)),
+					&root_offnum);
+
+		/* Let the caller know we did a WARM update */
+		if (warm_update)
+			*warm_update = true;
+	}
 	else
 	{
 		/* Make sure tuples are correctly marked as not-HOT */
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		HeapTupleClearHeapWarmTuple(heaptup);
+		HeapTupleClearHeapWarmTuple(newtup);
 		root_offnum = InvalidOffsetNumber;
 	}
 
@@ -4307,7 +4517,10 @@ l2:
 	if (have_tuple_lock)
 		UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode);
 
-	pgstat_count_heap_update(relation, use_hot_update);
+	/* Count HOT and WARM updates separately */
+	pgstat_count_heap_update(relation, use_hot_update, use_warm_update);
 
 	/*
 	 * If heaptup is a private copy, release it.  Don't forget to copy t_self
@@ -4456,7 +4669,7 @@ simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
 	result = heap_update(relation, otid, tup,
 						 GetCurrentCommandId(true), InvalidSnapshot,
 						 true /* wait for commit */ ,
-						 &hufd, &lockmode);
+						 &hufd, &lockmode, NULL, NULL);
 	switch (result)
 	{
 		case HeapTupleSelfUpdated:
@@ -7354,6 +7567,7 @@ log_heap_cleanup_info(RelFileNode rnode, TransactionId latestRemovedXid)
 XLogRecPtr
 log_heap_clean(Relation reln, Buffer buffer,
 			   OffsetNumber *redirected, int nredirected,
+			   OffsetNumber *warm, int nwarm,
 			   OffsetNumber *nowdead, int ndead,
 			   OffsetNumber *nowunused, int nunused,
 			   TransactionId latestRemovedXid)
@@ -7367,6 +7581,7 @@ log_heap_clean(Relation reln, Buffer buffer,
 	xlrec.latestRemovedXid = latestRemovedXid;
 	xlrec.nredirected = nredirected;
 	xlrec.ndead = ndead;
+	xlrec.nwarm = nwarm;
 
 	XLogBeginInsert();
 	XLogRegisterData((char *) &xlrec, SizeOfHeapClean);
@@ -7389,6 +7604,10 @@ log_heap_clean(Relation reln, Buffer buffer,
 		XLogRegisterBufData(0, (char *) nowdead,
 							ndead * sizeof(OffsetNumber));
 
+	if (nwarm > 0)
+		XLogRegisterBufData(0, (char *) warm,
+							nwarm * sizeof(OffsetNumber));
+
 	if (nunused > 0)
 		XLogRegisterBufData(0, (char *) nowunused,
 							nunused * sizeof(OffsetNumber));
@@ -7494,6 +7713,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	bool		need_tuple_data = RelationIsLogicallyLogged(reln);
 	bool		init;
 	int			bufflags;
+	bool		warm_update = false;
 
 	/* Caller should not call me on a non-WAL-logged relation */
 	Assert(RelationNeedsWAL(reln));
@@ -7505,6 +7725,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	else
 		info = XLOG_HEAP_UPDATE;
 
+	if (HeapTupleIsHeapWarmTuple(newtup))
+		warm_update = true;
+
 	/*
 	 * If the old and new tuple are on the same page, we only need to log the
 	 * parts of the new tuple that were changed.  That saves on the amount of
@@ -7578,6 +7801,8 @@ log_heap_update(Relation reln, Buffer oldbuf,
 				xlrec.flags |= XLH_UPDATE_CONTAINS_OLD_KEY;
 		}
 	}
+	if (warm_update)
+		xlrec.flags |= XLH_UPDATE_WARM_UPDATE;
 
 	/* If new tuple is the single and first tuple on page... */
 	if (ItemPointerGetOffsetNumber(&(newtup->t_self)) == FirstOffsetNumber &&
@@ -7945,24 +8170,38 @@ heap_xlog_clean(XLogReaderState *record)
 		OffsetNumber *redirected;
 		OffsetNumber *nowdead;
 		OffsetNumber *nowunused;
+		OffsetNumber *warm;
 		int			nredirected;
 		int			ndead;
 		int			nunused;
+		int			nwarm;
+		int			i;
 		Size		datalen;
+		bool		warmchain[MaxHeapTuplesPerPage + 1];
 
 		redirected = (OffsetNumber *) XLogRecGetBlockData(record, 0, &datalen);
 
 		nredirected = xlrec->nredirected;
 		ndead = xlrec->ndead;
+		nwarm = xlrec->nwarm;
+
 		end = (OffsetNumber *) ((char *) redirected + datalen);
 		nowdead = redirected + (nredirected * 2);
-		nowunused = nowdead + ndead;
-		nunused = (end - nowunused);
+		warm = nowdead + ndead;
+		nowunused = warm + nwarm;
+
+		nunused = (end - nowunused);
 		Assert(nunused >= 0);
 
+		memset(warmchain, 0, sizeof(warmchain));
+		for (i = 0; i < nwarm; i++)
+			warmchain[warm[i]] = true;
+
 		/* Update all item pointers per the record, and repair fragmentation */
 		heap_page_prune_execute(buffer,
 								redirected, nredirected,
+								warmchain,
 								nowdead, ndead,
 								nowunused, nunused);
 
@@ -8549,16 +8788,22 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 	Size		freespace = 0;
 	XLogRedoAction oldaction;
 	XLogRedoAction newaction;
+	bool		warm_update = false;
 
 	/* initialize to keep the compiler quiet */
 	oldtup.t_data = NULL;
 	oldtup.t_len = 0;
 
+	if (xlrec->flags & XLH_UPDATE_WARM_UPDATE)
+		warm_update = true;
+
 	XLogRecGetBlockTag(record, 0, &rnode, NULL, &newblk);
 	if (XLogRecGetBlockTag(record, 1, NULL, NULL, &oldblk))
 	{
 		/* HOT updates are never done across pages */
 		Assert(!hot_update);
+		/* WARM updates are never done across pages */
+		Assert(!warm_update);
 	}
 	else
 		oldblk = newblk;
@@ -8618,6 +8863,11 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 								   &htup->t_infomask2);
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
+
+		/* Mark the old tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetHeapWarmTuple(htup);
+
 		/* Set forward chain link in t_ctid */
 		HeapTupleHeaderSetNextCtid(htup, ItemPointerGetBlockNumber(&newtid),
 				ItemPointerGetOffsetNumber(&newtid));
@@ -8753,6 +9003,11 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
+
+		/* Mark the new tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetHeapWarmTuple(htup);
+
 		/* Make sure there is no forward chain link in t_ctid */
 		HeapTupleHeaderSetHeapLatest(htup);
 
diff --git b/src/backend/access/heap/pruneheap.c a/src/backend/access/heap/pruneheap.c
index 7c2231a..d71a297 100644
--- b/src/backend/access/heap/pruneheap.c
+++ a/src/backend/access/heap/pruneheap.c
@@ -36,12 +36,19 @@ typedef struct
 	int			nredirected;	/* numbers of entries in arrays below */
 	int			ndead;
 	int			nunused;
+	int			nwarm;
 	/* arrays that accumulate indexes of items to be changed */
 	OffsetNumber redirected[MaxHeapTuplesPerPage * 2];
 	OffsetNumber nowdead[MaxHeapTuplesPerPage];
 	OffsetNumber nowunused[MaxHeapTuplesPerPage];
+	OffsetNumber warm[MaxHeapTuplesPerPage];
 	/* marked[i] is TRUE if item i is entered in one of the above arrays */
 	bool		marked[MaxHeapTuplesPerPage + 1];
+	/*
+	 * warmchain[i] is TRUE if item i is becoming a redirected lp and points
+	 * to a WARM chain
+	 */
+	bool		warmchain[MaxHeapTuplesPerPage + 1];
 } PruneState;
 
 /* Local functions */
@@ -54,6 +61,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 						   OffsetNumber offnum, OffsetNumber rdoffnum);
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
+static void heap_prune_record_warmupdate(PruneState *prstate,
+						   OffsetNumber offnum);
 
 static void heap_get_root_tuples_internal(Page page,
 				OffsetNumber target_offnum, OffsetNumber *root_offsets);
@@ -203,8 +212,9 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
 	 */
 	prstate.new_prune_xid = InvalidTransactionId;
 	prstate.latestRemovedXid = *latestRemovedXid;
-	prstate.nredirected = prstate.ndead = prstate.nunused = 0;
+	prstate.nredirected = prstate.ndead = prstate.nunused = prstate.nwarm = 0;
 	memset(prstate.marked, 0, sizeof(prstate.marked));
+	memset(prstate.warmchain, 0, sizeof(prstate.warmchain));
 
 	/* Scan the page */
 	maxoff = PageGetMaxOffsetNumber(page);
@@ -241,6 +251,7 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
 		 */
 		heap_page_prune_execute(buffer,
 								prstate.redirected, prstate.nredirected,
+								prstate.warmchain,
 								prstate.nowdead, prstate.ndead,
 								prstate.nowunused, prstate.nunused);
 
@@ -268,6 +279,7 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
 
 			recptr = log_heap_clean(relation, buffer,
 									prstate.redirected, prstate.nredirected,
+									prstate.warm, prstate.nwarm,
 									prstate.nowdead, prstate.ndead,
 									prstate.nowunused, prstate.nunused,
 									prstate.latestRemovedXid);
@@ -479,6 +491,12 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
 			!TransactionIdEquals(HeapTupleHeaderGetXmin(htup), priorXmax))
 			break;
 
+		if (HeapTupleHeaderIsHeapWarmTuple(htup))
+		{
+			Assert(!IsSystemRelation(relation));
+			heap_prune_record_warmupdate(prstate, rootoffnum);
+		}
+
 		/*
 		 * OK, this tuple is indeed a member of the chain.
 		 */
@@ -668,6 +686,18 @@ heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum)
 	prstate->marked[offnum] = true;
 }
 
+/* Record item pointer which is a root of a WARM chain */
+static void
+heap_prune_record_warmupdate(PruneState *prstate, OffsetNumber offnum)
+{
+	Assert(prstate->nwarm < MaxHeapTuplesPerPage);
+	if (prstate->warmchain[offnum])
+		return;
+	prstate->warm[prstate->nwarm] = offnum;
+	prstate->nwarm++;
+	prstate->warmchain[offnum] = true;
+}
+
 
 /*
  * Perform the actual page changes needed by heap_page_prune.
@@ -681,6 +711,7 @@ heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum)
 void
 heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
+						bool *warmchain,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused)
 {
@@ -697,6 +728,12 @@ heap_page_prune_execute(Buffer buffer,
 		ItemId		fromlp = PageGetItemId(page, fromoff);
 
 		ItemIdSetRedirect(fromlp, tooff);
+
+		/* Save information about WARM chains in the item itself */
+		if (warmchain[fromoff])
+			ItemIdSetHeapWarm(fromlp);
 	}
 
 	/* Update all now-dead line pointers */
diff --git b/src/backend/access/index/genam.c a/src/backend/access/index/genam.c
index 65c941d..4f9fb12 100644
--- b/src/backend/access/index/genam.c
+++ a/src/backend/access/index/genam.c
@@ -99,7 +99,7 @@ RelationGetIndexScan(Relation indexRelation, int nkeys, int norderbys)
 	else
 		scan->orderByData = NULL;
 
-	scan->xs_want_itup = false; /* may be set later */
+	scan->xs_want_itup = true; /* hack for now to always get index tuple */
 
 	/*
 	 * During recovery we ignore killed tuples and don't bother to kill them
diff --git b/src/backend/access/index/indexam.c a/src/backend/access/index/indexam.c
index 54b71cb..3f9a0cf 100644
--- b/src/backend/access/index/indexam.c
+++ a/src/backend/access/index/indexam.c
@@ -71,10 +71,12 @@
 #include "access/xlog.h"
 #include "catalog/catalog.h"
 #include "catalog/index.h"
+#include "executor/executor.h"
 #include "pgstat.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
+#include "utils/datum.h"
 #include "utils/snapmgr.h"
 #include "utils/tqual.h"
 
@@ -228,6 +230,20 @@ index_beginscan(Relation heapRelation,
 	scan->heapRelation = heapRelation;
 	scan->xs_snapshot = snapshot;
 
+	/*
+	 * If the index supports recheck, make sure the index tuple is saved
+	 * during index scans.
+	 *
+	 * XXX Ideally, we should look at all indexes on the table and check
+	 * whether WARM is supported on the base table at all. If WARM is not
+	 * supported then we don't need to do any recheck.
+	 * RelationGetIndexAttrBitmap() does do that and sets rd_supportswarm
+	 * after looking at all indexes, but we can't be sure that it has been
+	 * called by this point, and we can't call it now because of the risk of
+	 * deadlock.
+	 */
+	if (indexRelation->rd_amroutine->amrecheck)
+		scan->xs_want_itup = true;
+
 	return scan;
 }
 
@@ -409,7 +425,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
 	/*
 	 * The AM's amgettuple proc finds the next index entry matching the scan
 	 * keys, and puts the TID into scan->xs_ctup.t_self.  It should also set
-	 * scan->xs_recheck and possibly scan->xs_itup, though we pay no attention
+	 * scan->xs_tuple_recheck and possibly scan->xs_itup, though we pay no attention
 	 * to those fields here.
 	 */
 	found = scan->indexRelation->rd_amroutine->amgettuple(scan, direction);
@@ -448,7 +464,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
  * dropped in a future index_getnext_tid, index_fetch_heap or index_endscan
  * call).
  *
- * Note: caller must check scan->xs_recheck, and perform rechecking of the
+ * Note: caller must check scan->xs_tuple_recheck, and perform rechecking of the
  * scan keys if required.  We do not do that here because we don't have
  * enough information to do it efficiently in the general case.
  * ----------------
@@ -475,6 +491,15 @@ index_fetch_heap(IndexScanDesc scan)
 		 */
 		if (prev_buf != scan->xs_cbuf)
 			heap_page_prune_opt(scan->heapRelation, scan->xs_cbuf);
+
+		/*
+		 * Unless this scan always rechecks, reset the per-tuple recheck
+		 * flag for this tuple
+		 */
+		scan->xs_tuple_recheck = scan->xs_recheck;
+
 	}
 
 	/* Obtain share-lock on the buffer so we can examine visibility */
@@ -484,32 +509,64 @@ index_fetch_heap(IndexScanDesc scan)
 											scan->xs_snapshot,
 											&scan->xs_ctup,
 											&all_dead,
-											!scan->xs_continue_hot);
+											!scan->xs_continue_hot,
+											&scan->xs_tuple_recheck);
 	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
 
 	if (got_heap_tuple)
 	{
+		bool res = true;
+
+		/*
+		 * OK, we got a tuple which satisfies the snapshot, but if it's part
+		 * of a WARM chain, we must do additional checks to ensure that we
+		 * are indeed returning a correct tuple. Note that if the index AM
+		 * does not implement the amrecheck method, we skip the additional
+		 * checks, since WARM must have been disabled on such tables.
+		 *
+		 * XXX What happens when a new index which does not support amrecheck
+		 * is added to the table? Do we need to handle this case, or are
+		 * CREATE INDEX and CREATE INDEX CONCURRENTLY smart enough to handle
+		 * this issue?
+		 */
+		if (scan->xs_tuple_recheck &&
+				scan->xs_itup &&
+				scan->indexRelation->rd_amroutine->amrecheck)
+		{
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_SHARE);
+			res = scan->indexRelation->rd_amroutine->amrecheck(
+						scan->indexRelation,
+						scan->xs_itup,
+						scan->heapRelation,
+						&scan->xs_ctup);
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+		}
+
 		/*
 		 * Only in a non-MVCC snapshot can more than one member of the HOT
 		 * chain be visible.
 		 */
 		scan->xs_continue_hot = !IsMVCCSnapshot(scan->xs_snapshot);
 		pgstat_count_heap_fetch(scan->indexRelation);
-		return &scan->xs_ctup;
-	}
 
-	/* We've reached the end of the HOT chain. */
-	scan->xs_continue_hot = false;
+		if (res)
+			return &scan->xs_ctup;
+	}
+	else
+	{
+		/* We've reached the end of the HOT chain. */
+		scan->xs_continue_hot = false;
 
-	/*
-	 * If we scanned a whole HOT chain and found only dead tuples, tell index
-	 * AM to kill its entry for that TID (this will take effect in the next
-	 * amgettuple call, in index_getnext_tid).  We do not do this when in
-	 * recovery because it may violate MVCC to do so.  See comments in
-	 * RelationGetIndexScan().
-	 */
-	if (!scan->xactStartedInRecovery)
-		scan->kill_prior_tuple = all_dead;
+		/*
+		 * If we scanned a whole HOT chain and found only dead tuples, tell index
+		 * AM to kill its entry for that TID (this will take effect in the next
+		 * amgettuple call, in index_getnext_tid).  We do not do this when in
+		 * recovery because it may violate MVCC to do so.  See comments in
+		 * RelationGetIndexScan().
+		 */
+		if (!scan->xactStartedInRecovery)
+			scan->kill_prior_tuple = all_dead;
+	}
 
 	return NULL;
 }
diff --git b/src/backend/access/nbtree/nbtinsert.c a/src/backend/access/nbtree/nbtinsert.c
index ef69290..e0afffd 100644
--- b/src/backend/access/nbtree/nbtinsert.c
+++ a/src/backend/access/nbtree/nbtinsert.c
@@ -19,11 +19,14 @@
 #include "access/nbtree.h"
 #include "access/transam.h"
 #include "access/xloginsert.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
 #include "miscadmin.h"
+#include "nodes/execnodes.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
 #include "utils/tqual.h"
-
+#include "utils/datum.h"
 
 typedef struct
 {
@@ -249,6 +252,9 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 	BTPageOpaque opaque;
 	Buffer		nbuf = InvalidBuffer;
 	bool		found = false;
+	Buffer		buffer;
+	HeapTupleData	heapTuple;
+	bool		recheck = false;
 
 	/* Assume unique until we find a duplicate */
 	*is_unique = true;
@@ -308,6 +314,8 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				curitup = (IndexTuple) PageGetItem(page, curitemid);
 				htid = curitup->t_tid;
 
+				recheck = false;
+
 				/*
 				 * If we are doing a recheck, we expect to find the tuple we
 				 * are rechecking.  It's not a duplicate, but we have to keep
@@ -325,112 +333,153 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				 * have just a single index entry for the entire chain.
 				 */
 				else if (heap_hot_search(&htid, heapRel, &SnapshotDirty,
-										 &all_dead))
+							&all_dead, &recheck, &buffer,
+							&heapTuple))
 				{
 					TransactionId xwait;
+					bool result = true;
 
 					/*
-					 * It is a duplicate. If we are only doing a partial
-					 * check, then don't bother checking if the tuple is being
-					 * updated in another transaction. Just return the fact
-					 * that it is a potential conflict and leave the full
-					 * check till later.
+					 * If the tuple was WARM updated, we may see our own
+					 * tuple again. Since WARM updates don't create new index
+					 * entries, our own tuple is only reachable via the old
+					 * index pointer.
-					if (checkUnique == UNIQUE_CHECK_PARTIAL)
+					if (checkUnique == UNIQUE_CHECK_EXISTING &&
+							ItemPointerCompare(&htid, &itup->t_tid) == 0)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						*is_unique = false;
-						return InvalidTransactionId;
+						found = true;
+						result = false;
+						if (recheck)
+							UnlockReleaseBuffer(buffer);
 					}
-
-					/*
-					 * If this tuple is being updated by other transaction
-					 * then we have to wait for its commit/abort.
-					 */
-					xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
-						SnapshotDirty.xmin : SnapshotDirty.xmax;
-
-					if (TransactionIdIsValid(xwait))
+					else if (recheck)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						/* Tell _bt_doinsert to wait... */
-						*speculativeToken = SnapshotDirty.speculativeToken;
-						return xwait;
+						result = btrecheck(rel, curitup, heapRel, &heapTuple);
+						UnlockReleaseBuffer(buffer);
 					}
 
-					/*
-					 * Otherwise we have a definite conflict.  But before
-					 * complaining, look to see if the tuple we want to insert
-					 * is itself now committed dead --- if so, don't complain.
-					 * This is a waste of time in normal scenarios but we must
-					 * do it to support CREATE INDEX CONCURRENTLY.
-					 *
-					 * We must follow HOT-chains here because during
-					 * concurrent index build, we insert the root TID though
-					 * the actual tuple may be somewhere in the HOT-chain.
-					 * While following the chain we might not stop at the
-					 * exact tuple which triggered the insert, but that's OK
-					 * because if we find a live tuple anywhere in this chain,
-					 * we have a unique key conflict.  The other live tuple is
-					 * not part of this chain because it had a different index
-					 * entry.
-					 */
-					htid = itup->t_tid;
-					if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL))
-					{
-						/* Normal case --- it's still live */
-					}
-					else
+					if (result)
 					{
 						/*
-						 * It's been deleted, so no error, and no need to
-						 * continue searching
+						 * It is a duplicate. If we are only doing a partial
+						 * check, then don't bother checking if the tuple is being
+						 * updated in another transaction. Just return the fact
+						 * that it is a potential conflict and leave the full
+						 * check till later.
 						 */
-						break;
-					}
+						if (checkUnique == UNIQUE_CHECK_PARTIAL)
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							*is_unique = false;
+							return InvalidTransactionId;
+						}
 
-					/*
-					 * Check for a conflict-in as we would if we were going to
-					 * write to this page.  We aren't actually going to write,
-					 * but we want a chance to report SSI conflicts that would
-					 * otherwise be masked by this unique constraint
-					 * violation.
-					 */
-					CheckForSerializableConflictIn(rel, NULL, buf);
+						/*
+						 * If this tuple is being updated by other transaction
+						 * then we have to wait for its commit/abort.
+						 */
+						xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
+							SnapshotDirty.xmin : SnapshotDirty.xmax;
+
+						if (TransactionIdIsValid(xwait))
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							/* Tell _bt_doinsert to wait... */
+							*speculativeToken = SnapshotDirty.speculativeToken;
+							return xwait;
+						}
 
-					/*
-					 * This is a definite conflict.  Break the tuple down into
-					 * datums and report the error.  But first, make sure we
-					 * release the buffer locks we're holding ---
-					 * BuildIndexValueDescription could make catalog accesses,
-					 * which in the worst case might touch this same index and
-					 * cause deadlocks.
-					 */
-					if (nbuf != InvalidBuffer)
-						_bt_relbuf(rel, nbuf);
-					_bt_relbuf(rel, buf);
+						/*
+						 * Otherwise we have a definite conflict.  But before
+						 * complaining, look to see if the tuple we want to insert
+						 * is itself now committed dead --- if so, don't complain.
+						 * This is a waste of time in normal scenarios but we must
+						 * do it to support CREATE INDEX CONCURRENTLY.
+						 *
+						 * We must follow HOT-chains here because during
+						 * concurrent index build, we insert the root TID though
+						 * the actual tuple may be somewhere in the HOT-chain.
+						 * While following the chain we might not stop at the
+						 * exact tuple which triggered the insert, but that's OK
+						 * because if we find a live tuple anywhere in this chain,
+						 * we have a unique key conflict.  The other live tuple is
+						 * not part of this chain because it had a different index
+						 * entry.
+						 */
+						recheck = false;
+						ItemPointerCopy(&itup->t_tid, &htid);
+						if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL,
+									&recheck, &buffer, &heapTuple))
+						{
+							bool result = true;
+							if (recheck)
+							{
+								/*
+								 * Recheck that the tuple actually satisfies
+								 * the index key. Otherwise, we might be
+								 * following a wrong index pointer and must
+								 * not entertain this tuple.
+								 */
+								result = btrecheck(rel, itup, heapRel, &heapTuple);
+								UnlockReleaseBuffer(buffer);
+							}
+							if (!result)
+								break;
+							/* Normal case --- it's still live */
+						}
+						else
+						{
+							/*
+							 * It's been deleted, so no error, and no need to
+							 * continue searching
+							 */
+							break;
+						}
 
-					{
-						Datum		values[INDEX_MAX_KEYS];
-						bool		isnull[INDEX_MAX_KEYS];
-						char	   *key_desc;
-
-						index_deform_tuple(itup, RelationGetDescr(rel),
-										   values, isnull);
-
-						key_desc = BuildIndexValueDescription(rel, values,
-															  isnull);
-
-						ereport(ERROR,
-								(errcode(ERRCODE_UNIQUE_VIOLATION),
-								 errmsg("duplicate key value violates unique constraint \"%s\"",
-										RelationGetRelationName(rel)),
-							   key_desc ? errdetail("Key %s already exists.",
-													key_desc) : 0,
-								 errtableconstraint(heapRel,
-											 RelationGetRelationName(rel))));
+						/*
+						 * Check for a conflict-in as we would if we were going to
+						 * write to this page.  We aren't actually going to write,
+						 * but we want a chance to report SSI conflicts that would
+						 * otherwise be masked by this unique constraint
+						 * violation.
+						 */
+						CheckForSerializableConflictIn(rel, NULL, buf);
+
+						/*
+						 * This is a definite conflict.  Break the tuple down into
+						 * datums and report the error.  But first, make sure we
+						 * release the buffer locks we're holding ---
+						 * BuildIndexValueDescription could make catalog accesses,
+						 * which in the worst case might touch this same index and
+						 * cause deadlocks.
+						 */
+						if (nbuf != InvalidBuffer)
+							_bt_relbuf(rel, nbuf);
+						_bt_relbuf(rel, buf);
+
+						{
+							Datum		values[INDEX_MAX_KEYS];
+							bool		isnull[INDEX_MAX_KEYS];
+							char	   *key_desc;
+
+							index_deform_tuple(itup, RelationGetDescr(rel),
+									values, isnull);
+
+							key_desc = BuildIndexValueDescription(rel, values,
+									isnull);
+
+							ereport(ERROR,
+									(errcode(ERRCODE_UNIQUE_VIOLATION),
+									 errmsg("duplicate key value violates unique constraint \"%s\"",
+										 RelationGetRelationName(rel)),
+									 key_desc ? errdetail("Key %s already exists.",
+										 key_desc) : 0,
+									 errtableconstraint(heapRel,
+										 RelationGetRelationName(rel))));
+						}
 					}
 				}
 				else if (all_dead)
diff --git b/src/backend/access/nbtree/nbtree.c a/src/backend/access/nbtree/nbtree.c
index 128744c..6b1236a 100644
--- b/src/backend/access/nbtree/nbtree.c
+++ a/src/backend/access/nbtree/nbtree.c
@@ -23,6 +23,7 @@
 #include "access/xlog.h"
 #include "catalog/index.h"
 #include "commands/vacuum.h"
+#include "executor/nodeIndexscan.h"
 #include "storage/indexfsm.h"
 #include "storage/ipc.h"
 #include "storage/lmgr.h"
@@ -117,6 +118,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = btendscan;
 	amroutine->ammarkpos = btmarkpos;
 	amroutine->amrestrpos = btrestrpos;
+	amroutine->amrecheck = btrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -292,8 +294,9 @@ btgettuple(IndexScanDesc scan, ScanDirection dir)
 	BTScanOpaque so = (BTScanOpaque) scan->opaque;
 	bool		res;
 
-	/* btree indexes are never lossy */
-	scan->xs_recheck = false;
+	/* btree indexes are never lossy, except for WARM tuples */
+	scan->xs_recheck = indexscan_recheck;
+	scan->xs_tuple_recheck = indexscan_recheck;
 
 	/*
 	 * If we have any array keys, initialize them during first call for a
diff --git b/src/backend/access/nbtree/nbtutils.c a/src/backend/access/nbtree/nbtutils.c
index 063c988..c9c0501 100644
--- b/src/backend/access/nbtree/nbtutils.c
+++ a/src/backend/access/nbtree/nbtutils.c
@@ -20,11 +20,15 @@
 #include "access/nbtree.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
 #include "miscadmin.h"
+#include "nodes/execnodes.h"
 #include "utils/array.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 
 typedef struct BTSortArrayContext
@@ -2065,3 +2069,103 @@ btproperty(Oid index_oid, int attno,
 			return false;		/* punt to generic code */
 	}
 }
+
+/*
+ * Check if the index tuple's key matches the one computed from the given
+ * heap tuple's attributes
+ */
+bool
+btrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	IndexInfo  *indexInfo;
+	EState	   *estate;
+	ExprContext *econtext;
+	TupleTableSlot *slot;
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	/* Get IndexInfo for this index */
+	indexInfo = BuildIndexInfo(indexRel);
+
+	/*
+	 * The heap tuple must be put into a slot for FormIndexDatum.
+	 */
+	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
+
+	ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
+
+	/*
+	 * Typically the index won't have expressions, but if it does we need an
+	 * EState to evaluate them.  We need it for exclusion constraints too,
+	 * even if they are just on simple columns.
+	 */
+	if (indexInfo->ii_Expressions != NIL ||
+			indexInfo->ii_ExclusionOps != NULL)
+	{
+		estate = CreateExecutorState();
+		econtext = GetPerTupleExprContext(estate);
+		econtext->ecxt_scantuple = slot;
+	}
+	else
+		estate = NULL;
+
+	/*
+	 * Form the index values and isnull flags for the index entry that we need
+	 * to check.
+	 *
+	 * Note: if the index uses functions that are not as immutable as they are
+	 * supposed to be, this could produce an index tuple different from the
+	 * original.  The index AM can catch such errors by verifying that it
+	 * finds a matching index entry with the tuple's TID.  For exclusion
+	 * constraints we check this in check_exclusion_constraint().
+	 */
+	FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL, then they are equal
+		 */
+		if (isnull[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If just one is NULL, then they are not equal
+		 */
+		if (isnull[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now just do a raw memory comparison. If the index tuple was formed
+		 * using this heap tuple, the computed index values must match
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	if (estate != NULL)
+		FreeExecutorState(estate);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	return equal;
+}
diff --git b/src/backend/access/spgist/spgutils.c a/src/backend/access/spgist/spgutils.c
index d570ae5..813b5c3 100644
--- b/src/backend/access/spgist/spgutils.c
+++ a/src/backend/access/spgist/spgutils.c
@@ -67,6 +67,7 @@ spghandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = spgendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git b/src/backend/catalog/index.c a/src/backend/catalog/index.c
index 08b646d..e76e928 100644
--- b/src/backend/catalog/index.c
+++ a/src/backend/catalog/index.c
@@ -54,6 +54,7 @@
 #include "nodes/makefuncs.h"
 #include "nodes/nodeFuncs.h"
 #include "optimizer/clauses.h"
+#include "optimizer/var.h"
 #include "parser/parser.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
@@ -1691,6 +1692,20 @@ BuildIndexInfo(Relation index)
 	ii->ii_Concurrent = false;
 	ii->ii_BrokenHotChain = false;
 
+	/* build a bitmap of all table attributes referenced by this index */
+	for (i = 0; i < ii->ii_NumIndexAttrs; i++)
+	{
+		AttrNumber attr = ii->ii_KeyAttrNumbers[i];
+		ii->ii_indxattrs = bms_add_member(ii->ii_indxattrs, attr -
+				FirstLowInvalidHeapAttributeNumber);
+	}
+
+	/* Collect all attributes used in expressions, too */
+	pull_varattnos((Node *) ii->ii_Expressions, 1, &ii->ii_indxattrs);
+
+	/* Collect all attributes in the index predicate, too */
+	pull_varattnos((Node *) ii->ii_Predicate, 1, &ii->ii_indxattrs);
+
 	return ii;
 }
 
diff --git b/src/backend/catalog/system_views.sql a/src/backend/catalog/system_views.sql
index e011af1..97672a9 100644
--- b/src/backend/catalog/system_views.sql
+++ a/src/backend/catalog/system_views.sql
@@ -472,6 +472,7 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_tuples_deleted(C.oid) AS n_tup_del,
             pg_stat_get_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_tuples_warm_updated(C.oid) AS n_tup_warm_upd,
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
@@ -502,7 +503,8 @@ CREATE VIEW pg_stat_xact_all_tables AS
             pg_stat_get_xact_tuples_inserted(C.oid) AS n_tup_ins,
             pg_stat_get_xact_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_xact_tuples_deleted(C.oid) AS n_tup_del,
-            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd
+            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_xact_tuples_warm_updated(C.oid) AS n_tup_warm_upd
     FROM pg_class C LEFT JOIN
          pg_index I ON C.oid = I.indrelid
          LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
diff --git b/src/backend/commands/constraint.c a/src/backend/commands/constraint.c
index 26f9114..997c8f5 100644
--- b/src/backend/commands/constraint.c
+++ a/src/backend/commands/constraint.c
@@ -40,6 +40,7 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	TriggerData *trigdata = (TriggerData *) fcinfo->context;
 	const char *funcname = "unique_key_recheck";
 	HeapTuple	new_row;
+	HeapTupleData heapTuple;
 	ItemPointerData tmptid;
 	Relation	indexRel;
 	IndexInfo  *indexInfo;
@@ -102,7 +103,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	 * removed.
 	 */
 	tmptid = new_row->t_self;
-	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL))
+	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL,
+				NULL, NULL, &heapTuple))
 	{
 		/*
 		 * All rows in the HOT chain are dead, so skip the check.
diff --git b/src/backend/commands/copy.c a/src/backend/commands/copy.c
index ec5d6f1..5e57cc9 100644
--- b/src/backend/commands/copy.c
+++ a/src/backend/commands/copy.c
@@ -2551,6 +2551,8 @@ CopyFrom(CopyState cstate)
 					if (resultRelInfo->ri_NumIndices > 0)
 						recheckIndexes = ExecInsertIndexTuples(slot,
 															&(tuple->t_self),
+															&(tuple->t_self),
+															NULL,
 															   estate,
 															   false,
 															   NULL,
@@ -2669,6 +2671,7 @@ CopyFromInsertBatch(CopyState cstate, EState *estate, CommandId mycid,
 			ExecStoreTuple(bufferedTuples[i], myslot, InvalidBuffer, false);
 			recheckIndexes =
 				ExecInsertIndexTuples(myslot, &(bufferedTuples[i]->t_self),
+									  &(bufferedTuples[i]->t_self), NULL,
 									  estate, false, NULL, NIL);
 			ExecARInsertTriggers(estate, resultRelInfo,
 								 bufferedTuples[i],
diff --git b/src/backend/commands/vacuumlazy.c a/src/backend/commands/vacuumlazy.c
index b5fb325..cd9b9a7 100644
--- b/src/backend/commands/vacuumlazy.c
+++ a/src/backend/commands/vacuumlazy.c
@@ -1468,6 +1468,7 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 
 		recptr = log_heap_clean(onerel, buffer,
 								NULL, 0, NULL, 0,
+								NULL, 0,
 								unused, uncnt,
 								vacrelstats->latestRemovedXid);
 		PageSetLSN(page, recptr);
@@ -2128,6 +2129,22 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
 						break;
 					}
 
+					/*
+					 * If this tuple was ever WARM updated or is a WARM tuple,
+					 * there could be multiple index entries pointing to the
+					 * root of this chain. We can't do index-only scans for
+					 * such tuples without rechecking the index keys, so mark
+					 * the page as !all_visible.
+					 *
+					 * XXX Should we look at the root line pointer and check
+					 * whether the WARM flag is set there, or is checking the
+					 * tuples in the chain good enough?
+					 */
+					if (HeapTupleHeaderIsHeapWarmTuple(tuple.t_data))
+					{
+						all_visible = false;
+					}
+
 					/* Track newest xmin on page. */
 					if (TransactionIdFollows(xmin, *visibility_cutoff_xid))
 						*visibility_cutoff_xid = xmin;
diff --git b/src/backend/executor/execIndexing.c a/src/backend/executor/execIndexing.c
index 882ce18..5fe6182 100644
--- b/src/backend/executor/execIndexing.c
+++ a/src/backend/executor/execIndexing.c
@@ -270,6 +270,8 @@ ExecCloseIndices(ResultRelInfo *resultRelInfo)
 List *
 ExecInsertIndexTuples(TupleTableSlot *slot,
 					  ItemPointer tupleid,
+					  ItemPointer root_tid,
+					  Bitmapset *modified_attrs,
 					  EState *estate,
 					  bool noDupErr,
 					  bool *specConflict,
@@ -324,6 +326,17 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 		if (!indexInfo->ii_ReadyForInserts)
 			continue;
 
+		/*
+		 * If modified_attrs is set, insert index entries only for those
+		 * indexes whose columns have changed. All other indexes can keep
+		 * using their existing index pointers to look up the new tuple.
+		 */
+		if (modified_attrs)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
 		/* Check for partial index */
 		if (indexInfo->ii_Predicate != NIL)
 		{
@@ -389,7 +402,7 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 			index_insert(indexRelation, /* index relation */
 						 values,	/* array of index Datums */
 						 isnull,	/* null flags */
-						 tupleid,		/* tid of heap tuple */
+						 root_tid,		/* tid of heap or root tuple */
 						 heapRelation,	/* heap relation */
 						 checkUnique);	/* type of uniqueness check to do */
 
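The modified_attrs check in the ExecInsertIndexTuples() hunk above is the heart of the write-amplification saving: an index whose attribute bitmap does not overlap the set of changed columns is skipped entirely. A hypothetical Python sketch of that filter (names and the dictionary layout are illustrative):

```python
def indexes_needing_new_entries(index_bitmaps, modified_attrs):
    """Given {index_name: attribute_set} and the set of modified columns,
    return the indexes that need a fresh entry. modified_attrs=None means
    a regular (non-WARM) update, where every index gets a new entry."""
    if modified_attrs is None:
        return sorted(index_bitmaps)
    return sorted(name for name, attrs in index_bitmaps.items()
                  if attrs & modified_attrs)  # analogue of bms_overlap()
```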
diff --git b/src/backend/executor/nodeBitmapHeapscan.c a/src/backend/executor/nodeBitmapHeapscan.c
index 449aacb..ff77349 100644
--- b/src/backend/executor/nodeBitmapHeapscan.c
+++ a/src/backend/executor/nodeBitmapHeapscan.c
@@ -37,6 +37,7 @@
 
 #include "access/relscan.h"
 #include "access/transam.h"
+#include "access/valid.h"
 #include "executor/execdebug.h"
 #include "executor/nodeBitmapHeapscan.h"
 #include "pgstat.h"
@@ -362,11 +363,23 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 			OffsetNumber offnum = tbmres->offsets[curslot];
 			ItemPointerData tid;
 			HeapTupleData heapTuple;
+			bool recheck = false;
 
 			ItemPointerSet(&tid, page, offnum);
 			if (heap_hot_search_buffer(&tid, scan->rs_rd, buffer, snapshot,
-									   &heapTuple, NULL, true))
-				scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+									   &heapTuple, NULL, true, &recheck))
+			{
+				bool valid = true;
+
+				if (scan->rs_key)
+					HeapKeyTest(&heapTuple, RelationGetDescr(scan->rs_rd),
+							scan->rs_nkeys, scan->rs_key, valid);
+				if (valid)
+					scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+
+				if (recheck)
+					tbmres->recheck = true;
+			}
 		}
 	}
 	else
diff --git b/src/backend/executor/nodeIndexscan.c a/src/backend/executor/nodeIndexscan.c
index 3143bd9..daa0826 100644
--- b/src/backend/executor/nodeIndexscan.c
+++ a/src/backend/executor/nodeIndexscan.c
@@ -39,6 +39,8 @@
 #include "utils/memutils.h"
 #include "utils/rel.h"
 
+bool indexscan_recheck = false;
+
 /*
  * When an ordering operator is used, tuples fetched from the index that
  * need to be reordered are queued in a pairing heap, as ReorderTuples.
@@ -115,10 +117,10 @@ IndexNext(IndexScanState *node)
 					   false);	/* don't pfree */
 
 		/*
-		 * If the index was lossy, we have to recheck the index quals using
-		 * the fetched tuple.
+		 * If the index was lossy or the tuple was WARM, we have to recheck
+		 * the index quals using the fetched tuple.
 		 */
-		if (scandesc->xs_recheck)
+		if (scandesc->xs_recheck || scandesc->xs_tuple_recheck)
 		{
 			econtext->ecxt_scantuple = slot;
 			ResetExprContext(econtext);
diff --git b/src/backend/executor/nodeModifyTable.c a/src/backend/executor/nodeModifyTable.c
index efb0c5e..3183db4 100644
--- b/src/backend/executor/nodeModifyTable.c
+++ a/src/backend/executor/nodeModifyTable.c
@@ -448,6 +448,7 @@ ExecInsert(ModifyTableState *mtstate,
 
 			/* insert index entries for tuple */
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												 &(tuple->t_self), NULL,
 												 estate, true, &specConflict,
 												   arbiterIndexes);
 
@@ -494,6 +495,7 @@ ExecInsert(ModifyTableState *mtstate,
 			/* insert index entries for tuple */
 			if (resultRelInfo->ri_NumIndices > 0)
 				recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+													   &(tuple->t_self), NULL,
 													   estate, false, NULL,
 													   arbiterIndexes);
 		}
@@ -824,6 +826,9 @@ ExecUpdate(ItemPointer tupleid,
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
 	List	   *recheckIndexes = NIL;
+	Bitmapset  *modified_attrs = NULL;
+	ItemPointerData	root_tid;
+	bool		warm_update;
 
 	/*
 	 * abort the operation if not running transactions
@@ -938,7 +943,7 @@ lreplace:;
 							 estate->es_output_cid,
 							 estate->es_crosscheck_snapshot,
 							 true /* wait for commit */ ,
-							 &hufd, &lockmode);
+							 &hufd, &lockmode, &modified_attrs, &warm_update);
 		switch (result)
 		{
 			case HeapTupleSelfUpdated:
@@ -1025,10 +1030,28 @@ lreplace:;
 		 * the t_self field.
 		 *
 		 * If it's a HOT update, we mustn't insert new index entries.
+		 *
+		 * If it's a WARM update, we must insert new entries only for the
+		 * indexes whose keys changed, with TIDs pointing to the chain's root.
 		 */
-		if (resultRelInfo->ri_NumIndices > 0 && !HeapTupleIsHeapOnly(tuple))
+		if (resultRelInfo->ri_NumIndices > 0 &&
+			(!HeapTupleIsHeapOnly(tuple) || warm_update))
+		{
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self, &root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL, NIL);
+		}
 	}
 
 	if (canSetTag)
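In the ExecUpdate() hunk above, a WARM update inserts the new index entries with the TID of the chain's root line pointer, not of the new tuple version. A toy model of that choice (TIDs shown as (block, offset) pairs; names are illustrative):

```python
def choose_index_tid(tuple_tid, root_offset, warm_update):
    """WARM: index entries point at the root line pointer of the update
    chain (same block as the new tuple, root offset). Otherwise they point
    at the new tuple itself, as for any non-HOT update."""
    block, offset = tuple_tid
    return (block, root_offset) if warm_update else (block, offset)
```

This is why index scans over WARM chains need the recheck machinery: several logically distinct index entries can converge on the same root TID.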
diff --git b/src/backend/postmaster/pgstat.c a/src/backend/postmaster/pgstat.c
index c7584cb..d89d37b 100644
--- b/src/backend/postmaster/pgstat.c
+++ a/src/backend/postmaster/pgstat.c
@@ -1823,7 +1823,7 @@ pgstat_count_heap_insert(Relation rel, int n)
  * pgstat_count_heap_update - count a tuple update
  */
 void
-pgstat_count_heap_update(Relation rel, bool hot)
+pgstat_count_heap_update(Relation rel, bool hot, bool warm)
 {
 	PgStat_TableStatus *pgstat_info = rel->pgstat_info;
 
@@ -1841,6 +1841,8 @@ pgstat_count_heap_update(Relation rel, bool hot)
 		/* t_tuples_hot_updated is nontransactional, so just advance it */
 		if (hot)
 			pgstat_info->t_counts.t_tuples_hot_updated++;
+		else if (warm)
+			pgstat_info->t_counts.t_tuples_warm_updated++;
 	}
 }
 
@@ -4083,6 +4085,7 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->tuples_updated = 0;
 		result->tuples_deleted = 0;
 		result->tuples_hot_updated = 0;
+		result->tuples_warm_updated = 0;
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
 		result->changes_since_analyze = 0;
@@ -5192,6 +5195,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated = tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted = tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated = tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated = tabmsg->t_counts.t_tuples_warm_updated;
 			tabentry->n_live_tuples = tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples = tabmsg->t_counts.t_delta_dead_tuples;
 			tabentry->changes_since_analyze = tabmsg->t_counts.t_changed_tuples;
@@ -5219,6 +5223,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated += tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted += tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated += tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated += tabmsg->t_counts.t_tuples_warm_updated;
 			/* If table was truncated, first reset the live/dead counters */
 			if (tabmsg->t_counts.t_truncated)
 			{
diff --git b/src/backend/utils/adt/pgstatfuncs.c a/src/backend/utils/adt/pgstatfuncs.c
index 2d3cf9e..ef4f5b4 100644
--- b/src/backend/utils/adt/pgstatfuncs.c
+++ a/src/backend/utils/adt/pgstatfuncs.c
@@ -37,6 +37,7 @@ extern Datum pg_stat_get_tuples_inserted(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_tuples_updated(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_tuples_deleted(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_tuples_hot_updated(PG_FUNCTION_ARGS);
+extern Datum pg_stat_get_tuples_warm_updated(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_live_tuples(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_dead_tuples(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_mod_since_analyze(PG_FUNCTION_ARGS);
@@ -115,6 +116,7 @@ extern Datum pg_stat_get_xact_tuples_inserted(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_xact_tuples_updated(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_xact_tuples_deleted(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_xact_tuples_hot_updated(PG_FUNCTION_ARGS);
+extern Datum pg_stat_get_xact_tuples_warm_updated(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_xact_blocks_fetched(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_xact_blocks_hit(PG_FUNCTION_ARGS);
 
@@ -245,6 +247,22 @@ pg_stat_get_tuples_hot_updated(PG_FUNCTION_ARGS)
 
 
 Datum
+pg_stat_get_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_StatTabEntry *tabentry;
+
+	if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+
+Datum
 pg_stat_get_live_tuples(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
@@ -1744,6 +1762,21 @@ pg_stat_get_xact_tuples_hot_updated(PG_FUNCTION_ARGS)
 }
 
 Datum
+pg_stat_get_xact_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_TableStatus *tabentry;
+
+	if ((tabentry = find_tabstat_entry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->t_counts.t_tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+Datum
 pg_stat_get_xact_blocks_fetched(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
diff --git b/src/backend/utils/cache/relcache.c a/src/backend/utils/cache/relcache.c
index 79e0b1f..c6ef4e2 100644
--- b/src/backend/utils/cache/relcache.c
+++ a/src/backend/utils/cache/relcache.c
@@ -2030,6 +2030,7 @@ RelationDestroyRelation(Relation relation, bool remember_tupdesc)
 	list_free_deep(relation->rd_fkeylist);
 	list_free(relation->rd_indexlist);
 	bms_free(relation->rd_indexattr);
+	bms_free(relation->rd_exprindexattr);
 	bms_free(relation->rd_keyattr);
 	bms_free(relation->rd_idattr);
 	if (relation->rd_options)
@@ -4373,12 +4374,15 @@ Bitmapset *
 RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 {
 	Bitmapset  *indexattrs;		/* indexed columns */
+	Bitmapset  *exprindexattrs;	/* indexed columns in expression/predicate
+									 indexes */
 	Bitmapset  *uindexattrs;	/* columns in unique indexes */
 	Bitmapset  *idindexattrs;	/* columns in the replica identity */
 	List	   *indexoidlist;
 	Oid			relreplindex;
 	ListCell   *l;
 	MemoryContext oldcxt;
+	bool		supportswarm = true;	/* True if the table can be WARM updated */
 
 	/* Quick exit if we already computed the result. */
 	if (relation->rd_indexattr != NULL)
@@ -4391,6 +4395,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				return bms_copy(relation->rd_keyattr);
 			case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 				return bms_copy(relation->rd_idattr);
+			case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+				return bms_copy(relation->rd_exprindexattr);
 			default:
 				elog(ERROR, "unknown attrKind %u", attrKind);
 		}
@@ -4429,6 +4435,7 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 	 * won't be returned at all by RelationGetIndexList.
 	 */
 	indexattrs = NULL;
+	exprindexattrs = NULL;
 	uindexattrs = NULL;
 	idindexattrs = NULL;
 	foreach(l, indexoidlist)
@@ -4474,19 +4481,38 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 		}
 
 		/* Collect all attributes used in expressions, too */
-		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &exprindexattrs);
 
 		/* Collect all attributes in the index predicate, too */
-		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &exprindexattrs);
+
+		/*
+		 * indexattrs should include attributes referenced in index expressions
+		 * and predicates too
+		 */
+		indexattrs = bms_add_members(indexattrs, exprindexattrs);
+
+		/*
+		 * Check whether the index AM provides an amrecheck method. If it
+		 * does not, the index cannot support WARM, so disable WARM updates
+		 * for the table entirely.
+		 */
+		if (!indexDesc->rd_amroutine->amrecheck)
+			supportswarm = false;
 
 		index_close(indexDesc, AccessShareLock);
 	}
 
 	list_free(indexoidlist);
 
+	/* Remember if the table can do WARM updates */
+	relation->rd_supportswarm = supportswarm;
+
 	/* Don't leak the old values of these bitmaps, if any */
 	bms_free(relation->rd_indexattr);
 	relation->rd_indexattr = NULL;
+	bms_free(relation->rd_exprindexattr);
+	relation->rd_exprindexattr = NULL;
 	bms_free(relation->rd_keyattr);
 	relation->rd_keyattr = NULL;
 	bms_free(relation->rd_idattr);
@@ -4502,7 +4528,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 	oldcxt = MemoryContextSwitchTo(CacheMemoryContext);
 	relation->rd_keyattr = bms_copy(uindexattrs);
 	relation->rd_idattr = bms_copy(idindexattrs);
-	relation->rd_indexattr = bms_copy(indexattrs);
+	relation->rd_exprindexattr = bms_copy(exprindexattrs);
+	relation->rd_indexattr = bms_copy(bms_union(indexattrs, exprindexattrs));
 	MemoryContextSwitchTo(oldcxt);
 
 	/* We return our original working copy for caller to play with */
@@ -4514,6 +4541,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 			return uindexattrs;
 		case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 			return idindexattrs;
+		case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+			return exprindexattrs;
 		default:
 			elog(ERROR, "unknown attrKind %u", attrKind);
 			return NULL;
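Because scans over WARM chains must be able to recheck fetched heap tuples against index keys, a single index AM without an amrecheck callback disables WARM for the whole table, as the relcache hunk above does via rd_supportswarm. A minimal sketch of that all-or-nothing rule (the dictionary layout is illustrative):

```python
def table_supports_warm(index_am_routines):
    """WARM is usable only if *every* index AM on the table provides an
    amrecheck callback; a single missing (None) callback disables WARM
    table-wide, mirroring the rd_supportswarm computation."""
    return all(am.get('amrecheck') is not None for am in index_am_routines)
```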
diff --git b/src/backend/utils/misc/guc.c a/src/backend/utils/misc/guc.c
index 28ebcb6..2241ffb 100644
--- b/src/backend/utils/misc/guc.c
+++ a/src/backend/utils/misc/guc.c
@@ -112,6 +112,7 @@ extern char *default_tablespace;
 extern char *temp_tablespaces;
 extern bool ignore_checksum_failure;
 extern bool synchronize_seqscans;
+extern bool indexscan_recheck;
 
 #ifdef TRACE_SYNCSCAN
 extern bool trace_syncscan;
@@ -1288,6 +1289,16 @@ static struct config_bool ConfigureNamesBool[] =
 		NULL, NULL, NULL
 	},
 	{
+		{"indexscan_recheck", PGC_USERSET, DEVELOPER_OPTIONS,
+			gettext_noop("Recheck heap rows returned from an index scan."),
+			NULL,
+			GUC_NOT_IN_SAMPLE
+		},
+		&indexscan_recheck,
+		false,
+		NULL, NULL, NULL
+	},
+	{
 		{"debug_deadlocks", PGC_SUSET, DEVELOPER_OPTIONS,
 			gettext_noop("Dumps information about all current locks when a deadlock timeout occurs."),
 			NULL,
diff --git b/src/include/access/amapi.h a/src/include/access/amapi.h
index 1036cca..37eaf76 100644
--- b/src/include/access/amapi.h
+++ a/src/include/access/amapi.h
@@ -13,6 +13,7 @@
 #define AMAPI_H
 
 #include "access/genam.h"
+#include "access/itup.h"
 
 /*
  * We don't wish to include planner header files here, since most of an index
@@ -137,6 +138,9 @@ typedef void (*ammarkpos_function) (IndexScanDesc scan);
 /* restore marked scan position */
 typedef void (*amrestrpos_function) (IndexScanDesc scan);
 
+/* recheck that an index tuple and a heap tuple match */
+typedef bool (*amrecheck_function) (Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * API struct for an index AM.  Note this must be stored in a single palloc'd
@@ -196,6 +200,7 @@ typedef struct IndexAmRoutine
 	amendscan_function amendscan;
 	ammarkpos_function ammarkpos;		/* can be NULL */
 	amrestrpos_function amrestrpos;		/* can be NULL */
+	amrecheck_function amrecheck;		/* can be NULL */
 } IndexAmRoutine;
 
 
diff --git b/src/include/access/hash.h a/src/include/access/hash.h
index 6dfc41f..f1c73a0 100644
--- b/src/include/access/hash.h
+++ a/src/include/access/hash.h
@@ -389,4 +389,8 @@ extern void hashbucketcleanup(Relation rel, Bucket cur_bucket,
 				  bool bucket_has_garbage,
 				  IndexBulkDeleteCallback callback, void *callback_state);
 
+/* hash.c */
+extern bool hashrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+
 #endif   /* HASH_H */
diff --git b/src/include/access/heapam.h a/src/include/access/heapam.h
index 81f7982..04ffd67 100644
--- b/src/include/access/heapam.h
+++ a/src/include/access/heapam.h
@@ -137,9 +137,10 @@ extern bool heap_fetch(Relation relation, Snapshot snapshot,
 		   Relation stats_relation);
 extern bool heap_hot_search_buffer(ItemPointer tid, Relation relation,
 					   Buffer buffer, Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call);
+					   bool *all_dead, bool first_call, bool *recheck);
 extern bool heap_hot_search(ItemPointer tid, Relation relation,
-				Snapshot snapshot, bool *all_dead);
+				Snapshot snapshot, bool *all_dead,
+				bool *recheck, Buffer *buffer, HeapTuple heapTuple);
 
 extern void heap_get_latest_tid(Relation relation, Snapshot snapshot,
 					ItemPointer tid);
@@ -160,7 +161,8 @@ extern void heap_abort_speculative(Relation relation, HeapTuple tuple);
 extern HTSU_Result heap_update(Relation relation, ItemPointer otid,
 			HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode);
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update);
 extern HTSU_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
 				CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
 				bool follow_update,
@@ -186,6 +188,7 @@ extern int heap_page_prune(Relation relation, Buffer buffer,
 				bool report_stats, TransactionId *latestRemovedXid);
 extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
+						bool *warmchain,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
 extern void heap_get_root_tuple_one(Page page, OffsetNumber target_offnum,
diff --git b/src/include/access/heapam_xlog.h a/src/include/access/heapam_xlog.h
index 5a04561..ddc3a7a 100644
--- b/src/include/access/heapam_xlog.h
+++ a/src/include/access/heapam_xlog.h
@@ -80,6 +80,7 @@
 #define XLH_UPDATE_CONTAINS_NEW_TUPLE			(1<<4)
 #define XLH_UPDATE_PREFIX_FROM_OLD				(1<<5)
 #define XLH_UPDATE_SUFFIX_FROM_OLD				(1<<6)
+#define XLH_UPDATE_WARM_UPDATE					(1<<7)
 
 /* convenience macro for checking whether any form of old tuple was logged */
 #define XLH_UPDATE_CONTAINS_OLD						\
@@ -211,7 +212,9 @@ typedef struct xl_heap_update
  *	* for each redirected item: the item offset, then the offset redirected to
  *	* for each now-dead item: the item offset
  *	* for each now-unused item: the item offset
- * The total number of OffsetNumbers is therefore 2*nredirected+ndead+nunused.
+ *	* for each now-warm item: the item offset
+ * The total number of OffsetNumbers is therefore
+ * 2*nredirected+ndead+nunused+nwarm.
  * Note that nunused is not explicitly stored, but may be found by reference
  * to the total record length.
  */
@@ -220,10 +223,11 @@ typedef struct xl_heap_clean
 	TransactionId latestRemovedXid;
 	uint16		nredirected;
 	uint16		ndead;
+	uint16		nwarm;
 	/* OFFSET NUMBERS are in the block reference 0 */
 } xl_heap_clean;
 
-#define SizeOfHeapClean (offsetof(xl_heap_clean, ndead) + sizeof(uint16))
+#define SizeOfHeapClean (offsetof(xl_heap_clean, nwarm) + sizeof(uint16))
 
 /*
  * Cleanup_info is required in some cases during a lazy VACUUM.
@@ -384,6 +388,7 @@ extern XLogRecPtr log_heap_cleanup_info(RelFileNode rnode,
 					  TransactionId latestRemovedXid);
 extern XLogRecPtr log_heap_clean(Relation reln, Buffer buffer,
 			   OffsetNumber *redirected, int nredirected,
+			   OffsetNumber *warm, int nwarm,
 			   OffsetNumber *nowdead, int ndead,
 			   OffsetNumber *nowunused, int nunused,
 			   TransactionId latestRemovedXid);
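As the xl_heap_clean comment above notes, nunused is never stored explicitly; it is derived from the total number of OffsetNumbers in the record. A quick arithmetic check of that identity (a model, not the patch's code):

```python
def recover_nunused(total_offsets, nredirected, ndead, nwarm):
    """xl_heap_clean stores nredirected, ndead and nwarm explicitly;
    nunused is recovered from the record length via
    total = 2*nredirected + ndead + nunused + nwarm."""
    nunused = total_offsets - 2 * nredirected - ndead - nwarm
    assert nunused >= 0, "inconsistent record length"
    return nunused
```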
diff --git b/src/include/access/htup_details.h a/src/include/access/htup_details.h
index 4313eb9..09246b2 100644
--- b/src/include/access/htup_details.h
+++ a/src/include/access/htup_details.h
@@ -260,7 +260,8 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x0800 are available */
+#define HEAP_WARM_TUPLE			0x0800	/* This tuple is a part of a WARM chain
+										 */
 #define HEAP_LATEST_TUPLE		0x1000	/*
 										 * This is the last tuple in chain and
 										 * ip_posid points to the root line
@@ -271,7 +272,7 @@ struct HeapTupleHeaderData
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF800	/* visibility-related bits */
 
 
 /*
@@ -510,6 +511,21 @@ do { \
   (tup)->t_infomask2 & HEAP_ONLY_TUPLE \
 )
 
+#define HeapTupleHeaderSetHeapWarmTuple(tup) \
+do { \
+	(tup)->t_infomask2 |= HEAP_WARM_TUPLE; \
+} while (0)
+
+#define HeapTupleHeaderClearHeapWarmTuple(tup) \
+do { \
+	(tup)->t_infomask2 &= ~HEAP_WARM_TUPLE; \
+} while (0)
+
+#define HeapTupleHeaderIsHeapWarmTuple(tup) \
+( \
+  ((tup)->t_infomask2 & HEAP_WARM_TUPLE) \
+)
+
 #define HeapTupleHeaderSetHeapLatest(tup) \
 ( \
 	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE \
@@ -771,6 +787,15 @@ struct MinimalTupleData
 #define HeapTupleClearHeapOnly(tuple) \
 		HeapTupleHeaderClearHeapOnly((tuple)->t_data)
 
+#define HeapTupleIsHeapWarmTuple(tuple) \
+		HeapTupleHeaderIsHeapWarmTuple((tuple)->t_data)
+
+#define HeapTupleSetHeapWarmTuple(tuple) \
+		HeapTupleHeaderSetHeapWarmTuple((tuple)->t_data)
+
+#define HeapTupleClearHeapWarmTuple(tuple) \
+		HeapTupleHeaderClearHeapWarmTuple((tuple)->t_data)
+
 #define HeapTupleGetOid(tuple) \
 		HeapTupleHeaderGetOid((tuple)->t_data)
 
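The htup_details.h hunk claims the previously free 0x0800 bit of t_infomask2 for HEAP_WARM_TUPLE and widens HEAP2_XACT_MASK from 0xF000 to 0xF800 accordingly. A quick sanity check (Python model of the bit layout) that the new flag stays clear of the 11-bit attribute count:

```python
# Bit layout of t_infomask2 as modified by the patch (values from the hunk).
HEAP_NATTS_MASK   = 0x07FF  # 11 bits for number of attributes
HEAP_WARM_TUPLE   = 0x0800  # newly claimed bit
HEAP_LATEST_TUPLE = 0x1000
HEAP_HOT_UPDATED  = 0x4000
HEAP_ONLY_TUPLE   = 0x8000
HEAP2_XACT_MASK   = 0xF800  # widened to include HEAP_WARM_TUPLE

def set_warm(infomask2):
    return infomask2 | HEAP_WARM_TUPLE

def is_warm(infomask2):
    return bool(infomask2 & HEAP_WARM_TUPLE)
```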
diff --git b/src/include/access/nbtree.h a/src/include/access/nbtree.h
index c580f51..83af072 100644
--- b/src/include/access/nbtree.h
+++ a/src/include/access/nbtree.h
@@ -751,6 +751,8 @@ extern bytea *btoptions(Datum reloptions, bool validate);
 extern bool btproperty(Oid index_oid, int attno,
 		   IndexAMProperty prop, const char *propname,
 		   bool *res, bool *isnull);
+extern bool btrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * prototypes for functions in nbtvalidate.c
diff --git b/src/include/access/relscan.h a/src/include/access/relscan.h
index de98dd6..da7ec84 100644
--- b/src/include/access/relscan.h
+++ a/src/include/access/relscan.h
@@ -111,7 +111,8 @@ typedef struct IndexScanDescData
 	HeapTupleData xs_ctup;		/* current heap tuple, if any */
 	Buffer		xs_cbuf;		/* current heap buffer in scan, if any */
 	/* NB: if xs_cbuf is not InvalidBuffer, we hold a pin on that buffer */
-	bool		xs_recheck;		/* T means scan keys must be rechecked */
+	bool		xs_recheck;		/* T means scan keys must be rechecked for each tuple */
+	bool		xs_tuple_recheck;	/* T means scan keys must be rechecked for current tuple */
 
 	/*
 	 * When fetching with an ordering operator, the values of the ORDER BY
diff --git b/src/include/catalog/pg_proc.h a/src/include/catalog/pg_proc.h
index 047a1ce..31f295f 100644
--- b/src/include/catalog/pg_proc.h
+++ a/src/include/catalog/pg_proc.h
@@ -2734,6 +2734,8 @@ DATA(insert OID = 1933 (  pg_stat_get_tuples_deleted	PGNSP PGUID 12 1 0 0 0 f f
 DESCR("statistics: number of tuples deleted");
 DATA(insert OID = 1972 (  pg_stat_get_tuples_hot_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated");
+DATA(insert OID = 3344 (  pg_stat_get_tuples_warm_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated");
 DATA(insert OID = 2878 (  pg_stat_get_live_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_live_tuples _null_ _null_ _null_ ));
 DESCR("statistics: number of live tuples");
 DATA(insert OID = 2879 (  pg_stat_get_dead_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_dead_tuples _null_ _null_ _null_ ));
@@ -2884,6 +2886,8 @@ DATA(insert OID = 3042 (  pg_stat_get_xact_tuples_deleted		PGNSP PGUID 12 1 0 0
 DESCR("statistics: number of tuples deleted in current transaction");
 DATA(insert OID = 3043 (  pg_stat_get_xact_tuples_hot_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated in current transaction");
+DATA(insert OID = 3343 (  pg_stat_get_xact_tuples_warm_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated in current transaction");
 DATA(insert OID = 3044 (  pg_stat_get_xact_blocks_fetched		PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_fetched _null_ _null_ _null_ ));
 DESCR("statistics: number of blocks fetched in current transaction");
 DATA(insert OID = 3045 (  pg_stat_get_xact_blocks_hit			PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_hit _null_ _null_ _null_ ));
diff --git b/src/include/executor/executor.h a/src/include/executor/executor.h
index 136276b..e324deb 100644
--- b/src/include/executor/executor.h
+++ a/src/include/executor/executor.h
@@ -366,6 +366,7 @@ extern void UnregisterExprContextCallback(ExprContext *econtext,
 extern void ExecOpenIndices(ResultRelInfo *resultRelInfo, bool speculative);
 extern void ExecCloseIndices(ResultRelInfo *resultRelInfo);
 extern List *ExecInsertIndexTuples(TupleTableSlot *slot, ItemPointer tupleid,
+					  ItemPointer root_tid, Bitmapset *modified_attrs,
 					  EState *estate, bool noDupErr, bool *specConflict,
 					  List *arbiterIndexes);
 extern bool ExecCheckIndexConstraints(TupleTableSlot *slot, EState *estate,
diff --git b/src/include/executor/nodeIndexscan.h a/src/include/executor/nodeIndexscan.h
index 194fadb..fe9c78e 100644
--- b/src/include/executor/nodeIndexscan.h
+++ a/src/include/executor/nodeIndexscan.h
@@ -38,4 +38,5 @@ extern bool ExecIndexEvalArrayKeys(ExprContext *econtext,
 					   IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 extern bool ExecIndexAdvanceArrayKeys(IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 
+extern bool indexscan_recheck;
 #endif   /* NODEINDEXSCAN_H */
diff --git b/src/include/nodes/execnodes.h a/src/include/nodes/execnodes.h
index 8004d85..3bf4b5f 100644
--- b/src/include/nodes/execnodes.h
+++ a/src/include/nodes/execnodes.h
@@ -61,6 +61,7 @@ typedef struct IndexInfo
 	NodeTag		type;
 	int			ii_NumIndexAttrs;
 	AttrNumber	ii_KeyAttrNumbers[INDEX_MAX_KEYS];
+	Bitmapset  *ii_indxattrs;	/* bitmap of all columns used in this index */
 	List	   *ii_Expressions; /* list of Expr */
 	List	   *ii_ExpressionsState;	/* list of ExprState */
 	List	   *ii_Predicate;	/* list of Expr */
diff --git b/src/include/pgstat.h a/src/include/pgstat.h
index 152ff06..e0c8a90 100644
--- b/src/include/pgstat.h
+++ a/src/include/pgstat.h
@@ -105,6 +105,7 @@ typedef struct PgStat_TableCounts
 	PgStat_Counter t_tuples_updated;
 	PgStat_Counter t_tuples_deleted;
 	PgStat_Counter t_tuples_hot_updated;
+	PgStat_Counter t_tuples_warm_updated;
 	bool		t_truncated;
 
 	PgStat_Counter t_delta_live_tuples;
@@ -625,6 +626,7 @@ typedef struct PgStat_StatTabEntry
 	PgStat_Counter tuples_updated;
 	PgStat_Counter tuples_deleted;
 	PgStat_Counter tuples_hot_updated;
+	PgStat_Counter tuples_warm_updated;
 
 	PgStat_Counter n_live_tuples;
 	PgStat_Counter n_dead_tuples;
@@ -1176,7 +1178,7 @@ pgstat_report_wait_end(void)
 	(pgStatBlockWriteTime += (n))
 
 extern void pgstat_count_heap_insert(Relation rel, int n);
-extern void pgstat_count_heap_update(Relation rel, bool hot);
+extern void pgstat_count_heap_update(Relation rel, bool hot, bool warm);
 extern void pgstat_count_heap_delete(Relation rel);
 extern void pgstat_count_truncate(Relation rel);
 extern void pgstat_update_heap_dead_tuples(Relation rel, int delta);
diff --git b/src/include/storage/itemid.h a/src/include/storage/itemid.h
index 509c577..8c9cc99 100644
--- b/src/include/storage/itemid.h
+++ a/src/include/storage/itemid.h
@@ -46,6 +46,12 @@ typedef ItemIdData *ItemId;
 typedef uint16 ItemOffset;
 typedef uint16 ItemLength;
 
+/*
+ * Special value used in lp_len to indicate that the chain starting at this
+ * line pointer may contain WARM tuples. This must only be interpreted in
+ * combination with the LP_REDIRECT flag.
+ */
+#define SpecHeapWarmLen	0x1ffb
 
 /* ----------------
  *		support macros
@@ -112,12 +118,15 @@ typedef uint16 ItemLength;
 #define ItemIdIsDead(itemId) \
 	((itemId)->lp_flags == LP_DEAD)
 
+#define ItemIdIsHeapWarm(itemId) \
+	(((itemId)->lp_flags == LP_REDIRECT) && \
+	 ((itemId)->lp_len == SpecHeapWarmLen))
 /*
  * ItemIdHasStorage
  *		True iff item identifier has associated storage.
  */
 #define ItemIdHasStorage(itemId) \
-	((itemId)->lp_len != 0)
+	(!ItemIdIsRedirected(itemId) && (itemId)->lp_len != 0)
 
 /*
  * ItemIdSetUnused
@@ -168,6 +177,26 @@ typedef uint16 ItemLength;
 )
 
 /*
+ * ItemIdSetHeapWarm
+ * 		Mark the item identifier as the start of a WARM chain
+ *
+ * Note: Since all bits in lp_flags are currently used, we store a special
+ * value in the lp_len field to indicate this state. This is needed only for
+ * LP_REDIRECT line pointers, whose lp_len field is otherwise unused.
+ */
+#define ItemIdSetHeapWarm(itemId) \
+do { \
+	AssertMacro((itemId)->lp_flags == LP_REDIRECT); \
+	(itemId)->lp_len = SpecHeapWarmLen; \
+} while (0)
+
+#define ItemIdClearHeapWarm(itemId) \
+( \
+	AssertMacro((itemId)->lp_flags == LP_REDIRECT); \
+	(itemId)->lp_len = 0; \
+)
+
+/*
  * ItemIdMarkDead
  *		Set the item identifier to be DEAD, keeping its existing storage.
  *
diff --git b/src/include/utils/rel.h a/src/include/utils/rel.h
index fa15f28..982bf4c 100644
--- b/src/include/utils/rel.h
+++ a/src/include/utils/rel.h
@@ -101,8 +101,11 @@ typedef struct RelationData
 
 	/* data managed by RelationGetIndexAttrBitmap: */
 	Bitmapset  *rd_indexattr;	/* identifies columns used in indexes */
+	Bitmapset  *rd_exprindexattr; /* identifies columns used in expression or
+									 predicate indexes */
 	Bitmapset  *rd_keyattr;		/* cols that can be ref'd by foreign keys */
 	Bitmapset  *rd_idattr;		/* included in replica identity index */
+	bool		rd_supportswarm;/* True if the table can be WARM updated */
 
 	/*
 	 * rd_options is set whenever rd_rel is loaded into the relcache entry.
diff --git b/src/include/utils/relcache.h a/src/include/utils/relcache.h
index 6ea7dd2..290e9b7 100644
--- b/src/include/utils/relcache.h
+++ a/src/include/utils/relcache.h
@@ -48,7 +48,8 @@ typedef enum IndexAttrBitmapKind
 {
 	INDEX_ATTR_BITMAP_ALL,
 	INDEX_ATTR_BITMAP_KEY,
-	INDEX_ATTR_BITMAP_IDENTITY_KEY
+	INDEX_ATTR_BITMAP_IDENTITY_KEY,
+	INDEX_ATTR_BITMAP_EXPR_PREDICATE
 } IndexAttrBitmapKind;
 
 extern Bitmapset *RelationGetIndexAttrBitmap(Relation relation,
diff --git b/src/test/regress/expected/rules.out a/src/test/regress/expected/rules.out
index 031e8c2..c416fe6 100644
--- b/src/test/regress/expected/rules.out
+++ a/src/test/regress/expected/rules.out
@@ -1705,6 +1705,7 @@ pg_stat_all_tables| SELECT c.oid AS relid,
     pg_stat_get_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_tuples_deleted(c.oid) AS n_tup_del,
     pg_stat_get_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_tuples_warm_updated(c.oid) AS n_tup_warm_upd,
     pg_stat_get_live_tuples(c.oid) AS n_live_tup,
     pg_stat_get_dead_tuples(c.oid) AS n_dead_tup,
     pg_stat_get_mod_since_analyze(c.oid) AS n_mod_since_analyze,
@@ -1838,6 +1839,7 @@ pg_stat_sys_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1881,6 +1883,7 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1918,7 +1921,8 @@ pg_stat_xact_all_tables| SELECT c.oid AS relid,
     pg_stat_get_xact_tuples_inserted(c.oid) AS n_tup_ins,
     pg_stat_get_xact_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_xact_tuples_deleted(c.oid) AS n_tup_del,
-    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd
+    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_xact_tuples_warm_updated(c.oid) AS n_tup_warm_upd
    FROM ((pg_class c
      LEFT JOIN pg_index i ON ((c.oid = i.indrelid)))
      LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
@@ -1934,7 +1938,8 @@ pg_stat_xact_sys_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname = ANY (ARRAY['pg_catalog'::name, 'information_schema'::name])) OR (pg_stat_xact_all_tables.schemaname ~ '^pg_toast'::text));
 pg_stat_xact_user_functions| SELECT p.oid AS funcid,
@@ -1956,7 +1961,8 @@ pg_stat_xact_user_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname <> ALL (ARRAY['pg_catalog'::name, 'information_schema'::name])) AND (pg_stat_xact_all_tables.schemaname !~ '^pg_toast'::text));
 pg_statio_all_indexes| SELECT c.oid AS relid,
diff --git b/src/test/regress/expected/warm.out a/src/test/regress/expected/warm.out
new file mode 100644
index 0000000..0aa1b83
--- /dev/null
+++ a/src/test/regress/expected/warm.out
@@ -0,0 +1,51 @@
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+explain select * from test_warm where lower(a) = 'test';
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on test_warm  (cost=4.18..12.65 rows=4 width=64)
+   Recheck Cond: (lower(a) = 'test'::text)
+   ->  Bitmap Index Scan on test_warmindx  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (lower(a) = 'test'::text)
+(4 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+select *, ctid from test_warm where a = 'test';
+ a | b | ctid 
+---+---+------
+(0 rows)
+
+select *, ctid from test_warm where a = 'TEST';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+                                   QUERY PLAN                                    
+---------------------------------------------------------------------------------
+ Index Scan using test_warmindx on test_warm  (cost=0.15..20.22 rows=4 width=64)
+   Index Cond: (lower(a) = 'test'::text)
+(2 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+DROP TABLE test_warm;
diff --git b/src/test/regress/parallel_schedule a/src/test/regress/parallel_schedule
index 8641769..a610039 100644
--- b/src/test/regress/parallel_schedule
+++ a/src/test/regress/parallel_schedule
@@ -42,6 +42,8 @@ test: create_type
 test: create_table
 test: create_function_2
 
+test: warm
+
 # ----------
 # Load huge amounts of data
 # We should split the data files into single files and then
diff --git b/src/test/regress/sql/warm.sql a/src/test/regress/sql/warm.sql
new file mode 100644
index 0000000..166ea37
--- /dev/null
+++ a/src/test/regress/sql/warm.sql
@@ -0,0 +1,15 @@
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where a = 'test';
+select *, ctid from test_warm where a = 'TEST';
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+DROP TABLE test_warm;
+
+
Attachment: 0003_warm_fixes_v6.patch (application/octet-stream)
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index b3de79c..9353175 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -7831,7 +7831,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	bool		need_tuple_data = RelationIsLogicallyLogged(reln);
 	bool		init;
 	int			bufflags;
-	bool		warm_update;
+	bool		warm_update = false;
 
 	/* Caller should not call me on a non-WAL-logged relation */
 	Assert(RelationNeedsWAL(reln));
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index e32deb1..39ee6ac 100644
--- a/src/backend/access/heap/hio.c
+++ b/src/backend/access/heap/hio.c
@@ -75,6 +75,9 @@ RelationPutHeapTuple(Relation relation,
 		ItemId		itemId = PageGetItemId(pageHeader, offnum);
 		Item		item = PageGetItem(pageHeader, itemId);
 
+		/* Copy t_ctid to set the correct block number */
+		((HeapTupleHeader) item)->t_ctid = tuple->t_self;
+
 		HeapTupleHeaderSetHeapLatest((HeapTupleHeader) item);
 		if (OffsetNumberIsValid(root_offnum))
 			HeapTupleHeaderSetRootOffset((HeapTupleHeader) item,
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 03c6b62..c24e486 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -801,7 +801,8 @@ retry:
 			  DirtySnapshot.speculativeToken &&
 			  TransactionIdPrecedes(GetCurrentTransactionId(), xwait))))
 		{
-			ctid_wait = tup->t_data->t_ctid;
+			HeapTupleHeaderGetNextCtid(tup->t_data, &ctid_wait,
+					ItemPointerGetOffsetNumber(&tup->t_self));
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 079a77f..466609c 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -2451,7 +2451,8 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		}
 
 		/* updated, so look at the updated row */
-		tuple.t_self = tuple.t_data->t_ctid;
+		HeapTupleHeaderGetNextCtid(tuple.t_data, &tuple.t_self,
+				ItemPointerGetOffsetNumber(&tuple.t_self));
 		/* updated row should have xmin matching this xmax */
 		priorXmax = HeapTupleHeaderGetUpdateXid(tuple.t_data);
 		ReleaseBuffer(buffer);
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 25752b0..ef4f5b4 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -37,6 +37,7 @@ extern Datum pg_stat_get_tuples_inserted(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_tuples_updated(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_tuples_deleted(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_tuples_hot_updated(PG_FUNCTION_ARGS);
+extern Datum pg_stat_get_tuples_warm_updated(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_live_tuples(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_dead_tuples(PG_FUNCTION_ARGS);
 extern Datum pg_stat_get_mod_since_analyze(PG_FUNCTION_ARGS);
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 37874ca..c6ef4e2 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -4487,6 +4487,12 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &exprindexattrs);
 
 		/*
+		 * indexattrs should include attributes referenced in index expressions
+		 * and predicates too
+		 */
+		indexattrs = bms_add_members(indexattrs, exprindexattrs);
+
+		/*
 		 * Check if the index has amrecheck method defined. If the method is
 		 * not defined, the index does not support WARM update. Completely
 		 * disable WARM updates on such tables
Attachment: interesting-attrs-2.patch (application/octet-stream)
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index ea579a0..19edbdf 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -95,11 +95,8 @@ static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
 				HeapTuple newtup, HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
-static void HeapSatisfiesHOTandKeyUpdate(Relation relation,
-							 Bitmapset *hot_attrs,
-							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
-							 bool *satisfies_id,
+static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
+							 Bitmapset *interesting_cols,
 							 HeapTuple oldtup, HeapTuple newtup);
 static bool heap_acquire_tuplock(Relation relation, ItemPointer tid,
 					 LockTupleMode mode, LockWaitPolicy wait_policy,
@@ -3443,6 +3440,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *interesting_attrs;
+	Bitmapset  *modified_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3460,9 +3459,6 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				pagefree;
 	bool		have_tuple_lock = false;
 	bool		iscombo;
-	bool		satisfies_hot;
-	bool		satisfies_key;
-	bool		satisfies_id;
 	bool		use_hot_update = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
@@ -3489,21 +3485,30 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				 errmsg("cannot update tuples during a parallel operation")));
 
 	/*
-	 * Fetch the list of attributes to be checked for HOT update.  This is
-	 * wasted effort if we fail to update or have to put the new tuple on a
-	 * different page.  But we must compute the list before obtaining buffer
-	 * lock --- in the worst case, if we are doing an update on one of the
-	 * relevant system catalogs, we could deadlock if we try to fetch the list
-	 * later.  In any case, the relcache caches the data so this is usually
-	 * pretty cheap.
+	 * Fetch the list of attributes to be checked for various operations.
 	 *
-	 * Note that we get a copy here, so we need not worry about relcache flush
-	 * happening midway through.
+	 * For HOT considerations, this is wasted effort if we fail to update or
+	 * have to put the new tuple on a different page.  But we must compute the
+	 * list before obtaining buffer lock --- in the worst case, if we are doing
+	 * an update on one of the relevant system catalogs, we could deadlock if
+	 * we try to fetch the list later.  In any case, the relcache caches the
+	 * data so this is usually pretty cheap.
+	 *
+	 * We also need columns used by the replica identity, the columns that
+	 * are considered the "key" of rows in the table, and columns that are
+	 * part of indirect indexes.
+	 *
+	 * Note that we get copies of each bitmap, so we need not worry about
+	 * relcache flush happening midway through.
 	 */
 	hot_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_ALL);
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	interesting_attrs = bms_add_members(NULL, hot_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
+
 
 	block = ItemPointerGetBlockNumber(otid);
 	buffer = ReadBuffer(relation, block);
@@ -3524,7 +3529,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Assert(ItemIdIsNormal(lp));
 
 	/*
-	 * Fill in enough data in oldtup for HeapSatisfiesHOTandKeyUpdate to work
+	 * Fill in enough data in oldtup for HeapDetermineModifiedColumns to work
 	 * properly.
 	 */
 	oldtup.t_tableOid = RelationGetRelid(relation);
@@ -3550,6 +3555,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 		Assert(!(newtup->t_data->t_infomask & HEAP_HASOID));
 	}
 
+	/* Determine columns modified by the update. */
+	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
+												  &oldtup, newtup);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3561,10 +3570,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	 * is updates that don't manipulate key columns, not those that
 	 * serendipitiously arrive at the same key values.
 	 */
-	HeapSatisfiesHOTandKeyUpdate(relation, hot_attrs, key_attrs, id_attrs,
-								 &satisfies_hot, &satisfies_key,
-								 &satisfies_id, &oldtup, newtup);
-	if (satisfies_key)
+	if (!bms_overlap(modified_attrs, key_attrs))
 	{
 		*lockmode = LockTupleNoKeyExclusive;
 		mxact_status = MultiXactStatusNoKeyUpdate;
@@ -3803,6 +3809,8 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(modified_attrs);
+		bms_free(interesting_attrs);
 		return result;
 	}
 
@@ -4107,7 +4115,7 @@ l2:
 		 * to do a HOT update.  Check if any of the index columns have been
 		 * changed.  If not, then HOT update is possible.
 		 */
-		if (satisfies_hot)
+		if (!bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
 	}
 	else
@@ -4122,7 +4130,9 @@ l2:
 	 * ExtractReplicaIdentity() will return NULL if nothing needs to be
 	 * logged.
 	 */
-	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup, !satisfies_id, &old_key_copied);
+	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup,
+										   bms_overlap(modified_attrs, id_attrs),
+										   &old_key_copied);
 
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
@@ -4270,13 +4280,15 @@ l2:
 	bms_free(hot_attrs);
 	bms_free(key_attrs);
 	bms_free(id_attrs);
+	bms_free(modified_attrs);
+	bms_free(interesting_attrs);
 
 	return HeapTupleMayBeUpdated;
 }
 
 /*
  * Check if the specified attribute's value is same in both given tuples.
- * Subroutine for HeapSatisfiesHOTandKeyUpdate.
+ * Subroutine for HeapDetermineModifiedColumns.
  */
 static bool
 heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
@@ -4310,7 +4322,7 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 
 	/*
 	 * Extract the corresponding values.  XXX this is pretty inefficient if
-	 * there are many indexed columns.  Should HeapSatisfiesHOTandKeyUpdate do
+	 * there are many indexed columns.  Should HeapDetermineModifiedColumns do
 	 * a single heap_deform_tuple call on each tuple, instead?	But that
 	 * doesn't work for system columns ...
 	 */
@@ -4355,114 +4367,30 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 /*
  * Check which columns are being updated.
  *
- * This simultaneously checks conditions for HOT updates, for FOR KEY
- * SHARE updates, and REPLICA IDENTITY concerns.  Since much of the time they
- * will be checking very similar sets of columns, and doing the same tests on
- * them, it makes sense to optimize and do them together.
+ * Given an updated tuple, determine (and return into the output bitmapset),
+ * from those listed as interesting, the set of columns that changed.
  *
- * We receive three bitmapsets comprising the three sets of columns we're
- * interested in.  Note these are destructively modified; that is OK since
- * this is invoked at most once in heap_update.
- *
- * hot_result is set to TRUE if it's okay to do a HOT update (i.e. it does not
- * modified indexed columns); key_result is set to TRUE if the update does not
- * modify columns used in the key; id_result is set to TRUE if the update does
- * not modify columns in any index marked as the REPLICA IDENTITY.
+ * The input bitmapset is destructively modified; that is OK since this is
+ * invoked at most once in heap_update.
  */
-static void
-HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
-							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
-							 bool *satisfies_id,
+static Bitmapset *
+HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
 							 HeapTuple oldtup, HeapTuple newtup)
 {
-	int			next_hot_attnum;
-	int			next_key_attnum;
-	int			next_id_attnum;
-	bool		hot_result = true;
-	bool		key_result = true;
-	bool		id_result = true;
+	int		attnum;
+	Bitmapset *modified = NULL;
 
-	/* If REPLICA IDENTITY is set to FULL, id_attrs will be empty. */
-	Assert(bms_is_subset(id_attrs, key_attrs));
-	Assert(bms_is_subset(key_attrs, hot_attrs));
-
-	/*
-	 * If one of these sets contains no remaining bits, bms_first_member will
-	 * return -1, and after adding FirstLowInvalidHeapAttributeNumber (which
-	 * is negative!)  we'll get an attribute number that can't possibly be
-	 * real, and thus won't match any actual attribute number.
-	 */
-	next_hot_attnum = bms_first_member(hot_attrs);
-	next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
-	next_key_attnum = bms_first_member(key_attrs);
-	next_key_attnum += FirstLowInvalidHeapAttributeNumber;
-	next_id_attnum = bms_first_member(id_attrs);
-	next_id_attnum += FirstLowInvalidHeapAttributeNumber;
-
-	for (;;)
+	while ((attnum = bms_first_member(interesting_cols)) >= 0)
 	{
-		bool		changed;
-		int			check_now;
+		attnum += FirstLowInvalidHeapAttributeNumber;
 
-		/*
-		 * Since the HOT attributes are a superset of the key attributes and
-		 * the key attributes are a superset of the id attributes, this logic
-		 * is guaranteed to identify the next column that needs to be checked.
-		 */
-		if (hot_result && next_hot_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_hot_attnum;
-		else if (key_result && next_key_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_key_attnum;
-		else if (id_result && next_id_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_id_attnum;
-		else
-			break;
-
-		/* See whether it changed. */
-		changed = !heap_tuple_attr_equals(RelationGetDescr(relation),
-										  check_now, oldtup, newtup);
-		if (changed)
-		{
-			if (check_now == next_hot_attnum)
-				hot_result = false;
-			if (check_now == next_key_attnum)
-				key_result = false;
-			if (check_now == next_id_attnum)
-				id_result = false;
-
-			/* if all are false now, we can stop checking */
-			if (!hot_result && !key_result && !id_result)
-				break;
-		}
-
-		/*
-		 * Advance the next attribute numbers for the sets that contain the
-		 * attribute we just checked.  As we work our way through the columns,
-		 * the next_attnum values will rise; but when each set becomes empty,
-		 * bms_first_member() will return -1 and the attribute number will end
-		 * up with a value less than FirstLowInvalidHeapAttributeNumber.
-		 */
-		if (hot_result && check_now == next_hot_attnum)
-		{
-			next_hot_attnum = bms_first_member(hot_attrs);
-			next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
-		if (key_result && check_now == next_key_attnum)
-		{
-			next_key_attnum = bms_first_member(key_attrs);
-			next_key_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
-		if (id_result && check_now == next_id_attnum)
-		{
-			next_id_attnum = bms_first_member(id_attrs);
-			next_id_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
+		if (!heap_tuple_attr_equals(RelationGetDescr(relation),
+								   attnum, oldtup, newtup))
+			modified = bms_add_member(modified,
+									  attnum - FirstLowInvalidHeapAttributeNumber);
 	}
 
-	*satisfies_hot = hot_result;
-	*satisfies_key = key_result;
-	*satisfies_id = id_result;
+	return modified;
 }
 
 /*
#28Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Pavan Deolasee (#27)
Re: Patch: Write Amplification Reduction Method (WARM)

Reading through the track_root_lp patch now.

+		/*
+		 * For HOT (or WARM) updated tuples, we store the offset of the root
+		 * line pointer of this chain in the ip_posid field of the new tuple.
+		 * Usually this information will be available in the corresponding
+		 * field of the old tuple. But for aborted updates or pg_upgraded
+		 * databases, we might be seeing the old-style CTID chains and hence
+		 * the information must be obtained the hard way
+		 */
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			heap_get_root_tuple_one(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)),
+					&root_offnum);

Hmm. So the HasRootOffset tests the HEAP_LATEST_TUPLE bit, which is
reset temporarily during an update. So that case shouldn't occur often.

Oh, I just noticed that HeapTupleHeaderSetNextCtid also clears the flag.

@@ -4166,10 +4189,29 @@ l2:
HeapTupleClearHotUpdated(&oldtup);
HeapTupleClearHeapOnly(heaptup);
HeapTupleClearHeapOnly(newtup);
+ root_offnum = InvalidOffsetNumber;
}

-	RelationPutHeapTuple(relation, newbuf, heaptup, false);		/* insert new tuple */
+	/* insert new tuple */
+	RelationPutHeapTuple(relation, newbuf, heaptup, false, root_offnum);
+	HeapTupleHeaderSetHeapLatest(heaptup->t_data);
+	HeapTupleHeaderSetHeapLatest(newtup->t_data);
+	/*
+	 * Also update the in-memory copy with the root line pointer information
+	 */
+	if (OffsetNumberIsValid(root_offnum))
+	{
+		HeapTupleHeaderSetRootOffset(heaptup->t_data, root_offnum);
+		HeapTupleHeaderSetRootOffset(newtup->t_data, root_offnum);
+	}
+	else
+	{
+		HeapTupleHeaderSetRootOffset(heaptup->t_data,
+				ItemPointerGetOffsetNumber(&heaptup->t_self));
+		HeapTupleHeaderSetRootOffset(newtup->t_data,
+				ItemPointerGetOffsetNumber(&heaptup->t_self));
+	}

This is repetitive. I think after RelationPutHeapTuple it'd be better
to assign root_offnum = ItemPointerGetOffsetNumber(&heaptup->t_self), so
that we can just call SetRootOffset() on each tuple without the if().

+		HeapTupleHeaderSetHeapLatest((HeapTupleHeader) item);
+		if (OffsetNumberIsValid(root_offnum))
+			HeapTupleHeaderSetRootOffset((HeapTupleHeader) item,
+					root_offnum);
+		else
+			HeapTupleHeaderSetRootOffset((HeapTupleHeader) item,
+					offnum);

Just a matter of style, but this reads nicer IMO:

HeapTupleHeaderSetRootOffset((HeapTupleHeader) item,
OffsetNumberIsValid(root_offnum) ? root_offnum : offnum);

@@ -740,8 +742,9 @@ heap_page_prune_execute(Buffer buffer,
* holds a pin on the buffer. Once pin is released, a tuple might be pruned
* and reused by a completely unrelated tuple.
*/
-void
-heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+static void
+heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offsets)
{
OffsetNumber offnum,

I think this function deserves more/better/updated commentary.

@@ -439,7 +439,9 @@ rewrite_heap_tuple(RewriteState state,
* set the ctid of this tuple to point to the new location, and
* insert it right away.
*/
-			new_tuple->t_data->t_ctid = mapping->new_tid;
+			HeapTupleHeaderSetNextCtid(new_tuple->t_data,
+					ItemPointerGetBlockNumber(&mapping->new_tid),
+					ItemPointerGetOffsetNumber(&mapping->new_tid));

I think this would be nicer:
HeapTupleHeaderSetNextTid(new_tuple->t_data, &mapping->new_tid);
AFAICS all the callers are doing ItemPointerGetFoo for a TID, so this is
overly verbose for no reason. Also, the "c" in Ctid stands for
"current"; I think we can omit that.

@@ -525,7 +527,9 @@ rewrite_heap_tuple(RewriteState state,
new_tuple = unresolved->tuple;
free_new = true;
old_tid = unresolved->old_tid;
-				new_tuple->t_data->t_ctid = new_tid;
+				HeapTupleHeaderSetNextCtid(new_tuple->t_data,
+						ItemPointerGetBlockNumber(&new_tid),
+						ItemPointerGetOffsetNumber(&new_tid));

Did you forget to SetHeapLatest here, or ..? (If not, a comment is
warranted).

diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 32bb3f9..466609c 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -2443,7 +2443,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
* As above, it should be safe to examine xmax and t_ctid without the
* buffer content lock, because they can't be changing.
*/
-		if (ItemPointerEquals(&tuple.t_self, &tuple.t_data->t_ctid))
+		if (HeapTupleHeaderIsHeapLatest(tuple.t_data, tuple.t_self))
{
/* deleted, so forget about it */
ReleaseBuffer(buffer);

This is the place where this patch would have an effect. To test this
bit I think we're going to need an ad-hoc stress-test harness.
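As a starting point, something as simple as a pgbench script hammering the
same rows from many concurrent sessions should exercise EvalPlanQualFetch
under READ COMMITTED; the database, table, and column names below are made
up for illustration:

```shell
# Hypothetical stress script for pgbench: concurrent non-HOT updates on the
# same small set of rows force EvalPlanQual refetches under READ COMMITTED.
cat > epq_stress.sql <<'EOF'
\set id random(1, 100)
UPDATE t SET a = md5(random()::text) WHERE id = :id;
EOF

# Against a scratch database "warmtest" populated with
#   CREATE TABLE t (id int PRIMARY KEY, a text);
#   INSERT INTO t SELECT g, 'x' FROM generate_series(1, 100) g;
# one would then run, e.g.:
#   pgbench -n -f epq_stress.sql -c 8 -j 4 -T 60 warmtest
echo "epq_stress.sql written"
```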

+/*
+ * HEAP_LATEST_TUPLE is set in the last tuple of the update chain. But for
+ * clusters which are upgraded from a pre-10.0 release, we also check whether
+ * t_ctid points to itself and declare such a tuple to be the latest in the
+ * chain
+ */
+#define HeapTupleHeaderIsHeapLatest(tup, tid) \
+( \
+  ((tup)->t_infomask2 & HEAP_LATEST_TUPLE) || \
+  ((ItemPointerGetBlockNumber(&(tup)->t_ctid) == ItemPointerGetBlockNumber(&tid)) && \
+   (ItemPointerGetOffsetNumber(&(tup)->t_ctid) == ItemPointerGetOffsetNumber(&tid))) \
+)

Please add a "!= 0" to the first arm of the ||, so that we return a boolean.

+/*
+ * Get TID of next tuple in the update chain. Traditionally, we have stored
+ * self TID in the t_ctid field if the tuple is the last tuple in the chain. We
+ * try to preserve that behaviour by returning self-TID if HEAP_LATEST_TUPLE
+ * flag is set.
+ */
+#define HeapTupleHeaderGetNextCtid(tup, next_ctid, offnum) \
+do { \
+	if ((tup)->t_infomask2 & HEAP_LATEST_TUPLE) \
+	{ \
+		ItemPointerSet((next_ctid), ItemPointerGetBlockNumber(&(tup)->t_ctid), \
+				(offnum)); \
+	} \
+	else \
+	{ \
+		ItemPointerSet((next_ctid), ItemPointerGetBlockNumber(&(tup)->t_ctid), \
+				ItemPointerGetOffsetNumber(&(tup)->t_ctid)); \
+	} \
+} while (0)

This is a really odd macro, I think. Is any of the callers really
depending on the traditional behavior? If so, can we change them to
avoid that? (I think the "else" can be more easily written with
ItemPointerCopy). In any case, I think the documentation of the macro
leaves a bit to be desired -- I don't think we really care all that much
what we used to do, except perhaps as a secondary comment, but we do
care very much about what it actually does, which the current comment
doesn't really explain.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#29Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Alvaro Herrera (#28)
1 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

Hi Alvaro,

On Tue, Jan 17, 2017 at 8:41 PM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

Reading through the track_root_lp patch now.

Thanks for the review.

+             /*
+              * For HOT (or WARM) updated tuples, we store the offset of the root
+              * line pointer of this chain in the ip_posid field of the new tuple.
+              * Usually this information will be available in the corresponding
+              * field of the old tuple. But for aborted updates or pg_upgraded
+              * databases, we might be seeing the old-style CTID chains and hence
+              * the information must be obtained by hard way
+              */
+             if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+                     root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+             else
+                     heap_get_root_tuple_one(page,
+                                     ItemPointerGetOffsetNumber(&(oldtup.t_self)),
+                                     &root_offnum);

Hmm. So the HasRootOffset tests the HEAP_LATEST_TUPLE bit, which is
reset temporarily during an update. So that case shouldn't occur often.

Right. The root offset is stored only in those tuples where
HEAP_LATEST_TUPLE is set. This flag should generally be set on the tuples
that are being updated, except when the last update failed and the flag was
cleared. The other common case is a pg-upgraded cluster, where none of the
existing tuples will have this flag set. So in those cases we must find the
root line pointer the hard way.

Oh, I just noticed that HeapTupleHeaderSetNextCtid also clears the flag.

Yes, but this should happen only during updates and unless the update
fails, the next-to-be-updated tuple should have the flag set.

@@ -4166,10 +4189,29 @@ l2:
HeapTupleClearHotUpdated(&oldtup);
HeapTupleClearHeapOnly(heaptup);
HeapTupleClearHeapOnly(newtup);
+             root_offnum = InvalidOffsetNumber;
}

-     RelationPutHeapTuple(relation, newbuf, heaptup, false);   /* insert new tuple */
+     /* insert new tuple */
+     RelationPutHeapTuple(relation, newbuf, heaptup, false, root_offnum);
+     HeapTupleHeaderSetHeapLatest(heaptup->t_data);
+     HeapTupleHeaderSetHeapLatest(newtup->t_data);
+     /*
+      * Also update the in-memory copy with the root line pointer information
+      */
+     if (OffsetNumberIsValid(root_offnum))
+     {
+             HeapTupleHeaderSetRootOffset(heaptup->t_data, root_offnum);
+             HeapTupleHeaderSetRootOffset(newtup->t_data, root_offnum);
+     }
+     else
+     {
+             HeapTupleHeaderSetRootOffset(heaptup->t_data,
+                             ItemPointerGetOffsetNumber(&heaptup->t_self));
+             HeapTupleHeaderSetRootOffset(newtup->t_data,
+                             ItemPointerGetOffsetNumber(&heaptup->t_self));
+     }

This is repetitive. I think after RelationPutHeapTuple it'd be better
to assign root_offnum = &heaptup->t_self, so that we can just call
SetRootOffset() on each tuple without the if().

Fixed. I actually ripped off HeapTupleHeaderSetRootOffset() completely and
pushed setting of root line pointer into the HeapTupleHeaderSetHeapLatest().
That seems much cleaner because the system expects to find root line
pointer whenever HEAP_LATEST_TUPLE flag is set. Hence it makes sense to set
them together.

+             HeapTupleHeaderSetHeapLatest((HeapTupleHeader) item);
+             if (OffsetNumberIsValid(root_offnum))
+                     HeapTupleHeaderSetRootOffset((HeapTupleHeader) item,
+                                     root_offnum);
+             else
+                     HeapTupleHeaderSetRootOffset((HeapTupleHeader) item,
+                                     offnum);

Just a matter of style, but this reads nicer IMO:

HeapTupleHeaderSetRootOffset((HeapTupleHeader) item,
OffsetNumberIsValid(root_offnum) ? root_offnum : offnum);

Understood. This code no longer exists in the new patch since
HeapTupleHeaderSetRootOffset is merged with HeapTupleHeaderSetHeapLatest.

@@ -740,8 +742,9 @@ heap_page_prune_execute(Buffer buffer,
 * holds a pin on the buffer. Once pin is released, a tuple might be pruned
 * and reused by a completely unrelated tuple.
 */
-void
-heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+static void
+heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
+             OffsetNumber *root_offsets)
{
OffsetNumber offnum,
I think this function deserves more/better/updated commentary.

Sure. I added more commentary. I also reworked the function so that the
caller can pass just one item array when it's interested in finding root
line pointer for just one item. Hopefully that will save a few bytes on the
stack.

@@ -439,7 +439,9 @@ rewrite_heap_tuple(RewriteState state,
 * set the ctid of this tuple to point to the new location, and
 * insert it right away.
 */
-                     new_tuple->t_data->t_ctid = mapping->new_tid;
+                     HeapTupleHeaderSetNextCtid(new_tuple->t_data,
+                                     ItemPointerGetBlockNumber(&mapping->new_tid),
+                                     ItemPointerGetOffsetNumber(&mapping->new_tid));

I think this would be nicer:
HeapTupleHeaderSetNextTid(new_tuple->t_data, &mapping->new_tid);
AFAICS all the callers are doing ItemPointerGetFoo for a TID, so this is
overly verbose for no reason. Also, the "c" in Ctid stands for
"current"; I think we can omit that.

Yes, fixed. I realised that all callers were anyway calling the macro
with the block/offset of the same TID. So it makes sense to just pass the
TID to the macro.

@@ -525,7 +527,9 @@ rewrite_heap_tuple(RewriteState state,
new_tuple = unresolved->tuple;
free_new = true;
old_tid = unresolved->old_tid;
-                             new_tuple->t_data->t_ctid = new_tid;
+                             HeapTupleHeaderSetNextCtid(new_tuple->t_data,
+                                     ItemPointerGetBlockNumber(&new_tid),
+                                     ItemPointerGetOffsetNumber(&new_tid));

Did you forget to SetHeapLatest here, or ..? (If not, a comment is
warranted).

Umm probably not. The way I see it, new_tuple is not actually the new tuple
when this is called, but it's changed to the unresolved tuple (see the
start of the hunk). So what we're doing is setting next CTID in the
previous tuple in the chain. SetHeapLatest is called on the new tuple
inside raw_heap_insert(). I did not add any more comments, but please let
me know if you think it's still confusing or if I'm missing something.

diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 32bb3f9..466609c 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -2443,7 +2443,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 * As above, it should be safe to examine xmax and t_ctid without the
 * buffer content lock, because they can't be changing.
 */
-     if (ItemPointerEquals(&tuple.t_self, &tuple.t_data->t_ctid))
+     if (HeapTupleHeaderIsHeapLatest(tuple.t_data, tuple.t_self))
{
/* deleted, so forget about it */
ReleaseBuffer(buffer);

This is the place where this patch would have an effect. To test this
bit I think we're going to need an ad-hoc stress-test harness.

This is the place where this patch would have an effect. To test this
bit I think we're going to need an ad-hoc stress-test harness.

Sure. I did some pgbench tests and ran consistency checks during and at the
end of the tests. I chose a small scale factor and many clients so that the
same tuple is often updated concurrently. That should exercise the new
chain-following code rigorously. But I'll do more of those on a bigger box.
Do you have other suggestions for ad-hoc tests?

+/*
+ * If HEAP_LATEST_TUPLE is set in the last tuple in the update chain. But for
+ * clusters which are upgraded from pre-10.0 release, we still check if c_tid
+ * is pointing to itself and declare such tuple as the latest tuple in the
+ * chain
+ */
+#define HeapTupleHeaderIsHeapLatest(tup, tid) \
+( \
+  ((tup)->t_infomask2 & HEAP_LATEST_TUPLE) || \
+  ((ItemPointerGetBlockNumber(&(tup)->t_ctid) == ItemPointerGetBlockNumber(&tid)) && \
+   (ItemPointerGetOffsetNumber(&(tup)->t_ctid) == ItemPointerGetOffsetNumber(&tid))) \
+)

Please add a "!= 0" to the first arm of the ||, so that we return a boolean.

Done. Also rebased with new master where similar changes have been done.

+/*
+ * Get TID of next tuple in the update chain. Traditionally, we have stored
+ * self TID in the t_ctid field if the tuple is the last tuple in the chain. We
+ * try to preserve that behaviour by returning self-TID if HEAP_LATEST_TUPLE
+ * flag is set.
+ */
+#define HeapTupleHeaderGetNextCtid(tup, next_ctid, offnum) \
+do { \
+     if ((tup)->t_infomask2 & HEAP_LATEST_TUPLE) \
+     { \
+             ItemPointerSet((next_ctid), ItemPointerGetBlockNumber(&(tup)->t_ctid), \
+                             (offnum)); \
+     } \
+     else \
+     { \
+             ItemPointerSet((next_ctid), ItemPointerGetBlockNumber(&(tup)->t_ctid), \
+                             ItemPointerGetOffsetNumber(&(tup)->t_ctid)); \
+     } \
+} while (0)

This is a really odd macro, I think. Is any of the callers really
depending on the traditional behavior? If so, can we change them to
avoid that? (I think the "else" can be more easily written with
ItemPointerCopy). In any case, I think the documentation of the macro
leaves a bit to be desired -- I don't think we really care all that much
what we used to do, except perhaps as a secondary comment, but we do
care very much about what it actually does, which the current comment
doesn't really explain.

I reworked this quite a bit and I believe the new code does what you
suggested. The HeapTupleHeaderGetNextTid macro is now much simpler (it
just copies the TID) and we leave it to the caller to ensure they don't
call this on a tuple which is already at the end of the chain (i.e has
HEAP_LATEST_TUPLE set, but we don't look for old-style end-of-the-chain
markers). The callers can choose to return the same TID back if their
callers rely on that behaviour. But inside this macro, we now assert that
HEAP_LATEST_TUPLE is not set.

One thing that worried me is whether there exists a path that sets
t_infomask (and hence HEAP_LATEST_TUPLE) during redo recovery where we would
fail to set the root line pointer correctly along with it. But AFAICS the
interesting cases of insert, multi-insert and update are handled OK. The
only other places where I saw t_infomask being copied as-is from the WAL
record are DecodeXLogTuple() and DecodeMultiInsert(), but those should not
cause any problem AFAICS.

Revised patch is attached. All regression tests, isolation tests and
pgbench test with -c40 -j10 pass on my laptop.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0001_track_root_lp_v9.patchapplication/octet-stream; name=0001_track_root_lp_v9.patchDownload
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 91c13d4..8e57bae 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -93,7 +93,8 @@ static HeapTuple heap_prepare_insert(Relation relation, HeapTuple tup,
 					TransactionId xid, CommandId cid, int options);
 static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
-				HeapTuple newtup, HeapTuple old_key_tup,
+				HeapTuple newtup, OffsetNumber root_offnum,
+				HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
 static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
 							 Bitmapset *interesting_cols,
@@ -2247,13 +2248,13 @@ heap_get_latest_tid(Relation relation,
 		 */
 		if ((tp.t_data->t_infomask & HEAP_XMAX_INVALID) ||
 			HeapTupleHeaderIsOnlyLocked(tp.t_data) ||
-			ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid))
+			HeapTupleHeaderIsHeapLatest(tp.t_data, &ctid))
 		{
 			UnlockReleaseBuffer(buffer);
 			break;
 		}
 
-		ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tp.t_data, &ctid);
 		priorXmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		UnlockReleaseBuffer(buffer);
 	}							/* end of loop */
@@ -2373,6 +2374,7 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	bool		all_visible_cleared = false;
+	OffsetNumber	root_offnum;
 
 	/*
 	 * Fill in tuple header fields, assign an OID, and toast the tuple if
@@ -2411,8 +2413,14 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
+	root_offnum = InvalidOffsetNumber;
 	RelationPutHeapTuple(relation, buffer, heaptup,
-						 (options & HEAP_INSERT_SPECULATIVE) != 0);
+						 (options & HEAP_INSERT_SPECULATIVE) != 0,
+						 &root_offnum);
+
+	/* We must not overwrite the speculative insertion token */
+	if ((options & HEAP_INSERT_SPECULATIVE) == 0)
+		HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 	if (PageIsAllVisible(BufferGetPage(buffer)))
 	{
@@ -2640,6 +2648,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 	Size		saveFreeSpace;
 	bool		need_tuple_data = RelationIsLogicallyLogged(relation);
 	bool		need_cids = RelationIsAccessibleInLogicalDecoding(relation);
+	OffsetNumber	root_offnum;
 
 	needwal = !(options & HEAP_INSERT_SKIP_WAL) && RelationNeedsWAL(relation);
 	saveFreeSpace = RelationGetTargetPageFreeSpace(relation,
@@ -2710,7 +2719,13 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		 * RelationGetBufferForTuple has ensured that the first tuple fits.
 		 * Put that on the page, and then as many other tuples as fit.
 		 */
-		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false);
+		root_offnum = InvalidOffsetNumber;
+		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false,
+				&root_offnum);
+
+		/* Mark this tuple as the latest and also set root offset */
+		HeapTupleHeaderSetHeapLatest(heaptuples[ndone]->t_data, root_offnum);
+
 		for (nthispage = 1; ndone + nthispage < ntuples; nthispage++)
 		{
 			HeapTuple	heaptup = heaptuples[ndone + nthispage];
@@ -2718,7 +2733,11 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			if (PageGetHeapFreeSpace(page) < MAXALIGN(heaptup->t_len) + saveFreeSpace)
 				break;
 
-			RelationPutHeapTuple(relation, buffer, heaptup, false);
+			root_offnum = InvalidOffsetNumber;
+			RelationPutHeapTuple(relation, buffer, heaptup, false,
+					&root_offnum);
+			/* Mark each tuple as the latest and also set root offset */
+			HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 			/*
 			 * We don't use heap_multi_insert for catalog tuples yet, but
@@ -2990,6 +3009,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	HeapTupleData tp;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	TransactionId new_xmax;
@@ -3000,6 +3020,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	bool		all_visible_cleared = false;
 	HeapTuple	old_key_tuple = NULL;	/* replica identity of the tuple */
 	bool		old_key_copied = false;
+	OffsetNumber	root_offnum;
 
 	Assert(ItemPointerIsValid(tid));
 
@@ -3041,7 +3062,8 @@ heap_delete(Relation relation, ItemPointer tid,
 		LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 	}
 
-	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid));
+	offnum = ItemPointerGetOffsetNumber(tid);
+	lp = PageGetItemId(page, offnum);
 	Assert(ItemIdIsNormal(lp));
 
 	tp.t_tableOid = RelationGetRelid(relation);
@@ -3171,7 +3193,17 @@ l1:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tp.t_data->t_ctid;
+
+		/*
+		 * If we're at the end of the chain, then just return the same TID back
+		 * to the caller. The caller uses that as a hint to know if we have hit
+		 * the end of the chain
+		 */
+		if (!HeapTupleHeaderIsHeapLatest(tp.t_data, &tp.t_self))
+			HeapTupleHeaderGetNextTid(tp.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&tp.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tp.t_data);
@@ -3220,6 +3252,23 @@ l1:
 							  xid, LockTupleExclusive, true,
 							  &new_xmax, &new_infomask, &new_infomask2);
 
+	/*
+	 * heap_get_root_tuple_one() may call palloc, which is disallowed once we
+	 * enter the critical section. So check if the root offset is cached in the
+	 * tuple and if not, fetch that information hard way before entering the
+	 * critical section
+	 *
+	 * Most often and unless we are dealing with a pg-upgraded cluster, the
+	 * root offset information should be cached. So there should not be too
+	 * much overhead of fetching this information. Also, once a tuple is
+	 * updated, the information will be copied to the new version. So it's not
+	 * as if we're going to pay this price forever
+	 */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		heap_get_root_tuple_one(page,
+				ItemPointerGetOffsetNumber(&tp.t_self),
+				&root_offnum);
+
 	START_CRIT_SECTION();
 
 	/*
@@ -3247,8 +3296,10 @@ l1:
 	HeapTupleHeaderClearHotUpdated(tp.t_data);
 	HeapTupleHeaderSetXmax(tp.t_data, new_xmax);
 	HeapTupleHeaderSetCmax(tp.t_data, cid, iscombo);
-	/* Make sure there is no forward chain link in t_ctid */
-	tp.t_data->t_ctid = tp.t_self;
+
+	/* Mark this tuple as the latest tuple in the update chain */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		HeapTupleHeaderSetHeapLatest(tp.t_data, root_offnum);
 
 	MarkBufferDirty(buffer);
 
@@ -3449,6 +3500,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		old_key_copied = false;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
+	OffsetNumber	root_offnum;
 	MultiXactStatus mxact_status;
 	Buffer		buffer,
 				newbuf,
@@ -3511,6 +3564,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 
 
 	block = ItemPointerGetBlockNumber(otid);
+	offnum = ItemPointerGetOffsetNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
@@ -3795,7 +3849,12 @@ l2:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(oldtup.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = oldtup.t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(oldtup.t_data, &oldtup.t_self))
+			HeapTupleHeaderGetNextTid(oldtup.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&oldtup.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(oldtup.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(oldtup.t_data);
@@ -3935,6 +3994,7 @@ l2:
 		uint16		infomask_lock_old_tuple,
 					infomask2_lock_old_tuple;
 		bool		cleared_all_frozen = false;
+		OffsetNumber	root_offnum;
 
 		/*
 		 * To prevent concurrent sessions from updating the tuple, we have to
@@ -3962,6 +4022,15 @@ l2:
 
 		Assert(HEAP_XMAX_IS_LOCKED_ONLY(infomask_lock_old_tuple));
 
+		/*
+		 * Fetch root offset before entering the critical section. We do this
+		 * only if the information is not already available
+		 */
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			heap_get_root_tuple_one(page,
+					ItemPointerGetOffsetNumber(&oldtup.t_self),
+					&root_offnum);
+
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
@@ -3976,7 +4045,8 @@ l2:
 		HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 		/* temporarily make it look not-updated, but locked */
-		oldtup.t_data->t_ctid = oldtup.t_self;
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			HeapTupleHeaderSetHeapLatest(oldtup.t_data, root_offnum);
 
 		/*
 		 * Clear all-frozen bit on visibility map if needed. We could
@@ -4134,6 +4204,11 @@ l2:
 										   bms_overlap(modified_attrs, id_attrs),
 										   &old_key_copied);
 
+	if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+		heap_get_root_tuple_one(page,
+				ItemPointerGetOffsetNumber(&(oldtup.t_self)),
+				&root_offnum);
+
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
@@ -4159,6 +4234,17 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+		/*
+		 * For HOT (or WARM) updated tuples, we store the offset of the root
+		 * line pointer of this chain in the ip_posid field of the new tuple.
+		 * Usually this information will be available in the corresponding
+		 * field of the old tuple. But for aborted updates or pg_upgraded
+		 * databases, we might be seeing the old-style CTID chains and hence
+		 * the information must be obtained by hard way (we should have done
+		 * that before entering the critical section above)
+		 */
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
 	else
 	{
@@ -4166,10 +4252,21 @@ l2:
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		root_offnum = InvalidOffsetNumber;
 	}
 
-	RelationPutHeapTuple(relation, newbuf, heaptup, false);		/* insert new tuple */
-
+	/* insert new tuple */
+	RelationPutHeapTuple(relation, newbuf, heaptup, false, &root_offnum);
+	/*
+	 * Also mark both copies as latest and set the root offset information. If
+	 * we're doing a HOT/WARM update, then we just copy the information from
+	 * old tuple, if available or computed above. For regular updates,
+	 * RelationPutHeapTuple must have returned us the actual offset number
+	 * where the new version was inserted and we store the same value since the
+	 * update resulted in a new HOT-chain
+	 */
+	HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
+	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
 	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
@@ -4182,7 +4279,7 @@ l2:
 	HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 	/* record address of new tuple in t_ctid of old one */
-	oldtup.t_data->t_ctid = heaptup->t_self;
+	HeapTupleHeaderSetNextTid(oldtup.t_data, &(heaptup->t_self));
 
 	/* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */
 	if (PageIsAllVisible(BufferGetPage(buffer)))
@@ -4221,6 +4318,7 @@ l2:
 
 		recptr = log_heap_update(relation, buffer,
 								 newbuf, &oldtup, heaptup,
+								 root_offnum,
 								 old_key_tuple,
 								 all_visible_cleared,
 								 all_visible_cleared_new);
@@ -4501,7 +4599,8 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	ItemId		lp;
 	Page		page;
 	Buffer		vmbuffer = InvalidBuffer;
-	BlockNumber block;
+	BlockNumber	block;
+	OffsetNumber	offnum;
 	TransactionId xid,
 				xmax;
 	uint16		old_infomask,
@@ -4510,9 +4609,11 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	bool		first_time = true;
 	bool		have_tuple_lock = false;
 	bool		cleared_all_frozen = false;
+	OffsetNumber	root_offnum;
 
 	*buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	block = ItemPointerGetBlockNumber(tid);
+	offnum = ItemPointerGetOffsetNumber(tid);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -4532,6 +4633,7 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	tuple->t_data = (HeapTupleHeader) PageGetItem(page, lp);
 	tuple->t_len = ItemIdGetLength(lp);
 	tuple->t_tableOid = RelationGetRelid(relation);
+	tuple->t_self = *tid;
 
 l3:
 	result = HeapTupleSatisfiesUpdate(tuple, cid, *buffer);
@@ -4559,7 +4661,11 @@ l3:
 		xwait = HeapTupleHeaderGetRawXmax(tuple->t_data);
 		infomask = tuple->t_data->t_infomask;
 		infomask2 = tuple->t_data->t_infomask2;
-		ItemPointerCopy(&tuple->t_data->t_ctid, &t_ctid);
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &t_ctid);
+		else
+			ItemPointerCopy(tid, &t_ctid);
 
 		LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
 
@@ -4997,7 +5103,12 @@ failed:
 		Assert(result == HeapTupleSelfUpdated || result == HeapTupleUpdated ||
 			   result == HeapTupleWouldBlock);
 		Assert(!(tuple->t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tuple->t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(tid, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tuple->t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tuple->t_data);
@@ -5045,6 +5156,11 @@ failed:
 							  GetCurrentTransactionId(), mode, false,
 							  &xid, &new_infomask, &new_infomask2);
 
+	if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+		heap_get_root_tuple_one(page,
+				ItemPointerGetOffsetNumber(&tuple->t_self),
+				&root_offnum);
+
 	START_CRIT_SECTION();
 
 	/*
@@ -5073,7 +5189,10 @@ failed:
 	 * the tuple as well.
 	 */
 	if (HEAP_XMAX_IS_LOCKED_ONLY(new_infomask))
-		tuple->t_data->t_ctid = *tid;
+	{
+		if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+			HeapTupleHeaderSetHeapLatest(tuple->t_data, root_offnum);
+	}
 
 	/* Clear only the all-frozen bit on visibility map if needed */
 	if (PageIsAllVisible(page) &&
@@ -5587,6 +5706,7 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 	bool		cleared_all_frozen = false;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber block;
+	OffsetNumber offnum;
 
 	ItemPointerCopy(tid, &tupid);
 
@@ -5595,6 +5715,8 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 		new_infomask = 0;
 		new_xmax = InvalidTransactionId;
 		block = ItemPointerGetBlockNumber(&tupid);
+		offnum = ItemPointerGetOffsetNumber(&tupid);
+
 		ItemPointerCopy(&tupid, &(mytup.t_self));
 
 		if (!heap_fetch(rel, SnapshotAny, &mytup, &buf, false, NULL))
@@ -5824,7 +5946,7 @@ l4:
 
 		/* if we find the end of update chain, we're done. */
 		if (mytup.t_data->t_infomask & HEAP_XMAX_INVALID ||
-			ItemPointerEquals(&mytup.t_self, &mytup.t_data->t_ctid) ||
+			HeapTupleHeaderIsHeapLatest(mytup.t_data, &mytup.t_self) ||
 			HeapTupleHeaderIsOnlyLocked(mytup.t_data))
 		{
 			result = HeapTupleMayBeUpdated;
@@ -5833,7 +5955,7 @@ l4:
 
 		/* tail recursion */
 		priorXmax = HeapTupleHeaderGetUpdateXid(mytup.t_data);
-		ItemPointerCopy(&(mytup.t_data->t_ctid), &tupid);
+		HeapTupleHeaderGetNextTid(mytup.t_data, &tupid);
 		UnlockReleaseBuffer(buf);
 		if (vmbuffer != InvalidBuffer)
 			ReleaseBuffer(vmbuffer);
@@ -5950,7 +6072,7 @@ heap_finish_speculative(Relation relation, HeapTuple tuple)
 	 * Replace the speculative insertion token with a real t_ctid, pointing to
 	 * itself like it does on regular tuples.
 	 */
-	htup->t_ctid = tuple->t_self;
+	HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 	/* XLOG stuff */
 	if (RelationNeedsWAL(relation))
@@ -6076,8 +6198,7 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	HeapTupleHeaderSetXmin(tp.t_data, InvalidTransactionId);
 
 	/* Clear the speculative insertion token too */
-	tp.t_data->t_ctid = tp.t_self;
-
+	HeapTupleHeaderSetHeapLatest(tp.t_data, ItemPointerGetOffsetNumber(tid));
 	MarkBufferDirty(buffer);
 
 	/*
@@ -7425,6 +7546,7 @@ log_heap_visible(RelFileNode rnode, Buffer heap_buffer, Buffer vm_buffer,
 static XLogRecPtr
 log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup, HeapTuple newtup,
+				OffsetNumber root_offnum,
 				HeapTuple old_key_tuple,
 				bool all_visible_cleared, bool new_all_visible_cleared)
 {
@@ -7545,6 +7667,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	xlrec.new_offnum = ItemPointerGetOffsetNumber(&newtup->t_self);
 	xlrec.new_xmax = HeapTupleHeaderGetRawXmax(newtup->t_data);
 
+	Assert(OffsetNumberIsValid(root_offnum));
+	xlrec.root_offnum = root_offnum;
+
 	bufflags = REGBUF_STANDARD;
 	if (init)
 		bufflags |= REGBUF_WILL_INIT;
@@ -8199,7 +8324,13 @@ heap_xlog_delete(XLogReaderState *record)
 			PageClearAllVisible(page);
 
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = target_tid;
+		if (!HeapTupleHeaderHasRootOffset(htup))
+		{
+			OffsetNumber	root_offnum;
+			heap_get_root_tuple_one(page, xlrec->offnum, &root_offnum); 
+			HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+		}
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8289,7 +8420,8 @@ heap_xlog_insert(XLogReaderState *record)
 		htup->t_hoff = xlhdr.t_hoff;
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
-		htup->t_ctid = target_tid;
+
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->offnum);
 
 		if (PageAddItem(page, (Item) htup, newlen, xlrec->offnum,
 						true, true) == InvalidOffsetNumber)
@@ -8424,8 +8556,8 @@ heap_xlog_multi_insert(XLogReaderState *record)
 			htup->t_hoff = xlhdr->t_hoff;
 			HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 			HeapTupleHeaderSetCmin(htup, FirstCommandId);
-			ItemPointerSetBlockNumber(&htup->t_ctid, blkno);
-			ItemPointerSetOffsetNumber(&htup->t_ctid, offnum);
+
+			HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 			offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 			if (offnum == InvalidOffsetNumber)
@@ -8561,7 +8693,7 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
 		/* Set forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetNextTid(htup, &newtid);
 
 		/* Mark the page as a candidate for pruning */
 		PageSetPrunable(page, XLogRecGetXid(record));
@@ -8694,13 +8826,17 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
-		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = newtid;
 
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
 
+		/*
+		 * Make sure the tuple is marked as the latest and root offset
+		 * information is restored
+		 */
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->root_offnum);
+
 		if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED)
 			PageClearAllVisible(page);
 
@@ -8763,6 +8899,9 @@ heap_xlog_confirm(XLogReaderState *record)
 		 */
 		ItemPointerSet(&htup->t_ctid, BufferGetBlockNumber(buffer), offnum);
 
+		/* For newly inserted tuple, set root offset to itself */
+		HeapTupleHeaderSetHeapLatest(htup, offnum);
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8826,11 +8965,18 @@ heap_xlog_lock(XLogReaderState *record)
 		 */
 		if (HEAP_XMAX_IS_LOCKED_ONLY(htup->t_infomask))
 		{
+			ItemPointerData	target_tid;
+
+			ItemPointerSet(&target_tid, BufferGetBlockNumber(buffer), offnum);
 			HeapTupleHeaderClearHotUpdated(htup);
 			/* Make sure there is no forward chain link in t_ctid */
-			ItemPointerSet(&htup->t_ctid,
-						   BufferGetBlockNumber(buffer),
-						   offnum);
+			if (!HeapTupleHeaderHasRootOffset(htup))
+			{
+				OffsetNumber	root_offnum;
+				heap_get_root_tuple_one(page,
+						offnum, &root_offnum);
+				HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+			}
 		}
 		HeapTupleHeaderSetXmax(htup, xlrec->locking_xid);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index 6529fe3..14ed263 100644
--- a/src/backend/access/heap/hio.c
+++ b/src/backend/access/heap/hio.c
@@ -31,12 +31,18 @@
  * !!! EREPORT(ERROR) IS DISALLOWED HERE !!!  Must PANIC on failure!!!
  *
  * Note - caller must hold BUFFER_LOCK_EXCLUSIVE on the buffer.
+ *
+ * The caller can optionally pass in the root offset to be set on the new
+ * tuple. Otherwise, the root offset is set to the offset of the new location
+ * once it's known. The former is used while updating an existing tuple, the
+ * latter while inserting a new row.
  */
 void
 RelationPutHeapTuple(Relation relation,
 					 Buffer buffer,
 					 HeapTuple tuple,
-					 bool token)
+					 bool token,
+					 OffsetNumber *root_offnum)
 {
 	Page		pageHeader;
 	OffsetNumber offnum;
@@ -60,16 +66,21 @@ RelationPutHeapTuple(Relation relation,
 	ItemPointerSet(&(tuple->t_self), BufferGetBlockNumber(buffer), offnum);
 
 	/*
-	 * Insert the correct position into CTID of the stored tuple, too (unless
-	 * this is a speculative insertion, in which case the token is held in
-	 * CTID field instead)
+	 * Set block number and the root offset into CTID of the stored tuple, too
+	 * (unless this is a speculative insertion, in which case the token is held
+	 * in CTID field instead)
 	 */
 	if (!token)
 	{
 		ItemId		itemId = PageGetItemId(pageHeader, offnum);
 		Item		item = PageGetItem(pageHeader, itemId);
 
+		/* Copy t_ctid to set the correct block number */
 		((HeapTupleHeader) item)->t_ctid = tuple->t_self;
+
+		if (!OffsetNumberIsValid(*root_offnum))
+			*root_offnum = offnum;
+		HeapTupleHeaderSetHeapLatest((HeapTupleHeader) item, *root_offnum);
 	}
 }
 
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index d69a266..2406e77 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -55,6 +55,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
 
+static void heap_get_root_tuples_internal(Page page,
+				OffsetNumber target_offnum, OffsetNumber *root_offsets);
 
 /*
  * Optionally prune and repair fragmentation in the specified page.
@@ -553,6 +555,17 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
 		if (!HeapTupleHeaderIsHotUpdated(htup))
 			break;
 
+
+		/*
+		 * If the tuple was HOT-updated and the update was later
+		 * aborted, someone could mark this tuple as the last tuple in
+		 * the chain without clearing the HOT-updated flag. So we must
+		 * check whether this is the last tuple in the chain and stop
+		 * following the CTID, else we risk an infinite loop (though
+		 * prstate->marked[] currently protects against that).
+		 */
+		if (HeapTupleHeaderHasRootOffset(htup))
+			break;
 		/*
 		 * Advance to next chain member.
 		 */
@@ -726,27 +739,48 @@ heap_page_prune_execute(Buffer buffer,
 
 
 /*
- * For all items in this page, find their respective root line pointers.
- * If item k is part of a HOT-chain with root at item j, then we set
- * root_offsets[k - 1] = j.
+ * Either for all items in this page or for the given item, find their
+ * respective root line pointers.
+ *
+ * When target_offnum is a valid offset number, the caller is interested in
+ * just one item. In that case, the root line pointer is returned in
+ * root_offsets.
  *
- * The passed-in root_offsets array must have MaxHeapTuplesPerPage entries.
- * We zero out all unused entries.
+ * When target_offnum is InvalidOffsetNumber, the caller wants to know
+ * the root line pointers of all the items in this page. The root_offsets array
+ * must have MaxHeapTuplesPerPage entries in that case. If item k is part of a
+ * HOT-chain with root at item j, then we set root_offsets[k - 1] = j. We zero
+ * out all unused entries.
  *
  * The function must be called with at least share lock on the buffer, to
  * prevent concurrent prune operations.
  *
+ * This is not a cheap function since it must scan through all line
+ * pointers and tuples on the page in order to find the root line
+ * pointers. To minimize the cost, we break out early when target_offnum
+ * is specified and the root line pointer for target_offnum has been
+ * found.
+ *
  * Note: The information collected here is valid only as long as the caller
  * holds a pin on the buffer. Once pin is released, a tuple might be pruned
  * and reused by a completely unrelated tuple.
+ *
+ * Note: This function must not be called inside a critical section because it
+ * internally calls HeapTupleHeaderGetUpdateXid which somewhere down the stack
+ * may try to allocate heap memory. Memory allocation is disallowed in a
+ * critical section.
  */
-void
-heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+static void
+heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offsets)
 {
 	OffsetNumber offnum,
 				maxoff;
 
-	MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
+	if (OffsetNumberIsValid(target_offnum))
+		*root_offsets = InvalidOffsetNumber;
+	else
+		MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
 
 	maxoff = PageGetMaxOffsetNumber(page);
 	for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum = OffsetNumberNext(offnum))
@@ -774,9 +808,28 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 
 			/*
 			 * This is either a plain tuple or the root of a HOT-chain.
-			 * Remember it in the mapping.
+			 *
+			 * If target_offnum is specified and this is its mapping, return
+			 * immediately.
 			 */
-			root_offsets[offnum - 1] = offnum;
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (target_offnum == offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember mapping for any other item. The
+				 * root_offsets array may not even have space for them, so be
+				 * careful not to write past the end of the array.
+				 */
+			}
+			else
+			{
+				/* Remember it in the mapping  */
+				root_offsets[offnum - 1] = offnum;
+			}
 
 			/* If it's not the start of a HOT-chain, we're done with it */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
@@ -817,15 +870,64 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 				!TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(htup)))
 				break;
 
-			/* Remember the root line pointer for this item */
-			root_offsets[nextoffnum - 1] = offnum;
+			/*
+			 * If target_offnum is specified and we found its mapping, return
+			 */
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (nextoffnum == target_offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember mapping for any other item. The
+				 * root_offsets array may not even have space for them, so be
+				 * careful not to write past the end of the array.
+				 */
+			}
+			else
+			{
+				/* Remember the root line pointer for this item */
+				root_offsets[nextoffnum - 1] = offnum;
+			}
 
 			/* Advance to next chain member, if any */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				break;
 
+			/*
+			 * If the tuple was HOT-updated and the update was later aborted,
+			 * someone could mark this tuple as the last in the chain and
+			 * store the root offset in its CTID, without clearing the
+			 * HOT-updated flag. So we must check whether the CTID actually
+			 * holds the root offset and break to avoid an infinite loop.
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
 		}
 	}
 }
+
+/*
+ * Get root line pointer for the given tuple
+ */
+void
+heap_get_root_tuple_one(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offnum)
+{
+	return heap_get_root_tuples_internal(page, target_offnum, root_offnum);
+}
+
+/*
+ * Get root line pointers for all tuples in the page
+ */
+void
+heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+{
+	return heap_get_root_tuples_internal(page, InvalidOffsetNumber,
+			root_offsets);
+}
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index 90ab6f2..5f64ca6 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -419,14 +419,18 @@ rewrite_heap_tuple(RewriteState state,
 	 */
 	if (!((old_tuple->t_data->t_infomask & HEAP_XMAX_INVALID) ||
 		  HeapTupleHeaderIsOnlyLocked(old_tuple->t_data)) &&
-		!(ItemPointerEquals(&(old_tuple->t_self),
-							&(old_tuple->t_data->t_ctid))))
+		!(HeapTupleHeaderIsHeapLatest(old_tuple->t_data, &old_tuple->t_self)))
 	{
 		OldToNewMapping mapping;
 
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
-		hashkey.tid = old_tuple->t_data->t_ctid;
+
+		/* 
+		 * We've already checked that this is not the last tuple in the chain,
+		 * so fetch the next TID in the chain
+		 */
+		HeapTupleHeaderGetNextTid(old_tuple->t_data, &hashkey.tid);
 
 		mapping = (OldToNewMapping)
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -439,7 +443,7 @@ rewrite_heap_tuple(RewriteState state,
 			 * set the ctid of this tuple to point to the new location, and
 			 * insert it right away.
 			 */
-			new_tuple->t_data->t_ctid = mapping->new_tid;
+			HeapTupleHeaderSetNextTid(new_tuple->t_data, &mapping->new_tid);
 
 			/* We don't need the mapping entry anymore */
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -525,7 +529,7 @@ rewrite_heap_tuple(RewriteState state,
 				new_tuple = unresolved->tuple;
 				free_new = true;
 				old_tid = unresolved->old_tid;
-				new_tuple->t_data->t_ctid = new_tid;
+				HeapTupleHeaderSetNextTid(new_tuple->t_data, &new_tid);
 
 				/*
 				 * We don't need the hash entry anymore, but don't free its
@@ -731,7 +735,12 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		onpage_tup->t_ctid = tup->t_self;
+		/* 
+		 * Set t_ctid just to ensure that block number is copied correctly, but
+		 * then immediately mark the tuple as the latest
+		 */
+		HeapTupleHeaderSetNextTid(onpage_tup, &tup->t_self);
+		HeapTupleHeaderSetHeapLatest(onpage_tup, newoff);
 	}
 
 	/* If heaptup is a private copy, release it. */
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 8d119f6..9920f48 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -788,7 +788,8 @@ retry:
 			  DirtySnapshot.speculativeToken &&
 			  TransactionIdPrecedes(GetCurrentTransactionId(), xwait))))
 		{
-			ctid_wait = tup->t_data->t_ctid;
+			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
+				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index ff277d3..9182fa7 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -2563,7 +2563,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		 * As above, it should be safe to examine xmax and t_ctid without the
 		 * buffer content lock, because they can't be changing.
 		 */
-		if (ItemPointerEquals(&tuple.t_self, &tuple.t_data->t_ctid))
+		if (HeapTupleHeaderIsHeapLatest(tuple.t_data, &tuple.t_self))
 		{
 			/* deleted, so forget about it */
 			ReleaseBuffer(buffer);
@@ -2571,7 +2571,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		}
 
 		/* updated, so look at the updated row */
-		tuple.t_self = tuple.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tuple.t_data, &tuple.t_self);
 		/* updated row should have xmin matching this xmax */
 		priorXmax = HeapTupleHeaderGetUpdateXid(tuple.t_data);
 		ReleaseBuffer(buffer);
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index ee7e05a..22507dc 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -188,6 +188,8 @@ extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
+extern void heap_get_root_tuple_one(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offnum);
 extern void heap_get_root_tuples(Page page, OffsetNumber *root_offsets);
 
 /* in heap/syncscan.c */
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index 52f28b8..a4a1fe1 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -193,6 +193,8 @@ typedef struct xl_heap_update
 	uint8		flags;
 	TransactionId new_xmax;		/* xmax of the new tuple */
 	OffsetNumber new_offnum;	/* new tuple's offset */
+	OffsetNumber root_offnum;	/* offset of the root line pointer in case of
+								   HOT or WARM update */
 
 	/*
 	 * If XLOG_HEAP_CONTAINS_OLD_TUPLE or XLOG_HEAP_CONTAINS_OLD_KEY flags are
@@ -200,7 +202,7 @@ typedef struct xl_heap_update
 	 */
 } xl_heap_update;
 
-#define SizeOfHeapUpdate	(offsetof(xl_heap_update, new_offnum) + sizeof(OffsetNumber))
+#define SizeOfHeapUpdate	(offsetof(xl_heap_update, root_offnum) + sizeof(OffsetNumber))
 
 /*
  * This is what we need to know about vacuum page cleanup/redirect
diff --git a/src/include/access/hio.h b/src/include/access/hio.h
index 2824f23..8752f69 100644
--- a/src/include/access/hio.h
+++ b/src/include/access/hio.h
@@ -36,7 +36,7 @@ typedef struct BulkInsertStateData
 
 
 extern void RelationPutHeapTuple(Relation relation, Buffer buffer,
-					 HeapTuple tuple, bool token);
+					 HeapTuple tuple, bool token, OffsetNumber *root_offnum);
 extern Buffer RelationGetBufferForTuple(Relation relation, Size len,
 						  Buffer otherBuffer, int options,
 						  BulkInsertState bistate,
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index fae955e..11bd1c8 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,13 +260,19 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x1800 are available */
+/* bits 0x0800 are available */
+#define HEAP_LATEST_TUPLE		0x1000	/*
+										 * This is the last tuple in chain and
+										 * ip_posid points to the root line
+										 * pointer
+										 */
 #define HEAP_KEYS_UPDATED		0x2000	/* tuple was updated and key cols
 										 * modified, or tuple deleted */
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xE000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+
 
 /*
  * HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins.  It is
@@ -504,6 +510,32 @@ do { \
   (tup)->t_infomask2 & HEAP_ONLY_TUPLE \
 )
 
+#define HeapTupleHeaderSetHeapLatest(tup, offnum) \
+do { \
+	AssertMacro(OffsetNumberIsValid(offnum)); \
+	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE; \
+	ItemPointerSetOffsetNumber(&(tup)->t_ctid, (offnum)); \
+} while (0)
+
+#define HeapTupleHeaderClearHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 &= ~HEAP_LATEST_TUPLE \
+)
+
+/*
+ * HEAP_LATEST_TUPLE is set on the last tuple in the update chain. But for
+ * clusters upgraded from a pre-10.0 release the flag may be missing, so we
+ * also check whether t_ctid points to the tuple itself and, if so, declare
+ * the tuple to be the latest in the chain.
+ */
+#define HeapTupleHeaderIsHeapLatest(tup, tid) \
+( \
+  (((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0) || \
+  ((ItemPointerGetBlockNumber(&(tup)->t_ctid) == ItemPointerGetBlockNumber(tid)) && \
+   (ItemPointerGetOffsetNumber(&(tup)->t_ctid) == ItemPointerGetOffsetNumber(tid))) \
+)
+
+
 #define HeapTupleHeaderSetHeapOnly(tup) \
 ( \
   (tup)->t_infomask2 |= HEAP_ONLY_TUPLE \
@@ -542,6 +574,45 @@ do { \
 
 
 /*
+ * Set the t_ctid chain and also clear the HEAP_LATEST_TUPLE flag since we
+ * probably have a new tuple in the chain
+ */
+#define HeapTupleHeaderSetNextTid(tup, tid) \
+do { \
+		ItemPointerCopy((tid), &((tup)->t_ctid)); \
+		HeapTupleHeaderClearHeapLatest((tup)); \
+} while (0)
+
+/*
+ * Get TID of next tuple in the update chain. Caller should have checked that
+ * we are not already at the end of the chain because in that case t_ctid may
+ * actually store the root line pointer of the HOT chain whose member this
+ * tuple is.
+ */
+#define HeapTupleHeaderGetNextTid(tup, next_ctid) \
+do { \
+	AssertMacro(!((tup)->t_infomask2 & HEAP_LATEST_TUPLE)); \
+	ItemPointerCopy(&(tup)->t_ctid, (next_ctid)); \
+} while (0)
+
+#define HeapTupleHeaderGetRootOffset(tup) \
+( \
+	AssertMacro(((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0), \
+	ItemPointerGetOffsetNumber(&(tup)->t_ctid) \
+)
+
+/*
+ * We use the same HEAP_LATEST_TUPLE flag to check if the tuple's t_ctid field
+ * contains the root line pointer. We can't use the
+ * HeapTupleHeaderIsHeapLatest macro here because that also checks TID
+ * equality to decide whether a tuple is at the end of the chain.
+ */
+#define HeapTupleHeaderHasRootOffset(tup) \
+( \
+	((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0 \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
#30Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Pavan Deolasee (#29)
4 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Jan 19, 2017 at 6:35 PM, Pavan Deolasee <pavan.deolasee@gmail.com>
wrote:

Revised patch is attached.

I've now also rebased the main WARM patch against the current master
(3eaf03b5d331b7a06d79 to be precise). I'm attaching Alvaro's patch to get
interesting attributes (prefixed with 0000 since the other two patches are
based on that). The changes to support system tables are now merged with
the main patch. I could separate them if it helps in review.

I am also including a stress test workload that I am currently running to
test WARM's correctness since Robert raised a valid concern about that. The
idea is to include a few more columns in the pgbench_accounts table and
have a few more indexes. The additional columns with indexes kind of share
a relationship with the "aid" column. But instead of a fixed value, values
for these columns can vary within a fixed, non-overlapping range. For
example, for aid = 1, aid1's original value will be 10 and it can vary
between 8 to 12. Similarly, aid2's original value will be 20 and it can
vary between 16 to 24. This setup allows us to update these additional
columns (thus force WARM), but still ensure that we can do some sanity
checks on the results.

The test contains a bunch of UPDATE, FOR UPDATE, FOR SHARE transactions.
Some of these transactions commit and some rollback. The checks are
in-place to ensure that we always find exactly one tuple irrespective of
which column we use to fetch the row. Of course, when the aid[1-4] columns
are used to fetch tuples, we need to scan with a range instead of an
equality. Then we do a bunch of operations like CREATE INDEX, DROP INDEX,
CIC, run long transactions, VACUUM FULL etc while the tests are running and
ensure that the sanity checks always pass. We could do a few other things,
like marking these indexes as UNIQUE or keeping a long transaction
open while doing updates and other operations. I'll add some of those to
the test, but suggestions are welcome.

I do see a problem with CREATE INDEX CONCURRENTLY with these tests, though
everything else has run ok so far (I am yet to do very long running tests.
Probably just a few hours tests today).

I'm trying to understand why CIC fails to build a consistent index. I think
I've some clue now why it could be happening. With HOT, we don't need to
worry about broken chains since at the very beginning we add the index
tuple and all subsequent updates will honour the new index while deciding
on HOT updates i.e. we won't create any new broken HOT chains once we start
building the index. Later during validation phase, we only need to insert
tuples that are not already in the index. But with WARM, I think the check
needs to be more elaborate. So even if the TID (we always look at its root
line pointer etc) exists in the index, we will need to ensure that the
index key matches the heap tuple we are dealing with. That looks a bit
tricky. May be we can lookup the index using key from the current heap
tuple and then see if we get a tuple with the same TID back. Of course, we
need to do this only if the tuple is a WARM tuple. The other option is that
we collect not only TIDs but also keys while scanning the index. That might
increase the size of the state information for wildly wide indexes. Or may
be just turn WARM off if there exists a build-in-progress index.

Suggestions/reviews/tests welcome.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

warm_stress_test.tar.gz (application/x-gzip)
0000_interesting_attrs.patch (application/octet-stream)
commit 4e8623eadc6adbc31143ba1a774ef2db533fc7d2
Author: Pavan Deolasee <pavan.deolasee@gmail.com>
Date:   Sun Jan 1 16:29:10 2017 +0530

    Alvaro's patch on interesting attrs

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 1ce42ea..91c13d4 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -95,11 +95,8 @@ static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
 				HeapTuple newtup, HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
-static void HeapSatisfiesHOTandKeyUpdate(Relation relation,
-							 Bitmapset *hot_attrs,
-							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
-							 bool *satisfies_id,
+static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
+							 Bitmapset *interesting_cols,
 							 HeapTuple oldtup, HeapTuple newtup);
 static bool heap_acquire_tuplock(Relation relation, ItemPointer tid,
 					 LockTupleMode mode, LockWaitPolicy wait_policy,
@@ -3443,6 +3440,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *interesting_attrs;
+	Bitmapset  *modified_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3460,9 +3459,6 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				pagefree;
 	bool		have_tuple_lock = false;
 	bool		iscombo;
-	bool		satisfies_hot;
-	bool		satisfies_key;
-	bool		satisfies_id;
 	bool		use_hot_update = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
@@ -3489,21 +3485,30 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				 errmsg("cannot update tuples during a parallel operation")));
 
 	/*
-	 * Fetch the list of attributes to be checked for HOT update.  This is
-	 * wasted effort if we fail to update or have to put the new tuple on a
-	 * different page.  But we must compute the list before obtaining buffer
-	 * lock --- in the worst case, if we are doing an update on one of the
-	 * relevant system catalogs, we could deadlock if we try to fetch the list
-	 * later.  In any case, the relcache caches the data so this is usually
-	 * pretty cheap.
+	 * Fetch the list of attributes to be checked for various operations.
 	 *
-	 * Note that we get a copy here, so we need not worry about relcache flush
-	 * happening midway through.
+	 * For HOT considerations, this is wasted effort if we fail to update or
+	 * have to put the new tuple on a different page.  But we must compute the
+	 * list before obtaining buffer lock --- in the worst case, if we are doing
+	 * an update on one of the relevant system catalogs, we could deadlock if
+	 * we try to fetch the list later.  In any case, the relcache caches the
+	 * data so this is usually pretty cheap.
+	 *
+	 * We also need columns used by the replica identity, the columns that
+	 * are considered the "key" of rows in the table, and columns that are
+	 * part of indirect indexes.
+	 *
+	 * Note that we get copies of each bitmap, so we need not worry about
+	 * relcache flush happening midway through.
 	 */
 	hot_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_ALL);
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	interesting_attrs = bms_add_members(NULL, hot_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
+
 
 	block = ItemPointerGetBlockNumber(otid);
 	buffer = ReadBuffer(relation, block);
@@ -3524,7 +3529,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Assert(ItemIdIsNormal(lp));
 
 	/*
-	 * Fill in enough data in oldtup for HeapSatisfiesHOTandKeyUpdate to work
+	 * Fill in enough data in oldtup for HeapDetermineModifiedColumns to work
 	 * properly.
 	 */
 	oldtup.t_tableOid = RelationGetRelid(relation);
@@ -3550,6 +3555,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 		Assert(!(newtup->t_data->t_infomask & HEAP_HASOID));
 	}
 
+	/* Determine columns modified by the update. */
+	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
+												  &oldtup, newtup);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3561,10 +3570,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	 * is updates that don't manipulate key columns, not those that
 	 * serendipitiously arrive at the same key values.
 	 */
-	HeapSatisfiesHOTandKeyUpdate(relation, hot_attrs, key_attrs, id_attrs,
-								 &satisfies_hot, &satisfies_key,
-								 &satisfies_id, &oldtup, newtup);
-	if (satisfies_key)
+	if (!bms_overlap(modified_attrs, key_attrs))
 	{
 		*lockmode = LockTupleNoKeyExclusive;
 		mxact_status = MultiXactStatusNoKeyUpdate;
@@ -3803,6 +3809,8 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(modified_attrs);
+		bms_free(interesting_attrs);
 		return result;
 	}
 
@@ -4107,7 +4115,7 @@ l2:
 		 * to do a HOT update.  Check if any of the index columns have been
 		 * changed.  If not, then HOT update is possible.
 		 */
-		if (satisfies_hot)
+		if (!bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
 	}
 	else
@@ -4122,7 +4130,9 @@ l2:
 	 * ExtractReplicaIdentity() will return NULL if nothing needs to be
 	 * logged.
 	 */
-	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup, !satisfies_id, &old_key_copied);
+	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup,
+										   bms_overlap(modified_attrs, id_attrs),
+										   &old_key_copied);
 
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
@@ -4270,13 +4280,15 @@ l2:
 	bms_free(hot_attrs);
 	bms_free(key_attrs);
 	bms_free(id_attrs);
+	bms_free(modified_attrs);
+	bms_free(interesting_attrs);
 
 	return HeapTupleMayBeUpdated;
 }
 
 /*
  * Check if the specified attribute's value is same in both given tuples.
- * Subroutine for HeapSatisfiesHOTandKeyUpdate.
+ * Subroutine for HeapDetermineModifiedColumns.
  */
 static bool
 heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
@@ -4310,7 +4322,7 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 
 	/*
 	 * Extract the corresponding values.  XXX this is pretty inefficient if
-	 * there are many indexed columns.  Should HeapSatisfiesHOTandKeyUpdate do
+	 * there are many indexed columns.  Should HeapDetermineModifiedColumns do
 	 * a single heap_deform_tuple call on each tuple, instead?	But that
 	 * doesn't work for system columns ...
 	 */
@@ -4355,114 +4367,30 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 /*
  * Check which columns are being updated.
  *
- * This simultaneously checks conditions for HOT updates, for FOR KEY
- * SHARE updates, and REPLICA IDENTITY concerns.  Since much of the time they
- * will be checking very similar sets of columns, and doing the same tests on
- * them, it makes sense to optimize and do them together.
- *
- * We receive three bitmapsets comprising the three sets of columns we're
- * interested in.  Note these are destructively modified; that is OK since
- * this is invoked at most once in heap_update.
+ * Given an updated tuple, determine (and return into the output bitmapset),
+ * from those listed as interesting, the set of columns that changed.
  *
- * hot_result is set to TRUE if it's okay to do a HOT update (i.e. it does not
- * modified indexed columns); key_result is set to TRUE if the update does not
- * modify columns used in the key; id_result is set to TRUE if the update does
- * not modify columns in any index marked as the REPLICA IDENTITY.
+ * The input bitmapset is destructively modified; that is OK since this is
+ * invoked at most once in heap_update.
  */
-static void
-HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
-							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
-							 bool *satisfies_id,
+static Bitmapset *
+HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
 							 HeapTuple oldtup, HeapTuple newtup)
 {
-	int			next_hot_attnum;
-	int			next_key_attnum;
-	int			next_id_attnum;
-	bool		hot_result = true;
-	bool		key_result = true;
-	bool		id_result = true;
-
-	/* If REPLICA IDENTITY is set to FULL, id_attrs will be empty. */
-	Assert(bms_is_subset(id_attrs, key_attrs));
-	Assert(bms_is_subset(key_attrs, hot_attrs));
-
-	/*
-	 * If one of these sets contains no remaining bits, bms_first_member will
-	 * return -1, and after adding FirstLowInvalidHeapAttributeNumber (which
-	 * is negative!)  we'll get an attribute number that can't possibly be
-	 * real, and thus won't match any actual attribute number.
-	 */
-	next_hot_attnum = bms_first_member(hot_attrs);
-	next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
-	next_key_attnum = bms_first_member(key_attrs);
-	next_key_attnum += FirstLowInvalidHeapAttributeNumber;
-	next_id_attnum = bms_first_member(id_attrs);
-	next_id_attnum += FirstLowInvalidHeapAttributeNumber;
+	int		attnum;
+	Bitmapset *modified = NULL;
 
-	for (;;)
+	while ((attnum = bms_first_member(interesting_cols)) >= 0)
 	{
-		bool		changed;
-		int			check_now;
-
-		/*
-		 * Since the HOT attributes are a superset of the key attributes and
-		 * the key attributes are a superset of the id attributes, this logic
-		 * is guaranteed to identify the next column that needs to be checked.
-		 */
-		if (hot_result && next_hot_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_hot_attnum;
-		else if (key_result && next_key_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_key_attnum;
-		else if (id_result && next_id_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_id_attnum;
-		else
-			break;
+		attnum += FirstLowInvalidHeapAttributeNumber;
 
-		/* See whether it changed. */
-		changed = !heap_tuple_attr_equals(RelationGetDescr(relation),
-										  check_now, oldtup, newtup);
-		if (changed)
-		{
-			if (check_now == next_hot_attnum)
-				hot_result = false;
-			if (check_now == next_key_attnum)
-				key_result = false;
-			if (check_now == next_id_attnum)
-				id_result = false;
-
-			/* if all are false now, we can stop checking */
-			if (!hot_result && !key_result && !id_result)
-				break;
-		}
-
-		/*
-		 * Advance the next attribute numbers for the sets that contain the
-		 * attribute we just checked.  As we work our way through the columns,
-		 * the next_attnum values will rise; but when each set becomes empty,
-		 * bms_first_member() will return -1 and the attribute number will end
-		 * up with a value less than FirstLowInvalidHeapAttributeNumber.
-		 */
-		if (hot_result && check_now == next_hot_attnum)
-		{
-			next_hot_attnum = bms_first_member(hot_attrs);
-			next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
-		if (key_result && check_now == next_key_attnum)
-		{
-			next_key_attnum = bms_first_member(key_attrs);
-			next_key_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
-		if (id_result && check_now == next_id_attnum)
-		{
-			next_id_attnum = bms_first_member(id_attrs);
-			next_id_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
+		if (!heap_tuple_attr_equals(RelationGetDescr(relation),
+								   attnum, oldtup, newtup))
+			modified = bms_add_member(modified,
+									  attnum - FirstLowInvalidHeapAttributeNumber);
 	}
 
-	*satisfies_hot = hot_result;
-	*satisfies_key = key_result;
-	*satisfies_id = id_result;
+	return modified;
 }
 
 /*
Attachment: 0001_track_root_lp_v9.patch (application/octet-stream)
diff --git b/src/backend/access/heap/heapam.c a/src/backend/access/heap/heapam.c
index 91c13d4..8e57bae 100644
--- b/src/backend/access/heap/heapam.c
+++ a/src/backend/access/heap/heapam.c
@@ -93,7 +93,8 @@ static HeapTuple heap_prepare_insert(Relation relation, HeapTuple tup,
 					TransactionId xid, CommandId cid, int options);
 static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
-				HeapTuple newtup, HeapTuple old_key_tup,
+				HeapTuple newtup, OffsetNumber root_offnum,
+				HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
 static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
 							 Bitmapset *interesting_cols,
@@ -2247,13 +2248,13 @@ heap_get_latest_tid(Relation relation,
 		 */
 		if ((tp.t_data->t_infomask & HEAP_XMAX_INVALID) ||
 			HeapTupleHeaderIsOnlyLocked(tp.t_data) ||
-			ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid))
+			HeapTupleHeaderIsHeapLatest(tp.t_data, &ctid))
 		{
 			UnlockReleaseBuffer(buffer);
 			break;
 		}
 
-		ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tp.t_data, &ctid);
 		priorXmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		UnlockReleaseBuffer(buffer);
 	}							/* end of loop */
@@ -2373,6 +2374,7 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	bool		all_visible_cleared = false;
+	OffsetNumber	root_offnum;
 
 	/*
 	 * Fill in tuple header fields, assign an OID, and toast the tuple if
@@ -2411,8 +2413,14 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
+	root_offnum = InvalidOffsetNumber;
 	RelationPutHeapTuple(relation, buffer, heaptup,
-						 (options & HEAP_INSERT_SPECULATIVE) != 0);
+						 (options & HEAP_INSERT_SPECULATIVE) != 0,
+						 &root_offnum);
+
+	/* We must not overwrite the speculative insertion token */
+	if ((options & HEAP_INSERT_SPECULATIVE) == 0)
+		HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 	if (PageIsAllVisible(BufferGetPage(buffer)))
 	{
@@ -2640,6 +2648,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 	Size		saveFreeSpace;
 	bool		need_tuple_data = RelationIsLogicallyLogged(relation);
 	bool		need_cids = RelationIsAccessibleInLogicalDecoding(relation);
+	OffsetNumber	root_offnum;
 
 	needwal = !(options & HEAP_INSERT_SKIP_WAL) && RelationNeedsWAL(relation);
 	saveFreeSpace = RelationGetTargetPageFreeSpace(relation,
@@ -2710,7 +2719,13 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		 * RelationGetBufferForTuple has ensured that the first tuple fits.
 		 * Put that on the page, and then as many other tuples as fit.
 		 */
-		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false);
+		root_offnum = InvalidOffsetNumber;
+		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false,
+				&root_offnum);
+
+		/* Mark this tuple as the latest and also set root offset */
+		HeapTupleHeaderSetHeapLatest(heaptuples[ndone]->t_data, root_offnum);
+
 		for (nthispage = 1; ndone + nthispage < ntuples; nthispage++)
 		{
 			HeapTuple	heaptup = heaptuples[ndone + nthispage];
@@ -2718,7 +2733,11 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			if (PageGetHeapFreeSpace(page) < MAXALIGN(heaptup->t_len) + saveFreeSpace)
 				break;
 
-			RelationPutHeapTuple(relation, buffer, heaptup, false);
+			root_offnum = InvalidOffsetNumber;
+			RelationPutHeapTuple(relation, buffer, heaptup, false,
+					&root_offnum);
+			/* Mark each tuple as the latest and also set root offset */
+			HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 			/*
 			 * We don't use heap_multi_insert for catalog tuples yet, but
@@ -2990,6 +3009,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	HeapTupleData tp;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	TransactionId new_xmax;
@@ -3000,6 +3020,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	bool		all_visible_cleared = false;
 	HeapTuple	old_key_tuple = NULL;	/* replica identity of the tuple */
 	bool		old_key_copied = false;
+	OffsetNumber	root_offnum;
 
 	Assert(ItemPointerIsValid(tid));
 
@@ -3041,7 +3062,8 @@ heap_delete(Relation relation, ItemPointer tid,
 		LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 	}
 
-	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid));
+	offnum = ItemPointerGetOffsetNumber(tid);
+	lp = PageGetItemId(page, offnum);
 	Assert(ItemIdIsNormal(lp));
 
 	tp.t_tableOid = RelationGetRelid(relation);
@@ -3171,7 +3193,17 @@ l1:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tp.t_data->t_ctid;
+
+		/*
+		 * If we're at the end of the chain, return the same TID back to the
+		 * caller, who uses it as a hint that the end of the chain has been
+		 * reached.
+		 */
+		if (!HeapTupleHeaderIsHeapLatest(tp.t_data, &tp.t_self))
+			HeapTupleHeaderGetNextTid(tp.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&tp.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tp.t_data);
@@ -3220,6 +3252,23 @@ l1:
 							  xid, LockTupleExclusive, true,
 							  &new_xmax, &new_infomask, &new_infomask2);
 
+	/*
+	 * heap_get_root_tuple_one() may call palloc, which is disallowed once we
+	 * enter the critical section. So check whether the root offset is cached
+	 * in the tuple and, if not, fetch that information the hard way before
+	 * entering the critical section.
+	 *
+	 * Unless we are dealing with a pg_upgraded cluster, the root offset
+	 * information should most often be cached, so there should not be too
+	 * much overhead in fetching this information. Also, once a tuple is
+	 * updated, the information is copied to the new version, so it's not
+	 * as if we're going to pay this price forever.
+	 */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		heap_get_root_tuple_one(page,
+				ItemPointerGetOffsetNumber(&tp.t_self),
+				&root_offnum);
+
 	START_CRIT_SECTION();
 
 	/*
@@ -3247,8 +3296,10 @@ l1:
 	HeapTupleHeaderClearHotUpdated(tp.t_data);
 	HeapTupleHeaderSetXmax(tp.t_data, new_xmax);
 	HeapTupleHeaderSetCmax(tp.t_data, cid, iscombo);
-	/* Make sure there is no forward chain link in t_ctid */
-	tp.t_data->t_ctid = tp.t_self;
+
+	/* Mark this tuple as the latest tuple in the update chain */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		HeapTupleHeaderSetHeapLatest(tp.t_data, root_offnum);
 
 	MarkBufferDirty(buffer);
 
@@ -3449,6 +3500,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		old_key_copied = false;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
+	OffsetNumber	root_offnum;
 	MultiXactStatus mxact_status;
 	Buffer		buffer,
 				newbuf,
@@ -3511,6 +3564,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 
 
 	block = ItemPointerGetBlockNumber(otid);
+	offnum = ItemPointerGetOffsetNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
@@ -3795,7 +3849,12 @@ l2:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(oldtup.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = oldtup.t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(oldtup.t_data, &oldtup.t_self))
+			HeapTupleHeaderGetNextTid(oldtup.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&oldtup.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(oldtup.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(oldtup.t_data);
@@ -3935,6 +3994,7 @@ l2:
 		uint16		infomask_lock_old_tuple,
 					infomask2_lock_old_tuple;
 		bool		cleared_all_frozen = false;
+		OffsetNumber	root_offnum;
 
 		/*
 		 * To prevent concurrent sessions from updating the tuple, we have to
@@ -3962,6 +4022,15 @@ l2:
 
 		Assert(HEAP_XMAX_IS_LOCKED_ONLY(infomask_lock_old_tuple));
 
+		/*
+		 * Fetch the root offset before entering the critical section. We do
+		 * this only if the information is not already available.
+		 */
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			heap_get_root_tuple_one(page,
+					ItemPointerGetOffsetNumber(&oldtup.t_self),
+					&root_offnum);
+
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
@@ -3976,7 +4045,8 @@ l2:
 		HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 		/* temporarily make it look not-updated, but locked */
-		oldtup.t_data->t_ctid = oldtup.t_self;
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			HeapTupleHeaderSetHeapLatest(oldtup.t_data, root_offnum);
 
 		/*
 		 * Clear all-frozen bit on visibility map if needed. We could
@@ -4134,6 +4204,11 @@ l2:
 										   bms_overlap(modified_attrs, id_attrs),
 										   &old_key_copied);
 
+	if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+		heap_get_root_tuple_one(page,
+				ItemPointerGetOffsetNumber(&(oldtup.t_self)),
+				&root_offnum);
+
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
@@ -4159,6 +4234,17 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+		/*
+		 * For HOT (or WARM) updated tuples, we store the offset of the root
+		 * line pointer of this chain in the ip_posid field of the new tuple.
+		 * Usually this information will be available in the corresponding
+		 * field of the old tuple. But for aborted updates or pg_upgraded
+		 * databases, we might be seeing old-style CTID chains, in which case
+		 * the information must be obtained the hard way (we should have done
+		 * that before entering the critical section above).
+		 */
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
 	else
 	{
@@ -4166,10 +4252,21 @@ l2:
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		root_offnum = InvalidOffsetNumber;
 	}
 
-	RelationPutHeapTuple(relation, newbuf, heaptup, false);		/* insert new tuple */
-
+	/* insert new tuple */
+	RelationPutHeapTuple(relation, newbuf, heaptup, false, &root_offnum);
+	/*
+	 * Also mark both copies as latest and set the root offset information. If
+	 * we're doing a HOT/WARM update, we just copy the information from the
+	 * old tuple, either taken from its header or computed above. For regular
+	 * updates, RelationPutHeapTuple has returned the actual offset number
+	 * where the new version was inserted, and we store that value since the
+	 * update starts a new HOT chain.
+	 */
+	HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
+	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
 	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
@@ -4182,7 +4279,7 @@ l2:
 	HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 	/* record address of new tuple in t_ctid of old one */
-	oldtup.t_data->t_ctid = heaptup->t_self;
+	HeapTupleHeaderSetNextTid(oldtup.t_data, &(heaptup->t_self));
 
 	/* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */
 	if (PageIsAllVisible(BufferGetPage(buffer)))
@@ -4221,6 +4318,7 @@ l2:
 
 		recptr = log_heap_update(relation, buffer,
 								 newbuf, &oldtup, heaptup,
+								 root_offnum,
 								 old_key_tuple,
 								 all_visible_cleared,
 								 all_visible_cleared_new);
@@ -4501,7 +4599,8 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	ItemId		lp;
 	Page		page;
 	Buffer		vmbuffer = InvalidBuffer;
-	BlockNumber block;
+	BlockNumber	block;
+	OffsetNumber	offnum;
 	TransactionId xid,
 				xmax;
 	uint16		old_infomask,
@@ -4510,9 +4609,11 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	bool		first_time = true;
 	bool		have_tuple_lock = false;
 	bool		cleared_all_frozen = false;
+	OffsetNumber	root_offnum;
 
 	*buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	block = ItemPointerGetBlockNumber(tid);
+	offnum = ItemPointerGetOffsetNumber(tid);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -4532,6 +4633,7 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	tuple->t_data = (HeapTupleHeader) PageGetItem(page, lp);
 	tuple->t_len = ItemIdGetLength(lp);
 	tuple->t_tableOid = RelationGetRelid(relation);
+	tuple->t_self = *tid;
 
 l3:
 	result = HeapTupleSatisfiesUpdate(tuple, cid, *buffer);
@@ -4559,7 +4661,11 @@ l3:
 		xwait = HeapTupleHeaderGetRawXmax(tuple->t_data);
 		infomask = tuple->t_data->t_infomask;
 		infomask2 = tuple->t_data->t_infomask2;
-		ItemPointerCopy(&tuple->t_data->t_ctid, &t_ctid);
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &t_ctid);
+		else
+			ItemPointerCopy(tid, &t_ctid);
 
 		LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
 
@@ -4997,7 +5103,12 @@ failed:
 		Assert(result == HeapTupleSelfUpdated || result == HeapTupleUpdated ||
 			   result == HeapTupleWouldBlock);
 		Assert(!(tuple->t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tuple->t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(tid, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tuple->t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tuple->t_data);
@@ -5045,6 +5156,11 @@ failed:
 							  GetCurrentTransactionId(), mode, false,
 							  &xid, &new_infomask, &new_infomask2);
 
+	if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+		heap_get_root_tuple_one(page,
+				ItemPointerGetOffsetNumber(&tuple->t_self),
+				&root_offnum);
+
 	START_CRIT_SECTION();
 
 	/*
@@ -5073,7 +5189,10 @@ failed:
 	 * the tuple as well.
 	 */
 	if (HEAP_XMAX_IS_LOCKED_ONLY(new_infomask))
-		tuple->t_data->t_ctid = *tid;
+	{
+		if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+			HeapTupleHeaderSetHeapLatest(tuple->t_data, root_offnum);
+	}
 
 	/* Clear only the all-frozen bit on visibility map if needed */
 	if (PageIsAllVisible(page) &&
@@ -5587,6 +5706,7 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 	bool		cleared_all_frozen = false;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber block;
+	OffsetNumber offnum;
 
 	ItemPointerCopy(tid, &tupid);
 
@@ -5595,6 +5715,8 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 		new_infomask = 0;
 		new_xmax = InvalidTransactionId;
 		block = ItemPointerGetBlockNumber(&tupid);
+		offnum = ItemPointerGetOffsetNumber(&tupid);
+
 		ItemPointerCopy(&tupid, &(mytup.t_self));
 
 		if (!heap_fetch(rel, SnapshotAny, &mytup, &buf, false, NULL))
@@ -5824,7 +5946,7 @@ l4:
 
 		/* if we find the end of update chain, we're done. */
 		if (mytup.t_data->t_infomask & HEAP_XMAX_INVALID ||
-			ItemPointerEquals(&mytup.t_self, &mytup.t_data->t_ctid) ||
+			HeapTupleHeaderIsHeapLatest(mytup.t_data, &mytup.t_self) ||
 			HeapTupleHeaderIsOnlyLocked(mytup.t_data))
 		{
 			result = HeapTupleMayBeUpdated;
@@ -5833,7 +5955,7 @@ l4:
 
 		/* tail recursion */
 		priorXmax = HeapTupleHeaderGetUpdateXid(mytup.t_data);
-		ItemPointerCopy(&(mytup.t_data->t_ctid), &tupid);
+		HeapTupleHeaderGetNextTid(mytup.t_data, &tupid);
 		UnlockReleaseBuffer(buf);
 		if (vmbuffer != InvalidBuffer)
 			ReleaseBuffer(vmbuffer);
@@ -5950,7 +6072,7 @@ heap_finish_speculative(Relation relation, HeapTuple tuple)
 	 * Replace the speculative insertion token with a real t_ctid, pointing to
 	 * itself like it does on regular tuples.
 	 */
-	htup->t_ctid = tuple->t_self;
+	HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 	/* XLOG stuff */
 	if (RelationNeedsWAL(relation))
@@ -6076,8 +6198,7 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	HeapTupleHeaderSetXmin(tp.t_data, InvalidTransactionId);
 
 	/* Clear the speculative insertion token too */
-	tp.t_data->t_ctid = tp.t_self;
-
+	HeapTupleHeaderSetHeapLatest(tp.t_data, ItemPointerGetOffsetNumber(tid));
 	MarkBufferDirty(buffer);
 
 	/*
@@ -7425,6 +7546,7 @@ log_heap_visible(RelFileNode rnode, Buffer heap_buffer, Buffer vm_buffer,
 static XLogRecPtr
 log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup, HeapTuple newtup,
+				OffsetNumber root_offnum,
 				HeapTuple old_key_tuple,
 				bool all_visible_cleared, bool new_all_visible_cleared)
 {
@@ -7545,6 +7667,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	xlrec.new_offnum = ItemPointerGetOffsetNumber(&newtup->t_self);
 	xlrec.new_xmax = HeapTupleHeaderGetRawXmax(newtup->t_data);
 
+	Assert(OffsetNumberIsValid(root_offnum));
+	xlrec.root_offnum = root_offnum;
+
 	bufflags = REGBUF_STANDARD;
 	if (init)
 		bufflags |= REGBUF_WILL_INIT;
@@ -8199,7 +8324,13 @@ heap_xlog_delete(XLogReaderState *record)
 			PageClearAllVisible(page);
 
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = target_tid;
+		if (!HeapTupleHeaderHasRootOffset(htup))
+		{
+			OffsetNumber	root_offnum;
+			heap_get_root_tuple_one(page, xlrec->offnum, &root_offnum); 
+			HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+		}
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8289,7 +8420,8 @@ heap_xlog_insert(XLogReaderState *record)
 		htup->t_hoff = xlhdr.t_hoff;
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
-		htup->t_ctid = target_tid;
+
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->offnum);
 
 		if (PageAddItem(page, (Item) htup, newlen, xlrec->offnum,
 						true, true) == InvalidOffsetNumber)
@@ -8424,8 +8556,8 @@ heap_xlog_multi_insert(XLogReaderState *record)
 			htup->t_hoff = xlhdr->t_hoff;
 			HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 			HeapTupleHeaderSetCmin(htup, FirstCommandId);
-			ItemPointerSetBlockNumber(&htup->t_ctid, blkno);
-			ItemPointerSetOffsetNumber(&htup->t_ctid, offnum);
+
+			HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 			offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 			if (offnum == InvalidOffsetNumber)
@@ -8561,7 +8693,7 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
 		/* Set forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetNextTid(htup, &newtid);
 
 		/* Mark the page as a candidate for pruning */
 		PageSetPrunable(page, XLogRecGetXid(record));
@@ -8694,13 +8826,17 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
-		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = newtid;
 
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
 
+		/*
+		 * Make sure the tuple is marked as the latest and root offset
+		 * information is restored
+		 */
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->root_offnum);
+
 		if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED)
 			PageClearAllVisible(page);
 
@@ -8763,6 +8899,9 @@ heap_xlog_confirm(XLogReaderState *record)
 		 */
 		ItemPointerSet(&htup->t_ctid, BufferGetBlockNumber(buffer), offnum);
 
+		/* For newly inserted tuple, set root offset to itself */
+		HeapTupleHeaderSetHeapLatest(htup, offnum);
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8826,11 +8965,18 @@ heap_xlog_lock(XLogReaderState *record)
 		 */
 		if (HEAP_XMAX_IS_LOCKED_ONLY(htup->t_infomask))
 		{
+			ItemPointerData	target_tid;
+
+			ItemPointerSet(&target_tid, BufferGetBlockNumber(buffer), offnum);
 			HeapTupleHeaderClearHotUpdated(htup);
 			/* Make sure there is no forward chain link in t_ctid */
-			ItemPointerSet(&htup->t_ctid,
-						   BufferGetBlockNumber(buffer),
-						   offnum);
+			if (!HeapTupleHeaderHasRootOffset(htup))
+			{
+				OffsetNumber	root_offnum;
+				heap_get_root_tuple_one(page,
+						offnum, &root_offnum);
+				HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+			}
 		}
 		HeapTupleHeaderSetXmax(htup, xlrec->locking_xid);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
diff --git b/src/backend/access/heap/hio.c a/src/backend/access/heap/hio.c
index 6529fe3..14ed263 100644
--- b/src/backend/access/heap/hio.c
+++ a/src/backend/access/heap/hio.c
@@ -31,12 +31,18 @@
  * !!! EREPORT(ERROR) IS DISALLOWED HERE !!!  Must PANIC on failure!!!
  *
  * Note - caller must hold BUFFER_LOCK_EXCLUSIVE on the buffer.
+ *
+ * The caller can optionally tell us to set the root offset to the given value.
+ * Otherwise, the root offset is set to the offset of the new location once
+ * it's known. The former is used while updating an existing tuple, while the
+ * latter is used during insertion of a new row.
  */
 void
 RelationPutHeapTuple(Relation relation,
 					 Buffer buffer,
 					 HeapTuple tuple,
-					 bool token)
+					 bool token,
+					 OffsetNumber *root_offnum)
 {
 	Page		pageHeader;
 	OffsetNumber offnum;
@@ -60,16 +66,21 @@ RelationPutHeapTuple(Relation relation,
 	ItemPointerSet(&(tuple->t_self), BufferGetBlockNumber(buffer), offnum);
 
 	/*
-	 * Insert the correct position into CTID of the stored tuple, too (unless
-	 * this is a speculative insertion, in which case the token is held in
-	 * CTID field instead)
+	 * Set block number and the root offset into CTID of the stored tuple, too
+	 * (unless this is a speculative insertion, in which case the token is held
+	 * in CTID field instead)
 	 */
 	if (!token)
 	{
 		ItemId		itemId = PageGetItemId(pageHeader, offnum);
 		Item		item = PageGetItem(pageHeader, itemId);
 
+		/* Copy t_ctid to set the correct block number */
 		((HeapTupleHeader) item)->t_ctid = tuple->t_self;
+
+		if (!OffsetNumberIsValid(*root_offnum))
+			*root_offnum = offnum;
+		HeapTupleHeaderSetHeapLatest((HeapTupleHeader) item, *root_offnum);
 	}
 }
 
diff --git b/src/backend/access/heap/pruneheap.c a/src/backend/access/heap/pruneheap.c
index d69a266..2406e77 100644
--- b/src/backend/access/heap/pruneheap.c
+++ a/src/backend/access/heap/pruneheap.c
@@ -55,6 +55,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
 
+static void heap_get_root_tuples_internal(Page page,
+				OffsetNumber target_offnum, OffsetNumber *root_offsets);
 
 /*
  * Optionally prune and repair fragmentation in the specified page.
@@ -553,6 +555,17 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
 		if (!HeapTupleHeaderIsHotUpdated(htup))
 			break;
 
+
+		/*
+		 * If the tuple was HOT-updated and the update was later
+		 * aborted, someone could mark this tuple as the last tuple
+		 * in the chain without clearing the HOT-updated flag. So we must
+		 * check whether this is the last tuple in the chain and stop following
+		 * the CTID, else we risk getting into infinite recursion (though
+		 * prstate->marked[] currently protects against that).
+		 */
+		if (HeapTupleHeaderHasRootOffset(htup))
+			break;
 		/*
 		 * Advance to next chain member.
 		 */
@@ -726,27 +739,48 @@ heap_page_prune_execute(Buffer buffer,
 
 
 /*
- * For all items in this page, find their respective root line pointers.
- * If item k is part of a HOT-chain with root at item j, then we set
- * root_offsets[k - 1] = j.
+ * Either for all items in this page or for the given item, find their
+ * respective root line pointers.
+ *
+ * When target_offnum is a valid offset number, the caller is interested in
+ * just one item. In that case, the root line pointer is returned in
+ * root_offsets.
  *
- * The passed-in root_offsets array must have MaxHeapTuplesPerPage entries.
- * We zero out all unused entries.
+ * When target_offnum is InvalidOffsetNumber, the caller wants to know
+ * the root line pointers of all the items in this page. The root_offsets array
+ * must have MaxHeapTuplesPerPage entries in that case. If item k is part of a
+ * HOT-chain with root at item j, then we set root_offsets[k - 1] = j. We zero
+ * out all unused entries.
  *
  * The function must be called with at least share lock on the buffer, to
  * prevent concurrent prune operations.
  *
+ * This is not a cheap function since it must scan through all line
+ * pointers and tuples on the page in order to find the root line pointers.
+ * To minimize the cost, however, we return early as soon as target_offnum
+ * is specified and its root line pointer has been found, rather than
+ * scanning the rest of the page.
+ *
  * Note: The information collected here is valid only as long as the caller
  * holds a pin on the buffer. Once pin is released, a tuple might be pruned
  * and reused by a completely unrelated tuple.
+ *
+ * Note: This function must not be called inside a critical section because it
+ * internally calls HeapTupleHeaderGetUpdateXid which somewhere down the stack
+ * may try to allocate heap memory. Memory allocation is disallowed in a
+ * critical section.
  */
-void
-heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+static void
+heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offsets)
 {
 	OffsetNumber offnum,
 				maxoff;
 
-	MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
+	if (OffsetNumberIsValid(target_offnum))
+		*root_offsets = InvalidOffsetNumber;
+	else
+		MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
 
 	maxoff = PageGetMaxOffsetNumber(page);
 	for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum = OffsetNumberNext(offnum))
@@ -774,9 +808,28 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 
 			/*
 			 * This is either a plain tuple or the root of a HOT-chain.
-			 * Remember it in the mapping.
+			 *
+			 * If target_offnum is specified and we have found its mapping,
+			 * return immediately.
 			 */
-			root_offsets[offnum - 1] = offnum;
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (target_offnum == offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember the mapping for any other item. The
+				 * root_offsets array may not even have room for them, so be
+				 * careful not to write past the end of the array.
+				 */
+			}
+			else
+			{
+				/* Remember it in the mapping  */
+				root_offsets[offnum - 1] = offnum;
+			}
 
 			/* If it's not the start of a HOT-chain, we're done with it */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
@@ -817,15 +870,64 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 				!TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(htup)))
 				break;
 
-			/* Remember the root line pointer for this item */
-			root_offsets[nextoffnum - 1] = offnum;
+			/*
+			 * If target_offnum is specified and we have found its mapping, return.
+			 */
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (nextoffnum == target_offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember the mapping for any other item. The
+				 * root_offsets array may not even have room for them, so be
+				 * careful not to write past the end of the array.
+				 */
+			}
+			else
+			{
+				/* Remember the root line pointer for this item */
+				root_offsets[nextoffnum - 1] = offnum;
+			}
 
 			/* Advance to next chain member, if any */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				break;
 
+			/*
+			 * If the tuple was HOT-updated and the update was later aborted,
+			 * someone could mark this tuple as the last tuple in the chain
+			 * and store the root offset in CTID without clearing the
+			 * HOT-updated flag. So we must check whether CTID actually holds
+			 * the root offset, and break to avoid infinite recursion.
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
 		}
 	}
 }
+
+/*
+ * Get root line pointer for the given tuple
+ */
+void
+heap_get_root_tuple_one(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offnum)
+{
+	heap_get_root_tuples_internal(page, target_offnum, root_offnum);
+}
+
+/*
+ * Get root line pointers for all tuples in the page
+ */
+void
+heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+{
+	heap_get_root_tuples_internal(page, InvalidOffsetNumber,
+			root_offsets);
+}
diff --git b/src/backend/access/heap/rewriteheap.c a/src/backend/access/heap/rewriteheap.c
index 90ab6f2..5f64ca6 100644
--- b/src/backend/access/heap/rewriteheap.c
+++ a/src/backend/access/heap/rewriteheap.c
@@ -419,14 +419,18 @@ rewrite_heap_tuple(RewriteState state,
 	 */
 	if (!((old_tuple->t_data->t_infomask & HEAP_XMAX_INVALID) ||
 		  HeapTupleHeaderIsOnlyLocked(old_tuple->t_data)) &&
-		!(ItemPointerEquals(&(old_tuple->t_self),
-							&(old_tuple->t_data->t_ctid))))
+		!(HeapTupleHeaderIsHeapLatest(old_tuple->t_data, &old_tuple->t_self)))
 	{
 		OldToNewMapping mapping;
 
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
-		hashkey.tid = old_tuple->t_data->t_ctid;
+
+		/*
+		 * We've already checked that this is not the last tuple in the chain,
+		 * so fetch the next TID in the chain.
+		 */
+		HeapTupleHeaderGetNextTid(old_tuple->t_data, &hashkey.tid);
 
 		mapping = (OldToNewMapping)
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -439,7 +443,7 @@ rewrite_heap_tuple(RewriteState state,
 			 * set the ctid of this tuple to point to the new location, and
 			 * insert it right away.
 			 */
-			new_tuple->t_data->t_ctid = mapping->new_tid;
+			HeapTupleHeaderSetNextTid(new_tuple->t_data, &mapping->new_tid);
 
 			/* We don't need the mapping entry anymore */
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -525,7 +529,7 @@ rewrite_heap_tuple(RewriteState state,
 				new_tuple = unresolved->tuple;
 				free_new = true;
 				old_tid = unresolved->old_tid;
-				new_tuple->t_data->t_ctid = new_tid;
+				HeapTupleHeaderSetNextTid(new_tuple->t_data, &new_tid);
 
 				/*
 				 * We don't need the hash entry anymore, but don't free its
@@ -731,7 +735,12 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		onpage_tup->t_ctid = tup->t_self;
+		/*
+		 * Set t_ctid just to ensure that the block number is copied correctly,
+		 * but then immediately mark the tuple as the latest.
+		 */
+		HeapTupleHeaderSetNextTid(onpage_tup, &tup->t_self);
+		HeapTupleHeaderSetHeapLatest(onpage_tup, newoff);
 	}
 
 	/* If heaptup is a private copy, release it. */
diff --git b/src/backend/executor/execIndexing.c a/src/backend/executor/execIndexing.c
index 8d119f6..9920f48 100644
--- b/src/backend/executor/execIndexing.c
+++ a/src/backend/executor/execIndexing.c
@@ -788,7 +788,8 @@ retry:
 			  DirtySnapshot.speculativeToken &&
 			  TransactionIdPrecedes(GetCurrentTransactionId(), xwait))))
 		{
-			ctid_wait = tup->t_data->t_ctid;
+			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
+				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git b/src/backend/executor/execMain.c a/src/backend/executor/execMain.c
index 0bc146c..c38e290 100644
--- b/src/backend/executor/execMain.c
+++ a/src/backend/executor/execMain.c
@@ -2589,7 +2589,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		 * As above, it should be safe to examine xmax and t_ctid without the
 		 * buffer content lock, because they can't be changing.
 		 */
-		if (ItemPointerEquals(&tuple.t_self, &tuple.t_data->t_ctid))
+		if (HeapTupleHeaderIsHeapLatest(tuple.t_data, &tuple.t_self))
 		{
 			/* deleted, so forget about it */
 			ReleaseBuffer(buffer);
@@ -2597,7 +2597,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		}
 
 		/* updated, so look at the updated row */
-		tuple.t_self = tuple.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tuple.t_data, &tuple.t_self);
 		/* updated row should have xmin matching this xmax */
 		priorXmax = HeapTupleHeaderGetUpdateXid(tuple.t_data);
 		ReleaseBuffer(buffer);
diff --git b/src/include/access/heapam.h a/src/include/access/heapam.h
index ee7e05a..22507dc 100644
--- b/src/include/access/heapam.h
+++ a/src/include/access/heapam.h
@@ -188,6 +188,8 @@ extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
+extern void heap_get_root_tuple_one(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offnum);
 extern void heap_get_root_tuples(Page page, OffsetNumber *root_offsets);
 
 /* in heap/syncscan.c */
diff --git b/src/include/access/heapam_xlog.h a/src/include/access/heapam_xlog.h
index 52f28b8..a4a1fe1 100644
--- b/src/include/access/heapam_xlog.h
+++ a/src/include/access/heapam_xlog.h
@@ -193,6 +193,8 @@ typedef struct xl_heap_update
 	uint8		flags;
 	TransactionId new_xmax;		/* xmax of the new tuple */
 	OffsetNumber new_offnum;	/* new tuple's offset */
+	OffsetNumber root_offnum;	/* offset of the root line pointer in case of
+								   HOT or WARM update */
 
 	/*
 	 * If XLOG_HEAP_CONTAINS_OLD_TUPLE or XLOG_HEAP_CONTAINS_OLD_KEY flags are
@@ -200,7 +202,7 @@ typedef struct xl_heap_update
 	 */
 } xl_heap_update;
 
-#define SizeOfHeapUpdate	(offsetof(xl_heap_update, new_offnum) + sizeof(OffsetNumber))
+#define SizeOfHeapUpdate	(offsetof(xl_heap_update, root_offnum) + sizeof(OffsetNumber))
 
 /*
  * This is what we need to know about vacuum page cleanup/redirect
diff --git b/src/include/access/hio.h a/src/include/access/hio.h
index 2824f23..8752f69 100644
--- b/src/include/access/hio.h
+++ a/src/include/access/hio.h
@@ -36,7 +36,7 @@ typedef struct BulkInsertStateData
 
 
 extern void RelationPutHeapTuple(Relation relation, Buffer buffer,
-					 HeapTuple tuple, bool token);
+					 HeapTuple tuple, bool token, OffsetNumber *root_offnum);
 extern Buffer RelationGetBufferForTuple(Relation relation, Size len,
 						  Buffer otherBuffer, int options,
 						  BulkInsertState bistate,
diff --git b/src/include/access/htup_details.h a/src/include/access/htup_details.h
index a6c7e31..fff1832 100644
--- b/src/include/access/htup_details.h
+++ a/src/include/access/htup_details.h
@@ -260,13 +260,19 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x1800 are available */
+/* bit 0x0800 is available */
+#define HEAP_LATEST_TUPLE		0x1000	/*
+										 * This is the last tuple in chain and
+										 * ip_posid points to the root line
+										 * pointer
+										 */
 #define HEAP_KEYS_UPDATED		0x2000	/* tuple was updated and key cols
 										 * modified, or tuple deleted */
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xE000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+
 
 /*
  * HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins.  It is
@@ -504,6 +510,32 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+#define HeapTupleHeaderSetHeapLatest(tup, offnum) \
+do { \
+	AssertMacro(OffsetNumberIsValid(offnum)); \
+	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE; \
+	ItemPointerSetOffsetNumber(&(tup)->t_ctid, (offnum)); \
+} while (0)
+
+#define HeapTupleHeaderClearHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 &= ~HEAP_LATEST_TUPLE \
+)
+
+/*
+ * HEAP_LATEST_TUPLE is set on the last tuple in the update chain. But for
+ * clusters upgraded from a pre-10.0 release, we also check whether t_ctid
+ * points to the tuple itself and, if so, declare such a tuple the latest
+ * tuple in the chain.
+ */
+#define HeapTupleHeaderIsHeapLatest(tup, tid) \
+( \
+  (((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0) || \
+  ((ItemPointerGetBlockNumber(&(tup)->t_ctid) == ItemPointerGetBlockNumber(tid)) && \
+   (ItemPointerGetOffsetNumber(&(tup)->t_ctid) == ItemPointerGetOffsetNumber(tid))) \
+)
+
+
 #define HeapTupleHeaderSetHeapOnly(tup) \
 ( \
   (tup)->t_infomask2 |= HEAP_ONLY_TUPLE \
@@ -542,6 +574,45 @@ do { \
 
 
 /*
+ * Set the t_ctid chain and also clear the HEAP_LATEST_TUPLE flag since we
+ * probably have a new tuple in the chain
+ */
+#define HeapTupleHeaderSetNextTid(tup, tid) \
+do { \
+		ItemPointerCopy((tid), &((tup)->t_ctid)); \
+		HeapTupleHeaderClearHeapLatest((tup)); \
+} while (0)
+
+/*
+ * Get TID of next tuple in the update chain. Caller should have checked that
+ * we are not already at the end of the chain because in that case t_ctid may
+ * actually store the root line pointer of the HOT chain whose member this
+ * tuple is.
+ */
+#define HeapTupleHeaderGetNextTid(tup, next_ctid) \
+do { \
+	AssertMacro(!((tup)->t_infomask2 & HEAP_LATEST_TUPLE)); \
+	ItemPointerCopy(&(tup)->t_ctid, (next_ctid)); \
+} while (0)
+
+#define HeapTupleHeaderGetRootOffset(tup) \
+( \
+	AssertMacro(((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0), \
+	ItemPointerGetOffsetNumber(&(tup)->t_ctid) \
+)
+
+/*
+ * We use the same HEAP_LATEST_TUPLE flag to check if the tuple's t_ctid field
+ * contains the root line pointer. We can't use the
+ * HeapTupleHeaderIsHeapLatest macro because that also checks for TID-equality
+ * to decide whether a tuple is at the end of the chain.
+ */
+#define HeapTupleHeaderHasRootOffset(tup) \
+( \
+	((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0 \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
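As an aside on the htup_details.h changes above: the dual use of t_ctid (next-TID link normally, root line pointer once HEAP_LATEST_TUPLE is set) can be modeled with a small standalone C sketch. Types and helper names here are simplified stand-ins, not the actual PostgreSQL definitions:

```c
/*
 * Standalone model of the HEAP_LATEST_TUPLE overloading of t_ctid.
 * NOT the real PostgreSQL headers; simplified types for illustration.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define HEAP_LATEST_TUPLE 0x1000

typedef struct
{
    uint32_t ip_blkid;          /* block number (simplified) */
    uint16_t ip_posid;          /* offset number */
} ItemPointerData;

typedef struct
{
    uint16_t        t_infomask2;
    ItemPointerData t_ctid;
} HeapTupleHeaderData;

/* Mark tuple as last in chain; reuse the offset to store the root pointer */
static void
set_heap_latest(HeapTupleHeaderData *tup, uint16_t root_offnum)
{
    tup->t_infomask2 |= HEAP_LATEST_TUPLE;
    tup->t_ctid.ip_posid = root_offnum;
}

/* Point t_ctid at the next tuple; the tuple is no longer last in chain */
static void
set_next_tid(HeapTupleHeaderData *tup, ItemPointerData next)
{
    tup->t_ctid = next;
    tup->t_infomask2 &= ~HEAP_LATEST_TUPLE;
}

/* The root offset is valid only while HEAP_LATEST_TUPLE is set */
static bool
has_root_offset(const HeapTupleHeaderData *tup)
{
    return (tup->t_infomask2 & HEAP_LATEST_TUPLE) != 0;
}

static uint16_t
get_root_offset(const HeapTupleHeaderData *tup)
{
    assert(has_root_offset(tup));
    return tup->t_ctid.ip_posid;
}
```

Setting a next TID clears the flag, which mirrors how HeapTupleHeaderSetNextTid in the patch clears HEAP_LATEST_TUPLE.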
Attachment: 0002_warm_updates_v9.patch (application/octet-stream)
diff --git a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c
index 06077af..4ab30d6 100644
--- a/contrib/bloom/blutils.c
+++ b/contrib/bloom/blutils.c
@@ -138,6 +138,7 @@ blhandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = blendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index d60ddd2..3785045 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -112,6 +112,7 @@ brinhandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = brinendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c
index 597056a..d4634af 100644
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -89,6 +89,7 @@ gisthandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = gistendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c
index a64a9b9..40fede5 100644
--- a/src/backend/access/hash/hash.c
+++ b/src/backend/access/hash/hash.c
@@ -86,6 +86,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = hashendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = hashrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -266,6 +267,8 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 	OffsetNumber offnum;
 	ItemPointer current;
 	bool		res;
+	IndexTuple	itup;
+
 
 	/* Hash indexes are always lossy since we store only the hash code */
 	scan->xs_recheck = true;
@@ -303,8 +306,6 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 			 offnum <= maxoffnum;
 			 offnum = OffsetNumberNext(offnum))
 		{
-			IndexTuple	itup;
-
 			itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 			if (ItemPointerEquals(&(so->hashso_heappos), &(itup->t_tid)))
 				break;
diff --git a/src/backend/access/hash/hashsearch.c b/src/backend/access/hash/hashsearch.c
index a59ad6f..46a334c 100644
--- a/src/backend/access/hash/hashsearch.c
+++ b/src/backend/access/hash/hashsearch.c
@@ -59,6 +59,8 @@ _hash_next(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
 	return true;
 }
 
@@ -408,6 +410,9 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
+
 	return true;
 }
 
diff --git a/src/backend/access/hash/hashutil.c b/src/backend/access/hash/hashutil.c
index c705531..dcba734 100644
--- a/src/backend/access/hash/hashutil.c
+++ b/src/backend/access/hash/hashutil.c
@@ -17,8 +17,12 @@
 #include "access/hash.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
+#include "nodes/execnodes.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 #define CALC_NEW_BUCKET(old_bucket, lowmask) \
 			old_bucket | (lowmask + 1)
@@ -446,3 +450,109 @@ _hash_get_newbucket_from_oldbucket(Relation rel, Bucket old_bucket,
 
 	return new_bucket;
 }
+
+/*
+ * Recheck if the heap tuple satisfies the key stored in the index tuple
+ */
+bool
+hashrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	IndexInfo  *indexInfo;
+	EState	   *estate;
+	ExprContext *econtext;
+	TupleTableSlot *slot;
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	Datum		values2[INDEX_MAX_KEYS];
+	bool		isnull2[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	indexInfo = BuildIndexInfo(indexRel);
+
+	/*
+	 * The heap tuple must be put into a slot for FormIndexDatum.
+	 */
+	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
+
+	ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
+
+	/*
+	 * Typically the index won't have expressions, but if it does we need an
+	 * EState to evaluate them.  We need it for exclusion constraints too,
+	 * even if they are just on simple columns.
+	 */
+	if (indexInfo->ii_Expressions != NIL ||
+			indexInfo->ii_ExclusionOps != NULL)
+	{
+		estate = CreateExecutorState();
+		econtext = GetPerTupleExprContext(estate);
+		econtext->ecxt_scantuple = slot;
+	}
+	else
+		estate = NULL;
+
+	/*
+	 * Form the index values and isnull flags for the index entry that we need
+	 * to check.
+	 *
+	 * Note: if the index uses functions that are not as immutable as they are
+	 * supposed to be, this could produce an index tuple different from the
+	 * original.  The index AM can catch such errors by verifying that it
+	 * finds a matching index entry with the tuple's TID.  For exclusion
+	 * constraints we check this in check_exclusion_constraint().
+	 */
+	FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
+	/*
+	 * HASH indexes compute a hash value of the key and store that in the
+	 * index. So we must first obtain the hash of the value obtained from the
+	 * heap and then do a comparison
+	 */
+	_hash_convert_tuple(indexRel, values, isnull, values2, isnull2);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL then they are equal
+		 */
+		if (isnull2[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If either is NULL then they are not equal
+		 */
+		if (isnull2[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now do a raw memory comparison
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values2[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	if (estate != NULL)
+		FreeExecutorState(estate);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	return equal;
+}
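The comparison loop in hashrecheck above applies a NULL-aware equality rule before falling back to datumIsEqual. As a standalone illustration of that rule (hypothetical helper name, plain memcmp standing in for datumIsEqual):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/*
 * Sketch of the recheck comparison rule: two attribute values are
 * "index equal" if both are NULL, unequal if exactly one is NULL, and
 * otherwise compared byte-wise.
 */
static bool
recheck_attr_equal(bool isnull_a, const void *a,
                   bool isnull_b, const void *b, size_t len)
{
    if (isnull_a && isnull_b)
        return true;                /* both NULL: equal */
    if (isnull_a || isnull_b)
        return false;               /* exactly one NULL: not equal */
    return memcmp(a, b, len) == 0;  /* raw memory comparison */
}
```

For the hash AM the inputs on one side would first pass through _hash_convert_tuple, since the index stores hash values rather than the original keys.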
diff --git a/src/backend/access/heap/README.WARM b/src/backend/access/heap/README.WARM
new file mode 100644
index 0000000..f793570
--- /dev/null
+++ b/src/backend/access/heap/README.WARM
@@ -0,0 +1,271 @@
+src/backend/access/heap/README.WARM
+
+Write Amplification Reduction Method (WARM)
+===========================================
+
+The Heap Only Tuple (HOT) feature greatly reduced redundant index
+entries and allowed re-use of the dead space occupied by previously
+updated or deleted tuples (see src/backend/access/heap/README.HOT).
+
+One of the necessary conditions for satisfying HOT update is that the
+update must not change a column used in any of the indexes on the table.
+The condition is sometimes hard to meet, especially for complex
+workloads with several indexes on large yet frequently updated tables.
+Worse, sometimes only one or two index columns may be updated, but the
+regular non-HOT update will still insert a new index entry in every
+index on the table, irrespective of whether the key pertaining to the
+index changed or not.
+
+WARM is a technique devised to address these problems.
+
+
+Update Chains With Multiple Index Entries Pointing to the Root
+--------------------------------------------------------------
+
+When a non-HOT update is caused by an index key change, a new index
+entry must be inserted for the changed index. But if the index key
+hasn't changed for other indexes, we don't really need to insert a new
+entry. Even though the existing index entry is pointing to the old
+tuple, the new tuple is reachable via the t_ctid chain. To keep things
+simple, a WARM update requires that the heap block have enough space to
+store the new version of the tuple. This is the same requirement as for
+HOT updates.
+
+In WARM, we ensure that every index entry always points to the root of
+the WARM chain. In fact, a WARM chain looks exactly like a HOT chain
+except for the fact that there could be multiple index entries pointing
+to the root of the chain. So when a new entry is inserted in an index
+for the updated tuple during a WARM update, the new entry is made to
+point to the root of the WARM chain.
+
+For example, consider a table with two columns and one index on each
+column. When a tuple is first inserted into the table, we have exactly
+one index entry pointing to the tuple from each index.
+
+	lp [1]
+	[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's entry (aaaa) also points to 1
+
+Now if the tuple's second column is updated and there is room on the
+page, we perform a WARM update. In that case, Index1 does not get any
+new entry and Index2's new entry will still point to the root tuple of
+the chain.
+
+	lp [1]  [2]
+	[1111, aaaa]->[1111, bbbb]
+
+	Index1's entry (1111) points to 1
+	Index2's old entry (aaaa) points to 1
+	Index2's new entry (bbbb) also points to 1
+
+"A update chain which has more than one index entries pointing to its
+root line pointer is called WARM chain and the action that creates a
+WARM chain is called WARM update."
+
+Since all indexes always point to the root of the WARM chain, even when
+there is more than one index entry, WARM chains can be pruned and
+dead tuples can be removed without any need for corresponding index
+cleanup.
+
+While this solves the problem of pruning dead tuples from a HOT/WARM
+chain, it also opens up a new technical challenge because now we have a
+situation where a heap tuple is reachable from multiple index entries,
+each having a different index key. While MVCC still ensures that only
+valid tuples are returned, a tuple with a wrong index key may be
+returned because of wrong index entries. In the above example, tuple
+[1111, bbbb] is reachable from both keys (aaaa) and (bbbb). For this
+reason, tuples returned from a WARM chain must always be rechecked
+for an index key match.
+
+Recheck Index Key Against Heap Tuple
+------------------------------------
+
+Since every Index AM has its own notion of index tuples, each Index AM
+must implement its own method to recheck heap tuples. For example, a
+hash index stores the hash value of the column and hence recheck routine
+for hash AM must first compute the hash value of the heap attribute and
+then compare it against the value stored in the index tuple.
+
+The patch currently implements recheck routines for hash and btree
+indexes. If the table has an index whose AM doesn't provide a recheck
+routine, WARM updates are disabled on that table.
+
+Problem With Duplicate (key, ctid) Index Entries
+------------------------------------------------
+
+The index-key recheck logic works as long as no duplicate index keys
+point to the same WARM chain. Otherwise, the same valid tuple would be
+reachable via multiple index keys, each of which satisfies the index
+key check. In the above example, if the tuple [1111, bbbb] is
+again updated to [1111, aaaa] and if we insert a new index entry (aaaa)
+pointing to the root line pointer, we will end up with the following
+structure:
+
+	lp [1]  [2]  [3]
+	[1111, aaaa]->[1111, bbbb]->[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's oldest entry (aaaa) points to 1
+	Index2's old entry (bbbb) also points to 1
+	Index2's new entry (aaaa) also points to 1
+
+We must solve this problem to ensure that the same tuple is not
+reachable via multiple index pointers. There are a couple of ways to
+address this issue:
+
+1. Do not allow WARM update to a tuple from a WARM chain. This
+guarantees that there can never be duplicate index entries to the same
+root line pointer because we must have checked for old and new index
+keys while doing the first WARM update.
+
+2. Do not allow duplicate (key, ctid) index pointers. In the above
+example, since (aaaa, 1) already exists in the index, we must not insert
+a duplicate index entry.
+
+The patch currently implements option 1, i.e. it does not WARM-update a
+tuple that already belongs to a WARM chain. HOT updates are fine because
+they do not add a new index entry.
+
+Even with this restriction, this is a significant improvement because
+the number of regular (index-inserting) updates is cut down by half.
+
+Expression and Partial Indexes
+------------------------------
+
+Expressions may evaluate to the same value even if the underlying column
+values have changed. A simple example is an index on "lower(col)", which
+will return the same value if the new heap value differs only in case.
+So we cannot rely solely on the heap column check to decide whether or
+not to insert a new index entry for expression indexes. Similarly, for
+partial indexes, the predicate expression must be evaluated to decide
+whether or not to insert a new index entry when columns referred to in
+the predicate expression change.
+
+(None of this is currently implemented; we simply disallow a WARM
+update if any column used in an expression index or index predicate has
+changed.)
+
+
+Efficiently Finding the Root Line Pointer
+-----------------------------------------
+
+During a WARM update, we must be able to find the root line pointer of
+the tuple being updated. Note that the t_ctid field in the heap tuple
+header is usually used to find the next tuple in the update chain. But
+the tuple that we are updating must be the last tuple in the update
+chain, and in that case the t_ctid field usually points to the tuple
+itself. So we can use t_ctid to store additional information in the
+last tuple of the update chain, as long as the fact that the tuple is
+the last one is recorded elsewhere.
+
+We now utilize another bit from t_infomask2 to explicitly identify that
+this is the last tuple in the update chain.
+
+HEAP_LATEST_TUPLE - When this bit is set, the tuple is the last tuple in
+the update chain. The OffsetNumber part of t_ctid points to the root
+line pointer of the chain when HEAP_LATEST_TUPLE flag is set.
+
+If the UPDATE operation aborts, the last tuple in the update chain
+becomes dead, and the root line pointer information stored in the tuple
+which again becomes the last valid tuple in the chain has been lost. In
+such rare cases, the root line pointer must be found the hard way, by
+scanning the entire heap page.
+
+Tracking WARM Chains
+--------------------
+
+The old tuple and every subsequent tuple in the chain are marked with a
+special HEAP_WARM_TUPLE flag. We use the last remaining bit in
+t_infomask2 to store this information.
+
+When a tuple is returned from a WARM chain, the caller must do
+additional checks to ensure that the tuple matches the index key. Even
+if the tuple precedes the WARM update in the chain, it must still be
+rechecked for an index key match (the case where an old tuple is
+reached via the new index key). So we must always follow the update
+chain to the end to check whether this is a WARM chain.
+
+When the old updated tuple is retired and the root line pointer is
+converted into a redirected line pointer, we can copy the information
+about the WARM chain to the redirected line pointer by storing a special
+value in the lp_len field of the line pointer. This will handle the most
+common case where a WARM chain is replaced by a redirect line pointer
+and a single tuple in the chain.
+
+Converting WARM chains back to HOT chains (VACUUM ?)
+----------------------------------------------------
+
+The current implementation of WARM allows only one WARM update per
+chain. This simplifies the design and addresses certain issues around
+duplicate scans. But this also implies that the benefit of WARM will be
+no more than 50%, which is still significant, but if we could return
+WARM chains back to normal status, we could do far more WARM updates.
+
+A distinct property of a WARM chain is that at least one index has more
+than one live index entry pointing to the root of the chain. In other
+words, if we can remove duplicate entry from every index or conclusively
+prove that there are no duplicate index entries for the root line
+pointer, the chain can again be marked as HOT.
+
+Here is one idea:
+
+A WARM chain has two parts, separated by the tuple that caused the WARM
+update. All tuples in each part have matching index keys, but certain
+index keys may not match between the two parts. Let's say we mark heap
+tuples in each part with a special Red-Blue flag. The same flag is
+replicated in the index tuples. For example, when new rows are inserted
+in a table, they are marked with Blue flag and the index entries
+associated with those rows are also marked with Blue flag. When a row is
+WARM updated, the new version is marked with Red flag and the new index
+entry created by the update is also marked with Red flag.
+
+
+Heap chain: [1] [2] [3] [4]
+			[aaaa, 1111]B -> [aaaa, 1111]B -> [bbbb, 1111]R -> [bbbb, 1111]R
+
+Index1: 	(aaaa)B points to 1 (satisfies only tuples marked with B)
+			(bbbb)R points to 1 (satisfies only tuples marked with R)
+
+Index2:		(1111)B points to 1 (satisfies both B and R tuples)
+
+
+It's clear that for indexes with Red and Blue pointers, a heap tuple
+with Blue flag will be reachable from Blue pointer and that with Red
+flag will be reachable from Red pointer. But for indexes which did not
+create a new entry, both Blue and Red tuples will be reachable from Blue
+pointer (there is no Red pointer in such indexes). So, as a side note,
+matching Red and Blue flags is not enough from index scan perspective.
+
+During the first heap scan of VACUUM, we look for tuples with
+HEAP_WARM_TUPLE set.  If all live tuples in the chain are either marked
+with Blue flag or Red flag (but no mix of Red and Blue), then the chain
+is a candidate for HOT conversion.  We remember the root line pointer
+and Red-Blue flag of the WARM chain in a separate array.
+
+If we have a Red WARM chain, then our goal is to remove Blue pointers,
+and vice versa. But there is a catch. For Index2 above, there is only a
+Blue pointer and it must not be removed. IOW we should remove a Blue
+pointer iff a Red pointer exists. Since index vacuum may visit Red and
+Blue pointers in any order, I think we will need another index pass to
+remove dead index pointers. So in the first index pass we check which
+WARM candidates have 2 index pointers. In the second pass, we remove the
+dead pointer and reset the Red flag if the surviving index pointer is Red.
+
+During the second heap scan, we fix the WARM chain by clearing the
+HEAP_WARM_TUPLE flag and also resetting the Red flag to Blue.
+
+There are some more problems around aborted vacuums. For example, if
+vacuum aborts after changing Red index flag to Blue but before removing
+the other Blue pointer, we will end up with two Blue pointers to a Red
+WARM chain. But since the HEAP_WARM_TUPLE flag on the heap tuple is
+still set, further WARM updates to the chain will be blocked. I guess we
+will need some special handling for case with multiple Blue pointers. We
+can either leave these WARM chains alone and let them die with a
+subsequent non-WARM update or must apply heap-recheck logic during index
+vacuum to find the dead pointer. Given that vacuum-aborts are not
+common, I am inclined to leave this case unhandled. We must still check
+for the presence of multiple Blue pointers and ensure that we don't
+accidentally remove either of the Blue pointers, and that we don't clear
+the WARM chain flags either.
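The VACUUM idea above hinges on one check: a WARM chain is a candidate for conversion back to HOT only if its live tuples are all Blue or all Red, never a mix. A minimal standalone sketch of that classification (illustrative names and types, not from the patch):

```c
#include <stdbool.h>
#include <stddef.h>

typedef enum { TUPLE_BLUE, TUPLE_RED } TupleColor;

typedef struct
{
    bool       live;            /* still visible to some snapshot */
    TupleColor color;           /* Red-Blue flag from the README scheme */
} ChainTuple;

/*
 * A WARM chain can be converted back to HOT only if all of its live
 * tuples carry the same color flag; dead tuples don't matter because
 * they will be pruned anyway.
 */
static bool
chain_is_hot_convertible(const ChainTuple *chain, size_t n)
{
    bool       seen_live = false;
    TupleColor first = TUPLE_BLUE;

    for (size_t i = 0; i < n; i++)
    {
        if (!chain[i].live)
            continue;           /* dead tuples don't block conversion */
        if (!seen_live)
        {
            first = chain[i].color;
            seen_live = true;
        }
        else if (chain[i].color != first)
            return false;       /* mixed colors: chain must stay WARM */
    }
    return true;
}
```

In the real design this decision would be made during the first heap scan of VACUUM, with the surviving index pointer cleanup handled in the index passes described above.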
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 8e57bae..015aef1 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -1957,6 +1957,78 @@ heap_fetch(Relation relation,
 }
 
 /*
+ * Check if the HOT chain containing this tid is actually a WARM chain.
+ * Note that even if the WARM update ultimately aborted, we still must do a
+ * recheck because the failing UPDATE may have inserted index entries
+ * which are now stale, but still reference this chain.
+ */
+static bool
+hot_check_warm_chain(Page dp, ItemPointer tid)
+{
+	TransactionId prev_xmax = InvalidTransactionId;
+	OffsetNumber offnum;
+	HeapTupleData heapTuple;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+			break;
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		/*
+		 * The presence of either a WARM tuple or a WARM updated tuple signals
+		 * possible breakage, and the caller must recheck any tuple returned
+		 * from this chain for index key satisfaction.
+		 */
+		if (HeapTupleHeaderIsHeapWarmTuple(heapTuple.t_data))
+			return true;
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
+		/*
+		 * The chain can't continue if the tuple stores the root line pointer
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	/* All OK. No need to recheck */
+	return false;
+}
+
+/*
  *	heap_hot_search_buffer	- search HOT chain for tuple satisfying snapshot
  *
  * On entry, *tid is the TID of a tuple (either a simple tuple, or the root
@@ -1976,11 +2048,14 @@ heap_fetch(Relation relation,
  * Unlike heap_fetch, the caller must already have pin and (at least) share
  * lock on the buffer; it is still pinned/locked at exit.  Also unlike
  * heap_fetch, we do not report any pgstats count; caller may do so if wanted.
+ *
+ * recheck should be set false on entry by caller, will be set true on exit
+ * if a WARM tuple is encountered.
  */
 bool
 heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 					   Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call)
+					   bool *all_dead, bool first_call, bool *recheck)
 {
 	Page		dp = (Page) BufferGetPage(buffer);
 	TransactionId prev_xmax = InvalidTransactionId;
@@ -2034,9 +2109,12 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		ItemPointerSetOffsetNumber(&heapTuple->t_self, offnum);
 
 		/*
-		 * Shouldn't see a HEAP_ONLY tuple at chain start.
+		 * Shouldn't see a HEAP_ONLY tuple at chain start, unless we are
+		 * dealing with a WARM updated tuple in which case deferred triggers
+		 * may request to fetch a WARM tuple from middle of a chain.
 		 */
-		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple))
+		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple) &&
+				!HeapTupleIsHeapWarmTuple(heapTuple))
 			break;
 
 		/*
@@ -2049,6 +2127,16 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 			break;
 
 		/*
+		 * Check if there exists a WARM tuple somewhere down the chain and set
+		 * recheck to TRUE.
+		 *
+		 * XXX This is not very efficient right now, and we should look for
+		 * possible improvements here
+		 */
+		if (recheck && *recheck == false)
+			*recheck = hot_check_warm_chain(dp, &heapTuple->t_self);
+
+		/*
 		 * When first_call is true (and thus, skip is initially false) we'll
 		 * return the first tuple we find.  But on later passes, heapTuple
 		 * will initially be pointing to the tuple we returned last time.
@@ -2097,7 +2185,8 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		 * Check to see if HOT chain continues past this tuple; if so fetch
 		 * the next offnum and loop around.
 		 */
-		if (HeapTupleIsHotUpdated(heapTuple))
+		if (HeapTupleIsHotUpdated(heapTuple) &&
+			!HeapTupleHeaderHasRootOffset(heapTuple->t_data))
 		{
 			Assert(ItemPointerGetBlockNumber(&heapTuple->t_data->t_ctid) ==
 				   ItemPointerGetBlockNumber(tid));
@@ -2121,18 +2210,41 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
  */
 bool
 heap_hot_search(ItemPointer tid, Relation relation, Snapshot snapshot,
-				bool *all_dead)
+				bool *all_dead, bool *recheck, Buffer *cbuffer,
+				HeapTuple heapTuple)
 {
 	bool		result;
 	Buffer		buffer;
-	HeapTupleData heapTuple;
+	ItemPointerData ret_tid = *tid;
 
 	buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	LockBuffer(buffer, BUFFER_LOCK_SHARE);
-	result = heap_hot_search_buffer(tid, relation, buffer, snapshot,
-									&heapTuple, all_dead, true);
-	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-	ReleaseBuffer(buffer);
+	result = heap_hot_search_buffer(&ret_tid, relation, buffer, snapshot,
+									heapTuple, all_dead, true, recheck);
+
+	/*
+	 * If we are returning a potential candidate tuple from this chain and the
+	 * caller has requested the "recheck" hint, keep the buffer locked and
+	 * pinned. The caller must release the lock and pin on the buffer in all
+	 * such cases.
+	 */
+	if (!result || !recheck || !(*recheck))
+	{
+		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+		ReleaseBuffer(buffer);
+	}
+
+	/*
+	 * Set the caller-supplied tid to the actual location of the tuple being
+	 * returned.
+	 */
+	if (result)
+	{
+		*tid = ret_tid;
+		if (cbuffer)
+			*cbuffer = buffer;
+	}
+
 	return result;
 }
 
@@ -3484,13 +3596,15 @@ simple_heap_delete(Relation relation, ItemPointer tid)
 HTSU_Result
 heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode)
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update)
 {
 	HTSU_Result result;
 	TransactionId xid = GetCurrentTransactionId();
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *exprindx_attrs;
 	Bitmapset  *interesting_attrs;
 	Bitmapset  *modified_attrs;
 	ItemId		lp;
@@ -3513,6 +3627,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		have_tuple_lock = false;
 	bool		iscombo;
 	bool		use_hot_update = false;
+	bool		use_warm_update = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
 	bool		all_visible_cleared_new = false;
@@ -3537,6 +3652,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot update tuples during a parallel operation")));
 
+	/* Assume a non-WARM update */
+	if (warm_update)
+		*warm_update = false;
+
 	/*
 	 * Fetch the list of attributes to be checked for various operations.
 	 *
@@ -3558,10 +3677,13 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	exprindx_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_EXPR_PREDICATE);
+
 	interesting_attrs = bms_add_members(NULL, hot_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
-
+	interesting_attrs = bms_add_members(interesting_attrs, exprindx_attrs);
 
 	block = ItemPointerGetBlockNumber(otid);
 	offnum = ItemPointerGetOffsetNumber(otid);
@@ -3613,6 +3735,9 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
 												  &oldtup, newtup);
 
+	if (modified_attrsp)
+		*modified_attrsp = bms_copy(modified_attrs);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3868,6 +3993,7 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(exprindx_attrs);
 		bms_free(modified_attrs);
 		bms_free(interesting_attrs);
 		return result;
@@ -4187,6 +4313,35 @@ l2:
 		 */
 		if (!bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
+		else
+		{
+			/*
+			 * If no WARM updates yet on this chain, let this update be a WARM
+			 * update.
+			 *
+			 * We check for both the warm and warm-updated flags since, if
+			 * the previous WARM update aborted, we may still have added
+			 * another index entry for this HOT chain. In such situations, we
+			 * must not attempt a WARM update until the duplicate (key, CTID)
+			 * index entry issue is sorted out.
+			 *
+			 * XXX Later we'll add more checks to ensure WARM chains can
+			 * further be WARM updated. This is probably good enough for a
+			 * first round of tests of the remaining functionality.
+			 *
+			 * XXX Disable WARM updates on system tables. There is nothing in
+			 * principle that stops us from supporting this. But it would
+			 * require an API change to propagate the changed columns back to
+			 * the caller so that CatalogUpdateIndexes() can avoid adding new
+			 * entries to indexes that are not changed by the update. This
+			 * will be fixed once the basic patch is tested. !!FIXME
+			 */
+			if (relation->rd_supportswarm &&
+				!bms_overlap(modified_attrs, exprindx_attrs) &&
+				!bms_is_subset(hot_attrs, modified_attrs) &&
+				!HeapTupleIsHeapWarmTuple(&oldtup))
+				use_warm_update = true;
+		}
 	}
 	else
 	{
@@ -4234,6 +4389,22 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+
+		/*
+		 * Even if we are doing a HOT update, we must carry forward the WARM
+		 * flag because we may have already inserted another index entry
+		 * pointing to our root, and a third entry may create duplicates.
+		 *
+		 * Note: If we ever have a mechanism to avoid duplicate <key, TID>
+		 * entries in indexes, we could look at relaxing this restriction and
+		 * allow even more WARM updates.
+		 */
+		if (HeapTupleIsHeapWarmTuple(&oldtup))
+		{
+			HeapTupleSetHeapWarmTuple(heaptup);
+			HeapTupleSetHeapWarmTuple(newtup);
+		}
+
 		/*
 		 * For HOT (or WARM) updated tuples, we store the offset of the root
 		 * line pointer of this chain in the ip_posid field of the new tuple.
@@ -4246,12 +4417,36 @@ l2:
 		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
 			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
+	else if (use_warm_update)
+	{
+		/* Mark the old tuple as HOT-updated */
+		/* Mark the old tuple as HOT-updated and as part of a WARM chain */
+		HeapTupleSetHeapWarmTuple(&oldtup);
+		/* And mark the new tuple as heap-only */
+		HeapTupleSetHeapOnly(heaptup);
+		HeapTupleSetHeapWarmTuple(heaptup);
+		/* Mark the caller's copy too, in case different from heaptup */
+		HeapTupleSetHeapOnly(newtup);
+		HeapTupleSetHeapWarmTuple(newtup);
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			heap_get_root_tuple_one(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)),
+					&root_offnum);
+
+		/* Let the caller know we did a WARM update */
+		if (warm_update)
+			*warm_update = true;
+	}
 	else
 	{
 		/* Make sure tuples are correctly marked as not-HOT */
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		HeapTupleClearHeapWarmTuple(heaptup);
+		HeapTupleClearHeapWarmTuple(newtup);
 		root_offnum = InvalidOffsetNumber;
 	}
 
@@ -4360,7 +4555,10 @@ l2:
 	if (have_tuple_lock)
 		UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode);
 
-	pgstat_count_heap_update(relation, use_hot_update);
+	/*
+	 * Count HOT and WARM updates separately
+	 */
+	pgstat_count_heap_update(relation, use_hot_update, use_warm_update);
 
 	/*
 	 * If heaptup is a private copy, release it.  Don't forget to copy t_self
@@ -4500,7 +4698,8 @@ HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
  * via ereport().
  */
 void
-simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
+simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup,
+		bool *warm_update, Bitmapset **modified_attrs)
 {
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
@@ -4509,7 +4708,8 @@ simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
 	result = heap_update(relation, otid, tup,
 						 GetCurrentCommandId(true), InvalidSnapshot,
 						 true /* wait for commit */ ,
-						 &hufd, &lockmode);
+						 &hufd, &lockmode, modified_attrs,
+						 warm_update);
 	switch (result)
 	{
 		case HeapTupleSelfUpdated:
@@ -7562,6 +7762,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	bool		need_tuple_data = RelationIsLogicallyLogged(reln);
 	bool		init;
 	int			bufflags;
+	bool		warm_update = false;
 
 	/* Caller should not call me on a non-WAL-logged relation */
 	Assert(RelationNeedsWAL(reln));
@@ -7573,6 +7774,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	else
 		info = XLOG_HEAP_UPDATE;
 
+	if (HeapTupleIsHeapWarmTuple(newtup))
+		warm_update = true;
+
 	/*
 	 * If the old and new tuple are on the same page, we only need to log the
 	 * parts of the new tuple that were changed.  That saves on the amount of
@@ -7646,6 +7850,8 @@ log_heap_update(Relation reln, Buffer oldbuf,
 				xlrec.flags |= XLH_UPDATE_CONTAINS_OLD_KEY;
 		}
 	}
+	if (warm_update)
+		xlrec.flags |= XLH_UPDATE_WARM_UPDATE;
 
 	/* If new tuple is the single and first tuple on page... */
 	if (ItemPointerGetOffsetNumber(&(newtup->t_self)) == FirstOffsetNumber &&
@@ -8623,16 +8829,22 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 	Size		freespace = 0;
 	XLogRedoAction oldaction;
 	XLogRedoAction newaction;
+	bool		warm_update = false;
 
 	/* initialize to keep the compiler quiet */
 	oldtup.t_data = NULL;
 	oldtup.t_len = 0;
 
+	if (xlrec->flags & XLH_UPDATE_WARM_UPDATE)
+		warm_update = true;
+
 	XLogRecGetBlockTag(record, 0, &rnode, NULL, &newblk);
 	if (XLogRecGetBlockTag(record, 1, NULL, NULL, &oldblk))
 	{
 		/* HOT updates are never done across pages */
 		Assert(!hot_update);
+		/* WARM updates are never done across pages */
+		Assert(!warm_update);
 	}
 	else
 		oldblk = newblk;
@@ -8692,6 +8904,11 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 								   &htup->t_infomask2);
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
+
+		/* Mark the old tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetHeapWarmTuple(htup);
+
 		/* Set forward chain link in t_ctid */
 		HeapTupleHeaderSetNextTid(htup, &newtid);
 
@@ -8827,6 +9044,10 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
 
+		/* Mark the new tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetHeapWarmTuple(htup);
+
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
diff --git a/src/backend/access/index/indexam.c b/src/backend/access/index/indexam.c
index 4822af9..de559fd 100644
--- a/src/backend/access/index/indexam.c
+++ b/src/backend/access/index/indexam.c
@@ -71,10 +71,12 @@
 #include "access/xlog.h"
 #include "catalog/catalog.h"
 #include "catalog/index.h"
+#include "executor/executor.h"
 #include "pgstat.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
+#include "utils/datum.h"
 #include "utils/snapmgr.h"
 #include "utils/tqual.h"
 
@@ -228,6 +230,21 @@ index_beginscan(Relation heapRelation,
 	scan->heapRelation = heapRelation;
 	scan->xs_snapshot = snapshot;
 
+	/*
+	 * If the index supports recheck, make sure the index tuple is saved
+	 * during index scans.
+	 *
+	 * XXX Ideally, we should look at all indexes on the table and check if
+	 * WARM is at all supported on the base table. If WARM is not supported
+	 * then we don't need to do any recheck. RelationGetIndexAttrBitmap() does
+	 * do that and sets rd_supportswarm after looking at all indexes. But we
+	 * don't know if the function was called earlier in the session when we're
+	 * here. We can't call it now because there is a risk of causing a
+	 * deadlock.
+	 */
+	if (indexRelation->rd_amroutine->amrecheck)
+		scan->xs_want_itup = true;
+
 	return scan;
 }
 
@@ -409,7 +426,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
 	/*
 	 * The AM's amgettuple proc finds the next index entry matching the scan
 	 * keys, and puts the TID into scan->xs_ctup.t_self.  It should also set
-	 * scan->xs_recheck and possibly scan->xs_itup, though we pay no attention
+	 * scan->xs_tuple_recheck and possibly scan->xs_itup, though we pay no attention
 	 * to those fields here.
 	 */
 	found = scan->indexRelation->rd_amroutine->amgettuple(scan, direction);
@@ -448,7 +465,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
  * dropped in a future index_getnext_tid, index_fetch_heap or index_endscan
  * call).
  *
- * Note: caller must check scan->xs_recheck, and perform rechecking of the
+ * Note: caller must check scan->xs_tuple_recheck, and perform rechecking of the
  * scan keys if required.  We do not do that here because we don't have
  * enough information to do it efficiently in the general case.
  * ----------------
@@ -475,6 +492,15 @@ index_fetch_heap(IndexScanDesc scan)
 		 */
 		if (prev_buf != scan->xs_cbuf)
 			heap_page_prune_opt(scan->heapRelation, scan->xs_cbuf);
+
+		/*
+		 * If we're not always re-checking, reset recheck for this tuple
+		 */
+		if (!scan->xs_recheck)
+			scan->xs_tuple_recheck = false;
+		else
+			scan->xs_tuple_recheck = true;
+
 	}
 
 	/* Obtain share-lock on the buffer so we can examine visibility */
@@ -484,32 +510,64 @@ index_fetch_heap(IndexScanDesc scan)
 											scan->xs_snapshot,
 											&scan->xs_ctup,
 											&all_dead,
-											!scan->xs_continue_hot);
+											!scan->xs_continue_hot,
+											&scan->xs_tuple_recheck);
 	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
 
 	if (got_heap_tuple)
 	{
+		bool res = true;
+
+		/*
+		 * OK, we got a tuple which satisfies the snapshot, but if it's part
+		 * of a WARM chain, we must do additional checks to ensure that we
+		 * are indeed returning a correct tuple. Note that if the index AM
+		 * does not implement the amrecheck method, then we don't do any
+		 * additional checks since WARM must have been disabled on such tables.
+		 *
+		 * XXX What happens when a new index which does not support amrecheck
+		 * is added to the table? Do we need to handle this case, or are
+		 * CREATE INDEX and CREATE INDEX CONCURRENTLY smart enough to handle
+		 * this issue?
+		 */
+		if (scan->xs_tuple_recheck &&
+				scan->xs_itup &&
+				scan->indexRelation->rd_amroutine->amrecheck)
+		{
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_SHARE);
+			res = scan->indexRelation->rd_amroutine->amrecheck(
+						scan->indexRelation,
+						scan->xs_itup,
+						scan->heapRelation,
+						&scan->xs_ctup);
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+		}
+
 		/*
 		 * Only in a non-MVCC snapshot can more than one member of the HOT
 		 * chain be visible.
 		 */
 		scan->xs_continue_hot = !IsMVCCSnapshot(scan->xs_snapshot);
 		pgstat_count_heap_fetch(scan->indexRelation);
-		return &scan->xs_ctup;
-	}
 
-	/* We've reached the end of the HOT chain. */
-	scan->xs_continue_hot = false;
+		if (res)
+			return &scan->xs_ctup;
+	}
+	else
+	{
+		/* We've reached the end of the HOT chain. */
+		scan->xs_continue_hot = false;
 
-	/*
-	 * If we scanned a whole HOT chain and found only dead tuples, tell index
-	 * AM to kill its entry for that TID (this will take effect in the next
-	 * amgettuple call, in index_getnext_tid).  We do not do this when in
-	 * recovery because it may violate MVCC to do so.  See comments in
-	 * RelationGetIndexScan().
-	 */
-	if (!scan->xactStartedInRecovery)
-		scan->kill_prior_tuple = all_dead;
+		/*
+		 * If we scanned a whole HOT chain and found only dead tuples, tell index
+		 * AM to kill its entry for that TID (this will take effect in the next
+		 * amgettuple call, in index_getnext_tid).  We do not do this when in
+		 * recovery because it may violate MVCC to do so.  See comments in
+		 * RelationGetIndexScan().
+		 */
+		if (!scan->xactStartedInRecovery)
+			scan->kill_prior_tuple = all_dead;
+	}
 
 	return NULL;
 }
diff --git a/src/backend/access/nbtree/nbtinsert.c b/src/backend/access/nbtree/nbtinsert.c
index 883d70d..6efccf7 100644
--- a/src/backend/access/nbtree/nbtinsert.c
+++ b/src/backend/access/nbtree/nbtinsert.c
@@ -19,11 +19,14 @@
 #include "access/nbtree.h"
 #include "access/transam.h"
 #include "access/xloginsert.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
 #include "miscadmin.h"
+#include "nodes/execnodes.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
 #include "utils/tqual.h"
-
+#include "utils/datum.h"
 
 typedef struct
 {
@@ -249,6 +252,9 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 	BTPageOpaque opaque;
 	Buffer		nbuf = InvalidBuffer;
 	bool		found = false;
+	Buffer		buffer;
+	HeapTupleData	heapTuple;
+	bool		recheck = false;
 
 	/* Assume unique until we find a duplicate */
 	*is_unique = true;
@@ -308,6 +314,8 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				curitup = (IndexTuple) PageGetItem(page, curitemid);
 				htid = curitup->t_tid;
 
+				recheck = false;
+
 				/*
 				 * If we are doing a recheck, we expect to find the tuple we
 				 * are rechecking.  It's not a duplicate, but we have to keep
@@ -325,112 +333,153 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				 * have just a single index entry for the entire chain.
 				 */
 				else if (heap_hot_search(&htid, heapRel, &SnapshotDirty,
-										 &all_dead))
+							&all_dead, &recheck, &buffer,
+							&heapTuple))
 				{
 					TransactionId xwait;
+					bool result = true;
 
 					/*
-					 * It is a duplicate. If we are only doing a partial
-					 * check, then don't bother checking if the tuple is being
-					 * updated in another transaction. Just return the fact
-					 * that it is a potential conflict and leave the full
-					 * check till later.
+					 * If the tuple was WARM updated, we may again see our own
+					 * tuple. Since WARM updates don't create new index
+					 * entries, our own tuple is only reachable via the old
+					 * index pointer.
 					 */
-					if (checkUnique == UNIQUE_CHECK_PARTIAL)
+					if (checkUnique == UNIQUE_CHECK_EXISTING &&
+							ItemPointerCompare(&htid, &itup->t_tid) == 0)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						*is_unique = false;
-						return InvalidTransactionId;
+						found = true;
+						result = false;
+						if (recheck)
+							UnlockReleaseBuffer(buffer);
 					}
-
-					/*
-					 * If this tuple is being updated by other transaction
-					 * then we have to wait for its commit/abort.
-					 */
-					xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
-						SnapshotDirty.xmin : SnapshotDirty.xmax;
-
-					if (TransactionIdIsValid(xwait))
+					else if (recheck)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						/* Tell _bt_doinsert to wait... */
-						*speculativeToken = SnapshotDirty.speculativeToken;
-						return xwait;
+						result = btrecheck(rel, curitup, heapRel, &heapTuple);
+						UnlockReleaseBuffer(buffer);
 					}
 
-					/*
-					 * Otherwise we have a definite conflict.  But before
-					 * complaining, look to see if the tuple we want to insert
-					 * is itself now committed dead --- if so, don't complain.
-					 * This is a waste of time in normal scenarios but we must
-					 * do it to support CREATE INDEX CONCURRENTLY.
-					 *
-					 * We must follow HOT-chains here because during
-					 * concurrent index build, we insert the root TID though
-					 * the actual tuple may be somewhere in the HOT-chain.
-					 * While following the chain we might not stop at the
-					 * exact tuple which triggered the insert, but that's OK
-					 * because if we find a live tuple anywhere in this chain,
-					 * we have a unique key conflict.  The other live tuple is
-					 * not part of this chain because it had a different index
-					 * entry.
-					 */
-					htid = itup->t_tid;
-					if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL))
-					{
-						/* Normal case --- it's still live */
-					}
-					else
+					if (result)
 					{
 						/*
-						 * It's been deleted, so no error, and no need to
-						 * continue searching
+						 * It is a duplicate. If we are only doing a partial
+						 * check, then don't bother checking if the tuple is being
+						 * updated in another transaction. Just return the fact
+						 * that it is a potential conflict and leave the full
+						 * check till later.
 						 */
-						break;
-					}
+						if (checkUnique == UNIQUE_CHECK_PARTIAL)
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							*is_unique = false;
+							return InvalidTransactionId;
+						}
 
-					/*
-					 * Check for a conflict-in as we would if we were going to
-					 * write to this page.  We aren't actually going to write,
-					 * but we want a chance to report SSI conflicts that would
-					 * otherwise be masked by this unique constraint
-					 * violation.
-					 */
-					CheckForSerializableConflictIn(rel, NULL, buf);
+						/*
+						 * If this tuple is being updated by other transaction
+						 * then we have to wait for its commit/abort.
+						 */
+						xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
+							SnapshotDirty.xmin : SnapshotDirty.xmax;
+
+						if (TransactionIdIsValid(xwait))
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							/* Tell _bt_doinsert to wait... */
+							*speculativeToken = SnapshotDirty.speculativeToken;
+							return xwait;
+						}
 
-					/*
-					 * This is a definite conflict.  Break the tuple down into
-					 * datums and report the error.  But first, make sure we
-					 * release the buffer locks we're holding ---
-					 * BuildIndexValueDescription could make catalog accesses,
-					 * which in the worst case might touch this same index and
-					 * cause deadlocks.
-					 */
-					if (nbuf != InvalidBuffer)
-						_bt_relbuf(rel, nbuf);
-					_bt_relbuf(rel, buf);
+						/*
+						 * Otherwise we have a definite conflict.  But before
+						 * complaining, look to see if the tuple we want to insert
+						 * is itself now committed dead --- if so, don't complain.
+						 * This is a waste of time in normal scenarios but we must
+						 * do it to support CREATE INDEX CONCURRENTLY.
+						 *
+						 * We must follow HOT-chains here because during
+						 * concurrent index build, we insert the root TID though
+						 * the actual tuple may be somewhere in the HOT-chain.
+						 * While following the chain we might not stop at the
+						 * exact tuple which triggered the insert, but that's OK
+						 * because if we find a live tuple anywhere in this chain,
+						 * we have a unique key conflict.  The other live tuple is
+						 * not part of this chain because it had a different index
+						 * entry.
+						 */
+						recheck = false;
+						ItemPointerCopy(&itup->t_tid, &htid);
+						if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL,
+									&recheck, &buffer, &heapTuple))
+						{
+							bool result = true;
+							if (recheck)
+							{
+								/*
+								 * Recheck whether the tuple actually satisfies
+								 * the index key. Otherwise, we might be
+								 * following a wrong index pointer and mustn't
+								 * entertain this tuple.
+								 */
+								result = btrecheck(rel, itup, heapRel, &heapTuple);
+								UnlockReleaseBuffer(buffer);
+							}
+							if (!result)
+								break;
+							/* Normal case --- it's still live */
+						}
+						else
+						{
+							/*
+							 * It's been deleted, so no error, and no need to
+							 * continue searching
+							 */
+							break;
+						}
 
-					{
-						Datum		values[INDEX_MAX_KEYS];
-						bool		isnull[INDEX_MAX_KEYS];
-						char	   *key_desc;
-
-						index_deform_tuple(itup, RelationGetDescr(rel),
-										   values, isnull);
-
-						key_desc = BuildIndexValueDescription(rel, values,
-															  isnull);
-
-						ereport(ERROR,
-								(errcode(ERRCODE_UNIQUE_VIOLATION),
-								 errmsg("duplicate key value violates unique constraint \"%s\"",
-										RelationGetRelationName(rel)),
-							   key_desc ? errdetail("Key %s already exists.",
-													key_desc) : 0,
-								 errtableconstraint(heapRel,
-											 RelationGetRelationName(rel))));
+						/*
+						 * Check for a conflict-in as we would if we were going to
+						 * write to this page.  We aren't actually going to write,
+						 * but we want a chance to report SSI conflicts that would
+						 * otherwise be masked by this unique constraint
+						 * violation.
+						 */
+						CheckForSerializableConflictIn(rel, NULL, buf);
+
+						/*
+						 * This is a definite conflict.  Break the tuple down into
+						 * datums and report the error.  But first, make sure we
+						 * release the buffer locks we're holding ---
+						 * BuildIndexValueDescription could make catalog accesses,
+						 * which in the worst case might touch this same index and
+						 * cause deadlocks.
+						 */
+						if (nbuf != InvalidBuffer)
+							_bt_relbuf(rel, nbuf);
+						_bt_relbuf(rel, buf);
+
+						{
+							Datum		values[INDEX_MAX_KEYS];
+							bool		isnull[INDEX_MAX_KEYS];
+							char	   *key_desc;
+
+							index_deform_tuple(itup, RelationGetDescr(rel),
+									values, isnull);
+
+							key_desc = BuildIndexValueDescription(rel, values,
+									isnull);
+
+							ereport(ERROR,
+									(errcode(ERRCODE_UNIQUE_VIOLATION),
+									 errmsg("duplicate key value violates unique constraint \"%s\"",
+										 RelationGetRelationName(rel)),
+									 key_desc ? errdetail("Key %s already exists.",
+										 key_desc) : 0,
+									 errtableconstraint(heapRel,
+										 RelationGetRelationName(rel))));
+						}
 					}
 				}
 				else if (all_dead)
diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c
index 1bb1acf..cb5a796 100644
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -23,6 +23,7 @@
 #include "access/xlog.h"
 #include "catalog/index.h"
 #include "commands/vacuum.h"
+#include "executor/nodeIndexscan.h"
 #include "storage/indexfsm.h"
 #include "storage/ipc.h"
 #include "storage/lmgr.h"
@@ -118,6 +119,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = btendscan;
 	amroutine->ammarkpos = btmarkpos;
 	amroutine->amrestrpos = btrestrpos;
+	amroutine->amrecheck = btrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -298,8 +300,9 @@ btgettuple(IndexScanDesc scan, ScanDirection dir)
 	BTScanOpaque so = (BTScanOpaque) scan->opaque;
 	bool		res;
 
-	/* btree indexes are never lossy */
+	/* btree indexes are never lossy, except for WARM tuples */
 	scan->xs_recheck = false;
+	scan->xs_tuple_recheck = false;
 
 	/*
 	 * If we have any array keys, initialize them during first call for a
diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c
index da0f330..9becaeb 100644
--- a/src/backend/access/nbtree/nbtutils.c
+++ b/src/backend/access/nbtree/nbtutils.c
@@ -20,11 +20,15 @@
 #include "access/nbtree.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
 #include "miscadmin.h"
+#include "nodes/execnodes.h"
 #include "utils/array.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 
 typedef struct BTSortArrayContext
@@ -2065,3 +2069,103 @@ btproperty(Oid index_oid, int attno,
 			return false;		/* punt to generic code */
 	}
 }
+
+/*
+ * Check if the index tuple's key matches the one computed from the given heap
+ * tuple's attributes.
+ */
+bool
+btrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	IndexInfo  *indexInfo;
+	EState	   *estate;
+	ExprContext *econtext;
+	TupleTableSlot *slot;
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int			natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	/* Get IndexInfo for this index */
+	indexInfo = BuildIndexInfo(indexRel);
+
+	/*
+	 * The heap tuple must be put into a slot for FormIndexDatum.
+	 */
+	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
+
+	ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
+
+	/*
+	 * Typically the index won't have expressions, but if it does we need an
+	 * EState to evaluate them.  We need it for exclusion constraints too,
+	 * even if they are just on simple columns.
+	 */
+	if (indexInfo->ii_Expressions != NIL ||
+			indexInfo->ii_ExclusionOps != NULL)
+	{
+		estate = CreateExecutorState();
+		econtext = GetPerTupleExprContext(estate);
+		econtext->ecxt_scantuple = slot;
+	}
+	else
+		estate = NULL;
+
+	/*
+	 * Form the index values and isnull flags for the index entry that we need
+	 * to check.
+	 *
+	 * Note: if the index uses functions that are not as immutable as they are
+	 * supposed to be, this could produce an index tuple different from the
+	 * original.  The index AM can catch such errors by verifying that it
+	 * finds a matching index entry with the tuple's TID.  For exclusion
+	 * constraints we check this in check_exclusion_constraint().
+	 */
+	FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL, then they are equal
+		 */
+		if (isnull[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If just one is NULL, then they are not equal
+		 */
+		if (isnull[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now just do a raw memory comparison. If the index tuple was formed
+		 * using this heap tuple, the computed index values must match.
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	if (estate != NULL)
+		FreeExecutorState(estate);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	return equal;
+}
diff --git a/src/backend/access/spgist/spgutils.c b/src/backend/access/spgist/spgutils.c
index ca4b0bd..b0d2952 100644
--- a/src/backend/access/spgist/spgutils.c
+++ b/src/backend/access/spgist/spgutils.c
@@ -68,6 +68,7 @@ spghandler(PG_FUNCTION_ARGS)
 	amroutine->amendscan = spgendscan;
 	amroutine->ammarkpos = NULL;
 	amroutine->amrestrpos = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/catalog/aclchk.c b/src/backend/catalog/aclchk.c
index a96bf69..eed5b0b 100644
--- a/src/backend/catalog/aclchk.c
+++ b/src/backend/catalog/aclchk.c
@@ -1230,6 +1230,9 @@ SetDefaultACL(InternalDefaultACL *iacls)
 	}
 	else
 	{
+		bool warm_update;
+		Bitmapset *modified_attrs;
+
 		/* Prepare to insert or update pg_default_acl entry */
 		MemSet(values, 0, sizeof(values));
 		MemSet(nulls, false, sizeof(nulls));
@@ -1245,6 +1248,8 @@ SetDefaultACL(InternalDefaultACL *iacls)
 
 			newtuple = heap_form_tuple(RelationGetDescr(rel), values, nulls);
 			simple_heap_insert(rel, newtuple);
+			warm_update = false;
+			modified_attrs = NULL;
 		}
 		else
 		{
@@ -1254,11 +1259,12 @@ SetDefaultACL(InternalDefaultACL *iacls)
 
 			newtuple = heap_modify_tuple(tuple, RelationGetDescr(rel),
 										 values, nulls, replaces);
-			simple_heap_update(rel, &newtuple->t_self, newtuple);
+			simple_heap_update(rel, &newtuple->t_self, newtuple, &warm_update,
+					&modified_attrs);
 		}
 
 		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(rel, newtuple);
+		CatalogUpdateIndexes(rel, newtuple, warm_update, modified_attrs);
 
 		/* these dependencies don't change in an update */
 		if (isNew)
@@ -1686,13 +1692,17 @@ ExecGrant_Attribute(InternalGrant *istmt, Oid relOid, const char *relname,
 
 	if (need_update)
 	{
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
+
 		newtuple = heap_modify_tuple(attr_tuple, RelationGetDescr(attRelation),
 									 values, nulls, replaces);
 
-		simple_heap_update(attRelation, &newtuple->t_self, newtuple);
+		simple_heap_update(attRelation, &newtuple->t_self, newtuple,
+				&warm_update, &modified_attrs);
 
 		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(attRelation, newtuple);
+		CatalogUpdateIndexes(attRelation, newtuple, warm_update, modified_attrs);
 
 		/* Update initial privileges for extensions */
 		recordExtensionInitPriv(relOid, RelationRelationId, attnum,
@@ -1899,6 +1909,8 @@ ExecGrant_Relation(InternalGrant *istmt)
 			int			nnewmembers;
 			Oid		   *newmembers;
 			AclObjectKind aclkind;
+			bool		warm_update;
+			Bitmapset	*modified_attrs;
 
 			/* Determine ID to do the grant as, and available grant options */
 			select_best_grantor(GetUserId(), this_privileges,
@@ -1955,10 +1967,12 @@ ExecGrant_Relation(InternalGrant *istmt)
 			newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation),
 										 values, nulls, replaces);
 
-			simple_heap_update(relation, &newtuple->t_self, newtuple);
+			simple_heap_update(relation, &newtuple->t_self, newtuple,
+					&warm_update, &modified_attrs);
 
 			/* keep the catalog indexes up to date */
-			CatalogUpdateIndexes(relation, newtuple);
+			CatalogUpdateIndexes(relation, newtuple, warm_update,
+					modified_attrs);
 
 			/* Update initial privileges for extensions */
 			recordExtensionInitPriv(relOid, RelationRelationId, 0, new_acl);
@@ -2079,6 +2093,8 @@ ExecGrant_Database(InternalGrant *istmt)
 		Oid		   *oldmembers;
 		Oid		   *newmembers;
 		HeapTuple	tuple;
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 
 		tuple = SearchSysCache1(DATABASEOID, ObjectIdGetDatum(datId));
 		if (!HeapTupleIsValid(tuple))
@@ -2148,10 +2164,11 @@ ExecGrant_Database(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values,
 									 nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
+		simple_heap_update(relation, &newtuple->t_self, newtuple, &warm_update,
+				&modified_attrs);
 
 		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateIndexes(relation, newtuple, warm_update, modified_attrs);
 
 		/* Update the shared dependency ACL info */
 		updateAclDependencies(DatabaseRelationId, HeapTupleGetOid(tuple), 0,
@@ -2202,6 +2219,8 @@ ExecGrant_Fdw(InternalGrant *istmt)
 		int			nnewmembers;
 		Oid		   *oldmembers;
 		Oid		   *newmembers;
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 
 		tuple = SearchSysCache1(FOREIGNDATAWRAPPEROID,
 								ObjectIdGetDatum(fdwid));
@@ -2273,10 +2292,11 @@ ExecGrant_Fdw(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values,
 									 nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
+		simple_heap_update(relation, &newtuple->t_self, newtuple, &warm_update,
+				&modified_attrs);
 
 		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateIndexes(relation, newtuple, warm_update, modified_attrs);
 
 		/* Update initial privileges for extensions */
 		recordExtensionInitPriv(fdwid, ForeignDataWrapperRelationId, 0,
@@ -2332,6 +2352,8 @@ ExecGrant_ForeignServer(InternalGrant *istmt)
 		int			nnewmembers;
 		Oid		   *oldmembers;
 		Oid		   *newmembers;
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 
 		tuple = SearchSysCache1(FOREIGNSERVEROID, ObjectIdGetDatum(srvid));
 		if (!HeapTupleIsValid(tuple))
@@ -2402,10 +2424,11 @@ ExecGrant_ForeignServer(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values,
 									 nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
+		simple_heap_update(relation, &newtuple->t_self, newtuple, &warm_update,
+				&modified_attrs);
 
 		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateIndexes(relation, newtuple, warm_update, modified_attrs);
 
 		/* Update initial privileges for extensions */
 		recordExtensionInitPriv(srvid, ForeignServerRelationId, 0, new_acl);
@@ -2460,6 +2483,8 @@ ExecGrant_Function(InternalGrant *istmt)
 		int			nnewmembers;
 		Oid		   *oldmembers;
 		Oid		   *newmembers;
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 
 		tuple = SearchSysCache1(PROCOID, ObjectIdGetDatum(funcId));
 		if (!HeapTupleIsValid(tuple))
@@ -2529,10 +2554,11 @@ ExecGrant_Function(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values,
 									 nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
+		simple_heap_update(relation, &newtuple->t_self, newtuple, &warm_update,
+				&modified_attrs);
 
 		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateIndexes(relation, newtuple, warm_update, modified_attrs);
 
 		/* Update initial privileges for extensions */
 		recordExtensionInitPriv(funcId, ProcedureRelationId, 0, new_acl);
@@ -2586,6 +2612,8 @@ ExecGrant_Language(InternalGrant *istmt)
 		int			nnewmembers;
 		Oid		   *oldmembers;
 		Oid		   *newmembers;
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 
 		tuple = SearchSysCache1(LANGOID, ObjectIdGetDatum(langId));
 		if (!HeapTupleIsValid(tuple))
@@ -2663,10 +2691,11 @@ ExecGrant_Language(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values,
 									 nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
+		simple_heap_update(relation, &newtuple->t_self, newtuple, &warm_update,
+				&modified_attrs);
 
 		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateIndexes(relation, newtuple, warm_update, modified_attrs);
 
 		/* Update initial privileges for extensions */
 		recordExtensionInitPriv(langId, LanguageRelationId, 0, new_acl);
@@ -2724,6 +2753,8 @@ ExecGrant_Largeobject(InternalGrant *istmt)
 		ScanKeyData entry[1];
 		SysScanDesc scan;
 		HeapTuple	tuple;
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 
 		/* There's no syscache for pg_largeobject_metadata */
 		ScanKeyInit(&entry[0],
@@ -2805,10 +2836,11 @@ ExecGrant_Largeobject(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation),
 									 values, nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
+		simple_heap_update(relation, &newtuple->t_self, newtuple, &warm_update,
+				&modified_attrs);
 
 		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateIndexes(relation, newtuple, warm_update, modified_attrs);
 
 		/* Update initial privileges for extensions */
 		recordExtensionInitPriv(loid, LargeObjectRelationId, 0, new_acl);
@@ -2863,6 +2895,8 @@ ExecGrant_Namespace(InternalGrant *istmt)
 		int			nnewmembers;
 		Oid		   *oldmembers;
 		Oid		   *newmembers;
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 
 		tuple = SearchSysCache1(NAMESPACEOID, ObjectIdGetDatum(nspid));
 		if (!HeapTupleIsValid(tuple))
@@ -2933,10 +2967,11 @@ ExecGrant_Namespace(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values,
 									 nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
+		simple_heap_update(relation, &newtuple->t_self, newtuple, &warm_update,
+				&modified_attrs);
 
 		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateIndexes(relation, newtuple, warm_update, modified_attrs);
 
 		/* Update initial privileges for extensions */
 		recordExtensionInitPriv(nspid, NamespaceRelationId, 0, new_acl);
@@ -2990,6 +3025,8 @@ ExecGrant_Tablespace(InternalGrant *istmt)
 		Oid		   *oldmembers;
 		Oid		   *newmembers;
 		HeapTuple	tuple;
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 
 		/* Search syscache for pg_tablespace */
 		tuple = SearchSysCache1(TABLESPACEOID, ObjectIdGetDatum(tblId));
@@ -3060,10 +3098,11 @@ ExecGrant_Tablespace(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values,
 									 nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
+		simple_heap_update(relation, &newtuple->t_self, newtuple, &warm_update,
+				&modified_attrs);
 
 		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateIndexes(relation, newtuple, warm_update, modified_attrs);
 
 		/* Update the shared dependency ACL info */
 		updateAclDependencies(TableSpaceRelationId, tblId, 0,
@@ -3113,6 +3152,8 @@ ExecGrant_Type(InternalGrant *istmt)
 		Oid		   *oldmembers;
 		Oid		   *newmembers;
 		HeapTuple	tuple;
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 
 		/* Search syscache for pg_type */
 		tuple = SearchSysCache1(TYPEOID, ObjectIdGetDatum(typId));
@@ -3197,10 +3238,11 @@ ExecGrant_Type(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values,
 									 nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
+		simple_heap_update(relation, &newtuple->t_self, newtuple, &warm_update,
+				&modified_attrs);
 
 		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateIndexes(relation, newtuple, warm_update, modified_attrs);
 
 		/* Update initial privileges for extensions */
 		recordExtensionInitPriv(typId, TypeRelationId, 0, new_acl);
@@ -5354,6 +5396,9 @@ recordExtensionInitPriv(Oid objoid, Oid classoid, int objsubid, Acl *new_acl)
 		/* If we have a new ACL to set, then update the row with it. */
 		if (new_acl)
 		{
+			bool	warm_update;
+			Bitmapset	*modified_attrs;
+
 			MemSet(values, 0, sizeof(values));
 			MemSet(nulls, false, sizeof(nulls));
 			MemSet(replace, false, sizeof(replace));
@@ -5364,10 +5409,12 @@ recordExtensionInitPriv(Oid objoid, Oid classoid, int objsubid, Acl *new_acl)
 			oldtuple = heap_modify_tuple(oldtuple, RelationGetDescr(relation),
 										 values, nulls, replace);
 
-			simple_heap_update(relation, &oldtuple->t_self, oldtuple);
+			simple_heap_update(relation, &oldtuple->t_self, oldtuple,
+					&warm_update, &modified_attrs);
 
 			/* keep the catalog indexes up to date */
-			CatalogUpdateIndexes(relation, oldtuple);
+			CatalogUpdateIndexes(relation, oldtuple, warm_update,
+					modified_attrs);
 		}
 		else
 			/* new_acl is NULL, so delete the entry we found. */
@@ -5396,7 +5443,7 @@ recordExtensionInitPriv(Oid objoid, Oid classoid, int objsubid, Acl *new_acl)
 		simple_heap_insert(relation, tuple);
 
 		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, tuple);
+		CatalogUpdateIndexes(relation, tuple, false, NULL);
 	}
 
 	systable_endscan(scan);
diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c
index bfc54a8..8a8cdee 100644
--- a/src/backend/catalog/heap.c
+++ b/src/backend/catalog/heap.c
@@ -644,9 +644,9 @@ InsertPgAttributeTuple(Relation pg_attribute_rel,
 	simple_heap_insert(pg_attribute_rel, tup);
 
 	if (indstate != NULL)
-		CatalogIndexInsert(indstate, tup);
+		CatalogIndexInsert(indstate, tup, false, NULL);
 	else
-		CatalogUpdateIndexes(pg_attribute_rel, tup);
+		CatalogUpdateIndexes(pg_attribute_rel, tup, false, NULL);
 
 	heap_freetuple(tup);
 }
@@ -837,7 +837,7 @@ InsertPgClassTuple(Relation pg_class_desc,
 	/* finally insert the new tuple, update the indexes, and clean up */
 	simple_heap_insert(pg_class_desc, tup);
 
-	CatalogUpdateIndexes(pg_class_desc, tup);
+	CatalogUpdateIndexes(pg_class_desc, tup, false, NULL);
 
 	heap_freetuple(tup);
 }
@@ -1581,6 +1581,9 @@ RemoveAttributeById(Oid relid, AttrNumber attnum)
 	}
 	else
 	{
+		bool	warm_update;
+		Bitmapset	*modified_attrs;
+
 		/* Dropping user attributes is lots harder */
 
 		/* Mark the attribute as dropped */
@@ -1610,10 +1613,11 @@ RemoveAttributeById(Oid relid, AttrNumber attnum)
 				 "........pg.dropped.%d........", attnum);
 		namestrcpy(&(attStruct->attname), newattname);
 
-		simple_heap_update(attr_rel, &tuple->t_self, tuple);
+		simple_heap_update(attr_rel, &tuple->t_self, tuple, &warm_update,
+				&modified_attrs);
 
 		/* keep the system catalog indexes current */
-		CatalogUpdateIndexes(attr_rel, tuple);
+		CatalogUpdateIndexes(attr_rel, tuple, warm_update, modified_attrs);
 	}
 
 	/*
@@ -1701,6 +1705,8 @@ RemoveAttrDefaultById(Oid attrdefId)
 	HeapTuple	tuple;
 	Oid			myrelid;
 	AttrNumber	myattnum;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/* Grab an appropriate lock on the pg_attrdef relation */
 	attrdef_rel = heap_open(AttrDefaultRelationId, RowExclusiveLock);
@@ -1742,10 +1748,11 @@ RemoveAttrDefaultById(Oid attrdefId)
 
 	((Form_pg_attribute) GETSTRUCT(tuple))->atthasdef = false;
 
-	simple_heap_update(attr_rel, &tuple->t_self, tuple);
+	simple_heap_update(attr_rel, &tuple->t_self, tuple, &warm_update,
+			&modified_attrs);
 
 	/* keep the system catalog indexes current */
-	CatalogUpdateIndexes(attr_rel, tuple);
+	CatalogUpdateIndexes(attr_rel, tuple, warm_update, modified_attrs);
 
 	/*
 	 * Our update of the pg_attribute row will force a relcache rebuild, so
@@ -1945,7 +1952,7 @@ StoreAttrDefault(Relation rel, AttrNumber attnum,
 	tuple = heap_form_tuple(adrel->rd_att, values, nulls);
 	attrdefOid = simple_heap_insert(adrel, tuple);
 
-	CatalogUpdateIndexes(adrel, tuple);
+	CatalogUpdateIndexes(adrel, tuple, false, NULL);
 
 	defobject.classId = AttrDefaultRelationId;
 	defobject.objectId = attrdefOid;
@@ -1974,10 +1981,14 @@ StoreAttrDefault(Relation rel, AttrNumber attnum,
 	attStruct = (Form_pg_attribute) GETSTRUCT(atttup);
 	if (!attStruct->atthasdef)
 	{
+		bool	warm_update;
+		Bitmapset	*modified_attrs;
+
 		attStruct->atthasdef = true;
-		simple_heap_update(attrrel, &atttup->t_self, atttup);
+		simple_heap_update(attrrel, &atttup->t_self, atttup, &warm_update,
+				&modified_attrs);
 		/* keep catalog indexes current */
-		CatalogUpdateIndexes(attrrel, atttup);
+		CatalogUpdateIndexes(attrrel, atttup, warm_update, modified_attrs);
 	}
 	heap_close(attrrel, RowExclusiveLock);
 	heap_freetuple(atttup);
@@ -2479,6 +2490,9 @@ MergeWithExistingConstraint(Relation rel, char *ccname, Node *expr,
 
 		if (con->conrelid == RelationGetRelid(rel))
 		{
+			bool	warm_update;
+			Bitmapset	*modified_attrs;
+
 			/* Found it.  Conflicts if not identical check constraint */
 			if (con->contype == CONSTRAINT_CHECK)
 			{
@@ -2572,8 +2586,9 @@ MergeWithExistingConstraint(Relation rel, char *ccname, Node *expr,
 				Assert(is_local);
 				con->connoinherit = true;
 			}
-			simple_heap_update(conDesc, &tup->t_self, tup);
-			CatalogUpdateIndexes(conDesc, tup);
+			simple_heap_update(conDesc, &tup->t_self, tup, &warm_update,
+					&modified_attrs);
+			CatalogUpdateIndexes(conDesc, tup, warm_update, modified_attrs);
 			break;
 		}
 	}
@@ -2611,12 +2626,16 @@ SetRelationNumChecks(Relation rel, int numchecks)
 
 	if (relStruct->relchecks != numchecks)
 	{
+		bool	warm_update;
+		Bitmapset	*modified_attrs;
+
 		relStruct->relchecks = numchecks;
 
-		simple_heap_update(relrel, &reltup->t_self, reltup);
+		simple_heap_update(relrel, &reltup->t_self, reltup, &warm_update,
+				&modified_attrs);
 
 		/* keep catalog indexes current */
-		CatalogUpdateIndexes(relrel, reltup);
+		CatalogUpdateIndexes(relrel, reltup, warm_update, modified_attrs);
 	}
 	else
 	{
@@ -3159,7 +3178,7 @@ StorePartitionKey(Relation rel,
 	simple_heap_insert(pg_partitioned_table, tuple);
 
 	/* Update the indexes on pg_partitioned_table */
-	CatalogUpdateIndexes(pg_partitioned_table, tuple);
+	CatalogUpdateIndexes(pg_partitioned_table, tuple, false, NULL);
 	heap_close(pg_partitioned_table, RowExclusiveLock);
 
 	/* Mark this relation as dependent on a few things as follows */
@@ -3243,6 +3262,8 @@ StorePartitionBound(Relation rel, Relation parent, Node *bound)
 	Datum	new_val[Natts_pg_class];
 	bool	new_null[Natts_pg_class],
 			new_repl[Natts_pg_class];
+	bool	warm_update;
+	Bitmapset	*modified_attrs;
 
 	/* Update pg_class tuple */
 	classRel = heap_open(RelationRelationId, RowExclusiveLock);
@@ -3276,8 +3297,9 @@ StorePartitionBound(Relation rel, Relation parent, Node *bound)
 								 new_val, new_null, new_repl);
 	/* Also set the flag */
 	((Form_pg_class) GETSTRUCT(newtuple))->relispartition = true;
-	simple_heap_update(classRel, &newtuple->t_self, newtuple);
-	CatalogUpdateIndexes(classRel, newtuple);
+	simple_heap_update(classRel, &newtuple->t_self, newtuple, &warm_update,
+			&modified_attrs);
+	CatalogUpdateIndexes(classRel, newtuple, warm_update, modified_attrs);
 	heap_freetuple(newtuple);
 	heap_close(classRel, RowExclusiveLock);
 
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 26cbc0e..04ea34f 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -54,6 +54,7 @@
 #include "nodes/makefuncs.h"
 #include "nodes/nodeFuncs.h"
 #include "optimizer/clauses.h"
+#include "optimizer/var.h"
 #include "parser/parser.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
@@ -652,7 +653,7 @@ UpdateIndexRelation(Oid indexoid,
 	simple_heap_insert(pg_index, tuple);
 
 	/* update the indexes on pg_index */
-	CatalogUpdateIndexes(pg_index, tuple);
+	CatalogUpdateIndexes(pg_index, tuple, false, NULL);
 
 	/*
 	 * close the relation and free the tuple
@@ -1324,8 +1325,13 @@ index_constraint_create(Relation heapRelation,
 
 		if (dirty)
 		{
-			simple_heap_update(pg_index, &indexTuple->t_self, indexTuple);
-			CatalogUpdateIndexes(pg_index, indexTuple);
+			bool	warm_update;
+			Bitmapset	*modified_attrs;
+
+			simple_heap_update(pg_index, &indexTuple->t_self, indexTuple,
+					&warm_update, &modified_attrs);
+			CatalogUpdateIndexes(pg_index, indexTuple, warm_update,
+					modified_attrs);
 
 			InvokeObjectPostAlterHookArg(IndexRelationId, indexRelationId, 0,
 										 InvalidOid, is_internal);
@@ -1691,6 +1697,20 @@ BuildIndexInfo(Relation index)
 	ii->ii_Concurrent = false;
 	ii->ii_BrokenHotChain = false;
 
+	/* build a bitmap of all table attributes referenced by this index */
+	for (i = 0; i < ii->ii_NumIndexAttrs; i++)
+	{
+		AttrNumber attr = ii->ii_KeyAttrNumbers[i];
+		ii->ii_indxattrs = bms_add_member(ii->ii_indxattrs, attr -
+				FirstLowInvalidHeapAttributeNumber);
+	}
+
+	/* Collect all attributes used in expressions, too */
+	pull_varattnos((Node *) ii->ii_Expressions, 1, &ii->ii_indxattrs);
+
+	/* Collect all attributes in the index predicate, too */
+	pull_varattnos((Node *) ii->ii_Predicate, 1, &ii->ii_indxattrs);
+
 	return ii;
 }
 
@@ -2090,6 +2110,8 @@ index_build(Relation heapRelation,
 		Relation	pg_index;
 		HeapTuple	indexTuple;
 		Form_pg_index indexForm;
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 
 		pg_index = heap_open(IndexRelationId, RowExclusiveLock);
 
@@ -2103,8 +2125,9 @@ index_build(Relation heapRelation,
 		Assert(!indexForm->indcheckxmin);
 
 		indexForm->indcheckxmin = true;
-		simple_heap_update(pg_index, &indexTuple->t_self, indexTuple);
-		CatalogUpdateIndexes(pg_index, indexTuple);
+		simple_heap_update(pg_index, &indexTuple->t_self, indexTuple,
+				&warm_update, &modified_attrs);
+		CatalogUpdateIndexes(pg_index, indexTuple, warm_update, modified_attrs);
 
 		heap_freetuple(indexTuple);
 		heap_close(pg_index, RowExclusiveLock);
@@ -3441,6 +3464,9 @@ reindex_index(Oid indexId, bool skip_constraint_checks, char persistence,
 			(indexForm->indcheckxmin && !indexInfo->ii_BrokenHotChain) ||
 			early_pruning_enabled)
 		{
+			bool	warm_update;
+			Bitmapset	*modified_attrs;
+
 			if (!indexInfo->ii_BrokenHotChain && !early_pruning_enabled)
 				indexForm->indcheckxmin = false;
 			else if (index_bad || early_pruning_enabled)
@@ -3448,8 +3474,10 @@ reindex_index(Oid indexId, bool skip_constraint_checks, char persistence,
 			indexForm->indisvalid = true;
 			indexForm->indisready = true;
 			indexForm->indislive = true;
-			simple_heap_update(pg_index, &indexTuple->t_self, indexTuple);
-			CatalogUpdateIndexes(pg_index, indexTuple);
+			simple_heap_update(pg_index, &indexTuple->t_self, indexTuple,
+					&warm_update, &modified_attrs);
+			CatalogUpdateIndexes(pg_index, indexTuple, warm_update,
+					modified_attrs);
 
 			/*
 			 * Invalidate the relcache for the table, so that after we commit
diff --git a/src/backend/catalog/indexing.c b/src/backend/catalog/indexing.c
index 1915ca3..5046fd1 100644
--- a/src/backend/catalog/indexing.c
+++ b/src/backend/catalog/indexing.c
@@ -67,9 +67,13 @@ CatalogCloseIndexes(CatalogIndexState indstate)
  * This should be called for each inserted or updated catalog tuple.
  *
  * This is effectively a cut-down version of ExecInsertIndexTuples.
+ *
+ * See the comments at CatalogUpdateIndexes for details about warm_update
+ * and modified_attrs.
  */
 void
-CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
+CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple,
+		bool warm_update, Bitmapset *modified_attrs)
 {
 	int			i;
 	int			numIndexes;
@@ -79,12 +83,27 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 	IndexInfo **indexInfoArray;
 	Datum		values[INDEX_MAX_KEYS];
 	bool		isnull[INDEX_MAX_KEYS];
+	ItemPointerData	root_tid;
 
-	/* HOT update does not require index inserts */
-	if (HeapTupleIsHeapOnly(heapTuple))
+	/*
+	 * A HOT update does not require index inserts, but a WARM update may
+	 * still need them for some indexes.
+	 */
+	if (HeapTupleIsHeapOnly(heapTuple) && !warm_update)
 		return;
 
 	/*
+	 * If we've done a WARM update, then we must index the TID of the root line
+	 * pointer and not the actual TID of the new tuple.
+	 */
+	if (warm_update)
+		ItemPointerSet(&root_tid,
+				ItemPointerGetBlockNumber(&(heapTuple->t_self)),
+				HeapTupleHeaderGetRootOffset(heapTuple->t_data));
+	else
+		ItemPointerCopy(&heapTuple->t_self, &root_tid);
+
+	/*
 	 * Get information from the state structure.  Fall out if nothing to do.
 	 */
 	numIndexes = indstate->ri_NumIndices;
@@ -112,6 +131,17 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 			continue;
 
 		/*
+		 * If we've done a WARM update, then we must not insert a new index
+		 * tuple if none of the index keys have changed. This is not just an
+		 * optimization, but a requirement for WARM to work correctly.
+		 */
+		if (warm_update)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
+		/*
 		 * Expressional and partial indexes on system catalogs are not
 		 * supported, nor exclusion constraints, nor deferred uniqueness
 		 */
@@ -136,7 +166,7 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 		index_insert(relationDescs[i],	/* index relation */
 					 values,	/* array of index Datums */
 					 isnull,	/* is-null flags */
-					 &(heapTuple->t_self),		/* tid of heap tuple */
+					 &root_tid,
 					 heapRelation,
 					 relationDescs[i]->rd_index->indisunique ?
 					 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO);
@@ -152,13 +182,21 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
  * to insert or update a single tuple in a system catalog.  Avoid using it for
  * multiple tuples, since opening the indexes and building the index info
  * structures is moderately expensive.
+ *
+ * warm_update is passed as true if the indexes are being updated as a result
+ * of an update operation on the underlying system table and that update was a
+ * WARM update.
+ *
+ * modified_attrs contains a bitmap of attributes modified by the update
+ * operation.
  */
 void
-CatalogUpdateIndexes(Relation heapRel, HeapTuple heapTuple)
+CatalogUpdateIndexes(Relation heapRel, HeapTuple heapTuple,
+		bool warm_update, Bitmapset *modified_attrs)
 {
 	CatalogIndexState indstate;
 
 	indstate = CatalogOpenIndexes(heapRel);
-	CatalogIndexInsert(indstate, heapTuple);
+	CatalogIndexInsert(indstate, heapTuple, warm_update, modified_attrs);
 	CatalogCloseIndexes(indstate);
 }
diff --git a/src/backend/catalog/pg_aggregate.c b/src/backend/catalog/pg_aggregate.c
index 3a4e22f..4642614 100644
--- a/src/backend/catalog/pg_aggregate.c
+++ b/src/backend/catalog/pg_aggregate.c
@@ -676,7 +676,7 @@ AggregateCreate(const char *aggName,
 	tup = heap_form_tuple(tupDesc, values, nulls);
 	simple_heap_insert(aggdesc, tup);
 
-	CatalogUpdateIndexes(aggdesc, tup);
+	CatalogUpdateIndexes(aggdesc, tup, false, NULL);
 
 	heap_close(aggdesc, RowExclusiveLock);
 
diff --git a/src/backend/catalog/pg_collation.c b/src/backend/catalog/pg_collation.c
index 694c0f6..d143a4a 100644
--- a/src/backend/catalog/pg_collation.c
+++ b/src/backend/catalog/pg_collation.c
@@ -138,7 +138,7 @@ CollationCreate(const char *collname, Oid collnamespace,
 	Assert(OidIsValid(oid));
 
 	/* update the index if any */
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateIndexes(rel, tup, false, NULL);
 
 	/* set up dependencies for the new collation */
 	myself.classId = CollationRelationId;
diff --git a/src/backend/catalog/pg_constraint.c b/src/backend/catalog/pg_constraint.c
index b5a0ce9..6757d9c 100644
--- a/src/backend/catalog/pg_constraint.c
+++ b/src/backend/catalog/pg_constraint.c
@@ -229,7 +229,7 @@ CreateConstraintEntry(const char *constraintName,
 	conOid = simple_heap_insert(conDesc, tup);
 
 	/* update catalog indexes */
-	CatalogUpdateIndexes(conDesc, tup);
+	CatalogUpdateIndexes(conDesc, tup, false, NULL);
 
 	conobject.classId = ConstraintRelationId;
 	conobject.objectId = conOid;
@@ -570,6 +570,8 @@ RemoveConstraintById(Oid conId)
 			Relation	pgrel;
 			HeapTuple	relTup;
 			Form_pg_class classForm;
+			bool		warm_update;
+			Bitmapset	*modified_attrs;
 
 			pgrel = heap_open(RelationRelationId, RowExclusiveLock);
 			relTup = SearchSysCacheCopy1(RELOID,
@@ -584,9 +586,10 @@ RemoveConstraintById(Oid conId)
 					 RelationGetRelationName(rel));
 			classForm->relchecks--;
 
-			simple_heap_update(pgrel, &relTup->t_self, relTup);
+			simple_heap_update(pgrel, &relTup->t_self, relTup, &warm_update,
+					&modified_attrs);
 
-			CatalogUpdateIndexes(pgrel, relTup);
+			CatalogUpdateIndexes(pgrel, relTup, warm_update, modified_attrs);
 
 			heap_freetuple(relTup);
 
@@ -632,6 +635,8 @@ RenameConstraintById(Oid conId, const char *newname)
 	Relation	conDesc;
 	HeapTuple	tuple;
 	Form_pg_constraint con;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	conDesc = heap_open(ConstraintRelationId, RowExclusiveLock);
 
@@ -666,10 +671,11 @@ RenameConstraintById(Oid conId, const char *newname)
 	/* OK, do the rename --- tuple is a copy, so OK to scribble on it */
 	namestrcpy(&(con->conname), newname);
 
-	simple_heap_update(conDesc, &tuple->t_self, tuple);
+	simple_heap_update(conDesc, &tuple->t_self, tuple, &warm_update,
+			&modified_attrs);
 
 	/* update the system catalog indexes */
-	CatalogUpdateIndexes(conDesc, tuple);
+	CatalogUpdateIndexes(conDesc, tuple, warm_update, modified_attrs);
 
 	InvokeObjectPostAlterHook(ConstraintRelationId, conId, 0);
 
@@ -731,13 +737,17 @@ AlterConstraintNamespaces(Oid ownerId, Oid oldNspId,
 		/* Don't update if the object is already part of the namespace */
 		if (conform->connamespace == oldNspId && oldNspId != newNspId)
 		{
+			bool	warm_update;
+			Bitmapset	*modified_attrs;
+
 			tup = heap_copytuple(tup);
 			conform = (Form_pg_constraint) GETSTRUCT(tup);
 
 			conform->connamespace = newNspId;
 
-			simple_heap_update(conRel, &tup->t_self, tup);
-			CatalogUpdateIndexes(conRel, tup);
+			simple_heap_update(conRel, &tup->t_self, tup, &warm_update,
+					&modified_attrs);
+			CatalogUpdateIndexes(conRel, tup, warm_update, modified_attrs);
 
 			/*
 			 * Note: currently, the constraint will not have its own
diff --git a/src/backend/catalog/pg_conversion.c b/src/backend/catalog/pg_conversion.c
index adaf7b8..99da1f3 100644
--- a/src/backend/catalog/pg_conversion.c
+++ b/src/backend/catalog/pg_conversion.c
@@ -108,7 +108,7 @@ ConversionCreate(const char *conname, Oid connamespace,
 	simple_heap_insert(rel, tup);
 
 	/* update the index if any */
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateIndexes(rel, tup, false, NULL);
 
 	myself.classId = ConversionRelationId;
 	myself.objectId = HeapTupleGetOid(tup);
diff --git a/src/backend/catalog/pg_db_role_setting.c b/src/backend/catalog/pg_db_role_setting.c
index 117cc8d..f592419 100644
--- a/src/backend/catalog/pg_db_role_setting.c
+++ b/src/backend/catalog/pg_db_role_setting.c
@@ -78,6 +78,8 @@ AlterSetting(Oid databaseid, Oid roleid, VariableSetStmt *setstmt)
 				bool		repl_null[Natts_pg_db_role_setting];
 				bool		repl_repl[Natts_pg_db_role_setting];
 				HeapTuple	newtuple;
+				bool		warm_update;
+				Bitmapset	*modified_attrs;
 
 				memset(repl_repl, false, sizeof(repl_repl));
 
@@ -88,10 +90,11 @@ AlterSetting(Oid databaseid, Oid roleid, VariableSetStmt *setstmt)
 
 				newtuple = heap_modify_tuple(tuple, RelationGetDescr(rel),
 											 repl_val, repl_null, repl_repl);
-				simple_heap_update(rel, &tuple->t_self, newtuple);
+				simple_heap_update(rel, &tuple->t_self, newtuple, &warm_update,
+						&modified_attrs);
 
 				/* Update indexes */
-				CatalogUpdateIndexes(rel, newtuple);
+				CatalogUpdateIndexes(rel, newtuple, warm_update, modified_attrs);
 			}
 			else
 				simple_heap_delete(rel, &tuple->t_self);
@@ -106,6 +109,8 @@ AlterSetting(Oid databaseid, Oid roleid, VariableSetStmt *setstmt)
 		Datum		datum;
 		bool		isnull;
 		ArrayType  *a;
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 
 		memset(repl_repl, false, sizeof(repl_repl));
 		repl_repl[Anum_pg_db_role_setting_setconfig - 1] = true;
@@ -129,10 +134,11 @@ AlterSetting(Oid databaseid, Oid roleid, VariableSetStmt *setstmt)
 
 			newtuple = heap_modify_tuple(tuple, RelationGetDescr(rel),
 										 repl_val, repl_null, repl_repl);
-			simple_heap_update(rel, &tuple->t_self, newtuple);
+			simple_heap_update(rel, &tuple->t_self, newtuple, &warm_update,
+					&modified_attrs);
 
 			/* Update indexes */
-			CatalogUpdateIndexes(rel, newtuple);
+			CatalogUpdateIndexes(rel, newtuple, warm_update, modified_attrs);
 		}
 		else
 			simple_heap_delete(rel, &tuple->t_self);
@@ -158,7 +164,7 @@ AlterSetting(Oid databaseid, Oid roleid, VariableSetStmt *setstmt)
 		simple_heap_insert(rel, newtuple);
 
 		/* Update indexes */
-		CatalogUpdateIndexes(rel, newtuple);
+		CatalogUpdateIndexes(rel, newtuple, false, NULL);
 	}
 
 	InvokeObjectPostAlterHookArg(DbRoleSettingRelationId,
diff --git a/src/backend/catalog/pg_depend.c b/src/backend/catalog/pg_depend.c
index b71fa1b..cae00ad 100644
--- a/src/backend/catalog/pg_depend.c
+++ b/src/backend/catalog/pg_depend.c
@@ -113,7 +113,7 @@ recordMultipleDependencies(const ObjectAddress *depender,
 			if (indstate == NULL)
 				indstate = CatalogOpenIndexes(dependDesc);
 
-			CatalogIndexInsert(indstate, tup);
+			CatalogIndexInsert(indstate, tup, false, NULL);
 
 			heap_freetuple(tup);
 		}
@@ -356,14 +356,18 @@ changeDependencyFor(Oid classId, Oid objectId,
 				simple_heap_delete(depRel, &tup->t_self);
 			else
 			{
+				bool		warm_update;
+				Bitmapset	*modified_attrs;
+
 				/* make a modifiable copy */
 				tup = heap_copytuple(tup);
 				depform = (Form_pg_depend) GETSTRUCT(tup);
 
 				depform->refobjid = newRefObjectId;
 
-				simple_heap_update(depRel, &tup->t_self, tup);
-				CatalogUpdateIndexes(depRel, tup);
+				simple_heap_update(depRel, &tup->t_self, tup, &warm_update,
+						&modified_attrs);
+				CatalogUpdateIndexes(depRel, tup, warm_update, modified_attrs);
 
 				heap_freetuple(tup);
 			}
diff --git a/src/backend/catalog/pg_enum.c b/src/backend/catalog/pg_enum.c
index 089a9a0..4bf90a4 100644
--- a/src/backend/catalog/pg_enum.c
+++ b/src/backend/catalog/pg_enum.c
@@ -126,7 +126,7 @@ EnumValuesCreate(Oid enumTypeOid, List *vals)
 		HeapTupleSetOid(tup, oids[elemno]);
 
 		simple_heap_insert(pg_enum, tup);
-		CatalogUpdateIndexes(pg_enum, tup);
+		CatalogUpdateIndexes(pg_enum, tup, false, NULL);
 		heap_freetuple(tup);
 
 		elemno++;
@@ -459,7 +459,7 @@ restart:
 	enum_tup = heap_form_tuple(RelationGetDescr(pg_enum), values, nulls);
 	HeapTupleSetOid(enum_tup, newOid);
 	simple_heap_insert(pg_enum, enum_tup);
-	CatalogUpdateIndexes(pg_enum, enum_tup);
+	CatalogUpdateIndexes(pg_enum, enum_tup, false, NULL);
 	heap_freetuple(enum_tup);
 
 	heap_close(pg_enum, RowExclusiveLock);
@@ -483,6 +483,8 @@ RenameEnumLabel(Oid enumTypeOid,
 	HeapTuple	old_tup;
 	bool		found_new;
 	int			i;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/* check length of new label is ok */
 	if (strlen(newVal) > (NAMEDATALEN - 1))
@@ -543,8 +545,9 @@ RenameEnumLabel(Oid enumTypeOid,
 
 	/* Update the pg_enum entry */
 	namestrcpy(&en->enumlabel, newVal);
-	simple_heap_update(pg_enum, &enum_tup->t_self, enum_tup);
-	CatalogUpdateIndexes(pg_enum, enum_tup);
+	simple_heap_update(pg_enum, &enum_tup->t_self, enum_tup, &warm_update,
+			&modified_attrs);
+	CatalogUpdateIndexes(pg_enum, enum_tup, warm_update, modified_attrs);
 	heap_freetuple(enum_tup);
 
 	heap_close(pg_enum, RowExclusiveLock);
@@ -588,6 +591,8 @@ RenumberEnumType(Relation pg_enum, HeapTuple *existing, int nelems)
 		HeapTuple	newtup;
 		Form_pg_enum en;
 		float4		newsortorder;
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 
 		newtup = heap_copytuple(existing[i]);
 		en = (Form_pg_enum) GETSTRUCT(newtup);
@@ -597,9 +602,10 @@ RenumberEnumType(Relation pg_enum, HeapTuple *existing, int nelems)
 		{
 			en->enumsortorder = newsortorder;
 
-			simple_heap_update(pg_enum, &newtup->t_self, newtup);
+			simple_heap_update(pg_enum, &newtup->t_self, newtup, &warm_update,
+					&modified_attrs);
 
-			CatalogUpdateIndexes(pg_enum, newtup);
+			CatalogUpdateIndexes(pg_enum, newtup, warm_update, modified_attrs);
 		}
 
 		heap_freetuple(newtup);
diff --git a/src/backend/catalog/pg_largeobject.c b/src/backend/catalog/pg_largeobject.c
index 24edf6a..7ece246 100644
--- a/src/backend/catalog/pg_largeobject.c
+++ b/src/backend/catalog/pg_largeobject.c
@@ -66,7 +66,7 @@ LargeObjectCreate(Oid loid)
 	loid_new = simple_heap_insert(pg_lo_meta, ntup);
 	Assert(!OidIsValid(loid) || loid == loid_new);
 
-	CatalogUpdateIndexes(pg_lo_meta, ntup);
+	CatalogUpdateIndexes(pg_lo_meta, ntup, false, NULL);
 
 	heap_freetuple(ntup);
 
diff --git a/src/backend/catalog/pg_namespace.c b/src/backend/catalog/pg_namespace.c
index f048ad4..6b31e7e 100644
--- a/src/backend/catalog/pg_namespace.c
+++ b/src/backend/catalog/pg_namespace.c
@@ -79,7 +79,7 @@ NamespaceCreate(const char *nspName, Oid ownerId, bool isTemp)
 	nspoid = simple_heap_insert(nspdesc, tup);
 	Assert(OidIsValid(nspoid));
 
-	CatalogUpdateIndexes(nspdesc, tup);
+	CatalogUpdateIndexes(nspdesc, tup, false, NULL);
 
 	heap_close(nspdesc, RowExclusiveLock);
 
diff --git a/src/backend/catalog/pg_operator.c b/src/backend/catalog/pg_operator.c
index 556f9fe..77bbd97 100644
--- a/src/backend/catalog/pg_operator.c
+++ b/src/backend/catalog/pg_operator.c
@@ -264,7 +264,7 @@ OperatorShellMake(const char *operatorName,
 	 */
 	operatorObjectId = simple_heap_insert(pg_operator_desc, tup);
 
-	CatalogUpdateIndexes(pg_operator_desc, tup);
+	CatalogUpdateIndexes(pg_operator_desc, tup, false, NULL);
 
 	/* Add dependencies for the entry */
 	makeOperatorDependencies(tup, false);
@@ -350,6 +350,8 @@ OperatorCreate(const char *operatorName,
 	NameData	oname;
 	int			i;
 	ObjectAddress address;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/*
 	 * Sanity checks
@@ -526,7 +528,8 @@ OperatorCreate(const char *operatorName,
 								nulls,
 								replaces);
 
-		simple_heap_update(pg_operator_desc, &tup->t_self, tup);
+		simple_heap_update(pg_operator_desc, &tup->t_self, tup, &warm_update,
+				&modified_attrs);
 	}
 	else
 	{
@@ -536,10 +539,12 @@ OperatorCreate(const char *operatorName,
 							  values, nulls);
 
 		operatorObjectId = simple_heap_insert(pg_operator_desc, tup);
+		warm_update = false;
+		modified_attrs = NULL;
 	}
 
 	/* Must update the indexes in either case */
-	CatalogUpdateIndexes(pg_operator_desc, tup);
+	CatalogUpdateIndexes(pg_operator_desc, tup, warm_update, modified_attrs);
 
 	/* Add dependencies for the entry */
 	address = makeOperatorDependencies(tup, isUpdate);
@@ -695,8 +700,13 @@ OperatorUpd(Oid baseId, Oid commId, Oid negId, bool isDelete)
 		/* If any columns were found to need modification, update tuple. */
 		if (update_commutator)
 		{
-			simple_heap_update(pg_operator_desc, &tup->t_self, tup);
-			CatalogUpdateIndexes(pg_operator_desc, tup);
+			bool		warm_update;
+			Bitmapset	*modified_attrs;
+
+			simple_heap_update(pg_operator_desc, &tup->t_self, tup,
+					&warm_update, &modified_attrs);
+			CatalogUpdateIndexes(pg_operator_desc, tup, warm_update,
+					modified_attrs);
 
 			/*
 			 * Do CCI to make the updated tuple visible.  We must do this in
@@ -741,8 +750,13 @@ OperatorUpd(Oid baseId, Oid commId, Oid negId, bool isDelete)
 		/* If any columns were found to need modification, update tuple. */
 		if (update_negator)
 		{
-			simple_heap_update(pg_operator_desc, &tup->t_self, tup);
-			CatalogUpdateIndexes(pg_operator_desc, tup);
+			bool		warm_update;
+			Bitmapset	*modified_attrs;
+
+			simple_heap_update(pg_operator_desc, &tup->t_self, tup,
+					&warm_update, &modified_attrs);
+			CatalogUpdateIndexes(pg_operator_desc, tup, warm_update,
+					modified_attrs);
 
 			/*
 			 * In the deletion case, do CCI to make the updated tuple visible.
diff --git a/src/backend/catalog/pg_proc.c b/src/backend/catalog/pg_proc.c
index 7ae192a..0f7027a 100644
--- a/src/backend/catalog/pg_proc.c
+++ b/src/backend/catalog/pg_proc.c
@@ -118,6 +118,8 @@ ProcedureCreate(const char *procedureName,
 				referenced;
 	int			i;
 	Oid			trfid;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/*
 	 * sanity checks
@@ -573,7 +575,8 @@ ProcedureCreate(const char *procedureName,
 
 		/* Okay, do it... */
 		tup = heap_modify_tuple(oldtup, tupDesc, values, nulls, replaces);
-		simple_heap_update(rel, &tup->t_self, tup);
+		simple_heap_update(rel, &tup->t_self, tup, &warm_update,
+				&modified_attrs);
 
 		ReleaseSysCache(oldtup);
 		is_update = true;
@@ -593,10 +596,12 @@ ProcedureCreate(const char *procedureName,
 		tup = heap_form_tuple(tupDesc, values, nulls);
 		simple_heap_insert(rel, tup);
 		is_update = false;
+		warm_update = false;
+		modified_attrs = NULL;
 	}
 
 	/* Need to update indexes for either the insert or update case */
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateIndexes(rel, tup, warm_update, modified_attrs);
 
 	retval = HeapTupleGetOid(tup);
 
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 576b7fa..b93f7c3 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -149,7 +149,7 @@ publication_add_relation(Oid pubid, Relation targetrel,
 
 	/* Insert tuple into catalog. */
 	prrelid = simple_heap_insert(rel, tup);
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateIndexes(rel, tup, false, NULL);
 	heap_freetuple(tup);
 
 	ObjectAddressSet(myself, PublicationRelRelationId, prrelid);
diff --git a/src/backend/catalog/pg_range.c b/src/backend/catalog/pg_range.c
index d3a4c26..7d4cc5d 100644
--- a/src/backend/catalog/pg_range.c
+++ b/src/backend/catalog/pg_range.c
@@ -59,7 +59,7 @@ RangeCreate(Oid rangeTypeOid, Oid rangeSubType, Oid rangeCollation,
 	tup = heap_form_tuple(RelationGetDescr(pg_range), values, nulls);
 
 	simple_heap_insert(pg_range, tup);
-	CatalogUpdateIndexes(pg_range, tup);
+	CatalogUpdateIndexes(pg_range, tup, false, NULL);
 	heap_freetuple(tup);
 
 	/* record type's dependencies on range-related items */
diff --git a/src/backend/catalog/pg_shdepend.c b/src/backend/catalog/pg_shdepend.c
index 60ed957..3019b4e 100644
--- a/src/backend/catalog/pg_shdepend.c
+++ b/src/backend/catalog/pg_shdepend.c
@@ -253,6 +253,8 @@ shdepChangeDep(Relation sdepRel,
 	}
 	else if (oldtup)
 	{
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 		/* Need to update existing entry */
 		Form_pg_shdepend shForm = (Form_pg_shdepend) GETSTRUCT(oldtup);
 
@@ -260,10 +262,11 @@ shdepChangeDep(Relation sdepRel,
 		shForm->refclassid = refclassid;
 		shForm->refobjid = refobjid;
 
-		simple_heap_update(sdepRel, &oldtup->t_self, oldtup);
+		simple_heap_update(sdepRel, &oldtup->t_self, oldtup, &warm_update,
+				&modified_attrs);
 
 		/* keep indexes current */
-		CatalogUpdateIndexes(sdepRel, oldtup);
+		CatalogUpdateIndexes(sdepRel, oldtup, warm_update, modified_attrs);
 	}
 	else
 	{
@@ -290,7 +293,7 @@ shdepChangeDep(Relation sdepRel,
 		simple_heap_insert(sdepRel, oldtup);
 
 		/* keep indexes current */
-		CatalogUpdateIndexes(sdepRel, oldtup);
+		CatalogUpdateIndexes(sdepRel, oldtup, false, NULL);
 	}
 
 	if (oldtup)
@@ -762,7 +765,7 @@ copyTemplateDependencies(Oid templateDbId, Oid newDbId)
 		simple_heap_insert(sdepRel, newtup);
 
 		/* Keep indexes current */
-		CatalogIndexInsert(indstate, newtup);
+		CatalogIndexInsert(indstate, newtup, false, NULL);
 
 		heap_freetuple(newtup);
 	}
@@ -885,7 +888,7 @@ shdepAddDependency(Relation sdepRel,
 	simple_heap_insert(sdepRel, tup);
 
 	/* keep indexes current */
-	CatalogUpdateIndexes(sdepRel, tup);
+	CatalogUpdateIndexes(sdepRel, tup, false, NULL);
 
 	/* clean up */
 	heap_freetuple(tup);
diff --git a/src/backend/catalog/pg_type.c b/src/backend/catalog/pg_type.c
index 6d9a324..5857045 100644
--- a/src/backend/catalog/pg_type.c
+++ b/src/backend/catalog/pg_type.c
@@ -144,7 +144,7 @@ TypeShellMake(const char *typeName, Oid typeNamespace, Oid ownerId)
 	 */
 	typoid = simple_heap_insert(pg_type_desc, tup);
 
-	CatalogUpdateIndexes(pg_type_desc, tup);
+	CatalogUpdateIndexes(pg_type_desc, tup, false, NULL);
 
 	/*
 	 * Create dependencies.  We can/must skip this in bootstrap mode.
@@ -237,6 +237,8 @@ TypeCreate(Oid newTypeOid,
 	int			i;
 	Acl		   *typacl = NULL;
 	ObjectAddress address;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/*
 	 * We assume that the caller validated the arguments individually, but did
@@ -430,7 +432,8 @@ TypeCreate(Oid newTypeOid,
 								nulls,
 								replaces);
 
-		simple_heap_update(pg_type_desc, &tup->t_self, tup);
+		simple_heap_update(pg_type_desc, &tup->t_self, tup, &warm_update,
+				&modified_attrs);
 
 		typeObjectId = HeapTupleGetOid(tup);
 
@@ -459,10 +462,12 @@ TypeCreate(Oid newTypeOid,
 		/* else allow system to assign oid */
 
 		typeObjectId = simple_heap_insert(pg_type_desc, tup);
+		warm_update = false;
+		modified_attrs = NULL;
 	}
 
 	/* Update indexes */
-	CatalogUpdateIndexes(pg_type_desc, tup);
+	CatalogUpdateIndexes(pg_type_desc, tup, warm_update, modified_attrs);
 
 	/*
 	 * Create dependencies.  We can/must skip this in bootstrap mode.
@@ -700,6 +705,8 @@ RenameTypeInternal(Oid typeOid, const char *newTypeName, Oid typeNamespace)
 	HeapTuple	tuple;
 	Form_pg_type typ;
 	Oid			arrayOid;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	pg_type_desc = heap_open(TypeRelationId, RowExclusiveLock);
 
@@ -724,10 +731,11 @@ RenameTypeInternal(Oid typeOid, const char *newTypeName, Oid typeNamespace)
 	/* OK, do the rename --- tuple is a copy, so OK to scribble on it */
 	namestrcpy(&(typ->typname), newTypeName);
 
-	simple_heap_update(pg_type_desc, &tuple->t_self, tuple);
+	simple_heap_update(pg_type_desc, &tuple->t_self, tuple, &warm_update,
+			&modified_attrs);
 
 	/* update the system catalog indexes */
-	CatalogUpdateIndexes(pg_type_desc, tuple);
+	CatalogUpdateIndexes(pg_type_desc, tuple, warm_update, modified_attrs);
 
 	InvokeObjectPostAlterHook(TypeRelationId, typeOid, 0);
 
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 4dfedf8..27bc137 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -487,6 +487,7 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_tuples_deleted(C.oid) AS n_tup_del,
             pg_stat_get_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_tuples_warm_updated(C.oid) AS n_tup_warm_upd,
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
@@ -517,7 +518,8 @@ CREATE VIEW pg_stat_xact_all_tables AS
             pg_stat_get_xact_tuples_inserted(C.oid) AS n_tup_ins,
             pg_stat_get_xact_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_xact_tuples_deleted(C.oid) AS n_tup_del,
-            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd
+            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_xact_tuples_warm_updated(C.oid) AS n_tup_warm_upd
     FROM pg_class C LEFT JOIN
          pg_index I ON C.oid = I.indrelid
          LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
diff --git a/src/backend/catalog/toasting.c b/src/backend/catalog/toasting.c
index ee4a182..b48f785 100644
--- a/src/backend/catalog/toasting.c
+++ b/src/backend/catalog/toasting.c
@@ -349,11 +349,15 @@ create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid,
 
 	if (!IsBootstrapProcessingMode())
 	{
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
+
 		/* normal case, use a transactional update */
-		simple_heap_update(class_rel, &reltup->t_self, reltup);
+		simple_heap_update(class_rel, &reltup->t_self, reltup, &warm_update,
+				&modified_attrs);
 
 		/* Keep catalog indexes current */
-		CatalogUpdateIndexes(class_rel, reltup);
+		CatalogUpdateIndexes(class_rel, reltup, warm_update, modified_attrs);
 	}
 	else
 	{
diff --git a/src/backend/commands/alter.c b/src/backend/commands/alter.c
index 768fcc8..5c03207 100644
--- a/src/backend/commands/alter.c
+++ b/src/backend/commands/alter.c
@@ -172,6 +172,8 @@ AlterObjectRename_internal(Relation rel, Oid objectId, const char *new_name)
 	bool	   *nulls;
 	bool	   *replaces;
 	NameData	nameattrdata;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	oldtup = SearchSysCache1(oidCacheId, ObjectIdGetDatum(objectId));
 	if (!HeapTupleIsValid(oldtup))
@@ -284,8 +286,9 @@ AlterObjectRename_internal(Relation rel, Oid objectId, const char *new_name)
 							   values, nulls, replaces);
 
 	/* Perform actual update */
-	simple_heap_update(rel, &oldtup->t_self, newtup);
-	CatalogUpdateIndexes(rel, newtup);
+	simple_heap_update(rel, &oldtup->t_self, newtup, &warm_update,
+			&modified_attrs);
+	CatalogUpdateIndexes(rel, newtup, warm_update, modified_attrs);
 
 	InvokeObjectPostAlterHook(classId, objectId, 0);
 
@@ -617,6 +620,8 @@ AlterObjectNamespace_internal(Relation rel, Oid objid, Oid nspOid)
 	Datum	   *values;
 	bool	   *nulls;
 	bool	   *replaces;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	tup = SearchSysCacheCopy1(oidCacheId, ObjectIdGetDatum(objid));
 	if (!HeapTupleIsValid(tup)) /* should not happen */
@@ -722,8 +727,8 @@ AlterObjectNamespace_internal(Relation rel, Oid objid, Oid nspOid)
 							   values, nulls, replaces);
 
 	/* Perform actual update */
-	simple_heap_update(rel, &tup->t_self, newtup);
-	CatalogUpdateIndexes(rel, newtup);
+	simple_heap_update(rel, &tup->t_self, newtup, &warm_update, &modified_attrs);
+	CatalogUpdateIndexes(rel, newtup, warm_update, modified_attrs);
 
 	/* Release memory */
 	pfree(values);
@@ -880,6 +885,8 @@ AlterObjectOwner_internal(Relation rel, Oid objectId, Oid new_ownerId)
 		Datum	   *values;
 		bool	   *nulls;
 		bool	   *replaces;
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 
 		/* Superusers can bypass permission checks */
 		if (!superuser())
@@ -954,8 +961,9 @@ AlterObjectOwner_internal(Relation rel, Oid objectId, Oid new_ownerId)
 								   values, nulls, replaces);
 
 		/* Perform actual update */
-		simple_heap_update(rel, &newtup->t_self, newtup);
-		CatalogUpdateIndexes(rel, newtup);
+		simple_heap_update(rel, &newtup->t_self, newtup, &warm_update,
+				&modified_attrs);
+		CatalogUpdateIndexes(rel, newtup, warm_update, modified_attrs);
 
 		/* Update owner dependency reference */
 		if (classId == LargeObjectMetadataRelationId)
diff --git a/src/backend/commands/amcmds.c b/src/backend/commands/amcmds.c
index 29061b8..2f33b2c 100644
--- a/src/backend/commands/amcmds.c
+++ b/src/backend/commands/amcmds.c
@@ -88,7 +88,7 @@ CreateAccessMethod(CreateAmStmt *stmt)
 	tup = heap_form_tuple(RelationGetDescr(rel), values, nulls);
 
 	amoid = simple_heap_insert(rel, tup);
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateIndexes(rel, tup, false, NULL);
 	heap_freetuple(tup);
 
 	myself.classId = AccessMethodRelationId;
diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index e3e1a53..b9a9ede 100644
--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -1498,6 +1498,8 @@ update_attstats(Oid relid, bool inh, int natts, VacAttrStats **vacattrstats)
 		Datum		values[Natts_pg_statistic];
 		bool		nulls[Natts_pg_statistic];
 		bool		replaces[Natts_pg_statistic];
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 
 		/* Ignore attr if we weren't able to collect stats */
 		if (!stats->stats_valid)
@@ -1589,17 +1591,20 @@ update_attstats(Oid relid, bool inh, int natts, VacAttrStats **vacattrstats)
 									 nulls,
 									 replaces);
 			ReleaseSysCache(oldtup);
-			simple_heap_update(sd, &stup->t_self, stup);
+			simple_heap_update(sd, &stup->t_self, stup, &warm_update,
+					&modified_attrs);
 		}
 		else
 		{
 			/* No, insert new tuple */
 			stup = heap_form_tuple(RelationGetDescr(sd), values, nulls);
 			simple_heap_insert(sd, stup);
+			warm_update = false;
+			modified_attrs = NULL;
 		}
 
 		/* update indexes too */
-		CatalogUpdateIndexes(sd, stup);
+		CatalogUpdateIndexes(sd, stup, warm_update, modified_attrs);
 
 		heap_freetuple(stup);
 	}
diff --git a/src/backend/commands/cluster.c b/src/backend/commands/cluster.c
index f9309fc..03ed871 100644
--- a/src/backend/commands/cluster.c
+++ b/src/backend/commands/cluster.c
@@ -522,18 +522,28 @@ mark_index_clustered(Relation rel, Oid indexOid, bool is_internal)
 		 */
 		if (indexForm->indisclustered)
 		{
+			bool		warm_update;
+			Bitmapset	*modified_attrs;
+
 			indexForm->indisclustered = false;
-			simple_heap_update(pg_index, &indexTuple->t_self, indexTuple);
-			CatalogUpdateIndexes(pg_index, indexTuple);
+			simple_heap_update(pg_index, &indexTuple->t_self, indexTuple,
+					&warm_update, &modified_attrs);
+			CatalogUpdateIndexes(pg_index, indexTuple, warm_update,
+					modified_attrs);
 		}
 		else if (thisIndexOid == indexOid)
 		{
+			bool		warm_update;
+			Bitmapset	*modified_attrs;
+
 			/* this was checked earlier, but let's be real sure */
 			if (!IndexIsValid(indexForm))
 				elog(ERROR, "cannot cluster on invalid index %u", indexOid);
 			indexForm->indisclustered = true;
-			simple_heap_update(pg_index, &indexTuple->t_self, indexTuple);
-			CatalogUpdateIndexes(pg_index, indexTuple);
+			simple_heap_update(pg_index, &indexTuple->t_self, indexTuple,
+					&warm_update, &modified_attrs);
+			CatalogUpdateIndexes(pg_index, indexTuple, warm_update,
+					modified_attrs);
 		}
 
 		InvokeObjectPostAlterHookArg(IndexRelationId, thisIndexOid, 0,
@@ -1287,13 +1297,18 @@ swap_relation_files(Oid r1, Oid r2, bool target_is_pg_class,
 	 */
 	if (!target_is_pg_class)
 	{
-		simple_heap_update(relRelation, &reltup1->t_self, reltup1);
-		simple_heap_update(relRelation, &reltup2->t_self, reltup2);
+		bool warm_update1 = false, warm_update2 = false;
+		Bitmapset *modified_attrs1, *modified_attrs2;
+
+		simple_heap_update(relRelation, &reltup1->t_self, reltup1,
+				&warm_update1, &modified_attrs1);
+		simple_heap_update(relRelation, &reltup2->t_self, reltup2,
+				&warm_update2, &modified_attrs2);
 
 		/* Keep system catalogs current */
 		indstate = CatalogOpenIndexes(relRelation);
-		CatalogIndexInsert(indstate, reltup1);
-		CatalogIndexInsert(indstate, reltup2);
+		CatalogIndexInsert(indstate, reltup1, warm_update1, modified_attrs1);
+		CatalogIndexInsert(indstate, reltup2, warm_update2, modified_attrs2);
 		CatalogCloseIndexes(indstate);
 	}
 	else
@@ -1547,6 +1562,8 @@ finish_heap_swap(Oid OIDOldHeap, Oid OIDNewHeap,
 		Relation	relRelation;
 		HeapTuple	reltup;
 		Form_pg_class relform;
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 
 		relRelation = heap_open(RelationRelationId, RowExclusiveLock);
 
@@ -1558,8 +1575,9 @@ finish_heap_swap(Oid OIDOldHeap, Oid OIDNewHeap,
 		relform->relfrozenxid = frozenXid;
 		relform->relminmxid = cutoffMulti;
 
-		simple_heap_update(relRelation, &reltup->t_self, reltup);
-		CatalogUpdateIndexes(relRelation, reltup);
+		simple_heap_update(relRelation, &reltup->t_self, reltup, &warm_update,
+				&modified_attrs);
+		CatalogUpdateIndexes(relRelation, reltup, warm_update, modified_attrs);
 
 		heap_close(relRelation, RowExclusiveLock);
 	}
diff --git a/src/backend/commands/comment.c b/src/backend/commands/comment.c
index ada0b03..60b3631 100644
--- a/src/backend/commands/comment.c
+++ b/src/backend/commands/comment.c
@@ -150,6 +150,8 @@ CreateComments(Oid oid, Oid classoid, int32 subid, char *comment)
 	bool		nulls[Natts_pg_description];
 	bool		replaces[Natts_pg_description];
 	int			i;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/* Reduce empty-string to NULL case */
 	if (comment != NULL && strlen(comment) == 0)
@@ -199,7 +201,8 @@ CreateComments(Oid oid, Oid classoid, int32 subid, char *comment)
 		{
 			newtuple = heap_modify_tuple(oldtuple, RelationGetDescr(description), values,
 										 nulls, replaces);
-			simple_heap_update(description, &oldtuple->t_self, newtuple);
+			simple_heap_update(description, &oldtuple->t_self, newtuple,
+					&warm_update, &modified_attrs);
 		}
 
 		break;					/* Assume there can be only one match */
@@ -214,12 +217,14 @@
 		newtuple = heap_form_tuple(RelationGetDescr(description),
 								   values, nulls);
 		simple_heap_insert(description, newtuple);
+		warm_update = false;
+		modified_attrs = NULL;
 	}
 
 	/* Update indexes, if necessary */
 	if (newtuple != NULL)
 	{
-		CatalogUpdateIndexes(description, newtuple);
+		CatalogUpdateIndexes(description, newtuple, warm_update, modified_attrs);
 		heap_freetuple(newtuple);
 	}
 
@@ -249,6 +253,8 @@ CreateSharedComments(Oid oid, Oid classoid, char *comment)
 	bool		nulls[Natts_pg_shdescription];
 	bool		replaces[Natts_pg_shdescription];
 	int			i;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/* Reduce empty-string to NULL case */
 	if (comment != NULL && strlen(comment) == 0)
@@ -293,7 +299,8 @@ CreateSharedComments(Oid oid, Oid classoid, char *comment)
 		{
 			newtuple = heap_modify_tuple(oldtuple, RelationGetDescr(shdescription),
 										 values, nulls, replaces);
-			simple_heap_update(shdescription, &oldtuple->t_self, newtuple);
+			simple_heap_update(shdescription, &oldtuple->t_self, newtuple,
+					&warm_update, &modified_attrs);
 		}
 
 		break;					/* Assume there can be only one match */
@@ -308,12 +315,15 @@
 		newtuple = heap_form_tuple(RelationGetDescr(shdescription),
 								   values, nulls);
 		simple_heap_insert(shdescription, newtuple);
+		warm_update = false;
+		modified_attrs = NULL;
 	}
 
 	/* Update indexes, if necessary */
 	if (newtuple != NULL)
 	{
-		CatalogUpdateIndexes(shdescription, newtuple);
+		CatalogUpdateIndexes(shdescription, newtuple, warm_update,
+				modified_attrs);
 		heap_freetuple(newtuple);
 	}
 
diff --git a/src/backend/commands/constraint.c b/src/backend/commands/constraint.c
index 77cf8ce..faef5b4 100644
--- a/src/backend/commands/constraint.c
+++ b/src/backend/commands/constraint.c
@@ -40,6 +40,7 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	TriggerData *trigdata = (TriggerData *) fcinfo->context;
 	const char *funcname = "unique_key_recheck";
 	HeapTuple	new_row;
+	HeapTupleData heapTuple;
 	ItemPointerData tmptid;
 	Relation	indexRel;
 	IndexInfo  *indexInfo;
@@ -102,7 +103,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	 * removed.
 	 */
 	tmptid = new_row->t_self;
-	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL))
+	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL,
+				NULL, NULL, &heapTuple))
 	{
 		/*
 		 * All rows in the HOT chain are dead, so skip the check.
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index c05e14e..55b955a 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -2669,6 +2669,8 @@ CopyFrom(CopyState cstate)
 					if (resultRelInfo->ri_NumIndices > 0)
 						recheckIndexes = ExecInsertIndexTuples(slot,
 															&(tuple->t_self),
+															&(tuple->t_self),
+															NULL,
 															   estate,
 															   false,
 															   NULL,
@@ -2823,6 +2825,7 @@ CopyFromInsertBatch(CopyState cstate, EState *estate, CommandId mycid,
 			ExecStoreTuple(bufferedTuples[i], myslot, InvalidBuffer, false);
 			recheckIndexes =
 				ExecInsertIndexTuples(myslot, &(bufferedTuples[i]->t_self),
+									  &(bufferedTuples[i]->t_self), NULL,
 									  estate, false, NULL, NIL);
 			ExecARInsertTriggers(estate, resultRelInfo,
 								 bufferedTuples[i],
diff --git a/src/backend/commands/dbcommands.c b/src/backend/commands/dbcommands.c
index 6ad8fd7..db9f9fc 100644
--- a/src/backend/commands/dbcommands.c
+++ b/src/backend/commands/dbcommands.c
@@ -549,7 +549,7 @@ createdb(ParseState *pstate, const CreatedbStmt *stmt)
 	simple_heap_insert(pg_database_rel, tuple);
 
 	/* Update indexes */
-	CatalogUpdateIndexes(pg_database_rel, tuple);
+	CatalogUpdateIndexes(pg_database_rel, tuple, false, NULL);
 
 	/*
 	 * Now generate additional catalog entries associated with the new DB
@@ -978,6 +978,8 @@ RenameDatabase(const char *oldname, const char *newname)
 	int			notherbackends;
 	int			npreparedxacts;
 	ObjectAddress address;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/*
 	 * Look up the target database's OID, and get exclusive lock on it. We
@@ -1040,8 +1042,9 @@ RenameDatabase(const char *oldname, const char *newname)
 	if (!HeapTupleIsValid(newtup))
 		elog(ERROR, "cache lookup failed for database %u", db_id);
 	namestrcpy(&(((Form_pg_database) GETSTRUCT(newtup))->datname), newname);
-	simple_heap_update(rel, &newtup->t_self, newtup);
-	CatalogUpdateIndexes(rel, newtup);
+	simple_heap_update(rel, &newtup->t_self, newtup, &warm_update,
+			&modified_attrs);
+	CatalogUpdateIndexes(rel, newtup, warm_update, modified_attrs);
 
 	InvokeObjectPostAlterHook(DatabaseRelationId, db_id, 0);
 
@@ -1081,6 +1084,8 @@ movedb(const char *dbname, const char *tblspcname)
 	DIR		   *dstdir;
 	struct dirent *xlde;
 	movedb_failure_params fparms;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/*
 	 * Look up the target database's OID, and get exclusive lock on it. We
@@ -1296,10 +1301,11 @@ movedb(const char *dbname, const char *tblspcname)
 		newtuple = heap_modify_tuple(oldtuple, RelationGetDescr(pgdbrel),
 									 new_record,
 									 new_record_nulls, new_record_repl);
-		simple_heap_update(pgdbrel, &oldtuple->t_self, newtuple);
+		simple_heap_update(pgdbrel, &oldtuple->t_self, newtuple, &warm_update,
+				&modified_attrs);
 
 		/* Update indexes */
-		CatalogUpdateIndexes(pgdbrel, newtuple);
+		CatalogUpdateIndexes(pgdbrel, newtuple, warm_update, modified_attrs);
 
 		InvokeObjectPostAlterHook(DatabaseRelationId,
 								  HeapTupleGetOid(newtuple), 0);
@@ -1413,6 +1419,8 @@ AlterDatabase(ParseState *pstate, AlterDatabaseStmt *stmt, bool isTopLevel)
 	Datum		new_record[Natts_pg_database];
 	bool		new_record_nulls[Natts_pg_database];
 	bool		new_record_repl[Natts_pg_database];
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/* Extract options from the statement node tree */
 	foreach(option, stmt->options)
@@ -1554,10 +1562,11 @@ AlterDatabase(ParseState *pstate, AlterDatabaseStmt *stmt, bool isTopLevel)
 
 	newtuple = heap_modify_tuple(tuple, RelationGetDescr(rel), new_record,
 								 new_record_nulls, new_record_repl);
-	simple_heap_update(rel, &tuple->t_self, newtuple);
+	simple_heap_update(rel, &tuple->t_self, newtuple, &warm_update,
+			&modified_attrs);
 
 	/* Update indexes */
-	CatalogUpdateIndexes(rel, newtuple);
+	CatalogUpdateIndexes(rel, newtuple, warm_update, modified_attrs);
 
 	InvokeObjectPostAlterHook(DatabaseRelationId,
 							  HeapTupleGetOid(newtuple), 0);
@@ -1610,6 +1619,8 @@ AlterDatabaseOwner(const char *dbname, Oid newOwnerId)
 	SysScanDesc scan;
 	Form_pg_database datForm;
 	ObjectAddress address;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/*
 	 * Get the old tuple.  We don't need a lock on the database per se,
@@ -1692,8 +1703,9 @@ AlterDatabaseOwner(const char *dbname, Oid newOwnerId)
 		}
 
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(rel), repl_val, repl_null, repl_repl);
-		simple_heap_update(rel, &newtuple->t_self, newtuple);
-		CatalogUpdateIndexes(rel, newtuple);
+		simple_heap_update(rel, &newtuple->t_self, newtuple, &warm_update,
+				&modified_attrs);
+		CatalogUpdateIndexes(rel, newtuple, warm_update, modified_attrs);
 
 		heap_freetuple(newtuple);
 
diff --git a/src/backend/commands/event_trigger.c b/src/backend/commands/event_trigger.c
index 8125537..63cdff8 100644
--- a/src/backend/commands/event_trigger.c
+++ b/src/backend/commands/event_trigger.c
@@ -406,7 +406,7 @@ insert_event_trigger_tuple(char *trigname, char *eventname, Oid evtOwner,
 	/* Insert heap tuple. */
 	tuple = heap_form_tuple(tgrel->rd_att, values, nulls);
 	trigoid = simple_heap_insert(tgrel, tuple);
-	CatalogUpdateIndexes(tgrel, tuple);
+	CatalogUpdateIndexes(tgrel, tuple, false, NULL);
 	heap_freetuple(tuple);
 
 	/* Depend on owner. */
@@ -503,6 +503,8 @@ AlterEventTrigger(AlterEventTrigStmt *stmt)
 	Oid			trigoid;
 	Form_pg_event_trigger evtForm;
 	char		tgenabled = stmt->tgenabled;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	tgrel = heap_open(EventTriggerRelationId, RowExclusiveLock);
 
@@ -524,8 +526,9 @@ AlterEventTrigger(AlterEventTrigStmt *stmt)
 	evtForm = (Form_pg_event_trigger) GETSTRUCT(tup);
 	evtForm->evtenabled = tgenabled;
 
-	simple_heap_update(tgrel, &tup->t_self, tup);
-	CatalogUpdateIndexes(tgrel, tup);
+	simple_heap_update(tgrel, &tup->t_self, tup, &warm_update,
+			&modified_attrs);
+	CatalogUpdateIndexes(tgrel, tup, warm_update, modified_attrs);
 
 	InvokeObjectPostAlterHook(EventTriggerRelationId,
 							  trigoid, 0);
@@ -602,6 +605,8 @@ static void
 AlterEventTriggerOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 {
 	Form_pg_event_trigger form;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	form = (Form_pg_event_trigger) GETSTRUCT(tup);
 
@@ -621,8 +626,8 @@ AlterEventTriggerOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			 errhint("The owner of an event trigger must be a superuser.")));
 
 	form->evtowner = newOwnerId;
-	simple_heap_update(rel, &tup->t_self, tup);
-	CatalogUpdateIndexes(rel, tup);
+	simple_heap_update(rel, &tup->t_self, tup, &warm_update, &modified_attrs);
+	CatalogUpdateIndexes(rel, tup, warm_update, modified_attrs);
 
 	/* Update owner dependency reference */
 	changeDependencyOnOwner(EventTriggerRelationId,
diff --git a/src/backend/commands/extension.c b/src/backend/commands/extension.c
index 554fdc4..01e6a54 100644
--- a/src/backend/commands/extension.c
+++ b/src/backend/commands/extension.c
@@ -1773,7 +1773,7 @@ InsertExtensionTuple(const char *extName, Oid extOwner,
 	tuple = heap_form_tuple(rel->rd_att, values, nulls);
 
 	extensionOid = simple_heap_insert(rel, tuple);
-	CatalogUpdateIndexes(rel, tuple);
+	CatalogUpdateIndexes(rel, tuple, false, NULL);
 
 	heap_freetuple(tuple);
 	heap_close(rel, RowExclusiveLock);
@@ -2332,6 +2332,8 @@ pg_extension_config_dump(PG_FUNCTION_ARGS)
 	bool		repl_null[Natts_pg_extension];
 	bool		repl_repl[Natts_pg_extension];
 	ArrayType  *a;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/*
 	 * We only allow this to be called from an extension's SQL script. We
@@ -2484,8 +2486,9 @@ pg_extension_config_dump(PG_FUNCTION_ARGS)
 	extTup = heap_modify_tuple(extTup, RelationGetDescr(extRel),
 							   repl_val, repl_null, repl_repl);
 
-	simple_heap_update(extRel, &extTup->t_self, extTup);
-	CatalogUpdateIndexes(extRel, extTup);
+	simple_heap_update(extRel, &extTup->t_self, extTup, &warm_update,
+			&modified_attrs);
+	CatalogUpdateIndexes(extRel, extTup, warm_update, modified_attrs);
 
 	systable_endscan(extScan);
 
@@ -2516,6 +2519,8 @@ extension_config_remove(Oid extensionoid, Oid tableoid)
 	bool		repl_null[Natts_pg_extension];
 	bool		repl_repl[Natts_pg_extension];
 	ArrayType  *a;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/* Find the pg_extension tuple */
 	extRel = heap_open(ExtensionRelationId, RowExclusiveLock);
@@ -2662,8 +2667,9 @@ extension_config_remove(Oid extensionoid, Oid tableoid)
 	extTup = heap_modify_tuple(extTup, RelationGetDescr(extRel),
 							   repl_val, repl_null, repl_repl);
 
-	simple_heap_update(extRel, &extTup->t_self, extTup);
-	CatalogUpdateIndexes(extRel, extTup);
+	simple_heap_update(extRel, &extTup->t_self, extTup, &warm_update,
+			&modified_attrs);
+	CatalogUpdateIndexes(extRel, extTup, warm_update, modified_attrs);
 
 	systable_endscan(extScan);
 
@@ -2691,6 +2697,8 @@ AlterExtensionNamespace(List *names, const char *newschema, Oid *oldschema)
 	HeapTuple	depTup;
 	ObjectAddresses *objsMoved;
 	ObjectAddress extAddr;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	if (list_length(names) != 1)
 		ereport(ERROR,
@@ -2843,8 +2851,9 @@ AlterExtensionNamespace(List *names, const char *newschema, Oid *oldschema)
 	/* Now adjust pg_extension.extnamespace */
 	extForm->extnamespace = nspOid;
 
-	simple_heap_update(extRel, &extTup->t_self, extTup);
-	CatalogUpdateIndexes(extRel, extTup);
+	simple_heap_update(extRel, &extTup->t_self, extTup, &warm_update,
+			&modified_attrs);
+	CatalogUpdateIndexes(extRel, extTup, warm_update, modified_attrs);
 
 	heap_close(extRel, RowExclusiveLock);
 
@@ -3042,6 +3051,8 @@ ApplyExtensionUpdates(Oid extensionOid,
 		bool		repl[Natts_pg_extension];
 		ObjectAddress myself;
 		ListCell   *lc;
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 
 		/*
 		 * Fetch parameters for specific version (pcontrol is not changed)
@@ -3090,8 +3101,9 @@ ApplyExtensionUpdates(Oid extensionOid,
 		extTup = heap_modify_tuple(extTup, RelationGetDescr(extRel),
 								   values, nulls, repl);
 
-		simple_heap_update(extRel, &extTup->t_self, extTup);
-		CatalogUpdateIndexes(extRel, extTup);
+		simple_heap_update(extRel, &extTup->t_self, extTup, &warm_update,
+				&modified_attrs);
+		CatalogUpdateIndexes(extRel, extTup, warm_update, modified_attrs);
 
 		systable_endscan(extScan);
 
diff --git a/src/backend/commands/foreigncmds.c b/src/backend/commands/foreigncmds.c
index 476a023..d76ccda 100644
--- a/src/backend/commands/foreigncmds.c
+++ b/src/backend/commands/foreigncmds.c
@@ -234,6 +234,9 @@ AlterForeignDataWrapperOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerI
 
 	if (form->fdwowner != newOwnerId)
 	{
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
+
 		memset(repl_null, false, sizeof(repl_null));
 		memset(repl_repl, false, sizeof(repl_repl));
 
@@ -256,8 +259,9 @@ AlterForeignDataWrapperOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerI
 		tup = heap_modify_tuple(tup, RelationGetDescr(rel), repl_val, repl_null,
 								repl_repl);
 
-		simple_heap_update(rel, &tup->t_self, tup);
-		CatalogUpdateIndexes(rel, tup);
+		simple_heap_update(rel, &tup->t_self, tup, &warm_update,
+				&modified_attrs);
+		CatalogUpdateIndexes(rel, tup, warm_update, modified_attrs);
 
 		/* Update owner dependency reference */
 		changeDependencyOnOwner(ForeignDataWrapperRelationId,
@@ -349,6 +353,9 @@ AlterForeignServerOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 
 	if (form->srvowner != newOwnerId)
 	{
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
+
 		/* Superusers can always do it */
 		if (!superuser())
 		{
@@ -397,8 +404,9 @@ AlterForeignServerOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 		tup = heap_modify_tuple(tup, RelationGetDescr(rel), repl_val, repl_null,
 								repl_repl);
 
-		simple_heap_update(rel, &tup->t_self, tup);
-		CatalogUpdateIndexes(rel, tup);
+		simple_heap_update(rel, &tup->t_self, tup, &warm_update,
+				&modified_attrs);
+		CatalogUpdateIndexes(rel, tup, warm_update, modified_attrs);
 
 		/* Update owner dependency reference */
 		changeDependencyOnOwner(ForeignServerRelationId, HeapTupleGetOid(tup),
@@ -630,7 +638,7 @@ CreateForeignDataWrapper(CreateFdwStmt *stmt)
 	tuple = heap_form_tuple(rel->rd_att, values, nulls);
 
 	fdwId = simple_heap_insert(rel, tuple);
-	CatalogUpdateIndexes(rel, tuple);
+	CatalogUpdateIndexes(rel, tuple, false, NULL);
 
 	heap_freetuple(tuple);
 
@@ -689,6 +697,8 @@ AlterForeignDataWrapper(AlterFdwStmt *stmt)
 	Oid			fdwhandler;
 	Oid			fdwvalidator;
 	ObjectAddress myself;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	rel = heap_open(ForeignDataWrapperRelationId, RowExclusiveLock);
 
@@ -786,8 +796,8 @@ AlterForeignDataWrapper(AlterFdwStmt *stmt)
 	tp = heap_modify_tuple(tp, RelationGetDescr(rel),
 						   repl_val, repl_null, repl_repl);
 
-	simple_heap_update(rel, &tp->t_self, tp);
-	CatalogUpdateIndexes(rel, tp);
+	simple_heap_update(rel, &tp->t_self, tp, &warm_update, &modified_attrs);
+	CatalogUpdateIndexes(rel, tp, warm_update, modified_attrs);
 
 	heap_freetuple(tp);
 
@@ -943,7 +953,7 @@ CreateForeignServer(CreateForeignServerStmt *stmt)
 
 	srvId = simple_heap_insert(rel, tuple);
 
-	CatalogUpdateIndexes(rel, tuple);
+	CatalogUpdateIndexes(rel, tuple, false, NULL);
 
 	heap_freetuple(tuple);
 
@@ -985,6 +995,8 @@ AlterForeignServer(AlterForeignServerStmt *stmt)
 	Oid			srvId;
 	Form_pg_foreign_server srvForm;
 	ObjectAddress address;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	rel = heap_open(ForeignServerRelationId, RowExclusiveLock);
 
@@ -1056,8 +1068,8 @@ AlterForeignServer(AlterForeignServerStmt *stmt)
 	tp = heap_modify_tuple(tp, RelationGetDescr(rel),
 						   repl_val, repl_null, repl_repl);
 
-	simple_heap_update(rel, &tp->t_self, tp);
-	CatalogUpdateIndexes(rel, tp);
+	simple_heap_update(rel, &tp->t_self, tp, &warm_update, &modified_attrs);
+	CatalogUpdateIndexes(rel, tp, warm_update, modified_attrs);
 
 	InvokeObjectPostAlterHook(ForeignServerRelationId, srvId, 0);
 
@@ -1192,7 +1204,7 @@ CreateUserMapping(CreateUserMappingStmt *stmt)
 
 	umId = simple_heap_insert(rel, tuple);
 
-	CatalogUpdateIndexes(rel, tuple);
+	CatalogUpdateIndexes(rel, tuple, false, NULL);
 
 	heap_freetuple(tuple);
 
@@ -1240,6 +1252,8 @@ AlterUserMapping(AlterUserMappingStmt *stmt)
 	ForeignServer *srv;
 	ObjectAddress address;
 	RoleSpec   *role = (RoleSpec *) stmt->user;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	rel = heap_open(UserMappingRelationId, RowExclusiveLock);
 
@@ -1307,8 +1321,8 @@ AlterUserMapping(AlterUserMappingStmt *stmt)
 	tp = heap_modify_tuple(tp, RelationGetDescr(rel),
 						   repl_val, repl_null, repl_repl);
 
-	simple_heap_update(rel, &tp->t_self, tp);
-	CatalogUpdateIndexes(rel, tp);
+	simple_heap_update(rel, &tp->t_self, tp, &warm_update, &modified_attrs);
+	CatalogUpdateIndexes(rel, tp, warm_update, modified_attrs);
 
 	ObjectAddressSet(address, UserMappingRelationId, umId);
 
@@ -1485,7 +1499,7 @@ CreateForeignTable(CreateForeignTableStmt *stmt, Oid relid)
 	tuple = heap_form_tuple(ftrel->rd_att, values, nulls);
 
 	simple_heap_insert(ftrel, tuple);
-	CatalogUpdateIndexes(ftrel, tuple);
+	CatalogUpdateIndexes(ftrel, tuple, false, NULL);
 
 	heap_freetuple(tuple);
 
diff --git a/src/backend/commands/functioncmds.c b/src/backend/commands/functioncmds.c
index 22aecb2..8a48bdc 100644
--- a/src/backend/commands/functioncmds.c
+++ b/src/backend/commands/functioncmds.c
@@ -1181,6 +1181,8 @@ AlterFunction(ParseState *pstate, AlterFunctionStmt *stmt)
 	DefElem    *rows_item = NULL;
 	DefElem    *parallel_item = NULL;
 	ObjectAddress address;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	rel = heap_open(ProcedureRelationId, RowExclusiveLock);
 
@@ -1295,8 +1297,8 @@ AlterFunction(ParseState *pstate, AlterFunctionStmt *stmt)
 		procForm->proparallel = interpret_func_parallel(parallel_item);
 
 	/* Do the update */
-	simple_heap_update(rel, &tup->t_self, tup);
-	CatalogUpdateIndexes(rel, tup);
+	simple_heap_update(rel, &tup->t_self, tup, &warm_update, &modified_attrs);
+	CatalogUpdateIndexes(rel, tup, warm_update, modified_attrs);
 
 	InvokeObjectPostAlterHook(ProcedureRelationId, funcOid, 0);
 
@@ -1321,6 +1323,8 @@ SetFunctionReturnType(Oid funcOid, Oid newRetType)
 	Relation	pg_proc_rel;
 	HeapTuple	tup;
 	Form_pg_proc procForm;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	pg_proc_rel = heap_open(ProcedureRelationId, RowExclusiveLock);
 
@@ -1336,9 +1340,10 @@ SetFunctionReturnType(Oid funcOid, Oid newRetType)
 	procForm->prorettype = newRetType;
 
 	/* update the catalog and its indexes */
-	simple_heap_update(pg_proc_rel, &tup->t_self, tup);
+	simple_heap_update(pg_proc_rel, &tup->t_self, tup, &warm_update,
+			&modified_attrs);
 
-	CatalogUpdateIndexes(pg_proc_rel, tup);
+	CatalogUpdateIndexes(pg_proc_rel, tup, warm_update, modified_attrs);
 
 	heap_close(pg_proc_rel, RowExclusiveLock);
 }
@@ -1355,6 +1360,8 @@ SetFunctionArgType(Oid funcOid, int argIndex, Oid newArgType)
 	Relation	pg_proc_rel;
 	HeapTuple	tup;
 	Form_pg_proc procForm;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	pg_proc_rel = heap_open(ProcedureRelationId, RowExclusiveLock);
 
@@ -1371,9 +1378,10 @@ SetFunctionArgType(Oid funcOid, int argIndex, Oid newArgType)
 	procForm->proargtypes.values[argIndex] = newArgType;
 
 	/* update the catalog and its indexes */
-	simple_heap_update(pg_proc_rel, &tup->t_self, tup);
+	simple_heap_update(pg_proc_rel, &tup->t_self, tup, &warm_update,
+			&modified_attrs);
 
-	CatalogUpdateIndexes(pg_proc_rel, tup);
+	CatalogUpdateIndexes(pg_proc_rel, tup, warm_update, modified_attrs);
 
 	heap_close(pg_proc_rel, RowExclusiveLock);
 }
@@ -1661,7 +1669,7 @@ CreateCast(CreateCastStmt *stmt)
 
 	castid = simple_heap_insert(relation, tuple);
 
-	CatalogUpdateIndexes(relation, tuple);
+	CatalogUpdateIndexes(relation, tuple, false, NULL);
 
 	/* make dependency entries */
 	myself.classId = CastRelationId;
@@ -1806,6 +1814,8 @@ CreateTransform(CreateTransformStmt *stmt)
 	ObjectAddress myself,
 				referenced;
 	bool		is_replace;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/*
 	 * Get the type
@@ -1924,7 +1934,8 @@ CreateTransform(CreateTransformStmt *stmt)
 		replaces[Anum_pg_transform_trftosql - 1] = true;
 
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values, nulls, replaces);
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
+		simple_heap_update(relation, &newtuple->t_self, newtuple, &warm_update,
+				&modified_attrs);
 
 		transformid = HeapTupleGetOid(tuple);
 		ReleaseSysCache(tuple);
@@ -1935,9 +1946,11 @@ CreateTransform(CreateTransformStmt *stmt)
 		newtuple = heap_form_tuple(RelationGetDescr(relation), values, nulls);
 		transformid = simple_heap_insert(relation, newtuple);
 		is_replace = false;
+		warm_update = false;
+		modified_attrs = NULL;
 	}
 
-	CatalogUpdateIndexes(relation, newtuple);
+	CatalogUpdateIndexes(relation, newtuple, warm_update, modified_attrs);
 
 	if (is_replace)
 		deleteDependencyRecordsFor(TransformRelationId, transformid, true);
diff --git a/src/backend/commands/matview.c b/src/backend/commands/matview.c
index 6b5a9b6..c22173d 100644
--- a/src/backend/commands/matview.c
+++ b/src/backend/commands/matview.c
@@ -83,6 +83,8 @@ SetMatViewPopulatedState(Relation relation, bool newstate)
 {
 	Relation	pgrel;
 	HeapTuple	tuple;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	Assert(relation->rd_rel->relkind == RELKIND_MATVIEW);
 
@@ -100,9 +102,10 @@ SetMatViewPopulatedState(Relation relation, bool newstate)
 
 	((Form_pg_class) GETSTRUCT(tuple))->relispopulated = newstate;
 
-	simple_heap_update(pgrel, &tuple->t_self, tuple);
+	simple_heap_update(pgrel, &tuple->t_self, tuple, &warm_update,
+			&modified_attrs);
 
-	CatalogUpdateIndexes(pgrel, tuple);
+	CatalogUpdateIndexes(pgrel, tuple, warm_update, modified_attrs);
 
 	heap_freetuple(tuple);
 	heap_close(pgrel, RowExclusiveLock);
diff --git a/src/backend/commands/opclasscmds.c b/src/backend/commands/opclasscmds.c
index 7cfcc6d..609d92b 100644
--- a/src/backend/commands/opclasscmds.c
+++ b/src/backend/commands/opclasscmds.c
@@ -280,7 +280,7 @@ CreateOpFamily(char *amname, char *opfname, Oid namespaceoid, Oid amoid)
 
 	opfamilyoid = simple_heap_insert(rel, tup);
 
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateIndexes(rel, tup, false, NULL);
 
 	heap_freetuple(tup);
 
@@ -657,7 +657,7 @@ DefineOpClass(CreateOpClassStmt *stmt)
 
 	opclassoid = simple_heap_insert(rel, tup);
 
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateIndexes(rel, tup, false, NULL);
 
 	heap_freetuple(tup);
 
@@ -1332,7 +1332,7 @@ storeOperators(List *opfamilyname, Oid amoid,
 
 		entryoid = simple_heap_insert(rel, tup);
 
-		CatalogUpdateIndexes(rel, tup);
+		CatalogUpdateIndexes(rel, tup, false, NULL);
 
 		heap_freetuple(tup);
 
@@ -1443,7 +1443,7 @@ storeProcedures(List *opfamilyname, Oid amoid,
 
 		entryoid = simple_heap_insert(rel, tup);
 
-		CatalogUpdateIndexes(rel, tup);
+		CatalogUpdateIndexes(rel, tup, false, NULL);
 
 		heap_freetuple(tup);
 
diff --git a/src/backend/commands/operatorcmds.c b/src/backend/commands/operatorcmds.c
index a273376..e93a71a 100644
--- a/src/backend/commands/operatorcmds.c
+++ b/src/backend/commands/operatorcmds.c
@@ -400,6 +400,8 @@ AlterOperator(AlterOperatorStmt *stmt)
 	List	   *joinName = NIL; /* optional join sel. procedure */
 	bool		updateJoin = false;
 	Oid			joinOid;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/* Look up the operator */
 	oprId = LookupOperNameTypeNames(NULL, stmt->opername,
@@ -518,8 +520,9 @@ AlterOperator(AlterOperatorStmt *stmt)
 	tup = heap_modify_tuple(tup, RelationGetDescr(catalog),
 							values, nulls, replaces);
 
-	simple_heap_update(catalog, &tup->t_self, tup);
-	CatalogUpdateIndexes(catalog, tup);
+	simple_heap_update(catalog, &tup->t_self, tup, &warm_update,
+			&modified_attrs);
+	CatalogUpdateIndexes(catalog, tup, warm_update, modified_attrs);
 
 	address = makeOperatorDependencies(tup, true);
 
diff --git a/src/backend/commands/policy.c b/src/backend/commands/policy.c
index 5d9d3a6..a080fc0 100644
--- a/src/backend/commands/policy.c
+++ b/src/backend/commands/policy.c
@@ -536,6 +536,8 @@ RemoveRoleFromObjectPolicy(Oid roleid, Oid classid, Oid policy_id)
 		HeapTuple	new_tuple;
 		ObjectAddress target;
 		ObjectAddress myself;
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 
 		/* zero-clear */
 		memset(values, 0, sizeof(values));
@@ -614,10 +616,12 @@ RemoveRoleFromObjectPolicy(Oid roleid, Oid classid, Oid policy_id)
 		new_tuple = heap_modify_tuple(tuple,
 									  RelationGetDescr(pg_policy_rel),
 									  values, isnull, replaces);
-		simple_heap_update(pg_policy_rel, &new_tuple->t_self, new_tuple);
+		simple_heap_update(pg_policy_rel, &new_tuple->t_self, new_tuple,
+				&warm_update, &modified_attrs);
 
 		/* Update Catalog Indexes */
-		CatalogUpdateIndexes(pg_policy_rel, new_tuple);
+		CatalogUpdateIndexes(pg_policy_rel, new_tuple, warm_update,
+				modified_attrs);
 
 		/* Remove all old dependencies. */
 		deleteDependencyRecordsFor(PolicyRelationId, policy_id, false);
@@ -826,7 +830,7 @@ CreatePolicy(CreatePolicyStmt *stmt)
 	policy_id = simple_heap_insert(pg_policy_rel, policy_tuple);
 
 	/* Update Indexes */
-	CatalogUpdateIndexes(pg_policy_rel, policy_tuple);
+	CatalogUpdateIndexes(pg_policy_rel, policy_tuple, false, NULL);
 
 	/* Record Dependencies */
 	target.classId = RelationRelationId;
@@ -906,6 +910,8 @@ AlterPolicy(AlterPolicyStmt *stmt)
 	char		polcmd;
 	bool		polcmd_isnull;
 	int			i;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/* Parse role_ids */
 	if (stmt->roles != NULL)
@@ -1150,10 +1156,12 @@ AlterPolicy(AlterPolicyStmt *stmt)
 	new_tuple = heap_modify_tuple(policy_tuple,
 								  RelationGetDescr(pg_policy_rel),
 								  values, isnull, replaces);
-	simple_heap_update(pg_policy_rel, &new_tuple->t_self, new_tuple);
+	simple_heap_update(pg_policy_rel, &new_tuple->t_self, new_tuple,
+			&warm_update, &modified_attrs);
 
 	/* Update Catalog Indexes */
-	CatalogUpdateIndexes(pg_policy_rel, new_tuple);
+	CatalogUpdateIndexes(pg_policy_rel, new_tuple, warm_update,
+			modified_attrs);
 
 	/* Update Dependencies. */
 	deleteDependencyRecordsFor(PolicyRelationId, policy_id, false);
@@ -1217,6 +1225,8 @@ rename_policy(RenameStmt *stmt)
 	SysScanDesc sscan;
 	HeapTuple	policy_tuple;
 	ObjectAddress address;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/* Get id of table.  Also handles permissions checks. */
 	table_id = RangeVarGetRelidExtended(stmt->relation, AccessExclusiveLock,
@@ -1287,10 +1297,12 @@ rename_policy(RenameStmt *stmt)
 	namestrcpy(&((Form_pg_policy) GETSTRUCT(policy_tuple))->polname,
 			   stmt->newname);
 
-	simple_heap_update(pg_policy_rel, &policy_tuple->t_self, policy_tuple);
+	simple_heap_update(pg_policy_rel, &policy_tuple->t_self, policy_tuple,
+			&warm_update, &modified_attrs);
 
 	/* keep system catalog indexes current */
-	CatalogUpdateIndexes(pg_policy_rel, policy_tuple);
+	CatalogUpdateIndexes(pg_policy_rel, policy_tuple, warm_update,
+			modified_attrs);
 
 	InvokeObjectPostAlterHook(PolicyRelationId,
 							  HeapTupleGetOid(policy_tuple), 0);
diff --git a/src/backend/commands/proclang.c b/src/backend/commands/proclang.c
index b684f41..aae5ef6 100644
--- a/src/backend/commands/proclang.c
+++ b/src/backend/commands/proclang.c
@@ -336,6 +336,8 @@ create_proc_lang(const char *languageName, bool replace,
 	bool		is_update;
 	ObjectAddress myself,
 				referenced;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	rel = heap_open(LanguageRelationId, RowExclusiveLock);
 	tupDesc = RelationGetDescr(rel);
@@ -378,7 +380,8 @@ create_proc_lang(const char *languageName, bool replace,
 
 		/* Okay, do it... */
 		tup = heap_modify_tuple(oldtup, tupDesc, values, nulls, replaces);
-		simple_heap_update(rel, &tup->t_self, tup);
+		simple_heap_update(rel, &tup->t_self, tup, &warm_update,
+				&modified_attrs);
 
 		ReleaseSysCache(oldtup);
 		is_update = true;
@@ -389,10 +392,12 @@ create_proc_lang(const char *languageName, bool replace,
 		tup = heap_form_tuple(tupDesc, values, nulls);
 		simple_heap_insert(rel, tup);
 		is_update = false;
+		warm_update = false;
+		modified_attrs = NULL;
 	}
 
 	/* Need to update indexes for either the insert or update case */
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateIndexes(rel, tup, warm_update, modified_attrs);
 
 	/*
 	 * Create dependencies for the new language.  If we are updating an
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 63dcc10..4980b36 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -215,7 +215,7 @@ CreatePublication(CreatePublicationStmt *stmt)
 
 	/* Insert tuple into catalog. */
 	puboid = simple_heap_insert(rel, tup);
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateIndexes(rel, tup, false, NULL);
 	heap_freetuple(tup);
 
 	recordDependencyOnOwner(PublicationRelationId, puboid, GetUserId());
@@ -260,6 +260,8 @@ AlterPublicationOptions(AlterPublicationStmt *stmt, Relation rel,
 	bool		publish_update;
 	bool		publish_delete;
 	ObjectAddress		obj;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	parse_publication_options(stmt->options,
 							  &publish_insert_given, &publish_insert,
@@ -294,8 +296,8 @@ AlterPublicationOptions(AlterPublicationStmt *stmt, Relation rel,
 							replaces);
 
 	/* Update the catalog. */
-	simple_heap_update(rel, &tup->t_self, tup);
-	CatalogUpdateIndexes(rel, tup);
+	simple_heap_update(rel, &tup->t_self, tup, &warm_update, &modified_attrs);
+	CatalogUpdateIndexes(rel, tup, warm_update, modified_attrs);
 
 	CommandCounterIncrement();
 
@@ -666,6 +668,8 @@ PublicationDropTables(Oid pubid, List *rels, bool missing_ok)
 AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 {
 	Form_pg_publication form;
+	bool				warm_update;
+	Bitmapset			*modified_attrs;
 
 	form = (Form_pg_publication) GETSTRUCT(tup);
 
@@ -685,8 +689,8 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 				 errhint("The owner of a publication must be a superuser.")));
 
 	form->pubowner = newOwnerId;
-	simple_heap_update(rel, &tup->t_self, tup);
-	CatalogUpdateIndexes(rel, tup);
+	simple_heap_update(rel, &tup->t_self, tup, &warm_update, &modified_attrs);
+	CatalogUpdateIndexes(rel, tup, warm_update, modified_attrs);
 
 	/* Update owner dependency reference */
 	changeDependencyOnOwner(PublicationRelationId,
diff --git a/src/backend/commands/schemacmds.c b/src/backend/commands/schemacmds.c
index c3b37b2..d89e093 100644
--- a/src/backend/commands/schemacmds.c
+++ b/src/backend/commands/schemacmds.c
@@ -245,6 +245,8 @@ RenameSchema(const char *oldname, const char *newname)
 	Relation	rel;
 	AclResult	aclresult;
 	ObjectAddress address;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	rel = heap_open(NamespaceRelationId, RowExclusiveLock);
 
@@ -281,8 +283,8 @@ RenameSchema(const char *oldname, const char *newname)
 
 	/* rename */
 	namestrcpy(&(((Form_pg_namespace) GETSTRUCT(tup))->nspname), newname);
-	simple_heap_update(rel, &tup->t_self, tup);
-	CatalogUpdateIndexes(rel, tup);
+	simple_heap_update(rel, &tup->t_self, tup, &warm_update, &modified_attrs);
+	CatalogUpdateIndexes(rel, tup, warm_update, modified_attrs);
 
 	InvokeObjectPostAlterHook(NamespaceRelationId, HeapTupleGetOid(tup), 0);
 
@@ -370,6 +372,8 @@ AlterSchemaOwner_internal(HeapTuple tup, Relation rel, Oid newOwnerId)
 		bool		isNull;
 		HeapTuple	newtuple;
 		AclResult	aclresult;
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 
 		/* Otherwise, must be owner of the existing object */
 		if (!pg_namespace_ownercheck(HeapTupleGetOid(tup), GetUserId()))
@@ -417,8 +421,9 @@ AlterSchemaOwner_internal(HeapTuple tup, Relation rel, Oid newOwnerId)
 
 		newtuple = heap_modify_tuple(tup, RelationGetDescr(rel), repl_val, repl_null, repl_repl);
 
-		simple_heap_update(rel, &newtuple->t_self, newtuple);
-		CatalogUpdateIndexes(rel, newtuple);
+		simple_heap_update(rel, &newtuple->t_self, newtuple, &warm_update,
+				&modified_attrs);
+		CatalogUpdateIndexes(rel, newtuple, warm_update, modified_attrs);
 
 		heap_freetuple(newtuple);
 
diff --git a/src/backend/commands/seclabel.c b/src/backend/commands/seclabel.c
index 324f2e7..30d7af8 100644
--- a/src/backend/commands/seclabel.c
+++ b/src/backend/commands/seclabel.c
@@ -260,6 +260,8 @@ SetSharedSecurityLabel(const ObjectAddress *object,
 	Datum		values[Natts_pg_shseclabel];
 	bool		nulls[Natts_pg_shseclabel];
 	bool		replaces[Natts_pg_shseclabel];
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/* Prepare to form or update a tuple, if necessary. */
 	memset(nulls, false, sizeof(nulls));
@@ -299,7 +301,8 @@ SetSharedSecurityLabel(const ObjectAddress *object,
 			replaces[Anum_pg_shseclabel_label - 1] = true;
 			newtup = heap_modify_tuple(oldtup, RelationGetDescr(pg_shseclabel),
 									   values, nulls, replaces);
-			simple_heap_update(pg_shseclabel, &oldtup->t_self, newtup);
+			simple_heap_update(pg_shseclabel, &oldtup->t_self, newtup,
+					&warm_update, &modified_attrs);
 		}
 	}
 	systable_endscan(scan);
@@ -310,12 +313,14 @@ SetSharedSecurityLabel(const ObjectAddress *object,
 		newtup = heap_form_tuple(RelationGetDescr(pg_shseclabel),
 								 values, nulls);
 		simple_heap_insert(pg_shseclabel, newtup);
+		warm_update = false;
+		modified_attrs = NULL;
 	}
 
 	/* Update indexes, if necessary */
 	if (newtup != NULL)
 	{
-		CatalogUpdateIndexes(pg_shseclabel, newtup);
+		CatalogUpdateIndexes(pg_shseclabel, newtup, warm_update, modified_attrs);
 		heap_freetuple(newtup);
 	}
 
@@ -339,6 +343,8 @@ SetSecurityLabel(const ObjectAddress *object,
 	Datum		values[Natts_pg_seclabel];
 	bool		nulls[Natts_pg_seclabel];
 	bool		replaces[Natts_pg_seclabel];
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/* Shared objects have their own security label catalog. */
 	if (IsSharedRelation(object->classId))
@@ -390,7 +396,8 @@ SetSecurityLabel(const ObjectAddress *object,
 			replaces[Anum_pg_seclabel_label - 1] = true;
 			newtup = heap_modify_tuple(oldtup, RelationGetDescr(pg_seclabel),
 									   values, nulls, replaces);
-			simple_heap_update(pg_seclabel, &oldtup->t_self, newtup);
+			simple_heap_update(pg_seclabel, &oldtup->t_self, newtup,
+					&warm_update, &modified_attrs);
 		}
 	}
 	systable_endscan(scan);
@@ -401,12 +408,14 @@ SetSecurityLabel(const ObjectAddress *object,
 		newtup = heap_form_tuple(RelationGetDescr(pg_seclabel),
 								 values, nulls);
 		simple_heap_insert(pg_seclabel, newtup);
+		warm_update = false;
+		modified_attrs = NULL;
 	}
 
 	/* Update indexes, if necessary */
 	if (newtup != NULL)
 	{
-		CatalogUpdateIndexes(pg_seclabel, newtup);
+		CatalogUpdateIndexes(pg_seclabel, newtup, warm_update, modified_attrs);
 		heap_freetuple(newtup);
 	}
 
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 0c673f5..8d4d9a4 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -237,7 +237,7 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
 
 	tuple = heap_form_tuple(tupDesc, pgs_values, pgs_nulls);
 	simple_heap_insert(rel, tuple);
-	CatalogUpdateIndexes(rel, tuple);
+	CatalogUpdateIndexes(rel, tuple, false, NULL);
 
 	heap_freetuple(tuple);
 	heap_close(rel, RowExclusiveLock);
@@ -419,6 +419,8 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 	ObjectAddress address;
 	Relation	rel;
 	HeapTuple	tuple;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/* Open and lock sequence. */
 	relid = RangeVarGetRelid(stmt->sequence, AccessShareLock, stmt->missing_ok);
@@ -504,8 +506,9 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 
 	relation_close(seqrel, NoLock);
 
-	simple_heap_update(rel, &tuple->t_self, tuple);
-	CatalogUpdateIndexes(rel, tuple);
+	simple_heap_update(rel, &tuple->t_self, tuple, &warm_update,
+			&modified_attrs);
+	CatalogUpdateIndexes(rel, tuple, warm_update, modified_attrs);
 	heap_close(rel, RowExclusiveLock);
 
 	return address;
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 2b6d322..ccabde1 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -277,7 +277,7 @@ CreateSubscription(CreateSubscriptionStmt *stmt)
 
 	/* Insert tuple into catalog. */
 	subid = simple_heap_insert(rel, tup);
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateIndexes(rel, tup, false, NULL);
 	heap_freetuple(tup);
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
@@ -339,6 +339,8 @@ AlterSubscription(AlterSubscriptionStmt *stmt)
 	char	   *conninfo;
 	char	   *slot_name;
 	List	   *publications;
+	bool		warm_update;
+	Bitmapset  *modified_attrs;
 
 	rel = heap_open(SubscriptionRelationId, RowExclusiveLock);
 
@@ -397,8 +399,8 @@ AlterSubscription(AlterSubscriptionStmt *stmt)
 							replaces);
 
 	/* Update the catalog. */
-	simple_heap_update(rel, &tup->t_self, tup);
-	CatalogUpdateIndexes(rel, tup);
+	simple_heap_update(rel, &tup->t_self, tup, &warm_update, &modified_attrs);
+	CatalogUpdateIndexes(rel, tup, warm_update, modified_attrs);
 
 	ObjectAddressSet(myself, SubscriptionRelationId, subid);
 
@@ -558,6 +560,8 @@ static void
 AlterSubscriptionOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 {
 	Form_pg_subscription form;
+	bool		warm_update;
+	Bitmapset  *modified_attrs;
 
 	form = (Form_pg_subscription) GETSTRUCT(tup);
 
@@ -577,8 +581,8 @@ AlterSubscriptionOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			 errhint("The owner of an subscription must be a superuser.")));
 
 	form->subowner = newOwnerId;
-	simple_heap_update(rel, &tup->t_self, tup);
-	CatalogUpdateIndexes(rel, tup);
+	simple_heap_update(rel, &tup->t_self, tup, &warm_update, &modified_attrs);
+	CatalogUpdateIndexes(rel, tup, warm_update, modified_attrs);
 
 	/* Update owner dependency reference */
 	changeDependencyOnOwner(SubscriptionRelationId,
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index c4b0011..2d8d419 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -2310,7 +2310,7 @@ StoreCatalogInheritance1(Oid relationId, Oid parentOid,
 
 	simple_heap_insert(inhRelation, tuple);
 
-	CatalogUpdateIndexes(inhRelation, tuple);
+	CatalogUpdateIndexes(inhRelation, tuple, false, NULL);
 
 	heap_freetuple(tuple);
 
@@ -2397,11 +2397,16 @@ SetRelationHasSubclass(Oid relationId, bool relhassubclass)
 
 	if (classtuple->relhassubclass != relhassubclass)
 	{
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
+
 		classtuple->relhassubclass = relhassubclass;
-		simple_heap_update(relationRelation, &tuple->t_self, tuple);
+		simple_heap_update(relationRelation, &tuple->t_self, tuple,
+				&warm_update, &modified_attrs);
 
 		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relationRelation, tuple);
+		CatalogUpdateIndexes(relationRelation, tuple, warm_update,
+				modified_attrs);
 	}
 	else
 	{
@@ -2477,6 +2482,8 @@ renameatt_internal(Oid myrelid,
 	HeapTuple	atttup;
 	Form_pg_attribute attform;
 	AttrNumber	attnum;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/*
 	 * Grab an exclusive lock on the target table, which we will NOT release
@@ -2592,10 +2599,11 @@ renameatt_internal(Oid myrelid,
 	/* apply the update */
 	namestrcpy(&(attform->attname), newattname);
 
-	simple_heap_update(attrelation, &atttup->t_self, atttup);
+	simple_heap_update(attrelation, &atttup->t_self, atttup, &warm_update,
+			&modified_attrs);
 
 	/* keep system catalog indexes current */
-	CatalogUpdateIndexes(attrelation, atttup);
+	CatalogUpdateIndexes(attrelation, atttup, warm_update, modified_attrs);
 
 	InvokeObjectPostAlterHook(RelationRelationId, myrelid, attnum);
 
@@ -2871,6 +2879,8 @@ RenameRelationInternal(Oid myrelid, const char *newrelname, bool is_internal)
 	HeapTuple	reltup;
 	Form_pg_class relform;
 	Oid			namespaceId;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/*
 	 * Grab an exclusive lock on the target table, index, sequence, view,
@@ -2902,10 +2912,11 @@ RenameRelationInternal(Oid myrelid, const char *newrelname, bool is_internal)
 	 */
 	namestrcpy(&(relform->relname), newrelname);
 
-	simple_heap_update(relrelation, &reltup->t_self, reltup);
+	simple_heap_update(relrelation, &reltup->t_self, reltup, &warm_update,
+			&modified_attrs);
 
 	/* keep the system catalog indexes current */
-	CatalogUpdateIndexes(relrelation, reltup);
+	CatalogUpdateIndexes(relrelation, reltup, warm_update, modified_attrs);
 
 	InvokeObjectPostAlterHookArg(RelationRelationId, myrelid, 0,
 								 InvalidOid, is_internal);
@@ -5039,6 +5050,8 @@ ATExecAddColumn(List **wqueue, AlteredTableInfo *tab, Relation rel,
 	ListCell   *child;
 	AclResult	aclresult;
 	ObjectAddress address;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/* At top level, permission check was done in ATPrepCmd, else do it */
 	if (recursing)
@@ -5069,6 +5082,8 @@ ATExecAddColumn(List **wqueue, AlteredTableInfo *tab, Relation rel,
 			Oid			ctypeId;
 			int32		ctypmod;
 			Oid			ccollid;
+			bool		warm_update;
+			Bitmapset	*modified_attrs;
 
 			/* Child column must match on type, typmod, and collation */
 			typenameTypeIdAndMod(NULL, colDef->typeName, &ctypeId, &ctypmod);
@@ -5097,8 +5112,9 @@ ATExecAddColumn(List **wqueue, AlteredTableInfo *tab, Relation rel,
 
 			/* Bump the existing child att's inhcount */
 			childatt->attinhcount++;
-			simple_heap_update(attrdesc, &tuple->t_self, tuple);
-			CatalogUpdateIndexes(attrdesc, tuple);
+			simple_heap_update(attrdesc, &tuple->t_self, tuple, &warm_update,
+					&modified_attrs);
+			CatalogUpdateIndexes(attrdesc, tuple, warm_update, modified_attrs);
 
 			heap_freetuple(tuple);
 
@@ -5191,10 +5207,11 @@ ATExecAddColumn(List **wqueue, AlteredTableInfo *tab, Relation rel,
 	else
 		((Form_pg_class) GETSTRUCT(reltup))->relnatts = newattnum;
 
-	simple_heap_update(pgclass, &reltup->t_self, reltup);
+	simple_heap_update(pgclass, &reltup->t_self, reltup, &warm_update,
+			&modified_attrs);
 
 	/* keep catalog indexes current */
-	CatalogUpdateIndexes(pgclass, reltup);
+	CatalogUpdateIndexes(pgclass, reltup, warm_update, modified_attrs);
 
 	heap_freetuple(reltup);
 
@@ -5628,12 +5645,16 @@ ATExecDropNotNull(Relation rel, const char *colName, LOCKMODE lockmode)
 	 */
 	if (((Form_pg_attribute) GETSTRUCT(tuple))->attnotnull)
 	{
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
+
 		((Form_pg_attribute) GETSTRUCT(tuple))->attnotnull = FALSE;
 
-		simple_heap_update(attr_rel, &tuple->t_self, tuple);
+		simple_heap_update(attr_rel, &tuple->t_self, tuple, &warm_update,
+				&modified_attrs);
 
 		/* keep the system catalog indexes current */
-		CatalogUpdateIndexes(attr_rel, tuple);
+		CatalogUpdateIndexes(attr_rel, tuple, warm_update, modified_attrs);
 
 		ObjectAddressSubSet(address, RelationRelationId,
 							RelationGetRelid(rel), attnum);
@@ -5706,12 +5727,16 @@ ATExecSetNotNull(AlteredTableInfo *tab, Relation rel,
 	 */
 	if (!((Form_pg_attribute) GETSTRUCT(tuple))->attnotnull)
 	{
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
+
 		((Form_pg_attribute) GETSTRUCT(tuple))->attnotnull = TRUE;
 
-		simple_heap_update(attr_rel, &tuple->t_self, tuple);
+		simple_heap_update(attr_rel, &tuple->t_self, tuple, &warm_update,
+				&modified_attrs);
 
 		/* keep the system catalog indexes current */
-		CatalogUpdateIndexes(attr_rel, tuple);
+		CatalogUpdateIndexes(attr_rel, tuple, warm_update, modified_attrs);
 
 		/* Tell Phase 3 it needs to test the constraint */
 		tab->new_notnull = true;
@@ -5833,6 +5858,8 @@ ATExecSetStatistics(Relation rel, const char *colName, Node *newValue, LOCKMODE
 	Form_pg_attribute attrtuple;
 	AttrNumber	attnum;
 	ObjectAddress address;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	Assert(IsA(newValue, Integer));
 	newtarget = intVal(newValue);
@@ -5876,10 +5903,11 @@ ATExecSetStatistics(Relation rel, const char *colName, Node *newValue, LOCKMODE
 
 	attrtuple->attstattarget = newtarget;
 
-	simple_heap_update(attrelation, &tuple->t_self, tuple);
+	simple_heap_update(attrelation, &tuple->t_self, tuple, &warm_update,
+			&modified_attrs);
 
 	/* keep system catalog indexes current */
-	CatalogUpdateIndexes(attrelation, tuple);
+	CatalogUpdateIndexes(attrelation, tuple, warm_update, modified_attrs);
 
 	InvokeObjectPostAlterHook(RelationRelationId,
 							  RelationGetRelid(rel),
@@ -5912,6 +5940,8 @@ ATExecSetOptions(Relation rel, const char *colName, Node *options,
 	Datum		repl_val[Natts_pg_attribute];
 	bool		repl_null[Natts_pg_attribute];
 	bool		repl_repl[Natts_pg_attribute];
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	attrelation = heap_open(AttributeRelationId, RowExclusiveLock);
 
@@ -5953,8 +5983,9 @@ ATExecSetOptions(Relation rel, const char *colName, Node *options,
 								 repl_val, repl_null, repl_repl);
 
 	/* Update system catalog. */
-	simple_heap_update(attrelation, &newtuple->t_self, newtuple);
-	CatalogUpdateIndexes(attrelation, newtuple);
+	simple_heap_update(attrelation, &newtuple->t_self, newtuple, &warm_update,
+			&modified_attrs);
+	CatalogUpdateIndexes(attrelation, newtuple, warm_update, modified_attrs);
 
 	InvokeObjectPostAlterHook(RelationRelationId,
 							  RelationGetRelid(rel),
@@ -5986,6 +6017,8 @@ ATExecSetStorage(Relation rel, const char *colName, Node *newValue, LOCKMODE loc
 	Form_pg_attribute attrtuple;
 	AttrNumber	attnum;
 	ObjectAddress address;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	Assert(IsA(newValue, String));
 	storagemode = strVal(newValue);
@@ -6037,10 +6070,11 @@ ATExecSetStorage(Relation rel, const char *colName, Node *newValue, LOCKMODE loc
 				 errmsg("column data type %s can only have storage PLAIN",
 						format_type_be(attrtuple->atttypid))));
 
-	simple_heap_update(attrelation, &tuple->t_self, tuple);
+	simple_heap_update(attrelation, &tuple->t_self, tuple, &warm_update,
+			&modified_attrs);
 
 	/* keep system catalog indexes current */
-	CatalogUpdateIndexes(attrelation, tuple);
+	CatalogUpdateIndexes(attrelation, tuple, warm_update, modified_attrs);
 
 	InvokeObjectPostAlterHook(RelationRelationId,
 							  RelationGetRelid(rel),
@@ -6275,13 +6309,18 @@ ATExecDropColumn(List **wqueue, Relation rel, const char *colName,
 				}
 				else
 				{
+					bool		warm_update;
+					Bitmapset	*modified_attrs;
+
 					/* Child column must survive my deletion */
 					childatt->attinhcount--;
 
-					simple_heap_update(attr_rel, &tuple->t_self, tuple);
+					simple_heap_update(attr_rel, &tuple->t_self, tuple,
+							&warm_update, &modified_attrs);
 
 					/* keep the system catalog indexes current */
-					CatalogUpdateIndexes(attr_rel, tuple);
+					CatalogUpdateIndexes(attr_rel, tuple, warm_update,
+							modified_attrs);
 
 					/* Make update visible */
 					CommandCounterIncrement();
@@ -6289,6 +6328,9 @@ ATExecDropColumn(List **wqueue, Relation rel, const char *colName,
 			}
 			else
 			{
+				bool		warm_update;
+				Bitmapset	*modified_attrs;
+
 				/*
 				 * If we were told to drop ONLY in this table (no recursion),
 				 * we need to mark the inheritors' attributes as locally
@@ -6297,10 +6339,12 @@ ATExecDropColumn(List **wqueue, Relation rel, const char *colName,
 				childatt->attinhcount--;
 				childatt->attislocal = true;
 
-				simple_heap_update(attr_rel, &tuple->t_self, tuple);
+				simple_heap_update(attr_rel, &tuple->t_self, tuple,
+						&warm_update, &modified_attrs);
 
 				/* keep the system catalog indexes current */
-				CatalogUpdateIndexes(attr_rel, tuple);
+				CatalogUpdateIndexes(attr_rel, tuple, warm_update,
+						modified_attrs);
 
 				/* Make update visible */
 				CommandCounterIncrement();
@@ -6333,6 +6377,8 @@ ATExecDropColumn(List **wqueue, Relation rel, const char *colName,
 		Relation	class_rel;
 		Form_pg_class tuple_class;
 		AlteredTableInfo *tab;
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 
 		class_rel = heap_open(RelationRelationId, RowExclusiveLock);
 
@@ -6344,10 +6390,11 @@ ATExecDropColumn(List **wqueue, Relation rel, const char *colName,
 		tuple_class = (Form_pg_class) GETSTRUCT(tuple);
 
 		tuple_class->relhasoids = false;
-		simple_heap_update(class_rel, &tuple->t_self, tuple);
+		simple_heap_update(class_rel, &tuple->t_self, tuple, &warm_update,
+				&modified_attrs);
 
 		/* Keep the catalog indexes up to date */
-		CatalogUpdateIndexes(class_rel, tuple);
+		CatalogUpdateIndexes(class_rel, tuple, warm_update, modified_attrs);
 
 		heap_close(class_rel, RowExclusiveLock);
 
@@ -7189,6 +7236,8 @@ ATExecAlterConstraint(Relation rel, AlterTableCmd *cmd,
 		SysScanDesc tgscan;
 		Relation	tgrel;
 		ListCell   *lc;
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 
 		/*
 		 * Now update the catalog, while we have the door open.
@@ -7197,8 +7246,9 @@ ATExecAlterConstraint(Relation rel, AlterTableCmd *cmd,
 		copy_con = (Form_pg_constraint) GETSTRUCT(copyTuple);
 		copy_con->condeferrable = cmdcon->deferrable;
 		copy_con->condeferred = cmdcon->initdeferred;
-		simple_heap_update(conrel, &copyTuple->t_self, copyTuple);
-		CatalogUpdateIndexes(conrel, copyTuple);
+		simple_heap_update(conrel, &copyTuple->t_self, copyTuple, &warm_update,
+				&modified_attrs);
+		CatalogUpdateIndexes(conrel, copyTuple, warm_update, modified_attrs);
 
 		InvokeObjectPostAlterHook(ConstraintRelationId,
 								  HeapTupleGetOid(contuple), 0);
@@ -7223,6 +7273,8 @@ ATExecAlterConstraint(Relation rel, AlterTableCmd *cmd,
 		{
 			Form_pg_trigger tgform = (Form_pg_trigger) GETSTRUCT(tgtuple);
 			Form_pg_trigger copy_tg;
+			bool		warm_update;
+			Bitmapset	*modified_attrs;
 
 			/*
 			 * Remember OIDs of other relation(s) involved in FK constraint.
@@ -7251,8 +7303,9 @@ ATExecAlterConstraint(Relation rel, AlterTableCmd *cmd,
 
 			copy_tg->tgdeferrable = cmdcon->deferrable;
 			copy_tg->tginitdeferred = cmdcon->initdeferred;
-			simple_heap_update(tgrel, &copyTuple->t_self, copyTuple);
-			CatalogUpdateIndexes(tgrel, copyTuple);
+			simple_heap_update(tgrel, &copyTuple->t_self, copyTuple,
+					&warm_update, &modified_attrs);
+			CatalogUpdateIndexes(tgrel, copyTuple, warm_update, modified_attrs);
 
 			InvokeObjectPostAlterHook(TriggerRelationId,
 									  HeapTupleGetOid(tgtuple), 0);
@@ -7351,6 +7404,8 @@ ATExecValidateConstraint(Relation rel, char *constrName, bool recurse,
 	{
 		HeapTuple	copyTuple;
 		Form_pg_constraint copy_con;
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 
 		if (con->contype == CONSTRAINT_FOREIGN)
 		{
@@ -7438,8 +7493,9 @@ ATExecValidateConstraint(Relation rel, char *constrName, bool recurse,
 		copyTuple = heap_copytuple(tuple);
 		copy_con = (Form_pg_constraint) GETSTRUCT(copyTuple);
 		copy_con->convalidated = true;
-		simple_heap_update(conrel, &copyTuple->t_self, copyTuple);
-		CatalogUpdateIndexes(conrel, copyTuple);
+		simple_heap_update(conrel, &copyTuple->t_self, copyTuple, &warm_update,
+				&modified_attrs);
+		CatalogUpdateIndexes(conrel, copyTuple, warm_update, modified_attrs);
 
 		InvokeObjectPostAlterHook(ConstraintRelationId,
 								  HeapTupleGetOid(tuple), 0);
@@ -8339,10 +8395,14 @@ ATExecDropConstraint(Relation rel, const char *constrName,
 			}
 			else
 			{
+				bool		warm_update;
+				Bitmapset	*modified_attrs;
+
 				/* Child constraint must survive my deletion */
 				con->coninhcount--;
-				simple_heap_update(conrel, &copy_tuple->t_self, copy_tuple);
-				CatalogUpdateIndexes(conrel, copy_tuple);
+				simple_heap_update(conrel, &copy_tuple->t_self, copy_tuple,
+						&warm_update, &modified_attrs);
+				CatalogUpdateIndexes(conrel, copy_tuple, warm_update, modified_attrs);
 
 				/* Make update visible */
 				CommandCounterIncrement();
@@ -8350,6 +8410,9 @@ ATExecDropConstraint(Relation rel, const char *constrName,
 		}
 		else
 		{
+			bool		warm_update;
+			Bitmapset	*modified_attrs;
+
 			/*
 			 * If we were told to drop ONLY in this table (no recursion), we
 			 * need to mark the inheritors' constraints as locally defined
@@ -8358,8 +8421,10 @@ ATExecDropConstraint(Relation rel, const char *constrName,
 			con->coninhcount--;
 			con->conislocal = true;
 
-			simple_heap_update(conrel, &copy_tuple->t_self, copy_tuple);
-			CatalogUpdateIndexes(conrel, copy_tuple);
+			simple_heap_update(conrel, &copy_tuple->t_self, copy_tuple,
+					&warm_update, &modified_attrs);
+			CatalogUpdateIndexes(conrel, copy_tuple, warm_update,
+					modified_attrs);
 
 			/* Make update visible */
 			CommandCounterIncrement();
@@ -8675,6 +8740,8 @@ ATExecAlterColumnType(AlteredTableInfo *tab, Relation rel,
 	SysScanDesc scan;
 	HeapTuple	depTup;
 	ObjectAddress address;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	attrelation = heap_open(AttributeRelationId, RowExclusiveLock);
 
@@ -9005,10 +9072,11 @@ ATExecAlterColumnType(AlteredTableInfo *tab, Relation rel,
 
 	ReleaseSysCache(typeTuple);
 
-	simple_heap_update(attrelation, &heapTup->t_self, heapTup);
+	simple_heap_update(attrelation, &heapTup->t_self, heapTup, &warm_update,
+			&modified_attrs);
 
 	/* keep system catalog indexes current */
-	CatalogUpdateIndexes(attrelation, heapTup);
+	CatalogUpdateIndexes(attrelation, heapTup, warm_update, modified_attrs);
 
 	heap_close(attrelation, RowExclusiveLock);
 
@@ -9079,6 +9147,8 @@ ATExecAlterColumnGenericOptions(Relation rel,
 	Form_pg_attribute atttableform;
 	AttrNumber	attnum;
 	ObjectAddress address;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	if (options == NIL)
 		return InvalidObjectAddress;
@@ -9146,8 +9216,9 @@ ATExecAlterColumnGenericOptions(Relation rel,
 	newtuple = heap_modify_tuple(tuple, RelationGetDescr(attrel),
 								 repl_val, repl_null, repl_repl);
 
-	simple_heap_update(attrel, &newtuple->t_self, newtuple);
-	CatalogUpdateIndexes(attrel, newtuple);
+	simple_heap_update(attrel, &newtuple->t_self, newtuple, &warm_update,
+			&modified_attrs);
+	CatalogUpdateIndexes(attrel, newtuple, warm_update, modified_attrs);
 
 	InvokeObjectPostAlterHook(RelationRelationId,
 							  RelationGetRelid(rel),
@@ -9617,6 +9688,8 @@ ATExecChangeOwner(Oid relationOid, Oid newOwnerId, bool recursing, LOCKMODE lock
 		Datum		aclDatum;
 		bool		isNull;
 		HeapTuple	newtuple;
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 
 		/* skip permission checks when recursing to index or toast table */
 		if (!recursing)
@@ -9667,8 +9740,9 @@ ATExecChangeOwner(Oid relationOid, Oid newOwnerId, bool recursing, LOCKMODE lock
 
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(class_rel), repl_val, repl_null, repl_repl);
 
-		simple_heap_update(class_rel, &newtuple->t_self, newtuple);
-		CatalogUpdateIndexes(class_rel, newtuple);
+		simple_heap_update(class_rel, &newtuple->t_self, newtuple,
+				&warm_update, &modified_attrs);
+		CatalogUpdateIndexes(class_rel, newtuple, warm_update, modified_attrs);
 
 		heap_freetuple(newtuple);
 
@@ -9770,6 +9844,8 @@ change_owner_fix_column_acls(Oid relationOid, Oid oldOwnerId, Oid newOwnerId)
 		Datum		aclDatum;
 		bool		isNull;
 		HeapTuple	newtuple;
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 
 		/* Ignore dropped columns */
 		if (att->attisdropped)
@@ -9795,8 +9871,10 @@ change_owner_fix_column_acls(Oid relationOid, Oid oldOwnerId, Oid newOwnerId)
 									 RelationGetDescr(attRelation),
 									 repl_val, repl_null, repl_repl);
 
-		simple_heap_update(attRelation, &newtuple->t_self, newtuple);
-		CatalogUpdateIndexes(attRelation, newtuple);
+		simple_heap_update(attRelation, &newtuple->t_self, newtuple,
+				&warm_update, &modified_attrs);
+		CatalogUpdateIndexes(attRelation, newtuple, warm_update,
+				modified_attrs);
 
 		heap_freetuple(newtuple);
 	}
@@ -9966,6 +10044,8 @@ ATExecSetRelOptions(Relation rel, List *defList, AlterTableType operation,
 	bool		repl_null[Natts_pg_class];
 	bool		repl_repl[Natts_pg_class];
 	static char *validnsps[] = HEAP_RELOPT_NAMESPACES;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	if (defList == NIL && operation != AT_ReplaceRelOptions)
 		return;					/* nothing to do */
@@ -10073,9 +10153,10 @@ ATExecSetRelOptions(Relation rel, List *defList, AlterTableType operation,
 	newtuple = heap_modify_tuple(tuple, RelationGetDescr(pgclass),
 								 repl_val, repl_null, repl_repl);
 
-	simple_heap_update(pgclass, &newtuple->t_self, newtuple);
+	simple_heap_update(pgclass, &newtuple->t_self, newtuple, &warm_update,
+			&modified_attrs);
 
-	CatalogUpdateIndexes(pgclass, newtuple);
+	CatalogUpdateIndexes(pgclass, newtuple, warm_update, modified_attrs);
 
 	InvokeObjectPostAlterHook(RelationRelationId, RelationGetRelid(rel), 0);
 
@@ -10088,6 +10169,8 @@ ATExecSetRelOptions(Relation rel, List *defList, AlterTableType operation,
 	{
 		Relation	toastrel;
 		Oid			toastid = rel->rd_rel->reltoastrelid;
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 
 		toastrel = heap_open(toastid, lockmode);
 
@@ -10132,9 +10215,9 @@ ATExecSetRelOptions(Relation rel, List *defList, AlterTableType operation,
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(pgclass),
 									 repl_val, repl_null, repl_repl);
 
-		simple_heap_update(pgclass, &newtuple->t_self, newtuple);
+		simple_heap_update(pgclass, &newtuple->t_self, newtuple, &warm_update, &modified_attrs);
 
-		CatalogUpdateIndexes(pgclass, newtuple);
+		CatalogUpdateIndexes(pgclass, newtuple, warm_update, modified_attrs);
 
 		InvokeObjectPostAlterHookArg(RelationRelationId,
 									 RelationGetRelid(toastrel), 0,
@@ -10169,6 +10252,8 @@ ATExecSetTableSpace(Oid tableOid, Oid newTableSpace, LOCKMODE lockmode)
 	ForkNumber	forkNum;
 	List	   *reltoastidxids = NIL;
 	ListCell   *lc;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/*
 	 * Need lock here in case we are recursing to toast table or index
@@ -10295,8 +10380,9 @@ ATExecSetTableSpace(Oid tableOid, Oid newTableSpace, LOCKMODE lockmode)
 	/* update the pg_class row */
 	rd_rel->reltablespace = (newTableSpace == MyDatabaseTableSpace) ? InvalidOid : newTableSpace;
 	rd_rel->relfilenode = newrelfilenode;
-	simple_heap_update(pg_class, &tuple->t_self, tuple);
-	CatalogUpdateIndexes(pg_class, tuple);
+	simple_heap_update(pg_class, &tuple->t_self, tuple, &warm_update,
+			&modified_attrs);
+	CatalogUpdateIndexes(pg_class, tuple, warm_update, modified_attrs);
 
 	InvokeObjectPostAlterHook(RelationRelationId, RelationGetRelid(rel), 0);
 
@@ -10901,6 +10987,9 @@ MergeAttributesIntoExisting(Relation child_rel, Relation parent_rel)
 										  attributeName);
 		if (HeapTupleIsValid(tuple))
 		{
+			bool		warm_update;
+			Bitmapset	*modified_attrs;
+
 			/* Check they are same type, typmod, and collation */
 			Form_pg_attribute childatt = (Form_pg_attribute) GETSTRUCT(tuple);
 
@@ -10946,8 +11035,9 @@ MergeAttributesIntoExisting(Relation child_rel, Relation parent_rel)
 				childatt->attislocal = false;
 			}
 
-			simple_heap_update(attrrel, &tuple->t_self, tuple);
-			CatalogUpdateIndexes(attrrel, tuple);
+			simple_heap_update(attrrel, &tuple->t_self, tuple, &warm_update,
+					&modified_attrs);
+			CatalogUpdateIndexes(attrrel, tuple, warm_update, modified_attrs);
 			heap_freetuple(tuple);
 		}
 		else
@@ -10976,6 +11066,8 @@ MergeAttributesIntoExisting(Relation child_rel, Relation parent_rel)
 		if (HeapTupleIsValid(tuple))
 		{
 			Form_pg_attribute childatt = (Form_pg_attribute) GETSTRUCT(tuple);
+			bool		warm_update;
+			Bitmapset	*modified_attrs;
 
 			/* See comments above; these changes should be the same */
 			childatt->attinhcount++;
@@ -10986,8 +11078,9 @@ MergeAttributesIntoExisting(Relation child_rel, Relation parent_rel)
 				childatt->attislocal = false;
 			}
 
-			simple_heap_update(attrrel, &tuple->t_self, tuple);
-			CatalogUpdateIndexes(attrrel, tuple);
+			simple_heap_update(attrrel, &tuple->t_self, tuple, &warm_update,
+					&modified_attrs);
+			CatalogUpdateIndexes(attrrel, tuple, warm_update, modified_attrs);
 			heap_freetuple(tuple);
 		}
 		else
@@ -11071,6 +11164,8 @@ MergeConstraintsIntoExisting(Relation child_rel, Relation parent_rel)
 		{
 			Form_pg_constraint child_con = (Form_pg_constraint) GETSTRUCT(child_tuple);
 			HeapTuple	child_copy;
+			bool		warm_update;
+			Bitmapset	*modified_attrs;
 
 			if (child_con->contype != CONSTRAINT_CHECK)
 				continue;
@@ -11124,8 +11219,10 @@ MergeConstraintsIntoExisting(Relation child_rel, Relation parent_rel)
 				child_con->conislocal = false;
 			}
 
-			simple_heap_update(catalog_relation, &child_copy->t_self, child_copy);
-			CatalogUpdateIndexes(catalog_relation, child_copy);
+			simple_heap_update(catalog_relation, &child_copy->t_self,
+					child_copy, &warm_update, &modified_attrs);
+			CatalogUpdateIndexes(catalog_relation, child_copy, warm_update,
+					modified_attrs);
 			heap_freetuple(child_copy);
 
 			found = true;
@@ -11290,13 +11387,17 @@ RemoveInheritance(Relation child_rel, Relation parent_rel)
 			/* Decrement inhcount and possibly set islocal to true */
 			HeapTuple	copyTuple = heap_copytuple(attributeTuple);
 			Form_pg_attribute copy_att = (Form_pg_attribute) GETSTRUCT(copyTuple);
+			bool		warm_update;
+			Bitmapset	*modified_attrs;
 
 			copy_att->attinhcount--;
 			if (copy_att->attinhcount == 0)
 				copy_att->attislocal = true;
 
-			simple_heap_update(catalogRelation, &copyTuple->t_self, copyTuple);
-			CatalogUpdateIndexes(catalogRelation, copyTuple);
+			simple_heap_update(catalogRelation, &copyTuple->t_self, copyTuple,
+					&warm_update, &modified_attrs);
+			CatalogUpdateIndexes(catalogRelation, copyTuple, warm_update,
+					modified_attrs);
 			heap_freetuple(copyTuple);
 		}
 	}
@@ -11361,6 +11462,8 @@ RemoveInheritance(Relation child_rel, Relation parent_rel)
 			/* Decrement inhcount and possibly set islocal to true */
 			HeapTuple	copyTuple = heap_copytuple(constraintTuple);
 			Form_pg_constraint copy_con = (Form_pg_constraint) GETSTRUCT(copyTuple);
+			bool		warm_update;
+			Bitmapset	*modified_attrs;
 
 			if (copy_con->coninhcount <= 0)		/* shouldn't happen */
 				elog(ERROR, "relation %u has non-inherited constraint \"%s\"",
@@ -11370,8 +11473,10 @@ RemoveInheritance(Relation child_rel, Relation parent_rel)
 			if (copy_con->coninhcount == 0)
 				copy_con->conislocal = true;
 
-			simple_heap_update(catalogRelation, &copyTuple->t_self, copyTuple);
-			CatalogUpdateIndexes(catalogRelation, copyTuple);
+			simple_heap_update(catalogRelation, &copyTuple->t_self, copyTuple,
+					&warm_update, &modified_attrs);
+			CatalogUpdateIndexes(catalogRelation, copyTuple, warm_update,
+					modified_attrs);
 			heap_freetuple(copyTuple);
 		}
 	}
@@ -11468,6 +11573,8 @@ ATExecAddOf(Relation rel, const TypeName *ofTypename, LOCKMODE lockmode)
 	ObjectAddress tableobj,
 				typeobj;
 	HeapTuple	classtuple;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/* Validate the type. */
 	typetuple = typenameType(NULL, ofTypename, NULL);
@@ -11571,8 +11678,10 @@ ATExecAddOf(Relation rel, const TypeName *ofTypename, LOCKMODE lockmode)
 	if (!HeapTupleIsValid(classtuple))
 		elog(ERROR, "cache lookup failed for relation %u", relid);
 	((Form_pg_class) GETSTRUCT(classtuple))->reloftype = typeid;
-	simple_heap_update(relationRelation, &classtuple->t_self, classtuple);
-	CatalogUpdateIndexes(relationRelation, classtuple);
+	simple_heap_update(relationRelation, &classtuple->t_self, classtuple,
+			&warm_update, &modified_attrs);
+	CatalogUpdateIndexes(relationRelation, classtuple, warm_update,
+			modified_attrs);
 
 	InvokeObjectPostAlterHook(RelationRelationId, relid, 0);
 
@@ -11596,6 +11705,8 @@ ATExecDropOf(Relation rel, LOCKMODE lockmode)
 	Oid			relid = RelationGetRelid(rel);
 	Relation	relationRelation;
 	HeapTuple	tuple;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	if (!OidIsValid(rel->rd_rel->reloftype))
 		ereport(ERROR,
@@ -11616,8 +11727,9 @@ ATExecDropOf(Relation rel, LOCKMODE lockmode)
 	if (!HeapTupleIsValid(tuple))
 		elog(ERROR, "cache lookup failed for relation %u", relid);
 	((Form_pg_class) GETSTRUCT(tuple))->reloftype = InvalidOid;
-	simple_heap_update(relationRelation, &tuple->t_self, tuple);
-	CatalogUpdateIndexes(relationRelation, tuple);
+	simple_heap_update(relationRelation, &tuple->t_self, tuple, &warm_update,
+			&modified_attrs);
+	CatalogUpdateIndexes(relationRelation, tuple, warm_update, modified_attrs);
 
 	InvokeObjectPostAlterHook(RelationRelationId, relid, 0);
 
@@ -11656,9 +11768,14 @@ relation_mark_replica_identity(Relation rel, char ri_type, Oid indexOid,
 	pg_class_form = (Form_pg_class) GETSTRUCT(pg_class_tuple);
 	if (pg_class_form->relreplident != ri_type)
 	{
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
+
 		pg_class_form->relreplident = ri_type;
-		simple_heap_update(pg_class, &pg_class_tuple->t_self, pg_class_tuple);
-		CatalogUpdateIndexes(pg_class, pg_class_tuple);
+		simple_heap_update(pg_class, &pg_class_tuple->t_self, pg_class_tuple,
+				&warm_update, &modified_attrs);
+		CatalogUpdateIndexes(pg_class, pg_class_tuple, warm_update,
+				modified_attrs);
 	}
 	heap_close(pg_class, RowExclusiveLock);
 	heap_freetuple(pg_class_tuple);
@@ -11717,8 +11834,13 @@ relation_mark_replica_identity(Relation rel, char ri_type, Oid indexOid,
 
 		if (dirty)
 		{
-			simple_heap_update(pg_index, &pg_index_tuple->t_self, pg_index_tuple);
-			CatalogUpdateIndexes(pg_index, pg_index_tuple);
+			bool		warm_update;
+			Bitmapset	*modified_attrs;
+
+			simple_heap_update(pg_index, &pg_index_tuple->t_self,
+					pg_index_tuple, &warm_update, &modified_attrs);
+			CatalogUpdateIndexes(pg_index, pg_index_tuple, warm_update,
+					modified_attrs);
 			InvokeObjectPostAlterHookArg(IndexRelationId, thisIndexOid, 0,
 										 InvalidOid, is_internal);
 		}
@@ -11856,6 +11978,8 @@ ATExecEnableRowSecurity(Relation rel)
 	Relation	pg_class;
 	Oid			relid;
 	HeapTuple	tuple;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	relid = RelationGetRelid(rel);
 
@@ -11867,10 +11991,11 @@ ATExecEnableRowSecurity(Relation rel)
 		elog(ERROR, "cache lookup failed for relation %u", relid);
 
 	((Form_pg_class) GETSTRUCT(tuple))->relrowsecurity = true;
-	simple_heap_update(pg_class, &tuple->t_self, tuple);
+	simple_heap_update(pg_class, &tuple->t_self, tuple, &warm_update,
+			&modified_attrs);
 
 	/* keep catalog indexes current */
-	CatalogUpdateIndexes(pg_class, tuple);
+	CatalogUpdateIndexes(pg_class, tuple, warm_update, modified_attrs);
 
 	heap_close(pg_class, RowExclusiveLock);
 	heap_freetuple(tuple);
@@ -11882,6 +12007,8 @@ ATExecDisableRowSecurity(Relation rel)
 	Relation	pg_class;
 	Oid			relid;
 	HeapTuple	tuple;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	relid = RelationGetRelid(rel);
 
@@ -11894,10 +12021,11 @@ ATExecDisableRowSecurity(Relation rel)
 		elog(ERROR, "cache lookup failed for relation %u", relid);
 
 	((Form_pg_class) GETSTRUCT(tuple))->relrowsecurity = false;
-	simple_heap_update(pg_class, &tuple->t_self, tuple);
+	simple_heap_update(pg_class, &tuple->t_self, tuple, &warm_update,
+			&modified_attrs);
 
 	/* keep catalog indexes current */
-	CatalogUpdateIndexes(pg_class, tuple);
+	CatalogUpdateIndexes(pg_class, tuple, warm_update, modified_attrs);
 
 	heap_close(pg_class, RowExclusiveLock);
 	heap_freetuple(tuple);
@@ -11912,6 +12040,8 @@ ATExecForceNoForceRowSecurity(Relation rel, bool force_rls)
 	Relation	pg_class;
 	Oid			relid;
 	HeapTuple	tuple;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	relid = RelationGetRelid(rel);
 
@@ -11923,10 +12053,11 @@ ATExecForceNoForceRowSecurity(Relation rel, bool force_rls)
 		elog(ERROR, "cache lookup failed for relation %u", relid);
 
 	((Form_pg_class) GETSTRUCT(tuple))->relforcerowsecurity = force_rls;
-	simple_heap_update(pg_class, &tuple->t_self, tuple);
+	simple_heap_update(pg_class, &tuple->t_self, tuple, &warm_update,
+			&modified_attrs);
 
 	/* keep catalog indexes current */
-	CatalogUpdateIndexes(pg_class, tuple);
+	CatalogUpdateIndexes(pg_class, tuple, warm_update, modified_attrs);
 
 	heap_close(pg_class, RowExclusiveLock);
 	heap_freetuple(tuple);
@@ -11948,6 +12079,8 @@ ATExecGenericOptions(Relation rel, List *options)
 	bool		repl_repl[Natts_pg_foreign_table];
 	Datum		datum;
 	Form_pg_foreign_table tableform;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	if (options == NIL)
 		return;
@@ -11994,8 +12127,8 @@ ATExecGenericOptions(Relation rel, List *options)
 	tuple = heap_modify_tuple(tuple, RelationGetDescr(ftrel),
 							  repl_val, repl_null, repl_repl);
 
-	simple_heap_update(ftrel, &tuple->t_self, tuple);
-	CatalogUpdateIndexes(ftrel, tuple);
+	simple_heap_update(ftrel, &tuple->t_self, tuple, &warm_update, &modified_attrs);
+	CatalogUpdateIndexes(ftrel, tuple, warm_update, modified_attrs);
 
 	/*
 	 * Invalidate relcache so that all sessions will refresh any cached plans
@@ -12278,6 +12411,9 @@ AlterRelationNamespaceInternal(Relation classRel, Oid relOid,
 	already_done = object_address_present(&thisobj, objsMoved);
 	if (!already_done && oldNspOid != newNspOid)
 	{
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
+
 		/* check for duplicate name (more friendly than unique-index failure) */
 		if (get_relname_relid(NameStr(classForm->relname),
 							  newNspOid) != InvalidOid)
@@ -12290,8 +12426,9 @@ AlterRelationNamespaceInternal(Relation classRel, Oid relOid,
 		/* classTup is a copy, so OK to scribble on */
 		classForm->relnamespace = newNspOid;
 
-		simple_heap_update(classRel, &classTup->t_self, classTup);
-		CatalogUpdateIndexes(classRel, classTup);
+		simple_heap_update(classRel, &classTup->t_self, classTup, &warm_update,
+				&modified_attrs);
+		CatalogUpdateIndexes(classRel, classTup, warm_update, modified_attrs);
 
 		/* Update dependency on schema if caller said so */
 		if (hasDependEntry &&
@@ -13499,6 +13636,8 @@ ATExecDetachPartition(Relation rel, RangeVar *name)
 				new_null[Natts_pg_class],
 				new_repl[Natts_pg_class];
 	ObjectAddress address;
+	bool			warm_update;
+	Bitmapset		*modified_attrs;
 
 	partRel = heap_openrv(name, AccessShareLock);
 
@@ -13526,8 +13665,9 @@ ATExecDetachPartition(Relation rel, RangeVar *name)
 								 new_val, new_null, new_repl);
 
 	((Form_pg_class) GETSTRUCT(newtuple))->relispartition = false;
-	simple_heap_update(classRel, &newtuple->t_self, newtuple);
-	CatalogUpdateIndexes(classRel, newtuple);
+	simple_heap_update(classRel, &newtuple->t_self, newtuple, &warm_update,
+			&modified_attrs);
+	CatalogUpdateIndexes(classRel, newtuple, warm_update, modified_attrs);
 	heap_freetuple(newtuple);
 	heap_close(classRel, RowExclusiveLock);
 
diff --git a/src/backend/commands/tablespace.c b/src/backend/commands/tablespace.c
index 651e1b3..bc18cfb 100644
--- a/src/backend/commands/tablespace.c
+++ b/src/backend/commands/tablespace.c
@@ -346,7 +346,7 @@ CreateTableSpace(CreateTableSpaceStmt *stmt)
 
 	tablespaceoid = simple_heap_insert(rel, tuple);
 
-	CatalogUpdateIndexes(rel, tuple);
+	CatalogUpdateIndexes(rel, tuple, false, NULL);
 
 	heap_freetuple(tuple);
 
@@ -920,6 +920,8 @@ RenameTableSpace(const char *oldname, const char *newname)
 	HeapTuple	newtuple;
 	Form_pg_tablespace newform;
 	ObjectAddress address;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/* Search pg_tablespace */
 	rel = heap_open(TableSpaceRelationId, RowExclusiveLock);
@@ -971,8 +973,9 @@ RenameTableSpace(const char *oldname, const char *newname)
 	/* OK, update the entry */
 	namestrcpy(&(newform->spcname), newname);
 
-	simple_heap_update(rel, &newtuple->t_self, newtuple);
-	CatalogUpdateIndexes(rel, newtuple);
+	simple_heap_update(rel, &newtuple->t_self, newtuple, &warm_update,
+			&modified_attrs);
+	CatalogUpdateIndexes(rel, newtuple, warm_update, modified_attrs);
 
 	InvokeObjectPostAlterHook(TableSpaceRelationId, tspId, 0);
 
@@ -1001,6 +1004,8 @@ AlterTableSpaceOptions(AlterTableSpaceOptionsStmt *stmt)
 	bool		repl_null[Natts_pg_tablespace];
 	bool		repl_repl[Natts_pg_tablespace];
 	HeapTuple	newtuple;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/* Search pg_tablespace */
 	rel = heap_open(TableSpaceRelationId, RowExclusiveLock);
@@ -1044,8 +1049,8 @@ AlterTableSpaceOptions(AlterTableSpaceOptionsStmt *stmt)
 								 repl_null, repl_repl);
 
 	/* Update system catalog. */
-	simple_heap_update(rel, &newtuple->t_self, newtuple);
-	CatalogUpdateIndexes(rel, newtuple);
+	simple_heap_update(rel, &newtuple->t_self, newtuple, &warm_update, &modified_attrs);
+	CatalogUpdateIndexes(rel, newtuple, warm_update, modified_attrs);
 
 	InvokeObjectPostAlterHook(TableSpaceRelationId, HeapTupleGetOid(tup), 0);
 
diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c
index 3fc3a21..1ed9e4b 100644
--- a/src/backend/commands/trigger.c
+++ b/src/backend/commands/trigger.c
@@ -166,6 +166,8 @@ CreateTrigger(CreateTrigStmt *stmt, const char *queryString,
 				referenced;
 	char	   *oldtablename = NULL;
 	char	   *newtablename = NULL;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	if (OidIsValid(relOid))
 		rel = heap_open(relOid, ShareRowExclusiveLock);
@@ -777,7 +779,7 @@ CreateTrigger(CreateTrigStmt *stmt, const char *queryString,
 	 */
 	simple_heap_insert(tgrel, tuple);
 
-	CatalogUpdateIndexes(tgrel, tuple);
+	CatalogUpdateIndexes(tgrel, tuple, false, NULL);
 
 	heap_freetuple(tuple);
 	heap_close(tgrel, RowExclusiveLock);
@@ -804,9 +806,10 @@ CreateTrigger(CreateTrigStmt *stmt, const char *queryString,
 
 	((Form_pg_class) GETSTRUCT(tuple))->relhastriggers = true;
 
-	simple_heap_update(pgrel, &tuple->t_self, tuple);
+	simple_heap_update(pgrel, &tuple->t_self, tuple, &warm_update,
+			&modified_attrs);
 
-	CatalogUpdateIndexes(pgrel, tuple);
+	CatalogUpdateIndexes(pgrel, tuple, warm_update, modified_attrs);
 
 	heap_freetuple(tuple);
 	heap_close(pgrel, RowExclusiveLock);
@@ -1436,6 +1439,9 @@ renametrig(RenameStmt *stmt)
 								NULL, 2, key);
 	if (HeapTupleIsValid(tuple = systable_getnext(tgscan)))
 	{
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
+
 		tgoid = HeapTupleGetOid(tuple);
 
 		/*
@@ -1446,10 +1452,11 @@ renametrig(RenameStmt *stmt)
 		namestrcpy(&((Form_pg_trigger) GETSTRUCT(tuple))->tgname,
 				   stmt->newname);
 
-		simple_heap_update(tgrel, &tuple->t_self, tuple);
+		simple_heap_update(tgrel, &tuple->t_self, tuple, &warm_update,
+				&modified_attrs);
 
 		/* keep system catalog indexes current */
-		CatalogUpdateIndexes(tgrel, tuple);
+		CatalogUpdateIndexes(tgrel, tuple, warm_update, modified_attrs);
 
 		InvokeObjectPostAlterHook(TriggerRelationId,
 								  HeapTupleGetOid(tuple), 0);
@@ -1559,13 +1566,16 @@ EnableDisableTrigger(Relation rel, const char *tgname,
 			/* need to change this one ... make a copy to scribble on */
 			HeapTuple	newtup = heap_copytuple(tuple);
 			Form_pg_trigger newtrig = (Form_pg_trigger) GETSTRUCT(newtup);
+			bool		warm_update;
+			Bitmapset	*modified_attrs;
 
 			newtrig->tgenabled = fires_when;
 
-			simple_heap_update(tgrel, &newtup->t_self, newtup);
+			simple_heap_update(tgrel, &newtup->t_self, newtup, &warm_update,
+					&modified_attrs);
 
 			/* Keep catalog indexes current */
-			CatalogUpdateIndexes(tgrel, newtup);
+			CatalogUpdateIndexes(tgrel, newtup, warm_update, modified_attrs);
 
 			heap_freetuple(newtup);
 
diff --git a/src/backend/commands/tsearchcmds.c b/src/backend/commands/tsearchcmds.c
index 479a160..d8355f6 100644
--- a/src/backend/commands/tsearchcmds.c
+++ b/src/backend/commands/tsearchcmds.c
@@ -273,7 +273,7 @@ DefineTSParser(List *names, List *parameters)
 
 	prsOid = simple_heap_insert(prsRel, tup);
 
-	CatalogUpdateIndexes(prsRel, tup);
+	CatalogUpdateIndexes(prsRel, tup, false, NULL);
 
 	address = makeParserDependencies(tup);
 
@@ -484,7 +484,7 @@ DefineTSDictionary(List *names, List *parameters)
 
 	dictOid = simple_heap_insert(dictRel, tup);
 
-	CatalogUpdateIndexes(dictRel, tup);
+	CatalogUpdateIndexes(dictRel, tup, false, NULL);
 
 	address = makeDictionaryDependencies(tup);
 
@@ -540,6 +540,8 @@ AlterTSDictionary(AlterTSDictionaryStmt *stmt)
 	bool		repl_null[Natts_pg_ts_dict];
 	bool		repl_repl[Natts_pg_ts_dict];
 	ObjectAddress address;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	dictId = get_ts_dict_oid(stmt->dictname, false);
 
@@ -620,9 +622,10 @@ AlterTSDictionary(AlterTSDictionaryStmt *stmt)
 	newtup = heap_modify_tuple(tup, RelationGetDescr(rel),
 							   repl_val, repl_null, repl_repl);
 
-	simple_heap_update(rel, &newtup->t_self, newtup);
+	simple_heap_update(rel, &newtup->t_self, newtup, &warm_update,
+			&modified_attrs);
 
-	CatalogUpdateIndexes(rel, newtup);
+	CatalogUpdateIndexes(rel, newtup, warm_update, modified_attrs);
 
 	InvokeObjectPostAlterHook(TSDictionaryRelationId, dictId, 0);
 
@@ -808,7 +811,7 @@ DefineTSTemplate(List *names, List *parameters)
 
 	tmplOid = simple_heap_insert(tmplRel, tup);
 
-	CatalogUpdateIndexes(tmplRel, tup);
+	CatalogUpdateIndexes(tmplRel, tup, false, NULL);
 
 	address = makeTSTemplateDependencies(tup);
 
@@ -1068,7 +1071,7 @@ DefineTSConfiguration(List *names, List *parameters, ObjectAddress *copied)
 
 	cfgOid = simple_heap_insert(cfgRel, tup);
 
-	CatalogUpdateIndexes(cfgRel, tup);
+	CatalogUpdateIndexes(cfgRel, tup, false, NULL);
 
 	if (OidIsValid(sourceOid))
 	{
@@ -1108,7 +1111,7 @@ DefineTSConfiguration(List *names, List *parameters, ObjectAddress *copied)
 
 			simple_heap_insert(mapRel, newmaptup);
 
-			CatalogUpdateIndexes(mapRel, newmaptup);
+			CatalogUpdateIndexes(mapRel, newmaptup, false, NULL);
 
 			heap_freetuple(newmaptup);
 		}
@@ -1398,6 +1401,8 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 				bool		repl_null[Natts_pg_ts_config_map];
 				bool		repl_repl[Natts_pg_ts_config_map];
 				HeapTuple	newtup;
+				bool		warm_update;
+				Bitmapset	*modified_attrs;
 
 				memset(repl_val, 0, sizeof(repl_val));
 				memset(repl_null, false, sizeof(repl_null));
@@ -1409,9 +1414,10 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 				newtup = heap_modify_tuple(maptup,
 										   RelationGetDescr(relMap),
 										   repl_val, repl_null, repl_repl);
-				simple_heap_update(relMap, &newtup->t_self, newtup);
+				simple_heap_update(relMap, &newtup->t_self, newtup,
+						&warm_update, &modified_attrs);
 
-				CatalogUpdateIndexes(relMap, newtup);
+				CatalogUpdateIndexes(relMap, newtup, warm_update, modified_attrs);
 			}
 		}
 
@@ -1437,7 +1443,7 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 
 				tup = heap_form_tuple(relMap->rd_att, values, nulls);
 				simple_heap_insert(relMap, tup);
-				CatalogUpdateIndexes(relMap, tup);
+				CatalogUpdateIndexes(relMap, tup, false, NULL);
 
 				heap_freetuple(tup);
 			}
diff --git a/src/backend/commands/typecmds.c b/src/backend/commands/typecmds.c
index 4c33d55..ccbe96f 100644
--- a/src/backend/commands/typecmds.c
+++ b/src/backend/commands/typecmds.c
@@ -2138,6 +2138,8 @@ AlterDomainDefault(List *names, Node *defaultRaw)
 	HeapTuple	newtuple;
 	Form_pg_type typTup;
 	ObjectAddress address;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/* Make a TypeName so we can use standard type lookup machinery */
 	typename = makeTypeNameFromNameList(names);
@@ -2221,9 +2223,9 @@ AlterDomainDefault(List *names, Node *defaultRaw)
 								 new_record, new_record_nulls,
 								 new_record_repl);
 
-	simple_heap_update(rel, &tup->t_self, newtuple);
+	simple_heap_update(rel, &tup->t_self, newtuple, &warm_update, &modified_attrs);
 
-	CatalogUpdateIndexes(rel, newtuple);
+	CatalogUpdateIndexes(rel, newtuple, warm_update, modified_attrs);
 
 	/* Rebuild dependencies */
 	GenerateTypeDependencies(typTup->typnamespace,
@@ -2272,6 +2274,8 @@ AlterDomainNotNull(List *names, bool notNull)
 	HeapTuple	tup;
 	Form_pg_type typTup;
 	ObjectAddress address = InvalidObjectAddress;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/* Make a TypeName so we can use standard type lookup machinery */
 	typename = makeTypeNameFromNameList(names);
@@ -2360,9 +2364,9 @@ AlterDomainNotNull(List *names, bool notNull)
 	 */
 	typTup->typnotnull = notNull;
 
-	simple_heap_update(typrel, &tup->t_self, tup);
+	simple_heap_update(typrel, &tup->t_self, tup, &warm_update, &modified_attrs);
 
-	CatalogUpdateIndexes(typrel, tup);
+	CatalogUpdateIndexes(typrel, tup, warm_update, modified_attrs);
 
 	InvokeObjectPostAlterHook(TypeRelationId, domainoid, 0);
 
@@ -2598,6 +2602,8 @@ AlterDomainValidateConstraint(List *names, char *constrName)
 	HeapTuple	copyTuple;
 	ScanKeyData key;
 	ObjectAddress address;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/* Make a TypeName so we can use standard type lookup machinery */
 	typename = makeTypeNameFromNameList(names);
@@ -2662,8 +2668,8 @@ AlterDomainValidateConstraint(List *names, char *constrName)
 	copyTuple = heap_copytuple(tuple);
 	copy_con = (Form_pg_constraint) GETSTRUCT(copyTuple);
 	copy_con->convalidated = true;
-	simple_heap_update(conrel, &copyTuple->t_self, copyTuple);
-	CatalogUpdateIndexes(conrel, copyTuple);
+	simple_heap_update(conrel, &copyTuple->t_self, copyTuple, &warm_update, &modified_attrs);
+	CatalogUpdateIndexes(conrel, copyTuple, warm_update, modified_attrs);
 
 	InvokeObjectPostAlterHook(ConstraintRelationId,
 							  HeapTupleGetOid(copyTuple), 0);
@@ -3374,6 +3380,8 @@ AlterTypeOwnerInternal(Oid typeOid, Oid newOwnerId)
 	Acl		   *newAcl;
 	Datum		aclDatum;
 	bool		isNull;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	rel = heap_open(TypeRelationId, RowExclusiveLock);
 
@@ -3404,9 +3412,9 @@ AlterTypeOwnerInternal(Oid typeOid, Oid newOwnerId)
 	tup = heap_modify_tuple(tup, RelationGetDescr(rel), repl_val, repl_null,
 							repl_repl);
 
-	simple_heap_update(rel, &tup->t_self, tup);
+	simple_heap_update(rel, &tup->t_self, tup, &warm_update, &modified_attrs);
 
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateIndexes(rel, tup, warm_update, modified_attrs);
 
 	/* If it has an array type, update that too */
 	if (OidIsValid(typTup->typarray))
@@ -3561,13 +3569,16 @@ AlterTypeNamespaceInternal(Oid typeOid, Oid nspOid,
 
 	if (oldNspOid != nspOid)
 	{
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
+
 		/* OK, modify the pg_type row */
 
 		/* tup is a copy, so we can scribble directly on it */
 		typform->typnamespace = nspOid;
 
-		simple_heap_update(rel, &tup->t_self, tup);
-		CatalogUpdateIndexes(rel, tup);
+		simple_heap_update(rel, &tup->t_self, tup, &warm_update, &modified_attrs);
+		CatalogUpdateIndexes(rel, tup, warm_update, modified_attrs);
 	}
 
 	/*
diff --git a/src/backend/commands/user.c b/src/backend/commands/user.c
index e6fdac3..5d67041 100644
--- a/src/backend/commands/user.c
+++ b/src/backend/commands/user.c
@@ -434,7 +434,7 @@ CreateRole(ParseState *pstate, CreateRoleStmt *stmt)
 	 * Insert new record in the pg_authid table
 	 */
 	roleid = simple_heap_insert(pg_authid_rel, tuple);
-	CatalogUpdateIndexes(pg_authid_rel, tuple);
+	CatalogUpdateIndexes(pg_authid_rel, tuple, false, NULL);
 
 	/*
 	 * Advance command counter so we can see new record; else tests in
@@ -531,6 +531,8 @@ AlterRole(AlterRoleStmt *stmt)
 	DefElem    *dvalidUntil = NULL;
 	DefElem    *dbypassRLS = NULL;
 	Oid			roleid;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	check_rolespec_name(stmt->role,
 						"Cannot alter reserved roles.");
@@ -838,10 +840,11 @@ AlterRole(AlterRoleStmt *stmt)
 
 	new_tuple = heap_modify_tuple(tuple, pg_authid_dsc, new_record,
 								  new_record_nulls, new_record_repl);
-	simple_heap_update(pg_authid_rel, &tuple->t_self, new_tuple);
+	simple_heap_update(pg_authid_rel, &tuple->t_self, new_tuple, &warm_update,
+			&modified_attrs);
 
 	/* Update indexes */
-	CatalogUpdateIndexes(pg_authid_rel, new_tuple);
+	CatalogUpdateIndexes(pg_authid_rel, new_tuple, warm_update, modified_attrs);
 
 	InvokeObjectPostAlterHook(AuthIdRelationId, roleid, 0);
 
@@ -1149,6 +1152,8 @@ RenameRole(const char *oldname, const char *newname)
 	Oid			roleid;
 	ObjectAddress address;
 	Form_pg_authid authform;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	rel = heap_open(AuthIdRelationId, RowExclusiveLock);
 	dsc = RelationGetDescr(rel);
@@ -1243,9 +1248,9 @@ RenameRole(const char *oldname, const char *newname)
 	}
 
 	newtuple = heap_modify_tuple(oldtuple, dsc, repl_val, repl_null, repl_repl);
-	simple_heap_update(rel, &oldtuple->t_self, newtuple);
+	simple_heap_update(rel, &oldtuple->t_self, newtuple, &warm_update, &modified_attrs);
 
-	CatalogUpdateIndexes(rel, newtuple);
+	CatalogUpdateIndexes(rel, newtuple, warm_update, modified_attrs);
 
 	InvokeObjectPostAlterHook(AuthIdRelationId, roleid, 0);
 
@@ -1527,13 +1532,16 @@ AddRoleMems(const char *rolename, Oid roleid,
 
 		if (HeapTupleIsValid(authmem_tuple))
 		{
+			bool		warm_update;
+			Bitmapset	*modified_attrs;
+
 			new_record_repl[Anum_pg_auth_members_grantor - 1] = true;
 			new_record_repl[Anum_pg_auth_members_admin_option - 1] = true;
 			tuple = heap_modify_tuple(authmem_tuple, pg_authmem_dsc,
 									  new_record,
 									  new_record_nulls, new_record_repl);
-			simple_heap_update(pg_authmem_rel, &tuple->t_self, tuple);
-			CatalogUpdateIndexes(pg_authmem_rel, tuple);
+			simple_heap_update(pg_authmem_rel, &tuple->t_self, tuple, &warm_update, &modified_attrs);
+			CatalogUpdateIndexes(pg_authmem_rel, tuple, warm_update, modified_attrs);
 			ReleaseSysCache(authmem_tuple);
 		}
 		else
@@ -1541,7 +1549,7 @@ AddRoleMems(const char *rolename, Oid roleid,
 			tuple = heap_form_tuple(pg_authmem_dsc,
 									new_record, new_record_nulls);
 			simple_heap_insert(pg_authmem_rel, tuple);
-			CatalogUpdateIndexes(pg_authmem_rel, tuple);
+			CatalogUpdateIndexes(pg_authmem_rel, tuple, false, NULL);
 		}
 
 		/* CCI after each change, in case there are duplicates in list */
@@ -1637,6 +1645,8 @@ DelRoleMems(const char *rolename, Oid roleid,
 			Datum		new_record[Natts_pg_auth_members];
 			bool		new_record_nulls[Natts_pg_auth_members];
 			bool		new_record_repl[Natts_pg_auth_members];
+			bool		warm_update;
+			Bitmapset	*modified_attrs;
 
 			/* Build a tuple to update with */
 			MemSet(new_record, 0, sizeof(new_record));
@@ -1649,8 +1659,10 @@ DelRoleMems(const char *rolename, Oid roleid,
 			tuple = heap_modify_tuple(authmem_tuple, pg_authmem_dsc,
 									  new_record,
 									  new_record_nulls, new_record_repl);
-			simple_heap_update(pg_authmem_rel, &tuple->t_self, tuple);
-			CatalogUpdateIndexes(pg_authmem_rel, tuple);
+			simple_heap_update(pg_authmem_rel, &tuple->t_self, tuple,
+					&warm_update, &modified_attrs);
+			CatalogUpdateIndexes(pg_authmem_rel, tuple, warm_update,
+					modified_attrs);
 		}
 
 		ReleaseSysCache(authmem_tuple);
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 005440e..9e3d0ee 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -2158,6 +2158,22 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
 						break;
 					}
 
+					/*
+					 * If this tuple was ever WARM updated or is a WARM tuple,
+					 * there could be multiple index entries pointing to the
+					 * root of this chain. We can't do index-only scans for
+					 * such tuples without rechecking the index keys, so mark
+					 * the page as !all_visible.
+					 *
+					 * XXX Should we look at the root line pointer and check
+					 * whether the WARM flag is set there, or is checking the
+					 * tuples in the chain good enough?
+					 */
+					if (HeapTupleHeaderIsHeapWarmTuple(tuple.t_data))
+					{
+						all_visible = false;
+					}
+
 					/* Track newest xmin on page. */
 					if (TransactionIdFollows(xmin, *visibility_cutoff_xid))
 						*visibility_cutoff_xid = xmin;
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 9920f48..94cf92f 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -270,6 +270,8 @@ ExecCloseIndices(ResultRelInfo *resultRelInfo)
 List *
 ExecInsertIndexTuples(TupleTableSlot *slot,
 					  ItemPointer tupleid,
+					  ItemPointer root_tid,
+					  Bitmapset *modified_attrs,
 					  EState *estate,
 					  bool noDupErr,
 					  bool *specConflict,
@@ -324,6 +326,17 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 		if (!indexInfo->ii_ReadyForInserts)
 			continue;
 
+		/*
+		 * If modified_attrs is set, we only insert index entries for those
+		 * indexes whose indexed columns have changed. All other indexes can
+		 * use their existing index pointers to look up the new tuple.
+		 */
+		if (modified_attrs)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
 		/* Check for partial index */
 		if (indexInfo->ii_Predicate != NIL)
 		{
@@ -389,7 +402,7 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 			index_insert(indexRelation, /* index relation */
 						 values,	/* array of index Datums */
 						 isnull,	/* null flags */
-						 tupleid,		/* tid of heap tuple */
+						 root_tid,		/* tid of heap or root tuple */
 						 heapRelation,	/* heap relation */
 						 checkUnique);	/* type of uniqueness check to do */
 
@@ -790,6 +803,9 @@ retry:
 		{
 			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
 				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
+			else
+				ItemPointerCopy(&tup->t_self, &ctid_wait);
+
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index a18ae51..fb81633 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -399,6 +399,7 @@ ExecSimpleRelationInsert(EState *estate, TupleTableSlot *slot)
 
 		if (resultRelInfo->ri_NumIndices > 0)
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &(tuple->t_self), NULL,
 												   estate, false, NULL,
 												   NIL);
 
@@ -445,6 +446,8 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 	if (!skip_tuple)
 	{
 		List	   *recheckIndexes = NIL;
+		bool		warm_update;
+		Bitmapset  *modified_attrs;
 
 		/* Check the constraints of the tuple */
 		if (rel->rd_att->constr)
@@ -455,13 +458,31 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 
 		/* OK, update the tuple and index entries for it */
 		simple_heap_update(rel, &searchslot->tts_tuple->t_self,
-						   slot->tts_tuple);
+						   slot->tts_tuple,
+						   &warm_update,
+						   &modified_attrs);
 
 		if (resultRelInfo->ri_NumIndices > 0 &&
-			!HeapTupleIsHeapOnly(slot->tts_tuple))
+			(!HeapTupleIsHeapOnly(slot->tts_tuple) || warm_update))
+		{
+			ItemPointerData root_tid;
+
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self, &root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
+
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid, modified_attrs,
 												   estate, false, NULL,
 												   NIL);
+		}
 
 		/* AFTER ROW UPDATE Triggers */
 		ExecARUpdateTriggers(estate, resultRelInfo,
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index f18827d..f81d290 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -37,6 +37,7 @@
 
 #include "access/relscan.h"
 #include "access/transam.h"
+#include "access/valid.h"
 #include "executor/execdebug.h"
 #include "executor/nodeBitmapHeapscan.h"
 #include "pgstat.h"
@@ -362,11 +363,27 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 			OffsetNumber offnum = tbmres->offsets[curslot];
 			ItemPointerData tid;
 			HeapTupleData heapTuple;
+			bool recheck = false;
 
 			ItemPointerSet(&tid, page, offnum);
 			if (heap_hot_search_buffer(&tid, scan->rs_rd, buffer, snapshot,
-									   &heapTuple, NULL, true))
-				scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+									   &heapTuple, NULL, true, &recheck))
+			{
+				bool valid = true;
+
+				if (scan->rs_key)
+					HeapKeyTest(&heapTuple, RelationGetDescr(scan->rs_rd),
+							scan->rs_nkeys, scan->rs_key, valid);
+				if (valid)
+					scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+
+				/*
+				 * If the heap tuple needs a recheck because of a WARM
+				 * update, treat it as a lossy case.
+				 */
+				if (recheck)
+					tbmres->recheck = true;
+			}
 		}
 	}
 	else
diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c
index 5734550..c7be366 100644
--- a/src/backend/executor/nodeIndexscan.c
+++ b/src/backend/executor/nodeIndexscan.c
@@ -115,10 +115,10 @@ IndexNext(IndexScanState *node)
 					   false);	/* don't pfree */
 
 		/*
-		 * If the index was lossy, we have to recheck the index quals using
-		 * the fetched tuple.
+		 * If the index was lossy or the tuple was WARM, we have to recheck
+		 * the index quals using the fetched tuple.
 		 */
-		if (scandesc->xs_recheck)
+		if (scandesc->xs_recheck || scandesc->xs_tuple_recheck)
 		{
 			econtext->ecxt_scantuple = slot;
 			ResetExprContext(econtext);
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index 2ac7407..142eb57 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -512,6 +512,7 @@ ExecInsert(ModifyTableState *mtstate,
 
 			/* insert index entries for tuple */
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												 &(tuple->t_self), NULL,
 												 estate, true, &specConflict,
 												   arbiterIndexes);
 
@@ -558,6 +559,7 @@ ExecInsert(ModifyTableState *mtstate,
 			/* insert index entries for tuple */
 			if (resultRelInfo->ri_NumIndices > 0)
 				recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+													   &(tuple->t_self), NULL,
 													   estate, false, NULL,
 													   arbiterIndexes);
 		}
@@ -891,6 +893,9 @@ ExecUpdate(ItemPointer tupleid,
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
 	List	   *recheckIndexes = NIL;
+	Bitmapset  *modified_attrs = NULL;
+	ItemPointerData	root_tid;
+	bool		warm_update;
 
 	/*
 	 * abort the operation if not running transactions
@@ -1007,7 +1012,7 @@ lreplace:;
 							 estate->es_output_cid,
 							 estate->es_crosscheck_snapshot,
 							 true /* wait for commit */ ,
-							 &hufd, &lockmode);
+							 &hufd, &lockmode, &modified_attrs, &warm_update);
 		switch (result)
 		{
 			case HeapTupleSelfUpdated:
@@ -1094,10 +1099,28 @@ lreplace:;
 		 * the t_self field.
 		 *
 		 * If it's a HOT update, we mustn't insert new index entries.
+		 *
+		 * If it's a WARM update, then we must insert new entries with TID
+		 * pointing to the root of the WARM chain.
 		 */
-		if (resultRelInfo->ri_NumIndices > 0 && !HeapTupleIsHeapOnly(tuple))
+		if (resultRelInfo->ri_NumIndices > 0 &&
+			(!HeapTupleIsHeapOnly(tuple) || warm_update))
+		{
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self, &root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL, NIL);
+		}
 	}
 
 	if (canSetTag)
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index 7176cf1..432dd4b 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -1823,7 +1823,7 @@ pgstat_count_heap_insert(Relation rel, int n)
  * pgstat_count_heap_update - count a tuple update
  */
 void
-pgstat_count_heap_update(Relation rel, bool hot)
+pgstat_count_heap_update(Relation rel, bool hot, bool warm)
 {
 	PgStat_TableStatus *pgstat_info = rel->pgstat_info;
 
@@ -1841,6 +1841,8 @@ pgstat_count_heap_update(Relation rel, bool hot)
 		/* t_tuples_hot_updated is nontransactional, so just advance it */
 		if (hot)
 			pgstat_info->t_counts.t_tuples_hot_updated++;
+		else if (warm)
+			pgstat_info->t_counts.t_tuples_warm_updated++;
 	}
 }
 
@@ -4085,6 +4087,7 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->tuples_updated = 0;
 		result->tuples_deleted = 0;
 		result->tuples_hot_updated = 0;
+		result->tuples_warm_updated = 0;
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
 		result->changes_since_analyze = 0;
@@ -5194,6 +5197,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated = tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted = tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated = tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated = tabmsg->t_counts.t_tuples_warm_updated;
 			tabentry->n_live_tuples = tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples = tabmsg->t_counts.t_delta_dead_tuples;
 			tabentry->changes_since_analyze = tabmsg->t_counts.t_changed_tuples;
@@ -5221,6 +5225,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated += tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted += tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated += tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated += tabmsg->t_counts.t_tuples_warm_updated;
 			/* If table was truncated, first reset the live/dead counters */
 			if (tabmsg->t_counts.t_truncated)
 			{
diff --git a/src/backend/replication/logical/origin.c b/src/backend/replication/logical/origin.c
index d7dda6a..adafe23 100644
--- a/src/backend/replication/logical/origin.c
+++ b/src/backend/replication/logical/origin.c
@@ -300,7 +300,7 @@ replorigin_create(char *roname)
 
 			tuple = heap_form_tuple(RelationGetDescr(rel), values, nulls);
 			simple_heap_insert(rel, tuple);
-			CatalogUpdateIndexes(rel, tuple);
+			CatalogUpdateIndexes(rel, tuple, false, NULL);
 			CommandCounterIncrement();
 			break;
 		}
diff --git a/src/backend/rewrite/rewriteDefine.c b/src/backend/rewrite/rewriteDefine.c
index 864d45f..59f163a 100644
--- a/src/backend/rewrite/rewriteDefine.c
+++ b/src/backend/rewrite/rewriteDefine.c
@@ -77,6 +77,8 @@ InsertRule(char *rulname,
 	ObjectAddress myself,
 				referenced;
 	bool		is_update = false;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/*
 	 * Set up *nulls and *values arrays
@@ -124,7 +126,7 @@ InsertRule(char *rulname,
 		tup = heap_modify_tuple(oldtup, RelationGetDescr(pg_rewrite_desc),
 								values, nulls, replaces);
 
-		simple_heap_update(pg_rewrite_desc, &tup->t_self, tup);
+		simple_heap_update(pg_rewrite_desc, &tup->t_self, tup, &warm_update, &modified_attrs);
 
 		ReleaseSysCache(oldtup);
 
@@ -136,10 +138,12 @@ InsertRule(char *rulname,
 		tup = heap_form_tuple(pg_rewrite_desc->rd_att, values, nulls);
 
 		rewriteObjectId = simple_heap_insert(pg_rewrite_desc, tup);
+		warm_update = false;
+		modified_attrs = NULL;
 	}
 
 	/* Need to update indexes in either case */
-	CatalogUpdateIndexes(pg_rewrite_desc, tup);
+	CatalogUpdateIndexes(pg_rewrite_desc, tup, warm_update, modified_attrs);
 
 	heap_freetuple(tup);
 
@@ -549,6 +553,8 @@ DefineQueryRewrite(char *rulename,
 		Oid			toastrelid;
 		HeapTuple	classTup;
 		Form_pg_class classForm;
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 
 		relationRelation = heap_open(RelationRelationId, RowExclusiveLock);
 		toastrelid = event_relation->rd_rel->reltoastrelid;
@@ -613,8 +619,8 @@ DefineQueryRewrite(char *rulename,
 		classForm->relminmxid = InvalidMultiXactId;
 		classForm->relreplident = REPLICA_IDENTITY_NOTHING;
 
-		simple_heap_update(relationRelation, &classTup->t_self, classTup);
-		CatalogUpdateIndexes(relationRelation, classTup);
+		simple_heap_update(relationRelation, &classTup->t_self, classTup, &warm_update, &modified_attrs);
+		CatalogUpdateIndexes(relationRelation, classTup, warm_update, modified_attrs);
 
 		heap_freetuple(classTup);
 		heap_close(relationRelation, RowExclusiveLock);
@@ -864,12 +870,15 @@ EnableDisableRule(Relation rel, const char *rulename,
 	if (DatumGetChar(((Form_pg_rewrite) GETSTRUCT(ruletup))->ev_enabled) !=
 		fires_when)
 	{
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
+
 		((Form_pg_rewrite) GETSTRUCT(ruletup))->ev_enabled =
 			CharGetDatum(fires_when);
-		simple_heap_update(pg_rewrite_desc, &ruletup->t_self, ruletup);
+		simple_heap_update(pg_rewrite_desc, &ruletup->t_self, ruletup, &warm_update, &modified_attrs);
 
 		/* keep system catalog indexes current */
-		CatalogUpdateIndexes(pg_rewrite_desc, ruletup);
+		CatalogUpdateIndexes(pg_rewrite_desc, ruletup, warm_update, modified_attrs);
 
 		changed = true;
 	}
@@ -938,6 +947,8 @@ RenameRewriteRule(RangeVar *relation, const char *oldName,
 	Form_pg_rewrite ruleform;
 	Oid			ruleOid;
 	ObjectAddress address;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/*
 	 * Look up name, check permissions, and acquire lock (which we will NOT
@@ -985,10 +996,10 @@ RenameRewriteRule(RangeVar *relation, const char *oldName,
 	/* OK, do the update */
 	namestrcpy(&(ruleform->rulename), newName);
 
-	simple_heap_update(pg_rewrite_desc, &ruletup->t_self, ruletup);
+	simple_heap_update(pg_rewrite_desc, &ruletup->t_self, ruletup, &warm_update, &modified_attrs);
 
 	/* keep system catalog indexes current */
-	CatalogUpdateIndexes(pg_rewrite_desc, ruletup);
+	CatalogUpdateIndexes(pg_rewrite_desc, ruletup, warm_update, modified_attrs);
 
 	heap_freetuple(ruletup);
 	heap_close(pg_rewrite_desc, RowExclusiveLock);
diff --git a/src/backend/rewrite/rewriteSupport.c b/src/backend/rewrite/rewriteSupport.c
index 0154072..848ee7a 100644
--- a/src/backend/rewrite/rewriteSupport.c
+++ b/src/backend/rewrite/rewriteSupport.c
@@ -69,13 +69,16 @@ SetRelationRuleStatus(Oid relationId, bool relHasRules)
 
 	if (classForm->relhasrules != relHasRules)
 	{
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
+
 		/* Do the update */
 		classForm->relhasrules = relHasRules;
 
-		simple_heap_update(relationRelation, &tuple->t_self, tuple);
+		simple_heap_update(relationRelation, &tuple->t_self, tuple, &warm_update, &modified_attrs);
 
 		/* Keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relationRelation, tuple);
+		CatalogUpdateIndexes(relationRelation, tuple, warm_update, modified_attrs);
 	}
 	else
 	{
diff --git a/src/backend/storage/large_object/inv_api.c b/src/backend/storage/large_object/inv_api.c
index 262b0b2..7a643bf 100644
--- a/src/backend/storage/large_object/inv_api.c
+++ b/src/backend/storage/large_object/inv_api.c
@@ -638,6 +638,9 @@ inv_write(LargeObjectDesc *obj_desc, const char *buf, int nbytes)
 		 */
 		if (olddata != NULL && olddata->pageno == pageno)
 		{
+			bool warm_update;
+			Bitmapset *modified_attrs;
+
 			/*
 			 * Update an existing page with fresh data.
 			 *
@@ -678,8 +681,9 @@ inv_write(LargeObjectDesc *obj_desc, const char *buf, int nbytes)
 			replace[Anum_pg_largeobject_data - 1] = true;
 			newtup = heap_modify_tuple(oldtuple, RelationGetDescr(lo_heap_r),
 									   values, nulls, replace);
-			simple_heap_update(lo_heap_r, &newtup->t_self, newtup);
-			CatalogIndexInsert(indstate, newtup);
+			simple_heap_update(lo_heap_r, &newtup->t_self, newtup,
+					&warm_update, &modified_attrs);
+			CatalogIndexInsert(indstate, newtup, warm_update, modified_attrs);
 			heap_freetuple(newtup);
 
 			/*
@@ -722,7 +726,7 @@ inv_write(LargeObjectDesc *obj_desc, const char *buf, int nbytes)
 			values[Anum_pg_largeobject_data - 1] = PointerGetDatum(&workbuf);
 			newtup = heap_form_tuple(lo_heap_r->rd_att, values, nulls);
 			simple_heap_insert(lo_heap_r, newtup);
-			CatalogIndexInsert(indstate, newtup);
+			CatalogIndexInsert(indstate, newtup, false, NULL);
 			heap_freetuple(newtup);
 		}
 		pageno++;
@@ -824,6 +828,8 @@ inv_truncate(LargeObjectDesc *obj_desc, int64 len)
 		bytea	   *datafield;
 		int			pagelen;
 		bool		pfreeit;
+		bool		warm_update;
+		Bitmapset	*modified_attrs;
 
 		getdatafield(olddata, &datafield, &pagelen, &pfreeit);
 		memcpy(workb, VARDATA(datafield), pagelen);
@@ -850,8 +856,9 @@ inv_truncate(LargeObjectDesc *obj_desc, int64 len)
 		replace[Anum_pg_largeobject_data - 1] = true;
 		newtup = heap_modify_tuple(oldtuple, RelationGetDescr(lo_heap_r),
 								   values, nulls, replace);
-		simple_heap_update(lo_heap_r, &newtup->t_self, newtup);
-		CatalogIndexInsert(indstate, newtup);
+		simple_heap_update(lo_heap_r, &newtup->t_self, newtup, &warm_update,
+				&modified_attrs);
+		CatalogIndexInsert(indstate, newtup, warm_update, modified_attrs);
 		heap_freetuple(newtup);
 	}
 	else
@@ -889,7 +896,7 @@ inv_truncate(LargeObjectDesc *obj_desc, int64 len)
 		values[Anum_pg_largeobject_data - 1] = PointerGetDatum(&workbuf);
 		newtup = heap_form_tuple(lo_heap_r->rd_att, values, nulls);
 		simple_heap_insert(lo_heap_r, newtup);
-		CatalogIndexInsert(indstate, newtup);
+		CatalogIndexInsert(indstate, newtup, false, NULL);
 		heap_freetuple(newtup);
 	}
 
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index a987d0d..b8677f3 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -145,6 +145,22 @@ pg_stat_get_tuples_hot_updated(PG_FUNCTION_ARGS)
 
 
 Datum
+pg_stat_get_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_StatTabEntry *tabentry;
+
+	if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+
+Datum
 pg_stat_get_live_tuples(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
@@ -1644,6 +1660,21 @@ pg_stat_get_xact_tuples_hot_updated(PG_FUNCTION_ARGS)
 }
 
 Datum
+pg_stat_get_xact_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_TableStatus *tabentry;
+
+	if ((tabentry = find_tabstat_entry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->t_counts.t_tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+Datum
 pg_stat_get_xact_blocks_fetched(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 26ff7e1..1976753 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -2338,6 +2338,7 @@ RelationDestroyRelation(Relation relation, bool remember_tupdesc)
 	list_free_deep(relation->rd_fkeylist);
 	list_free(relation->rd_indexlist);
 	bms_free(relation->rd_indexattr);
+	bms_free(relation->rd_exprindexattr);
 	bms_free(relation->rd_keyattr);
 	bms_free(relation->rd_pkattr);
 	bms_free(relation->rd_idattr);
@@ -3419,6 +3420,8 @@ RelationSetNewRelfilenode(Relation relation, char persistence,
 	Relation	pg_class;
 	HeapTuple	tuple;
 	Form_pg_class classform;
+	bool		warm_update;
+	Bitmapset	*modified_attrs;
 
 	/* Indexes, sequences must have Invalid frozenxid; other rels must not */
 	Assert((relation->rd_rel->relkind == RELKIND_INDEX ||
@@ -3484,8 +3487,8 @@ RelationSetNewRelfilenode(Relation relation, char persistence,
 	classform->relminmxid = minmulti;
 	classform->relpersistence = persistence;
 
-	simple_heap_update(pg_class, &tuple->t_self, tuple);
-	CatalogUpdateIndexes(pg_class, tuple);
+	simple_heap_update(pg_class, &tuple->t_self, tuple, &warm_update, &modified_attrs);
+	CatalogUpdateIndexes(pg_class, tuple, warm_update, modified_attrs);
 
 	heap_freetuple(tuple);
 
@@ -4757,6 +4760,8 @@ Bitmapset *
 RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 {
 	Bitmapset  *indexattrs;		/* indexed columns */
+	Bitmapset  *exprindexattrs;	/* indexed columns in expression/predicate
+								 * indexes */
 	Bitmapset  *uindexattrs;	/* columns in unique indexes */
 	Bitmapset  *pkindexattrs;	/* columns in the primary index */
 	Bitmapset  *idindexattrs;	/* columns in the replica identity */
@@ -4765,6 +4770,7 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 	Oid			relreplindex;
 	ListCell   *l;
 	MemoryContext oldcxt;
+	bool		supportswarm = true;	/* true if the table can be WARM updated */
 
 	/* Quick exit if we already computed the result. */
 	if (relation->rd_indexattr != NULL)
@@ -4779,6 +4785,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				return bms_copy(relation->rd_pkattr);
 			case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 				return bms_copy(relation->rd_idattr);
+			case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+				return bms_copy(relation->rd_exprindexattr);
 			default:
 				elog(ERROR, "unknown attrKind %u", attrKind);
 		}
@@ -4819,6 +4827,7 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 	 * won't be returned at all by RelationGetIndexList.
 	 */
 	indexattrs = NULL;
+	exprindexattrs = NULL;
 	uindexattrs = NULL;
 	pkindexattrs = NULL;
 	idindexattrs = NULL;
@@ -4873,19 +4882,38 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 		}
 
 		/* Collect all attributes used in expressions, too */
-		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &exprindexattrs);
 
 		/* Collect all attributes in the index predicate, too */
-		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &exprindexattrs);
+
+		/*
+		 * indexattrs should include attributes referenced in index expressions
+		 * and predicates too
+		 */
+		indexattrs = bms_add_members(indexattrs, exprindexattrs);
+
+		/*
+		 * Check whether the index AM provides the amrecheck method. If it
+		 * does not, the index cannot support WARM updates, so completely
+		 * disable WARM updates on such tables.
+		 */
+		if (!indexDesc->rd_amroutine->amrecheck)
+			supportswarm = false;
 
 		index_close(indexDesc, AccessShareLock);
 	}
 
 	list_free(indexoidlist);
 
+	/* Remember if the table can do WARM updates */
+	relation->rd_supportswarm = supportswarm;
+
 	/* Don't leak the old values of these bitmaps, if any */
 	bms_free(relation->rd_indexattr);
 	relation->rd_indexattr = NULL;
+	bms_free(relation->rd_exprindexattr);
+	relation->rd_exprindexattr = NULL;
 	bms_free(relation->rd_keyattr);
 	relation->rd_keyattr = NULL;
 	bms_free(relation->rd_pkattr);
@@ -4904,7 +4932,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 	relation->rd_keyattr = bms_copy(uindexattrs);
 	relation->rd_pkattr = bms_copy(pkindexattrs);
 	relation->rd_idattr = bms_copy(idindexattrs);
-	relation->rd_indexattr = bms_copy(indexattrs);
+	relation->rd_exprindexattr = bms_copy(exprindexattrs);
+	relation->rd_indexattr = bms_copy(bms_union(indexattrs, exprindexattrs));
 	MemoryContextSwitchTo(oldcxt);
 
 	/* We return our original working copy for caller to play with */
@@ -4918,6 +4947,8 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 			return bms_copy(relation->rd_pkattr);
 		case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 			return idindexattrs;
+		case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+			return exprindexattrs;
 		default:
 			elog(ERROR, "unknown attrKind %u", attrKind);
 			return NULL;
diff --git a/src/include/access/amapi.h b/src/include/access/amapi.h
index 6a5f279..0de82fa 100644
--- a/src/include/access/amapi.h
+++ b/src/include/access/amapi.h
@@ -13,6 +13,7 @@
 #define AMAPI_H
 
 #include "access/genam.h"
+#include "access/itup.h"
 
 /*
  * We don't wish to include planner header files here, since most of an index
@@ -137,6 +138,9 @@ typedef void (*ammarkpos_function) (IndexScanDesc scan);
 /* restore marked scan position */
 typedef void (*amrestrpos_function) (IndexScanDesc scan);
 
+/* recheck index tuple and heap tuple match */
+typedef bool (*amrecheck_function) (Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * API struct for an index AM.  Note this must be stored in a single palloc'd
@@ -196,6 +200,7 @@ typedef struct IndexAmRoutine
 	amendscan_function amendscan;
 	ammarkpos_function ammarkpos;		/* can be NULL */
 	amrestrpos_function amrestrpos;		/* can be NULL */
+	amrecheck_function amrecheck;		/* can be NULL */
 } IndexAmRoutine;
 
 
diff --git a/src/include/access/hash.h b/src/include/access/hash.h
index 69a3873..3e14023 100644
--- a/src/include/access/hash.h
+++ b/src/include/access/hash.h
@@ -364,4 +364,8 @@ extern void hashbucketcleanup(Relation rel, Bucket cur_bucket,
 				  bool bucket_has_garbage,
 				  IndexBulkDeleteCallback callback, void *callback_state);
 
+/* hash.c */
+extern bool hashrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+
 #endif   /* HASH_H */
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 22507dc..06e22a3 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -137,9 +137,10 @@ extern bool heap_fetch(Relation relation, Snapshot snapshot,
 		   Relation stats_relation);
 extern bool heap_hot_search_buffer(ItemPointer tid, Relation relation,
 					   Buffer buffer, Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call);
+					   bool *all_dead, bool first_call, bool *recheck);
 extern bool heap_hot_search(ItemPointer tid, Relation relation,
-				Snapshot snapshot, bool *all_dead);
+				Snapshot snapshot, bool *all_dead,
+				bool *recheck, Buffer *buffer, HeapTuple heapTuple);
 
 extern void heap_get_latest_tid(Relation relation, Snapshot snapshot,
 					ItemPointer tid);
@@ -160,7 +161,8 @@ extern void heap_abort_speculative(Relation relation, HeapTuple tuple);
 extern HTSU_Result heap_update(Relation relation, ItemPointer otid,
 			HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode);
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update);
 extern HTSU_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
 				CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
 				bool follow_update,
@@ -175,7 +177,9 @@ extern bool heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple);
 extern Oid	simple_heap_insert(Relation relation, HeapTuple tup);
 extern void simple_heap_delete(Relation relation, ItemPointer tid);
 extern void simple_heap_update(Relation relation, ItemPointer otid,
-				   HeapTuple tup);
+				   HeapTuple tup,
+				   bool *warm_update,
+				   Bitmapset **modified_attrs);
 
 extern void heap_sync(Relation relation);
 
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index a4a1fe1..b4238e5 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -80,6 +80,7 @@
 #define XLH_UPDATE_CONTAINS_NEW_TUPLE			(1<<4)
 #define XLH_UPDATE_PREFIX_FROM_OLD				(1<<5)
 #define XLH_UPDATE_SUFFIX_FROM_OLD				(1<<6)
+#define XLH_UPDATE_WARM_UPDATE					(1<<7)
 
 /* convenience macro for checking whether any form of old tuple was logged */
 #define XLH_UPDATE_CONTAINS_OLD						\
diff --git a/src/include/access/htup.h b/src/include/access/htup.h
index 870adf4..0ae223e 100644
--- a/src/include/access/htup.h
+++ b/src/include/access/htup.h
@@ -14,6 +14,7 @@
 #ifndef HTUP_H
 #define HTUP_H
 
+#include "nodes/bitmapset.h"
 #include "storage/itemptr.h"
 
 /* typedefs and forward declarations for structs defined in htup_details.h */
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index fff1832..2ea4865 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,7 +260,8 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x0800 are available */
+#define HEAP_WARM_TUPLE			0x0800	/* this tuple is part of a WARM
+										 * chain */
 #define HEAP_LATEST_TUPLE		0x1000	/*
 										 * This is the last tuple in chain and
 										 * ip_posid points to the root line
@@ -271,7 +272,7 @@ struct HeapTupleHeaderData
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF800	/* visibility-related bits */
 
 
 /*
@@ -510,6 +511,22 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+#define HeapTupleHeaderSetHeapWarmTuple(tup) \
+do { \
+	(tup)->t_infomask2 |= HEAP_WARM_TUPLE; \
+} while (0)
+
+#define HeapTupleHeaderClearHeapWarmTuple(tup) \
+do { \
+	(tup)->t_infomask2 &= ~HEAP_WARM_TUPLE; \
+} while (0)
+
+#define HeapTupleHeaderIsHeapWarmTuple(tup) \
+( \
+  ((tup)->t_infomask2 & HEAP_WARM_TUPLE) != 0 \
+)
+
+
 #define HeapTupleHeaderSetHeapLatest(tup, offnum) \
 do { \
 	AssertMacro(OffsetNumberIsValid(offnum)); \
@@ -763,6 +780,15 @@ struct MinimalTupleData
 #define HeapTupleClearHeapOnly(tuple) \
 		HeapTupleHeaderClearHeapOnly((tuple)->t_data)
 
+#define HeapTupleIsHeapWarmTuple(tuple) \
+		HeapTupleHeaderIsHeapWarmTuple((tuple)->t_data)
+
+#define HeapTupleSetHeapWarmTuple(tuple) \
+		HeapTupleHeaderSetHeapWarmTuple((tuple)->t_data)
+
+#define HeapTupleClearHeapWarmTuple(tuple) \
+		HeapTupleHeaderClearHeapWarmTuple((tuple)->t_data)
+
 #define HeapTupleGetOid(tuple) \
 		HeapTupleHeaderGetOid((tuple)->t_data)
 
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index 011a72e..98129d6 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -750,6 +750,8 @@ extern bytea *btoptions(Datum reloptions, bool validate);
 extern bool btproperty(Oid index_oid, int attno,
 		   IndexAMProperty prop, const char *propname,
 		   bool *res, bool *isnull);
+extern bool btrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * prototypes for functions in nbtvalidate.c
diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h
index 8746045..1f5b361 100644
--- a/src/include/access/relscan.h
+++ b/src/include/access/relscan.h
@@ -111,7 +111,8 @@ typedef struct IndexScanDescData
 	HeapTupleData xs_ctup;		/* current heap tuple, if any */
 	Buffer		xs_cbuf;		/* current heap buffer in scan, if any */
 	/* NB: if xs_cbuf is not InvalidBuffer, we hold a pin on that buffer */
-	bool		xs_recheck;		/* T means scan keys must be rechecked */
+	bool		xs_recheck;		/* T means scan keys must be rechecked for each tuple */
+	bool		xs_tuple_recheck;	/* T means scan keys must be rechecked for current tuple */
 
 	/*
 	 * When fetching with an ordering operator, the values of the ORDER BY
diff --git a/src/include/catalog/indexing.h b/src/include/catalog/indexing.h
index 45605a0..5a8fb70 100644
--- a/src/include/catalog/indexing.h
+++ b/src/include/catalog/indexing.h
@@ -31,8 +31,12 @@ typedef struct ResultRelInfo *CatalogIndexState;
 extern CatalogIndexState CatalogOpenIndexes(Relation heapRel);
 extern void CatalogCloseIndexes(CatalogIndexState indstate);
 extern void CatalogIndexInsert(CatalogIndexState indstate,
-				   HeapTuple heapTuple);
-extern void CatalogUpdateIndexes(Relation heapRel, HeapTuple heapTuple);
+				   HeapTuple heapTuple,
+				   bool warm_update,
+				   Bitmapset *modified_attrs);
+extern void CatalogUpdateIndexes(Relation heapRel, HeapTuple heapTuple,
+				   bool warm_update,
+				   Bitmapset *modified_attrs);
 
 
 /*
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index ab12761..201e8b6 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -2740,6 +2740,8 @@ DATA(insert OID = 1933 (  pg_stat_get_tuples_deleted	PGNSP PGUID 12 1 0 0 0 f f
 DESCR("statistics: number of tuples deleted");
 DATA(insert OID = 1972 (  pg_stat_get_tuples_hot_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated");
+DATA(insert OID = 3353 (  pg_stat_get_tuples_warm_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated");
 DATA(insert OID = 2878 (  pg_stat_get_live_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_live_tuples _null_ _null_ _null_ ));
 DESCR("statistics: number of live tuples");
 DATA(insert OID = 2879 (  pg_stat_get_dead_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_dead_tuples _null_ _null_ _null_ ));
@@ -2892,6 +2894,8 @@ DATA(insert OID = 3042 (  pg_stat_get_xact_tuples_deleted		PGNSP PGUID 12 1 0 0
 DESCR("statistics: number of tuples deleted in current transaction");
 DATA(insert OID = 3043 (  pg_stat_get_xact_tuples_hot_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated in current transaction");
+DATA(insert OID = 3354 (  pg_stat_get_xact_tuples_warm_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated in current transaction");
 DATA(insert OID = 3044 (  pg_stat_get_xact_blocks_fetched		PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_fetched _null_ _null_ _null_ ));
 DESCR("statistics: number of blocks fetched in current transaction");
 DATA(insert OID = 3045 (  pg_stat_get_xact_blocks_hit			PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_hit _null_ _null_ _null_ ));
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 02dbe7b..c4495a3 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -382,6 +382,7 @@ extern void UnregisterExprContextCallback(ExprContext *econtext,
 extern void ExecOpenIndices(ResultRelInfo *resultRelInfo, bool speculative);
 extern void ExecCloseIndices(ResultRelInfo *resultRelInfo);
 extern List *ExecInsertIndexTuples(TupleTableSlot *slot, ItemPointer tupleid,
+					  ItemPointer root_tid, Bitmapset *modified_attrs,
 					  EState *estate, bool noDupErr, bool *specConflict,
 					  List *arbiterIndexes);
 extern bool ExecCheckIndexConstraints(TupleTableSlot *slot, EState *estate,
diff --git a/src/include/executor/nodeIndexscan.h b/src/include/executor/nodeIndexscan.h
index 46d6f45..2c4d884 100644
--- a/src/include/executor/nodeIndexscan.h
+++ b/src/include/executor/nodeIndexscan.h
@@ -37,5 +37,4 @@ extern void ExecIndexEvalRuntimeKeys(ExprContext *econtext,
 extern bool ExecIndexEvalArrayKeys(ExprContext *econtext,
 					   IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 extern bool ExecIndexAdvanceArrayKeys(IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
-
 #endif   /* NODEINDEXSCAN_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index f9bcdd6..07f2900 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -62,6 +62,7 @@ typedef struct IndexInfo
 	NodeTag		type;
 	int			ii_NumIndexAttrs;
 	AttrNumber	ii_KeyAttrNumbers[INDEX_MAX_KEYS];
+	Bitmapset  *ii_indxattrs;	/* bitmap of all columns used in this index */
 	List	   *ii_Expressions; /* list of Expr */
 	List	   *ii_ExpressionsState;	/* list of ExprState */
 	List	   *ii_Predicate;	/* list of Expr */
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index de8225b..ee635be 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -105,6 +105,7 @@ typedef struct PgStat_TableCounts
 	PgStat_Counter t_tuples_updated;
 	PgStat_Counter t_tuples_deleted;
 	PgStat_Counter t_tuples_hot_updated;
+	PgStat_Counter t_tuples_warm_updated;
 	bool		t_truncated;
 
 	PgStat_Counter t_delta_live_tuples;
@@ -625,6 +626,7 @@ typedef struct PgStat_StatTabEntry
 	PgStat_Counter tuples_updated;
 	PgStat_Counter tuples_deleted;
 	PgStat_Counter tuples_hot_updated;
+	PgStat_Counter tuples_warm_updated;
 
 	PgStat_Counter n_live_tuples;
 	PgStat_Counter n_dead_tuples;
@@ -1177,7 +1179,7 @@ pgstat_report_wait_end(void)
 	(pgStatBlockWriteTime += (n))
 
 extern void pgstat_count_heap_insert(Relation rel, int n);
-extern void pgstat_count_heap_update(Relation rel, bool hot);
+extern void pgstat_count_heap_update(Relation rel, bool hot, bool warm);
 extern void pgstat_count_heap_delete(Relation rel);
 extern void pgstat_count_truncate(Relation rel);
 extern void pgstat_update_heap_dead_tuples(Relation rel, int delta);
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index a1750ac..092491f 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -138,9 +138,12 @@ typedef struct RelationData
 
 	/* data managed by RelationGetIndexAttrBitmap: */
 	Bitmapset  *rd_indexattr;	/* identifies columns used in indexes */
+	Bitmapset  *rd_exprindexattr; /* indentified columns used in expression or
+									 predicate indexes */
 	Bitmapset  *rd_keyattr;		/* cols that can be ref'd by foreign keys */
 	Bitmapset  *rd_pkattr;		/* cols included in primary key */
 	Bitmapset  *rd_idattr;		/* included in replica identity index */
+	bool		rd_supportswarm;/* True if the table can be WARM updated */
 
 	PublicationActions  *rd_pubactions;	/* publication actions */
 
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index da36b67..83a7f20 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -50,7 +50,8 @@ typedef enum IndexAttrBitmapKind
 	INDEX_ATTR_BITMAP_ALL,
 	INDEX_ATTR_BITMAP_KEY,
 	INDEX_ATTR_BITMAP_PRIMARY_KEY,
-	INDEX_ATTR_BITMAP_IDENTITY_KEY
+	INDEX_ATTR_BITMAP_IDENTITY_KEY,
+	INDEX_ATTR_BITMAP_EXPR_PREDICATE
 } IndexAttrBitmapKind;
 
 extern Bitmapset *RelationGetIndexAttrBitmap(Relation relation,
diff --git a/src/test/regress/expected/alter_generic.out b/src/test/regress/expected/alter_generic.out
index b01be59..37719c9 100644
--- a/src/test/regress/expected/alter_generic.out
+++ b/src/test/regress/expected/alter_generic.out
@@ -161,15 +161,15 @@ ALTER SERVER alt_fserv1 RENAME TO alt_fserv3;   -- OK
 SELECT fdwname FROM pg_foreign_data_wrapper WHERE fdwname like 'alt_fdw%';
  fdwname  
 ----------
- alt_fdw2
  alt_fdw3
+ alt_fdw2
 (2 rows)
 
 SELECT srvname FROM pg_foreign_server WHERE srvname like 'alt_fserv%';
   srvname   
 ------------
- alt_fserv2
  alt_fserv3
+ alt_fserv2
 (2 rows)
 
 --
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 60abcad..42d45a1 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1718,6 +1718,7 @@ pg_stat_all_tables| SELECT c.oid AS relid,
     pg_stat_get_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_tuples_deleted(c.oid) AS n_tup_del,
     pg_stat_get_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_tuples_warm_updated(c.oid) AS n_tup_warm_upd,
     pg_stat_get_live_tuples(c.oid) AS n_live_tup,
     pg_stat_get_dead_tuples(c.oid) AS n_dead_tup,
     pg_stat_get_mod_since_analyze(c.oid) AS n_mod_since_analyze,
@@ -1861,6 +1862,7 @@ pg_stat_sys_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1904,6 +1906,7 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1941,7 +1944,8 @@ pg_stat_xact_all_tables| SELECT c.oid AS relid,
     pg_stat_get_xact_tuples_inserted(c.oid) AS n_tup_ins,
     pg_stat_get_xact_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_xact_tuples_deleted(c.oid) AS n_tup_del,
-    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd
+    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_xact_tuples_warm_updated(c.oid) AS n_tup_warm_upd
    FROM ((pg_class c
      LEFT JOIN pg_index i ON ((c.oid = i.indrelid)))
      LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
@@ -1957,7 +1961,8 @@ pg_stat_xact_sys_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname = ANY (ARRAY['pg_catalog'::name, 'information_schema'::name])) OR (pg_stat_xact_all_tables.schemaname ~ '^pg_toast'::text));
 pg_stat_xact_user_functions| SELECT p.oid AS funcid,
@@ -1979,7 +1984,8 @@ pg_stat_xact_user_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname <> ALL (ARRAY['pg_catalog'::name, 'information_schema'::name])) AND (pg_stat_xact_all_tables.schemaname !~ '^pg_toast'::text));
 pg_statio_all_indexes| SELECT c.oid AS relid,
diff --git a/src/test/regress/expected/warm.out b/src/test/regress/expected/warm.out
new file mode 100644
index 0000000..ebbc4ca
--- /dev/null
+++ b/src/test/regress/expected/warm.out
@@ -0,0 +1,367 @@
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+-- This should be a HOT update as non-index key is updated, but the
+-- page won't have any free space, so probably a non-HOT update
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on updtst_tab1  (cost=4.45..47.23 rows=22 width=72)
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1  (cost=0.00..4.45 rows=22 width=0)
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Check if index only scan works correctly
+EXPLAIN SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on updtst_tab1  (cost=4.45..47.23 rows=22 width=4)
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1  (cost=0.00..4.45 rows=22 width=0)
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+                                      QUERY PLAN                                      
+--------------------------------------------------------------------------------------
+ Index Only Scan using updtst_indx1 on updtst_tab1  (cost=0.29..5.16 rows=50 width=4)
+   Index Cond: (b = 140001)
+(2 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab1;
+------------------
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+EXPLAIN SELECT * FROM updtst_tab2 WHERE b = 701;
+                                QUERY PLAN                                 
+---------------------------------------------------------------------------
+ Bitmap Heap Scan on updtst_tab2  (cost=4.18..12.64 rows=4 width=72)
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE a = 1;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+SET enable_seqscan = false;
+EXPLAIN SELECT * FROM updtst_tab2 WHERE b = 701;
+                                QUERY PLAN                                 
+---------------------------------------------------------------------------
+ Bitmap Heap Scan on updtst_tab2  (cost=4.18..12.64 rows=4 width=72)
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE b = 701;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+VACUUM updtst_tab2;
+EXPLAIN SELECT b FROM updtst_tab2 WHERE b = 701;
+                                     QUERY PLAN                                      
+-------------------------------------------------------------------------------------
+ Index Only Scan using updtst_indx2 on updtst_tab2  (cost=0.14..4.16 rows=1 width=4)
+   Index Cond: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab2 WHERE b = 701;
+  b  
+-----
+ 701
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab2;
+------------------
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 1;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN SELECT b FROM updtst_tab3 WHERE b = 701;
+                        QUERY PLAN                         
+-----------------------------------------------------------
+ Seq Scan on updtst_tab3  (cost=0.00..2.25 rows=1 width=4)
+   Filter: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 701;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+  b   
+------
+ 1421
+(1 row)
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+SET enable_seqscan = false;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    98
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 2;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN SELECT b FROM updtst_tab3 WHERE b = 702;
+                                     QUERY PLAN                                      
+-------------------------------------------------------------------------------------
+ Index Only Scan using updtst_indx3 on updtst_tab3  (cost=0.14..8.16 rows=1 width=4)
+   Index Cond: (b = 702)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 702;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+  b   
+------
+ 1422
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab3;
+------------------
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+explain select * from test_warm where lower(a) = 'test';
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on test_warm  (cost=4.18..12.65 rows=4 width=64)
+   Recheck Cond: (lower(a) = 'test'::text)
+   ->  Bitmap Index Scan on test_warmindx  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (lower(a) = 'test'::text)
+(4 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+select *, ctid from test_warm where a = 'test';
+ a | b | ctid 
+---+---+------
+(0 rows)
+
+select *, ctid from test_warm where a = 'TEST';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+                                   QUERY PLAN                                    
+---------------------------------------------------------------------------------
+ Index Scan using test_warmindx on test_warm  (cost=0.15..20.22 rows=4 width=64)
+   Index Cond: (lower(a) = 'test'::text)
+(2 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+DROP TABLE test_warm;
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index e9b2bad..a9a269b 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -42,6 +42,8 @@ test: create_type
 test: create_table
 test: create_function_2
 
+test: warm
+
 # ----------
 # Load huge amounts of data
 # We should split the data files into single files and then
diff --git a/src/test/regress/sql/warm.sql b/src/test/regress/sql/warm.sql
new file mode 100644
index 0000000..b73c278
--- /dev/null
+++ b/src/test/regress/sql/warm.sql
@@ -0,0 +1,172 @@
+
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+
+-- This should be a HOT update as non-index key is updated, but the
+-- page won't have any free space, so probably a non-HOT update
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Check if index only scan works correctly
+EXPLAIN SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab1;
+
+------------------
+
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+
+EXPLAIN SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE a = 1;
+
+SET enable_seqscan = false;
+EXPLAIN SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE b = 701;
+
+VACUUM updtst_tab2;
+EXPLAIN SELECT b FROM updtst_tab2 WHERE b = 701;
+SELECT b FROM updtst_tab2 WHERE b = 701;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab2;
+------------------
+
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+SELECT * FROM updtst_tab3 WHERE a = 1;
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+
+VACUUM updtst_tab3;
+EXPLAIN SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+SET enable_seqscan = false;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+SELECT * FROM updtst_tab3 WHERE a = 2;
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+
+VACUUM updtst_tab3;
+EXPLAIN SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab3;
+------------------
+
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where a = 'test';
+select *, ctid from test_warm where a = 'TEST';
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+DROP TABLE test_warm;
+
+
#31Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Pavan Deolasee (#30)
Re: Patch: Write Amplification Reduction Method (WARM)

Reading 0001_track_root_lp_v9.patch again:

+/*
+ * We use the same HEAP_LATEST_TUPLE flag to check if the tuple's t_ctid field
+ * contains the root line pointer. We can't use the same
+ * HeapTupleHeaderIsHeapLatest macro because that also checks for TID-equality
+ * to decide whether a tuple is at the end of the chain
+ */
+#define HeapTupleHeaderHasRootOffset(tup) \
+( \
+	((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0 \
+)
+#define HeapTupleHeaderGetRootOffset(tup) \
+( \
+	AssertMacro(((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0), \
+	ItemPointerGetOffsetNumber(&(tup)->t_ctid) \
+)

Interesting stuff; it took me a bit to see why these macros are this
way. I propose the following wording which I think is clearer:

Return whether the tuple has a cached root offset. We don't use
HeapTupleHeaderIsHeapLatest because that one also considers the slow
case of scanning the whole block.

Please flag the macros that have multiple evaluation hazards -- there
are a few of them.

+/*
+ * If HEAP_LATEST_TUPLE is set in the last tuple in the update chain. But for
+ * clusters which are upgraded from pre-10.0 release, we still check if c_tid
+ * is pointing to itself and declare such tuple as the latest tuple in the
+ * chain
+ */
+#define HeapTupleHeaderIsHeapLatest(tup, tid) \
+( \
+  (((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0) || \
+  ((ItemPointerGetBlockNumber(&(tup)->t_ctid) == ItemPointerGetBlockNumber(tid)) && \
+   (ItemPointerGetOffsetNumber(&(tup)->t_ctid) == ItemPointerGetOffsetNumber(tid))) \
+)

I suggest rewording this comment as:
Starting from PostgreSQL 10, the latest tuple in an update chain has
HEAP_LATEST_TUPLE set; but tuples upgraded from earlier versions do
not. For those, we determine whether a tuple is latest by testing
that its t_ctid points to itself.
(as discussed, there is no "10.0 release"; it's called the "10 release"
only, no ".0". Feel free to use "v10" or "pg10").

+/*
+ * Get TID of next tuple in the update chain. Caller should have checked that
+ * we are not already at the end of the chain because in that case t_ctid may
+ * actually store the root line pointer of the HOT chain whose member this
+ * tuple is.
+ */
+#define HeapTupleHeaderGetNextTid(tup, next_ctid) \
+do { \
+	AssertMacro(!((tup)->t_infomask2 & HEAP_LATEST_TUPLE)); \
+	ItemPointerCopy(&(tup)->t_ctid, (next_ctid)); \
+} while (0)

Actually, I think this macro could just return the TID so that it can be
used as struct assignment, just like ItemPointerCopy does internally --
callers can do
ctid = HeapTupleHeaderGetNextTid(tup);

or more precisely, this pattern

+		if (!HeapTupleHeaderIsHeapLatest(tp.t_data, &tp.t_self))
+			HeapTupleHeaderGetNextTid(tp.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&tp.t_self, &hufd->ctid);

becomes
hufd->ctid = HeapTupleHeaderIsHeapLatest(foo) ?
HeapTupleHeaderGetNextTid(foo) : &tp->t_self;
or something like that. I further wonder if it'd make sense to hide
this into yet another macro.

The API of RelationPutHeapTuple appears a bit contorted, where
root_offnum is both input and output. I think it's cleaner to have the
argument be the input, and have the output offset be the return value --
please check whether that simplifies things; for example I think this:

+			root_offnum = InvalidOffsetNumber;
+			RelationPutHeapTuple(relation, buffer, heaptup, false,
+					&root_offnum);

becomes

root_offnum = RelationPutHeapTuple(relation, buffer, heaptup, false,
InvalidOffsetNumber);

Please remove the words "must have" in this comment:

+	/*
+	 * Also mark both copies as latest and set the root offset information. If
+	 * we're doing a HOT/WARM update, then we just copy the information from
+	 * old tuple, if available or computed above. For regular updates,
+	 * RelationPutHeapTuple must have returned us the actual offset number
+	 * where the new version was inserted and we store the same value since the
+	 * update resulted in a new HOT-chain
+	 */

Many comments lack finishing periods in complete sentences, which looks
odd. Please fix.

I have not looked at the other patch yet.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#32Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Pavan Deolasee (#30)
Re: Patch: Write Amplification Reduction Method (WARM)

Looking at your 0002 patch now. It no longer applies, but the conflicts
are trivial to fix. Please rebase and resubmit.

I think the way WARM works has been pretty well hammered by now, other
than the CREATE INDEX CONCURRENTLY issues, so I'm looking at the code
from a maintainability point of view only.

I think we should have some test harness for WARM as part of the source
repository. A test that runs for an hour hammering the machine to
highest possible load cannot be run in "make check", of course, but we
could have some specific Make target to run it manually. We don't have
this for any other feature, but this looks like a decent place to start.
Maybe we should even do it before going any further. The test code you
submitted looks OK to test the feature, but I'm not in love with it
enough to add it to the repo. Maybe I will spend some time trying to
convert it to Perl using PostgresNode.

I think having the "recheck" index methods create an ExecutorState looks
out of place. How difficult is it to pass the estate from the calling
code?

IMO heap_get_root_tuple_one should be called just heap_get_root_tuple().
That function and its plural sibling heap_get_root_tuples() should
indicate in their own comments what the expectations are regarding the
root_offsets output argument, rather than deferring to the comments in
the "internal" function, since they differ on that point; for the rest
of the invariants I think it makes sense to say "Also see the comment
for heap_get_root_tuples_internal". I wonder if heap_get_root_tuple
should just return the ctid instead of assigning the value to a
passed-in pointer, i.e.
OffsetNumber
heap_get_root_tuple(Page page, OffsetNumber target_offnum)
{
OffsetNumber off;
heap_get_root_tuples_internal(page, target_offnum, &off);
return off;
}

The simple_heap_update + CatalogUpdateIndexes pattern is getting
obnoxious. How about creating something like catalog_heap_update which
does both things at once, and stop bothering each callsite with the WARM
stuff? In fact, given that CatalogUpdateIndexes is used in other
places, maybe we should leave its API alone and create another function,
so that we don't have to change the many places that only do
simple_heap_insert. (Places like OperatorCreate which do either insert
or update could just move the index update call into each branch.)

I'm not real sure about the interface between index AM and executor,
namely IndexScanDesc->xs_tuple_recheck. For example, this pattern:
if (!scan->xs_recheck)
scan->xs_tuple_recheck = false;
else
scan->xs_tuple_recheck = true;
can become simply
scan->xs_tuple_recheck = scan->xs_recheck;
which looks odd. I can't pinpoint exactly what's the problem, though.
I'll continue looking at this one.

I wonder if heap_hot_search_buffer() and heap_hot_search() should return
a tri-valued enum instead of boolean; that idea looks reasonable in
theory but callers have to do more work afterwards, so maybe not.

I think heap_hot_search() sometimes leaving the buffer pinned is
confusing. Really, the whole idea of having heap_hot_search have a
buffer output argument is an important API change that should be better
thought. Maybe it'd be better to return the buffer pinned always, and
the caller is always in charge of unpinning if not InvalidBuffer. Or
perhaps we need a completely new function, given how different it is to
the original? If you tried to document in the comment above
heap_hot_search how it works, you'd find that it's difficult to
describe, which'd be an indicator that it's not well considered.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#33Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Alvaro Herrera (#32)
Re: Patch: Write Amplification Reduction Method (WARM)

Alvaro Herrera wrote:

I wonder if heap_hot_search_buffer() and heap_hot_search() should return
a tri-valued enum instead of boolean; that idea looks reasonable in
theory but callers have to do more work afterwards, so maybe not.

I think heap_hot_search() sometimes leaving the buffer pinned is
confusing. Really, the whole idea of having heap_hot_search have a
buffer output argument is an important API change that should be better
thought. Maybe it'd be better to return the buffer pinned always, and
the caller is always in charge of unpinning if not InvalidBuffer. Or
perhaps we need a completely new function, given how different it is to
the original? If you tried to document in the comment above
heap_hot_search how it works, you'd find that it's difficult to
describe, which'd be an indicator that it's not well considered.

Even before your patch, heap_hot_search claims to have the same API as
heap_hot_search_buffer "except that caller does not provide the buffer."
But this is a lie and has been since 9.2 (more precisely, since commit
4da99ea4231e). I think WARM makes things even worse and we should fix
that. Not yet sure which direction to fix it ...

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#34Robert Haas
robertmhaas@gmail.com
In reply to: Alvaro Herrera (#32)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Jan 25, 2017 at 4:08 PM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:

I think the way WARM works has been pretty well hammered by now, other
than the CREATE INDEX CONCURRENTLY issues, so I'm looking at the code
from a maintainability point of view only.

Which senior hackers have previously reviewed it in detail?

Where would I go to get a good overview of the overall theory of operation?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#35Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Robert Haas (#34)
Re: Patch: Write Amplification Reduction Method (WARM)

Robert Haas wrote:

On Wed, Jan 25, 2017 at 4:08 PM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:

I think the way WARM works has been pretty well hammered by now, other
than the CREATE INDEX CONCURRENTLY issues, so I'm looking at the code
from a maintainability point of view only.

Which senior hackers have previously reviewed it in detail?

The previous thread,
/messages/by-id/CABOikdMop5Rb_RnS2xFdAXMZGSqcJ-P-BY2ruMd+buUkJ4iDPw@mail.gmail.com
contains some discussion of it, which uncovered bugs in the initial idea
and gave rise to the current design.

Where would I go to get a good overview of the overall theory of operation?

The added README file does a pretty good job, I thought.

--
�lvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#36Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Alvaro Herrera (#31)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Jan 25, 2017 at 10:06 PM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

Reading 0001_track_root_lp_v9.patch again:

Thanks for the review.

+/*
+ * We use the same HEAP_LATEST_TUPLE flag to check if the tuple's

t_ctid field

+ * contains the root line pointer. We can't use the same
+ * HeapTupleHeaderIsHeapLatest macro because that also checks for

TID-equality

+ * to decide whether a tuple is at the end of the chain
+ */
+#define HeapTupleHeaderHasRootOffset(tup) \
+( \
+     ((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0 \
+)
+#define HeapTupleHeaderGetRootOffset(tup) \
+( \
+     AssertMacro(((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0), \
+     ItemPointerGetOffsetNumber(&(tup)->t_ctid) \
+)

Interesting stuff; it took me a bit to see why these macros are this
way. I propose the following wording which I think is clearer:

Return whether the tuple has a cached root offset. We don't use
HeapTupleHeaderIsHeapLatest because that one also considers the slow
case of scanning the whole block.

Umm, not scanning the whole block, but HeapTupleHeaderIsHeapLatest compares
t_ctid with the passed in TID and returns true if those matches. To know if
root lp is cached, we only rely on the HEAP_LATEST_TUPLE flag. Though if
the flag is set, then it implies latest tuple too.

Please flag the macros that have multiple evaluation hazards -- there
are a few of them.

Can you please tell me an example? I must be missing something.

+/*
+ * Get TID of next tuple in the update chain. Caller should have

checked that

+ * we are not already at the end of the chain because in that case

t_ctid may

+ * actually store the root line pointer of the HOT chain whose member

this

+ * tuple is.
+ */
+#define HeapTupleHeaderGetNextTid(tup, next_ctid) \
+do { \
+     AssertMacro(!((tup)->t_infomask2 & HEAP_LATEST_TUPLE)); \
+     ItemPointerCopy(&(tup)->t_ctid, (next_ctid)); \
+} while (0)

Actually, I think this macro could just return the TID so that it can be
used as struct assignment, just like ItemPointerCopy does internally --
callers can do
ctid = HeapTupleHeaderGetNextTid(tup);

Yes, makes sense. Will fix.

The API of RelationPutHeapTuple appears a bit contorted, where
root_offnum is both input and output. I think it's cleaner to have the
argument be the input, and have the output offset be the return value --
please check whether that simplifies things; for example I think this:

+                     root_offnum = InvalidOffsetNumber;
+                     RelationPutHeapTuple(relation, buffer, heaptup,

false,

+ &root_offnum);

becomes

root_offnum = RelationPutHeapTuple(relation, buffer, heaptup,
false,
InvalidOffsetNumber);

Make sense. Will fix.

Many comments lack finishing periods in complete sentences, which looks
odd. Please fix.

Sorry, not sure where I picked that style from. I see that the existing
code has both styles, though I will add finishing periods because I like
that way too.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#37Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Alvaro Herrera (#32)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Jan 26, 2017 at 2:38 AM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

Looking at your 0002 patch now. It no longer applies, but the conflicts
are trivial to fix. Please rebase and resubmit.

Thanks.

Maybe I will spend some time trying to
convert it to Perl using PostgresNode.

Agree. I put together a test harness to hammer the WARM code as much as we
can. This harness has already discovered some bugs, especially around index
creation part. It also discovered one outstanding bug in master, so it's
been useful. But I agree to rewrite it using perl.

I think having the "recheck" index methods create an ExecutorState looks
out of place. How difficult is it to pass the estate from the calling
code?

I couldn't find an easy way given the place where recheck is required. Can
you suggest something?

IMO heap_get_root_tuple_one should be called just heap_get_root_tuple().
That function and its plural sibling heap_get_root_tuples() should
indicate in their own comments what the expectations are regarding the
root_offsets output argument, rather than deferring to the comments in
the "internal" function, since they differ on that point; for the rest
of the invariants I think it makes sense to say "Also see the comment
for heap_get_root_tuples_internal". I wonder if heap_get_root_tuple
should just return the ctid instead of assigning the value to a
passed-in pointer, i.e.
OffsetNumber
heap_get_root_tuple(Page page, OffsetNumber target_offnum)
{
OffsetNumber off;
heap_get_root_tuples_internal(page, target_offnum, &off);
return off;
}

Yes, all of that makes sense. Will fix.

The simple_heap_update + CatalogUpdateIndexes pattern is getting
obnoxious. How about creating something like catalog_heap_update which
does both things at once, and stop bothering each callsite with the WARM
stuff? In fact, given that CatalogUpdateIndexes is used in other
places, maybe we should leave its API alone and create another function,
so that we don't have to change the many places that only do
simple_heap_insert. (Places like OperatorCreate which do either insert
or update could just move the index update call into each branch.)

What I ended up doing is I added two new APIs.
- CatalogUpdateHeapAndIndex
- CatalogInsertHeapAndIndex

I could replace almost all occurrences of simple_heap_update +
CatalogUpdateIndexes with the first API and simple_heap_insert +
CatalogUpdateIndexes with the second API. This looks like a good
improvement to me anyways since there are about 180 places where these
functions are called almost in the same pattern. Maybe it will also avoid
a bug when someone forgets to update the index after inserting/updating
heap.

I wonder if heap_hot_search_buffer() and heap_hot_search() should return
a tri-valued enum instead of boolean; that idea looks reasonable in
theory but callers have to do more work afterwards, so maybe not.

Ok. I'll try to rearrange it a bit. Maybe we should just have one API after all?
There are only a very few callers of these APIs.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#38Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Pavan Deolasee (#36)
Re: Patch: Write Amplification Reduction Method (WARM)

Pavan Deolasee wrote:

On Wed, Jan 25, 2017 at 10:06 PM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

+( \
+     ((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0 \
+)
+#define HeapTupleHeaderGetRootOffset(tup) \
+( \
+     AssertMacro(((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0), \
+     ItemPointerGetOffsetNumber(&(tup)->t_ctid) \
+)

Interesting stuff; it took me a bit to see why these macros are this
way. I propose the following wording which I think is clearer:

Return whether the tuple has a cached root offset. We don't use
HeapTupleHeaderIsHeapLatest because that one also considers the slow
case of scanning the whole block.

Umm, not scanning the whole block, but HeapTupleHeaderIsHeapLatest compares
t_ctid with the passed in TID and returns true if those matches. To know if
root lp is cached, we only rely on the HEAP_LATEST_TUPLE flag. Though if
the flag is set, then it implies latest tuple too.

Well, I'm just trying to fix the problem that when I saw that macro, I
thought "why is this checking the bitmask directly instead of using the
existing IsHeapLatest macro?" when I saw the code. It turned out that
IsHeapLatest is not just simply comparing the bitmask, but it also does
more expensive processing which is unwanted in this case. I think the
comment to this macro should explain why the other macro cannot be used.

Please flag the macros that have multiple evaluation hazards -- there
are a few of them.

Can you please tell me an example? I must be missing something.

Any macro that uses an argument more than once is subject to multiple
evaluations of that argument; for example, if you pass a function call to
the macro as one of the parameters, the function is called multiple
times. In many cases this is not a problem because the argument is
always a constant, but sometimes it does become a problem.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#39Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Alvaro Herrera (#32)
3 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Jan 26, 2017 at 2:38 AM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

Looking at your 0002 patch now. It no longer applies, but the conflicts
are trivial to fix. Please rebase and resubmit.

Please see rebased and updated patches attached.

I think having the "recheck" index methods create an ExecutorState looks
out of place. How difficult is it to pass the estate from the calling
code?

I couldn't find a good way to pass estate from the calling code. It would
require changes to many other APIs. I saw all other callers who need to
form index keys do that too. But please suggest if there are better ways.

OffsetNumber
heap_get_root_tuple(Page page, OffsetNumber target_offnum)
{
OffsetNumber off;
heap_get_root_tuples_internal(page, target_offnum, &off);
return off;
}

Ok. Changed this way. Definitely looks better.

The simple_heap_update + CatalogUpdateIndexes pattern is getting
obnoxious. How about creating something like catalog_heap_update which
does both things at once, and stop bothering each callsite with the WARM
stuff?

What I realised is that there are really 2 patterns:
1. simple_heap_insert, CatalogUpdateIndexes
2. simple_heap_update, CatalogUpdateIndexes

There are only a couple of places where we already have indexes open or
have more than one tuple to update, so we call CatalogIndexInsert
directly. What I ended up doing in the attached patch is adding two new
APIs, each combining the two steps of one of these patterns. It seems
much cleaner to me and also less buggy for future users. I hope I am not
missing a reason not to combine these steps.

I'm not real sure about the interface between index AM and executor,
namely IndexScanDesc->xs_tuple_recheck. For example, this pattern:
if (!scan->xs_recheck)
scan->xs_tuple_recheck = false;
else
scan->xs_tuple_recheck = true;
can become simply
scan->xs_tuple_recheck = scan->xs_recheck;

Fixed.

which looks odd. I can't pinpoint exactly what's the problem, though.
I'll continue looking at this one.

What we do is: if the index scan is marked to do recheck, we do it for
each tuple anyway. Otherwise, recheck is required only if the tuple
comes from a WARM chain.

I wonder if heap_hot_search_buffer() and heap_hot_search() should return
a tri-valued enum instead of boolean; that idea looks reasonable in
theory but callers have to do more work afterwards, so maybe not.

I did not do anything with this yet. But I agree with you that we need to
make it better/simpler. Will continue to work on that.

I've addressed other review comments on the 0001 patch, except this one.

+/*
+ * Get TID of next tuple in the update chain. Caller should have checked

that

+ * we are not already at the end of the chain because in that case

t_ctid may

+ * actually store the root line pointer of the HOT chain whose member

this

+ * tuple is.
+ */
+#define HeapTupleHeaderGetNextTid(tup, next_ctid) \
+do { \
+     AssertMacro(!((tup)->t_infomask2 & HEAP_LATEST_TUPLE)); \
+     ItemPointerCopy(&(tup)->t_ctid, (next_ctid)); \
+} while (0)

Actually, I think this macro could just return the TID so that it can be
used as struct assignment, just like ItemPointerCopy does internally --
callers can do
ctid = HeapTupleHeaderGetNextTid(tup);

While I agree with your proposal, I wonder why we have ItemPointerCopy()
in the first place, since we freely copy TIDs by struct assignment
elsewhere. Is there a reason for that? And if there is, does it impact
this specific case?

Other than the review comments, there were a couple of bugs that I
discovered while running the stress test, notably around visibility map
handling. The patch has fixes for those. I also ripped out the kludge to
record WARM-ness in the line pointer because that is no longer needed
after I reworked the code a few versions back.

The other critical bug I found, which unfortunately exists in master
too, is the index corruption during CIC. The patch includes the same fix
that I've proposed on the other thread. With these changes, the WARM
stress test has been running fine for the last 24 hours on a decently
powerful box. Multiple CREATE/DROP INDEX cycles and updates via
different indexed columns, with a mix of FOR SHARE/UPDATE and rollbacks,
did not produce any consistency issues. A side note: while performance
measurement wasn't a goal of the stress tests, WARM has done about 67%
more transactions than master in the 24 hour period (95M in master vs
156M in WARM, to be precise, on a 30GB table including indexes). I
believe the numbers would be far better had the test not been dropping
and recreating the indexes, which effectively cleans up all index bloat.
Also, the table is small enough to fit in shared buffers. I'll rerun
these tests with a much larger scale factor and without dropping
indexes.

Of course, make check-world, including all TAP tests, passes too.

CREATE INDEX CONCURRENTLY now works. The way we handle this is by
ensuring that no broken WARM chains are created while the initial index
build is happening. We check the list of attributes of indexes currently
in progress (i.e. not ready for inserts), and if any of these attributes
are being modified, we don't do a WARM update. This is enough to address
the CIC issue, and all other mechanisms remain the same as for HOT. I've
updated the README to include the CIC algorithm.

There is one issue that bothers me. The current implementation lacks the
ability to convert WARM chains back into HOT chains. README.WARM has a
proposal to do that, but it requires an additional free bit in the tuple
header (which we don't have) and, of course, it needs to be vetted and
implemented. If the heap ends up with many WARM tuples, index-only scans
will become ineffective, because an index-only scan cannot skip a heap
page if it contains a WARM tuple. Alternate ideas/suggestions and review
of the design are welcome!

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0002_warm_updates_v10.patch (application/octet-stream)
diff --git a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c
index 858798d..7a9a976 100644
--- a/contrib/bloom/blutils.c
+++ b/contrib/bloom/blutils.c
@@ -141,6 +141,7 @@ blhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index b2afdb7..ef3bfa3 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -115,6 +115,7 @@ brinhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c
index c2247ad..2135ae0 100644
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -92,6 +92,7 @@ gisthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c
index ec8ed33..4861957 100644
--- a/src/backend/access/hash/hash.c
+++ b/src/backend/access/hash/hash.c
@@ -89,6 +89,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = hashrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -269,6 +270,8 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 	OffsetNumber offnum;
 	ItemPointer current;
 	bool		res;
+	IndexTuple	itup;
+
 
 	/* Hash indexes are always lossy since we store only the hash code */
 	scan->xs_recheck = true;
@@ -306,8 +309,6 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 			 offnum <= maxoffnum;
 			 offnum = OffsetNumberNext(offnum))
 		{
-			IndexTuple	itup;
-
 			itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 			if (ItemPointerEquals(&(so->hashso_heappos), &(itup->t_tid)))
 				break;
diff --git a/src/backend/access/hash/hashsearch.c b/src/backend/access/hash/hashsearch.c
index a59ad6f..46a334c 100644
--- a/src/backend/access/hash/hashsearch.c
+++ b/src/backend/access/hash/hashsearch.c
@@ -59,6 +59,8 @@ _hash_next(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
 	return true;
 }
 
@@ -408,6 +410,9 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
+
 	return true;
 }
 
diff --git a/src/backend/access/hash/hashutil.c b/src/backend/access/hash/hashutil.c
index c705531..dcba734 100644
--- a/src/backend/access/hash/hashutil.c
+++ b/src/backend/access/hash/hashutil.c
@@ -17,8 +17,12 @@
 #include "access/hash.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
+#include "nodes/execnodes.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 #define CALC_NEW_BUCKET(old_bucket, lowmask) \
 			old_bucket | (lowmask + 1)
@@ -446,3 +450,109 @@ _hash_get_newbucket_from_oldbucket(Relation rel, Bucket old_bucket,
 
 	return new_bucket;
 }
+
+/*
+ * Recheck if the heap tuple satisfies the key stored in the index tuple
+ */
+bool
+hashrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	IndexInfo  *indexInfo;
+	EState	   *estate;
+	ExprContext *econtext;
+	TupleTableSlot *slot;
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	Datum		values2[INDEX_MAX_KEYS];
+	bool		isnull2[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	indexInfo = BuildIndexInfo(indexRel);
+
+	/*
+	 * The heap tuple must be put into a slot for FormIndexDatum.
+	 */
+	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
+
+	ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
+
+	/*
+	 * Typically the index won't have expressions, but if it does we need an
+	 * EState to evaluate them.  We need it for exclusion constraints too,
+	 * even if they are just on simple columns.
+	 */
+	if (indexInfo->ii_Expressions != NIL ||
+			indexInfo->ii_ExclusionOps != NULL)
+	{
+		estate = CreateExecutorState();
+		econtext = GetPerTupleExprContext(estate);
+		econtext->ecxt_scantuple = slot;
+	}
+	else
+		estate = NULL;
+
+	/*
+	 * Form the index values and isnull flags for the index entry that we need
+	 * to check.
+	 *
+	 * Note: if the index uses functions that are not as immutable as they are
+	 * supposed to be, this could produce an index tuple different from the
+	 * original.  The index AM can catch such errors by verifying that it
+	 * finds a matching index entry with the tuple's TID.  For exclusion
+	 * constraints we check this in check_exclusion_constraint().
+	 */
+	FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
+	/*
+	 * HASH indexes compute a hash value of the key and store that in the
+	 * index. So we must first obtain the hash of the value obtained from the
+	 * heap and then do a comparison
+	 */
+	_hash_convert_tuple(indexRel, values, isnull, values2, isnull2);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL then they are equal
+		 */
+		if (isnull2[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If either is NULL then they are not equal
+		 */
+		if (isnull2[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now do a raw memory comparison
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values2[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	if (estate != NULL)
+		FreeExecutorState(estate);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	return equal;
+}
diff --git a/src/backend/access/heap/README.WARM b/src/backend/access/heap/README.WARM
new file mode 100644
index 0000000..7b9a712
--- /dev/null
+++ b/src/backend/access/heap/README.WARM
@@ -0,0 +1,306 @@
+src/backend/access/heap/README.WARM
+
+Write Amplification Reduction Method (WARM)
+===========================================
+
+The Heap Only Tuple (HOT) feature largely eliminated redundant index
+entries and allowed re-use of the dead space occupied by previously
+updated or deleted tuples (see src/backend/access/heap/README.HOT).
+
+One of the necessary conditions for satisfying HOT update is that the
+update must not change a column used in any of the indexes on the table.
+The condition is sometimes hard to meet, especially for complex
+workloads with several indexes on large yet frequently updated tables.
+Worse, sometimes only one or two index columns may be updated, but the
+regular non-HOT update will still insert a new index entry in every
+index on the table, irrespective of whether the key pertaining to the
+index changed or not.
+
+WARM is a technique devised to address these problems.
+
+
+Update Chains With Multiple Index Entries Pointing to the Root
+--------------------------------------------------------------
+
+When a non-HOT update is caused by an index key change, a new index
+entry must be inserted for the changed index. But if the index key
+hasn't changed for other indexes, we don't really need to insert a new
+entry. Even though the existing index entry is pointing to the old
+tuple, the new tuple is reachable via the t_ctid chain. To keep things
+simple, a WARM update requires that the heap block must have enough
+space to store the new version of the tuple. This is the same as for
+HOT updates.
+
+In WARM, we ensure that every index entry always points to the root of
+the WARM chain. In fact, a WARM chain looks exactly like a HOT chain
+except for the fact that there could be multiple index entries pointing
+to the root of the chain. So when a new entry is inserted in an index
+for the updated tuple during a WARM update, the new entry is made to
+point to the root of the WARM chain.
+
+For example, take a table with two columns and an index on each of the
+columns. When a tuple is first inserted into the table, each index has
+exactly one entry pointing to the tuple.
+
+	lp [1]
+	[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's entry (aaaa) also points to 1
+
+Now if the tuple's second column is updated and if there is room on the
+page, we perform a WARM update. To do so, Index1 does not get any new
+entry and Index2's new entry will still point to the root tuple of the
+chain.
+
+	lp [1]  [2]
+	[1111, aaaa]->[1111, bbbb]
+
+	Index1's entry (1111) points to 1
+	Index2's old entry (aaaa) points to 1
+	Index2's new entry (bbbb) also points to 1
+
+"An update chain which has more than one index entry pointing to its
+root line pointer is called a WARM chain, and the action that creates a
+WARM chain is called a WARM update."
+
+Since all indexes always point to the root of the WARM chain, even when
+there is more than one index entry, WARM chains can be pruned and
+dead tuples can be removed without a need to do corresponding index
+cleanup.
+
+While this solves the problem of pruning dead tuples from a HOT/WARM
+chain, it also opens up a new technical challenge because now we have a
+situation where a heap tuple is reachable from multiple index entries,
+each having a different index key. While MVCC still ensures that only
+valid tuples are returned, a tuple with a wrong index key may be
+returned because of wrong index entries. In the above example, tuple
+[1111, bbbb] is reachable from both keys (aaaa) as well as (bbbb). For
+this reason, tuples returned from a WARM chain must always be rechecked
+for index key-match.
+
+Recheck Index Key Against Heap Tuple
+------------------------------------
+
+Since every Index AM has its own notion of index tuples, each Index AM
+must implement its own method to recheck heap tuples. For example, a
+hash index stores the hash value of the column and hence recheck routine
+for hash AM must first compute the hash value of the heap attribute and
+then compare it against the value stored in the index tuple.
+
+The patch currently implements recheck routines for hash and btree
+indexes. If the table has an index which doesn't provide a recheck
+routine, WARM updates are disabled on that table.
+
+Problem With Duplicate (key, ctid) Index Entries
+------------------------------------------------
+
+The index-key recheck logic works as long as there are no duplicate
+index keys pointing to the same WARM chain. With duplicates, the same
+valid tuple will be reachable via multiple index keys, yet satisfying
+the index key checks. In the above example, if the tuple [1111, bbbb] is
+again updated to [1111, aaaa] and if we insert a new index entry (aaaa)
+pointing to the root line pointer, we will end up with the following
+structure:
+
+	lp [1]  [2]  [3]
+	[1111, aaaa]->[1111, bbbb]->[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's oldest entry (aaaa) points to 1
+	Index2's old entry (bbbb) also points to 1
+	Index2's new entry (aaaa) also points to 1
+
+We must solve this problem to ensure that the same tuple is not
+reachable via multiple index pointers. There are a couple of ways to
+address this issue:
+
+1. Do not allow WARM update to a tuple from a WARM chain. This
+guarantees that there can never be duplicate index entries to the same
+root line pointer because we must have checked for old and new index
+keys while doing the first WARM update.
+
+2. Do not allow duplicate (key, ctid) index pointers. In the above
+example, since (aaaa, 1) already exists in the index, we must not insert
+a duplicate index entry.
+
+The patch currently implements 1 i.e. do not do WARM updates to a tuple
+from a WARM chain. HOT updates are fine because they do not add a new
+index entry.
+
+Even with the restriction, this is a significant improvement because the
+number of regular UPDATEs is cut roughly in half.
+
+Expression and Partial Indexes
+------------------------------
+
+Expressions may evaluate to the same value even if the underlying column
+values have changed. A simple example is an index on "lower(col)" which
+will return the same value if the new heap value only differs in the
+case sensitivity. So we can not solely rely on the heap column check to
+decide whether or not to insert a new index entry for expression
+indexes. Similarly, for partial indexes, the predicate expression must
+be evaluated to decide whether or not to cause a new index entry when
+columns referred in the predicate expressions change.
+
+(None of this is currently implemented; we simply disallow a WARM
+update if a column used in an expression index or in an index predicate
+has changed.)
+
+
+Efficiently Finding the Root Line Pointer
+-----------------------------------------
+
+During WARM update, we must be able to find the root line pointer of the
+tuple being updated. It must be noted that the t_ctid field in the heap
+tuple header is usually used to find the next tuple in the update chain.
+But the tuple that we are updating must be the last tuple in the update
+chain. In such cases, t_ctid usually points to the tuple itself.
+So in theory, we could use the t_ctid to store additional information in
+the last tuple of the update chain, if the information about the tuple
+being the last tuple is stored elsewhere.
+
+We now utilize another bit from t_infomask2 to explicitly identify that
+this is the last tuple in the update chain.
+
+HEAP_LATEST_TUPLE - When this bit is set, the tuple is the last tuple in
+the update chain. The OffsetNumber part of t_ctid points to the root
+line pointer of the chain when HEAP_LATEST_TUPLE flag is set.
+
+If the UPDATE operation is aborted, the last tuple in the update chain
+becomes dead. The root line pointer information stored in the tuple
+which remains the last valid tuple in the chain is also lost. In such
+rare cases, the root line pointer must be found the hard way, by
+scanning the entire heap page.
+
+Tracking WARM Chains
+--------------------
+
+The old and every subsequent tuple in the chain is marked with a special
+HEAP_WARM_TUPLE flag. We use the last remaining bit in t_infomask2 to
+store this information.
+
+When a tuple is returned from a WARM chain, the caller must do
+additional checks to ensure that the tuple matches the index key. Even
+if the tuple precedes the WARM update in the chain, it must still be
+rechecked for an index key match (the case when the old tuple is
+returned via the new index key). So we must follow the update chain to
+the end every time to check whether this is a WARM chain.
+
+When the old updated tuple is retired and the root line pointer is
+converted into a redirected line pointer, we can copy the information
+about WARM chain to the redirected line pointer by storing a special
+value in the lp_len field of the line pointer. This will handle the most
+common case where a WARM chain is replaced by a redirect line pointer
+and a single tuple in the chain.
+
+Converting WARM chains back to HOT chains (VACUUM ?)
+----------------------------------------------------
+
+The current implementation of WARM allows only one WARM update per
+chain. This simplifies the design and addresses certain issues around
+duplicate scans. But this also implies that the benefit of WARM will be
+no more than 50%, which is still significant, but if we could return
+WARM chains back to normal status, we could do far more WARM updates.
+
+A distinct property of a WARM chain is that at least one index has more
+than one live index entries pointing to the root of the chain. In other
+words, if we can remove duplicate entry from every index or conclusively
+prove that there are no duplicate index entries for the root line
+pointer, the chain can again be marked as HOT.
+
+Here is one idea:
+
+A WARM chain has two parts, separated by the tuple that caused WARM
+update. All tuples in each part have matching index keys, but certain
+index keys may not match between these two parts. Let's say we mark heap
+tuples in each part with a special Red-Blue flag. The same flag is
+replicated in the index tuples. For example, when new rows are inserted
+in a table, they are marked with Blue flag and the index entries
+associated with those rows are also marked with Blue flag. When a row is
+WARM updated, the new version is marked with Red flag and the new index
+entry created by the update is also marked with Red flag.
+
+
+Heap chain: [1] [2] [3] [4]
+			[aaaa, 1111]B -> [aaaa, 1111]B -> [bbbb, 1111]R -> [bbbb, 1111]R
+
+Index1: 	(aaaa)B points to 1 (satisfies only tuples marked with B)
+			(bbbb)R points to 1 (satisfies only tuples marked with R)
+
+Index2:		(1111)B points to 1 (satisfies both B and R tuples)
+
+
+It's clear that for indexes with Red and Blue pointers, a heap tuple
+with Blue flag will be reachable from Blue pointer and that with Red
+flag will be reachable from Red pointer. But for indexes which did not
+create a new entry, both Blue and Red tuples will be reachable from Blue
+pointer (there is no Red pointer in such indexes). So, as a side note,
+matching Red and Blue flags is not enough from index scan perspective.
+
+During first heap scan of VACUUM, we look for tuples with
+HEAP_WARM_TUPLE set.  If all live tuples in the chain are either marked
+with Blue flag or Red flag (but no mix of Red and Blue), then the chain
+is a candidate for HOT conversion.  We remember the root line pointer
+and Red-Blue flag of the WARM chain in a separate array.
+
+If we have a Red WARM chain, then our goal is to remove Blue pointers
+and vice versa. But there is a catch. For Index2 above, there is only
+Blue pointer and that must not be removed. IOW we should remove Blue
+pointer iff a Red pointer exists. Since index vacuum may visit Red and
+Blue pointers in any order, I think we will need another index pass to
+remove dead index pointers. So in the first index pass we check which
+WARM candidates have 2 index pointers. In the second pass, we remove the
+dead pointer and reset the Red flag if the surviving index pointer is Red.
+
+During the second heap scan, we fix WARM chain by clearing
+HEAP_WARM_TUPLE flag and also reset Red flag to Blue.
+
+There are some more problems around aborted vacuums. For example, if
+vacuum aborts after changing Red index flag to Blue but before removing
+the other Blue pointer, we will end up with two Blue pointers to a Red
+WARM chain. But since the HEAP_WARM_TUPLE flag on the heap tuple is
+still set, further WARM updates to the chain will be blocked. I guess we
+will need some special handling for case with multiple Blue pointers. We
+can either leave these WARM chains alone and let them die with a
+subsequent non-WARM update or must apply heap-recheck logic during index
+vacuum to find the dead pointer. Given that vacuum-aborts are not
+common, I am inclined to leave this case unhandled. We must still check
+for the presence of multiple Blue pointers and ensure that we neither
+accidentally remove either of the Blue pointers nor clear the WARM chain
+flags.
+
+CREATE INDEX CONCURRENTLY
+-------------------------
+
+Currently CREATE INDEX CONCURRENTLY (CIC) is implemented as a 3-phase
+process.  In the first phase, we create catalog entry for the new index
+so that the index is visible to all other backends, but still don't use
+it for either read or write.  But we ensure that no new broken HOT
+chains are created by new transactions. In the second phase, we build
+the new index using a MVCC snapshot and then make the index available
+for inserts. We then do another pass over the index and insert any
+missing tuples, each time indexing only the root line pointer. See
+README.HOT for details about how HOT impacts CIC and how various
+challenges are tackled.
+
+WARM poses another challenge because it allows creation of HOT chains
+even when an index key is changed. But since the index is not ready for
+insertion until the second phase is over, we might end up with a
+situation where the HOT chain has tuples with different index columns,
+yet only one of these values is indexed by the new index. Note that
+during the third phase, we only index tuples whose root line pointer is
+missing from the index. But we can't easily check if the existing index
+tuple is actually indexing the heap tuple visible to the new MVCC
+snapshot. Finding that information will require us to query the index
+again for every tuple in the chain, especially if it's a WARM tuple.
+This would require repeated access to the index. Another option would be
+to return index keys along with the heap TIDs when index is scanned for
+collecting all indexed TIDs during third phase. We can then compare the
+heap tuple against the already indexed key and decide whether or not to
+index the new tuple.
+
+We solve this problem more simply by disallowing WARM updates until the
+index is ready for insertion. We don't need to disallow WARM on a
+wholesale basis, but only those updates that change the columns of the
+new index are disallowed to be WARM updates.
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 5149c07..8be0137 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -1957,6 +1957,78 @@ heap_fetch(Relation relation,
 }
 
 /*
+ * Check if the HOT chain containing this tid is actually a WARM chain.
+ * Note that even if the WARM update ultimately aborted, we still must do a
+ * recheck because the failing UPDATE may have inserted index entries
+ * which are now stale, but still referencing this chain.
+ */
+static bool
+hot_check_warm_chain(Page dp, ItemPointer tid)
+{
+	TransactionId prev_xmax = InvalidTransactionId;
+	OffsetNumber offnum;
+	HeapTupleData heapTuple;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+			break;
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		/*
+		 * Presence of either WARM or WARM updated tuple signals possible
+		 * breakage and the caller must recheck tuple returned from this chain
+		 * for index satisfaction
+		 */
+		if (HeapTupleHeaderIsHeapWarmTuple(heapTuple.t_data))
+			return true;
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
+		/*
+		 * It can't be a HOT chain if the tuple contains root line pointer
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	/* All OK. No need to recheck */
+	return false;
+}
+
+/*
  *	heap_hot_search_buffer	- search HOT chain for tuple satisfying snapshot
  *
  * On entry, *tid is the TID of a tuple (either a simple tuple, or the root
@@ -1976,11 +2048,14 @@ heap_fetch(Relation relation,
  * Unlike heap_fetch, the caller must already have pin and (at least) share
  * lock on the buffer; it is still pinned/locked at exit.  Also unlike
  * heap_fetch, we do not report any pgstats count; caller may do so if wanted.
+ *
+ * recheck should be set false on entry by caller, will be set true on exit
+ * if a WARM tuple is encountered.
  */
 bool
 heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 					   Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call)
+					   bool *all_dead, bool first_call, bool *recheck)
 {
 	Page		dp = (Page) BufferGetPage(buffer);
 	TransactionId prev_xmax = InvalidTransactionId;
@@ -2034,9 +2109,12 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		ItemPointerSetOffsetNumber(&heapTuple->t_self, offnum);
 
 		/*
-		 * Shouldn't see a HEAP_ONLY tuple at chain start.
+		 * Shouldn't see a HEAP_ONLY tuple at chain start, unless we are
+		 * dealing with a WARM updated tuple in which case deferred triggers
+		 * may request to fetch a WARM tuple from the middle of a chain.
 		 */
-		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple))
+		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple) &&
+				!HeapTupleIsHeapWarmTuple(heapTuple))
 			break;
 
 		/*
@@ -2049,6 +2127,16 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 			break;
 
 		/*
+		 * Check if there exists a WARM tuple somewhere down the chain and set
+		 * recheck to TRUE.
+		 *
+		 * XXX This is not very efficient right now, and we should look for
+		 * possible improvements here
+		 */
+		if (recheck && *recheck == false)
+			*recheck = hot_check_warm_chain(dp, &heapTuple->t_self);
+
+		/*
 		 * When first_call is true (and thus, skip is initially false) we'll
 		 * return the first tuple we find.  But on later passes, heapTuple
 		 * will initially be pointing to the tuple we returned last time.
@@ -2097,7 +2185,8 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		 * Check to see if HOT chain continues past this tuple; if so fetch
 		 * the next offnum and loop around.
 		 */
-		if (HeapTupleIsHotUpdated(heapTuple))
+		if (HeapTupleIsHotUpdated(heapTuple) &&
+			!HeapTupleHeaderHasRootOffset(heapTuple->t_data))
 		{
 			Assert(ItemPointerGetBlockNumber(&heapTuple->t_data->t_ctid) ==
 				   ItemPointerGetBlockNumber(tid));
@@ -2121,18 +2210,41 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
  */
 bool
 heap_hot_search(ItemPointer tid, Relation relation, Snapshot snapshot,
-				bool *all_dead)
+				bool *all_dead, bool *recheck, Buffer *cbuffer,
+				HeapTuple heapTuple)
 {
 	bool		result;
 	Buffer		buffer;
-	HeapTupleData heapTuple;
+	ItemPointerData ret_tid = *tid;
 
 	buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	LockBuffer(buffer, BUFFER_LOCK_SHARE);
-	result = heap_hot_search_buffer(tid, relation, buffer, snapshot,
-									&heapTuple, all_dead, true);
-	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-	ReleaseBuffer(buffer);
+	result = heap_hot_search_buffer(&ret_tid, relation, buffer, snapshot,
+									heapTuple, all_dead, true, recheck);
+
+	/*
+	 * If we are returning a candidate tuple from this chain and the caller
+	 * has asked for the "recheck" hint, keep the buffer locked and pinned.
+	 * The caller must then release the lock and pin on the buffer.
+	 */
+	if (!result || !recheck || !(*recheck))
+	{
+		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+		ReleaseBuffer(buffer);
+	}
+
+	/*
+	 * Set the caller-supplied tid to the actual location of the tuple
+	 * being returned.
+	 */
+	if (result)
+	{
+		*tid = ret_tid;
+		if (cbuffer)
+			*cbuffer = buffer;
+	}
+
 	return result;
 }
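The buffer-ownership rule introduced by the new heap_hot_search signature can be condensed into a tiny standalone predicate. This is a sketch with made-up names, not PostgreSQL code: the caller owns the lock and pin only when a tuple was found and the recheck hint was both requested and set; in every other case the function released the buffer itself.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Mirrors the patch's "if (!result || !recheck || !(*recheck))
 * release inside" logic: returns true when the caller is left
 * holding the buffer lock and pin and must release them.
 */
static bool
caller_owns_buffer(bool found, bool recheck_requested, bool recheck_set)
{
	return found && recheck_requested && recheck_set;
}
```

Any one of the three conditions failing means heap_hot_search already unlocked and released the buffer.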
 
@@ -3491,15 +3603,18 @@ simple_heap_delete(Relation relation, ItemPointer tid)
 HTSU_Result
 heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode)
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update)
 {
 	HTSU_Result result;
 	TransactionId xid = GetCurrentTransactionId();
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *exprindx_attrs;
 	Bitmapset  *interesting_attrs;
 	Bitmapset  *modified_attrs;
+	Bitmapset  *notready_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3520,6 +3635,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		have_tuple_lock = false;
 	bool		iscombo;
 	bool		use_hot_update = false;
+	bool		use_warm_update = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
 	bool		all_visible_cleared_new = false;
@@ -3544,6 +3660,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot update tuples during a parallel operation")));
 
+	/* Assume a non-WARM update */
+	if (warm_update)
+		*warm_update = false;
+
 	/*
 	 * Fetch the list of attributes to be checked for various operations.
 	 *
@@ -3565,10 +3685,17 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	exprindx_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_EXPR_PREDICATE);
+	notready_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_NOTREADY);
+
 	interesting_attrs = bms_add_members(NULL, hot_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
-
+	interesting_attrs = bms_add_members(interesting_attrs, exprindx_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, notready_attrs);
 
 	block = ItemPointerGetBlockNumber(otid);
 	offnum = ItemPointerGetOffsetNumber(otid);
@@ -3620,6 +3747,9 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
 												  &oldtup, newtup);
 
+	if (modified_attrsp)
+		*modified_attrsp = bms_copy(modified_attrs);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3875,6 +4005,7 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(exprindx_attrs);
 		bms_free(modified_attrs);
 		bms_free(interesting_attrs);
 		return result;
@@ -4193,6 +4324,37 @@ l2:
 		 */
 		if (!bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
+		else
+		{
+			/*
+			 * If no WARM updates yet on this chain, let this update be a WARM
+			 * update.
+			 *
+			 * We check for both warm and warm updated tuples since if the
+			 * previous WARM update aborted, we may still have added
+			 * another index entry for this HOT chain. In such situations, we
+			 * must not attempt a WARM update until the duplicate (key, CTID)
+			 * index entry issue is sorted out.
+			 *
+			 * XXX Later we'll add more checks to ensure WARM chains can
+			 * further be WARM updated. This is probably good enough for a
+			 * first round of tests of the remaining functionality.
+			 *
+			 * XXX Disable WARM updates on system tables. There is nothing in
+			 * principle that stops us from supporting this. But it would
+			 * require an API change to propagate the changed columns back to the
+			 * caller so that CatalogUpdateIndexes() can avoid adding new
+			 * entries to indexes that are not changed by the update. This will
+			 * be fixed once the basic patch is tested. !!FIXME
+			 */
+			if (relation->rd_supportswarm &&
+				!bms_overlap(modified_attrs, exprindx_attrs) &&
+				!bms_is_subset(hot_attrs, modified_attrs) &&
+				!IsSystemRelation(relation) &&
+				!bms_overlap(notready_attrs, modified_attrs) &&
+				!HeapTupleIsHeapWarmTuple(&oldtup))
+				use_warm_update = true;
+		}
 	}
 	else
 	{
@@ -4239,6 +4401,22 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+
+		/*
+		 * Even if we are doing a HOT update, we must carry forward the WARM
+		 * flag because we may have already inserted another index entry
+		 * pointing to our root and a third entry may create duplicates
+		 *
+		 * Note: If we ever have a mechanism to avoid duplicate <key, TID> in
+		 * indexes, we could look at relaxing this restriction and allow even
+		 * more WARM updates.
+		 */
+		if (HeapTupleIsHeapWarmTuple(&oldtup))
+		{
+			HeapTupleSetHeapWarmTuple(heaptup);
+			HeapTupleSetHeapWarmTuple(newtup);
+		}
+
 		/*
 		 * For HOT (or WARM) updated tuples, we store the offset of the root
 		 * line pointer of this chain in the ip_posid field of the new tuple.
@@ -4251,12 +4429,35 @@ l2:
 		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
 			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
+	else if (use_warm_update)
+	{
+		/* Mark the old tuple as HOT-updated */
+		HeapTupleSetHotUpdated(&oldtup);
+		HeapTupleSetHeapWarmTuple(&oldtup);
+		/* And mark the new tuple as heap-only */
+		HeapTupleSetHeapOnly(heaptup);
+		HeapTupleSetHeapWarmTuple(heaptup);
+		/* Mark the caller's copy too, in case different from heaptup */
+		HeapTupleSetHeapOnly(newtup);
+		HeapTupleSetHeapWarmTuple(newtup);
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
+		/* Let the caller know we did a WARM update */
+		if (warm_update)
+			*warm_update = true;
+	}
 	else
 	{
 		/* Make sure tuples are correctly marked as not-HOT */
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		HeapTupleClearHeapWarmTuple(heaptup);
+		HeapTupleClearHeapWarmTuple(newtup);
 		root_offnum = InvalidOffsetNumber;
 	}
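The update-classification branches above can be summarized as one standalone decision function. This is a sketch with toy types and invented names (`classify_update`, `attrset`), not the patch's actual code, but it mirrors the branch structure: HOT when no indexed column changed, WARM only when the relation supports it, no expression/predicate or not-ready index column changed, at least one indexed column is unchanged, the relation is not a system catalog, and the old tuple is not already part of a WARM chain.

```c
#include <assert.h>
#include <stdbool.h>

typedef unsigned int attrset;	/* toy stand-in for Bitmapset */

enum update_kind { UPDATE_HOT, UPDATE_WARM, UPDATE_COLD };

static enum update_kind
classify_update(attrset modified, attrset hot_attrs,
				attrset expr_attrs, attrset notready_attrs,
				bool rel_supports_warm, bool old_is_warm,
				bool is_system_rel)
{
	if ((modified & hot_attrs) == 0)
		return UPDATE_HOT;		/* no indexed column changed */
	if (rel_supports_warm &&
		(modified & expr_attrs) == 0 &&
		(hot_attrs & ~modified) != 0 &&	/* hot_attrs not a subset of modified */
		!is_system_rel &&
		(modified & notready_attrs) == 0 &&
		!old_is_warm)
		return UPDATE_WARM;
	return UPDATE_COLD;			/* regular, non-HOT update */
}
```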
 
@@ -4366,7 +4567,10 @@ l2:
 	if (have_tuple_lock)
 		UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode);
 
-	pgstat_count_heap_update(relation, use_hot_update);
+	/*
+	 * Count HOT and WARM updates separately
+	 */
+	pgstat_count_heap_update(relation, use_hot_update, use_warm_update);
 
 	/*
 	 * If heaptup is a private copy, release it.  Don't forget to copy t_self
@@ -4506,7 +4710,8 @@ HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
  * via ereport().
  */
 void
-simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
+simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup,
+		Bitmapset **modified_attrs, bool *warm_update)
 {
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
@@ -4515,7 +4720,7 @@ simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
 	result = heap_update(relation, otid, tup,
 						 GetCurrentCommandId(true), InvalidSnapshot,
 						 true /* wait for commit */ ,
-						 &hufd, &lockmode);
+						 &hufd, &lockmode, modified_attrs, warm_update);
 	switch (result)
 	{
 		case HeapTupleSelfUpdated:
@@ -7567,6 +7772,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	bool		need_tuple_data = RelationIsLogicallyLogged(reln);
 	bool		init;
 	int			bufflags;
+	bool		warm_update = false;
 
 	/* Caller should not call me on a non-WAL-logged relation */
 	Assert(RelationNeedsWAL(reln));
@@ -7578,6 +7784,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	else
 		info = XLOG_HEAP_UPDATE;
 
+	if (HeapTupleIsHeapWarmTuple(newtup))
+		warm_update = true;
+
 	/*
 	 * If the old and new tuple are on the same page, we only need to log the
 	 * parts of the new tuple that were changed.  That saves on the amount of
@@ -7651,6 +7860,8 @@ log_heap_update(Relation reln, Buffer oldbuf,
 				xlrec.flags |= XLH_UPDATE_CONTAINS_OLD_KEY;
 		}
 	}
+	if (warm_update)
+		xlrec.flags |= XLH_UPDATE_WARM_UPDATE;
 
 	/* If new tuple is the single and first tuple on page... */
 	if (ItemPointerGetOffsetNumber(&(newtup->t_self)) == FirstOffsetNumber &&
@@ -8628,16 +8839,22 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 	Size		freespace = 0;
 	XLogRedoAction oldaction;
 	XLogRedoAction newaction;
+	bool		warm_update = false;
 
 	/* initialize to keep the compiler quiet */
 	oldtup.t_data = NULL;
 	oldtup.t_len = 0;
 
+	if (xlrec->flags & XLH_UPDATE_WARM_UPDATE)
+		warm_update = true;
+
 	XLogRecGetBlockTag(record, 0, &rnode, NULL, &newblk);
 	if (XLogRecGetBlockTag(record, 1, NULL, NULL, &oldblk))
 	{
 		/* HOT updates are never done across pages */
 		Assert(!hot_update);
+		/* WARM updates are never done across pages */
+		Assert(!warm_update);
 	}
 	else
 		oldblk = newblk;
@@ -8697,6 +8914,11 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 								   &htup->t_infomask2);
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
+
+		/* Mark the old tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetHeapWarmTuple(htup);
+
 		/* Set forward chain link in t_ctid */
 		HeapTupleHeaderSetNextTid(htup, &newtid);
 
@@ -8832,6 +9054,10 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
 
+		/* Mark the new tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetHeapWarmTuple(htup);
+
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
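The WAL round-trip for the new flag is symmetric: log_heap_update sets a bit in the record's flags when the new tuple is WARM, and heap_xlog_update tests the same bit during redo to re-mark both tuple versions. A standalone sketch (the bit position here is assumed, not the value from the real headers):

```c
#include <assert.h>
#include <stdbool.h>

#define XLH_UPDATE_WARM_UPDATE	(1 << 7)	/* assumed bit position */

/* Writer side: fold the WARM property into the WAL record flags. */
static unsigned char
build_update_flags(unsigned char flags, bool newtup_is_warm)
{
	if (newtup_is_warm)
		flags |= XLH_UPDATE_WARM_UPDATE;
	return flags;
}

/* Redo side: recover the WARM property from the same flags byte. */
static bool
redo_sees_warm_update(unsigned char flags)
{
	return (flags & XLH_UPDATE_WARM_UPDATE) != 0;
}
```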
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index f54337c..c2bd7d6 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -834,6 +834,13 @@ heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				continue;
 
+			/*
+			 * If the tuple has a root line pointer, it must be the end of
+			 * the chain.
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			/* Set up to scan the HOT-chain */
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
diff --git a/src/backend/access/index/indexam.c b/src/backend/access/index/indexam.c
index ba27c1e..3cbe1d0 100644
--- a/src/backend/access/index/indexam.c
+++ b/src/backend/access/index/indexam.c
@@ -75,10 +75,12 @@
 #include "access/xlog.h"
 #include "catalog/catalog.h"
 #include "catalog/index.h"
+#include "executor/executor.h"
 #include "pgstat.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
+#include "utils/datum.h"
 #include "utils/snapmgr.h"
 #include "utils/tqual.h"
 
@@ -233,6 +235,21 @@ index_beginscan(Relation heapRelation,
 	scan->heapRelation = heapRelation;
 	scan->xs_snapshot = snapshot;
 
+	/*
+	 * If the index supports recheck, make sure that the index tuple is saved
+	 * during index scans.
+	 *
+	 * XXX Ideally, we should look at all indexes on the table and check if
+	 * WARM is at all supported on the base table. If WARM is not supported
+	 * then we don't need to do any recheck. RelationGetIndexAttrBitmap() does
+	 * do that and sets rd_supportswarm after looking at all indexes. But we
+	 * don't know if the function was called earlier in the session when we're
+	 * here. We can't call it now because there exists a risk of causing
+	 * deadlock.
+	 */
+	if (indexRelation->rd_amroutine->amrecheck)
+		scan->xs_want_itup = true;
+
 	return scan;
 }
 
@@ -534,7 +551,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
 	/*
 	 * The AM's amgettuple proc finds the next index entry matching the scan
 	 * keys, and puts the TID into scan->xs_ctup.t_self.  It should also set
-	 * scan->xs_recheck and possibly scan->xs_itup, though we pay no attention
+	 * scan->xs_tuple_recheck and possibly scan->xs_itup, though we pay no attention
 	 * to those fields here.
 	 */
 	found = scan->indexRelation->rd_amroutine->amgettuple(scan, direction);
@@ -573,7 +590,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
  * dropped in a future index_getnext_tid, index_fetch_heap or index_endscan
  * call).
  *
- * Note: caller must check scan->xs_recheck, and perform rechecking of the
+ * Note: caller must check scan->xs_tuple_recheck, and perform rechecking of the
  * scan keys if required.  We do not do that here because we don't have
  * enough information to do it efficiently in the general case.
  * ----------------
@@ -600,6 +617,12 @@ index_fetch_heap(IndexScanDesc scan)
 		 */
 		if (prev_buf != scan->xs_cbuf)
 			heap_page_prune_opt(scan->heapRelation, scan->xs_cbuf);
+
+		/*
+		 * If we're not always re-checking, reset recheck for this tuple.
+		 * Otherwise we must recheck every tuple.
+		 */
+		scan->xs_tuple_recheck = scan->xs_recheck;
 	}
 
 	/* Obtain share-lock on the buffer so we can examine visibility */
@@ -609,32 +632,64 @@ index_fetch_heap(IndexScanDesc scan)
 											scan->xs_snapshot,
 											&scan->xs_ctup,
 											&all_dead,
-											!scan->xs_continue_hot);
+											!scan->xs_continue_hot,
+											&scan->xs_tuple_recheck);
 	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
 
 	if (got_heap_tuple)
 	{
+		bool res = true;
+
+		/*
+		 * OK, we got a tuple which satisfies the snapshot, but if it's part
+		 * of a WARM chain, we must do additional checks to ensure that we
+		 * are indeed returning a correct tuple. Note that if the index AM
+		 * does not implement the amrecheck method, then we don't do any
+		 * additional checks, since WARM must have been disabled on such
+		 * tables.
+		 *
+		 * XXX What happens when a new index which does not support amrecheck
+		 * is added to the table? Do we need to handle this case, or are
+		 * CREATE INDEX and CREATE INDEX CONCURRENTLY smart enough to handle
+		 * this issue?
+		 */
+		if (scan->xs_tuple_recheck &&
+				scan->xs_itup &&
+				scan->indexRelation->rd_amroutine->amrecheck)
+		{
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_SHARE);
+			res = scan->indexRelation->rd_amroutine->amrecheck(
+						scan->indexRelation,
+						scan->xs_itup,
+						scan->heapRelation,
+						&scan->xs_ctup);
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+		}
+
 		/*
 		 * Only in a non-MVCC snapshot can more than one member of the HOT
 		 * chain be visible.
 		 */
 		scan->xs_continue_hot = !IsMVCCSnapshot(scan->xs_snapshot);
 		pgstat_count_heap_fetch(scan->indexRelation);
-		return &scan->xs_ctup;
-	}
 
-	/* We've reached the end of the HOT chain. */
-	scan->xs_continue_hot = false;
+		if (res)
+			return &scan->xs_ctup;
+	}
+	else
+	{
+		/* We've reached the end of the HOT chain. */
+		scan->xs_continue_hot = false;
 
-	/*
-	 * If we scanned a whole HOT chain and found only dead tuples, tell index
-	 * AM to kill its entry for that TID (this will take effect in the next
-	 * amgettuple call, in index_getnext_tid).  We do not do this when in
-	 * recovery because it may violate MVCC to do so.  See comments in
-	 * RelationGetIndexScan().
-	 */
-	if (!scan->xactStartedInRecovery)
-		scan->kill_prior_tuple = all_dead;
+		/*
+		 * If we scanned a whole HOT chain and found only dead tuples, tell index
+		 * AM to kill its entry for that TID (this will take effect in the next
+		 * amgettuple call, in index_getnext_tid).  We do not do this when in
+		 * recovery because it may violate MVCC to do so.  See comments in
+		 * RelationGetIndexScan().
+		 */
+		if (!scan->xactStartedInRecovery)
+			scan->kill_prior_tuple = all_dead;
+	}
 
 	return NULL;
 }
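The reworked tail of index_fetch_heap boils down to: return the fetched tuple only if it either needs no recheck or the AM's amrecheck callback approves it; a NULL fetch means the end of the HOT chain. A minimal standalone model (illustrative names, not the executor API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef bool (*amrecheck_fn) (const void *index_tuple,
							  const void *heap_tuple);

static const void *
return_tuple_if_valid(const void *heap_tuple, bool tuple_recheck,
					  const void *index_tuple, amrecheck_fn amrecheck)
{
	bool		res = true;

	if (heap_tuple == NULL)
		return NULL;			/* reached the end of the HOT chain */
	if (tuple_recheck && index_tuple != NULL && amrecheck != NULL)
		res = amrecheck(index_tuple, heap_tuple);
	return res ? heap_tuple : NULL;	/* discard tuples failing recheck */
}

/* Trivial callbacks standing in for an AM's amrecheck method. */
static bool
always_match(const void *i, const void *h)
{
	(void) i; (void) h;
	return true;
}

static bool
never_match(const void *i, const void *h)
{
	(void) i; (void) h;
	return false;
}
```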
diff --git a/src/backend/access/nbtree/nbtinsert.c b/src/backend/access/nbtree/nbtinsert.c
index 883d70d..6efccf7 100644
--- a/src/backend/access/nbtree/nbtinsert.c
+++ b/src/backend/access/nbtree/nbtinsert.c
@@ -19,11 +19,14 @@
 #include "access/nbtree.h"
 #include "access/transam.h"
 #include "access/xloginsert.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
 #include "miscadmin.h"
+#include "nodes/execnodes.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
 #include "utils/tqual.h"
-
+#include "utils/datum.h"
 
 typedef struct
 {
@@ -249,6 +252,9 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 	BTPageOpaque opaque;
 	Buffer		nbuf = InvalidBuffer;
 	bool		found = false;
+	Buffer		buffer;
+	HeapTupleData	heapTuple;
+	bool		recheck = false;
 
 	/* Assume unique until we find a duplicate */
 	*is_unique = true;
@@ -308,6 +314,8 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				curitup = (IndexTuple) PageGetItem(page, curitemid);
 				htid = curitup->t_tid;
 
+				recheck = false;
+
 				/*
 				 * If we are doing a recheck, we expect to find the tuple we
 				 * are rechecking.  It's not a duplicate, but we have to keep
@@ -325,112 +333,153 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				 * have just a single index entry for the entire chain.
 				 */
 				else if (heap_hot_search(&htid, heapRel, &SnapshotDirty,
-										 &all_dead))
+							&all_dead, &recheck, &buffer,
+							&heapTuple))
 				{
 					TransactionId xwait;
+					bool result = true;
 
 					/*
-					 * It is a duplicate. If we are only doing a partial
-					 * check, then don't bother checking if the tuple is being
-					 * updated in another transaction. Just return the fact
-					 * that it is a potential conflict and leave the full
-					 * check till later.
+					 * If the tuple was WARM updated, we may again see our own
+					 * tuple. Since WARM updates don't create new index
+					 * entries, our own tuple is only reachable via the old
+					 * index pointer.
 					 */
-					if (checkUnique == UNIQUE_CHECK_PARTIAL)
+					if (checkUnique == UNIQUE_CHECK_EXISTING &&
+							ItemPointerCompare(&htid, &itup->t_tid) == 0)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						*is_unique = false;
-						return InvalidTransactionId;
+						found = true;
+						result = false;
+						if (recheck)
+							UnlockReleaseBuffer(buffer);
 					}
-
-					/*
-					 * If this tuple is being updated by other transaction
-					 * then we have to wait for its commit/abort.
-					 */
-					xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
-						SnapshotDirty.xmin : SnapshotDirty.xmax;
-
-					if (TransactionIdIsValid(xwait))
+					else if (recheck)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						/* Tell _bt_doinsert to wait... */
-						*speculativeToken = SnapshotDirty.speculativeToken;
-						return xwait;
+						result = btrecheck(rel, curitup, heapRel, &heapTuple);
+						UnlockReleaseBuffer(buffer);
 					}
 
-					/*
-					 * Otherwise we have a definite conflict.  But before
-					 * complaining, look to see if the tuple we want to insert
-					 * is itself now committed dead --- if so, don't complain.
-					 * This is a waste of time in normal scenarios but we must
-					 * do it to support CREATE INDEX CONCURRENTLY.
-					 *
-					 * We must follow HOT-chains here because during
-					 * concurrent index build, we insert the root TID though
-					 * the actual tuple may be somewhere in the HOT-chain.
-					 * While following the chain we might not stop at the
-					 * exact tuple which triggered the insert, but that's OK
-					 * because if we find a live tuple anywhere in this chain,
-					 * we have a unique key conflict.  The other live tuple is
-					 * not part of this chain because it had a different index
-					 * entry.
-					 */
-					htid = itup->t_tid;
-					if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL))
-					{
-						/* Normal case --- it's still live */
-					}
-					else
+					if (result)
 					{
 						/*
-						 * It's been deleted, so no error, and no need to
-						 * continue searching
+						 * It is a duplicate. If we are only doing a partial
+						 * check, then don't bother checking if the tuple is being
+						 * updated in another transaction. Just return the fact
+						 * that it is a potential conflict and leave the full
+						 * check till later.
 						 */
-						break;
-					}
+						if (checkUnique == UNIQUE_CHECK_PARTIAL)
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							*is_unique = false;
+							return InvalidTransactionId;
+						}
 
-					/*
-					 * Check for a conflict-in as we would if we were going to
-					 * write to this page.  We aren't actually going to write,
-					 * but we want a chance to report SSI conflicts that would
-					 * otherwise be masked by this unique constraint
-					 * violation.
-					 */
-					CheckForSerializableConflictIn(rel, NULL, buf);
+						/*
+						 * If this tuple is being updated by other transaction
+						 * then we have to wait for its commit/abort.
+						 */
+						xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
+							SnapshotDirty.xmin : SnapshotDirty.xmax;
+
+						if (TransactionIdIsValid(xwait))
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							/* Tell _bt_doinsert to wait... */
+							*speculativeToken = SnapshotDirty.speculativeToken;
+							return xwait;
+						}
 
-					/*
-					 * This is a definite conflict.  Break the tuple down into
-					 * datums and report the error.  But first, make sure we
-					 * release the buffer locks we're holding ---
-					 * BuildIndexValueDescription could make catalog accesses,
-					 * which in the worst case might touch this same index and
-					 * cause deadlocks.
-					 */
-					if (nbuf != InvalidBuffer)
-						_bt_relbuf(rel, nbuf);
-					_bt_relbuf(rel, buf);
+						/*
+						 * Otherwise we have a definite conflict.  But before
+						 * complaining, look to see if the tuple we want to insert
+						 * is itself now committed dead --- if so, don't complain.
+						 * This is a waste of time in normal scenarios but we must
+						 * do it to support CREATE INDEX CONCURRENTLY.
+						 *
+						 * We must follow HOT-chains here because during
+						 * concurrent index build, we insert the root TID though
+						 * the actual tuple may be somewhere in the HOT-chain.
+						 * While following the chain we might not stop at the
+						 * exact tuple which triggered the insert, but that's OK
+						 * because if we find a live tuple anywhere in this chain,
+						 * we have a unique key conflict.  The other live tuple is
+						 * not part of this chain because it had a different index
+						 * entry.
+						 */
+						recheck = false;
+						ItemPointerCopy(&itup->t_tid, &htid);
+						if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL,
+									&recheck, &buffer, &heapTuple))
+						{
+							bool result = true;
+							if (recheck)
+							{
+								/*
+								 * Recheck if the tuple actually satisfies the
+								 * index key. Otherwise, we might be following
+								 * a wrong index pointer and mustn't entertain
+								 * this tuple.
+								 */
+								result = btrecheck(rel, itup, heapRel, &heapTuple);
+								UnlockReleaseBuffer(buffer);
+							}
+							if (!result)
+								break;
+							/* Normal case --- it's still live */
+						}
+						else
+						{
+							/*
+							 * It's been deleted, so no error, and no need to
+							 * continue searching
+							 */
+							break;
+						}
 
-					{
-						Datum		values[INDEX_MAX_KEYS];
-						bool		isnull[INDEX_MAX_KEYS];
-						char	   *key_desc;
-
-						index_deform_tuple(itup, RelationGetDescr(rel),
-										   values, isnull);
-
-						key_desc = BuildIndexValueDescription(rel, values,
-															  isnull);
-
-						ereport(ERROR,
-								(errcode(ERRCODE_UNIQUE_VIOLATION),
-								 errmsg("duplicate key value violates unique constraint \"%s\"",
-										RelationGetRelationName(rel)),
-							   key_desc ? errdetail("Key %s already exists.",
-													key_desc) : 0,
-								 errtableconstraint(heapRel,
-											 RelationGetRelationName(rel))));
+						/*
+						 * Check for a conflict-in as we would if we were going to
+						 * write to this page.  We aren't actually going to write,
+						 * but we want a chance to report SSI conflicts that would
+						 * otherwise be masked by this unique constraint
+						 * violation.
+						 */
+						CheckForSerializableConflictIn(rel, NULL, buf);
+
+						/*
+						 * This is a definite conflict.  Break the tuple down into
+						 * datums and report the error.  But first, make sure we
+						 * release the buffer locks we're holding ---
+						 * BuildIndexValueDescription could make catalog accesses,
+						 * which in the worst case might touch this same index and
+						 * cause deadlocks.
+						 */
+						if (nbuf != InvalidBuffer)
+							_bt_relbuf(rel, nbuf);
+						_bt_relbuf(rel, buf);
+
+						{
+							Datum		values[INDEX_MAX_KEYS];
+							bool		isnull[INDEX_MAX_KEYS];
+							char	   *key_desc;
+
+							index_deform_tuple(itup, RelationGetDescr(rel),
+									values, isnull);
+
+							key_desc = BuildIndexValueDescription(rel, values,
+									isnull);
+
+							ereport(ERROR,
+									(errcode(ERRCODE_UNIQUE_VIOLATION),
+									 errmsg("duplicate key value violates unique constraint \"%s\"",
+										 RelationGetRelationName(rel)),
+									 key_desc ? errdetail("Key %s already exists.",
+										 key_desc) : 0,
+									 errtableconstraint(heapRel,
+										 RelationGetRelationName(rel))));
+						}
 					}
 				}
 				else if (all_dead)
diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c
index 469e7ab..27013f4 100644
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -23,6 +23,7 @@
 #include "access/xlog.h"
 #include "catalog/index.h"
 #include "commands/vacuum.h"
+#include "executor/nodeIndexscan.h"
 #include "storage/indexfsm.h"
 #include "storage/ipc.h"
 #include "storage/lmgr.h"
@@ -121,6 +122,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = btrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -301,8 +303,9 @@ btgettuple(IndexScanDesc scan, ScanDirection dir)
 	BTScanOpaque so = (BTScanOpaque) scan->opaque;
 	bool		res;
 
-	/* btree indexes are never lossy */
+	/* btree indexes are never lossy, except for WARM tuples */
 	scan->xs_recheck = false;
+	scan->xs_tuple_recheck = false;
 
 	/*
 	 * If we have any array keys, initialize them during first call for a
diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c
index da0f330..9becaeb 100644
--- a/src/backend/access/nbtree/nbtutils.c
+++ b/src/backend/access/nbtree/nbtutils.c
@@ -20,11 +20,15 @@
 #include "access/nbtree.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
 #include "miscadmin.h"
+#include "nodes/execnodes.h"
 #include "utils/array.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 
 typedef struct BTSortArrayContext
@@ -2065,3 +2069,103 @@ btproperty(Oid index_oid, int attno,
 			return false;		/* punt to generic code */
 	}
 }
+
+/*
+ * Check if the index tuple's key matches the one computed from the given heap
+ * tuple's attributes.
+ */
+bool
+btrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	IndexInfo  *indexInfo;
+	EState	   *estate;
+	ExprContext *econtext;
+	TupleTableSlot *slot;
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	/* Get IndexInfo for this index */
+	indexInfo = BuildIndexInfo(indexRel);
+
+	/*
+	 * The heap tuple must be put into a slot for FormIndexDatum.
+	 */
+	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
+
+	ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
+
+	/*
+	 * Typically the index won't have expressions, but if it does we need an
+	 * EState to evaluate them.  We need it for exclusion constraints too,
+	 * even if they are just on simple columns.
+	 */
+	if (indexInfo->ii_Expressions != NIL ||
+			indexInfo->ii_ExclusionOps != NULL)
+	{
+		estate = CreateExecutorState();
+		econtext = GetPerTupleExprContext(estate);
+		econtext->ecxt_scantuple = slot;
+	}
+	else
+		estate = NULL;
+
+	/*
+	 * Form the index values and isnull flags for the index entry that we need
+	 * to check.
+	 *
+	 * Note: if the index uses functions that are not as immutable as they are
+	 * supposed to be, this could produce an index tuple different from the
+	 * original.  The index AM can catch such errors by verifying that it
+	 * finds a matching index entry with the tuple's TID.  For exclusion
+	 * constraints we check this in check_exclusion_constraint().
+	 */
+	FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL, then they are equal
+		 */
+		if (isnull[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If just one is NULL, then they are not equal
+		 */
+		if (isnull[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now just do a raw memory comparison. If the index tuple was formed
+		 * using this heap tuple, the computed index values must match
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	if (estate != NULL)
+		FreeExecutorState(estate);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	return equal;
+}
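btrecheck's NULL/equality ladder reduces to three cases, shown here as a standalone helper (hypothetical name `attrs_match`; the real code uses index_getattr, FormIndexDatum, and datumIsEqual): both NULL means equal, exactly one NULL means not equal, otherwise the values must be bytewise identical, which is the raw-memory comparison datumIsEqual performs for by-reference types.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

static bool
attrs_match(const void *a, const void *b, size_t len)
{
	if (a == NULL && b == NULL)
		return true;			/* both NULL: equal */
	if (a == NULL || b == NULL)
		return false;			/* exactly one NULL: not equal */
	return memcmp(a, b, len) == 0;	/* raw memory comparison */
}
```

Because the comparison is bytewise, an index built with functions that are not truly immutable can produce a mismatch here even for the "same" value, which is the failure mode the comment in btrecheck warns about.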
diff --git a/src/backend/access/spgist/spgutils.c b/src/backend/access/spgist/spgutils.c
index 78846be..2236f02 100644
--- a/src/backend/access/spgist/spgutils.c
+++ b/src/backend/access/spgist/spgutils.c
@@ -71,6 +71,7 @@ spghandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/catalog/aclchk.c b/src/backend/catalog/aclchk.c
index 00a9aea..477f450 100644
--- a/src/backend/catalog/aclchk.c
+++ b/src/backend/catalog/aclchk.c
@@ -1252,7 +1252,7 @@ SetDefaultACL(InternalDefaultACL *iacls)
 			values[Anum_pg_default_acl_defaclacl - 1] = PointerGetDatum(new_acl);
 
 			newtuple = heap_form_tuple(RelationGetDescr(rel), values, nulls);
-			simple_heap_insert(rel, newtuple);
+			CatalogInsertHeapAndIndexes(rel, newtuple);
 		}
 		else
 		{
@@ -1262,12 +1262,9 @@ SetDefaultACL(InternalDefaultACL *iacls)
 
 			newtuple = heap_modify_tuple(tuple, RelationGetDescr(rel),
 										 values, nulls, replaces);
-			simple_heap_update(rel, &newtuple->t_self, newtuple);
+			CatalogUpdateHeapAndIndexes(rel, &newtuple->t_self, newtuple);
 		}
 
-		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(rel, newtuple);
-
 		/* these dependencies don't change in an update */
 		if (isNew)
 		{
@@ -1697,10 +1694,7 @@ ExecGrant_Attribute(InternalGrant *istmt, Oid relOid, const char *relname,
 		newtuple = heap_modify_tuple(attr_tuple, RelationGetDescr(attRelation),
 									 values, nulls, replaces);
 
-		simple_heap_update(attRelation, &newtuple->t_self, newtuple);
-
-		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(attRelation, newtuple);
+		CatalogUpdateHeapAndIndexes(attRelation, &newtuple->t_self, newtuple);
 
 		/* Update initial privileges for extensions */
 		recordExtensionInitPriv(relOid, RelationRelationId, attnum,
@@ -1963,10 +1957,7 @@ ExecGrant_Relation(InternalGrant *istmt)
 			newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation),
 										 values, nulls, replaces);
 
-			simple_heap_update(relation, &newtuple->t_self, newtuple);
-
-			/* keep the catalog indexes up to date */
-			CatalogUpdateIndexes(relation, newtuple);
+			CatalogUpdateHeapAndIndexes(relation, &newtuple->t_self, newtuple);
 
 			/* Update initial privileges for extensions */
 			recordExtensionInitPriv(relOid, RelationRelationId, 0, new_acl);
@@ -2156,10 +2147,7 @@ ExecGrant_Database(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values,
 									 nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
-
-		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateHeapAndIndexes(relation, &newtuple->t_self, newtuple);
 
 		/* Update the shared dependency ACL info */
 		updateAclDependencies(DatabaseRelationId, HeapTupleGetOid(tuple), 0,
@@ -2281,10 +2269,7 @@ ExecGrant_Fdw(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values,
 									 nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
-
-		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateHeapAndIndexes(relation, &newtuple->t_self, newtuple);
 
 		/* Update initial privileges for extensions */
 		recordExtensionInitPriv(fdwid, ForeignDataWrapperRelationId, 0,
@@ -2410,10 +2395,7 @@ ExecGrant_ForeignServer(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values,
 									 nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
-
-		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateHeapAndIndexes(relation, &newtuple->t_self, newtuple);
 
 		/* Update initial privileges for extensions */
 		recordExtensionInitPriv(srvid, ForeignServerRelationId, 0, new_acl);
@@ -2537,10 +2519,7 @@ ExecGrant_Function(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values,
 									 nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
-
-		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateHeapAndIndexes(relation, &newtuple->t_self, newtuple);
 
 		/* Update initial privileges for extensions */
 		recordExtensionInitPriv(funcId, ProcedureRelationId, 0, new_acl);
@@ -2671,10 +2650,7 @@ ExecGrant_Language(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values,
 									 nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
-
-		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateHeapAndIndexes(relation, &newtuple->t_self, newtuple);
 
 		/* Update initial privileges for extensions */
 		recordExtensionInitPriv(langId, LanguageRelationId, 0, new_acl);
@@ -2813,10 +2789,7 @@ ExecGrant_Largeobject(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation),
 									 values, nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
-
-		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateHeapAndIndexes(relation, &newtuple->t_self, newtuple);
 
 		/* Update initial privileges for extensions */
 		recordExtensionInitPriv(loid, LargeObjectRelationId, 0, new_acl);
@@ -2941,10 +2914,7 @@ ExecGrant_Namespace(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values,
 									 nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
-
-		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateHeapAndIndexes(relation, &newtuple->t_self, newtuple);
 
 		/* Update initial privileges for extensions */
 		recordExtensionInitPriv(nspid, NamespaceRelationId, 0, new_acl);
@@ -3068,10 +3038,7 @@ ExecGrant_Tablespace(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values,
 									 nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
-
-		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateHeapAndIndexes(relation, &newtuple->t_self, newtuple);
 
 		/* Update the shared dependency ACL info */
 		updateAclDependencies(TableSpaceRelationId, tblId, 0,
@@ -3205,10 +3172,7 @@ ExecGrant_Type(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values,
 									 nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
-
-		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateHeapAndIndexes(relation, &newtuple->t_self, newtuple);
 
 		/* Update initial privileges for extensions */
 		recordExtensionInitPriv(typId, TypeRelationId, 0, new_acl);
@@ -5751,10 +5715,7 @@ recordExtensionInitPrivWorker(Oid objoid, Oid classoid, int objsubid, Acl *new_a
 			oldtuple = heap_modify_tuple(oldtuple, RelationGetDescr(relation),
 										 values, nulls, replace);
 
-			simple_heap_update(relation, &oldtuple->t_self, oldtuple);
-
-			/* keep the catalog indexes up to date */
-			CatalogUpdateIndexes(relation, oldtuple);
+			CatalogUpdateHeapAndIndexes(relation, &oldtuple->t_self, oldtuple);
 		}
 		else
 			/* new_acl is NULL, so delete the entry we found. */
@@ -5788,10 +5749,7 @@ recordExtensionInitPrivWorker(Oid objoid, Oid classoid, int objsubid, Acl *new_a
 
 			tuple = heap_form_tuple(RelationGetDescr(relation), values, nulls);
 
-			simple_heap_insert(relation, tuple);
-
-			/* keep the catalog indexes up to date */
-			CatalogUpdateIndexes(relation, tuple);
+			CatalogInsertHeapAndIndexes(relation, tuple);
 		}
 	}
 
diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c
index 7ce9115..84e9ef5 100644
--- a/src/backend/catalog/heap.c
+++ b/src/backend/catalog/heap.c
@@ -633,9 +633,9 @@ InsertPgAttributeTuple(Relation pg_attribute_rel,
 	simple_heap_insert(pg_attribute_rel, tup);
 
 	if (indstate != NULL)
-		CatalogIndexInsert(indstate, tup);
+		CatalogIndexInsert(indstate, tup, NULL, false);
 	else
-		CatalogUpdateIndexes(pg_attribute_rel, tup);
+		CatalogUpdateIndexes(pg_attribute_rel, tup, NULL, false);
 
 	heap_freetuple(tup);
 }
@@ -824,9 +824,7 @@ InsertPgClassTuple(Relation pg_class_desc,
 	HeapTupleSetOid(tup, new_rel_oid);
 
 	/* finally insert the new tuple, update the indexes, and clean up */
-	simple_heap_insert(pg_class_desc, tup);
-
-	CatalogUpdateIndexes(pg_class_desc, tup);
+	CatalogInsertHeapAndIndexes(pg_class_desc, tup);
 
 	heap_freetuple(tup);
 }
@@ -1599,10 +1597,7 @@ RemoveAttributeById(Oid relid, AttrNumber attnum)
 				 "........pg.dropped.%d........", attnum);
 		namestrcpy(&(attStruct->attname), newattname);
 
-		simple_heap_update(attr_rel, &tuple->t_self, tuple);
-
-		/* keep the system catalog indexes current */
-		CatalogUpdateIndexes(attr_rel, tuple);
+		CatalogUpdateHeapAndIndexes(attr_rel, &tuple->t_self, tuple);
 	}
 
 	/*
@@ -1731,10 +1726,7 @@ RemoveAttrDefaultById(Oid attrdefId)
 
 	((Form_pg_attribute) GETSTRUCT(tuple))->atthasdef = false;
 
-	simple_heap_update(attr_rel, &tuple->t_self, tuple);
-
-	/* keep the system catalog indexes current */
-	CatalogUpdateIndexes(attr_rel, tuple);
+	CatalogUpdateHeapAndIndexes(attr_rel, &tuple->t_self, tuple);
 
 	/*
 	 * Our update of the pg_attribute row will force a relcache rebuild, so
@@ -1932,9 +1924,7 @@ StoreAttrDefault(Relation rel, AttrNumber attnum,
 	adrel = heap_open(AttrDefaultRelationId, RowExclusiveLock);
 
 	tuple = heap_form_tuple(adrel->rd_att, values, nulls);
-	attrdefOid = simple_heap_insert(adrel, tuple);
-
-	CatalogUpdateIndexes(adrel, tuple);
+	attrdefOid = CatalogInsertHeapAndIndexes(adrel, tuple);
 
 	defobject.classId = AttrDefaultRelationId;
 	defobject.objectId = attrdefOid;
@@ -1964,9 +1954,7 @@ StoreAttrDefault(Relation rel, AttrNumber attnum,
 	if (!attStruct->atthasdef)
 	{
 		attStruct->atthasdef = true;
-		simple_heap_update(attrrel, &atttup->t_self, atttup);
-		/* keep catalog indexes current */
-		CatalogUpdateIndexes(attrrel, atttup);
+		CatalogUpdateHeapAndIndexes(attrrel, &atttup->t_self, atttup);
 	}
 	heap_close(attrrel, RowExclusiveLock);
 	heap_freetuple(atttup);
@@ -2561,8 +2549,7 @@ MergeWithExistingConstraint(Relation rel, char *ccname, Node *expr,
 				Assert(is_local);
 				con->connoinherit = true;
 			}
-			simple_heap_update(conDesc, &tup->t_self, tup);
-			CatalogUpdateIndexes(conDesc, tup);
+			CatalogUpdateHeapAndIndexes(conDesc, &tup->t_self, tup);
 			break;
 		}
 	}
@@ -2602,10 +2589,7 @@ SetRelationNumChecks(Relation rel, int numchecks)
 	{
 		relStruct->relchecks = numchecks;
 
-		simple_heap_update(relrel, &reltup->t_self, reltup);
-
-		/* keep catalog indexes current */
-		CatalogUpdateIndexes(relrel, reltup);
+		CatalogUpdateHeapAndIndexes(relrel, &reltup->t_self, reltup);
 	}
 	else
 	{
@@ -3145,10 +3129,7 @@ StorePartitionKey(Relation rel,
 
 	tuple = heap_form_tuple(RelationGetDescr(pg_partitioned_table), values, nulls);
 
-	simple_heap_insert(pg_partitioned_table, tuple);
-
-	/* Update the indexes on pg_partitioned_table */
-	CatalogUpdateIndexes(pg_partitioned_table, tuple);
+	CatalogInsertHeapAndIndexes(pg_partitioned_table, tuple);
 	heap_close(pg_partitioned_table, RowExclusiveLock);
 
 	/* Mark this relation as dependent on a few things as follows */
@@ -3265,8 +3246,7 @@ StorePartitionBound(Relation rel, Relation parent, Node *bound)
 								 new_val, new_null, new_repl);
 	/* Also set the flag */
 	((Form_pg_class) GETSTRUCT(newtuple))->relispartition = true;
-	simple_heap_update(classRel, &newtuple->t_self, newtuple);
-	CatalogUpdateIndexes(classRel, newtuple);
+	CatalogUpdateHeapAndIndexes(classRel, &newtuple->t_self, newtuple);
 	heap_freetuple(newtuple);
 	heap_close(classRel, RowExclusiveLock);
 
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 26cbc0e..1b51cbc 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -54,6 +54,7 @@
 #include "nodes/makefuncs.h"
 #include "nodes/nodeFuncs.h"
 #include "optimizer/clauses.h"
+#include "optimizer/var.h"
 #include "parser/parser.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
@@ -649,10 +650,7 @@ UpdateIndexRelation(Oid indexoid,
 	/*
 	 * insert the tuple into the pg_index catalog
 	 */
-	simple_heap_insert(pg_index, tuple);
-
-	/* update the indexes on pg_index */
-	CatalogUpdateIndexes(pg_index, tuple);
+	CatalogInsertHeapAndIndexes(pg_index, tuple);
 
 	/*
 	 * close the relation and free the tuple
@@ -1324,8 +1322,7 @@ index_constraint_create(Relation heapRelation,
 
 		if (dirty)
 		{
-			simple_heap_update(pg_index, &indexTuple->t_self, indexTuple);
-			CatalogUpdateIndexes(pg_index, indexTuple);
+			CatalogUpdateHeapAndIndexes(pg_index, &indexTuple->t_self, indexTuple);
 
 			InvokeObjectPostAlterHookArg(IndexRelationId, indexRelationId, 0,
 										 InvalidOid, is_internal);
@@ -1691,6 +1688,20 @@ BuildIndexInfo(Relation index)
 	ii->ii_Concurrent = false;
 	ii->ii_BrokenHotChain = false;
 
+	/* build a bitmap of all table attributes referenced by this index */
+	for (i = 0; i < ii->ii_NumIndexAttrs; i++)
+	{
+		AttrNumber attr = ii->ii_KeyAttrNumbers[i];
+		ii->ii_indxattrs = bms_add_member(ii->ii_indxattrs, attr -
+				FirstLowInvalidHeapAttributeNumber);
+	}
+
+	/* Collect all attributes used in expressions, too */
+	pull_varattnos((Node *) ii->ii_Expressions, 1, &ii->ii_indxattrs);
+
+	/* Collect all attributes in the index predicate, too */
+	pull_varattnos((Node *) ii->ii_Predicate, 1, &ii->ii_indxattrs);
+
 	return ii;
 }
 
@@ -2103,8 +2114,7 @@ index_build(Relation heapRelation,
 		Assert(!indexForm->indcheckxmin);
 
 		indexForm->indcheckxmin = true;
-		simple_heap_update(pg_index, &indexTuple->t_self, indexTuple);
-		CatalogUpdateIndexes(pg_index, indexTuple);
+		CatalogUpdateHeapAndIndexes(pg_index, &indexTuple->t_self, indexTuple);
 
 		heap_freetuple(indexTuple);
 		heap_close(pg_index, RowExclusiveLock);
@@ -3448,8 +3458,7 @@ reindex_index(Oid indexId, bool skip_constraint_checks, char persistence,
 			indexForm->indisvalid = true;
 			indexForm->indisready = true;
 			indexForm->indislive = true;
-			simple_heap_update(pg_index, &indexTuple->t_self, indexTuple);
-			CatalogUpdateIndexes(pg_index, indexTuple);
+			CatalogUpdateHeapAndIndexes(pg_index, &indexTuple->t_self, indexTuple);
 
 			/*
 			 * Invalidate the relcache for the table, so that after we commit
diff --git a/src/backend/catalog/indexing.c b/src/backend/catalog/indexing.c
index 1915ca3..304f742 100644
--- a/src/backend/catalog/indexing.c
+++ b/src/backend/catalog/indexing.c
@@ -66,10 +66,15 @@ CatalogCloseIndexes(CatalogIndexState indstate)
  *
  * This should be called for each inserted or updated catalog tuple.
  *
+ * If the tuple was WARM updated, modified_attrs contains the set of
+ * columns changed by the update.  We must not insert new index entries
+ * for indexes that do not reference any of the modified columns.
+ *
  * This is effectively a cut-down version of ExecInsertIndexTuples.
  */
 void
-CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
+CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple,
+		Bitmapset *modified_attrs, bool warm_update)
 {
 	int			i;
 	int			numIndexes;
@@ -79,12 +84,28 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 	IndexInfo **indexInfoArray;
 	Datum		values[INDEX_MAX_KEYS];
 	bool		isnull[INDEX_MAX_KEYS];
+	ItemPointerData root_tid;
 
-	/* HOT update does not require index inserts */
-	if (HeapTupleIsHeapOnly(heapTuple))
+	/*
+	 * A HOT update does not require index inserts, but a WARM update may
+	 * need them for some indexes.
+	 */
+	if (HeapTupleIsHeapOnly(heapTuple) && !warm_update)
 		return;
 
 	/*
+	 * If we've done a WARM update, then we must index the TID of the root line
+	 * pointer and not the actual TID of the new tuple.
+	 */
+	if (warm_update)
+		ItemPointerSet(&root_tid,
+				ItemPointerGetBlockNumber(&(heapTuple->t_self)),
+				HeapTupleHeaderGetRootOffset(heapTuple->t_data));
+	else
+		ItemPointerCopy(&heapTuple->t_self, &root_tid);
+
+
+	/*
 	 * Get information from the state structure.  Fall out if nothing to do.
 	 */
 	numIndexes = indstate->ri_NumIndices;
@@ -112,6 +133,17 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 			continue;
 
 		/*
+		 * If we've done a WARM update, then we must not insert a new index
+		 * tuple if none of the index keys have changed.  This is not just an
+		 * optimization, but a requirement for WARM to work correctly.
+		 */
+		if (warm_update)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
+		/*
 		 * Expressional and partial indexes on system catalogs are not
 		 * supported, nor exclusion constraints, nor deferred uniqueness
 		 */
@@ -136,7 +168,7 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 		index_insert(relationDescs[i],	/* index relation */
 					 values,	/* array of index Datums */
 					 isnull,	/* is-null flags */
-					 &(heapTuple->t_self),		/* tid of heap tuple */
+					 &root_tid,
 					 heapRelation,
 					 relationDescs[i]->rd_index->indisunique ?
 					 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO);
@@ -154,11 +186,43 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
  * structures is moderately expensive.
  */
 void
-CatalogUpdateIndexes(Relation heapRel, HeapTuple heapTuple)
+CatalogUpdateIndexes(Relation heapRel, HeapTuple heapTuple,
+		Bitmapset *modified_attrs, bool warm_update)
 {
 	CatalogIndexState indstate;
 
 	indstate = CatalogOpenIndexes(heapRel);
-	CatalogIndexInsert(indstate, heapTuple);
+	CatalogIndexInsert(indstate, heapTuple, modified_attrs, warm_update);
 	CatalogCloseIndexes(indstate);
 }
+
+/*
+ * A convenience routine which updates the heap tuple (identified by otid)
+ * with tup and also updates all indexes on the table.
+ */
+void
+CatalogUpdateHeapAndIndexes(Relation heapRel, ItemPointer otid, HeapTuple tup)
+{
+	bool	warm_update;
+	Bitmapset	*modified_attrs;
+
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
+
+	/* Make sure only indexes whose columns are modified receive new entries */
+	CatalogUpdateIndexes(heapRel, tup, modified_attrs, warm_update);
+}
+
+/*
+ * A convenience routine which inserts a new heap tuple and also updates all
+ * indexes on the table.
+ *
+ * The OID of the inserted tuple is returned.
+ */
+Oid
+CatalogInsertHeapAndIndexes(Relation heapRel, HeapTuple tup)
+{
+	Oid oid;
+	oid = simple_heap_insert(heapRel, tup);
+	CatalogUpdateIndexes(heapRel, tup, NULL, false);
+	return oid;
+}
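The WARM-aware skip in CatalogIndexInsert boils down to a set-overlap test: on a WARM update, an index receives a new entry only if its attribute set overlaps the set of modified columns. A minimal sketch of that decision, with attribute sets modeled as plain 64-bit masks instead of Bitmapsets (`index_needs_new_entry` is an illustrative name, not a function in the patch):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Decide whether an index must receive a new entry for an updated tuple.
 * On a WARM update, only indexes whose attribute set overlaps the set of
 * modified columns get a new entry; on a regular (non-HOT) update, every
 * index does.  Attribute sets are modeled here as 64-bit masks.
 */
bool
index_needs_new_entry(uint64_t index_attrs, uint64_t modified_attrs,
					  bool warm_update)
{
	if (!warm_update)
		return true;			/* regular update: all indexes get entries */
	return (index_attrs & modified_attrs) != 0;
}
```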
diff --git a/src/backend/catalog/pg_aggregate.c b/src/backend/catalog/pg_aggregate.c
index 3a4e22f..9cab585 100644
--- a/src/backend/catalog/pg_aggregate.c
+++ b/src/backend/catalog/pg_aggregate.c
@@ -674,9 +674,7 @@ AggregateCreate(const char *aggName,
 	tupDesc = aggdesc->rd_att;
 
 	tup = heap_form_tuple(tupDesc, values, nulls);
-	simple_heap_insert(aggdesc, tup);
-
-	CatalogUpdateIndexes(aggdesc, tup);
+	CatalogInsertHeapAndIndexes(aggdesc, tup);
 
 	heap_close(aggdesc, RowExclusiveLock);
 
diff --git a/src/backend/catalog/pg_collation.c b/src/backend/catalog/pg_collation.c
index 694c0f6..ebaf3fd 100644
--- a/src/backend/catalog/pg_collation.c
+++ b/src/backend/catalog/pg_collation.c
@@ -134,12 +134,9 @@ CollationCreate(const char *collname, Oid collnamespace,
 	tup = heap_form_tuple(tupDesc, values, nulls);
 
 	/* insert a new tuple */
-	oid = simple_heap_insert(rel, tup);
+	oid = CatalogInsertHeapAndIndexes(rel, tup);
 	Assert(OidIsValid(oid));
 
-	/* update the index if any */
-	CatalogUpdateIndexes(rel, tup);
-
 	/* set up dependencies for the new collation */
 	myself.classId = CollationRelationId;
 	myself.objectId = oid;
diff --git a/src/backend/catalog/pg_constraint.c b/src/backend/catalog/pg_constraint.c
index b5a0ce9..9509cac 100644
--- a/src/backend/catalog/pg_constraint.c
+++ b/src/backend/catalog/pg_constraint.c
@@ -226,10 +226,7 @@ CreateConstraintEntry(const char *constraintName,
 
 	tup = heap_form_tuple(RelationGetDescr(conDesc), values, nulls);
 
-	conOid = simple_heap_insert(conDesc, tup);
-
-	/* update catalog indexes */
-	CatalogUpdateIndexes(conDesc, tup);
+	conOid = CatalogInsertHeapAndIndexes(conDesc, tup);
 
 	conobject.classId = ConstraintRelationId;
 	conobject.objectId = conOid;
@@ -584,9 +581,7 @@ RemoveConstraintById(Oid conId)
 					 RelationGetRelationName(rel));
 			classForm->relchecks--;
 
-			simple_heap_update(pgrel, &relTup->t_self, relTup);
-
-			CatalogUpdateIndexes(pgrel, relTup);
+			CatalogUpdateHeapAndIndexes(pgrel, &relTup->t_self, relTup);
 
 			heap_freetuple(relTup);
 
@@ -666,10 +661,7 @@ RenameConstraintById(Oid conId, const char *newname)
 	/* OK, do the rename --- tuple is a copy, so OK to scribble on it */
 	namestrcpy(&(con->conname), newname);
 
-	simple_heap_update(conDesc, &tuple->t_self, tuple);
-
-	/* update the system catalog indexes */
-	CatalogUpdateIndexes(conDesc, tuple);
+	CatalogUpdateHeapAndIndexes(conDesc, &tuple->t_self, tuple);
 
 	InvokeObjectPostAlterHook(ConstraintRelationId, conId, 0);
 
@@ -736,8 +728,7 @@ AlterConstraintNamespaces(Oid ownerId, Oid oldNspId,
 
 			conform->connamespace = newNspId;
 
-			simple_heap_update(conRel, &tup->t_self, tup);
-			CatalogUpdateIndexes(conRel, tup);
+			CatalogUpdateHeapAndIndexes(conRel, &tup->t_self, tup);
 
 			/*
 			 * Note: currently, the constraint will not have its own
diff --git a/src/backend/catalog/pg_conversion.c b/src/backend/catalog/pg_conversion.c
index adaf7b8..a942e02 100644
--- a/src/backend/catalog/pg_conversion.c
+++ b/src/backend/catalog/pg_conversion.c
@@ -105,10 +105,7 @@ ConversionCreate(const char *conname, Oid connamespace,
 	tup = heap_form_tuple(tupDesc, values, nulls);
 
 	/* insert a new tuple */
-	simple_heap_insert(rel, tup);
-
-	/* update the index if any */
-	CatalogUpdateIndexes(rel, tup);
+	CatalogInsertHeapAndIndexes(rel, tup);
 
 	myself.classId = ConversionRelationId;
 	myself.objectId = HeapTupleGetOid(tup);
diff --git a/src/backend/catalog/pg_db_role_setting.c b/src/backend/catalog/pg_db_role_setting.c
index 117cc8d..c206b03 100644
--- a/src/backend/catalog/pg_db_role_setting.c
+++ b/src/backend/catalog/pg_db_role_setting.c
@@ -88,10 +88,7 @@ AlterSetting(Oid databaseid, Oid roleid, VariableSetStmt *setstmt)
 
 				newtuple = heap_modify_tuple(tuple, RelationGetDescr(rel),
 											 repl_val, repl_null, repl_repl);
-				simple_heap_update(rel, &tuple->t_self, newtuple);
-
-				/* Update indexes */
-				CatalogUpdateIndexes(rel, newtuple);
+				CatalogUpdateHeapAndIndexes(rel, &tuple->t_self, newtuple);
 			}
 			else
 				simple_heap_delete(rel, &tuple->t_self);
@@ -129,10 +126,7 @@ AlterSetting(Oid databaseid, Oid roleid, VariableSetStmt *setstmt)
 
 			newtuple = heap_modify_tuple(tuple, RelationGetDescr(rel),
 										 repl_val, repl_null, repl_repl);
-			simple_heap_update(rel, &tuple->t_self, newtuple);
-
-			/* Update indexes */
-			CatalogUpdateIndexes(rel, newtuple);
+			CatalogUpdateHeapAndIndexes(rel, &tuple->t_self, newtuple);
 		}
 		else
 			simple_heap_delete(rel, &tuple->t_self);
@@ -155,10 +149,7 @@ AlterSetting(Oid databaseid, Oid roleid, VariableSetStmt *setstmt)
 		values[Anum_pg_db_role_setting_setconfig - 1] = PointerGetDatum(a);
 		newtuple = heap_form_tuple(RelationGetDescr(rel), values, nulls);
 
-		simple_heap_insert(rel, newtuple);
-
-		/* Update indexes */
-		CatalogUpdateIndexes(rel, newtuple);
+		CatalogInsertHeapAndIndexes(rel, newtuple);
 	}
 
 	InvokeObjectPostAlterHookArg(DbRoleSettingRelationId,
diff --git a/src/backend/catalog/pg_depend.c b/src/backend/catalog/pg_depend.c
index b71fa1b..85a7622 100644
--- a/src/backend/catalog/pg_depend.c
+++ b/src/backend/catalog/pg_depend.c
@@ -113,7 +113,7 @@ recordMultipleDependencies(const ObjectAddress *depender,
 			if (indstate == NULL)
 				indstate = CatalogOpenIndexes(dependDesc);
 
-			CatalogIndexInsert(indstate, tup);
+			CatalogIndexInsert(indstate, tup, NULL, false);
 
 			heap_freetuple(tup);
 		}
@@ -362,8 +362,7 @@ changeDependencyFor(Oid classId, Oid objectId,
 
 				depform->refobjid = newRefObjectId;
 
-				simple_heap_update(depRel, &tup->t_self, tup);
-				CatalogUpdateIndexes(depRel, tup);
+				CatalogUpdateHeapAndIndexes(depRel, &tup->t_self, tup);
 
 				heap_freetuple(tup);
 			}
diff --git a/src/backend/catalog/pg_enum.c b/src/backend/catalog/pg_enum.c
index 089a9a0..16a4e80 100644
--- a/src/backend/catalog/pg_enum.c
+++ b/src/backend/catalog/pg_enum.c
@@ -125,8 +125,7 @@ EnumValuesCreate(Oid enumTypeOid, List *vals)
 		tup = heap_form_tuple(RelationGetDescr(pg_enum), values, nulls);
 		HeapTupleSetOid(tup, oids[elemno]);
 
-		simple_heap_insert(pg_enum, tup);
-		CatalogUpdateIndexes(pg_enum, tup);
+		CatalogInsertHeapAndIndexes(pg_enum, tup);
 		heap_freetuple(tup);
 
 		elemno++;
@@ -458,8 +457,7 @@ restart:
 	values[Anum_pg_enum_enumlabel - 1] = NameGetDatum(&enumlabel);
 	enum_tup = heap_form_tuple(RelationGetDescr(pg_enum), values, nulls);
 	HeapTupleSetOid(enum_tup, newOid);
-	simple_heap_insert(pg_enum, enum_tup);
-	CatalogUpdateIndexes(pg_enum, enum_tup);
+	CatalogInsertHeapAndIndexes(pg_enum, enum_tup);
 	heap_freetuple(enum_tup);
 
 	heap_close(pg_enum, RowExclusiveLock);
@@ -543,8 +541,7 @@ RenameEnumLabel(Oid enumTypeOid,
 
 	/* Update the pg_enum entry */
 	namestrcpy(&en->enumlabel, newVal);
-	simple_heap_update(pg_enum, &enum_tup->t_self, enum_tup);
-	CatalogUpdateIndexes(pg_enum, enum_tup);
+	CatalogUpdateHeapAndIndexes(pg_enum, &enum_tup->t_self, enum_tup);
 	heap_freetuple(enum_tup);
 
 	heap_close(pg_enum, RowExclusiveLock);
@@ -597,9 +594,7 @@ RenumberEnumType(Relation pg_enum, HeapTuple *existing, int nelems)
 		{
 			en->enumsortorder = newsortorder;
 
-			simple_heap_update(pg_enum, &newtup->t_self, newtup);
-
-			CatalogUpdateIndexes(pg_enum, newtup);
+			CatalogUpdateHeapAndIndexes(pg_enum, &newtup->t_self, newtup);
 		}
 
 		heap_freetuple(newtup);
diff --git a/src/backend/catalog/pg_largeobject.c b/src/backend/catalog/pg_largeobject.c
index 24edf6a..d59d4b7 100644
--- a/src/backend/catalog/pg_largeobject.c
+++ b/src/backend/catalog/pg_largeobject.c
@@ -63,11 +63,9 @@ LargeObjectCreate(Oid loid)
 	if (OidIsValid(loid))
 		HeapTupleSetOid(ntup, loid);
 
-	loid_new = simple_heap_insert(pg_lo_meta, ntup);
+	loid_new = CatalogInsertHeapAndIndexes(pg_lo_meta, ntup);
 	Assert(!OidIsValid(loid) || loid == loid_new);
 
-	CatalogUpdateIndexes(pg_lo_meta, ntup);
-
 	heap_freetuple(ntup);
 
 	heap_close(pg_lo_meta, RowExclusiveLock);
diff --git a/src/backend/catalog/pg_namespace.c b/src/backend/catalog/pg_namespace.c
index f048ad4..4c06873 100644
--- a/src/backend/catalog/pg_namespace.c
+++ b/src/backend/catalog/pg_namespace.c
@@ -76,11 +76,9 @@ NamespaceCreate(const char *nspName, Oid ownerId, bool isTemp)
 
 	tup = heap_form_tuple(tupDesc, values, nulls);
 
-	nspoid = simple_heap_insert(nspdesc, tup);
+	nspoid = CatalogInsertHeapAndIndexes(nspdesc, tup);
 	Assert(OidIsValid(nspoid));
 
-	CatalogUpdateIndexes(nspdesc, tup);
-
 	heap_close(nspdesc, RowExclusiveLock);
 
 	/* Record dependencies */
diff --git a/src/backend/catalog/pg_operator.c b/src/backend/catalog/pg_operator.c
index 556f9fe..d3f71ca 100644
--- a/src/backend/catalog/pg_operator.c
+++ b/src/backend/catalog/pg_operator.c
@@ -262,9 +262,7 @@ OperatorShellMake(const char *operatorName,
 	/*
 	 * insert our "shell" operator tuple
 	 */
-	operatorObjectId = simple_heap_insert(pg_operator_desc, tup);
-
-	CatalogUpdateIndexes(pg_operator_desc, tup);
+	operatorObjectId = CatalogInsertHeapAndIndexes(pg_operator_desc, tup);
 
 	/* Add dependencies for the entry */
 	makeOperatorDependencies(tup, false);
@@ -526,7 +524,7 @@ OperatorCreate(const char *operatorName,
 								nulls,
 								replaces);
 
-		simple_heap_update(pg_operator_desc, &tup->t_self, tup);
+		CatalogUpdateHeapAndIndexes(pg_operator_desc, &tup->t_self, tup);
 	}
 	else
 	{
@@ -535,12 +533,9 @@ OperatorCreate(const char *operatorName,
 		tup = heap_form_tuple(RelationGetDescr(pg_operator_desc),
 							  values, nulls);
 
-		operatorObjectId = simple_heap_insert(pg_operator_desc, tup);
+		operatorObjectId = CatalogInsertHeapAndIndexes(pg_operator_desc, tup);
 	}
 
-	/* Must update the indexes in either case */
-	CatalogUpdateIndexes(pg_operator_desc, tup);
-
 	/* Add dependencies for the entry */
 	address = makeOperatorDependencies(tup, isUpdate);
 
@@ -695,8 +690,7 @@ OperatorUpd(Oid baseId, Oid commId, Oid negId, bool isDelete)
 		/* If any columns were found to need modification, update tuple. */
 		if (update_commutator)
 		{
-			simple_heap_update(pg_operator_desc, &tup->t_self, tup);
-			CatalogUpdateIndexes(pg_operator_desc, tup);
+			CatalogUpdateHeapAndIndexes(pg_operator_desc, &tup->t_self, tup);
 
 			/*
 			 * Do CCI to make the updated tuple visible.  We must do this in
@@ -741,8 +735,7 @@ OperatorUpd(Oid baseId, Oid commId, Oid negId, bool isDelete)
 		/* If any columns were found to need modification, update tuple. */
 		if (update_negator)
 		{
-			simple_heap_update(pg_operator_desc, &tup->t_self, tup);
-			CatalogUpdateIndexes(pg_operator_desc, tup);
+			CatalogUpdateHeapAndIndexes(pg_operator_desc, &tup->t_self, tup);
 
 			/*
 			 * In the deletion case, do CCI to make the updated tuple visible.
diff --git a/src/backend/catalog/pg_proc.c b/src/backend/catalog/pg_proc.c
index 6ab849c..f35769e 100644
--- a/src/backend/catalog/pg_proc.c
+++ b/src/backend/catalog/pg_proc.c
@@ -572,7 +572,7 @@ ProcedureCreate(const char *procedureName,
 
 		/* Okay, do it... */
 		tup = heap_modify_tuple(oldtup, tupDesc, values, nulls, replaces);
-		simple_heap_update(rel, &tup->t_self, tup);
+		CatalogUpdateHeapAndIndexes(rel, &tup->t_self, tup);
 
 		ReleaseSysCache(oldtup);
 		is_update = true;
@@ -590,12 +590,10 @@ ProcedureCreate(const char *procedureName,
 			nulls[Anum_pg_proc_proacl - 1] = true;
 
 		tup = heap_form_tuple(tupDesc, values, nulls);
-		simple_heap_insert(rel, tup);
+		CatalogInsertHeapAndIndexes(rel, tup);
 		is_update = false;
 	}
 
-	/* Need to update indexes for either the insert or update case */
-	CatalogUpdateIndexes(rel, tup);
 
 	retval = HeapTupleGetOid(tup);
 
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 00ed28f..2c7c3b5 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -149,8 +149,7 @@ publication_add_relation(Oid pubid, Relation targetrel,
 	tup = heap_form_tuple(RelationGetDescr(rel), values, nulls);
 
 	/* Insert tuple into catalog. */
-	prrelid = simple_heap_insert(rel, tup);
-	CatalogUpdateIndexes(rel, tup);
+	prrelid = CatalogInsertHeapAndIndexes(rel, tup);
 	heap_freetuple(tup);
 
 	ObjectAddressSet(myself, PublicationRelRelationId, prrelid);
diff --git a/src/backend/catalog/pg_range.c b/src/backend/catalog/pg_range.c
index d3a4c26..c21610d 100644
--- a/src/backend/catalog/pg_range.c
+++ b/src/backend/catalog/pg_range.c
@@ -58,8 +58,7 @@ RangeCreate(Oid rangeTypeOid, Oid rangeSubType, Oid rangeCollation,
 
 	tup = heap_form_tuple(RelationGetDescr(pg_range), values, nulls);
 
-	simple_heap_insert(pg_range, tup);
-	CatalogUpdateIndexes(pg_range, tup);
+	CatalogInsertHeapAndIndexes(pg_range, tup);
 	heap_freetuple(tup);
 
 	/* record type's dependencies on range-related items */
diff --git a/src/backend/catalog/pg_shdepend.c b/src/backend/catalog/pg_shdepend.c
index 60ed957..8d1ddab 100644
--- a/src/backend/catalog/pg_shdepend.c
+++ b/src/backend/catalog/pg_shdepend.c
@@ -260,10 +260,7 @@ shdepChangeDep(Relation sdepRel,
 		shForm->refclassid = refclassid;
 		shForm->refobjid = refobjid;
 
-		simple_heap_update(sdepRel, &oldtup->t_self, oldtup);
-
-		/* keep indexes current */
-		CatalogUpdateIndexes(sdepRel, oldtup);
+		CatalogUpdateHeapAndIndexes(sdepRel, &oldtup->t_self, oldtup);
 	}
 	else
 	{
@@ -287,10 +284,7 @@ shdepChangeDep(Relation sdepRel,
 		 * it's certainly a new tuple
 		 */
 		oldtup = heap_form_tuple(RelationGetDescr(sdepRel), values, nulls);
-		simple_heap_insert(sdepRel, oldtup);
-
-		/* keep indexes current */
-		CatalogUpdateIndexes(sdepRel, oldtup);
+		CatalogInsertHeapAndIndexes(sdepRel, oldtup);
 	}
 
 	if (oldtup)
@@ -759,10 +753,7 @@ copyTemplateDependencies(Oid templateDbId, Oid newDbId)
 		HeapTuple	newtup;
 
 		newtup = heap_modify_tuple(tup, sdepDesc, values, nulls, replace);
-		simple_heap_insert(sdepRel, newtup);
-
-		/* Keep indexes current */
-		CatalogIndexInsert(indstate, newtup);
+		CatalogInsertHeapAndIndexes(sdepRel, newtup);
 
 		heap_freetuple(newtup);
 	}
@@ -882,10 +873,7 @@ shdepAddDependency(Relation sdepRel,
 
 	tup = heap_form_tuple(sdepRel->rd_att, values, nulls);
 
-	simple_heap_insert(sdepRel, tup);
-
-	/* keep indexes current */
-	CatalogUpdateIndexes(sdepRel, tup);
+	CatalogInsertHeapAndIndexes(sdepRel, tup);
 
 	/* clean up */
 	heap_freetuple(tup);
diff --git a/src/backend/catalog/pg_type.c b/src/backend/catalog/pg_type.c
index 6d9a324..8dfd5f0 100644
--- a/src/backend/catalog/pg_type.c
+++ b/src/backend/catalog/pg_type.c
@@ -142,9 +142,7 @@ TypeShellMake(const char *typeName, Oid typeNamespace, Oid ownerId)
 	/*
 	 * insert the tuple in the relation and get the tuple's oid.
 	 */
-	typoid = simple_heap_insert(pg_type_desc, tup);
-
-	CatalogUpdateIndexes(pg_type_desc, tup);
+	typoid = CatalogInsertHeapAndIndexes(pg_type_desc, tup);
 
 	/*
 	 * Create dependencies.  We can/must skip this in bootstrap mode.
@@ -430,7 +428,7 @@ TypeCreate(Oid newTypeOid,
 								nulls,
 								replaces);
 
-		simple_heap_update(pg_type_desc, &tup->t_self, tup);
+		CatalogUpdateHeapAndIndexes(pg_type_desc, &tup->t_self, tup);
 
 		typeObjectId = HeapTupleGetOid(tup);
 
@@ -458,12 +456,9 @@ TypeCreate(Oid newTypeOid,
 		}
 		/* else allow system to assign oid */
 
-		typeObjectId = simple_heap_insert(pg_type_desc, tup);
+		typeObjectId = CatalogInsertHeapAndIndexes(pg_type_desc, tup);
 	}
 
-	/* Update indexes */
-	CatalogUpdateIndexes(pg_type_desc, tup);
-
 	/*
 	 * Create dependencies.  We can/must skip this in bootstrap mode.
 	 */
@@ -724,10 +719,7 @@ RenameTypeInternal(Oid typeOid, const char *newTypeName, Oid typeNamespace)
 	/* OK, do the rename --- tuple is a copy, so OK to scribble on it */
 	namestrcpy(&(typ->typname), newTypeName);
 
-	simple_heap_update(pg_type_desc, &tuple->t_self, tuple);
-
-	/* update the system catalog indexes */
-	CatalogUpdateIndexes(pg_type_desc, tuple);
+	CatalogUpdateHeapAndIndexes(pg_type_desc, &tuple->t_self, tuple);
 
 	InvokeObjectPostAlterHook(TypeRelationId, typeOid, 0);
 
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 28be27a..92fa6e0 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -493,6 +493,7 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_tuples_deleted(C.oid) AS n_tup_del,
             pg_stat_get_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_tuples_warm_updated(C.oid) AS n_tup_warm_upd,
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
@@ -523,7 +524,8 @@ CREATE VIEW pg_stat_xact_all_tables AS
             pg_stat_get_xact_tuples_inserted(C.oid) AS n_tup_ins,
             pg_stat_get_xact_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_xact_tuples_deleted(C.oid) AS n_tup_del,
-            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd
+            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_xact_tuples_warm_updated(C.oid) AS n_tup_warm_upd
     FROM pg_class C LEFT JOIN
          pg_index I ON C.oid = I.indrelid
          LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
diff --git a/src/backend/catalog/toasting.c b/src/backend/catalog/toasting.c
index ee4a182..cae1228 100644
--- a/src/backend/catalog/toasting.c
+++ b/src/backend/catalog/toasting.c
@@ -350,10 +350,7 @@ create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid,
 	if (!IsBootstrapProcessingMode())
 	{
 		/* normal case, use a transactional update */
-		simple_heap_update(class_rel, &reltup->t_self, reltup);
-
-		/* Keep catalog indexes current */
-		CatalogUpdateIndexes(class_rel, reltup);
+		CatalogUpdateHeapAndIndexes(class_rel, &reltup->t_self, reltup);
 	}
 	else
 	{
diff --git a/src/backend/commands/alter.c b/src/backend/commands/alter.c
index 768fcc8..d8d4bec 100644
--- a/src/backend/commands/alter.c
+++ b/src/backend/commands/alter.c
@@ -284,8 +284,7 @@ AlterObjectRename_internal(Relation rel, Oid objectId, const char *new_name)
 							   values, nulls, replaces);
 
 	/* Perform actual update */
-	simple_heap_update(rel, &oldtup->t_self, newtup);
-	CatalogUpdateIndexes(rel, newtup);
+	CatalogUpdateHeapAndIndexes(rel, &oldtup->t_self, newtup);
 
 	InvokeObjectPostAlterHook(classId, objectId, 0);
 
@@ -722,8 +721,7 @@ AlterObjectNamespace_internal(Relation rel, Oid objid, Oid nspOid)
 							   values, nulls, replaces);
 
 	/* Perform actual update */
-	simple_heap_update(rel, &tup->t_self, newtup);
-	CatalogUpdateIndexes(rel, newtup);
+	CatalogUpdateHeapAndIndexes(rel, &tup->t_self, newtup);
 
 	/* Release memory */
 	pfree(values);
@@ -954,8 +952,7 @@ AlterObjectOwner_internal(Relation rel, Oid objectId, Oid new_ownerId)
 								   values, nulls, replaces);
 
 		/* Perform actual update */
-		simple_heap_update(rel, &newtup->t_self, newtup);
-		CatalogUpdateIndexes(rel, newtup);
+		CatalogUpdateHeapAndIndexes(rel, &newtup->t_self, newtup);
 
 		/* Update owner dependency reference */
 		if (classId == LargeObjectMetadataRelationId)
diff --git a/src/backend/commands/amcmds.c b/src/backend/commands/amcmds.c
index 29061b8..33e207c 100644
--- a/src/backend/commands/amcmds.c
+++ b/src/backend/commands/amcmds.c
@@ -87,8 +87,7 @@ CreateAccessMethod(CreateAmStmt *stmt)
 
 	tup = heap_form_tuple(RelationGetDescr(rel), values, nulls);
 
-	amoid = simple_heap_insert(rel, tup);
-	CatalogUpdateIndexes(rel, tup);
+	amoid = CatalogInsertHeapAndIndexes(rel, tup);
 	heap_freetuple(tup);
 
 	myself.classId = AccessMethodRelationId;
diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index c9f6afe..648520e 100644
--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -1589,18 +1589,15 @@ update_attstats(Oid relid, bool inh, int natts, VacAttrStats **vacattrstats)
 									 nulls,
 									 replaces);
 			ReleaseSysCache(oldtup);
-			simple_heap_update(sd, &stup->t_self, stup);
+			CatalogUpdateHeapAndIndexes(sd, &stup->t_self, stup);
 		}
 		else
 		{
 			/* No, insert new tuple */
 			stup = heap_form_tuple(RelationGetDescr(sd), values, nulls);
-			simple_heap_insert(sd, stup);
+			CatalogInsertHeapAndIndexes(sd, stup);
 		}
 
-		/* update indexes too */
-		CatalogUpdateIndexes(sd, stup);
-
 		heap_freetuple(stup);
 	}
 
diff --git a/src/backend/commands/cluster.c b/src/backend/commands/cluster.c
index f9309fc..8983cdf 100644
--- a/src/backend/commands/cluster.c
+++ b/src/backend/commands/cluster.c
@@ -523,8 +523,7 @@ mark_index_clustered(Relation rel, Oid indexOid, bool is_internal)
 		if (indexForm->indisclustered)
 		{
 			indexForm->indisclustered = false;
-			simple_heap_update(pg_index, &indexTuple->t_self, indexTuple);
-			CatalogUpdateIndexes(pg_index, indexTuple);
+			CatalogUpdateHeapAndIndexes(pg_index, &indexTuple->t_self, indexTuple);
 		}
 		else if (thisIndexOid == indexOid)
 		{
@@ -532,8 +531,7 @@ mark_index_clustered(Relation rel, Oid indexOid, bool is_internal)
 			if (!IndexIsValid(indexForm))
 				elog(ERROR, "cannot cluster on invalid index %u", indexOid);
 			indexForm->indisclustered = true;
-			simple_heap_update(pg_index, &indexTuple->t_self, indexTuple);
-			CatalogUpdateIndexes(pg_index, indexTuple);
+			CatalogUpdateHeapAndIndexes(pg_index, &indexTuple->t_self, indexTuple);
 		}
 
 		InvokeObjectPostAlterHookArg(IndexRelationId, thisIndexOid, 0,
@@ -1287,13 +1285,17 @@ swap_relation_files(Oid r1, Oid r2, bool target_is_pg_class,
 	 */
 	if (!target_is_pg_class)
 	{
-		simple_heap_update(relRelation, &reltup1->t_self, reltup1);
-		simple_heap_update(relRelation, &reltup2->t_self, reltup2);
+		bool		warm_update1, warm_update2;
+		Bitmapset  *modified_attrs1, *modified_attrs2;
+		simple_heap_update(relRelation, &reltup1->t_self, reltup1,
+				&modified_attrs1, &warm_update1);
+		simple_heap_update(relRelation, &reltup2->t_self, reltup2,
+				&modified_attrs2, &warm_update2);
 
 		/* Keep system catalogs current */
 		indstate = CatalogOpenIndexes(relRelation);
-		CatalogIndexInsert(indstate, reltup1);
-		CatalogIndexInsert(indstate, reltup2);
+		CatalogIndexInsert(indstate, reltup1, modified_attrs1, warm_update1);
+		CatalogIndexInsert(indstate, reltup2, modified_attrs2, warm_update2);
 		CatalogCloseIndexes(indstate);
 	}
 	else
@@ -1558,8 +1560,7 @@ finish_heap_swap(Oid OIDOldHeap, Oid OIDNewHeap,
 		relform->relfrozenxid = frozenXid;
 		relform->relminmxid = cutoffMulti;
 
-		simple_heap_update(relRelation, &reltup->t_self, reltup);
-		CatalogUpdateIndexes(relRelation, reltup);
+		CatalogUpdateHeapAndIndexes(relRelation, &reltup->t_self, reltup);
 
 		heap_close(relRelation, RowExclusiveLock);
 	}
diff --git a/src/backend/commands/comment.c b/src/backend/commands/comment.c
index ada0b03..c250385 100644
--- a/src/backend/commands/comment.c
+++ b/src/backend/commands/comment.c
@@ -199,7 +199,7 @@ CreateComments(Oid oid, Oid classoid, int32 subid, char *comment)
 		{
 			newtuple = heap_modify_tuple(oldtuple, RelationGetDescr(description), values,
 										 nulls, replaces);
-			simple_heap_update(description, &oldtuple->t_self, newtuple);
+			CatalogUpdateHeapAndIndexes(description, &oldtuple->t_self, newtuple);
 		}
 
 		break;					/* Assume there can be only one match */
@@ -213,15 +213,11 @@ CreateComments(Oid oid, Oid classoid, int32 subid, char *comment)
 	{
 		newtuple = heap_form_tuple(RelationGetDescr(description),
 								   values, nulls);
-		simple_heap_insert(description, newtuple);
+		CatalogInsertHeapAndIndexes(description, newtuple);
 	}
 
-	/* Update indexes, if necessary */
 	if (newtuple != NULL)
-	{
-		CatalogUpdateIndexes(description, newtuple);
 		heap_freetuple(newtuple);
-	}
 
 	/* Done */
 
@@ -293,7 +289,7 @@ CreateSharedComments(Oid oid, Oid classoid, char *comment)
 		{
 			newtuple = heap_modify_tuple(oldtuple, RelationGetDescr(shdescription),
 										 values, nulls, replaces);
-			simple_heap_update(shdescription, &oldtuple->t_self, newtuple);
+			CatalogUpdateHeapAndIndexes(shdescription, &oldtuple->t_self, newtuple);
 		}
 
 		break;					/* Assume there can be only one match */
@@ -307,15 +303,11 @@ CreateSharedComments(Oid oid, Oid classoid, char *comment)
 	{
 		newtuple = heap_form_tuple(RelationGetDescr(shdescription),
 								   values, nulls);
-		simple_heap_insert(shdescription, newtuple);
+		CatalogInsertHeapAndIndexes(shdescription, newtuple);
 	}
 
-	/* Update indexes, if necessary */
 	if (newtuple != NULL)
-	{
-		CatalogUpdateIndexes(shdescription, newtuple);
 		heap_freetuple(newtuple);
-	}
 
 	/* Done */
 
diff --git a/src/backend/commands/constraint.c b/src/backend/commands/constraint.c
index e9eeacd..f199074 100644
--- a/src/backend/commands/constraint.c
+++ b/src/backend/commands/constraint.c
@@ -40,6 +40,7 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	TriggerData *trigdata = castNode(TriggerData, fcinfo->context);
 	const char *funcname = "unique_key_recheck";
 	HeapTuple	new_row;
+	HeapTupleData heapTuple;
 	ItemPointerData tmptid;
 	Relation	indexRel;
 	IndexInfo  *indexInfo;
@@ -102,7 +103,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	 * removed.
 	 */
 	tmptid = new_row->t_self;
-	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL))
+	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL,
+				NULL, NULL, &heapTuple))
 	{
 		/*
 		 * All rows in the HOT chain are dead, so skip the check.
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index 949844d..38702e5 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -2680,6 +2680,8 @@ CopyFrom(CopyState cstate)
 					if (resultRelInfo->ri_NumIndices > 0)
 						recheckIndexes = ExecInsertIndexTuples(slot,
 															&(tuple->t_self),
+															&(tuple->t_self),
+															NULL,
 															   estate,
 															   false,
 															   NULL,
@@ -2834,6 +2836,7 @@ CopyFromInsertBatch(CopyState cstate, EState *estate, CommandId mycid,
 			ExecStoreTuple(bufferedTuples[i], myslot, InvalidBuffer, false);
 			recheckIndexes =
 				ExecInsertIndexTuples(myslot, &(bufferedTuples[i]->t_self),
+									  &(bufferedTuples[i]->t_self), NULL,
 									  estate, false, NULL, NIL);
 			ExecARInsertTriggers(estate, resultRelInfo,
 								 bufferedTuples[i],
diff --git a/src/backend/commands/dbcommands.c b/src/backend/commands/dbcommands.c
index 6ad8fd7..b6ef57d 100644
--- a/src/backend/commands/dbcommands.c
+++ b/src/backend/commands/dbcommands.c
@@ -546,10 +546,7 @@ createdb(ParseState *pstate, const CreatedbStmt *stmt)
 
 	HeapTupleSetOid(tuple, dboid);
 
-	simple_heap_insert(pg_database_rel, tuple);
-
-	/* Update indexes */
-	CatalogUpdateIndexes(pg_database_rel, tuple);
+	CatalogInsertHeapAndIndexes(pg_database_rel, tuple);
 
 	/*
 	 * Now generate additional catalog entries associated with the new DB
@@ -1040,8 +1037,7 @@ RenameDatabase(const char *oldname, const char *newname)
 	if (!HeapTupleIsValid(newtup))
 		elog(ERROR, "cache lookup failed for database %u", db_id);
 	namestrcpy(&(((Form_pg_database) GETSTRUCT(newtup))->datname), newname);
-	simple_heap_update(rel, &newtup->t_self, newtup);
-	CatalogUpdateIndexes(rel, newtup);
+	CatalogUpdateHeapAndIndexes(rel, &newtup->t_self, newtup);
 
 	InvokeObjectPostAlterHook(DatabaseRelationId, db_id, 0);
 
@@ -1296,10 +1292,7 @@ movedb(const char *dbname, const char *tblspcname)
 		newtuple = heap_modify_tuple(oldtuple, RelationGetDescr(pgdbrel),
 									 new_record,
 									 new_record_nulls, new_record_repl);
-		simple_heap_update(pgdbrel, &oldtuple->t_self, newtuple);
-
-		/* Update indexes */
-		CatalogUpdateIndexes(pgdbrel, newtuple);
+		CatalogUpdateHeapAndIndexes(pgdbrel, &oldtuple->t_self, newtuple);
 
 		InvokeObjectPostAlterHook(DatabaseRelationId,
 								  HeapTupleGetOid(newtuple), 0);
@@ -1554,10 +1547,7 @@ AlterDatabase(ParseState *pstate, AlterDatabaseStmt *stmt, bool isTopLevel)
 
 	newtuple = heap_modify_tuple(tuple, RelationGetDescr(rel), new_record,
 								 new_record_nulls, new_record_repl);
-	simple_heap_update(rel, &tuple->t_self, newtuple);
-
-	/* Update indexes */
-	CatalogUpdateIndexes(rel, newtuple);
+	CatalogUpdateHeapAndIndexes(rel, &tuple->t_self, newtuple);
 
 	InvokeObjectPostAlterHook(DatabaseRelationId,
 							  HeapTupleGetOid(newtuple), 0);
@@ -1692,8 +1682,7 @@ AlterDatabaseOwner(const char *dbname, Oid newOwnerId)
 		}
 
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(rel), repl_val, repl_null, repl_repl);
-		simple_heap_update(rel, &newtuple->t_self, newtuple);
-		CatalogUpdateIndexes(rel, newtuple);
+		CatalogUpdateHeapAndIndexes(rel, &newtuple->t_self, newtuple);
 
 		heap_freetuple(newtuple);
 
diff --git a/src/backend/commands/event_trigger.c b/src/backend/commands/event_trigger.c
index 8125537..a5460a3 100644
--- a/src/backend/commands/event_trigger.c
+++ b/src/backend/commands/event_trigger.c
@@ -405,8 +405,7 @@ insert_event_trigger_tuple(char *trigname, char *eventname, Oid evtOwner,
 
 	/* Insert heap tuple. */
 	tuple = heap_form_tuple(tgrel->rd_att, values, nulls);
-	trigoid = simple_heap_insert(tgrel, tuple);
-	CatalogUpdateIndexes(tgrel, tuple);
+	trigoid = CatalogInsertHeapAndIndexes(tgrel, tuple);
 	heap_freetuple(tuple);
 
 	/* Depend on owner. */
@@ -524,8 +523,7 @@ AlterEventTrigger(AlterEventTrigStmt *stmt)
 	evtForm = (Form_pg_event_trigger) GETSTRUCT(tup);
 	evtForm->evtenabled = tgenabled;
 
-	simple_heap_update(tgrel, &tup->t_self, tup);
-	CatalogUpdateIndexes(tgrel, tup);
+	CatalogUpdateHeapAndIndexes(tgrel, &tup->t_self, tup);
 
 	InvokeObjectPostAlterHook(EventTriggerRelationId,
 							  trigoid, 0);
@@ -621,8 +619,7 @@ AlterEventTriggerOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			 errhint("The owner of an event trigger must be a superuser.")));
 
 	form->evtowner = newOwnerId;
-	simple_heap_update(rel, &tup->t_self, tup);
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateHeapAndIndexes(rel, &tup->t_self, tup);
 
 	/* Update owner dependency reference */
 	changeDependencyOnOwner(EventTriggerRelationId,
diff --git a/src/backend/commands/extension.c b/src/backend/commands/extension.c
index f23c697..425d14b 100644
--- a/src/backend/commands/extension.c
+++ b/src/backend/commands/extension.c
@@ -1773,8 +1773,7 @@ InsertExtensionTuple(const char *extName, Oid extOwner,
 
 	tuple = heap_form_tuple(rel->rd_att, values, nulls);
 
-	extensionOid = simple_heap_insert(rel, tuple);
-	CatalogUpdateIndexes(rel, tuple);
+	extensionOid = CatalogInsertHeapAndIndexes(rel, tuple);
 
 	heap_freetuple(tuple);
 	heap_close(rel, RowExclusiveLock);
@@ -2485,8 +2484,7 @@ pg_extension_config_dump(PG_FUNCTION_ARGS)
 	extTup = heap_modify_tuple(extTup, RelationGetDescr(extRel),
 							   repl_val, repl_null, repl_repl);
 
-	simple_heap_update(extRel, &extTup->t_self, extTup);
-	CatalogUpdateIndexes(extRel, extTup);
+	CatalogUpdateHeapAndIndexes(extRel, &extTup->t_self, extTup);
 
 	systable_endscan(extScan);
 
@@ -2663,8 +2661,7 @@ extension_config_remove(Oid extensionoid, Oid tableoid)
 	extTup = heap_modify_tuple(extTup, RelationGetDescr(extRel),
 							   repl_val, repl_null, repl_repl);
 
-	simple_heap_update(extRel, &extTup->t_self, extTup);
-	CatalogUpdateIndexes(extRel, extTup);
+	CatalogUpdateHeapAndIndexes(extRel, &extTup->t_self, extTup);
 
 	systable_endscan(extScan);
 
@@ -2844,8 +2841,7 @@ AlterExtensionNamespace(List *names, const char *newschema, Oid *oldschema)
 	/* Now adjust pg_extension.extnamespace */
 	extForm->extnamespace = nspOid;
 
-	simple_heap_update(extRel, &extTup->t_self, extTup);
-	CatalogUpdateIndexes(extRel, extTup);
+	CatalogUpdateHeapAndIndexes(extRel, &extTup->t_self, extTup);
 
 	heap_close(extRel, RowExclusiveLock);
 
@@ -3091,8 +3087,7 @@ ApplyExtensionUpdates(Oid extensionOid,
 		extTup = heap_modify_tuple(extTup, RelationGetDescr(extRel),
 								   values, nulls, repl);
 
-		simple_heap_update(extRel, &extTup->t_self, extTup);
-		CatalogUpdateIndexes(extRel, extTup);
+		CatalogUpdateHeapAndIndexes(extRel, &extTup->t_self, extTup);
 
 		systable_endscan(extScan);
 
diff --git a/src/backend/commands/foreigncmds.c b/src/backend/commands/foreigncmds.c
index 6ff8b69..a67dc52 100644
--- a/src/backend/commands/foreigncmds.c
+++ b/src/backend/commands/foreigncmds.c
@@ -256,8 +256,7 @@ AlterForeignDataWrapperOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerI
 		tup = heap_modify_tuple(tup, RelationGetDescr(rel), repl_val, repl_null,
 								repl_repl);
 
-		simple_heap_update(rel, &tup->t_self, tup);
-		CatalogUpdateIndexes(rel, tup);
+		CatalogUpdateHeapAndIndexes(rel, &tup->t_self, tup);
 
 		/* Update owner dependency reference */
 		changeDependencyOnOwner(ForeignDataWrapperRelationId,
@@ -397,8 +396,7 @@ AlterForeignServerOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 		tup = heap_modify_tuple(tup, RelationGetDescr(rel), repl_val, repl_null,
 								repl_repl);
 
-		simple_heap_update(rel, &tup->t_self, tup);
-		CatalogUpdateIndexes(rel, tup);
+		CatalogUpdateHeapAndIndexes(rel, &tup->t_self, tup);
 
 		/* Update owner dependency reference */
 		changeDependencyOnOwner(ForeignServerRelationId, HeapTupleGetOid(tup),
@@ -629,8 +627,7 @@ CreateForeignDataWrapper(CreateFdwStmt *stmt)
 
 	tuple = heap_form_tuple(rel->rd_att, values, nulls);
 
-	fdwId = simple_heap_insert(rel, tuple);
-	CatalogUpdateIndexes(rel, tuple);
+	fdwId = CatalogInsertHeapAndIndexes(rel, tuple);
 
 	heap_freetuple(tuple);
 
@@ -786,8 +783,7 @@ AlterForeignDataWrapper(AlterFdwStmt *stmt)
 	tp = heap_modify_tuple(tp, RelationGetDescr(rel),
 						   repl_val, repl_null, repl_repl);
 
-	simple_heap_update(rel, &tp->t_self, tp);
-	CatalogUpdateIndexes(rel, tp);
+	CatalogUpdateHeapAndIndexes(rel, &tp->t_self, tp);
 
 	heap_freetuple(tp);
 
@@ -941,9 +937,7 @@ CreateForeignServer(CreateForeignServerStmt *stmt)
 
 	tuple = heap_form_tuple(rel->rd_att, values, nulls);
 
-	srvId = simple_heap_insert(rel, tuple);
-
-	CatalogUpdateIndexes(rel, tuple);
+	srvId = CatalogInsertHeapAndIndexes(rel, tuple);
 
 	heap_freetuple(tuple);
 
@@ -1056,8 +1050,7 @@ AlterForeignServer(AlterForeignServerStmt *stmt)
 	tp = heap_modify_tuple(tp, RelationGetDescr(rel),
 						   repl_val, repl_null, repl_repl);
 
-	simple_heap_update(rel, &tp->t_self, tp);
-	CatalogUpdateIndexes(rel, tp);
+	CatalogUpdateHeapAndIndexes(rel, &tp->t_self, tp);
 
 	InvokeObjectPostAlterHook(ForeignServerRelationId, srvId, 0);
 
@@ -1190,9 +1183,7 @@ CreateUserMapping(CreateUserMappingStmt *stmt)
 
 	tuple = heap_form_tuple(rel->rd_att, values, nulls);
 
-	umId = simple_heap_insert(rel, tuple);
-
-	CatalogUpdateIndexes(rel, tuple);
+	umId = CatalogInsertHeapAndIndexes(rel, tuple);
 
 	heap_freetuple(tuple);
 
@@ -1307,8 +1298,7 @@ AlterUserMapping(AlterUserMappingStmt *stmt)
 	tp = heap_modify_tuple(tp, RelationGetDescr(rel),
 						   repl_val, repl_null, repl_repl);
 
-	simple_heap_update(rel, &tp->t_self, tp);
-	CatalogUpdateIndexes(rel, tp);
+	CatalogUpdateHeapAndIndexes(rel, &tp->t_self, tp);
 
 	ObjectAddressSet(address, UserMappingRelationId, umId);
 
@@ -1484,8 +1474,7 @@ CreateForeignTable(CreateForeignTableStmt *stmt, Oid relid)
 
 	tuple = heap_form_tuple(ftrel->rd_att, values, nulls);
 
-	simple_heap_insert(ftrel, tuple);
-	CatalogUpdateIndexes(ftrel, tuple);
+	CatalogInsertHeapAndIndexes(ftrel, tuple);
 
 	heap_freetuple(tuple);
 
diff --git a/src/backend/commands/functioncmds.c b/src/backend/commands/functioncmds.c
index ec833c3..c58dc26 100644
--- a/src/backend/commands/functioncmds.c
+++ b/src/backend/commands/functioncmds.c
@@ -1292,8 +1292,7 @@ AlterFunction(ParseState *pstate, AlterFunctionStmt *stmt)
 		procForm->proparallel = interpret_func_parallel(parallel_item);
 
 	/* Do the update */
-	simple_heap_update(rel, &tup->t_self, tup);
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateHeapAndIndexes(rel, &tup->t_self, tup);
 
 	InvokeObjectPostAlterHook(ProcedureRelationId, funcOid, 0);
 
@@ -1333,9 +1332,7 @@ SetFunctionReturnType(Oid funcOid, Oid newRetType)
 	procForm->prorettype = newRetType;
 
 	/* update the catalog and its indexes */
-	simple_heap_update(pg_proc_rel, &tup->t_self, tup);
-
-	CatalogUpdateIndexes(pg_proc_rel, tup);
+	CatalogUpdateHeapAndIndexes(pg_proc_rel, &tup->t_self, tup);
 
 	heap_close(pg_proc_rel, RowExclusiveLock);
 }
@@ -1368,9 +1365,7 @@ SetFunctionArgType(Oid funcOid, int argIndex, Oid newArgType)
 	procForm->proargtypes.values[argIndex] = newArgType;
 
 	/* update the catalog and its indexes */
-	simple_heap_update(pg_proc_rel, &tup->t_self, tup);
-
-	CatalogUpdateIndexes(pg_proc_rel, tup);
+	CatalogUpdateHeapAndIndexes(pg_proc_rel, &tup->t_self, tup);
 
 	heap_close(pg_proc_rel, RowExclusiveLock);
 }
@@ -1656,9 +1651,7 @@ CreateCast(CreateCastStmt *stmt)
 
 	tuple = heap_form_tuple(RelationGetDescr(relation), values, nulls);
 
-	castid = simple_heap_insert(relation, tuple);
-
-	CatalogUpdateIndexes(relation, tuple);
+	castid = CatalogInsertHeapAndIndexes(relation, tuple);
 
 	/* make dependency entries */
 	myself.classId = CastRelationId;
@@ -1921,7 +1914,7 @@ CreateTransform(CreateTransformStmt *stmt)
 		replaces[Anum_pg_transform_trftosql - 1] = true;
 
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values, nulls, replaces);
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
+		CatalogUpdateHeapAndIndexes(relation, &newtuple->t_self, newtuple);
 
 		transformid = HeapTupleGetOid(tuple);
 		ReleaseSysCache(tuple);
@@ -1930,12 +1923,10 @@ CreateTransform(CreateTransformStmt *stmt)
 	else
 	{
 		newtuple = heap_form_tuple(RelationGetDescr(relation), values, nulls);
-		transformid = simple_heap_insert(relation, newtuple);
+		transformid = CatalogInsertHeapAndIndexes(relation, newtuple);
 		is_replace = false;
 	}
 
-	CatalogUpdateIndexes(relation, newtuple);
-
 	if (is_replace)
 		deleteDependencyRecordsFor(TransformRelationId, transformid, true);
 
diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c
index ed6136c..0fc77b6 100644
--- a/src/backend/commands/indexcmds.c
+++ b/src/backend/commands/indexcmds.c
@@ -694,7 +694,14 @@ DefineIndex(Oid relationId,
 	 * visible to other transactions before we start to build the index. That
 	 * will prevent them from making incompatible HOT updates.  The new index
 	 * will be marked not indisready and not indisvalid, so that no one else
-	 * tries to either insert into it or use it for queries.
+	 * tries to either insert into it or use it for queries. In addition,
+	 * WARM updates will be disallowed if an update modifies one of the
+	 * columns used by this new index. This is necessary to ensure that we
+	 * don't create WARM tuples that lack a corresponding entry in this
+	 * index. Note that during the second phase, we will index only those
+	 * heap tuples whose root line pointer is not already in the index, so
+	 * it is important that all tuples in a given chain have the same value
+	 * for any indexed column (including this new index).
 	 *
 	 * We must commit our current transaction so that the index becomes
 	 * visible; then start another.  Note that all the data structures we just
@@ -742,7 +749,10 @@ DefineIndex(Oid relationId,
 	 * marked as "not-ready-for-inserts".  The index is consulted while
 	 * deciding HOT-safety though.  This arrangement ensures that no new HOT
 	 * chains can be created where the new tuple and the old tuple in the
-	 * chain have different index keys.
+	 * chain have different index keys. Also, the new index is consulted
+	 * when deciding whether a WARM update is possible, and a WARM update
+	 * is not done if a column used by this index is being updated. This
+	 * ensures that we don't create WARM tuples missing from this index.
 	 *
 	 * We now take a new snapshot, and build the index using all tuples that
 	 * are visible in this snapshot.  We can be sure that any HOT updates to
@@ -777,7 +787,8 @@ DefineIndex(Oid relationId,
 	/*
 	 * Update the pg_index row to mark the index as ready for inserts. Once we
 	 * commit this transaction, any new transactions that open the table must
-	 * insert new entries into the index for insertions and non-HOT updates.
+	 * insert new entries into the index for insertions, non-HOT updates, and
+	 * WARM updates for which this index needs a new entry.
 	 */
 	index_set_state_flags(indexRelationId, INDEX_CREATE_SET_READY);
 
diff --git a/src/backend/commands/matview.c b/src/backend/commands/matview.c
index b7daf1c..53661a3 100644
--- a/src/backend/commands/matview.c
+++ b/src/backend/commands/matview.c
@@ -100,9 +100,7 @@ SetMatViewPopulatedState(Relation relation, bool newstate)
 
 	((Form_pg_class) GETSTRUCT(tuple))->relispopulated = newstate;
 
-	simple_heap_update(pgrel, &tuple->t_self, tuple);
-
-	CatalogUpdateIndexes(pgrel, tuple);
+	CatalogUpdateHeapAndIndexes(pgrel, &tuple->t_self, tuple);
 
 	heap_freetuple(tuple);
 	heap_close(pgrel, RowExclusiveLock);
diff --git a/src/backend/commands/opclasscmds.c b/src/backend/commands/opclasscmds.c
index bc43483..adb4a7d 100644
--- a/src/backend/commands/opclasscmds.c
+++ b/src/backend/commands/opclasscmds.c
@@ -278,9 +278,7 @@ CreateOpFamily(char *amname, char *opfname, Oid namespaceoid, Oid amoid)
 
 	tup = heap_form_tuple(rel->rd_att, values, nulls);
 
-	opfamilyoid = simple_heap_insert(rel, tup);
-
-	CatalogUpdateIndexes(rel, tup);
+	opfamilyoid = CatalogInsertHeapAndIndexes(rel, tup);
 
 	heap_freetuple(tup);
 
@@ -654,9 +652,7 @@ DefineOpClass(CreateOpClassStmt *stmt)
 
 	tup = heap_form_tuple(rel->rd_att, values, nulls);
 
-	opclassoid = simple_heap_insert(rel, tup);
-
-	CatalogUpdateIndexes(rel, tup);
+	opclassoid = CatalogInsertHeapAndIndexes(rel, tup);
 
 	heap_freetuple(tup);
 
@@ -1327,9 +1323,7 @@ storeOperators(List *opfamilyname, Oid amoid,
 
 		tup = heap_form_tuple(rel->rd_att, values, nulls);
 
-		entryoid = simple_heap_insert(rel, tup);
-
-		CatalogUpdateIndexes(rel, tup);
+		entryoid = CatalogInsertHeapAndIndexes(rel, tup);
 
 		heap_freetuple(tup);
 
@@ -1438,9 +1432,7 @@ storeProcedures(List *opfamilyname, Oid amoid,
 
 		tup = heap_form_tuple(rel->rd_att, values, nulls);
 
-		entryoid = simple_heap_insert(rel, tup);
-
-		CatalogUpdateIndexes(rel, tup);
+		entryoid = CatalogInsertHeapAndIndexes(rel, tup);
 
 		heap_freetuple(tup);
 
diff --git a/src/backend/commands/operatorcmds.c b/src/backend/commands/operatorcmds.c
index a273376..eb6b308 100644
--- a/src/backend/commands/operatorcmds.c
+++ b/src/backend/commands/operatorcmds.c
@@ -518,8 +518,7 @@ AlterOperator(AlterOperatorStmt *stmt)
 	tup = heap_modify_tuple(tup, RelationGetDescr(catalog),
 							values, nulls, replaces);
 
-	simple_heap_update(catalog, &tup->t_self, tup);
-	CatalogUpdateIndexes(catalog, tup);
+	CatalogUpdateHeapAndIndexes(catalog, &tup->t_self, tup);
 
 	address = makeOperatorDependencies(tup, true);
 
diff --git a/src/backend/commands/policy.c b/src/backend/commands/policy.c
index 5d9d3a6..d1513f7 100644
--- a/src/backend/commands/policy.c
+++ b/src/backend/commands/policy.c
@@ -614,10 +614,7 @@ RemoveRoleFromObjectPolicy(Oid roleid, Oid classid, Oid policy_id)
 		new_tuple = heap_modify_tuple(tuple,
 									  RelationGetDescr(pg_policy_rel),
 									  values, isnull, replaces);
-		simple_heap_update(pg_policy_rel, &new_tuple->t_self, new_tuple);
-
-		/* Update Catalog Indexes */
-		CatalogUpdateIndexes(pg_policy_rel, new_tuple);
+		CatalogUpdateHeapAndIndexes(pg_policy_rel, &new_tuple->t_self, new_tuple);
 
 		/* Remove all old dependencies. */
 		deleteDependencyRecordsFor(PolicyRelationId, policy_id, false);
@@ -823,10 +820,7 @@ CreatePolicy(CreatePolicyStmt *stmt)
 	policy_tuple = heap_form_tuple(RelationGetDescr(pg_policy_rel), values,
 								   isnull);
 
-	policy_id = simple_heap_insert(pg_policy_rel, policy_tuple);
-
-	/* Update Indexes */
-	CatalogUpdateIndexes(pg_policy_rel, policy_tuple);
+	policy_id = CatalogInsertHeapAndIndexes(pg_policy_rel, policy_tuple);
 
 	/* Record Dependencies */
 	target.classId = RelationRelationId;
@@ -1150,10 +1144,7 @@ AlterPolicy(AlterPolicyStmt *stmt)
 	new_tuple = heap_modify_tuple(policy_tuple,
 								  RelationGetDescr(pg_policy_rel),
 								  values, isnull, replaces);
-	simple_heap_update(pg_policy_rel, &new_tuple->t_self, new_tuple);
-
-	/* Update Catalog Indexes */
-	CatalogUpdateIndexes(pg_policy_rel, new_tuple);
+	CatalogUpdateHeapAndIndexes(pg_policy_rel, &new_tuple->t_self, new_tuple);
 
 	/* Update Dependencies. */
 	deleteDependencyRecordsFor(PolicyRelationId, policy_id, false);
@@ -1287,10 +1278,7 @@ rename_policy(RenameStmt *stmt)
 	namestrcpy(&((Form_pg_policy) GETSTRUCT(policy_tuple))->polname,
 			   stmt->newname);
 
-	simple_heap_update(pg_policy_rel, &policy_tuple->t_self, policy_tuple);
-
-	/* keep system catalog indexes current */
-	CatalogUpdateIndexes(pg_policy_rel, policy_tuple);
+	CatalogUpdateHeapAndIndexes(pg_policy_rel, &policy_tuple->t_self, policy_tuple);
 
 	InvokeObjectPostAlterHook(PolicyRelationId,
 							  HeapTupleGetOid(policy_tuple), 0);
diff --git a/src/backend/commands/proclang.c b/src/backend/commands/proclang.c
index b684f41..f7fa548 100644
--- a/src/backend/commands/proclang.c
+++ b/src/backend/commands/proclang.c
@@ -378,7 +378,7 @@ create_proc_lang(const char *languageName, bool replace,
 
 		/* Okay, do it... */
 		tup = heap_modify_tuple(oldtup, tupDesc, values, nulls, replaces);
-		simple_heap_update(rel, &tup->t_self, tup);
+		CatalogUpdateHeapAndIndexes(rel, &tup->t_self, tup);
 
 		ReleaseSysCache(oldtup);
 		is_update = true;
@@ -387,13 +387,10 @@ create_proc_lang(const char *languageName, bool replace,
 	{
 		/* Creating a new language */
 		tup = heap_form_tuple(tupDesc, values, nulls);
-		simple_heap_insert(rel, tup);
+		CatalogInsertHeapAndIndexes(rel, tup);
 		is_update = false;
 	}
 
-	/* Need to update indexes for either the insert or update case */
-	CatalogUpdateIndexes(rel, tup);
-
 	/*
 	 * Create dependencies for the new language.  If we are updating an
 	 * existing language, first delete any existing pg_depend entries.
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 173b076..57543e4 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -215,8 +215,7 @@ CreatePublication(CreatePublicationStmt *stmt)
 	tup = heap_form_tuple(RelationGetDescr(rel), values, nulls);
 
 	/* Insert tuple into catalog. */
-	puboid = simple_heap_insert(rel, tup);
-	CatalogUpdateIndexes(rel, tup);
+	puboid = CatalogInsertHeapAndIndexes(rel, tup);
 	heap_freetuple(tup);
 
 	recordDependencyOnOwner(PublicationRelationId, puboid, GetUserId());
@@ -295,8 +294,7 @@ AlterPublicationOptions(AlterPublicationStmt *stmt, Relation rel,
 							replaces);
 
 	/* Update the catalog. */
-	simple_heap_update(rel, &tup->t_self, tup);
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateHeapAndIndexes(rel, &tup->t_self, tup);
 
 	CommandCounterIncrement();
 
@@ -686,8 +684,7 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 				 errhint("The owner of a publication must be a superuser.")));
 
 	form->pubowner = newOwnerId;
-	simple_heap_update(rel, &tup->t_self, tup);
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateHeapAndIndexes(rel, &tup->t_self, tup);
 
 	/* Update owner dependency reference */
 	changeDependencyOnOwner(PublicationRelationId,
diff --git a/src/backend/commands/schemacmds.c b/src/backend/commands/schemacmds.c
index c3b37b2..f49767e 100644
--- a/src/backend/commands/schemacmds.c
+++ b/src/backend/commands/schemacmds.c
@@ -281,8 +281,7 @@ RenameSchema(const char *oldname, const char *newname)
 
 	/* rename */
 	namestrcpy(&(((Form_pg_namespace) GETSTRUCT(tup))->nspname), newname);
-	simple_heap_update(rel, &tup->t_self, tup);
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateHeapAndIndexes(rel, &tup->t_self, tup);
 
 	InvokeObjectPostAlterHook(NamespaceRelationId, HeapTupleGetOid(tup), 0);
 
@@ -417,8 +416,7 @@ AlterSchemaOwner_internal(HeapTuple tup, Relation rel, Oid newOwnerId)
 
 		newtuple = heap_modify_tuple(tup, RelationGetDescr(rel), repl_val, repl_null, repl_repl);
 
-		simple_heap_update(rel, &newtuple->t_self, newtuple);
-		CatalogUpdateIndexes(rel, newtuple);
+		CatalogUpdateHeapAndIndexes(rel, &newtuple->t_self, newtuple);
 
 		heap_freetuple(newtuple);
 
diff --git a/src/backend/commands/seclabel.c b/src/backend/commands/seclabel.c
index 324f2e7..7e25411 100644
--- a/src/backend/commands/seclabel.c
+++ b/src/backend/commands/seclabel.c
@@ -299,7 +299,7 @@ SetSharedSecurityLabel(const ObjectAddress *object,
 			replaces[Anum_pg_shseclabel_label - 1] = true;
 			newtup = heap_modify_tuple(oldtup, RelationGetDescr(pg_shseclabel),
 									   values, nulls, replaces);
-			simple_heap_update(pg_shseclabel, &oldtup->t_self, newtup);
+			CatalogUpdateHeapAndIndexes(pg_shseclabel, &oldtup->t_self, newtup);
 		}
 	}
 	systable_endscan(scan);
@@ -309,15 +309,11 @@ SetSharedSecurityLabel(const ObjectAddress *object,
 	{
 		newtup = heap_form_tuple(RelationGetDescr(pg_shseclabel),
 								 values, nulls);
-		simple_heap_insert(pg_shseclabel, newtup);
+		CatalogInsertHeapAndIndexes(pg_shseclabel, newtup);
 	}
 
-	/* Update indexes, if necessary */
 	if (newtup != NULL)
-	{
-		CatalogUpdateIndexes(pg_shseclabel, newtup);
 		heap_freetuple(newtup);
-	}
 
 	heap_close(pg_shseclabel, RowExclusiveLock);
 }
@@ -390,7 +386,7 @@ SetSecurityLabel(const ObjectAddress *object,
 			replaces[Anum_pg_seclabel_label - 1] = true;
 			newtup = heap_modify_tuple(oldtup, RelationGetDescr(pg_seclabel),
 									   values, nulls, replaces);
-			simple_heap_update(pg_seclabel, &oldtup->t_self, newtup);
+			CatalogUpdateHeapAndIndexes(pg_seclabel, &oldtup->t_self, newtup);
 		}
 	}
 	systable_endscan(scan);
@@ -400,15 +396,11 @@
 	{
 		newtup = heap_form_tuple(RelationGetDescr(pg_seclabel),
 								 values, nulls);
-		simple_heap_insert(pg_seclabel, newtup);
+		CatalogInsertHeapAndIndexes(pg_seclabel, newtup);
 	}
 
-	/* Update indexes, if necessary */
 	if (newtup != NULL)
-	{
-		CatalogUpdateIndexes(pg_seclabel, newtup);
 		heap_freetuple(newtup);
-	}
 
 	heap_close(pg_seclabel, RowExclusiveLock);
 }
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 0c673f5..830b600 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -236,8 +236,7 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
 	pgs_values[Anum_pg_sequence_seqcache - 1] = Int64GetDatumFast(seqform.seqcache);
 
 	tuple = heap_form_tuple(tupDesc, pgs_values, pgs_nulls);
-	simple_heap_insert(rel, tuple);
-	CatalogUpdateIndexes(rel, tuple);
+	CatalogInsertHeapAndIndexes(rel, tuple);
 
 	heap_freetuple(tuple);
 	heap_close(rel, RowExclusiveLock);
@@ -504,8 +503,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 
 	relation_close(seqrel, NoLock);
 
-	simple_heap_update(rel, &tuple->t_self, tuple);
-	CatalogUpdateIndexes(rel, tuple);
+	CatalogUpdateHeapAndIndexes(rel, &tuple->t_self, tuple);
 	heap_close(rel, RowExclusiveLock);
 
 	return address;
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 41ef7a3..853dcd3 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -277,8 +277,7 @@ CreateSubscription(CreateSubscriptionStmt *stmt)
 	tup = heap_form_tuple(RelationGetDescr(rel), values, nulls);
 
 	/* Insert tuple into catalog. */
-	subid = simple_heap_insert(rel, tup);
-	CatalogUpdateIndexes(rel, tup);
+	subid = CatalogInsertHeapAndIndexes(rel, tup);
 	heap_freetuple(tup);
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
@@ -408,8 +407,7 @@ AlterSubscription(AlterSubscriptionStmt *stmt)
 							replaces);
 
 	/* Update the catalog. */
-	simple_heap_update(rel, &tup->t_self, tup);
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateHeapAndIndexes(rel, &tup->t_self, tup);
 
 	ObjectAddressSet(myself, SubscriptionRelationId, subid);
 
@@ -588,8 +586,7 @@ AlterSubscriptionOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			 errhint("The owner of an subscription must be a superuser.")));
 
 	form->subowner = newOwnerId;
-	simple_heap_update(rel, &tup->t_self, tup);
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateHeapAndIndexes(rel, &tup->t_self, tup);
 
 	/* Update owner dependency reference */
 	changeDependencyOnOwner(SubscriptionRelationId,
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 90f2f7f..f62f8d7 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -2308,9 +2308,7 @@ StoreCatalogInheritance1(Oid relationId, Oid parentOid,
 
 	tuple = heap_form_tuple(desc, values, nulls);
 
-	simple_heap_insert(inhRelation, tuple);
-
-	CatalogUpdateIndexes(inhRelation, tuple);
+	CatalogInsertHeapAndIndexes(inhRelation, tuple);
 
 	heap_freetuple(tuple);
 
@@ -2398,10 +2396,7 @@ SetRelationHasSubclass(Oid relationId, bool relhassubclass)
 	if (classtuple->relhassubclass != relhassubclass)
 	{
 		classtuple->relhassubclass = relhassubclass;
-		simple_heap_update(relationRelation, &tuple->t_self, tuple);
-
-		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relationRelation, tuple);
+		CatalogUpdateHeapAndIndexes(relationRelation, &tuple->t_self, tuple);
 	}
 	else
 	{
@@ -2592,10 +2587,7 @@ renameatt_internal(Oid myrelid,
 	/* apply the update */
 	namestrcpy(&(attform->attname), newattname);
 
-	simple_heap_update(attrelation, &atttup->t_self, atttup);
-
-	/* keep system catalog indexes current */
-	CatalogUpdateIndexes(attrelation, atttup);
+	CatalogUpdateHeapAndIndexes(attrelation, &atttup->t_self, atttup);
 
 	InvokeObjectPostAlterHook(RelationRelationId, myrelid, attnum);
 
@@ -2902,10 +2894,7 @@ RenameRelationInternal(Oid myrelid, const char *newrelname, bool is_internal)
 	 */
 	namestrcpy(&(relform->relname), newrelname);
 
-	simple_heap_update(relrelation, &reltup->t_self, reltup);
-
-	/* keep the system catalog indexes current */
-	CatalogUpdateIndexes(relrelation, reltup);
+	CatalogUpdateHeapAndIndexes(relrelation, &reltup->t_self, reltup);
 
 	InvokeObjectPostAlterHookArg(RelationRelationId, myrelid, 0,
 								 InvalidOid, is_internal);
@@ -5097,8 +5086,7 @@ ATExecAddColumn(List **wqueue, AlteredTableInfo *tab, Relation rel,
 
 			/* Bump the existing child att's inhcount */
 			childatt->attinhcount++;
-			simple_heap_update(attrdesc, &tuple->t_self, tuple);
-			CatalogUpdateIndexes(attrdesc, tuple);
+			CatalogUpdateHeapAndIndexes(attrdesc, &tuple->t_self, tuple);
 
 			heap_freetuple(tuple);
 
@@ -5191,10 +5179,7 @@ ATExecAddColumn(List **wqueue, AlteredTableInfo *tab, Relation rel,
 	else
 		((Form_pg_class) GETSTRUCT(reltup))->relnatts = newattnum;
 
-	simple_heap_update(pgclass, &reltup->t_self, reltup);
-
-	/* keep catalog indexes current */
-	CatalogUpdateIndexes(pgclass, reltup);
+	CatalogUpdateHeapAndIndexes(pgclass, &reltup->t_self, reltup);
 
 	heap_freetuple(reltup);
 
@@ -5630,10 +5615,7 @@ ATExecDropNotNull(Relation rel, const char *colName, LOCKMODE lockmode)
 	{
 		((Form_pg_attribute) GETSTRUCT(tuple))->attnotnull = FALSE;
 
-		simple_heap_update(attr_rel, &tuple->t_self, tuple);
-
-		/* keep the system catalog indexes current */
-		CatalogUpdateIndexes(attr_rel, tuple);
+		CatalogUpdateHeapAndIndexes(attr_rel, &tuple->t_self, tuple);
 
 		ObjectAddressSubSet(address, RelationRelationId,
 							RelationGetRelid(rel), attnum);
@@ -5708,10 +5690,7 @@ ATExecSetNotNull(AlteredTableInfo *tab, Relation rel,
 	{
 		((Form_pg_attribute) GETSTRUCT(tuple))->attnotnull = TRUE;
 
-		simple_heap_update(attr_rel, &tuple->t_self, tuple);
-
-		/* keep the system catalog indexes current */
-		CatalogUpdateIndexes(attr_rel, tuple);
+		CatalogUpdateHeapAndIndexes(attr_rel, &tuple->t_self, tuple);
 
 		/* Tell Phase 3 it needs to test the constraint */
 		tab->new_notnull = true;
@@ -5876,10 +5855,7 @@ ATExecSetStatistics(Relation rel, const char *colName, Node *newValue, LOCKMODE
 
 	attrtuple->attstattarget = newtarget;
 
-	simple_heap_update(attrelation, &tuple->t_self, tuple);
-
-	/* keep system catalog indexes current */
-	CatalogUpdateIndexes(attrelation, tuple);
+	CatalogUpdateHeapAndIndexes(attrelation, &tuple->t_self, tuple);
 
 	InvokeObjectPostAlterHook(RelationRelationId,
 							  RelationGetRelid(rel),
@@ -5952,8 +5928,7 @@ ATExecSetOptions(Relation rel, const char *colName, Node *options,
 								 repl_val, repl_null, repl_repl);
 
 	/* Update system catalog. */
-	simple_heap_update(attrelation, &newtuple->t_self, newtuple);
-	CatalogUpdateIndexes(attrelation, newtuple);
+	CatalogUpdateHeapAndIndexes(attrelation, &newtuple->t_self, newtuple);
 
 	InvokeObjectPostAlterHook(RelationRelationId,
 							  RelationGetRelid(rel),
@@ -6036,10 +6011,7 @@ ATExecSetStorage(Relation rel, const char *colName, Node *newValue, LOCKMODE loc
 				 errmsg("column data type %s can only have storage PLAIN",
 						format_type_be(attrtuple->atttypid))));
 
-	simple_heap_update(attrelation, &tuple->t_self, tuple);
-
-	/* keep system catalog indexes current */
-	CatalogUpdateIndexes(attrelation, tuple);
+	CatalogUpdateHeapAndIndexes(attrelation, &tuple->t_self, tuple);
 
 	InvokeObjectPostAlterHook(RelationRelationId,
 							  RelationGetRelid(rel),
@@ -6277,10 +6249,7 @@ ATExecDropColumn(List **wqueue, Relation rel, const char *colName,
 					/* Child column must survive my deletion */
 					childatt->attinhcount--;
 
-					simple_heap_update(attr_rel, &tuple->t_self, tuple);
-
-					/* keep the system catalog indexes current */
-					CatalogUpdateIndexes(attr_rel, tuple);
+					CatalogUpdateHeapAndIndexes(attr_rel, &tuple->t_self, tuple);
 
 					/* Make update visible */
 					CommandCounterIncrement();
@@ -6296,10 +6265,7 @@ ATExecDropColumn(List **wqueue, Relation rel, const char *colName,
 				childatt->attinhcount--;
 				childatt->attislocal = true;
 
-				simple_heap_update(attr_rel, &tuple->t_self, tuple);
-
-				/* keep the system catalog indexes current */
-				CatalogUpdateIndexes(attr_rel, tuple);
+				CatalogUpdateHeapAndIndexes(attr_rel, &tuple->t_self, tuple);
 
 				/* Make update visible */
 				CommandCounterIncrement();
@@ -6343,10 +6309,7 @@ ATExecDropColumn(List **wqueue, Relation rel, const char *colName,
 		tuple_class = (Form_pg_class) GETSTRUCT(tuple);
 
 		tuple_class->relhasoids = false;
-		simple_heap_update(class_rel, &tuple->t_self, tuple);
-
-		/* Keep the catalog indexes up to date */
-		CatalogUpdateIndexes(class_rel, tuple);
+		CatalogUpdateHeapAndIndexes(class_rel, &tuple->t_self, tuple);
 
 		heap_close(class_rel, RowExclusiveLock);
 
@@ -7195,8 +7158,7 @@ ATExecAlterConstraint(Relation rel, AlterTableCmd *cmd,
 		copy_con = (Form_pg_constraint) GETSTRUCT(copyTuple);
 		copy_con->condeferrable = cmdcon->deferrable;
 		copy_con->condeferred = cmdcon->initdeferred;
-		simple_heap_update(conrel, &copyTuple->t_self, copyTuple);
-		CatalogUpdateIndexes(conrel, copyTuple);
+		CatalogUpdateHeapAndIndexes(conrel, &copyTuple->t_self, copyTuple);
 
 		InvokeObjectPostAlterHook(ConstraintRelationId,
 								  HeapTupleGetOid(contuple), 0);
@@ -7249,8 +7211,7 @@ ATExecAlterConstraint(Relation rel, AlterTableCmd *cmd,
 
 			copy_tg->tgdeferrable = cmdcon->deferrable;
 			copy_tg->tginitdeferred = cmdcon->initdeferred;
-			simple_heap_update(tgrel, &copyTuple->t_self, copyTuple);
-			CatalogUpdateIndexes(tgrel, copyTuple);
+			CatalogUpdateHeapAndIndexes(tgrel, &copyTuple->t_self, copyTuple);
 
 			InvokeObjectPostAlterHook(TriggerRelationId,
 									  HeapTupleGetOid(tgtuple), 0);
@@ -7436,8 +7397,7 @@ ATExecValidateConstraint(Relation rel, char *constrName, bool recurse,
 		copyTuple = heap_copytuple(tuple);
 		copy_con = (Form_pg_constraint) GETSTRUCT(copyTuple);
 		copy_con->convalidated = true;
-		simple_heap_update(conrel, &copyTuple->t_self, copyTuple);
-		CatalogUpdateIndexes(conrel, copyTuple);
+		CatalogUpdateHeapAndIndexes(conrel, &copyTuple->t_self, copyTuple);
 
 		InvokeObjectPostAlterHook(ConstraintRelationId,
 								  HeapTupleGetOid(tuple), 0);
@@ -8339,8 +8299,7 @@ ATExecDropConstraint(Relation rel, const char *constrName,
 			{
 				/* Child constraint must survive my deletion */
 				con->coninhcount--;
-				simple_heap_update(conrel, &copy_tuple->t_self, copy_tuple);
-				CatalogUpdateIndexes(conrel, copy_tuple);
+				CatalogUpdateHeapAndIndexes(conrel, &copy_tuple->t_self, copy_tuple);
 
 				/* Make update visible */
 				CommandCounterIncrement();
@@ -8356,8 +8315,7 @@ ATExecDropConstraint(Relation rel, const char *constrName,
 			con->coninhcount--;
 			con->conislocal = true;
 
-			simple_heap_update(conrel, &copy_tuple->t_self, copy_tuple);
-			CatalogUpdateIndexes(conrel, copy_tuple);
+			CatalogUpdateHeapAndIndexes(conrel, &copy_tuple->t_self, copy_tuple);
 
 			/* Make update visible */
 			CommandCounterIncrement();
@@ -9003,10 +8961,7 @@ ATExecAlterColumnType(AlteredTableInfo *tab, Relation rel,
 
 	ReleaseSysCache(typeTuple);
 
-	simple_heap_update(attrelation, &heapTup->t_self, heapTup);
-
-	/* keep system catalog indexes current */
-	CatalogUpdateIndexes(attrelation, heapTup);
+	CatalogUpdateHeapAndIndexes(attrelation, &heapTup->t_self, heapTup);
 
 	heap_close(attrelation, RowExclusiveLock);
 
@@ -9144,8 +9099,7 @@ ATExecAlterColumnGenericOptions(Relation rel,
 	newtuple = heap_modify_tuple(tuple, RelationGetDescr(attrel),
 								 repl_val, repl_null, repl_repl);
 
-	simple_heap_update(attrel, &newtuple->t_self, newtuple);
-	CatalogUpdateIndexes(attrel, newtuple);
+	CatalogUpdateHeapAndIndexes(attrel, &newtuple->t_self, newtuple);
 
 	InvokeObjectPostAlterHook(RelationRelationId,
 							  RelationGetRelid(rel),
@@ -9661,8 +9615,7 @@ ATExecChangeOwner(Oid relationOid, Oid newOwnerId, bool recursing, LOCKMODE lock
 
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(class_rel), repl_val, repl_null, repl_repl);
 
-		simple_heap_update(class_rel, &newtuple->t_self, newtuple);
-		CatalogUpdateIndexes(class_rel, newtuple);
+		CatalogUpdateHeapAndIndexes(class_rel, &newtuple->t_self, newtuple);
 
 		heap_freetuple(newtuple);
 
@@ -9789,8 +9742,7 @@ change_owner_fix_column_acls(Oid relationOid, Oid oldOwnerId, Oid newOwnerId)
 									 RelationGetDescr(attRelation),
 									 repl_val, repl_null, repl_repl);
 
-		simple_heap_update(attRelation, &newtuple->t_self, newtuple);
-		CatalogUpdateIndexes(attRelation, newtuple);
+		CatalogUpdateHeapAndIndexes(attRelation, &newtuple->t_self, newtuple);
 
 		heap_freetuple(newtuple);
 	}
@@ -10067,9 +10019,7 @@ ATExecSetRelOptions(Relation rel, List *defList, AlterTableType operation,
 	newtuple = heap_modify_tuple(tuple, RelationGetDescr(pgclass),
 								 repl_val, repl_null, repl_repl);
 
-	simple_heap_update(pgclass, &newtuple->t_self, newtuple);
-
-	CatalogUpdateIndexes(pgclass, newtuple);
+	CatalogUpdateHeapAndIndexes(pgclass, &newtuple->t_self, newtuple);
 
 	InvokeObjectPostAlterHook(RelationRelationId, RelationGetRelid(rel), 0);
 
@@ -10126,9 +10076,7 @@ ATExecSetRelOptions(Relation rel, List *defList, AlterTableType operation,
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(pgclass),
 									 repl_val, repl_null, repl_repl);
 
-		simple_heap_update(pgclass, &newtuple->t_self, newtuple);
-
-		CatalogUpdateIndexes(pgclass, newtuple);
+		CatalogUpdateHeapAndIndexes(pgclass, &newtuple->t_self, newtuple);
 
 		InvokeObjectPostAlterHookArg(RelationRelationId,
 									 RelationGetRelid(toastrel), 0,
@@ -10289,8 +10237,7 @@ ATExecSetTableSpace(Oid tableOid, Oid newTableSpace, LOCKMODE lockmode)
 	/* update the pg_class row */
 	rd_rel->reltablespace = (newTableSpace == MyDatabaseTableSpace) ? InvalidOid : newTableSpace;
 	rd_rel->relfilenode = newrelfilenode;
-	simple_heap_update(pg_class, &tuple->t_self, tuple);
-	CatalogUpdateIndexes(pg_class, tuple);
+	CatalogUpdateHeapAndIndexes(pg_class, &tuple->t_self, tuple);
 
 	InvokeObjectPostAlterHook(RelationRelationId, RelationGetRelid(rel), 0);
 
@@ -10940,8 +10887,7 @@ MergeAttributesIntoExisting(Relation child_rel, Relation parent_rel)
 				childatt->attislocal = false;
 			}
 
-			simple_heap_update(attrrel, &tuple->t_self, tuple);
-			CatalogUpdateIndexes(attrrel, tuple);
+			CatalogUpdateHeapAndIndexes(attrrel, &tuple->t_self, tuple);
 			heap_freetuple(tuple);
 		}
 		else
@@ -10980,8 +10926,7 @@ MergeAttributesIntoExisting(Relation child_rel, Relation parent_rel)
 				childatt->attislocal = false;
 			}
 
-			simple_heap_update(attrrel, &tuple->t_self, tuple);
-			CatalogUpdateIndexes(attrrel, tuple);
+			CatalogUpdateHeapAndIndexes(attrrel, &tuple->t_self, tuple);
 			heap_freetuple(tuple);
 		}
 		else
@@ -11118,8 +11063,7 @@ MergeConstraintsIntoExisting(Relation child_rel, Relation parent_rel)
 				child_con->conislocal = false;
 			}
 
-			simple_heap_update(catalog_relation, &child_copy->t_self, child_copy);
-			CatalogUpdateIndexes(catalog_relation, child_copy);
+			CatalogUpdateHeapAndIndexes(catalog_relation, &child_copy->t_self, child_copy);
 			heap_freetuple(child_copy);
 
 			found = true;
@@ -11289,8 +11233,7 @@ RemoveInheritance(Relation child_rel, Relation parent_rel)
 			if (copy_att->attinhcount == 0)
 				copy_att->attislocal = true;
 
-			simple_heap_update(catalogRelation, &copyTuple->t_self, copyTuple);
-			CatalogUpdateIndexes(catalogRelation, copyTuple);
+			CatalogUpdateHeapAndIndexes(catalogRelation, &copyTuple->t_self, copyTuple);
 			heap_freetuple(copyTuple);
 		}
 	}
@@ -11364,8 +11307,7 @@ RemoveInheritance(Relation child_rel, Relation parent_rel)
 			if (copy_con->coninhcount == 0)
 				copy_con->conislocal = true;
 
-			simple_heap_update(catalogRelation, &copyTuple->t_self, copyTuple);
-			CatalogUpdateIndexes(catalogRelation, copyTuple);
+			CatalogUpdateHeapAndIndexes(catalogRelation, &copyTuple->t_self, copyTuple);
 			heap_freetuple(copyTuple);
 		}
 	}
@@ -11565,8 +11507,7 @@ ATExecAddOf(Relation rel, const TypeName *ofTypename, LOCKMODE lockmode)
 	if (!HeapTupleIsValid(classtuple))
 		elog(ERROR, "cache lookup failed for relation %u", relid);
 	((Form_pg_class) GETSTRUCT(classtuple))->reloftype = typeid;
-	simple_heap_update(relationRelation, &classtuple->t_self, classtuple);
-	CatalogUpdateIndexes(relationRelation, classtuple);
+	CatalogUpdateHeapAndIndexes(relationRelation, &classtuple->t_self, classtuple);
 
 	InvokeObjectPostAlterHook(RelationRelationId, relid, 0);
 
@@ -11610,8 +11551,7 @@ ATExecDropOf(Relation rel, LOCKMODE lockmode)
 	if (!HeapTupleIsValid(tuple))
 		elog(ERROR, "cache lookup failed for relation %u", relid);
 	((Form_pg_class) GETSTRUCT(tuple))->reloftype = InvalidOid;
-	simple_heap_update(relationRelation, &tuple->t_self, tuple);
-	CatalogUpdateIndexes(relationRelation, tuple);
+	CatalogUpdateHeapAndIndexes(relationRelation, &tuple->t_self, tuple);
 
 	InvokeObjectPostAlterHook(RelationRelationId, relid, 0);
 
@@ -11651,8 +11591,7 @@ relation_mark_replica_identity(Relation rel, char ri_type, Oid indexOid,
 	if (pg_class_form->relreplident != ri_type)
 	{
 		pg_class_form->relreplident = ri_type;
-		simple_heap_update(pg_class, &pg_class_tuple->t_self, pg_class_tuple);
-		CatalogUpdateIndexes(pg_class, pg_class_tuple);
+		CatalogUpdateHeapAndIndexes(pg_class, &pg_class_tuple->t_self, pg_class_tuple);
 	}
 	heap_close(pg_class, RowExclusiveLock);
 	heap_freetuple(pg_class_tuple);
@@ -11711,8 +11650,7 @@ relation_mark_replica_identity(Relation rel, char ri_type, Oid indexOid,
 
 		if (dirty)
 		{
-			simple_heap_update(pg_index, &pg_index_tuple->t_self, pg_index_tuple);
-			CatalogUpdateIndexes(pg_index, pg_index_tuple);
+			CatalogUpdateHeapAndIndexes(pg_index, &pg_index_tuple->t_self, pg_index_tuple);
 			InvokeObjectPostAlterHookArg(IndexRelationId, thisIndexOid, 0,
 										 InvalidOid, is_internal);
 		}
@@ -11861,10 +11799,7 @@ ATExecEnableRowSecurity(Relation rel)
 		elog(ERROR, "cache lookup failed for relation %u", relid);
 
 	((Form_pg_class) GETSTRUCT(tuple))->relrowsecurity = true;
-	simple_heap_update(pg_class, &tuple->t_self, tuple);
-
-	/* keep catalog indexes current */
-	CatalogUpdateIndexes(pg_class, tuple);
+	CatalogUpdateHeapAndIndexes(pg_class, &tuple->t_self, tuple);
 
 	heap_close(pg_class, RowExclusiveLock);
 	heap_freetuple(tuple);
@@ -11888,10 +11823,7 @@ ATExecDisableRowSecurity(Relation rel)
 		elog(ERROR, "cache lookup failed for relation %u", relid);
 
 	((Form_pg_class) GETSTRUCT(tuple))->relrowsecurity = false;
-	simple_heap_update(pg_class, &tuple->t_self, tuple);
-
-	/* keep catalog indexes current */
-	CatalogUpdateIndexes(pg_class, tuple);
+	CatalogUpdateHeapAndIndexes(pg_class, &tuple->t_self, tuple);
 
 	heap_close(pg_class, RowExclusiveLock);
 	heap_freetuple(tuple);
@@ -11917,10 +11849,7 @@ ATExecForceNoForceRowSecurity(Relation rel, bool force_rls)
 		elog(ERROR, "cache lookup failed for relation %u", relid);
 
 	((Form_pg_class) GETSTRUCT(tuple))->relforcerowsecurity = force_rls;
-	simple_heap_update(pg_class, &tuple->t_self, tuple);
-
-	/* keep catalog indexes current */
-	CatalogUpdateIndexes(pg_class, tuple);
+	CatalogUpdateHeapAndIndexes(pg_class, &tuple->t_self, tuple);
 
 	heap_close(pg_class, RowExclusiveLock);
 	heap_freetuple(tuple);
@@ -11988,8 +11917,7 @@ ATExecGenericOptions(Relation rel, List *options)
 	tuple = heap_modify_tuple(tuple, RelationGetDescr(ftrel),
 							  repl_val, repl_null, repl_repl);
 
-	simple_heap_update(ftrel, &tuple->t_self, tuple);
-	CatalogUpdateIndexes(ftrel, tuple);
+	CatalogUpdateHeapAndIndexes(ftrel, &tuple->t_self, tuple);
 
 	/*
 	 * Invalidate relcache so that all sessions will refresh any cached plans
@@ -12284,8 +12212,7 @@ AlterRelationNamespaceInternal(Relation classRel, Oid relOid,
 		/* classTup is a copy, so OK to scribble on */
 		classForm->relnamespace = newNspOid;
 
-		simple_heap_update(classRel, &classTup->t_self, classTup);
-		CatalogUpdateIndexes(classRel, classTup);
+		CatalogUpdateHeapAndIndexes(classRel, &classTup->t_self, classTup);
 
 		/* Update dependency on schema if caller said so */
 		if (hasDependEntry &&
@@ -13520,8 +13447,7 @@ ATExecDetachPartition(Relation rel, RangeVar *name)
 								 new_val, new_null, new_repl);
 
 	((Form_pg_class) GETSTRUCT(newtuple))->relispartition = false;
-	simple_heap_update(classRel, &newtuple->t_self, newtuple);
-	CatalogUpdateIndexes(classRel, newtuple);
+	CatalogUpdateHeapAndIndexes(classRel, &newtuple->t_self, newtuple);
 	heap_freetuple(newtuple);
 	heap_close(classRel, RowExclusiveLock);
 
diff --git a/src/backend/commands/tablespace.c b/src/backend/commands/tablespace.c
index 651e1b3..f3c7436 100644
--- a/src/backend/commands/tablespace.c
+++ b/src/backend/commands/tablespace.c
@@ -344,9 +344,7 @@ CreateTableSpace(CreateTableSpaceStmt *stmt)
 
 	tuple = heap_form_tuple(rel->rd_att, values, nulls);
 
-	tablespaceoid = simple_heap_insert(rel, tuple);
-
-	CatalogUpdateIndexes(rel, tuple);
+	tablespaceoid = CatalogInsertHeapAndIndexes(rel, tuple);
 
 	heap_freetuple(tuple);
 
@@ -971,8 +969,7 @@ RenameTableSpace(const char *oldname, const char *newname)
 	/* OK, update the entry */
 	namestrcpy(&(newform->spcname), newname);
 
-	simple_heap_update(rel, &newtuple->t_self, newtuple);
-	CatalogUpdateIndexes(rel, newtuple);
+	CatalogUpdateHeapAndIndexes(rel, &newtuple->t_self, newtuple);
 
 	InvokeObjectPostAlterHook(TableSpaceRelationId, tspId, 0);
 
@@ -1044,8 +1041,7 @@ AlterTableSpaceOptions(AlterTableSpaceOptionsStmt *stmt)
 								 repl_null, repl_repl);
 
 	/* Update system catalog. */
-	simple_heap_update(rel, &newtuple->t_self, newtuple);
-	CatalogUpdateIndexes(rel, newtuple);
+	CatalogUpdateHeapAndIndexes(rel, &newtuple->t_self, newtuple);
 
 	InvokeObjectPostAlterHook(TableSpaceRelationId, HeapTupleGetOid(tup), 0);
 
diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c
index f067d0a..1cc67ef 100644
--- a/src/backend/commands/trigger.c
+++ b/src/backend/commands/trigger.c
@@ -773,9 +773,7 @@ CreateTrigger(CreateTrigStmt *stmt, const char *queryString,
 	/*
 	 * Insert tuple into pg_trigger.
 	 */
-	simple_heap_insert(tgrel, tuple);
-
-	CatalogUpdateIndexes(tgrel, tuple);
+	CatalogInsertHeapAndIndexes(tgrel, tuple);
 
 	heap_freetuple(tuple);
 	heap_close(tgrel, RowExclusiveLock);
@@ -802,9 +800,7 @@ CreateTrigger(CreateTrigStmt *stmt, const char *queryString,
 
 	((Form_pg_class) GETSTRUCT(tuple))->relhastriggers = true;
 
-	simple_heap_update(pgrel, &tuple->t_self, tuple);
-
-	CatalogUpdateIndexes(pgrel, tuple);
+	CatalogUpdateHeapAndIndexes(pgrel, &tuple->t_self, tuple);
 
 	heap_freetuple(tuple);
 	heap_close(pgrel, RowExclusiveLock);
@@ -1444,10 +1440,7 @@ renametrig(RenameStmt *stmt)
 		namestrcpy(&((Form_pg_trigger) GETSTRUCT(tuple))->tgname,
 				   stmt->newname);
 
-		simple_heap_update(tgrel, &tuple->t_self, tuple);
-
-		/* keep system catalog indexes current */
-		CatalogUpdateIndexes(tgrel, tuple);
+		CatalogUpdateHeapAndIndexes(tgrel, &tuple->t_self, tuple);
 
 		InvokeObjectPostAlterHook(TriggerRelationId,
 								  HeapTupleGetOid(tuple), 0);
@@ -1560,10 +1553,7 @@ EnableDisableTrigger(Relation rel, const char *tgname,
 
 			newtrig->tgenabled = fires_when;
 
-			simple_heap_update(tgrel, &newtup->t_self, newtup);
-
-			/* Keep catalog indexes current */
-			CatalogUpdateIndexes(tgrel, newtup);
+			CatalogUpdateHeapAndIndexes(tgrel, &newtup->t_self, newtup);
 
 			heap_freetuple(newtup);
 
diff --git a/src/backend/commands/tsearchcmds.c b/src/backend/commands/tsearchcmds.c
index 479a160..b9929a5 100644
--- a/src/backend/commands/tsearchcmds.c
+++ b/src/backend/commands/tsearchcmds.c
@@ -271,9 +271,7 @@ DefineTSParser(List *names, List *parameters)
 
 	tup = heap_form_tuple(prsRel->rd_att, values, nulls);
 
-	prsOid = simple_heap_insert(prsRel, tup);
-
-	CatalogUpdateIndexes(prsRel, tup);
+	prsOid = CatalogInsertHeapAndIndexes(prsRel, tup);
 
 	address = makeParserDependencies(tup);
 
@@ -482,9 +480,7 @@ DefineTSDictionary(List *names, List *parameters)
 
 	tup = heap_form_tuple(dictRel->rd_att, values, nulls);
 
-	dictOid = simple_heap_insert(dictRel, tup);
-
-	CatalogUpdateIndexes(dictRel, tup);
+	dictOid = CatalogInsertHeapAndIndexes(dictRel, tup);
 
 	address = makeDictionaryDependencies(tup);
 
@@ -620,9 +616,7 @@ AlterTSDictionary(AlterTSDictionaryStmt *stmt)
 	newtup = heap_modify_tuple(tup, RelationGetDescr(rel),
 							   repl_val, repl_null, repl_repl);
 
-	simple_heap_update(rel, &newtup->t_self, newtup);
-
-	CatalogUpdateIndexes(rel, newtup);
+	CatalogUpdateHeapAndIndexes(rel, &newtup->t_self, newtup);
 
 	InvokeObjectPostAlterHook(TSDictionaryRelationId, dictId, 0);
 
@@ -806,9 +800,7 @@ DefineTSTemplate(List *names, List *parameters)
 
 	tup = heap_form_tuple(tmplRel->rd_att, values, nulls);
 
-	tmplOid = simple_heap_insert(tmplRel, tup);
-
-	CatalogUpdateIndexes(tmplRel, tup);
+	tmplOid = CatalogInsertHeapAndIndexes(tmplRel, tup);
 
 	address = makeTSTemplateDependencies(tup);
 
@@ -1066,9 +1058,7 @@ DefineTSConfiguration(List *names, List *parameters, ObjectAddress *copied)
 
 	tup = heap_form_tuple(cfgRel->rd_att, values, nulls);
 
-	cfgOid = simple_heap_insert(cfgRel, tup);
-
-	CatalogUpdateIndexes(cfgRel, tup);
+	cfgOid = CatalogInsertHeapAndIndexes(cfgRel, tup);
 
 	if (OidIsValid(sourceOid))
 	{
@@ -1106,9 +1096,7 @@ DefineTSConfiguration(List *names, List *parameters, ObjectAddress *copied)
 
 			newmaptup = heap_form_tuple(mapRel->rd_att, mapvalues, mapnulls);
 
-			simple_heap_insert(mapRel, newmaptup);
-
-			CatalogUpdateIndexes(mapRel, newmaptup);
+			CatalogInsertHeapAndIndexes(mapRel, newmaptup);
 
 			heap_freetuple(newmaptup);
 		}
@@ -1409,9 +1397,7 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 				newtup = heap_modify_tuple(maptup,
 										   RelationGetDescr(relMap),
 										   repl_val, repl_null, repl_repl);
-				simple_heap_update(relMap, &newtup->t_self, newtup);
-
-				CatalogUpdateIndexes(relMap, newtup);
+				CatalogUpdateHeapAndIndexes(relMap, &newtup->t_self, newtup);
 			}
 		}
 
@@ -1436,8 +1422,7 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 				values[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictIds[j]);
 
 				tup = heap_form_tuple(relMap->rd_att, values, nulls);
-				simple_heap_insert(relMap, tup);
-				CatalogUpdateIndexes(relMap, tup);
+				CatalogInsertHeapAndIndexes(relMap, tup);
 
 				heap_freetuple(tup);
 			}
diff --git a/src/backend/commands/typecmds.c b/src/backend/commands/typecmds.c
index 4c33d55..68e93fc 100644
--- a/src/backend/commands/typecmds.c
+++ b/src/backend/commands/typecmds.c
@@ -2221,9 +2221,7 @@ AlterDomainDefault(List *names, Node *defaultRaw)
 								 new_record, new_record_nulls,
 								 new_record_repl);
 
-	simple_heap_update(rel, &tup->t_self, newtuple);
-
-	CatalogUpdateIndexes(rel, newtuple);
+	CatalogUpdateHeapAndIndexes(rel, &tup->t_self, newtuple);
 
 	/* Rebuild dependencies */
 	GenerateTypeDependencies(typTup->typnamespace,
@@ -2360,9 +2358,7 @@ AlterDomainNotNull(List *names, bool notNull)
 	 */
 	typTup->typnotnull = notNull;
 
-	simple_heap_update(typrel, &tup->t_self, tup);
-
-	CatalogUpdateIndexes(typrel, tup);
+	CatalogUpdateHeapAndIndexes(typrel, &tup->t_self, tup);
 
 	InvokeObjectPostAlterHook(TypeRelationId, domainoid, 0);
 
@@ -2662,8 +2658,7 @@ AlterDomainValidateConstraint(List *names, char *constrName)
 	copyTuple = heap_copytuple(tuple);
 	copy_con = (Form_pg_constraint) GETSTRUCT(copyTuple);
 	copy_con->convalidated = true;
-	simple_heap_update(conrel, &copyTuple->t_self, copyTuple);
-	CatalogUpdateIndexes(conrel, copyTuple);
+	CatalogUpdateHeapAndIndexes(conrel, &copyTuple->t_self, copyTuple);
 
 	InvokeObjectPostAlterHook(ConstraintRelationId,
 							  HeapTupleGetOid(copyTuple), 0);
@@ -3404,9 +3399,7 @@ AlterTypeOwnerInternal(Oid typeOid, Oid newOwnerId)
 	tup = heap_modify_tuple(tup, RelationGetDescr(rel), repl_val, repl_null,
 							repl_repl);
 
-	simple_heap_update(rel, &tup->t_self, tup);
-
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateHeapAndIndexes(rel, &tup->t_self, tup);
 
 	/* If it has an array type, update that too */
 	if (OidIsValid(typTup->typarray))
@@ -3566,8 +3559,7 @@ AlterTypeNamespaceInternal(Oid typeOid, Oid nspOid,
 		/* tup is a copy, so we can scribble directly on it */
 		typform->typnamespace = nspOid;
 
-		simple_heap_update(rel, &tup->t_self, tup);
-		CatalogUpdateIndexes(rel, tup);
+		CatalogUpdateHeapAndIndexes(rel, &tup->t_self, tup);
 	}
 
 	/*
diff --git a/src/backend/commands/user.c b/src/backend/commands/user.c
index b746982..46e3a66 100644
--- a/src/backend/commands/user.c
+++ b/src/backend/commands/user.c
@@ -433,8 +433,7 @@ CreateRole(ParseState *pstate, CreateRoleStmt *stmt)
 	/*
 	 * Insert new record in the pg_authid table
 	 */
-	roleid = simple_heap_insert(pg_authid_rel, tuple);
-	CatalogUpdateIndexes(pg_authid_rel, tuple);
+	roleid = CatalogInsertHeapAndIndexes(pg_authid_rel, tuple);
 
 	/*
 	 * Advance command counter so we can see new record; else tests in
@@ -838,10 +837,7 @@ AlterRole(AlterRoleStmt *stmt)
 
 	new_tuple = heap_modify_tuple(tuple, pg_authid_dsc, new_record,
 								  new_record_nulls, new_record_repl);
-	simple_heap_update(pg_authid_rel, &tuple->t_self, new_tuple);
-
-	/* Update indexes */
-	CatalogUpdateIndexes(pg_authid_rel, new_tuple);
+	CatalogUpdateHeapAndIndexes(pg_authid_rel, &tuple->t_self, new_tuple);
 
 	InvokeObjectPostAlterHook(AuthIdRelationId, roleid, 0);
 
@@ -1243,9 +1239,7 @@ RenameRole(const char *oldname, const char *newname)
 	}
 
 	newtuple = heap_modify_tuple(oldtuple, dsc, repl_val, repl_null, repl_repl);
-	simple_heap_update(rel, &oldtuple->t_self, newtuple);
-
-	CatalogUpdateIndexes(rel, newtuple);
+	CatalogUpdateHeapAndIndexes(rel, &oldtuple->t_self, newtuple);
 
 	InvokeObjectPostAlterHook(AuthIdRelationId, roleid, 0);
 
@@ -1530,16 +1524,14 @@ AddRoleMems(const char *rolename, Oid roleid,
 			tuple = heap_modify_tuple(authmem_tuple, pg_authmem_dsc,
 									  new_record,
 									  new_record_nulls, new_record_repl);
-			simple_heap_update(pg_authmem_rel, &tuple->t_self, tuple);
-			CatalogUpdateIndexes(pg_authmem_rel, tuple);
+			CatalogUpdateHeapAndIndexes(pg_authmem_rel, &tuple->t_self, tuple);
 			ReleaseSysCache(authmem_tuple);
 		}
 		else
 		{
 			tuple = heap_form_tuple(pg_authmem_dsc,
 									new_record, new_record_nulls);
-			simple_heap_insert(pg_authmem_rel, tuple);
-			CatalogUpdateIndexes(pg_authmem_rel, tuple);
+			CatalogInsertHeapAndIndexes(pg_authmem_rel, tuple);
 		}
 
 		/* CCI after each change, in case there are duplicates in list */
@@ -1647,8 +1639,7 @@ DelRoleMems(const char *rolename, Oid roleid,
 			tuple = heap_modify_tuple(authmem_tuple, pg_authmem_dsc,
 									  new_record,
 									  new_record_nulls, new_record_repl);
-			simple_heap_update(pg_authmem_rel, &tuple->t_self, tuple);
-			CatalogUpdateIndexes(pg_authmem_rel, tuple);
+			CatalogUpdateHeapAndIndexes(pg_authmem_rel, &tuple->t_self, tuple);
 		}
 
 		ReleaseSysCache(authmem_tuple);
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 005440e..1388be1 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -1032,6 +1032,19 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 							break;
 						}
 
+						/*
+						 * If this tuple was ever WARM updated or is a WARM
+						 * tuple, there could be multiple index entries
+						 * pointing to the root of this chain. We can't do
+						 * index-only scans for such tuples without rechecking
+						 * the index keys, so mark the page as !all_visible.
+						 */
+						if (HeapTupleHeaderIsHeapWarmTuple(tuple.t_data))
+						{
+							all_visible = false;
+							break;
+						}
+
 						/* Track newest xmin on page. */
 						if (TransactionIdFollows(xmin, visibility_cutoff_xid))
 							visibility_cutoff_xid = xmin;
@@ -2158,6 +2171,18 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
 						break;
 					}
 
+					/*
+					 * If this tuple was ever WARM updated or is a WARM tuple,
+					 * there could be multiple index entries pointing to the
+					 * root of this chain. We can't do index-only scans for
+					 * such tuples without rechecking the index keys, so mark
+					 * the page as !all_visible.
+					 */
+					if (HeapTupleHeaderIsHeapWarmTuple(tuple.t_data))
+					{
+						all_visible = false;
+					}
+
 					/* Track newest xmin on page. */
 					if (TransactionIdFollows(xmin, *visibility_cutoff_xid))
 						*visibility_cutoff_xid = xmin;
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 9920f48..94cf92f 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -270,6 +270,8 @@ ExecCloseIndices(ResultRelInfo *resultRelInfo)
 List *
 ExecInsertIndexTuples(TupleTableSlot *slot,
 					  ItemPointer tupleid,
+					  ItemPointer root_tid,
+					  Bitmapset *modified_attrs,
 					  EState *estate,
 					  bool noDupErr,
 					  bool *specConflict,
@@ -324,6 +326,17 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 		if (!indexInfo->ii_ReadyForInserts)
 			continue;
 
+		/*
+		 * If modified_attrs is set, we only insert index entries for those
+		 * indexes whose indexed columns have changed. All other indexes can
+		 * use their existing index pointers to look up the new tuple.
+		 */
+		if (modified_attrs)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
 		/* Check for partial index */
 		if (indexInfo->ii_Predicate != NIL)
 		{
@@ -389,7 +402,7 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 			index_insert(indexRelation, /* index relation */
 						 values,	/* array of index Datums */
 						 isnull,	/* null flags */
-						 tupleid,		/* tid of heap tuple */
+						 root_tid,		/* tid of heap or root tuple */
 						 heapRelation,	/* heap relation */
 						 checkUnique);	/* type of uniqueness check to do */
 
@@ -790,6 +803,9 @@ retry:
 		{
 			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
 				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
+			else
+				ItemPointerCopy(&tup->t_self, &ctid_wait);
+
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index a8bd583..b6c115d 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -399,6 +399,8 @@ ExecSimpleRelationInsert(EState *estate, TupleTableSlot *slot)
 
 		if (resultRelInfo->ri_NumIndices > 0)
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &(tuple->t_self),
+												   NULL,
 												   estate, false, NULL,
 												   NIL);
 
@@ -445,6 +447,8 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 	if (!skip_tuple)
 	{
 		List	   *recheckIndexes = NIL;
+		bool		warm_update;
+		Bitmapset  *modified_attrs;
 
 		/* Check the constraints of the tuple */
 		if (rel->rd_att->constr)
@@ -455,13 +459,30 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 
 		/* OK, update the tuple and index entries for it */
 		simple_heap_update(rel, &searchslot->tts_tuple->t_self,
-						   slot->tts_tuple);
+						   slot->tts_tuple, &modified_attrs, &warm_update);
 
 		if (resultRelInfo->ri_NumIndices > 0 &&
-			!HeapTupleIsHeapOnly(slot->tts_tuple))
+			(!HeapTupleIsHeapOnly(slot->tts_tuple) || warm_update))
+		{
+			ItemPointerData root_tid;
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self,
+						&root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
+
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL,
 												   NIL);
+		}
 
 		/* AFTER ROW UPDATE Triggers */
 		ExecARUpdateTriggers(estate, resultRelInfo,
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index f18827d..f81d290 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -37,6 +37,7 @@
 
 #include "access/relscan.h"
 #include "access/transam.h"
+#include "access/valid.h"
 #include "executor/execdebug.h"
 #include "executor/nodeBitmapHeapscan.h"
 #include "pgstat.h"
@@ -362,11 +363,27 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 			OffsetNumber offnum = tbmres->offsets[curslot];
 			ItemPointerData tid;
 			HeapTupleData heapTuple;
+			bool recheck = false;
 
 			ItemPointerSet(&tid, page, offnum);
 			if (heap_hot_search_buffer(&tid, scan->rs_rd, buffer, snapshot,
-									   &heapTuple, NULL, true))
-				scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+									   &heapTuple, NULL, true, &recheck))
+			{
+				bool valid = true;
+
+				if (scan->rs_key)
+					HeapKeyTest(&heapTuple, RelationGetDescr(scan->rs_rd),
+							scan->rs_nkeys, scan->rs_key, valid);
+				if (valid)
+					scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+
+				/*
+				 * If the heap tuple needs a recheck because of a WARM
+				 * update, treat it as a lossy case.
+				 */
+				if (recheck)
+					tbmres->recheck = true;
+			}
 		}
 	}
 	else
diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c
index 5734550..c7be366 100644
--- a/src/backend/executor/nodeIndexscan.c
+++ b/src/backend/executor/nodeIndexscan.c
@@ -115,10 +115,10 @@ IndexNext(IndexScanState *node)
 					   false);	/* don't pfree */
 
 		/*
-		 * If the index was lossy, we have to recheck the index quals using
-		 * the fetched tuple.
+		 * If the index was lossy or the tuple was WARM, we have to recheck
+		 * the index quals using the fetched tuple.
 		 */
-		if (scandesc->xs_recheck)
+		if (scandesc->xs_recheck || scandesc->xs_tuple_recheck)
 		{
 			econtext->ecxt_scantuple = slot;
 			ResetExprContext(econtext);
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index 95e1589..a1f3440 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -512,6 +512,7 @@ ExecInsert(ModifyTableState *mtstate,
 
 			/* insert index entries for tuple */
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												 &(tuple->t_self), NULL,
 												 estate, true, &specConflict,
 												   arbiterIndexes);
 
@@ -558,6 +559,7 @@ ExecInsert(ModifyTableState *mtstate,
 			/* insert index entries for tuple */
 			if (resultRelInfo->ri_NumIndices > 0)
 				recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+													   &(tuple->t_self), NULL,
 													   estate, false, NULL,
 													   arbiterIndexes);
 		}
@@ -891,6 +893,9 @@ ExecUpdate(ItemPointer tupleid,
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
 	List	   *recheckIndexes = NIL;
+	Bitmapset  *modified_attrs = NULL;
+	ItemPointerData	root_tid;
+	bool		warm_update;
 
 	/*
 	 * abort the operation if not running transactions
@@ -1007,7 +1012,7 @@ lreplace:;
 							 estate->es_output_cid,
 							 estate->es_crosscheck_snapshot,
 							 true /* wait for commit */ ,
-							 &hufd, &lockmode);
+							 &hufd, &lockmode, &modified_attrs, &warm_update);
 		switch (result)
 		{
 			case HeapTupleSelfUpdated:
@@ -1094,10 +1099,28 @@ lreplace:;
 		 * the t_self field.
 		 *
 		 * If it's a HOT update, we mustn't insert new index entries.
+		 *
+		 * If it's a WARM update, we must insert new index entries with the
+		 * TID pointing to the root of the WARM chain.
 		 */
-		if (resultRelInfo->ri_NumIndices > 0 && !HeapTupleIsHeapOnly(tuple))
+		if (resultRelInfo->ri_NumIndices > 0 &&
+			(!HeapTupleIsHeapOnly(tuple) || warm_update))
+		{
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self, &root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL, NIL);
+		}
 	}
 
 	if (canSetTag)
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index 7176cf1..432dd4b 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -1823,7 +1823,7 @@ pgstat_count_heap_insert(Relation rel, int n)
  * pgstat_count_heap_update - count a tuple update
  */
 void
-pgstat_count_heap_update(Relation rel, bool hot)
+pgstat_count_heap_update(Relation rel, bool hot, bool warm)
 {
 	PgStat_TableStatus *pgstat_info = rel->pgstat_info;
 
@@ -1841,6 +1841,8 @@ pgstat_count_heap_update(Relation rel, bool hot)
 		/* t_tuples_hot_updated is nontransactional, so just advance it */
 		if (hot)
 			pgstat_info->t_counts.t_tuples_hot_updated++;
+		else if (warm)
+			pgstat_info->t_counts.t_tuples_warm_updated++;
 	}
 }
 
@@ -4085,6 +4087,7 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->tuples_updated = 0;
 		result->tuples_deleted = 0;
 		result->tuples_hot_updated = 0;
+		result->tuples_warm_updated = 0;
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
 		result->changes_since_analyze = 0;
@@ -5194,6 +5197,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated = tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted = tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated = tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated = tabmsg->t_counts.t_tuples_warm_updated;
 			tabentry->n_live_tuples = tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples = tabmsg->t_counts.t_delta_dead_tuples;
 			tabentry->changes_since_analyze = tabmsg->t_counts.t_changed_tuples;
@@ -5221,6 +5225,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated += tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted += tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated += tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated += tabmsg->t_counts.t_tuples_warm_updated;
 			/* If table was truncated, first reset the live/dead counters */
 			if (tabmsg->t_counts.t_truncated)
 			{
diff --git a/src/backend/replication/logical/origin.c b/src/backend/replication/logical/origin.c
index d7dda6a..7048f73 100644
--- a/src/backend/replication/logical/origin.c
+++ b/src/backend/replication/logical/origin.c
@@ -299,8 +299,7 @@ replorigin_create(char *roname)
 			values[Anum_pg_replication_origin_roname - 1] = roname_d;
 
 			tuple = heap_form_tuple(RelationGetDescr(rel), values, nulls);
-			simple_heap_insert(rel, tuple);
-			CatalogUpdateIndexes(rel, tuple);
+			CatalogInsertHeapAndIndexes(rel, tuple);
 			CommandCounterIncrement();
 			break;
 		}
diff --git a/src/backend/rewrite/rewriteDefine.c b/src/backend/rewrite/rewriteDefine.c
index 481868b..33d73c2 100644
--- a/src/backend/rewrite/rewriteDefine.c
+++ b/src/backend/rewrite/rewriteDefine.c
@@ -124,7 +124,7 @@ InsertRule(char *rulname,
 		tup = heap_modify_tuple(oldtup, RelationGetDescr(pg_rewrite_desc),
 								values, nulls, replaces);
 
-		simple_heap_update(pg_rewrite_desc, &tup->t_self, tup);
+		CatalogUpdateHeapAndIndexes(pg_rewrite_desc, &tup->t_self, tup);
 
 		ReleaseSysCache(oldtup);
 
@@ -135,11 +135,9 @@ InsertRule(char *rulname,
 	{
 		tup = heap_form_tuple(pg_rewrite_desc->rd_att, values, nulls);
 
-		rewriteObjectId = simple_heap_insert(pg_rewrite_desc, tup);
+		rewriteObjectId = CatalogInsertHeapAndIndexes(pg_rewrite_desc, tup);
 	}
 
-	/* Need to update indexes in either case */
-	CatalogUpdateIndexes(pg_rewrite_desc, tup);
 
 	heap_freetuple(tup);
 
@@ -613,8 +611,7 @@ DefineQueryRewrite(char *rulename,
 		classForm->relminmxid = InvalidMultiXactId;
 		classForm->relreplident = REPLICA_IDENTITY_NOTHING;
 
-		simple_heap_update(relationRelation, &classTup->t_self, classTup);
-		CatalogUpdateIndexes(relationRelation, classTup);
+		CatalogUpdateHeapAndIndexes(relationRelation, &classTup->t_self, classTup);
 
 		heap_freetuple(classTup);
 		heap_close(relationRelation, RowExclusiveLock);
@@ -866,10 +863,7 @@ EnableDisableRule(Relation rel, const char *rulename,
 	{
 		((Form_pg_rewrite) GETSTRUCT(ruletup))->ev_enabled =
 			CharGetDatum(fires_when);
-		simple_heap_update(pg_rewrite_desc, &ruletup->t_self, ruletup);
-
-		/* keep system catalog indexes current */
-		CatalogUpdateIndexes(pg_rewrite_desc, ruletup);
+		CatalogUpdateHeapAndIndexes(pg_rewrite_desc, &ruletup->t_self, ruletup);
 
 		changed = true;
 	}
@@ -985,10 +979,7 @@ RenameRewriteRule(RangeVar *relation, const char *oldName,
 	/* OK, do the update */
 	namestrcpy(&(ruleform->rulename), newName);
 
-	simple_heap_update(pg_rewrite_desc, &ruletup->t_self, ruletup);
-
-	/* keep system catalog indexes current */
-	CatalogUpdateIndexes(pg_rewrite_desc, ruletup);
+	CatalogUpdateHeapAndIndexes(pg_rewrite_desc, &ruletup->t_self, ruletup);
 
 	heap_freetuple(ruletup);
 	heap_close(pg_rewrite_desc, RowExclusiveLock);
diff --git a/src/backend/rewrite/rewriteSupport.c b/src/backend/rewrite/rewriteSupport.c
index 0154072..fc76fab 100644
--- a/src/backend/rewrite/rewriteSupport.c
+++ b/src/backend/rewrite/rewriteSupport.c
@@ -72,10 +72,7 @@ SetRelationRuleStatus(Oid relationId, bool relHasRules)
 		/* Do the update */
 		classForm->relhasrules = relHasRules;
 
-		simple_heap_update(relationRelation, &tuple->t_self, tuple);
-
-		/* Keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relationRelation, tuple);
+		CatalogUpdateHeapAndIndexes(relationRelation, &tuple->t_self, tuple);
 	}
 	else
 	{
diff --git a/src/backend/storage/large_object/inv_api.c b/src/backend/storage/large_object/inv_api.c
index 262b0b2..de35e03 100644
--- a/src/backend/storage/large_object/inv_api.c
+++ b/src/backend/storage/large_object/inv_api.c
@@ -678,8 +678,7 @@ inv_write(LargeObjectDesc *obj_desc, const char *buf, int nbytes)
 			replace[Anum_pg_largeobject_data - 1] = true;
 			newtup = heap_modify_tuple(oldtuple, RelationGetDescr(lo_heap_r),
 									   values, nulls, replace);
-			simple_heap_update(lo_heap_r, &newtup->t_self, newtup);
-			CatalogIndexInsert(indstate, newtup);
+			CatalogUpdateHeapAndIndexes(lo_heap_r, &newtup->t_self, newtup);
 			heap_freetuple(newtup);
 
 			/*
@@ -721,8 +720,7 @@ inv_write(LargeObjectDesc *obj_desc, const char *buf, int nbytes)
 			values[Anum_pg_largeobject_pageno - 1] = Int32GetDatum(pageno);
 			values[Anum_pg_largeobject_data - 1] = PointerGetDatum(&workbuf);
 			newtup = heap_form_tuple(lo_heap_r->rd_att, values, nulls);
-			simple_heap_insert(lo_heap_r, newtup);
-			CatalogIndexInsert(indstate, newtup);
+			CatalogInsertHeapAndIndexes(lo_heap_r, newtup);
 			heap_freetuple(newtup);
 		}
 		pageno++;
@@ -850,8 +848,7 @@ inv_truncate(LargeObjectDesc *obj_desc, int64 len)
 		replace[Anum_pg_largeobject_data - 1] = true;
 		newtup = heap_modify_tuple(oldtuple, RelationGetDescr(lo_heap_r),
 								   values, nulls, replace);
-		simple_heap_update(lo_heap_r, &newtup->t_self, newtup);
-		CatalogIndexInsert(indstate, newtup);
+		CatalogUpdateHeapAndIndexes(lo_heap_r, &newtup->t_self, newtup);
 		heap_freetuple(newtup);
 	}
 	else
@@ -888,8 +885,7 @@ inv_truncate(LargeObjectDesc *obj_desc, int64 len)
 		values[Anum_pg_largeobject_pageno - 1] = Int32GetDatum(pageno);
 		values[Anum_pg_largeobject_data - 1] = PointerGetDatum(&workbuf);
 		newtup = heap_form_tuple(lo_heap_r->rd_att, values, nulls);
-		simple_heap_insert(lo_heap_r, newtup);
-		CatalogIndexInsert(indstate, newtup);
+		CatalogInsertHeapAndIndexes(lo_heap_r, newtup);
 		heap_freetuple(newtup);
 	}
 
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index a987d0d..b8677f3 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -145,6 +145,22 @@ pg_stat_get_tuples_hot_updated(PG_FUNCTION_ARGS)
 
 
 Datum
+pg_stat_get_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_StatTabEntry *tabentry;
+
+	if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+
+Datum
 pg_stat_get_live_tuples(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
@@ -1644,6 +1660,21 @@ pg_stat_get_xact_tuples_hot_updated(PG_FUNCTION_ARGS)
 }
 
 Datum
+pg_stat_get_xact_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_TableStatus *tabentry;
+
+	if ((tabentry = find_tabstat_entry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->t_counts.t_tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+Datum
 pg_stat_get_xact_blocks_fetched(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 26ff7e1..43781fb 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -2338,6 +2338,7 @@ RelationDestroyRelation(Relation relation, bool remember_tupdesc)
 	list_free_deep(relation->rd_fkeylist);
 	list_free(relation->rd_indexlist);
 	bms_free(relation->rd_indexattr);
+	bms_free(relation->rd_exprindexattr);
 	bms_free(relation->rd_keyattr);
 	bms_free(relation->rd_pkattr);
 	bms_free(relation->rd_idattr);
@@ -3484,8 +3485,7 @@ RelationSetNewRelfilenode(Relation relation, char persistence,
 	classform->relminmxid = minmulti;
 	classform->relpersistence = persistence;
 
-	simple_heap_update(pg_class, &tuple->t_self, tuple);
-	CatalogUpdateIndexes(pg_class, tuple);
+	CatalogUpdateHeapAndIndexes(pg_class, &tuple->t_self, tuple);
 
 	heap_freetuple(tuple);
 
@@ -4352,6 +4352,13 @@ RelationGetIndexList(Relation relation)
 		return list_copy(relation->rd_indexlist);
 
 	/*
+	 * If the index list was invalidated, we had better also invalidate the
+	 * index attribute list (which should automatically invalidate other
+	 * attribute bitmaps such as the primary key and replica identity)
+	 */
+	relation->rd_indexattr = NULL;
+
+	/*
 	 * We build the list we intend to return (in the caller's context) while
 	 * doing the scan.  After successfully completing the scan, we copy that
 	 * list into the relcache entry.  This avoids cache-context memory leakage
@@ -4757,14 +4764,18 @@ Bitmapset *
 RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 {
 	Bitmapset  *indexattrs;		/* indexed columns */
+	Bitmapset  *exprindexattrs;	/* indexed columns in expression/predicate
+									 indexes */
 	Bitmapset  *uindexattrs;	/* columns in unique indexes */
 	Bitmapset  *pkindexattrs;	/* columns in the primary index */
 	Bitmapset  *idindexattrs;	/* columns in the replica identity */
+	Bitmapset  *indxnotreadyattrs;	/* columns in not ready indexes */
 	List	   *indexoidlist;
 	Oid			relpkindex;
 	Oid			relreplindex;
 	ListCell   *l;
 	MemoryContext oldcxt;
+	bool		supportswarm = true;	/* true if the table can be WARM updated */
 
 	/* Quick exit if we already computed the result. */
 	if (relation->rd_indexattr != NULL)
@@ -4779,6 +4790,10 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				return bms_copy(relation->rd_pkattr);
 			case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 				return bms_copy(relation->rd_idattr);
+			case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+				return bms_copy(relation->rd_exprindexattr);
+			case INDEX_ATTR_BITMAP_NOTREADY:
+				return bms_copy(relation->rd_indxnotreadyattr);
 			default:
 				elog(ERROR, "unknown attrKind %u", attrKind);
 		}
@@ -4819,9 +4834,11 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 	 * won't be returned at all by RelationGetIndexList.
 	 */
 	indexattrs = NULL;
+	exprindexattrs = NULL;
 	uindexattrs = NULL;
 	pkindexattrs = NULL;
 	idindexattrs = NULL;
+	indxnotreadyattrs = NULL;
 	foreach(l, indexoidlist)
 	{
 		Oid			indexOid = lfirst_oid(l);
@@ -4858,6 +4875,10 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				indexattrs = bms_add_member(indexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
 
+				if (!indexInfo->ii_ReadyForInserts)
+					indxnotreadyattrs = bms_add_member(indxnotreadyattrs,
+							   attrnum - FirstLowInvalidHeapAttributeNumber);
+
 				if (isKey)
 					uindexattrs = bms_add_member(uindexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
@@ -4873,25 +4894,51 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 		}
 
 		/* Collect all attributes used in expressions, too */
-		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &exprindexattrs);
 
 		/* Collect all attributes in the index predicate, too */
-		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &exprindexattrs);
+
+		/*
+		 * indexattrs should include attributes referenced in index expressions
+		 * and predicates too
+		 */
+		indexattrs = bms_add_members(indexattrs, exprindexattrs);
+
+		if (!indexInfo->ii_ReadyForInserts)
+			indxnotreadyattrs = bms_add_members(indxnotreadyattrs,
+					exprindexattrs);
+
+		/*
+		 * Check whether the index defines the amrecheck method. If it does
+		 * not, the index cannot support WARM, so completely disable WARM
+		 * updates on such tables.
+		 */
+		if (!indexDesc->rd_amroutine->amrecheck)
+			supportswarm = false;
+
 
 		index_close(indexDesc, AccessShareLock);
 	}
 
 	list_free(indexoidlist);
 
+	/* Remember if the table can do WARM updates */
+	relation->rd_supportswarm = supportswarm;
+
 	/* Don't leak the old values of these bitmaps, if any */
 	bms_free(relation->rd_indexattr);
 	relation->rd_indexattr = NULL;
+	bms_free(relation->rd_exprindexattr);
+	relation->rd_exprindexattr = NULL;
 	bms_free(relation->rd_keyattr);
 	relation->rd_keyattr = NULL;
 	bms_free(relation->rd_pkattr);
 	relation->rd_pkattr = NULL;
 	bms_free(relation->rd_idattr);
 	relation->rd_idattr = NULL;
+	bms_free(relation->rd_indxnotreadyattr);
+	relation->rd_indxnotreadyattr = NULL;
 
 	/*
 	 * Now save copies of the bitmaps in the relcache entry.  We intentionally
@@ -4904,7 +4951,9 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 	relation->rd_keyattr = bms_copy(uindexattrs);
 	relation->rd_pkattr = bms_copy(pkindexattrs);
 	relation->rd_idattr = bms_copy(idindexattrs);
-	relation->rd_indexattr = bms_copy(indexattrs);
+	relation->rd_exprindexattr = bms_copy(exprindexattrs);
+	relation->rd_indexattr = bms_copy(bms_union(indexattrs, exprindexattrs));
+	relation->rd_indxnotreadyattr = bms_copy(indxnotreadyattrs);
 	MemoryContextSwitchTo(oldcxt);
 
 	/* We return our original working copy for caller to play with */
@@ -4918,6 +4967,10 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 			return bms_copy(relation->rd_pkattr);
 		case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 			return idindexattrs;
+		case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+			return exprindexattrs;
+		case INDEX_ATTR_BITMAP_NOTREADY:
+			return indxnotreadyattrs;
 		default:
 			elog(ERROR, "unknown attrKind %u", attrKind);
 			return NULL;
@@ -5530,6 +5583,7 @@ load_relcache_init_file(bool shared)
 		rel->rd_keyattr = NULL;
 		rel->rd_pkattr = NULL;
 		rel->rd_idattr = NULL;
+		rel->rd_indxnotreadyattr = NULL;
 		rel->rd_pubactions = NULL;
 		rel->rd_createSubid = InvalidSubTransactionId;
 		rel->rd_newRelfilenodeSubid = InvalidSubTransactionId;
diff --git a/src/include/access/amapi.h b/src/include/access/amapi.h
index e91e41d..34430a9 100644
--- a/src/include/access/amapi.h
+++ b/src/include/access/amapi.h
@@ -13,6 +13,7 @@
 #define AMAPI_H
 
 #include "access/genam.h"
+#include "access/itup.h"
 
 /*
  * We don't wish to include planner header files here, since most of an index
@@ -150,6 +151,10 @@ typedef void (*aminitparallelscan_function) (void *target);
 /* (re)start parallel index scan */
 typedef void (*amparallelrescan_function) (IndexScanDesc scan);
 
+/* recheck index tuple and heap tuple match */
+typedef bool (*amrecheck_function) (Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+
 /*
  * API struct for an index AM.  Note this must be stored in a single palloc'd
  * chunk of memory.
@@ -213,6 +218,9 @@ typedef struct IndexAmRoutine
 	amestimateparallelscan_function amestimateparallelscan;		/* can be NULL */
 	aminitparallelscan_function aminitparallelscan;		/* can be NULL */
 	amparallelrescan_function amparallelrescan; /* can be NULL */
+
+	/* interface function to support WARM */
+	amrecheck_function amrecheck;		/* can be NULL */
 } IndexAmRoutine;
 
 
diff --git a/src/include/access/hash.h b/src/include/access/hash.h
index 69a3873..3e14023 100644
--- a/src/include/access/hash.h
+++ b/src/include/access/hash.h
@@ -364,4 +364,8 @@ extern void hashbucketcleanup(Relation rel, Bucket cur_bucket,
 				  bool bucket_has_garbage,
 				  IndexBulkDeleteCallback callback, void *callback_state);
 
+/* hash.c */
+extern bool hashrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+
 #endif   /* HASH_H */
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 95aa976..9412c3a 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -137,9 +137,10 @@ extern bool heap_fetch(Relation relation, Snapshot snapshot,
 		   Relation stats_relation);
 extern bool heap_hot_search_buffer(ItemPointer tid, Relation relation,
 					   Buffer buffer, Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call);
+					   bool *all_dead, bool first_call, bool *recheck);
 extern bool heap_hot_search(ItemPointer tid, Relation relation,
-				Snapshot snapshot, bool *all_dead);
+				Snapshot snapshot, bool *all_dead,
+				bool *recheck, Buffer *buffer, HeapTuple heapTuple);
 
 extern void heap_get_latest_tid(Relation relation, Snapshot snapshot,
 					ItemPointer tid);
@@ -161,7 +162,8 @@ extern void heap_abort_speculative(Relation relation, HeapTuple tuple);
 extern HTSU_Result heap_update(Relation relation, ItemPointer otid,
 			HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode);
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update);
 extern HTSU_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
 				CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
 				bool follow_update,
@@ -176,7 +178,9 @@ extern bool heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple);
 extern Oid	simple_heap_insert(Relation relation, HeapTuple tup);
 extern void simple_heap_delete(Relation relation, ItemPointer tid);
 extern void simple_heap_update(Relation relation, ItemPointer otid,
-				   HeapTuple tup);
+				   HeapTuple tup,
+				   Bitmapset **modified_attrs,
+				   bool *warm_update);
 
 extern void heap_sync(Relation relation);
 
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index a4a1fe1..b4238e5 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -80,6 +80,7 @@
 #define XLH_UPDATE_CONTAINS_NEW_TUPLE			(1<<4)
 #define XLH_UPDATE_PREFIX_FROM_OLD				(1<<5)
 #define XLH_UPDATE_SUFFIX_FROM_OLD				(1<<6)
+#define XLH_UPDATE_WARM_UPDATE					(1<<7)
 
 /* convenience macro for checking whether any form of old tuple was logged */
 #define XLH_UPDATE_CONTAINS_OLD						\
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 7552186..ddbdbcd 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,7 +260,8 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x0800 are available */
+#define HEAP_WARM_TUPLE			0x0800	/* tuple is part of a WARM chain */
 #define HEAP_LATEST_TUPLE		0x1000	/*
 										 * This is the last tuple in chain and
 										 * ip_posid points to the root line
@@ -271,7 +272,7 @@ struct HeapTupleHeaderData
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF800	/* visibility-related bits */
 
 
 /*
@@ -510,6 +511,21 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+#define HeapTupleHeaderSetHeapWarmTuple(tup) \
+do { \
+	(tup)->t_infomask2 |= HEAP_WARM_TUPLE; \
+} while (0)
+
+#define HeapTupleHeaderClearHeapWarmTuple(tup) \
+do { \
+	(tup)->t_infomask2 &= ~HEAP_WARM_TUPLE; \
+} while (0)
+
+#define HeapTupleHeaderIsHeapWarmTuple(tup) \
+( \
+  ((tup)->t_infomask2 & HEAP_WARM_TUPLE) != 0 \
+)
+
 /*
  * Mark this as the last tuple in the HOT chain. Before PG v10 we used to store
  * the TID of the tuple itself in t_ctid field to mark the end of the chain.
@@ -785,6 +801,15 @@ struct MinimalTupleData
 #define HeapTupleClearHeapOnly(tuple) \
 		HeapTupleHeaderClearHeapOnly((tuple)->t_data)
 
+#define HeapTupleIsHeapWarmTuple(tuple) \
+		HeapTupleHeaderIsHeapWarmTuple((tuple)->t_data)
+
+#define HeapTupleSetHeapWarmTuple(tuple) \
+		HeapTupleHeaderSetHeapWarmTuple((tuple)->t_data)
+
+#define HeapTupleClearHeapWarmTuple(tuple) \
+		HeapTupleHeaderClearHeapWarmTuple((tuple)->t_data)
+
 #define HeapTupleGetOid(tuple) \
 		HeapTupleHeaderGetOid((tuple)->t_data)
 
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index 011a72e..98129d6 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -750,6 +750,8 @@ extern bytea *btoptions(Datum reloptions, bool validate);
 extern bool btproperty(Oid index_oid, int attno,
 		   IndexAMProperty prop, const char *propname,
 		   bool *res, bool *isnull);
+extern bool btrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * prototypes for functions in nbtvalidate.c
diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h
index ce3ca8d..12d3b0c 100644
--- a/src/include/access/relscan.h
+++ b/src/include/access/relscan.h
@@ -112,7 +112,8 @@ typedef struct IndexScanDescData
 	HeapTupleData xs_ctup;		/* current heap tuple, if any */
 	Buffer		xs_cbuf;		/* current heap buffer in scan, if any */
 	/* NB: if xs_cbuf is not InvalidBuffer, we hold a pin on that buffer */
-	bool		xs_recheck;		/* T means scan keys must be rechecked */
+	bool		xs_recheck;		/* T means scan keys must be rechecked for each tuple */
+	bool		xs_tuple_recheck;	/* T means scan keys must be rechecked for current tuple */
 
 	/*
 	 * When fetching with an ordering operator, the values of the ORDER BY
diff --git a/src/include/catalog/indexing.h b/src/include/catalog/indexing.h
index a3635a4..7e29df3 100644
--- a/src/include/catalog/indexing.h
+++ b/src/include/catalog/indexing.h
@@ -31,8 +31,13 @@ typedef struct ResultRelInfo *CatalogIndexState;
 extern CatalogIndexState CatalogOpenIndexes(Relation heapRel);
 extern void CatalogCloseIndexes(CatalogIndexState indstate);
 extern void CatalogIndexInsert(CatalogIndexState indstate,
-				   HeapTuple heapTuple);
-extern void CatalogUpdateIndexes(Relation heapRel, HeapTuple heapTuple);
+				   HeapTuple heapTuple,
+				   Bitmapset *modified_attrs, bool warm_update);
+extern void CatalogUpdateIndexes(Relation heapRel, HeapTuple heapTuple,
+				   Bitmapset *modified_attrs, bool warm_update);
+extern void CatalogUpdateHeapAndIndexes(Relation heapRel, ItemPointer otid,
+				   HeapTuple tup);
+extern Oid CatalogInsertHeapAndIndexes(Relation heapRel, HeapTuple tup);
 
 
 /*
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index 05652e8..c132b10 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -2740,6 +2740,8 @@ DATA(insert OID = 1933 (  pg_stat_get_tuples_deleted	PGNSP PGUID 12 1 0 0 0 f f
 DESCR("statistics: number of tuples deleted");
 DATA(insert OID = 1972 (  pg_stat_get_tuples_hot_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated");
+DATA(insert OID = 3353 (  pg_stat_get_tuples_warm_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated");
 DATA(insert OID = 2878 (  pg_stat_get_live_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_live_tuples _null_ _null_ _null_ ));
 DESCR("statistics: number of live tuples");
 DATA(insert OID = 2879 (  pg_stat_get_dead_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_dead_tuples _null_ _null_ _null_ ));
@@ -2892,6 +2894,8 @@ DATA(insert OID = 3042 (  pg_stat_get_xact_tuples_deleted		PGNSP PGUID 12 1 0 0
 DESCR("statistics: number of tuples deleted in current transaction");
 DATA(insert OID = 3043 (  pg_stat_get_xact_tuples_hot_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated in current transaction");
+DATA(insert OID = 3354 (  pg_stat_get_xact_tuples_warm_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated in current transaction");
 DATA(insert OID = 3044 (  pg_stat_get_xact_blocks_fetched		PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_fetched _null_ _null_ _null_ ));
 DESCR("statistics: number of blocks fetched in current transaction");
 DATA(insert OID = 3045 (  pg_stat_get_xact_blocks_hit			PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_hit _null_ _null_ _null_ ));
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 02dbe7b..c4495a3 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -382,6 +382,7 @@ extern void UnregisterExprContextCallback(ExprContext *econtext,
 extern void ExecOpenIndices(ResultRelInfo *resultRelInfo, bool speculative);
 extern void ExecCloseIndices(ResultRelInfo *resultRelInfo);
 extern List *ExecInsertIndexTuples(TupleTableSlot *slot, ItemPointer tupleid,
+					  ItemPointer root_tid, Bitmapset *modified_attrs,
 					  EState *estate, bool noDupErr, bool *specConflict,
 					  List *arbiterIndexes);
 extern bool ExecCheckIndexConstraints(TupleTableSlot *slot, EState *estate,
diff --git a/src/include/executor/nodeIndexscan.h b/src/include/executor/nodeIndexscan.h
index 46d6f45..2c4d884 100644
--- a/src/include/executor/nodeIndexscan.h
+++ b/src/include/executor/nodeIndexscan.h
@@ -37,5 +37,4 @@ extern void ExecIndexEvalRuntimeKeys(ExprContext *econtext,
 extern bool ExecIndexEvalArrayKeys(ExprContext *econtext,
 					   IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 extern bool ExecIndexAdvanceArrayKeys(IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
-
 #endif   /* NODEINDEXSCAN_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index f9bcdd6..07f2900 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -62,6 +62,7 @@ typedef struct IndexInfo
 	NodeTag		type;
 	int			ii_NumIndexAttrs;
 	AttrNumber	ii_KeyAttrNumbers[INDEX_MAX_KEYS];
+	Bitmapset  *ii_indxattrs;	/* bitmap of all columns used in this index */
 	List	   *ii_Expressions; /* list of Expr */
 	List	   *ii_ExpressionsState;	/* list of ExprState */
 	List	   *ii_Predicate;	/* list of Expr */
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index de8225b..ee635be 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -105,6 +105,7 @@ typedef struct PgStat_TableCounts
 	PgStat_Counter t_tuples_updated;
 	PgStat_Counter t_tuples_deleted;
 	PgStat_Counter t_tuples_hot_updated;
+	PgStat_Counter t_tuples_warm_updated;
 	bool		t_truncated;
 
 	PgStat_Counter t_delta_live_tuples;
@@ -625,6 +626,7 @@ typedef struct PgStat_StatTabEntry
 	PgStat_Counter tuples_updated;
 	PgStat_Counter tuples_deleted;
 	PgStat_Counter tuples_hot_updated;
+	PgStat_Counter tuples_warm_updated;
 
 	PgStat_Counter n_live_tuples;
 	PgStat_Counter n_dead_tuples;
@@ -1177,7 +1179,7 @@ pgstat_report_wait_end(void)
 	(pgStatBlockWriteTime += (n))
 
 extern void pgstat_count_heap_insert(Relation rel, int n);
-extern void pgstat_count_heap_update(Relation rel, bool hot);
+extern void pgstat_count_heap_update(Relation rel, bool hot, bool warm);
 extern void pgstat_count_heap_delete(Relation rel);
 extern void pgstat_count_truncate(Relation rel);
 extern void pgstat_update_heap_dead_tuples(Relation rel, int delta);
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index a617a7c..fbac7c0 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -138,9 +138,14 @@ typedef struct RelationData
 
 	/* data managed by RelationGetIndexAttrBitmap: */
 	Bitmapset  *rd_indexattr;	/* identifies columns used in indexes */
+	Bitmapset  *rd_exprindexattr; /* identifies columns used in expression or
+									 predicate indexes */
+	Bitmapset  *rd_indxnotreadyattr;	/* columns used by indexes not yet
+										   ready */
 	Bitmapset  *rd_keyattr;		/* cols that can be ref'd by foreign keys */
 	Bitmapset  *rd_pkattr;		/* cols included in primary key */
 	Bitmapset  *rd_idattr;		/* included in replica identity index */
+	bool		rd_supportswarm;/* True if the table can be WARM updated */
 
 	PublicationActions  *rd_pubactions;	/* publication actions */
 
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index da36b67..d18bd09 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -50,7 +50,9 @@ typedef enum IndexAttrBitmapKind
 	INDEX_ATTR_BITMAP_ALL,
 	INDEX_ATTR_BITMAP_KEY,
 	INDEX_ATTR_BITMAP_PRIMARY_KEY,
-	INDEX_ATTR_BITMAP_IDENTITY_KEY
+	INDEX_ATTR_BITMAP_IDENTITY_KEY,
+	INDEX_ATTR_BITMAP_EXPR_PREDICATE,
+	INDEX_ATTR_BITMAP_NOTREADY
 } IndexAttrBitmapKind;
 
 extern Bitmapset *RelationGetIndexAttrBitmap(Relation relation,
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index de5ae00..7656e6e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1728,6 +1728,7 @@ pg_stat_all_tables| SELECT c.oid AS relid,
     pg_stat_get_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_tuples_deleted(c.oid) AS n_tup_del,
     pg_stat_get_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_tuples_warm_updated(c.oid) AS n_tup_warm_upd,
     pg_stat_get_live_tuples(c.oid) AS n_live_tup,
     pg_stat_get_dead_tuples(c.oid) AS n_dead_tup,
     pg_stat_get_mod_since_analyze(c.oid) AS n_mod_since_analyze,
@@ -1871,6 +1872,7 @@ pg_stat_sys_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1914,6 +1916,7 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1951,7 +1954,8 @@ pg_stat_xact_all_tables| SELECT c.oid AS relid,
     pg_stat_get_xact_tuples_inserted(c.oid) AS n_tup_ins,
     pg_stat_get_xact_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_xact_tuples_deleted(c.oid) AS n_tup_del,
-    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd
+    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_xact_tuples_warm_updated(c.oid) AS n_tup_warm_upd
    FROM ((pg_class c
      LEFT JOIN pg_index i ON ((c.oid = i.indrelid)))
      LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
@@ -1967,7 +1971,8 @@ pg_stat_xact_sys_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname = ANY (ARRAY['pg_catalog'::name, 'information_schema'::name])) OR (pg_stat_xact_all_tables.schemaname ~ '^pg_toast'::text));
 pg_stat_xact_user_functions| SELECT p.oid AS funcid,
@@ -1989,7 +1994,8 @@ pg_stat_xact_user_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname <> ALL (ARRAY['pg_catalog'::name, 'information_schema'::name])) AND (pg_stat_xact_all_tables.schemaname !~ '^pg_toast'::text));
 pg_statio_all_indexes| SELECT c.oid AS relid,
diff --git a/src/test/regress/expected/warm.out b/src/test/regress/expected/warm.out
new file mode 100644
index 0000000..0aa3bb7
--- /dev/null
+++ b/src/test/regress/expected/warm.out
@@ -0,0 +1,367 @@
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+-- Only a non-indexed column is updated, so this could be a HOT update;
+-- but the page has no free space left, so it likely ends up non-HOT
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on updtst_tab1  (cost=4.45..47.23 rows=22 width=72)
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1  (cost=0.00..4.45 rows=22 width=0)
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Check if index only scan works correctly
+EXPLAIN SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on updtst_tab1  (cost=4.45..47.23 rows=22 width=4)
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1  (cost=0.00..4.45 rows=22 width=0)
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+                                      QUERY PLAN                                      
+--------------------------------------------------------------------------------------
+ Index Only Scan using updtst_indx1 on updtst_tab1  (cost=0.29..9.16 rows=50 width=4)
+   Index Cond: (b = 140001)
+(2 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab1;
+------------------
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+EXPLAIN SELECT * FROM updtst_tab2 WHERE b = 701;
+                                QUERY PLAN                                 
+---------------------------------------------------------------------------
+ Bitmap Heap Scan on updtst_tab2  (cost=4.18..12.64 rows=4 width=72)
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE a = 1;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+SET enable_seqscan = false;
+EXPLAIN SELECT * FROM updtst_tab2 WHERE b = 701;
+                                QUERY PLAN                                 
+---------------------------------------------------------------------------
+ Bitmap Heap Scan on updtst_tab2  (cost=4.18..12.64 rows=4 width=72)
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE b = 701;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+VACUUM updtst_tab2;
+EXPLAIN SELECT b FROM updtst_tab2 WHERE b = 701;
+                                     QUERY PLAN                                      
+-------------------------------------------------------------------------------------
+ Index Only Scan using updtst_indx2 on updtst_tab2  (cost=0.14..4.16 rows=1 width=4)
+   Index Cond: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab2 WHERE b = 701;
+  b  
+-----
+ 701
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab2;
+------------------
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 1;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN SELECT b FROM updtst_tab3 WHERE b = 701;
+                        QUERY PLAN                         
+-----------------------------------------------------------
+ Seq Scan on updtst_tab3  (cost=0.00..2.25 rows=1 width=4)
+   Filter: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 701;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+  b   
+------
+ 1421
+(1 row)
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+SET enable_seqscan = false;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    98
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 2;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN SELECT b FROM updtst_tab3 WHERE b = 702;
+                                     QUERY PLAN                                      
+-------------------------------------------------------------------------------------
+ Index Only Scan using updtst_indx3 on updtst_tab3  (cost=0.14..8.16 rows=1 width=4)
+   Index Cond: (b = 702)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 702;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+  b   
+------
+ 1422
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab3;
+------------------
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+explain select * from test_warm where lower(a) = 'test';
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on test_warm  (cost=4.18..12.65 rows=4 width=64)
+   Recheck Cond: (lower(a) = 'test'::text)
+   ->  Bitmap Index Scan on test_warmindx  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (lower(a) = 'test'::text)
+(4 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+select *, ctid from test_warm where a = 'test';
+ a | b | ctid 
+---+---+------
+(0 rows)
+
+select *, ctid from test_warm where a = 'TEST';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+                                   QUERY PLAN                                    
+---------------------------------------------------------------------------------
+ Index Scan using test_warmindx on test_warm  (cost=0.15..20.22 rows=4 width=64)
+   Index Cond: (lower(a) = 'test'::text)
+(2 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+DROP TABLE test_warm;
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index edeb2d6..2268705 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -42,6 +42,8 @@ test: create_type
 test: create_table
 test: create_function_2
 
+test: warm
+
 # ----------
 # Load huge amounts of data
 # We should split the data files into single files and then
diff --git a/src/test/regress/sql/warm.sql b/src/test/regress/sql/warm.sql
new file mode 100644
index 0000000..b73c278
--- /dev/null
+++ b/src/test/regress/sql/warm.sql
@@ -0,0 +1,172 @@
+
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+
+-- Only a non-indexed column is updated, so this could be a HOT update;
+-- but the page has no free space left, so it likely ends up non-HOT
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Check if index only scan works correctly
+EXPLAIN SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab1;
+
+------------------
+
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+
+EXPLAIN SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE a = 1;
+
+SET enable_seqscan = false;
+EXPLAIN SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE b = 701;
+
+VACUUM updtst_tab2;
+EXPLAIN SELECT b FROM updtst_tab2 WHERE b = 701;
+SELECT b FROM updtst_tab2 WHERE b = 701;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab2;
+------------------
+
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+SELECT * FROM updtst_tab3 WHERE a = 1;
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+
+VACUUM updtst_tab3;
+EXPLAIN SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+SET enable_seqscan = false;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+SELECT * FROM updtst_tab3 WHERE a = 2;
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+
+VACUUM updtst_tab3;
+EXPLAIN SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab3;
+------------------
+
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where a = 'test';
+select *, ctid from test_warm where a = 'TEST';
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+DROP TABLE test_warm;
+
+
Attachment: 0001_track_root_lp_v10.patch (application/octet-stream)
diff --git b/src/backend/access/heap/heapam.c a/src/backend/access/heap/heapam.c
index 84447f0..5149c07 100644
--- b/src/backend/access/heap/heapam.c
+++ a/src/backend/access/heap/heapam.c
@@ -93,7 +93,8 @@ static HeapTuple heap_prepare_insert(Relation relation, HeapTuple tup,
 					TransactionId xid, CommandId cid, int options);
 static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
-				HeapTuple newtup, HeapTuple old_key_tup,
+				HeapTuple newtup, OffsetNumber root_offnum,
+				HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
 static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
 							 Bitmapset *interesting_cols,
@@ -2247,13 +2248,13 @@ heap_get_latest_tid(Relation relation,
 		 */
 		if ((tp.t_data->t_infomask & HEAP_XMAX_INVALID) ||
 			HeapTupleHeaderIsOnlyLocked(tp.t_data) ||
-			ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid))
+			HeapTupleHeaderIsHeapLatest(tp.t_data, &ctid))
 		{
 			UnlockReleaseBuffer(buffer);
 			break;
 		}
 
-		ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tp.t_data, &ctid);
 		priorXmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		UnlockReleaseBuffer(buffer);
 	}							/* end of loop */
@@ -2384,6 +2385,7 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	bool		all_visible_cleared = false;
+	OffsetNumber	root_offnum;
 
 	/*
 	 * Fill in tuple header fields, assign an OID, and toast the tuple if
@@ -2422,8 +2424,13 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
-	RelationPutHeapTuple(relation, buffer, heaptup,
-						 (options & HEAP_INSERT_SPECULATIVE) != 0);
+	root_offnum = RelationPutHeapTuple(relation, buffer, heaptup,
+						 (options & HEAP_INSERT_SPECULATIVE) != 0,
+						 InvalidOffsetNumber);
+
+	/* We must not overwrite the speculative insertion token. */
+	if ((options & HEAP_INSERT_SPECULATIVE) == 0)
+		HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 	if (PageIsAllVisible(BufferGetPage(buffer)))
 	{
@@ -2651,6 +2658,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 	Size		saveFreeSpace;
 	bool		need_tuple_data = RelationIsLogicallyLogged(relation);
 	bool		need_cids = RelationIsAccessibleInLogicalDecoding(relation);
+	OffsetNumber	root_offnum;
 
 	needwal = !(options & HEAP_INSERT_SKIP_WAL) && RelationNeedsWAL(relation);
 	saveFreeSpace = RelationGetTargetPageFreeSpace(relation,
@@ -2721,7 +2729,12 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		 * RelationGetBufferForTuple has ensured that the first tuple fits.
 		 * Put that on the page, and then as many other tuples as fit.
 		 */
-		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false);
+		root_offnum = RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false,
+				InvalidOffsetNumber);
+
+		/* Mark this tuple as the latest and also set root offset. */
+		HeapTupleHeaderSetHeapLatest(heaptuples[ndone]->t_data, root_offnum);
+
 		for (nthispage = 1; ndone + nthispage < ntuples; nthispage++)
 		{
 			HeapTuple	heaptup = heaptuples[ndone + nthispage];
@@ -2729,7 +2742,10 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			if (PageGetHeapFreeSpace(page) < MAXALIGN(heaptup->t_len) + saveFreeSpace)
 				break;
 
-			RelationPutHeapTuple(relation, buffer, heaptup, false);
+			root_offnum = RelationPutHeapTuple(relation, buffer, heaptup, false,
+					InvalidOffsetNumber);
+			/* Mark each tuple as the latest and also set root offset. */
+			HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 			/*
 			 * We don't use heap_multi_insert for catalog tuples yet, but
@@ -3001,6 +3017,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	HeapTupleData tp;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	TransactionId new_xmax;
@@ -3011,6 +3028,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	bool		all_visible_cleared = false;
 	HeapTuple	old_key_tuple = NULL;	/* replica identity of the tuple */
 	bool		old_key_copied = false;
+	OffsetNumber	root_offnum;
 
 	Assert(ItemPointerIsValid(tid));
 
@@ -3052,7 +3070,8 @@ heap_delete(Relation relation, ItemPointer tid,
 		LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 	}
 
-	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid));
+	offnum = ItemPointerGetOffsetNumber(tid);
+	lp = PageGetItemId(page, offnum);
 	Assert(ItemIdIsNormal(lp));
 
 	tp.t_tableOid = RelationGetRelid(relation);
@@ -3182,7 +3201,17 @@ l1:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tp.t_data->t_ctid;
+
+		/*
+		 * If we're at the end of the chain, then just return the same TID back
+		 * to the caller. The caller uses that as a hint to detect that it has
+		 * reached the end of the chain.
+		 */
+		if (!HeapTupleHeaderIsHeapLatest(tp.t_data, &tp.t_self))
+			HeapTupleHeaderGetNextTid(tp.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&tp.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tp.t_data);
@@ -3231,6 +3260,22 @@ l1:
 							  xid, LockTupleExclusive, true,
 							  &new_xmax, &new_infomask, &new_infomask2);
 
+	/*
+	 * heap_get_root_tuple() may call palloc, which is disallowed once we
+	 * enter the critical section. So check if the root offset is cached in the
+	 * tuple and if not, fetch that information the hard way before entering the
+	 * critical section.
+	 *
+	 * Unless we are dealing with a pg_upgraded cluster, the root offset
+	 * information should most often be cached, so there should not be much
+	 * overhead in fetching this information. Also, once a tuple is
+	 * updated, the information will be copied to the new version. So it's not
+	 * as if we're going to pay this price forever.
+	 */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&tp.t_self));
+
 	START_CRIT_SECTION();
 
 	/*
@@ -3258,8 +3303,10 @@ l1:
 	HeapTupleHeaderClearHotUpdated(tp.t_data);
 	HeapTupleHeaderSetXmax(tp.t_data, new_xmax);
 	HeapTupleHeaderSetCmax(tp.t_data, cid, iscombo);
-	/* Make sure there is no forward chain link in t_ctid */
-	tp.t_data->t_ctid = tp.t_self;
+
+	/* Mark this tuple as the latest tuple in the update chain. */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		HeapTupleHeaderSetHeapLatest(tp.t_data, root_offnum);
 
 	MarkBufferDirty(buffer);
 
@@ -3460,6 +3507,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		old_key_copied = false;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
+	OffsetNumber	root_offnum;
 	MultiXactStatus mxact_status;
 	Buffer		buffer,
 				newbuf,
@@ -3522,6 +3571,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 
 
 	block = ItemPointerGetBlockNumber(otid);
+	offnum = ItemPointerGetOffsetNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
@@ -3806,7 +3856,12 @@ l2:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(oldtup.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = oldtup.t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(oldtup.t_data, &oldtup.t_self))
+			HeapTupleHeaderGetNextTid(oldtup.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&oldtup.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(oldtup.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(oldtup.t_data);
@@ -3946,6 +4001,7 @@ l2:
 		uint16		infomask_lock_old_tuple,
 					infomask2_lock_old_tuple;
 		bool		cleared_all_frozen = false;
+		OffsetNumber	root_offnum;
 
 		/*
 		 * To prevent concurrent sessions from updating the tuple, we have to
@@ -3973,6 +4029,14 @@ l2:
 
 		Assert(HEAP_XMAX_IS_LOCKED_ONLY(infomask_lock_old_tuple));
 
+		/*
+		 * Fetch root offset before entering the critical section. We do this
+		 * only if the information is not already available.
+		 */
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&oldtup.t_self));
+
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
@@ -3987,7 +4051,8 @@ l2:
 		HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 		/* temporarily make it look not-updated, but locked */
-		oldtup.t_data->t_ctid = oldtup.t_self;
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			HeapTupleHeaderSetHeapLatest(oldtup.t_data, root_offnum);
 
 		/*
 		 * Clear all-frozen bit on visibility map if needed. We could
@@ -4145,6 +4210,10 @@ l2:
 										   bms_overlap(modified_attrs, id_attrs),
 										   &old_key_copied);
 
+	if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
@@ -4170,6 +4239,17 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+		/*
+		 * For HOT (or WARM) updated tuples, we store the offset of the root
+		 * line pointer of this chain in the ip_posid field of the new tuple.
+		 * Usually this information will be available in the corresponding
+		 * field of the old tuple. But for aborted updates or pg_upgraded
+		 * databases, we might be seeing the old-style CTID chains and hence
+		 * the information must be obtained the hard way (we should have done
+		 * that before entering the critical section above).
+		 */
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
 	else
 	{
@@ -4177,10 +4257,22 @@ l2:
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		root_offnum = InvalidOffsetNumber;
 	}
 
-	RelationPutHeapTuple(relation, newbuf, heaptup, false);		/* insert new tuple */
-
+	/* insert new tuple */
+	root_offnum = RelationPutHeapTuple(relation, newbuf, heaptup, false,
+									   root_offnum);
+	/*
+	 * Also mark both copies as latest and set the root offset information. If
+	 * we're doing a HOT/WARM update, we just copy the information from the
+	 * old tuple, if available, or use the value computed above. For regular
+	 * updates, RelationPutHeapTuple must have returned the actual offset
+	 * number where the new version was inserted, and we store that value
+	 * since the update resulted in a new HOT-chain.
+	 */
+	HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
+	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
 	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
@@ -4193,7 +4285,7 @@ l2:
 	HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 	/* record address of new tuple in t_ctid of old one */
-	oldtup.t_data->t_ctid = heaptup->t_self;
+	HeapTupleHeaderSetNextTid(oldtup.t_data, &(heaptup->t_self));
 
 	/* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */
 	if (PageIsAllVisible(BufferGetPage(buffer)))
@@ -4232,6 +4324,7 @@ l2:
 
 		recptr = log_heap_update(relation, buffer,
 								 newbuf, &oldtup, heaptup,
+								 root_offnum,
 								 old_key_tuple,
 								 all_visible_cleared,
 								 all_visible_cleared_new);
@@ -4512,7 +4605,8 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	ItemId		lp;
 	Page		page;
 	Buffer		vmbuffer = InvalidBuffer;
-	BlockNumber block;
+	BlockNumber	block;
+	OffsetNumber	offnum;
 	TransactionId xid,
 				xmax;
 	uint16		old_infomask,
@@ -4521,9 +4615,11 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	bool		first_time = true;
 	bool		have_tuple_lock = false;
 	bool		cleared_all_frozen = false;
+	OffsetNumber	root_offnum;
 
 	*buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	block = ItemPointerGetBlockNumber(tid);
+	offnum = ItemPointerGetOffsetNumber(tid);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -4543,6 +4639,7 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	tuple->t_data = (HeapTupleHeader) PageGetItem(page, lp);
 	tuple->t_len = ItemIdGetLength(lp);
 	tuple->t_tableOid = RelationGetRelid(relation);
+	tuple->t_self = *tid;
 
 l3:
 	result = HeapTupleSatisfiesUpdate(tuple, cid, *buffer);
@@ -4570,7 +4667,11 @@ l3:
 		xwait = HeapTupleHeaderGetRawXmax(tuple->t_data);
 		infomask = tuple->t_data->t_infomask;
 		infomask2 = tuple->t_data->t_infomask2;
-		ItemPointerCopy(&tuple->t_data->t_ctid, &t_ctid);
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &t_ctid);
+		else
+			ItemPointerCopy(tid, &t_ctid);
 
 		LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
 
@@ -5008,7 +5109,12 @@ failed:
 		Assert(result == HeapTupleSelfUpdated || result == HeapTupleUpdated ||
 			   result == HeapTupleWouldBlock);
 		Assert(!(tuple->t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tuple->t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(tid, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tuple->t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tuple->t_data);
@@ -5056,6 +5162,10 @@ failed:
 							  GetCurrentTransactionId(), mode, false,
 							  &xid, &new_infomask, &new_infomask2);
 
+	if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&tuple->t_self));
+
 	START_CRIT_SECTION();
 
 	/*
@@ -5084,7 +5194,10 @@ failed:
 	 * the tuple as well.
 	 */
 	if (HEAP_XMAX_IS_LOCKED_ONLY(new_infomask))
-		tuple->t_data->t_ctid = *tid;
+	{
+		if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+			HeapTupleHeaderSetHeapLatest(tuple->t_data, root_offnum);
+	}
 
 	/* Clear only the all-frozen bit on visibility map if needed */
 	if (PageIsAllVisible(page) &&
@@ -5598,6 +5711,7 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 	bool		cleared_all_frozen = false;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber block;
+	OffsetNumber offnum;
 
 	ItemPointerCopy(tid, &tupid);
 
@@ -5606,6 +5720,8 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 		new_infomask = 0;
 		new_xmax = InvalidTransactionId;
 		block = ItemPointerGetBlockNumber(&tupid);
+		offnum = ItemPointerGetOffsetNumber(&tupid);
+
 		ItemPointerCopy(&tupid, &(mytup.t_self));
 
 		if (!heap_fetch(rel, SnapshotAny, &mytup, &buf, false, NULL))
@@ -5835,7 +5951,7 @@ l4:
 
 		/* if we find the end of update chain, we're done. */
 		if (mytup.t_data->t_infomask & HEAP_XMAX_INVALID ||
-			ItemPointerEquals(&mytup.t_self, &mytup.t_data->t_ctid) ||
+			HeapTupleHeaderIsHeapLatest(mytup.t_data, &mytup.t_self) ||
 			HeapTupleHeaderIsOnlyLocked(mytup.t_data))
 		{
 			result = HeapTupleMayBeUpdated;
@@ -5844,7 +5960,7 @@ l4:
 
 		/* tail recursion */
 		priorXmax = HeapTupleHeaderGetUpdateXid(mytup.t_data);
-		ItemPointerCopy(&(mytup.t_data->t_ctid), &tupid);
+		HeapTupleHeaderGetNextTid(mytup.t_data, &tupid);
 		UnlockReleaseBuffer(buf);
 		if (vmbuffer != InvalidBuffer)
 			ReleaseBuffer(vmbuffer);
@@ -5961,7 +6077,7 @@ heap_finish_speculative(Relation relation, HeapTuple tuple)
 	 * Replace the speculative insertion token with a real t_ctid, pointing to
 	 * itself like it does on regular tuples.
 	 */
-	htup->t_ctid = tuple->t_self;
+	HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 	/* XLOG stuff */
 	if (RelationNeedsWAL(relation))
@@ -6087,8 +6203,7 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	HeapTupleHeaderSetXmin(tp.t_data, InvalidTransactionId);
 
 	/* Clear the speculative insertion token too */
-	tp.t_data->t_ctid = tp.t_self;
-
+	HeapTupleHeaderSetHeapLatest(tp.t_data, ItemPointerGetOffsetNumber(tid));
 	MarkBufferDirty(buffer);
 
 	/*
@@ -7436,6 +7551,7 @@ log_heap_visible(RelFileNode rnode, Buffer heap_buffer, Buffer vm_buffer,
 static XLogRecPtr
 log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup, HeapTuple newtup,
+				OffsetNumber root_offnum,
 				HeapTuple old_key_tuple,
 				bool all_visible_cleared, bool new_all_visible_cleared)
 {
@@ -7556,6 +7672,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	xlrec.new_offnum = ItemPointerGetOffsetNumber(&newtup->t_self);
 	xlrec.new_xmax = HeapTupleHeaderGetRawXmax(newtup->t_data);
 
+	Assert(OffsetNumberIsValid(root_offnum));
+	xlrec.root_offnum = root_offnum;
+
 	bufflags = REGBUF_STANDARD;
 	if (init)
 		bufflags |= REGBUF_WILL_INIT;
@@ -8210,7 +8329,13 @@ heap_xlog_delete(XLogReaderState *record)
 			PageClearAllVisible(page);
 
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = target_tid;
+		if (!HeapTupleHeaderHasRootOffset(htup))
+		{
+			OffsetNumber	root_offnum;
+			root_offnum = heap_get_root_tuple(page, xlrec->offnum);
+			HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+		}
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8300,7 +8425,8 @@ heap_xlog_insert(XLogReaderState *record)
 		htup->t_hoff = xlhdr.t_hoff;
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
-		htup->t_ctid = target_tid;
+
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->offnum);
 
 		if (PageAddItem(page, (Item) htup, newlen, xlrec->offnum,
 						true, true) == InvalidOffsetNumber)
@@ -8435,8 +8561,8 @@ heap_xlog_multi_insert(XLogReaderState *record)
 			htup->t_hoff = xlhdr->t_hoff;
 			HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 			HeapTupleHeaderSetCmin(htup, FirstCommandId);
-			ItemPointerSetBlockNumber(&htup->t_ctid, blkno);
-			ItemPointerSetOffsetNumber(&htup->t_ctid, offnum);
+
+			HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 			offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 			if (offnum == InvalidOffsetNumber)
@@ -8572,7 +8698,7 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
 		/* Set forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetNextTid(htup, &newtid);
 
 		/* Mark the page as a candidate for pruning */
 		PageSetPrunable(page, XLogRecGetXid(record));
@@ -8705,13 +8831,17 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
-		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = newtid;
 
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
 
+		/*
+		 * Make sure the tuple is marked as the latest and root offset
+		 * information is restored.
+		 */
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->root_offnum);
+
 		if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED)
 			PageClearAllVisible(page);
 
@@ -8774,6 +8904,9 @@ heap_xlog_confirm(XLogReaderState *record)
 		 */
 		ItemPointerSet(&htup->t_ctid, BufferGetBlockNumber(buffer), offnum);
 
+		/* For a newly inserted tuple, set the root offset to itself. */
+		HeapTupleHeaderSetHeapLatest(htup, offnum);
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8837,11 +8970,17 @@ heap_xlog_lock(XLogReaderState *record)
 		 */
 		if (HEAP_XMAX_IS_LOCKED_ONLY(htup->t_infomask))
 		{
+			ItemPointerData	target_tid;
+
+			ItemPointerSet(&target_tid, BufferGetBlockNumber(buffer), offnum);
 			HeapTupleHeaderClearHotUpdated(htup);
 			/* Make sure there is no forward chain link in t_ctid */
-			ItemPointerSet(&htup->t_ctid,
-						   BufferGetBlockNumber(buffer),
-						   offnum);
+			if (!HeapTupleHeaderHasRootOffset(htup))
+			{
+				OffsetNumber	root_offnum;
+				root_offnum = heap_get_root_tuple(page, offnum);
+				HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+			}
 		}
 		HeapTupleHeaderSetXmax(htup, xlrec->locking_xid);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
diff --git b/src/backend/access/heap/hio.c a/src/backend/access/heap/hio.c
index 6529fe3..8052519 100644
--- b/src/backend/access/heap/hio.c
+++ a/src/backend/access/heap/hio.c
@@ -31,12 +31,20 @@
  * !!! EREPORT(ERROR) IS DISALLOWED HERE !!!  Must PANIC on failure!!!
  *
  * Note - caller must hold BUFFER_LOCK_EXCLUSIVE on the buffer.
+ *
+ * The caller can optionally tell us to set the root offset to the given value.
+ * Otherwise, the root offset is set to the offset of the new location once
+ * it's known. The former is used while updating an existing tuple, where the
+ * caller tells us the root line pointer of the chain.  The latter is used
+ * during insertion of a new row, hence the root line pointer is set to the
+ * offset where this tuple is inserted.
  */
-void
+OffsetNumber
 RelationPutHeapTuple(Relation relation,
 					 Buffer buffer,
 					 HeapTuple tuple,
-					 bool token)
+					 bool token,
+					 OffsetNumber root_offnum)
 {
 	Page		pageHeader;
 	OffsetNumber offnum;
@@ -60,17 +68,24 @@ RelationPutHeapTuple(Relation relation,
 	ItemPointerSet(&(tuple->t_self), BufferGetBlockNumber(buffer), offnum);
 
 	/*
-	 * Insert the correct position into CTID of the stored tuple, too (unless
-	 * this is a speculative insertion, in which case the token is held in
-	 * CTID field instead)
+	 * Set block number and the root offset into CTID of the stored tuple, too
+	 * (unless this is a speculative insertion, in which case the token is held
+	 * in CTID field instead).
 	 */
 	if (!token)
 	{
 		ItemId		itemId = PageGetItemId(pageHeader, offnum);
 		Item		item = PageGetItem(pageHeader, itemId);
 
+		/* Copy t_ctid to set the correct block number. */
 		((HeapTupleHeader) item)->t_ctid = tuple->t_self;
+
+		if (!OffsetNumberIsValid(root_offnum))
+			root_offnum = offnum;
+		HeapTupleHeaderSetHeapLatest((HeapTupleHeader) item, root_offnum);
 	}
+
+	return root_offnum;
 }
 
 /*
diff --git b/src/backend/access/heap/pruneheap.c a/src/backend/access/heap/pruneheap.c
index d69a266..f54337c 100644
--- b/src/backend/access/heap/pruneheap.c
+++ a/src/backend/access/heap/pruneheap.c
@@ -55,6 +55,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
 
+static void heap_get_root_tuples_internal(Page page,
+				OffsetNumber target_offnum, OffsetNumber *root_offsets);
 
 /*
  * Optionally prune and repair fragmentation in the specified page.
@@ -553,6 +555,17 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
 		if (!HeapTupleHeaderIsHotUpdated(htup))
 			break;
 
+
+		/*
+		 * If the tuple was HOT-updated and the update was later
+		 * aborted, someone could mark this tuple as the last tuple
+		 * in the chain, without clearing the HOT-updated flag. So we must
+		 * check if this is the last tuple in the chain and stop following the
+		 * CTID, else we risk getting into an infinite recursion (though
+		 * prstate->marked[] currently protects against that).
+		 */
+		if (HeapTupleHeaderHasRootOffset(htup))
+			break;
 		/*
 		 * Advance to next chain member.
 		 */
@@ -726,27 +739,47 @@ heap_page_prune_execute(Buffer buffer,
 
 
 /*
- * For all items in this page, find their respective root line pointers.
- * If item k is part of a HOT-chain with root at item j, then we set
- * root_offsets[k - 1] = j.
+ * Either for all items in this page or for the given item, find their
+ * respective root line pointers.
+ *
+ * When target_offnum is a valid offset number, the caller is interested in
+ * just one item. In that case, the root line pointer is returned in
+ * root_offsets.
  *
- * The passed-in root_offsets array must have MaxHeapTuplesPerPage entries.
- * We zero out all unused entries.
+ * When target_offnum is InvalidOffsetNumber, the caller wants to know
+ * the root line pointers of all the items in this page. The root_offsets array
+ * must have MaxHeapTuplesPerPage entries in that case. If item k is part of a
+ * HOT-chain with root at item j, then we set root_offsets[k - 1] = j. We zero
+ * out all unused entries.
  *
  * The function must be called with at least share lock on the buffer, to
  * prevent concurrent prune operations.
  *
+ * This is not a cheap function since it must scan through all line pointers
+ * and tuples on the page in order to find the root line pointers. To minimize
+ * the cost, we break out early if target_offnum is specified and the root
+ * line pointer for target_offnum is found.
+ *
  * Note: The information collected here is valid only as long as the caller
  * holds a pin on the buffer. Once pin is released, a tuple might be pruned
  * and reused by a completely unrelated tuple.
+ *
+ * Note: This function must not be called inside a critical section because it
+ * internally calls HeapTupleHeaderGetUpdateXid which somewhere down the stack
+ * may try to allocate heap memory. Memory allocation is disallowed in a
+ * critical section.
  */
-void
-heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+static void
+heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offsets)
 {
 	OffsetNumber offnum,
 				maxoff;
 
-	MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
+	if (OffsetNumberIsValid(target_offnum))
+		*root_offsets = InvalidOffsetNumber;
+	else
+		MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
 
 	maxoff = PageGetMaxOffsetNumber(page);
 	for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum = OffsetNumberNext(offnum))
@@ -774,9 +807,28 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 
 			/*
 			 * This is either a plain tuple or the root of a HOT-chain.
-			 * Remember it in the mapping.
+			 *
+			 * If the target_offnum is specified and if we found its mapping,
+			 * return.
 			 */
-			root_offsets[offnum - 1] = offnum;
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (target_offnum == offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember mapping for any other item. The
+				 * root_offsets array may not even have space for them. So be
+				 * careful about not writing past the array.
+				 */
+			}
+			else
+			{
+				/* Remember it in the mapping. */
+				root_offsets[offnum - 1] = offnum;
+			}
 
 			/* If it's not the start of a HOT-chain, we're done with it */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
@@ -817,15 +869,65 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 				!TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(htup)))
 				break;
 
-			/* Remember the root line pointer for this item */
-			root_offsets[nextoffnum - 1] = offnum;
+			/*
+			 * If target_offnum is specified and we found its mapping, return.
+			 */
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (nextoffnum == target_offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember mapping for any other item. The
+				 * root_offsets array may not even have space for them. So be
+				 * careful about not writing past the array.
+				 */
+			}
+			else
+			{
+				/* Remember the root line pointer for this item. */
+				root_offsets[nextoffnum - 1] = offnum;
+			}
 
 			/* Advance to next chain member, if any */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				break;
 
+			/*
+			 * If the tuple was HOT-updated and the update was later aborted,
+			 * someone could mark this tuple as the last tuple in the chain
+			 * and store the root offset in CTID, without clearing the
+			 * HOT-updated flag. So we must check whether CTID actually holds
+			 * the root offset, and break to avoid infinite recursion.
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
 		}
 	}
 }
+
+/*
+ * Get root line pointer for the given tuple.
+ */
+OffsetNumber
+heap_get_root_tuple(Page page, OffsetNumber target_offnum)
+{
+	OffsetNumber offnum = InvalidOffsetNumber;
+	heap_get_root_tuples_internal(page, target_offnum, &offnum);
+	return offnum;
+}
+
+/*
+ * Get root line pointers for all tuples in the page
+ */
+void
+heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+{
+	heap_get_root_tuples_internal(page, InvalidOffsetNumber,
+			root_offsets);
+}
diff --git b/src/backend/access/heap/rewriteheap.c a/src/backend/access/heap/rewriteheap.c
index 90ab6f2..e11b4a2 100644
--- b/src/backend/access/heap/rewriteheap.c
+++ a/src/backend/access/heap/rewriteheap.c
@@ -419,14 +419,18 @@ rewrite_heap_tuple(RewriteState state,
 	 */
 	if (!((old_tuple->t_data->t_infomask & HEAP_XMAX_INVALID) ||
 		  HeapTupleHeaderIsOnlyLocked(old_tuple->t_data)) &&
-		!(ItemPointerEquals(&(old_tuple->t_self),
-							&(old_tuple->t_data->t_ctid))))
+		!(HeapTupleHeaderIsHeapLatest(old_tuple->t_data, &old_tuple->t_self)))
 	{
 		OldToNewMapping mapping;
 
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
-		hashkey.tid = old_tuple->t_data->t_ctid;
+
+		/*
+		 * We've already checked that this is not the last tuple in the chain,
+		 * so fetch the next TID in the chain.
+		 */
+		HeapTupleHeaderGetNextTid(old_tuple->t_data, &hashkey.tid);
 
 		mapping = (OldToNewMapping)
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -439,7 +443,7 @@ rewrite_heap_tuple(RewriteState state,
 			 * set the ctid of this tuple to point to the new location, and
 			 * insert it right away.
 			 */
-			new_tuple->t_data->t_ctid = mapping->new_tid;
+			HeapTupleHeaderSetNextTid(new_tuple->t_data, &mapping->new_tid);
 
 			/* We don't need the mapping entry anymore */
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -525,7 +529,7 @@ rewrite_heap_tuple(RewriteState state,
 				new_tuple = unresolved->tuple;
 				free_new = true;
 				old_tid = unresolved->old_tid;
-				new_tuple->t_data->t_ctid = new_tid;
+				HeapTupleHeaderSetNextTid(new_tuple->t_data, &new_tid);
 
 				/*
 				 * We don't need the hash entry anymore, but don't free its
@@ -731,7 +735,12 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		onpage_tup->t_ctid = tup->t_self;
+		/*
+		 * Set t_ctid just to ensure that block number is copied correctly, but
+		 * then immediately mark the tuple as the latest.
+		 */
+		HeapTupleHeaderSetNextTid(onpage_tup, &tup->t_self);
+		HeapTupleHeaderSetHeapLatest(onpage_tup, newoff);
 	}
 
 	/* If heaptup is a private copy, release it. */
diff --git b/src/backend/executor/execIndexing.c a/src/backend/executor/execIndexing.c
index 8d119f6..9920f48 100644
--- b/src/backend/executor/execIndexing.c
+++ a/src/backend/executor/execIndexing.c
@@ -788,7 +788,8 @@ retry:
 			  DirtySnapshot.speculativeToken &&
 			  TransactionIdPrecedes(GetCurrentTransactionId(), xwait))))
 		{
-			ctid_wait = tup->t_data->t_ctid;
+			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
+				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git b/src/backend/executor/execMain.c a/src/backend/executor/execMain.c
index 3a5b5b2..12476e7 100644
--- b/src/backend/executor/execMain.c
+++ a/src/backend/executor/execMain.c
@@ -2589,7 +2589,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		 * As above, it should be safe to examine xmax and t_ctid without the
 		 * buffer content lock, because they can't be changing.
 		 */
-		if (ItemPointerEquals(&tuple.t_self, &tuple.t_data->t_ctid))
+		if (HeapTupleHeaderIsHeapLatest(tuple.t_data, &tuple.t_self))
 		{
 			/* deleted, so forget about it */
 			ReleaseBuffer(buffer);
@@ -2597,7 +2597,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		}
 
 		/* updated, so look at the updated row */
-		tuple.t_self = tuple.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tuple.t_data, &tuple.t_self);
 		/* updated row should have xmin matching this xmax */
 		priorXmax = HeapTupleHeaderGetUpdateXid(tuple.t_data);
 		ReleaseBuffer(buffer);
diff --git b/src/include/access/heapam.h a/src/include/access/heapam.h
index a864f78..95aa976 100644
--- b/src/include/access/heapam.h
+++ a/src/include/access/heapam.h
@@ -189,6 +189,7 @@ extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
+extern OffsetNumber heap_get_root_tuple(Page page, OffsetNumber target_offnum);
 extern void heap_get_root_tuples(Page page, OffsetNumber *root_offsets);
 
 /* in heap/syncscan.c */
diff --git b/src/include/access/heapam_xlog.h a/src/include/access/heapam_xlog.h
index 52f28b8..a4a1fe1 100644
--- b/src/include/access/heapam_xlog.h
+++ a/src/include/access/heapam_xlog.h
@@ -193,6 +193,8 @@ typedef struct xl_heap_update
 	uint8		flags;
 	TransactionId new_xmax;		/* xmax of the new tuple */
 	OffsetNumber new_offnum;	/* new tuple's offset */
+	OffsetNumber root_offnum;	/* offset of the root line pointer in case of
+								   HOT or WARM update */
 
 	/*
 	 * If XLOG_HEAP_CONTAINS_OLD_TUPLE or XLOG_HEAP_CONTAINS_OLD_KEY flags are
@@ -200,7 +202,7 @@ typedef struct xl_heap_update
 	 */
 } xl_heap_update;
 
-#define SizeOfHeapUpdate	(offsetof(xl_heap_update, new_offnum) + sizeof(OffsetNumber))
+#define SizeOfHeapUpdate	(offsetof(xl_heap_update, root_offnum) + sizeof(OffsetNumber))
 
 /*
  * This is what we need to know about vacuum page cleanup/redirect
diff --git b/src/include/access/hio.h a/src/include/access/hio.h
index 2824f23..921cb37 100644
--- b/src/include/access/hio.h
+++ a/src/include/access/hio.h
@@ -35,8 +35,8 @@ typedef struct BulkInsertStateData
 }	BulkInsertStateData;
 
 
-extern void RelationPutHeapTuple(Relation relation, Buffer buffer,
-					 HeapTuple tuple, bool token);
+extern OffsetNumber RelationPutHeapTuple(Relation relation, Buffer buffer,
+					 HeapTuple tuple, bool token, OffsetNumber root_offnum);
 extern Buffer RelationGetBufferForTuple(Relation relation, Size len,
 						  Buffer otherBuffer, int options,
 						  BulkInsertState bistate,
diff --git b/src/include/access/htup_details.h a/src/include/access/htup_details.h
index a6c7e31..7552186 100644
--- b/src/include/access/htup_details.h
+++ a/src/include/access/htup_details.h
@@ -260,13 +260,19 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x1800 are available */
+/* bit 0x0800 is available */
+#define HEAP_LATEST_TUPLE		0x1000	/*
+										 * This is the last tuple in chain and
+										 * ip_posid points to the root line
+										 * pointer
+										 */
 #define HEAP_KEYS_UPDATED		0x2000	/* tuple was updated and key cols
 										 * modified, or tuple deleted */
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xE000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+
 
 /*
  * HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins.  It is
@@ -504,6 +510,43 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+/*
+ * Mark this as the last tuple in the HOT chain. Before PG v10 we used to
+ * store the tuple's own TID in the t_ctid field to mark the end of the
+ * chain. Starting with PG v10, we instead use the special flag
+ * HEAP_LATEST_TUPLE to identify the last tuple, and store the root line
+ * pointer of the HOT chain in the t_ctid field.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderSetHeapLatest(tup, offnum) \
+do { \
+	AssertMacro(OffsetNumberIsValid(offnum)); \
+	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE; \
+	ItemPointerSetOffsetNumber(&(tup)->t_ctid, (offnum)); \
+} while (0)
+
+#define HeapTupleHeaderClearHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 &= ~HEAP_LATEST_TUPLE \
+)
+
+/*
+ * Starting from PostgreSQL 10, the latest tuple in an update chain has
+ * HEAP_LATEST_TUPLE set; but tuples upgraded from earlier versions do not.
+ * For those, we determine whether a tuple is latest by testing that its t_ctid
+ * points to itself.
+ *
+ * Note: beware of multiple evaluations of "tup" and "tid" arguments.
+ */
+#define HeapTupleHeaderIsHeapLatest(tup, tid) \
+( \
+  (((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0) || \
+  ((ItemPointerGetBlockNumber(&(tup)->t_ctid) == ItemPointerGetBlockNumber(tid)) && \
+   (ItemPointerGetOffsetNumber(&(tup)->t_ctid) == ItemPointerGetOffsetNumber(tid))) \
+)
+
+
 #define HeapTupleHeaderSetHeapOnly(tup) \
 ( \
   (tup)->t_infomask2 |= HEAP_ONLY_TUPLE \
@@ -542,6 +585,56 @@ do { \
 
 
 /*
+ * Set the t_ctid chain and also clear the HEAP_LATEST_TUPLE flag since we
+ * now have a new tuple in the chain and this is no longer the last tuple of
+ * the chain.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderSetNextTid(tup, tid) \
+do { \
+		ItemPointerCopy((tid), &((tup)->t_ctid)); \
+		HeapTupleHeaderClearHeapLatest((tup)); \
+} while (0)
+
+/*
+ * Get TID of next tuple in the update chain. Caller must have checked that
+ * we are not already at the end of the chain because in that case t_ctid may
+ * actually store the root line pointer of the HOT chain.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderGetNextTid(tup, next_ctid) \
+do { \
+	AssertMacro(!((tup)->t_infomask2 & HEAP_LATEST_TUPLE)); \
+	ItemPointerCopy(&(tup)->t_ctid, (next_ctid)); \
+} while (0)
+
+/*
+ * Get the root line pointer of the HOT chain. The caller should have confirmed
+ * that the root offset is cached before calling this macro.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderGetRootOffset(tup) \
+( \
+	AssertMacro(((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0), \
+	ItemPointerGetOffsetNumber(&(tup)->t_ctid) \
+)
+
+/*
+ * Return whether the tuple has a cached root offset.  We don't use
+ * HeapTupleHeaderIsHeapLatest because that one also considers the case of
+ * t_ctid pointing to itself, for tuples migrated from pre-v10 clusters. Here
+ * we are only interested in the tuples which are marked with HEAP_LATEST_TUPLE
+ * flag.
+ */
+#define HeapTupleHeaderHasRootOffset(tup) \
+( \
+	((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0 \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
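The chain-end convention introduced by these macros can be modeled in a few lines of standalone C. Everything below is a simplified sketch: the struct layouts and helper names are toy stand-ins (only the HEAP_LATEST_TUPLE bit value and the field names mirror the patch), not the real PostgreSQL definitions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Toy stand-ins for the PostgreSQL types: just enough structure to model
 * the HEAP_LATEST_TUPLE convention, not the real on-disk formats.
 */
typedef struct { uint32_t block; uint16_t offnum; } ItemPointerData;
typedef struct { uint16_t t_infomask2; ItemPointerData t_ctid; } HeapTupleHeaderData;

#define HEAP_LATEST_TUPLE 0x1000

/* Mark a tuple as the end of its chain; t_ctid now caches the root offset. */
void set_heap_latest(HeapTupleHeaderData *tup, uint16_t root_offnum)
{
    tup->t_infomask2 |= HEAP_LATEST_TUPLE;
    tup->t_ctid.offnum = root_offnum;
}

/*
 * Link a tuple to its successor; the flag must be cleared because t_ctid
 * is a forward pointer again, not a cached root offset.
 */
void set_next_tid(HeapTupleHeaderData *tup, ItemPointerData next)
{
    tup->t_ctid = next;
    tup->t_infomask2 &= (uint16_t) ~HEAP_LATEST_TUPLE;
}

/*
 * A tuple is latest if flagged (v10 convention) or if t_ctid points at the
 * tuple itself (pre-v10 convention, kept for pg_upgrade'd data).
 */
bool is_heap_latest(const HeapTupleHeaderData *tup, ItemPointerData self)
{
    return (tup->t_infomask2 & HEAP_LATEST_TUPLE) != 0 ||
           (tup->t_ctid.block == self.block &&
            tup->t_ctid.offnum == self.offnum);
}
```

With this scheme, the end-of-chain test is a flag check (plus the legacy self-pointer fallback), and the root line pointer is available in O(1) from the latest tuple instead of requiring a scan of the whole page.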
0000_interesting_attrs.patch (application/octet-stream)
commit 5fc696cf695f3bc488ba8f4544166b1be44998e3
Author: Pavan Deolasee <pavan.deolasee@gmail.com>
Date:   Sun Jan 1 16:29:10 2017 +0530

    Alvaro's patch on interesting attrs

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 5fd7f1e..84447f0 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -95,11 +95,8 @@ static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
 				HeapTuple newtup, HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
-static void HeapSatisfiesHOTandKeyUpdate(Relation relation,
-							 Bitmapset *hot_attrs,
-							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
-							 bool *satisfies_id,
+static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
+							 Bitmapset *interesting_cols,
 							 HeapTuple oldtup, HeapTuple newtup);
 static bool heap_acquire_tuplock(Relation relation, ItemPointer tid,
 					 LockTupleMode mode, LockWaitPolicy wait_policy,
@@ -3454,6 +3451,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *interesting_attrs;
+	Bitmapset  *modified_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3471,9 +3470,6 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				pagefree;
 	bool		have_tuple_lock = false;
 	bool		iscombo;
-	bool		satisfies_hot;
-	bool		satisfies_key;
-	bool		satisfies_id;
 	bool		use_hot_update = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
@@ -3500,21 +3496,30 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				 errmsg("cannot update tuples during a parallel operation")));
 
 	/*
-	 * Fetch the list of attributes to be checked for HOT update.  This is
-	 * wasted effort if we fail to update or have to put the new tuple on a
-	 * different page.  But we must compute the list before obtaining buffer
-	 * lock --- in the worst case, if we are doing an update on one of the
-	 * relevant system catalogs, we could deadlock if we try to fetch the list
-	 * later.  In any case, the relcache caches the data so this is usually
-	 * pretty cheap.
+	 * Fetch the list of attributes to be checked for various operations.
 	 *
-	 * Note that we get a copy here, so we need not worry about relcache flush
-	 * happening midway through.
+	 * For HOT considerations, this is wasted effort if we fail to update or
+	 * have to put the new tuple on a different page.  But we must compute the
+	 * list before obtaining buffer lock --- in the worst case, if we are doing
+	 * an update on one of the relevant system catalogs, we could deadlock if
+	 * we try to fetch the list later.  In any case, the relcache caches the
+	 * data so this is usually pretty cheap.
+	 *
+	 * We also need columns used by the replica identity, the columns that
+	 * are considered the "key" of rows in the table, and columns that are
+	 * part of indirect indexes.
+	 *
+	 * Note that we get copies of each bitmap, so we need not worry about
+	 * relcache flush happening midway through.
 	 */
 	hot_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_ALL);
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	interesting_attrs = bms_add_members(NULL, hot_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
+
 
 	block = ItemPointerGetBlockNumber(otid);
 	buffer = ReadBuffer(relation, block);
@@ -3535,7 +3540,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Assert(ItemIdIsNormal(lp));
 
 	/*
-	 * Fill in enough data in oldtup for HeapSatisfiesHOTandKeyUpdate to work
+	 * Fill in enough data in oldtup for HeapDetermineModifiedColumns to work
 	 * properly.
 	 */
 	oldtup.t_tableOid = RelationGetRelid(relation);
@@ -3561,6 +3566,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 		Assert(!(newtup->t_data->t_infomask & HEAP_HASOID));
 	}
 
+	/* Determine columns modified by the update. */
+	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
+												  &oldtup, newtup);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3572,10 +3581,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	 * is updates that don't manipulate key columns, not those that
 	 * serendipitiously arrive at the same key values.
 	 */
-	HeapSatisfiesHOTandKeyUpdate(relation, hot_attrs, key_attrs, id_attrs,
-								 &satisfies_hot, &satisfies_key,
-								 &satisfies_id, &oldtup, newtup);
-	if (satisfies_key)
+	if (!bms_overlap(modified_attrs, key_attrs))
 	{
 		*lockmode = LockTupleNoKeyExclusive;
 		mxact_status = MultiXactStatusNoKeyUpdate;
@@ -3814,6 +3820,8 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(modified_attrs);
+		bms_free(interesting_attrs);
 		return result;
 	}
 
@@ -4118,7 +4126,7 @@ l2:
 		 * to do a HOT update.  Check if any of the index columns have been
 		 * changed.  If not, then HOT update is possible.
 		 */
-		if (satisfies_hot)
+		if (!bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
 	}
 	else
@@ -4133,7 +4141,9 @@ l2:
 	 * ExtractReplicaIdentity() will return NULL if nothing needs to be
 	 * logged.
 	 */
-	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup, !satisfies_id, &old_key_copied);
+	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup,
+										   bms_overlap(modified_attrs, id_attrs),
+										   &old_key_copied);
 
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
@@ -4281,13 +4291,15 @@ l2:
 	bms_free(hot_attrs);
 	bms_free(key_attrs);
 	bms_free(id_attrs);
+	bms_free(modified_attrs);
+	bms_free(interesting_attrs);
 
 	return HeapTupleMayBeUpdated;
 }
 
 /*
  * Check if the specified attribute's value is same in both given tuples.
- * Subroutine for HeapSatisfiesHOTandKeyUpdate.
+ * Subroutine for HeapDetermineModifiedColumns.
  */
 static bool
 heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
@@ -4321,7 +4333,7 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 
 	/*
 	 * Extract the corresponding values.  XXX this is pretty inefficient if
-	 * there are many indexed columns.  Should HeapSatisfiesHOTandKeyUpdate do
+	 * there are many indexed columns.  Should HeapDetermineModifiedColumns do
 	 * a single heap_deform_tuple call on each tuple, instead?	But that
 	 * doesn't work for system columns ...
 	 */
@@ -4366,114 +4378,30 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 /*
  * Check which columns are being updated.
  *
- * This simultaneously checks conditions for HOT updates, for FOR KEY
- * SHARE updates, and REPLICA IDENTITY concerns.  Since much of the time they
- * will be checking very similar sets of columns, and doing the same tests on
- * them, it makes sense to optimize and do them together.
- *
- * We receive three bitmapsets comprising the three sets of columns we're
- * interested in.  Note these are destructively modified; that is OK since
- * this is invoked at most once in heap_update.
+ * Given an updated tuple, determine (and return into the output bitmapset),
+ * from those listed as interesting, the set of columns that changed.
  *
- * hot_result is set to TRUE if it's okay to do a HOT update (i.e. it does not
- * modified indexed columns); key_result is set to TRUE if the update does not
- * modify columns used in the key; id_result is set to TRUE if the update does
- * not modify columns in any index marked as the REPLICA IDENTITY.
+ * The input bitmapset is destructively modified; that is OK since this is
+ * invoked at most once in heap_update.
  */
-static void
-HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
-							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
-							 bool *satisfies_id,
+static Bitmapset *
+HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
 							 HeapTuple oldtup, HeapTuple newtup)
 {
-	int			next_hot_attnum;
-	int			next_key_attnum;
-	int			next_id_attnum;
-	bool		hot_result = true;
-	bool		key_result = true;
-	bool		id_result = true;
-
-	/* If REPLICA IDENTITY is set to FULL, id_attrs will be empty. */
-	Assert(bms_is_subset(id_attrs, key_attrs));
-	Assert(bms_is_subset(key_attrs, hot_attrs));
-
-	/*
-	 * If one of these sets contains no remaining bits, bms_first_member will
-	 * return -1, and after adding FirstLowInvalidHeapAttributeNumber (which
-	 * is negative!)  we'll get an attribute number that can't possibly be
-	 * real, and thus won't match any actual attribute number.
-	 */
-	next_hot_attnum = bms_first_member(hot_attrs);
-	next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
-	next_key_attnum = bms_first_member(key_attrs);
-	next_key_attnum += FirstLowInvalidHeapAttributeNumber;
-	next_id_attnum = bms_first_member(id_attrs);
-	next_id_attnum += FirstLowInvalidHeapAttributeNumber;
+	int		attnum;
+	Bitmapset *modified = NULL;
 
-	for (;;)
+	while ((attnum = bms_first_member(interesting_cols)) >= 0)
 	{
-		bool		changed;
-		int			check_now;
-
-		/*
-		 * Since the HOT attributes are a superset of the key attributes and
-		 * the key attributes are a superset of the id attributes, this logic
-		 * is guaranteed to identify the next column that needs to be checked.
-		 */
-		if (hot_result && next_hot_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_hot_attnum;
-		else if (key_result && next_key_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_key_attnum;
-		else if (id_result && next_id_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_id_attnum;
-		else
-			break;
+		attnum += FirstLowInvalidHeapAttributeNumber;
 
-		/* See whether it changed. */
-		changed = !heap_tuple_attr_equals(RelationGetDescr(relation),
-										  check_now, oldtup, newtup);
-		if (changed)
-		{
-			if (check_now == next_hot_attnum)
-				hot_result = false;
-			if (check_now == next_key_attnum)
-				key_result = false;
-			if (check_now == next_id_attnum)
-				id_result = false;
-
-			/* if all are false now, we can stop checking */
-			if (!hot_result && !key_result && !id_result)
-				break;
-		}
-
-		/*
-		 * Advance the next attribute numbers for the sets that contain the
-		 * attribute we just checked.  As we work our way through the columns,
-		 * the next_attnum values will rise; but when each set becomes empty,
-		 * bms_first_member() will return -1 and the attribute number will end
-		 * up with a value less than FirstLowInvalidHeapAttributeNumber.
-		 */
-		if (hot_result && check_now == next_hot_attnum)
-		{
-			next_hot_attnum = bms_first_member(hot_attrs);
-			next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
-		if (key_result && check_now == next_key_attnum)
-		{
-			next_key_attnum = bms_first_member(key_attrs);
-			next_key_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
-		if (id_result && check_now == next_id_attnum)
-		{
-			next_id_attnum = bms_first_member(id_attrs);
-			next_id_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
+		if (!heap_tuple_attr_equals(RelationGetDescr(relation),
+								   attnum, oldtup, newtup))
+			modified = bms_add_member(modified,
+									  attnum - FirstLowInvalidHeapAttributeNumber);
 	}
 
-	*satisfies_hot = hot_result;
-	*satisfies_key = key_result;
-	*satisfies_id = id_result;
+	return modified;
 }
 
 /*
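The shape of the HeapDetermineModifiedColumns refactoring above -- one pass that returns the set of modified columns, which callers then probe with bms_overlap -- can be sketched with a toy single-word bitmapset. The names below echo bitmapset.c, but they are illustrative stand-ins, not the real API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Toy stand-in for PostgreSQL's Bitmapset: one 64-bit word, attribute
 * numbers 0..63.
 */
typedef uint64_t Bitmapset;

Bitmapset bms_add_member(Bitmapset set, int attnum)
{
    return set | ((Bitmapset) 1 << attnum);
}

bool bms_overlap(Bitmapset a, Bitmapset b)
{
    return (a & b) != 0;
}

/*
 * Mirrors the idea of HeapDetermineModifiedColumns: compare old and new
 * values for every interesting column once, and return the set of columns
 * that actually changed.
 */
Bitmapset determine_modified(const int *oldvals, const int *newvals,
                             Bitmapset interesting)
{
    Bitmapset modified = 0;

    for (int attnum = 0; attnum < 64; attnum++)
        if ((interesting & ((Bitmapset) 1 << attnum)) &&
            oldvals[attnum] != newvals[attnum])
            modified = bms_add_member(modified, attnum);
    return modified;
}
```

The three old booleans then become three overlap tests on one result: `!bms_overlap(modified, hot_attrs)` for HOT safety, `!bms_overlap(modified, key_attrs)` for the weaker lock, and `bms_overlap(modified, id_attrs)` for replica identity logging.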
#40 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Pavan Deolasee (#39)
Re: Patch: Write Amplification Reduction Method (WARM)

Pavan Deolasee wrote:

On Thu, Jan 26, 2017 at 2:38 AM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

The simple_heap_update + CatalogUpdateIndexes pattern is getting
obnoxious. How about creating something like catalog_heap_update which
does both things at once, and stop bothering each callsite with the WARM
stuff?

What I realised is that there are really 2 patterns:
1. simple_heap_insert, CatalogUpdateIndexes
2. simple_heap_update, CatalogUpdateIndexes

There are only a couple of places where we already have indexes open or have
more than one tuple to update, so we call CatalogIndexInsert directly. What
I ended up doing in the attached patch is add two new APIs which combine
the two steps of each of these patterns. It seems much cleaner to me and
also less buggy for future users. I hope I am not missing a reason not to
combine these steps.

CatalogUpdateIndexes was just added as a convenience function on top of
a very common pattern. If we now have a reason to create a second one
because there are now two very common patterns, it seems reasonable to
have two functions. I think I would commit the refactoring to create
these functions ahead of the larger WARM patch, since I think it'd be
bulky and largely mechanical. (I'm going from this description; didn't
read your actual code.)

+#define HeapTupleHeaderGetNextTid(tup, next_ctid) \
+do { \
+     AssertMacro(!((tup)->t_infomask2 & HEAP_LATEST_TUPLE)); \
+     ItemPointerCopy(&(tup)->t_ctid, (next_ctid)); \
+} while (0)

Actually, I think this macro could just return the TID so that it can be
used as struct assignment, just like ItemPointerCopy does internally --
callers can do
ctid = HeapTupleHeaderGetNextTid(tup);

While I agree with your proposal, I wonder why we have ItemPointerCopy() in
the first place, given that we freely copy TIDs by struct assignment. Is there
a reason for that? And if there is, does it impact this specific case?

I dunno. This macro is present in our very first commit d31084e9d1118b.
Maybe it's an artifact from the Lisp to C conversion. Even then, we had
some cases of iptrs being copied by struct assignment, so it's not like
it didn't work. Perhaps somebody envisioned that the internal details
could change, but that hasn't happened in two decades so why should we
worry about it now? If somebody needs it later, it can be changed then.

There is one issue that bothers me. The current implementation lacks the
ability to convert WARM chains into HOT chains. The README.WARM has a
proposal to do that, but it requires an additional free bit in the tuple
header (which we don't have) and, of course, it needs to be vetted and
implemented. If the heap ends up with many WARM tuples, index-only scans
will become ineffective, because an index-only scan cannot skip a heap page
if it contains a WARM tuple. Alternate ideas/suggestions and review of the
design are welcome!

t_infomask2 contains one last unused bit, and we could reuse vacuum
full's bits (HEAP_MOVED_OUT, HEAP_MOVED_IN), but that will need some
thinking ahead. Maybe now's the time to start versioning relations so
that we can ensure clusters upgraded to pg10 do not contain any of those
bits in any tuple headers.

I don't have any ideas regarding the estate passed to recheck yet --
haven't looked at the callsites in detail. I'll give this another look
later.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#41 Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Alvaro Herrera (#40)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Jan 31, 2017 at 7:21 PM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

Pavan Deolasee wrote:

On Thu, Jan 26, 2017 at 2:38 AM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:

The simple_heap_update + CatalogUpdateIndexes pattern is getting
obnoxious. How about creating something like catalog_heap_update which
does both things at once, and stop bothering each callsite with the WARM
stuff?

What I realised is that there are really 2 patterns:
1. simple_heap_insert, CatalogUpdateIndexes
2. simple_heap_update, CatalogUpdateIndexes

There are only a couple of places where we already have indexes open or have
more than one tuple to update, so we call CatalogIndexInsert directly. What
I ended up doing in the attached patch is add two new APIs which combine
the two steps of each of these patterns. It seems much cleaner to me and
also less buggy for future users. I hope I am not missing a reason not to
combine these steps.

CatalogUpdateIndexes was just added as a convenience function on top of
a very common pattern. If we now have a reason to create a second one
because there are now two very common patterns, it seems reasonable to
have two functions. I think I would commit the refactoring to create
these functions ahead of the larger WARM patch, since I think it'd be
bulky and largely mechanical. (I'm going from this description; didn't
read your actual code.)

Sounds good. Should I submit that as a separate patch on current master?

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#42 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Pavan Deolasee (#41)
Re: Patch: Write Amplification Reduction Method (WARM)

Pavan Deolasee wrote:

On Tue, Jan 31, 2017 at 7:21 PM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

CatalogUpdateIndexes was just added as a convenience function on top of
a very common pattern. If we now have a reason to create a second one
because there are now two very common patterns, it seems reasonable to
have two functions. I think I would commit the refactoring to create
these functions ahead of the larger WARM patch, since I think it'd be
bulky and largely mechanical. (I'm going from this description; didn't
read your actual code.)

Sounds good. Should I submit that as a separate patch on current master?

Yes, please.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#43 Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Alvaro Herrera (#42)
1 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Jan 31, 2017 at 7:37 PM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

Pavan Deolasee wrote:

Sounds good. Should I submit that as a separate patch on current master?

Yes, please.

Attached.

Two new APIs added.

- CatalogInsertHeapAndIndexes, which does a simple_heap_insert followed by
catalog index updates
- CatalogUpdateHeapAndIndexes, which does a simple_heap_update followed by
catalog index updates

Only a handful of callers of simple_heap_insert/update remain after this
patch. They typically work with already-opened indexes, and hence I left
them unchanged.

make check-world passes with the patch.
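As a rough sketch of the contract these two wrappers provide -- each performs the heap write and the index maintenance exactly once -- consider the toy model below. The real functions take Relation/HeapTuple arguments; the counter stubs here are purely illustrative stand-ins.

```c
#include <assert.h>

/*
 * Counter stubs standing in for the real heap and index operations, so the
 * "both steps, exactly once" contract of each wrapper is observable.
 */
int heap_writes = 0;
int index_writes = 0;

void simple_heap_insert_stub(void)   { heap_writes++; }
void simple_heap_update_stub(void)   { heap_writes++; }
void CatalogUpdateIndexes_stub(void) { index_writes++; }

/* simple_heap_insert + CatalogUpdateIndexes combined into one call */
void CatalogInsertHeapAndIndexes(void)
{
    simple_heap_insert_stub();
    CatalogUpdateIndexes_stub();
}

/* simple_heap_update + CatalogUpdateIndexes combined into one call */
void CatalogUpdateHeapAndIndexes(void)
{
    simple_heap_update_stub();
    CatalogUpdateIndexes_stub();
}
```

Callers then can no longer forget the CatalogUpdateIndexes step, which is the bug class the refactoring removes.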

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

catalog_update.patch (application/octet-stream)
diff --git a/src/backend/catalog/aclchk.c b/src/backend/catalog/aclchk.c
index 00a9aea..477f450 100644
--- a/src/backend/catalog/aclchk.c
+++ b/src/backend/catalog/aclchk.c
@@ -1252,7 +1252,7 @@ SetDefaultACL(InternalDefaultACL *iacls)
 			values[Anum_pg_default_acl_defaclacl - 1] = PointerGetDatum(new_acl);
 
 			newtuple = heap_form_tuple(RelationGetDescr(rel), values, nulls);
-			simple_heap_insert(rel, newtuple);
+			CatalogInsertHeapAndIndexes(rel, newtuple);
 		}
 		else
 		{
@@ -1262,12 +1262,9 @@ SetDefaultACL(InternalDefaultACL *iacls)
 
 			newtuple = heap_modify_tuple(tuple, RelationGetDescr(rel),
 										 values, nulls, replaces);
-			simple_heap_update(rel, &newtuple->t_self, newtuple);
+			CatalogUpdateHeapAndIndexes(rel, &newtuple->t_self, newtuple);
 		}
 
-		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(rel, newtuple);
-
 		/* these dependencies don't change in an update */
 		if (isNew)
 		{
@@ -1697,10 +1694,7 @@ ExecGrant_Attribute(InternalGrant *istmt, Oid relOid, const char *relname,
 		newtuple = heap_modify_tuple(attr_tuple, RelationGetDescr(attRelation),
 									 values, nulls, replaces);
 
-		simple_heap_update(attRelation, &newtuple->t_self, newtuple);
-
-		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(attRelation, newtuple);
+		CatalogUpdateHeapAndIndexes(attRelation, &newtuple->t_self, newtuple);
 
 		/* Update initial privileges for extensions */
 		recordExtensionInitPriv(relOid, RelationRelationId, attnum,
@@ -1963,10 +1957,7 @@ ExecGrant_Relation(InternalGrant *istmt)
 			newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation),
 										 values, nulls, replaces);
 
-			simple_heap_update(relation, &newtuple->t_self, newtuple);
-
-			/* keep the catalog indexes up to date */
-			CatalogUpdateIndexes(relation, newtuple);
+			CatalogUpdateHeapAndIndexes(relation, &newtuple->t_self, newtuple);
 
 			/* Update initial privileges for extensions */
 			recordExtensionInitPriv(relOid, RelationRelationId, 0, new_acl);
@@ -2156,10 +2147,7 @@ ExecGrant_Database(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values,
 									 nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
-
-		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateHeapAndIndexes(relation, &newtuple->t_self, newtuple);
 
 		/* Update the shared dependency ACL info */
 		updateAclDependencies(DatabaseRelationId, HeapTupleGetOid(tuple), 0,
@@ -2281,10 +2269,7 @@ ExecGrant_Fdw(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values,
 									 nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
-
-		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateHeapAndIndexes(relation, &newtuple->t_self, newtuple);
 
 		/* Update initial privileges for extensions */
 		recordExtensionInitPriv(fdwid, ForeignDataWrapperRelationId, 0,
@@ -2410,10 +2395,7 @@ ExecGrant_ForeignServer(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values,
 									 nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
-
-		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateHeapAndIndexes(relation, &newtuple->t_self, newtuple);
 
 		/* Update initial privileges for extensions */
 		recordExtensionInitPriv(srvid, ForeignServerRelationId, 0, new_acl);
@@ -2537,10 +2519,7 @@ ExecGrant_Function(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values,
 									 nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
-
-		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateHeapAndIndexes(relation, &newtuple->t_self, newtuple);
 
 		/* Update initial privileges for extensions */
 		recordExtensionInitPriv(funcId, ProcedureRelationId, 0, new_acl);
@@ -2671,10 +2650,7 @@ ExecGrant_Language(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values,
 									 nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
-
-		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateHeapAndIndexes(relation, &newtuple->t_self, newtuple);
 
 		/* Update initial privileges for extensions */
 		recordExtensionInitPriv(langId, LanguageRelationId, 0, new_acl);
@@ -2813,10 +2789,7 @@ ExecGrant_Largeobject(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation),
 									 values, nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
-
-		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateHeapAndIndexes(relation, &newtuple->t_self, newtuple);
 
 		/* Update initial privileges for extensions */
 		recordExtensionInitPriv(loid, LargeObjectRelationId, 0, new_acl);
@@ -2941,10 +2914,7 @@ ExecGrant_Namespace(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values,
 									 nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
-
-		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateHeapAndIndexes(relation, &newtuple->t_self, newtuple);
 
 		/* Update initial privileges for extensions */
 		recordExtensionInitPriv(nspid, NamespaceRelationId, 0, new_acl);
@@ -3068,10 +3038,7 @@ ExecGrant_Tablespace(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values,
 									 nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
-
-		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateHeapAndIndexes(relation, &newtuple->t_self, newtuple);
 
 		/* Update the shared dependency ACL info */
 		updateAclDependencies(TableSpaceRelationId, tblId, 0,
@@ -3205,10 +3172,7 @@ ExecGrant_Type(InternalGrant *istmt)
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values,
 									 nulls, replaces);
 
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
-
-		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relation, newtuple);
+		CatalogUpdateHeapAndIndexes(relation, &newtuple->t_self, newtuple);
 
 		/* Update initial privileges for extensions */
 		recordExtensionInitPriv(typId, TypeRelationId, 0, new_acl);
@@ -5751,10 +5715,7 @@ recordExtensionInitPrivWorker(Oid objoid, Oid classoid, int objsubid, Acl *new_a
 			oldtuple = heap_modify_tuple(oldtuple, RelationGetDescr(relation),
 										 values, nulls, replace);
 
-			simple_heap_update(relation, &oldtuple->t_self, oldtuple);
-
-			/* keep the catalog indexes up to date */
-			CatalogUpdateIndexes(relation, oldtuple);
+			CatalogUpdateHeapAndIndexes(relation, &oldtuple->t_self, oldtuple);
 		}
 		else
 			/* new_acl is NULL, so delete the entry we found. */
@@ -5788,10 +5749,7 @@ recordExtensionInitPrivWorker(Oid objoid, Oid classoid, int objsubid, Acl *new_a
 
 			tuple = heap_form_tuple(RelationGetDescr(relation), values, nulls);
 
-			simple_heap_insert(relation, tuple);
-
-			/* keep the catalog indexes up to date */
-			CatalogUpdateIndexes(relation, tuple);
+			CatalogInsertHeapAndIndexes(relation, tuple);
 		}
 	}
 
diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c
index 7ce9115..de2ba12 100644
--- a/src/backend/catalog/heap.c
+++ b/src/backend/catalog/heap.c
@@ -824,9 +824,7 @@ InsertPgClassTuple(Relation pg_class_desc,
 	HeapTupleSetOid(tup, new_rel_oid);
 
 	/* finally insert the new tuple, update the indexes, and clean up */
-	simple_heap_insert(pg_class_desc, tup);
-
-	CatalogUpdateIndexes(pg_class_desc, tup);
+	CatalogInsertHeapAndIndexes(pg_class_desc, tup);
 
 	heap_freetuple(tup);
 }
@@ -1599,10 +1597,7 @@ RemoveAttributeById(Oid relid, AttrNumber attnum)
 				 "........pg.dropped.%d........", attnum);
 		namestrcpy(&(attStruct->attname), newattname);
 
-		simple_heap_update(attr_rel, &tuple->t_self, tuple);
-
-		/* keep the system catalog indexes current */
-		CatalogUpdateIndexes(attr_rel, tuple);
+		CatalogUpdateHeapAndIndexes(attr_rel, &tuple->t_self, tuple);
 	}
 
 	/*
@@ -1731,10 +1726,7 @@ RemoveAttrDefaultById(Oid attrdefId)
 
 	((Form_pg_attribute) GETSTRUCT(tuple))->atthasdef = false;
 
-	simple_heap_update(attr_rel, &tuple->t_self, tuple);
-
-	/* keep the system catalog indexes current */
-	CatalogUpdateIndexes(attr_rel, tuple);
+	CatalogUpdateHeapAndIndexes(attr_rel, &tuple->t_self, tuple);
 
 	/*
 	 * Our update of the pg_attribute row will force a relcache rebuild, so
@@ -1932,9 +1924,7 @@ StoreAttrDefault(Relation rel, AttrNumber attnum,
 	adrel = heap_open(AttrDefaultRelationId, RowExclusiveLock);
 
 	tuple = heap_form_tuple(adrel->rd_att, values, nulls);
-	attrdefOid = simple_heap_insert(adrel, tuple);
-
-	CatalogUpdateIndexes(adrel, tuple);
+	attrdefOid = CatalogInsertHeapAndIndexes(adrel, tuple);
 
 	defobject.classId = AttrDefaultRelationId;
 	defobject.objectId = attrdefOid;
@@ -1964,9 +1954,7 @@ StoreAttrDefault(Relation rel, AttrNumber attnum,
 	if (!attStruct->atthasdef)
 	{
 		attStruct->atthasdef = true;
-		simple_heap_update(attrrel, &atttup->t_self, atttup);
-		/* keep catalog indexes current */
-		CatalogUpdateIndexes(attrrel, atttup);
+		CatalogUpdateHeapAndIndexes(attrrel, &atttup->t_self, atttup);
 	}
 	heap_close(attrrel, RowExclusiveLock);
 	heap_freetuple(atttup);
@@ -2561,8 +2549,7 @@ MergeWithExistingConstraint(Relation rel, char *ccname, Node *expr,
 				Assert(is_local);
 				con->connoinherit = true;
 			}
-			simple_heap_update(conDesc, &tup->t_self, tup);
-			CatalogUpdateIndexes(conDesc, tup);
+			CatalogUpdateHeapAndIndexes(conDesc, &tup->t_self, tup);
 			break;
 		}
 	}
@@ -2602,10 +2589,7 @@ SetRelationNumChecks(Relation rel, int numchecks)
 	{
 		relStruct->relchecks = numchecks;
 
-		simple_heap_update(relrel, &reltup->t_self, reltup);
-
-		/* keep catalog indexes current */
-		CatalogUpdateIndexes(relrel, reltup);
+		CatalogUpdateHeapAndIndexes(relrel, &reltup->t_self, reltup);
 	}
 	else
 	{
@@ -3145,10 +3129,7 @@ StorePartitionKey(Relation rel,
 
 	tuple = heap_form_tuple(RelationGetDescr(pg_partitioned_table), values, nulls);
 
-	simple_heap_insert(pg_partitioned_table, tuple);
-
-	/* Update the indexes on pg_partitioned_table */
-	CatalogUpdateIndexes(pg_partitioned_table, tuple);
+	CatalogInsertHeapAndIndexes(pg_partitioned_table, tuple);
 	heap_close(pg_partitioned_table, RowExclusiveLock);
 
 	/* Mark this relation as dependent on a few things as follows */
@@ -3265,8 +3246,7 @@ StorePartitionBound(Relation rel, Relation parent, Node *bound)
 								 new_val, new_null, new_repl);
 	/* Also set the flag */
 	((Form_pg_class) GETSTRUCT(newtuple))->relispartition = true;
-	simple_heap_update(classRel, &newtuple->t_self, newtuple);
-	CatalogUpdateIndexes(classRel, newtuple);
+	CatalogUpdateHeapAndIndexes(classRel, &newtuple->t_self, newtuple);
 	heap_freetuple(newtuple);
 	heap_close(classRel, RowExclusiveLock);
 
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 26cbc0e..33ca96a 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -649,10 +649,7 @@ UpdateIndexRelation(Oid indexoid,
 	/*
 	 * insert the tuple into the pg_index catalog
 	 */
-	simple_heap_insert(pg_index, tuple);
-
-	/* update the indexes on pg_index */
-	CatalogUpdateIndexes(pg_index, tuple);
+	CatalogInsertHeapAndIndexes(pg_index, tuple);
 
 	/*
 	 * close the relation and free the tuple
@@ -1324,8 +1321,7 @@ index_constraint_create(Relation heapRelation,
 
 		if (dirty)
 		{
-			simple_heap_update(pg_index, &indexTuple->t_self, indexTuple);
-			CatalogUpdateIndexes(pg_index, indexTuple);
+			CatalogUpdateHeapAndIndexes(pg_index, &indexTuple->t_self, indexTuple);
 
 			InvokeObjectPostAlterHookArg(IndexRelationId, indexRelationId, 0,
 										 InvalidOid, is_internal);
@@ -2103,8 +2099,7 @@ index_build(Relation heapRelation,
 		Assert(!indexForm->indcheckxmin);
 
 		indexForm->indcheckxmin = true;
-		simple_heap_update(pg_index, &indexTuple->t_self, indexTuple);
-		CatalogUpdateIndexes(pg_index, indexTuple);
+		CatalogUpdateHeapAndIndexes(pg_index, &indexTuple->t_self, indexTuple);
 
 		heap_freetuple(indexTuple);
 		heap_close(pg_index, RowExclusiveLock);
@@ -3448,8 +3443,7 @@ reindex_index(Oid indexId, bool skip_constraint_checks, char persistence,
 			indexForm->indisvalid = true;
 			indexForm->indisready = true;
 			indexForm->indislive = true;
-			simple_heap_update(pg_index, &indexTuple->t_self, indexTuple);
-			CatalogUpdateIndexes(pg_index, indexTuple);
+			CatalogUpdateHeapAndIndexes(pg_index, &indexTuple->t_self, indexTuple);
 
 			/*
 			 * Invalidate the relcache for the table, so that after we commit
diff --git a/src/backend/catalog/indexing.c b/src/backend/catalog/indexing.c
index 1915ca3..bad9fb0 100644
--- a/src/backend/catalog/indexing.c
+++ b/src/backend/catalog/indexing.c
@@ -162,3 +162,31 @@ CatalogUpdateIndexes(Relation heapRel, HeapTuple heapTuple)
 	CatalogIndexInsert(indstate, heapTuple);
 	CatalogCloseIndexes(indstate);
 }
+
+/*
+ * A convenience routine which updates the heap tuple (identified by otid)
+ * with tup and also updates all indexes on the table.
+ */
+void
+CatalogUpdateHeapAndIndexes(Relation heapRel, ItemPointer otid, HeapTuple tup)
+{
+	simple_heap_update(heapRel, otid, tup);
+
+	/* Make sure only indexes whose columns are modified receive new entries */
+	CatalogUpdateIndexes(heapRel, tup);
+}
+
+/*
+ * A convenience routine which inserts a new heap tuple and also updates all
+ * indexes on the table.
+ *
+ * The OID of the inserted tuple is returned.
+ */
+Oid
+CatalogInsertHeapAndIndexes(Relation heapRel, HeapTuple tup)
+{
+	Oid oid;
+	oid = simple_heap_insert(heapRel, tup);
+	CatalogUpdateIndexes(heapRel, tup);
+	return oid;
+}
diff --git a/src/backend/catalog/pg_aggregate.c b/src/backend/catalog/pg_aggregate.c
index 3a4e22f..9cab585 100644
--- a/src/backend/catalog/pg_aggregate.c
+++ b/src/backend/catalog/pg_aggregate.c
@@ -674,9 +674,7 @@ AggregateCreate(const char *aggName,
 	tupDesc = aggdesc->rd_att;
 
 	tup = heap_form_tuple(tupDesc, values, nulls);
-	simple_heap_insert(aggdesc, tup);
-
-	CatalogUpdateIndexes(aggdesc, tup);
+	CatalogInsertHeapAndIndexes(aggdesc, tup);
 
 	heap_close(aggdesc, RowExclusiveLock);
 
diff --git a/src/backend/catalog/pg_collation.c b/src/backend/catalog/pg_collation.c
index 694c0f6..ebaf3fd 100644
--- a/src/backend/catalog/pg_collation.c
+++ b/src/backend/catalog/pg_collation.c
@@ -134,12 +134,9 @@ CollationCreate(const char *collname, Oid collnamespace,
 	tup = heap_form_tuple(tupDesc, values, nulls);
 
 	/* insert a new tuple */
-	oid = simple_heap_insert(rel, tup);
+	oid = CatalogInsertHeapAndIndexes(rel, tup);
 	Assert(OidIsValid(oid));
 
-	/* update the index if any */
-	CatalogUpdateIndexes(rel, tup);
-
 	/* set up dependencies for the new collation */
 	myself.classId = CollationRelationId;
 	myself.objectId = oid;
diff --git a/src/backend/catalog/pg_constraint.c b/src/backend/catalog/pg_constraint.c
index b5a0ce9..9509cac 100644
--- a/src/backend/catalog/pg_constraint.c
+++ b/src/backend/catalog/pg_constraint.c
@@ -226,10 +226,7 @@ CreateConstraintEntry(const char *constraintName,
 
 	tup = heap_form_tuple(RelationGetDescr(conDesc), values, nulls);
 
-	conOid = simple_heap_insert(conDesc, tup);
-
-	/* update catalog indexes */
-	CatalogUpdateIndexes(conDesc, tup);
+	conOid = CatalogInsertHeapAndIndexes(conDesc, tup);
 
 	conobject.classId = ConstraintRelationId;
 	conobject.objectId = conOid;
@@ -584,9 +581,7 @@ RemoveConstraintById(Oid conId)
 					 RelationGetRelationName(rel));
 			classForm->relchecks--;
 
-			simple_heap_update(pgrel, &relTup->t_self, relTup);
-
-			CatalogUpdateIndexes(pgrel, relTup);
+			CatalogUpdateHeapAndIndexes(pgrel, &relTup->t_self, relTup);
 
 			heap_freetuple(relTup);
 
@@ -666,10 +661,7 @@ RenameConstraintById(Oid conId, const char *newname)
 	/* OK, do the rename --- tuple is a copy, so OK to scribble on it */
 	namestrcpy(&(con->conname), newname);
 
-	simple_heap_update(conDesc, &tuple->t_self, tuple);
-
-	/* update the system catalog indexes */
-	CatalogUpdateIndexes(conDesc, tuple);
+	CatalogUpdateHeapAndIndexes(conDesc, &tuple->t_self, tuple);
 
 	InvokeObjectPostAlterHook(ConstraintRelationId, conId, 0);
 
@@ -736,8 +728,7 @@ AlterConstraintNamespaces(Oid ownerId, Oid oldNspId,
 
 			conform->connamespace = newNspId;
 
-			simple_heap_update(conRel, &tup->t_self, tup);
-			CatalogUpdateIndexes(conRel, tup);
+			CatalogUpdateHeapAndIndexes(conRel, &tup->t_self, tup);
 
 			/*
 			 * Note: currently, the constraint will not have its own
diff --git a/src/backend/catalog/pg_conversion.c b/src/backend/catalog/pg_conversion.c
index adaf7b8..a942e02 100644
--- a/src/backend/catalog/pg_conversion.c
+++ b/src/backend/catalog/pg_conversion.c
@@ -105,10 +105,7 @@ ConversionCreate(const char *conname, Oid connamespace,
 	tup = heap_form_tuple(tupDesc, values, nulls);
 
 	/* insert a new tuple */
-	simple_heap_insert(rel, tup);
-
-	/* update the index if any */
-	CatalogUpdateIndexes(rel, tup);
+	CatalogInsertHeapAndIndexes(rel, tup);
 
 	myself.classId = ConversionRelationId;
 	myself.objectId = HeapTupleGetOid(tup);
diff --git a/src/backend/catalog/pg_db_role_setting.c b/src/backend/catalog/pg_db_role_setting.c
index 117cc8d..c206b03 100644
--- a/src/backend/catalog/pg_db_role_setting.c
+++ b/src/backend/catalog/pg_db_role_setting.c
@@ -88,10 +88,7 @@ AlterSetting(Oid databaseid, Oid roleid, VariableSetStmt *setstmt)
 
 				newtuple = heap_modify_tuple(tuple, RelationGetDescr(rel),
 											 repl_val, repl_null, repl_repl);
-				simple_heap_update(rel, &tuple->t_self, newtuple);
-
-				/* Update indexes */
-				CatalogUpdateIndexes(rel, newtuple);
+				CatalogUpdateHeapAndIndexes(rel, &tuple->t_self, newtuple);
 			}
 			else
 				simple_heap_delete(rel, &tuple->t_self);
@@ -129,10 +126,7 @@ AlterSetting(Oid databaseid, Oid roleid, VariableSetStmt *setstmt)
 
 			newtuple = heap_modify_tuple(tuple, RelationGetDescr(rel),
 										 repl_val, repl_null, repl_repl);
-			simple_heap_update(rel, &tuple->t_self, newtuple);
-
-			/* Update indexes */
-			CatalogUpdateIndexes(rel, newtuple);
+			CatalogUpdateHeapAndIndexes(rel, &tuple->t_self, newtuple);
 		}
 		else
 			simple_heap_delete(rel, &tuple->t_self);
@@ -155,10 +149,7 @@ AlterSetting(Oid databaseid, Oid roleid, VariableSetStmt *setstmt)
 		values[Anum_pg_db_role_setting_setconfig - 1] = PointerGetDatum(a);
 		newtuple = heap_form_tuple(RelationGetDescr(rel), values, nulls);
 
-		simple_heap_insert(rel, newtuple);
-
-		/* Update indexes */
-		CatalogUpdateIndexes(rel, newtuple);
+		CatalogInsertHeapAndIndexes(rel, newtuple);
 	}
 
 	InvokeObjectPostAlterHookArg(DbRoleSettingRelationId,
diff --git a/src/backend/catalog/pg_depend.c b/src/backend/catalog/pg_depend.c
index b71fa1b..49f3bf3 100644
--- a/src/backend/catalog/pg_depend.c
+++ b/src/backend/catalog/pg_depend.c
@@ -362,8 +362,7 @@ changeDependencyFor(Oid classId, Oid objectId,
 
 				depform->refobjid = newRefObjectId;
 
-				simple_heap_update(depRel, &tup->t_self, tup);
-				CatalogUpdateIndexes(depRel, tup);
+				CatalogUpdateHeapAndIndexes(depRel, &tup->t_self, tup);
 
 				heap_freetuple(tup);
 			}
diff --git a/src/backend/catalog/pg_enum.c b/src/backend/catalog/pg_enum.c
index 089a9a0..16a4e80 100644
--- a/src/backend/catalog/pg_enum.c
+++ b/src/backend/catalog/pg_enum.c
@@ -125,8 +125,7 @@ EnumValuesCreate(Oid enumTypeOid, List *vals)
 		tup = heap_form_tuple(RelationGetDescr(pg_enum), values, nulls);
 		HeapTupleSetOid(tup, oids[elemno]);
 
-		simple_heap_insert(pg_enum, tup);
-		CatalogUpdateIndexes(pg_enum, tup);
+		CatalogInsertHeapAndIndexes(pg_enum, tup);
 		heap_freetuple(tup);
 
 		elemno++;
@@ -458,8 +457,7 @@ restart:
 	values[Anum_pg_enum_enumlabel - 1] = NameGetDatum(&enumlabel);
 	enum_tup = heap_form_tuple(RelationGetDescr(pg_enum), values, nulls);
 	HeapTupleSetOid(enum_tup, newOid);
-	simple_heap_insert(pg_enum, enum_tup);
-	CatalogUpdateIndexes(pg_enum, enum_tup);
+	CatalogInsertHeapAndIndexes(pg_enum, enum_tup);
 	heap_freetuple(enum_tup);
 
 	heap_close(pg_enum, RowExclusiveLock);
@@ -543,8 +541,7 @@ RenameEnumLabel(Oid enumTypeOid,
 
 	/* Update the pg_enum entry */
 	namestrcpy(&en->enumlabel, newVal);
-	simple_heap_update(pg_enum, &enum_tup->t_self, enum_tup);
-	CatalogUpdateIndexes(pg_enum, enum_tup);
+	CatalogUpdateHeapAndIndexes(pg_enum, &enum_tup->t_self, enum_tup);
 	heap_freetuple(enum_tup);
 
 	heap_close(pg_enum, RowExclusiveLock);
@@ -597,9 +594,7 @@ RenumberEnumType(Relation pg_enum, HeapTuple *existing, int nelems)
 		{
 			en->enumsortorder = newsortorder;
 
-			simple_heap_update(pg_enum, &newtup->t_self, newtup);
-
-			CatalogUpdateIndexes(pg_enum, newtup);
+			CatalogUpdateHeapAndIndexes(pg_enum, &newtup->t_self, newtup);
 		}
 
 		heap_freetuple(newtup);
diff --git a/src/backend/catalog/pg_largeobject.c b/src/backend/catalog/pg_largeobject.c
index 24edf6a..d59d4b7 100644
--- a/src/backend/catalog/pg_largeobject.c
+++ b/src/backend/catalog/pg_largeobject.c
@@ -63,11 +63,9 @@ LargeObjectCreate(Oid loid)
 	if (OidIsValid(loid))
 		HeapTupleSetOid(ntup, loid);
 
-	loid_new = simple_heap_insert(pg_lo_meta, ntup);
+	loid_new = CatalogInsertHeapAndIndexes(pg_lo_meta, ntup);
 	Assert(!OidIsValid(loid) || loid == loid_new);
 
-	CatalogUpdateIndexes(pg_lo_meta, ntup);
-
 	heap_freetuple(ntup);
 
 	heap_close(pg_lo_meta, RowExclusiveLock);
diff --git a/src/backend/catalog/pg_namespace.c b/src/backend/catalog/pg_namespace.c
index f048ad4..4c06873 100644
--- a/src/backend/catalog/pg_namespace.c
+++ b/src/backend/catalog/pg_namespace.c
@@ -76,11 +76,9 @@ NamespaceCreate(const char *nspName, Oid ownerId, bool isTemp)
 
 	tup = heap_form_tuple(tupDesc, values, nulls);
 
-	nspoid = simple_heap_insert(nspdesc, tup);
+	nspoid = CatalogInsertHeapAndIndexes(nspdesc, tup);
 	Assert(OidIsValid(nspoid));
 
-	CatalogUpdateIndexes(nspdesc, tup);
-
 	heap_close(nspdesc, RowExclusiveLock);
 
 	/* Record dependencies */
diff --git a/src/backend/catalog/pg_operator.c b/src/backend/catalog/pg_operator.c
index 556f9fe..d3f71ca 100644
--- a/src/backend/catalog/pg_operator.c
+++ b/src/backend/catalog/pg_operator.c
@@ -262,9 +262,7 @@ OperatorShellMake(const char *operatorName,
 	/*
 	 * insert our "shell" operator tuple
 	 */
-	operatorObjectId = simple_heap_insert(pg_operator_desc, tup);
-
-	CatalogUpdateIndexes(pg_operator_desc, tup);
+	operatorObjectId = CatalogInsertHeapAndIndexes(pg_operator_desc, tup);
 
 	/* Add dependencies for the entry */
 	makeOperatorDependencies(tup, false);
@@ -526,7 +524,7 @@ OperatorCreate(const char *operatorName,
 								nulls,
 								replaces);
 
-		simple_heap_update(pg_operator_desc, &tup->t_self, tup);
+		CatalogUpdateHeapAndIndexes(pg_operator_desc, &tup->t_self, tup);
 	}
 	else
 	{
@@ -535,12 +533,9 @@ OperatorCreate(const char *operatorName,
 		tup = heap_form_tuple(RelationGetDescr(pg_operator_desc),
 							  values, nulls);
 
-		operatorObjectId = simple_heap_insert(pg_operator_desc, tup);
+		operatorObjectId = CatalogInsertHeapAndIndexes(pg_operator_desc, tup);
 	}
 
-	/* Must update the indexes in either case */
-	CatalogUpdateIndexes(pg_operator_desc, tup);
-
 	/* Add dependencies for the entry */
 	address = makeOperatorDependencies(tup, isUpdate);
 
@@ -695,8 +690,7 @@ OperatorUpd(Oid baseId, Oid commId, Oid negId, bool isDelete)
 		/* If any columns were found to need modification, update tuple. */
 		if (update_commutator)
 		{
-			simple_heap_update(pg_operator_desc, &tup->t_self, tup);
-			CatalogUpdateIndexes(pg_operator_desc, tup);
+			CatalogUpdateHeapAndIndexes(pg_operator_desc, &tup->t_self, tup);
 
 			/*
 			 * Do CCI to make the updated tuple visible.  We must do this in
@@ -741,8 +735,7 @@ OperatorUpd(Oid baseId, Oid commId, Oid negId, bool isDelete)
 		/* If any columns were found to need modification, update tuple. */
 		if (update_negator)
 		{
-			simple_heap_update(pg_operator_desc, &tup->t_self, tup);
-			CatalogUpdateIndexes(pg_operator_desc, tup);
+			CatalogUpdateHeapAndIndexes(pg_operator_desc, &tup->t_self, tup);
 
 			/*
 			 * In the deletion case, do CCI to make the updated tuple visible.
diff --git a/src/backend/catalog/pg_proc.c b/src/backend/catalog/pg_proc.c
index 6ab849c..f35769e 100644
--- a/src/backend/catalog/pg_proc.c
+++ b/src/backend/catalog/pg_proc.c
@@ -572,7 +572,7 @@ ProcedureCreate(const char *procedureName,
 
 		/* Okay, do it... */
 		tup = heap_modify_tuple(oldtup, tupDesc, values, nulls, replaces);
-		simple_heap_update(rel, &tup->t_self, tup);
+		CatalogUpdateHeapAndIndexes(rel, &tup->t_self, tup);
 
 		ReleaseSysCache(oldtup);
 		is_update = true;
@@ -590,12 +590,9 @@
 			nulls[Anum_pg_proc_proacl - 1] = true;
 
 		tup = heap_form_tuple(tupDesc, values, nulls);
-		simple_heap_insert(rel, tup);
+		CatalogInsertHeapAndIndexes(rel, tup);
 		is_update = false;
 	}
 
-	/* Need to update indexes for either the insert or update case */
-	CatalogUpdateIndexes(rel, tup);
-
 	retval = HeapTupleGetOid(tup);
 
diff --git a/src/backend/catalog/pg_publication.c b/src/backend/catalog/pg_publication.c
index 00ed28f..2c7c3b5 100644
--- a/src/backend/catalog/pg_publication.c
+++ b/src/backend/catalog/pg_publication.c
@@ -149,8 +149,7 @@ publication_add_relation(Oid pubid, Relation targetrel,
 	tup = heap_form_tuple(RelationGetDescr(rel), values, nulls);
 
 	/* Insert tuple into catalog. */
-	prrelid = simple_heap_insert(rel, tup);
-	CatalogUpdateIndexes(rel, tup);
+	prrelid = CatalogInsertHeapAndIndexes(rel, tup);
 	heap_freetuple(tup);
 
 	ObjectAddressSet(myself, PublicationRelRelationId, prrelid);
diff --git a/src/backend/catalog/pg_range.c b/src/backend/catalog/pg_range.c
index d3a4c26..c21610d 100644
--- a/src/backend/catalog/pg_range.c
+++ b/src/backend/catalog/pg_range.c
@@ -58,8 +58,7 @@ RangeCreate(Oid rangeTypeOid, Oid rangeSubType, Oid rangeCollation,
 
 	tup = heap_form_tuple(RelationGetDescr(pg_range), values, nulls);
 
-	simple_heap_insert(pg_range, tup);
-	CatalogUpdateIndexes(pg_range, tup);
+	CatalogInsertHeapAndIndexes(pg_range, tup);
 	heap_freetuple(tup);
 
 	/* record type's dependencies on range-related items */
diff --git a/src/backend/catalog/pg_shdepend.c b/src/backend/catalog/pg_shdepend.c
index 60ed957..8d1ddab 100644
--- a/src/backend/catalog/pg_shdepend.c
+++ b/src/backend/catalog/pg_shdepend.c
@@ -260,10 +260,7 @@ shdepChangeDep(Relation sdepRel,
 		shForm->refclassid = refclassid;
 		shForm->refobjid = refobjid;
 
-		simple_heap_update(sdepRel, &oldtup->t_self, oldtup);
-
-		/* keep indexes current */
-		CatalogUpdateIndexes(sdepRel, oldtup);
+		CatalogUpdateHeapAndIndexes(sdepRel, &oldtup->t_self, oldtup);
 	}
 	else
 	{
@@ -287,10 +284,7 @@ shdepChangeDep(Relation sdepRel,
 		 * it's certainly a new tuple
 		 */
 		oldtup = heap_form_tuple(RelationGetDescr(sdepRel), values, nulls);
-		simple_heap_insert(sdepRel, oldtup);
-
-		/* keep indexes current */
-		CatalogUpdateIndexes(sdepRel, oldtup);
+		CatalogInsertHeapAndIndexes(sdepRel, oldtup);
 	}
 
 	if (oldtup)
@@ -759,10 +753,7 @@ copyTemplateDependencies(Oid templateDbId, Oid newDbId)
 		HeapTuple	newtup;
 
 		newtup = heap_modify_tuple(tup, sdepDesc, values, nulls, replace);
-		simple_heap_insert(sdepRel, newtup);
-
-		/* Keep indexes current */
-		CatalogIndexInsert(indstate, newtup);
+		CatalogInsertHeapAndIndexes(sdepRel, newtup);
 
 		heap_freetuple(newtup);
 	}
@@ -882,10 +873,7 @@ shdepAddDependency(Relation sdepRel,
 
 	tup = heap_form_tuple(sdepRel->rd_att, values, nulls);
 
-	simple_heap_insert(sdepRel, tup);
-
-	/* keep indexes current */
-	CatalogUpdateIndexes(sdepRel, tup);
+	CatalogInsertHeapAndIndexes(sdepRel, tup);
 
 	/* clean up */
 	heap_freetuple(tup);
diff --git a/src/backend/catalog/pg_type.c b/src/backend/catalog/pg_type.c
index 6d9a324..8dfd5f0 100644
--- a/src/backend/catalog/pg_type.c
+++ b/src/backend/catalog/pg_type.c
@@ -142,9 +142,7 @@ TypeShellMake(const char *typeName, Oid typeNamespace, Oid ownerId)
 	/*
 	 * insert the tuple in the relation and get the tuple's oid.
 	 */
-	typoid = simple_heap_insert(pg_type_desc, tup);
-
-	CatalogUpdateIndexes(pg_type_desc, tup);
+	typoid = CatalogInsertHeapAndIndexes(pg_type_desc, tup);
 
 	/*
 	 * Create dependencies.  We can/must skip this in bootstrap mode.
@@ -430,7 +428,7 @@ TypeCreate(Oid newTypeOid,
 								nulls,
 								replaces);
 
-		simple_heap_update(pg_type_desc, &tup->t_self, tup);
+		CatalogUpdateHeapAndIndexes(pg_type_desc, &tup->t_self, tup);
 
 		typeObjectId = HeapTupleGetOid(tup);
 
@@ -458,12 +456,9 @@ TypeCreate(Oid newTypeOid,
 		}
 		/* else allow system to assign oid */
 
-		typeObjectId = simple_heap_insert(pg_type_desc, tup);
+		typeObjectId = CatalogInsertHeapAndIndexes(pg_type_desc, tup);
 	}
 
-	/* Update indexes */
-	CatalogUpdateIndexes(pg_type_desc, tup);
-
 	/*
 	 * Create dependencies.  We can/must skip this in bootstrap mode.
 	 */
@@ -724,10 +719,7 @@ RenameTypeInternal(Oid typeOid, const char *newTypeName, Oid typeNamespace)
 	/* OK, do the rename --- tuple is a copy, so OK to scribble on it */
 	namestrcpy(&(typ->typname), newTypeName);
 
-	simple_heap_update(pg_type_desc, &tuple->t_self, tuple);
-
-	/* update the system catalog indexes */
-	CatalogUpdateIndexes(pg_type_desc, tuple);
+	CatalogUpdateHeapAndIndexes(pg_type_desc, &tuple->t_self, tuple);
 
 	InvokeObjectPostAlterHook(TypeRelationId, typeOid, 0);
 
diff --git a/src/backend/catalog/toasting.c b/src/backend/catalog/toasting.c
index ee4a182..cae1228 100644
--- a/src/backend/catalog/toasting.c
+++ b/src/backend/catalog/toasting.c
@@ -350,10 +350,7 @@ create_toast_table(Relation rel, Oid toastOid, Oid toastIndexOid,
 	if (!IsBootstrapProcessingMode())
 	{
 		/* normal case, use a transactional update */
-		simple_heap_update(class_rel, &reltup->t_self, reltup);
-
-		/* Keep catalog indexes current */
-		CatalogUpdateIndexes(class_rel, reltup);
+		CatalogUpdateHeapAndIndexes(class_rel, &reltup->t_self, reltup);
 	}
 	else
 	{
diff --git a/src/backend/commands/alter.c b/src/backend/commands/alter.c
index 768fcc8..d8d4bec 100644
--- a/src/backend/commands/alter.c
+++ b/src/backend/commands/alter.c
@@ -284,8 +284,7 @@ AlterObjectRename_internal(Relation rel, Oid objectId, const char *new_name)
 							   values, nulls, replaces);
 
 	/* Perform actual update */
-	simple_heap_update(rel, &oldtup->t_self, newtup);
-	CatalogUpdateIndexes(rel, newtup);
+	CatalogUpdateHeapAndIndexes(rel, &oldtup->t_self, newtup);
 
 	InvokeObjectPostAlterHook(classId, objectId, 0);
 
@@ -722,8 +721,7 @@ AlterObjectNamespace_internal(Relation rel, Oid objid, Oid nspOid)
 							   values, nulls, replaces);
 
 	/* Perform actual update */
-	simple_heap_update(rel, &tup->t_self, newtup);
-	CatalogUpdateIndexes(rel, newtup);
+	CatalogUpdateHeapAndIndexes(rel, &tup->t_self, newtup);
 
 	/* Release memory */
 	pfree(values);
@@ -954,8 +952,7 @@ AlterObjectOwner_internal(Relation rel, Oid objectId, Oid new_ownerId)
 								   values, nulls, replaces);
 
 		/* Perform actual update */
-		simple_heap_update(rel, &newtup->t_self, newtup);
-		CatalogUpdateIndexes(rel, newtup);
+		CatalogUpdateHeapAndIndexes(rel, &newtup->t_self, newtup);
 
 		/* Update owner dependency reference */
 		if (classId == LargeObjectMetadataRelationId)
diff --git a/src/backend/commands/amcmds.c b/src/backend/commands/amcmds.c
index 29061b8..33e207c 100644
--- a/src/backend/commands/amcmds.c
+++ b/src/backend/commands/amcmds.c
@@ -87,8 +87,7 @@ CreateAccessMethod(CreateAmStmt *stmt)
 
 	tup = heap_form_tuple(RelationGetDescr(rel), values, nulls);
 
-	amoid = simple_heap_insert(rel, tup);
-	CatalogUpdateIndexes(rel, tup);
+	amoid = CatalogInsertHeapAndIndexes(rel, tup);
 	heap_freetuple(tup);
 
 	myself.classId = AccessMethodRelationId;
diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index c9f6afe..648520e 100644
--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -1589,18 +1589,15 @@ update_attstats(Oid relid, bool inh, int natts, VacAttrStats **vacattrstats)
 									 nulls,
 									 replaces);
 			ReleaseSysCache(oldtup);
-			simple_heap_update(sd, &stup->t_self, stup);
+			CatalogUpdateHeapAndIndexes(sd, &stup->t_self, stup);
 		}
 		else
 		{
 			/* No, insert new tuple */
 			stup = heap_form_tuple(RelationGetDescr(sd), values, nulls);
-			simple_heap_insert(sd, stup);
+			CatalogInsertHeapAndIndexes(sd, stup);
 		}
 
-		/* update indexes too */
-		CatalogUpdateIndexes(sd, stup);
-
 		heap_freetuple(stup);
 	}
 
diff --git a/src/backend/commands/cluster.c b/src/backend/commands/cluster.c
index f9309fc..8060758 100644
--- a/src/backend/commands/cluster.c
+++ b/src/backend/commands/cluster.c
@@ -523,8 +523,7 @@ mark_index_clustered(Relation rel, Oid indexOid, bool is_internal)
 		if (indexForm->indisclustered)
 		{
 			indexForm->indisclustered = false;
-			simple_heap_update(pg_index, &indexTuple->t_self, indexTuple);
-			CatalogUpdateIndexes(pg_index, indexTuple);
+			CatalogUpdateHeapAndIndexes(pg_index, &indexTuple->t_self, indexTuple);
 		}
 		else if (thisIndexOid == indexOid)
 		{
@@ -532,8 +531,7 @@ mark_index_clustered(Relation rel, Oid indexOid, bool is_internal)
 			if (!IndexIsValid(indexForm))
 				elog(ERROR, "cannot cluster on invalid index %u", indexOid);
 			indexForm->indisclustered = true;
-			simple_heap_update(pg_index, &indexTuple->t_self, indexTuple);
-			CatalogUpdateIndexes(pg_index, indexTuple);
+			CatalogUpdateHeapAndIndexes(pg_index, &indexTuple->t_self, indexTuple);
 		}
 
 		InvokeObjectPostAlterHookArg(IndexRelationId, thisIndexOid, 0,
@@ -1558,8 +1556,7 @@ finish_heap_swap(Oid OIDOldHeap, Oid OIDNewHeap,
 		relform->relfrozenxid = frozenXid;
 		relform->relminmxid = cutoffMulti;
 
-		simple_heap_update(relRelation, &reltup->t_self, reltup);
-		CatalogUpdateIndexes(relRelation, reltup);
+		CatalogUpdateHeapAndIndexes(relRelation, &reltup->t_self, reltup);
 
 		heap_close(relRelation, RowExclusiveLock);
 	}
diff --git a/src/backend/commands/comment.c b/src/backend/commands/comment.c
index ada0b03..c250385 100644
--- a/src/backend/commands/comment.c
+++ b/src/backend/commands/comment.c
@@ -199,7 +199,7 @@ CreateComments(Oid oid, Oid classoid, int32 subid, char *comment)
 		{
 			newtuple = heap_modify_tuple(oldtuple, RelationGetDescr(description), values,
 										 nulls, replaces);
-			simple_heap_update(description, &oldtuple->t_self, newtuple);
+			CatalogUpdateHeapAndIndexes(description, &oldtuple->t_self, newtuple);
 		}
 
 		break;					/* Assume there can be only one match */
@@ -213,15 +213,11 @@ CreateComments(Oid oid, Oid classoid, int32 subid, char *comment)
 	{
 		newtuple = heap_form_tuple(RelationGetDescr(description),
 								   values, nulls);
-		simple_heap_insert(description, newtuple);
+		CatalogInsertHeapAndIndexes(description, newtuple);
 	}
 
-	/* Update indexes, if necessary */
 	if (newtuple != NULL)
-	{
-		CatalogUpdateIndexes(description, newtuple);
 		heap_freetuple(newtuple);
-	}
 
 	/* Done */
 
@@ -293,7 +289,7 @@ CreateSharedComments(Oid oid, Oid classoid, char *comment)
 		{
 			newtuple = heap_modify_tuple(oldtuple, RelationGetDescr(shdescription),
 										 values, nulls, replaces);
-			simple_heap_update(shdescription, &oldtuple->t_self, newtuple);
+			CatalogUpdateHeapAndIndexes(shdescription, &oldtuple->t_self, newtuple);
 		}
 
 		break;					/* Assume there can be only one match */
@@ -307,15 +303,11 @@ CreateSharedComments(Oid oid, Oid classoid, char *comment)
 	{
 		newtuple = heap_form_tuple(RelationGetDescr(shdescription),
 								   values, nulls);
-		simple_heap_insert(shdescription, newtuple);
+		CatalogInsertHeapAndIndexes(shdescription, newtuple);
 	}
 
-	/* Update indexes, if necessary */
 	if (newtuple != NULL)
-	{
-		CatalogUpdateIndexes(shdescription, newtuple);
 		heap_freetuple(newtuple);
-	}
 
 	/* Done */
 
diff --git a/src/backend/commands/dbcommands.c b/src/backend/commands/dbcommands.c
index 6ad8fd7..b6ef57d 100644
--- a/src/backend/commands/dbcommands.c
+++ b/src/backend/commands/dbcommands.c
@@ -546,10 +546,7 @@ createdb(ParseState *pstate, const CreatedbStmt *stmt)
 
 	HeapTupleSetOid(tuple, dboid);
 
-	simple_heap_insert(pg_database_rel, tuple);
-
-	/* Update indexes */
-	CatalogUpdateIndexes(pg_database_rel, tuple);
+	CatalogInsertHeapAndIndexes(pg_database_rel, tuple);
 
 	/*
 	 * Now generate additional catalog entries associated with the new DB
@@ -1040,8 +1037,7 @@ RenameDatabase(const char *oldname, const char *newname)
 	if (!HeapTupleIsValid(newtup))
 		elog(ERROR, "cache lookup failed for database %u", db_id);
 	namestrcpy(&(((Form_pg_database) GETSTRUCT(newtup))->datname), newname);
-	simple_heap_update(rel, &newtup->t_self, newtup);
-	CatalogUpdateIndexes(rel, newtup);
+	CatalogUpdateHeapAndIndexes(rel, &newtup->t_self, newtup);
 
 	InvokeObjectPostAlterHook(DatabaseRelationId, db_id, 0);
 
@@ -1296,10 +1292,7 @@ movedb(const char *dbname, const char *tblspcname)
 		newtuple = heap_modify_tuple(oldtuple, RelationGetDescr(pgdbrel),
 									 new_record,
 									 new_record_nulls, new_record_repl);
-		simple_heap_update(pgdbrel, &oldtuple->t_self, newtuple);
-
-		/* Update indexes */
-		CatalogUpdateIndexes(pgdbrel, newtuple);
+		CatalogUpdateHeapAndIndexes(pgdbrel, &oldtuple->t_self, newtuple);
 
 		InvokeObjectPostAlterHook(DatabaseRelationId,
 								  HeapTupleGetOid(newtuple), 0);
@@ -1554,10 +1547,7 @@ AlterDatabase(ParseState *pstate, AlterDatabaseStmt *stmt, bool isTopLevel)
 
 	newtuple = heap_modify_tuple(tuple, RelationGetDescr(rel), new_record,
 								 new_record_nulls, new_record_repl);
-	simple_heap_update(rel, &tuple->t_self, newtuple);
-
-	/* Update indexes */
-	CatalogUpdateIndexes(rel, newtuple);
+	CatalogUpdateHeapAndIndexes(rel, &tuple->t_self, newtuple);
 
 	InvokeObjectPostAlterHook(DatabaseRelationId,
 							  HeapTupleGetOid(newtuple), 0);
@@ -1692,8 +1682,7 @@ AlterDatabaseOwner(const char *dbname, Oid newOwnerId)
 		}
 
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(rel), repl_val, repl_null, repl_repl);
-		simple_heap_update(rel, &newtuple->t_self, newtuple);
-		CatalogUpdateIndexes(rel, newtuple);
+		CatalogUpdateHeapAndIndexes(rel, &newtuple->t_self, newtuple);
 
 		heap_freetuple(newtuple);
 
diff --git a/src/backend/commands/event_trigger.c b/src/backend/commands/event_trigger.c
index 8125537..a5460a3 100644
--- a/src/backend/commands/event_trigger.c
+++ b/src/backend/commands/event_trigger.c
@@ -405,8 +405,7 @@ insert_event_trigger_tuple(char *trigname, char *eventname, Oid evtOwner,
 
 	/* Insert heap tuple. */
 	tuple = heap_form_tuple(tgrel->rd_att, values, nulls);
-	trigoid = simple_heap_insert(tgrel, tuple);
-	CatalogUpdateIndexes(tgrel, tuple);
+	trigoid = CatalogInsertHeapAndIndexes(tgrel, tuple);
 	heap_freetuple(tuple);
 
 	/* Depend on owner. */
@@ -524,8 +523,7 @@ AlterEventTrigger(AlterEventTrigStmt *stmt)
 	evtForm = (Form_pg_event_trigger) GETSTRUCT(tup);
 	evtForm->evtenabled = tgenabled;
 
-	simple_heap_update(tgrel, &tup->t_self, tup);
-	CatalogUpdateIndexes(tgrel, tup);
+	CatalogUpdateHeapAndIndexes(tgrel, &tup->t_self, tup);
 
 	InvokeObjectPostAlterHook(EventTriggerRelationId,
 							  trigoid, 0);
@@ -621,8 +619,7 @@ AlterEventTriggerOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			 errhint("The owner of an event trigger must be a superuser.")));
 
 	form->evtowner = newOwnerId;
-	simple_heap_update(rel, &tup->t_self, tup);
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateHeapAndIndexes(rel, &tup->t_self, tup);
 
 	/* Update owner dependency reference */
 	changeDependencyOnOwner(EventTriggerRelationId,
diff --git a/src/backend/commands/extension.c b/src/backend/commands/extension.c
index f23c697..425d14b 100644
--- a/src/backend/commands/extension.c
+++ b/src/backend/commands/extension.c
@@ -1773,8 +1773,7 @@ InsertExtensionTuple(const char *extName, Oid extOwner,
 
 	tuple = heap_form_tuple(rel->rd_att, values, nulls);
 
-	extensionOid = simple_heap_insert(rel, tuple);
-	CatalogUpdateIndexes(rel, tuple);
+	extensionOid = CatalogInsertHeapAndIndexes(rel, tuple);
 
 	heap_freetuple(tuple);
 	heap_close(rel, RowExclusiveLock);
@@ -2485,8 +2484,7 @@ pg_extension_config_dump(PG_FUNCTION_ARGS)
 	extTup = heap_modify_tuple(extTup, RelationGetDescr(extRel),
 							   repl_val, repl_null, repl_repl);
 
-	simple_heap_update(extRel, &extTup->t_self, extTup);
-	CatalogUpdateIndexes(extRel, extTup);
+	CatalogUpdateHeapAndIndexes(extRel, &extTup->t_self, extTup);
 
 	systable_endscan(extScan);
 
@@ -2663,8 +2661,7 @@ extension_config_remove(Oid extensionoid, Oid tableoid)
 	extTup = heap_modify_tuple(extTup, RelationGetDescr(extRel),
 							   repl_val, repl_null, repl_repl);
 
-	simple_heap_update(extRel, &extTup->t_self, extTup);
-	CatalogUpdateIndexes(extRel, extTup);
+	CatalogUpdateHeapAndIndexes(extRel, &extTup->t_self, extTup);
 
 	systable_endscan(extScan);
 
@@ -2844,8 +2841,7 @@ AlterExtensionNamespace(List *names, const char *newschema, Oid *oldschema)
 	/* Now adjust pg_extension.extnamespace */
 	extForm->extnamespace = nspOid;
 
-	simple_heap_update(extRel, &extTup->t_self, extTup);
-	CatalogUpdateIndexes(extRel, extTup);
+	CatalogUpdateHeapAndIndexes(extRel, &extTup->t_self, extTup);
 
 	heap_close(extRel, RowExclusiveLock);
 
@@ -3091,8 +3087,7 @@ ApplyExtensionUpdates(Oid extensionOid,
 		extTup = heap_modify_tuple(extTup, RelationGetDescr(extRel),
 								   values, nulls, repl);
 
-		simple_heap_update(extRel, &extTup->t_self, extTup);
-		CatalogUpdateIndexes(extRel, extTup);
+		CatalogUpdateHeapAndIndexes(extRel, &extTup->t_self, extTup);
 
 		systable_endscan(extScan);
 
diff --git a/src/backend/commands/foreigncmds.c b/src/backend/commands/foreigncmds.c
index 6ff8b69..a67dc52 100644
--- a/src/backend/commands/foreigncmds.c
+++ b/src/backend/commands/foreigncmds.c
@@ -256,8 +256,7 @@ AlterForeignDataWrapperOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerI
 		tup = heap_modify_tuple(tup, RelationGetDescr(rel), repl_val, repl_null,
 								repl_repl);
 
-		simple_heap_update(rel, &tup->t_self, tup);
-		CatalogUpdateIndexes(rel, tup);
+		CatalogUpdateHeapAndIndexes(rel, &tup->t_self, tup);
 
 		/* Update owner dependency reference */
 		changeDependencyOnOwner(ForeignDataWrapperRelationId,
@@ -397,8 +396,7 @@ AlterForeignServerOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 		tup = heap_modify_tuple(tup, RelationGetDescr(rel), repl_val, repl_null,
 								repl_repl);
 
-		simple_heap_update(rel, &tup->t_self, tup);
-		CatalogUpdateIndexes(rel, tup);
+		CatalogUpdateHeapAndIndexes(rel, &tup->t_self, tup);
 
 		/* Update owner dependency reference */
 		changeDependencyOnOwner(ForeignServerRelationId, HeapTupleGetOid(tup),
@@ -629,8 +627,7 @@ CreateForeignDataWrapper(CreateFdwStmt *stmt)
 
 	tuple = heap_form_tuple(rel->rd_att, values, nulls);
 
-	fdwId = simple_heap_insert(rel, tuple);
-	CatalogUpdateIndexes(rel, tuple);
+	fdwId = CatalogInsertHeapAndIndexes(rel, tuple);
 
 	heap_freetuple(tuple);
 
@@ -786,8 +783,7 @@ AlterForeignDataWrapper(AlterFdwStmt *stmt)
 	tp = heap_modify_tuple(tp, RelationGetDescr(rel),
 						   repl_val, repl_null, repl_repl);
 
-	simple_heap_update(rel, &tp->t_self, tp);
-	CatalogUpdateIndexes(rel, tp);
+	CatalogUpdateHeapAndIndexes(rel, &tp->t_self, tp);
 
 	heap_freetuple(tp);
 
@@ -941,9 +937,7 @@ CreateForeignServer(CreateForeignServerStmt *stmt)
 
 	tuple = heap_form_tuple(rel->rd_att, values, nulls);
 
-	srvId = simple_heap_insert(rel, tuple);
-
-	CatalogUpdateIndexes(rel, tuple);
+	srvId = CatalogInsertHeapAndIndexes(rel, tuple);
 
 	heap_freetuple(tuple);
 
@@ -1056,8 +1050,7 @@ AlterForeignServer(AlterForeignServerStmt *stmt)
 	tp = heap_modify_tuple(tp, RelationGetDescr(rel),
 						   repl_val, repl_null, repl_repl);
 
-	simple_heap_update(rel, &tp->t_self, tp);
-	CatalogUpdateIndexes(rel, tp);
+	CatalogUpdateHeapAndIndexes(rel, &tp->t_self, tp);
 
 	InvokeObjectPostAlterHook(ForeignServerRelationId, srvId, 0);
 
@@ -1190,9 +1183,7 @@ CreateUserMapping(CreateUserMappingStmt *stmt)
 
 	tuple = heap_form_tuple(rel->rd_att, values, nulls);
 
-	umId = simple_heap_insert(rel, tuple);
-
-	CatalogUpdateIndexes(rel, tuple);
+	umId = CatalogInsertHeapAndIndexes(rel, tuple);
 
 	heap_freetuple(tuple);
 
@@ -1307,8 +1298,7 @@ AlterUserMapping(AlterUserMappingStmt *stmt)
 	tp = heap_modify_tuple(tp, RelationGetDescr(rel),
 						   repl_val, repl_null, repl_repl);
 
-	simple_heap_update(rel, &tp->t_self, tp);
-	CatalogUpdateIndexes(rel, tp);
+	CatalogUpdateHeapAndIndexes(rel, &tp->t_self, tp);
 
 	ObjectAddressSet(address, UserMappingRelationId, umId);
 
@@ -1484,8 +1474,7 @@ CreateForeignTable(CreateForeignTableStmt *stmt, Oid relid)
 
 	tuple = heap_form_tuple(ftrel->rd_att, values, nulls);
 
-	simple_heap_insert(ftrel, tuple);
-	CatalogUpdateIndexes(ftrel, tuple);
+	CatalogInsertHeapAndIndexes(ftrel, tuple);
 
 	heap_freetuple(tuple);
 
diff --git a/src/backend/commands/functioncmds.c b/src/backend/commands/functioncmds.c
index ec833c3..c58dc26 100644
--- a/src/backend/commands/functioncmds.c
+++ b/src/backend/commands/functioncmds.c
@@ -1292,8 +1292,7 @@ AlterFunction(ParseState *pstate, AlterFunctionStmt *stmt)
 		procForm->proparallel = interpret_func_parallel(parallel_item);
 
 	/* Do the update */
-	simple_heap_update(rel, &tup->t_self, tup);
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateHeapAndIndexes(rel, &tup->t_self, tup);
 
 	InvokeObjectPostAlterHook(ProcedureRelationId, funcOid, 0);
 
@@ -1333,9 +1332,7 @@ SetFunctionReturnType(Oid funcOid, Oid newRetType)
 	procForm->prorettype = newRetType;
 
 	/* update the catalog and its indexes */
-	simple_heap_update(pg_proc_rel, &tup->t_self, tup);
-
-	CatalogUpdateIndexes(pg_proc_rel, tup);
+	CatalogUpdateHeapAndIndexes(pg_proc_rel, &tup->t_self, tup);
 
 	heap_close(pg_proc_rel, RowExclusiveLock);
 }
@@ -1368,9 +1365,7 @@ SetFunctionArgType(Oid funcOid, int argIndex, Oid newArgType)
 	procForm->proargtypes.values[argIndex] = newArgType;
 
 	/* update the catalog and its indexes */
-	simple_heap_update(pg_proc_rel, &tup->t_self, tup);
-
-	CatalogUpdateIndexes(pg_proc_rel, tup);
+	CatalogUpdateHeapAndIndexes(pg_proc_rel, &tup->t_self, tup);
 
 	heap_close(pg_proc_rel, RowExclusiveLock);
 }
@@ -1656,9 +1651,7 @@ CreateCast(CreateCastStmt *stmt)
 
 	tuple = heap_form_tuple(RelationGetDescr(relation), values, nulls);
 
-	castid = simple_heap_insert(relation, tuple);
-
-	CatalogUpdateIndexes(relation, tuple);
+	castid = CatalogInsertHeapAndIndexes(relation, tuple);
 
 	/* make dependency entries */
 	myself.classId = CastRelationId;
@@ -1921,7 +1914,7 @@ CreateTransform(CreateTransformStmt *stmt)
 		replaces[Anum_pg_transform_trftosql - 1] = true;
 
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(relation), values, nulls, replaces);
-		simple_heap_update(relation, &newtuple->t_self, newtuple);
+		CatalogUpdateHeapAndIndexes(relation, &newtuple->t_self, newtuple);
 
 		transformid = HeapTupleGetOid(tuple);
 		ReleaseSysCache(tuple);
@@ -1930,12 +1923,10 @@ CreateTransform(CreateTransformStmt *stmt)
 	else
 	{
 		newtuple = heap_form_tuple(RelationGetDescr(relation), values, nulls);
-		transformid = simple_heap_insert(relation, newtuple);
+		transformid = CatalogInsertHeapAndIndexes(relation, newtuple);
 		is_replace = false;
 	}
 
-	CatalogUpdateIndexes(relation, newtuple);
-
 	if (is_replace)
 		deleteDependencyRecordsFor(TransformRelationId, transformid, true);
 
diff --git a/src/backend/commands/matview.c b/src/backend/commands/matview.c
index b7daf1c..53661a3 100644
--- a/src/backend/commands/matview.c
+++ b/src/backend/commands/matview.c
@@ -100,9 +100,7 @@ SetMatViewPopulatedState(Relation relation, bool newstate)
 
 	((Form_pg_class) GETSTRUCT(tuple))->relispopulated = newstate;
 
-	simple_heap_update(pgrel, &tuple->t_self, tuple);
-
-	CatalogUpdateIndexes(pgrel, tuple);
+	CatalogUpdateHeapAndIndexes(pgrel, &tuple->t_self, tuple);
 
 	heap_freetuple(tuple);
 	heap_close(pgrel, RowExclusiveLock);
diff --git a/src/backend/commands/opclasscmds.c b/src/backend/commands/opclasscmds.c
index bc43483..adb4a7d 100644
--- a/src/backend/commands/opclasscmds.c
+++ b/src/backend/commands/opclasscmds.c
@@ -278,9 +278,7 @@ CreateOpFamily(char *amname, char *opfname, Oid namespaceoid, Oid amoid)
 
 	tup = heap_form_tuple(rel->rd_att, values, nulls);
 
-	opfamilyoid = simple_heap_insert(rel, tup);
-
-	CatalogUpdateIndexes(rel, tup);
+	opfamilyoid = CatalogInsertHeapAndIndexes(rel, tup);
 
 	heap_freetuple(tup);
 
@@ -654,9 +652,7 @@ DefineOpClass(CreateOpClassStmt *stmt)
 
 	tup = heap_form_tuple(rel->rd_att, values, nulls);
 
-	opclassoid = simple_heap_insert(rel, tup);
-
-	CatalogUpdateIndexes(rel, tup);
+	opclassoid = CatalogInsertHeapAndIndexes(rel, tup);
 
 	heap_freetuple(tup);
 
@@ -1327,9 +1323,7 @@ storeOperators(List *opfamilyname, Oid amoid,
 
 		tup = heap_form_tuple(rel->rd_att, values, nulls);
 
-		entryoid = simple_heap_insert(rel, tup);
-
-		CatalogUpdateIndexes(rel, tup);
+		entryoid = CatalogInsertHeapAndIndexes(rel, tup);
 
 		heap_freetuple(tup);
 
@@ -1438,9 +1432,7 @@ storeProcedures(List *opfamilyname, Oid amoid,
 
 		tup = heap_form_tuple(rel->rd_att, values, nulls);
 
-		entryoid = simple_heap_insert(rel, tup);
-
-		CatalogUpdateIndexes(rel, tup);
+		entryoid = CatalogInsertHeapAndIndexes(rel, tup);
 
 		heap_freetuple(tup);
 
diff --git a/src/backend/commands/operatorcmds.c b/src/backend/commands/operatorcmds.c
index a273376..eb6b308 100644
--- a/src/backend/commands/operatorcmds.c
+++ b/src/backend/commands/operatorcmds.c
@@ -518,8 +518,7 @@ AlterOperator(AlterOperatorStmt *stmt)
 	tup = heap_modify_tuple(tup, RelationGetDescr(catalog),
 							values, nulls, replaces);
 
-	simple_heap_update(catalog, &tup->t_self, tup);
-	CatalogUpdateIndexes(catalog, tup);
+	CatalogUpdateHeapAndIndexes(catalog, &tup->t_self, tup);
 
 	address = makeOperatorDependencies(tup, true);
 
diff --git a/src/backend/commands/policy.c b/src/backend/commands/policy.c
index 5d9d3a6..d1513f7 100644
--- a/src/backend/commands/policy.c
+++ b/src/backend/commands/policy.c
@@ -614,10 +614,7 @@ RemoveRoleFromObjectPolicy(Oid roleid, Oid classid, Oid policy_id)
 		new_tuple = heap_modify_tuple(tuple,
 									  RelationGetDescr(pg_policy_rel),
 									  values, isnull, replaces);
-		simple_heap_update(pg_policy_rel, &new_tuple->t_self, new_tuple);
-
-		/* Update Catalog Indexes */
-		CatalogUpdateIndexes(pg_policy_rel, new_tuple);
+		CatalogUpdateHeapAndIndexes(pg_policy_rel, &new_tuple->t_self, new_tuple);
 
 		/* Remove all old dependencies. */
 		deleteDependencyRecordsFor(PolicyRelationId, policy_id, false);
@@ -823,10 +820,7 @@ CreatePolicy(CreatePolicyStmt *stmt)
 	policy_tuple = heap_form_tuple(RelationGetDescr(pg_policy_rel), values,
 								   isnull);
 
-	policy_id = simple_heap_insert(pg_policy_rel, policy_tuple);
-
-	/* Update Indexes */
-	CatalogUpdateIndexes(pg_policy_rel, policy_tuple);
+	policy_id = CatalogInsertHeapAndIndexes(pg_policy_rel, policy_tuple);
 
 	/* Record Dependencies */
 	target.classId = RelationRelationId;
@@ -1150,10 +1144,7 @@ AlterPolicy(AlterPolicyStmt *stmt)
 	new_tuple = heap_modify_tuple(policy_tuple,
 								  RelationGetDescr(pg_policy_rel),
 								  values, isnull, replaces);
-	simple_heap_update(pg_policy_rel, &new_tuple->t_self, new_tuple);
-
-	/* Update Catalog Indexes */
-	CatalogUpdateIndexes(pg_policy_rel, new_tuple);
+	CatalogUpdateHeapAndIndexes(pg_policy_rel, &new_tuple->t_self, new_tuple);
 
 	/* Update Dependencies. */
 	deleteDependencyRecordsFor(PolicyRelationId, policy_id, false);
@@ -1287,10 +1278,7 @@ rename_policy(RenameStmt *stmt)
 	namestrcpy(&((Form_pg_policy) GETSTRUCT(policy_tuple))->polname,
 			   stmt->newname);
 
-	simple_heap_update(pg_policy_rel, &policy_tuple->t_self, policy_tuple);
-
-	/* keep system catalog indexes current */
-	CatalogUpdateIndexes(pg_policy_rel, policy_tuple);
+	CatalogUpdateHeapAndIndexes(pg_policy_rel, &policy_tuple->t_self, policy_tuple);
 
 	InvokeObjectPostAlterHook(PolicyRelationId,
 							  HeapTupleGetOid(policy_tuple), 0);
diff --git a/src/backend/commands/proclang.c b/src/backend/commands/proclang.c
index b684f41..f7fa548 100644
--- a/src/backend/commands/proclang.c
+++ b/src/backend/commands/proclang.c
@@ -378,7 +378,7 @@ create_proc_lang(const char *languageName, bool replace,
 
 		/* Okay, do it... */
 		tup = heap_modify_tuple(oldtup, tupDesc, values, nulls, replaces);
-		simple_heap_update(rel, &tup->t_self, tup);
+		CatalogUpdateHeapAndIndexes(rel, &tup->t_self, tup);
 
 		ReleaseSysCache(oldtup);
 		is_update = true;
@@ -387,13 +387,10 @@ create_proc_lang(const char *languageName, bool replace,
 	{
 		/* Creating a new language */
 		tup = heap_form_tuple(tupDesc, values, nulls);
-		simple_heap_insert(rel, tup);
+		CatalogInsertHeapAndIndexes(rel, tup);
 		is_update = false;
 	}
 
-	/* Need to update indexes for either the insert or update case */
-	CatalogUpdateIndexes(rel, tup);
-
 	/*
 	 * Create dependencies for the new language.  If we are updating an
 	 * existing language, first delete any existing pg_depend entries.
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 173b076..57543e4 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -215,8 +215,7 @@ CreatePublication(CreatePublicationStmt *stmt)
 	tup = heap_form_tuple(RelationGetDescr(rel), values, nulls);
 
 	/* Insert tuple into catalog. */
-	puboid = simple_heap_insert(rel, tup);
-	CatalogUpdateIndexes(rel, tup);
+	puboid = CatalogInsertHeapAndIndexes(rel, tup);
 	heap_freetuple(tup);
 
 	recordDependencyOnOwner(PublicationRelationId, puboid, GetUserId());
@@ -295,8 +294,7 @@ AlterPublicationOptions(AlterPublicationStmt *stmt, Relation rel,
 							replaces);
 
 	/* Update the catalog. */
-	simple_heap_update(rel, &tup->t_self, tup);
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateHeapAndIndexes(rel, &tup->t_self, tup);
 
 	CommandCounterIncrement();
 
@@ -686,8 +684,7 @@ AlterPublicationOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 				 errhint("The owner of a publication must be a superuser.")));
 
 	form->pubowner = newOwnerId;
-	simple_heap_update(rel, &tup->t_self, tup);
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateHeapAndIndexes(rel, &tup->t_self, tup);
 
 	/* Update owner dependency reference */
 	changeDependencyOnOwner(PublicationRelationId,
diff --git a/src/backend/commands/schemacmds.c b/src/backend/commands/schemacmds.c
index c3b37b2..f49767e 100644
--- a/src/backend/commands/schemacmds.c
+++ b/src/backend/commands/schemacmds.c
@@ -281,8 +281,7 @@ RenameSchema(const char *oldname, const char *newname)
 
 	/* rename */
 	namestrcpy(&(((Form_pg_namespace) GETSTRUCT(tup))->nspname), newname);
-	simple_heap_update(rel, &tup->t_self, tup);
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateHeapAndIndexes(rel, &tup->t_self, tup);
 
 	InvokeObjectPostAlterHook(NamespaceRelationId, HeapTupleGetOid(tup), 0);
 
@@ -417,8 +416,7 @@ AlterSchemaOwner_internal(HeapTuple tup, Relation rel, Oid newOwnerId)
 
 		newtuple = heap_modify_tuple(tup, RelationGetDescr(rel), repl_val, repl_null, repl_repl);
 
-		simple_heap_update(rel, &newtuple->t_self, newtuple);
-		CatalogUpdateIndexes(rel, newtuple);
+		CatalogUpdateHeapAndIndexes(rel, &newtuple->t_self, newtuple);
 
 		heap_freetuple(newtuple);
 
diff --git a/src/backend/commands/seclabel.c b/src/backend/commands/seclabel.c
index 324f2e7..7e25411 100644
--- a/src/backend/commands/seclabel.c
+++ b/src/backend/commands/seclabel.c
@@ -299,7 +299,7 @@ SetSharedSecurityLabel(const ObjectAddress *object,
 			replaces[Anum_pg_shseclabel_label - 1] = true;
 			newtup = heap_modify_tuple(oldtup, RelationGetDescr(pg_shseclabel),
 									   values, nulls, replaces);
-			simple_heap_update(pg_shseclabel, &oldtup->t_self, newtup);
+			CatalogUpdateHeapAndIndexes(pg_shseclabel, &oldtup->t_self, newtup);
 		}
 	}
 	systable_endscan(scan);
@@ -309,15 +309,11 @@ SetSharedSecurityLabel(const ObjectAddress *object,
 	{
 		newtup = heap_form_tuple(RelationGetDescr(pg_shseclabel),
 								 values, nulls);
-		simple_heap_insert(pg_shseclabel, newtup);
+		CatalogInsertHeapAndIndexes(pg_shseclabel, newtup);
 	}
 
-	/* Update indexes, if necessary */
 	if (newtup != NULL)
-	{
-		CatalogUpdateIndexes(pg_shseclabel, newtup);
 		heap_freetuple(newtup);
-	}
 
 	heap_close(pg_shseclabel, RowExclusiveLock);
 }
@@ -390,7 +386,7 @@ SetSecurityLabel(const ObjectAddress *object,
 			replaces[Anum_pg_seclabel_label - 1] = true;
 			newtup = heap_modify_tuple(oldtup, RelationGetDescr(pg_seclabel),
 									   values, nulls, replaces);
-			simple_heap_update(pg_seclabel, &oldtup->t_self, newtup);
+			CatalogUpdateHeapAndIndexes(pg_seclabel, &oldtup->t_self, newtup);
 		}
 	}
 	systable_endscan(scan);
@@ -400,15 +396,11 @@ SetSecurityLabel(const ObjectAddress *object,
 	{
 		newtup = heap_form_tuple(RelationGetDescr(pg_seclabel),
 								 values, nulls);
-		simple_heap_insert(pg_seclabel, newtup);
+		CatalogInsertHeapAndIndexes(pg_seclabel, newtup);
 	}
 
-	/* Update indexes, if necessary */
 	if (newtup != NULL)
-	{
-		CatalogUpdateIndexes(pg_seclabel, newtup);
 		heap_freetuple(newtup);
-	}
 
 	heap_close(pg_seclabel, RowExclusiveLock);
 }
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 0c673f5..830b600 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -236,8 +236,7 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
 	pgs_values[Anum_pg_sequence_seqcache - 1] = Int64GetDatumFast(seqform.seqcache);
 
 	tuple = heap_form_tuple(tupDesc, pgs_values, pgs_nulls);
-	simple_heap_insert(rel, tuple);
-	CatalogUpdateIndexes(rel, tuple);
+	CatalogInsertHeapAndIndexes(rel, tuple);
 
 	heap_freetuple(tuple);
 	heap_close(rel, RowExclusiveLock);
@@ -504,8 +503,7 @@ AlterSequence(ParseState *pstate, AlterSeqStmt *stmt)
 
 	relation_close(seqrel, NoLock);
 
-	simple_heap_update(rel, &tuple->t_self, tuple);
-	CatalogUpdateIndexes(rel, tuple);
+	CatalogUpdateHeapAndIndexes(rel, &tuple->t_self, tuple);
 	heap_close(rel, RowExclusiveLock);
 
 	return address;
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 41ef7a3..853dcd3 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -277,8 +277,7 @@ CreateSubscription(CreateSubscriptionStmt *stmt)
 	tup = heap_form_tuple(RelationGetDescr(rel), values, nulls);
 
 	/* Insert tuple into catalog. */
-	subid = simple_heap_insert(rel, tup);
-	CatalogUpdateIndexes(rel, tup);
+	subid = CatalogInsertHeapAndIndexes(rel, tup);
 	heap_freetuple(tup);
 
 	recordDependencyOnOwner(SubscriptionRelationId, subid, owner);
@@ -408,8 +407,7 @@ AlterSubscription(AlterSubscriptionStmt *stmt)
 							replaces);
 
 	/* Update the catalog. */
-	simple_heap_update(rel, &tup->t_self, tup);
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateHeapAndIndexes(rel, &tup->t_self, tup);
 
 	ObjectAddressSet(myself, SubscriptionRelationId, subid);
 
@@ -588,8 +586,7 @@ AlterSubscriptionOwner_internal(Relation rel, HeapTuple tup, Oid newOwnerId)
 			 errhint("The owner of an subscription must be a superuser.")));
 
 	form->subowner = newOwnerId;
-	simple_heap_update(rel, &tup->t_self, tup);
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateHeapAndIndexes(rel, &tup->t_self, tup);
 
 	/* Update owner dependency reference */
 	changeDependencyOnOwner(SubscriptionRelationId,
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index 90f2f7f..f62f8d7 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -2308,9 +2308,7 @@ StoreCatalogInheritance1(Oid relationId, Oid parentOid,
 
 	tuple = heap_form_tuple(desc, values, nulls);
 
-	simple_heap_insert(inhRelation, tuple);
-
-	CatalogUpdateIndexes(inhRelation, tuple);
+	CatalogInsertHeapAndIndexes(inhRelation, tuple);
 
 	heap_freetuple(tuple);
 
@@ -2398,10 +2396,7 @@ SetRelationHasSubclass(Oid relationId, bool relhassubclass)
 	if (classtuple->relhassubclass != relhassubclass)
 	{
 		classtuple->relhassubclass = relhassubclass;
-		simple_heap_update(relationRelation, &tuple->t_self, tuple);
-
-		/* keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relationRelation, tuple);
+		CatalogUpdateHeapAndIndexes(relationRelation, &tuple->t_self, tuple);
 	}
 	else
 	{
@@ -2592,10 +2587,7 @@ renameatt_internal(Oid myrelid,
 	/* apply the update */
 	namestrcpy(&(attform->attname), newattname);
 
-	simple_heap_update(attrelation, &atttup->t_self, atttup);
-
-	/* keep system catalog indexes current */
-	CatalogUpdateIndexes(attrelation, atttup);
+	CatalogUpdateHeapAndIndexes(attrelation, &atttup->t_self, atttup);
 
 	InvokeObjectPostAlterHook(RelationRelationId, myrelid, attnum);
 
@@ -2902,10 +2894,7 @@ RenameRelationInternal(Oid myrelid, const char *newrelname, bool is_internal)
 	 */
 	namestrcpy(&(relform->relname), newrelname);
 
-	simple_heap_update(relrelation, &reltup->t_self, reltup);
-
-	/* keep the system catalog indexes current */
-	CatalogUpdateIndexes(relrelation, reltup);
+	CatalogUpdateHeapAndIndexes(relrelation, &reltup->t_self, reltup);
 
 	InvokeObjectPostAlterHookArg(RelationRelationId, myrelid, 0,
 								 InvalidOid, is_internal);
@@ -5097,8 +5086,7 @@ ATExecAddColumn(List **wqueue, AlteredTableInfo *tab, Relation rel,
 
 			/* Bump the existing child att's inhcount */
 			childatt->attinhcount++;
-			simple_heap_update(attrdesc, &tuple->t_self, tuple);
-			CatalogUpdateIndexes(attrdesc, tuple);
+			CatalogUpdateHeapAndIndexes(attrdesc, &tuple->t_self, tuple);
 
 			heap_freetuple(tuple);
 
@@ -5191,10 +5179,7 @@ ATExecAddColumn(List **wqueue, AlteredTableInfo *tab, Relation rel,
 	else
 		((Form_pg_class) GETSTRUCT(reltup))->relnatts = newattnum;
 
-	simple_heap_update(pgclass, &reltup->t_self, reltup);
-
-	/* keep catalog indexes current */
-	CatalogUpdateIndexes(pgclass, reltup);
+	CatalogUpdateHeapAndIndexes(pgclass, &reltup->t_self, reltup);
 
 	heap_freetuple(reltup);
 
@@ -5630,10 +5615,7 @@ ATExecDropNotNull(Relation rel, const char *colName, LOCKMODE lockmode)
 	{
 		((Form_pg_attribute) GETSTRUCT(tuple))->attnotnull = FALSE;
 
-		simple_heap_update(attr_rel, &tuple->t_self, tuple);
-
-		/* keep the system catalog indexes current */
-		CatalogUpdateIndexes(attr_rel, tuple);
+		CatalogUpdateHeapAndIndexes(attr_rel, &tuple->t_self, tuple);
 
 		ObjectAddressSubSet(address, RelationRelationId,
 							RelationGetRelid(rel), attnum);
@@ -5708,10 +5690,7 @@ ATExecSetNotNull(AlteredTableInfo *tab, Relation rel,
 	{
 		((Form_pg_attribute) GETSTRUCT(tuple))->attnotnull = TRUE;
 
-		simple_heap_update(attr_rel, &tuple->t_self, tuple);
-
-		/* keep the system catalog indexes current */
-		CatalogUpdateIndexes(attr_rel, tuple);
+		CatalogUpdateHeapAndIndexes(attr_rel, &tuple->t_self, tuple);
 
 		/* Tell Phase 3 it needs to test the constraint */
 		tab->new_notnull = true;
@@ -5876,10 +5855,7 @@ ATExecSetStatistics(Relation rel, const char *colName, Node *newValue, LOCKMODE
 
 	attrtuple->attstattarget = newtarget;
 
-	simple_heap_update(attrelation, &tuple->t_self, tuple);
-
-	/* keep system catalog indexes current */
-	CatalogUpdateIndexes(attrelation, tuple);
+	CatalogUpdateHeapAndIndexes(attrelation, &tuple->t_self, tuple);
 
 	InvokeObjectPostAlterHook(RelationRelationId,
 							  RelationGetRelid(rel),
@@ -5952,8 +5928,7 @@ ATExecSetOptions(Relation rel, const char *colName, Node *options,
 								 repl_val, repl_null, repl_repl);
 
 	/* Update system catalog. */
-	simple_heap_update(attrelation, &newtuple->t_self, newtuple);
-	CatalogUpdateIndexes(attrelation, newtuple);
+	CatalogUpdateHeapAndIndexes(attrelation, &newtuple->t_self, newtuple);
 
 	InvokeObjectPostAlterHook(RelationRelationId,
 							  RelationGetRelid(rel),
@@ -6036,10 +6011,7 @@ ATExecSetStorage(Relation rel, const char *colName, Node *newValue, LOCKMODE loc
 				 errmsg("column data type %s can only have storage PLAIN",
 						format_type_be(attrtuple->atttypid))));
 
-	simple_heap_update(attrelation, &tuple->t_self, tuple);
-
-	/* keep system catalog indexes current */
-	CatalogUpdateIndexes(attrelation, tuple);
+	CatalogUpdateHeapAndIndexes(attrelation, &tuple->t_self, tuple);
 
 	InvokeObjectPostAlterHook(RelationRelationId,
 							  RelationGetRelid(rel),
@@ -6277,10 +6249,7 @@ ATExecDropColumn(List **wqueue, Relation rel, const char *colName,
 					/* Child column must survive my deletion */
 					childatt->attinhcount--;
 
-					simple_heap_update(attr_rel, &tuple->t_self, tuple);
-
-					/* keep the system catalog indexes current */
-					CatalogUpdateIndexes(attr_rel, tuple);
+					CatalogUpdateHeapAndIndexes(attr_rel, &tuple->t_self, tuple);
 
 					/* Make update visible */
 					CommandCounterIncrement();
@@ -6296,10 +6265,7 @@ ATExecDropColumn(List **wqueue, Relation rel, const char *colName,
 				childatt->attinhcount--;
 				childatt->attislocal = true;
 
-				simple_heap_update(attr_rel, &tuple->t_self, tuple);
-
-				/* keep the system catalog indexes current */
-				CatalogUpdateIndexes(attr_rel, tuple);
+				CatalogUpdateHeapAndIndexes(attr_rel, &tuple->t_self, tuple);
 
 				/* Make update visible */
 				CommandCounterIncrement();
@@ -6343,10 +6309,7 @@ ATExecDropColumn(List **wqueue, Relation rel, const char *colName,
 		tuple_class = (Form_pg_class) GETSTRUCT(tuple);
 
 		tuple_class->relhasoids = false;
-		simple_heap_update(class_rel, &tuple->t_self, tuple);
-
-		/* Keep the catalog indexes up to date */
-		CatalogUpdateIndexes(class_rel, tuple);
+		CatalogUpdateHeapAndIndexes(class_rel, &tuple->t_self, tuple);
 
 		heap_close(class_rel, RowExclusiveLock);
 
@@ -7195,8 +7158,7 @@ ATExecAlterConstraint(Relation rel, AlterTableCmd *cmd,
 		copy_con = (Form_pg_constraint) GETSTRUCT(copyTuple);
 		copy_con->condeferrable = cmdcon->deferrable;
 		copy_con->condeferred = cmdcon->initdeferred;
-		simple_heap_update(conrel, &copyTuple->t_self, copyTuple);
-		CatalogUpdateIndexes(conrel, copyTuple);
+		CatalogUpdateHeapAndIndexes(conrel, &copyTuple->t_self, copyTuple);
 
 		InvokeObjectPostAlterHook(ConstraintRelationId,
 								  HeapTupleGetOid(contuple), 0);
@@ -7249,8 +7211,7 @@ ATExecAlterConstraint(Relation rel, AlterTableCmd *cmd,
 
 			copy_tg->tgdeferrable = cmdcon->deferrable;
 			copy_tg->tginitdeferred = cmdcon->initdeferred;
-			simple_heap_update(tgrel, &copyTuple->t_self, copyTuple);
-			CatalogUpdateIndexes(tgrel, copyTuple);
+			CatalogUpdateHeapAndIndexes(tgrel, &copyTuple->t_self, copyTuple);
 
 			InvokeObjectPostAlterHook(TriggerRelationId,
 									  HeapTupleGetOid(tgtuple), 0);
@@ -7436,8 +7397,7 @@ ATExecValidateConstraint(Relation rel, char *constrName, bool recurse,
 		copyTuple = heap_copytuple(tuple);
 		copy_con = (Form_pg_constraint) GETSTRUCT(copyTuple);
 		copy_con->convalidated = true;
-		simple_heap_update(conrel, &copyTuple->t_self, copyTuple);
-		CatalogUpdateIndexes(conrel, copyTuple);
+		CatalogUpdateHeapAndIndexes(conrel, &copyTuple->t_self, copyTuple);
 
 		InvokeObjectPostAlterHook(ConstraintRelationId,
 								  HeapTupleGetOid(tuple), 0);
@@ -8339,8 +8299,7 @@ ATExecDropConstraint(Relation rel, const char *constrName,
 			{
 				/* Child constraint must survive my deletion */
 				con->coninhcount--;
-				simple_heap_update(conrel, &copy_tuple->t_self, copy_tuple);
-				CatalogUpdateIndexes(conrel, copy_tuple);
+				CatalogUpdateHeapAndIndexes(conrel, &copy_tuple->t_self, copy_tuple);
 
 				/* Make update visible */
 				CommandCounterIncrement();
@@ -8356,8 +8315,7 @@ ATExecDropConstraint(Relation rel, const char *constrName,
 			con->coninhcount--;
 			con->conislocal = true;
 
-			simple_heap_update(conrel, &copy_tuple->t_self, copy_tuple);
-			CatalogUpdateIndexes(conrel, copy_tuple);
+			CatalogUpdateHeapAndIndexes(conrel, &copy_tuple->t_self, copy_tuple);
 
 			/* Make update visible */
 			CommandCounterIncrement();
@@ -9003,10 +8961,7 @@ ATExecAlterColumnType(AlteredTableInfo *tab, Relation rel,
 
 	ReleaseSysCache(typeTuple);
 
-	simple_heap_update(attrelation, &heapTup->t_self, heapTup);
-
-	/* keep system catalog indexes current */
-	CatalogUpdateIndexes(attrelation, heapTup);
+	CatalogUpdateHeapAndIndexes(attrelation, &heapTup->t_self, heapTup);
 
 	heap_close(attrelation, RowExclusiveLock);
 
@@ -9144,8 +9099,7 @@ ATExecAlterColumnGenericOptions(Relation rel,
 	newtuple = heap_modify_tuple(tuple, RelationGetDescr(attrel),
 								 repl_val, repl_null, repl_repl);
 
-	simple_heap_update(attrel, &newtuple->t_self, newtuple);
-	CatalogUpdateIndexes(attrel, newtuple);
+	CatalogUpdateHeapAndIndexes(attrel, &newtuple->t_self, newtuple);
 
 	InvokeObjectPostAlterHook(RelationRelationId,
 							  RelationGetRelid(rel),
@@ -9661,8 +9615,7 @@ ATExecChangeOwner(Oid relationOid, Oid newOwnerId, bool recursing, LOCKMODE lock
 
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(class_rel), repl_val, repl_null, repl_repl);
 
-		simple_heap_update(class_rel, &newtuple->t_self, newtuple);
-		CatalogUpdateIndexes(class_rel, newtuple);
+		CatalogUpdateHeapAndIndexes(class_rel, &newtuple->t_self, newtuple);
 
 		heap_freetuple(newtuple);
 
@@ -9789,8 +9742,7 @@ change_owner_fix_column_acls(Oid relationOid, Oid oldOwnerId, Oid newOwnerId)
 									 RelationGetDescr(attRelation),
 									 repl_val, repl_null, repl_repl);
 
-		simple_heap_update(attRelation, &newtuple->t_self, newtuple);
-		CatalogUpdateIndexes(attRelation, newtuple);
+		CatalogUpdateHeapAndIndexes(attRelation, &newtuple->t_self, newtuple);
 
 		heap_freetuple(newtuple);
 	}
@@ -10067,9 +10019,7 @@ ATExecSetRelOptions(Relation rel, List *defList, AlterTableType operation,
 	newtuple = heap_modify_tuple(tuple, RelationGetDescr(pgclass),
 								 repl_val, repl_null, repl_repl);
 
-	simple_heap_update(pgclass, &newtuple->t_self, newtuple);
-
-	CatalogUpdateIndexes(pgclass, newtuple);
+	CatalogUpdateHeapAndIndexes(pgclass, &newtuple->t_self, newtuple);
 
 	InvokeObjectPostAlterHook(RelationRelationId, RelationGetRelid(rel), 0);
 
@@ -10126,9 +10076,7 @@ ATExecSetRelOptions(Relation rel, List *defList, AlterTableType operation,
 		newtuple = heap_modify_tuple(tuple, RelationGetDescr(pgclass),
 									 repl_val, repl_null, repl_repl);
 
-		simple_heap_update(pgclass, &newtuple->t_self, newtuple);
-
-		CatalogUpdateIndexes(pgclass, newtuple);
+		CatalogUpdateHeapAndIndexes(pgclass, &newtuple->t_self, newtuple);
 
 		InvokeObjectPostAlterHookArg(RelationRelationId,
 									 RelationGetRelid(toastrel), 0,
@@ -10289,8 +10237,7 @@ ATExecSetTableSpace(Oid tableOid, Oid newTableSpace, LOCKMODE lockmode)
 	/* update the pg_class row */
 	rd_rel->reltablespace = (newTableSpace == MyDatabaseTableSpace) ? InvalidOid : newTableSpace;
 	rd_rel->relfilenode = newrelfilenode;
-	simple_heap_update(pg_class, &tuple->t_self, tuple);
-	CatalogUpdateIndexes(pg_class, tuple);
+	CatalogUpdateHeapAndIndexes(pg_class, &tuple->t_self, tuple);
 
 	InvokeObjectPostAlterHook(RelationRelationId, RelationGetRelid(rel), 0);
 
@@ -10940,8 +10887,7 @@ MergeAttributesIntoExisting(Relation child_rel, Relation parent_rel)
 				childatt->attislocal = false;
 			}
 
-			simple_heap_update(attrrel, &tuple->t_self, tuple);
-			CatalogUpdateIndexes(attrrel, tuple);
+			CatalogUpdateHeapAndIndexes(attrrel, &tuple->t_self, tuple);
 			heap_freetuple(tuple);
 		}
 		else
@@ -10980,8 +10926,7 @@ MergeAttributesIntoExisting(Relation child_rel, Relation parent_rel)
 				childatt->attislocal = false;
 			}
 
-			simple_heap_update(attrrel, &tuple->t_self, tuple);
-			CatalogUpdateIndexes(attrrel, tuple);
+			CatalogUpdateHeapAndIndexes(attrrel, &tuple->t_self, tuple);
 			heap_freetuple(tuple);
 		}
 		else
@@ -11118,8 +11063,7 @@ MergeConstraintsIntoExisting(Relation child_rel, Relation parent_rel)
 				child_con->conislocal = false;
 			}
 
-			simple_heap_update(catalog_relation, &child_copy->t_self, child_copy);
-			CatalogUpdateIndexes(catalog_relation, child_copy);
+			CatalogUpdateHeapAndIndexes(catalog_relation, &child_copy->t_self, child_copy);
 			heap_freetuple(child_copy);
 
 			found = true;
@@ -11289,8 +11233,7 @@ RemoveInheritance(Relation child_rel, Relation parent_rel)
 			if (copy_att->attinhcount == 0)
 				copy_att->attislocal = true;
 
-			simple_heap_update(catalogRelation, &copyTuple->t_self, copyTuple);
-			CatalogUpdateIndexes(catalogRelation, copyTuple);
+			CatalogUpdateHeapAndIndexes(catalogRelation, &copyTuple->t_self, copyTuple);
 			heap_freetuple(copyTuple);
 		}
 	}
@@ -11364,8 +11307,7 @@ RemoveInheritance(Relation child_rel, Relation parent_rel)
 			if (copy_con->coninhcount == 0)
 				copy_con->conislocal = true;
 
-			simple_heap_update(catalogRelation, &copyTuple->t_self, copyTuple);
-			CatalogUpdateIndexes(catalogRelation, copyTuple);
+			CatalogUpdateHeapAndIndexes(catalogRelation, &copyTuple->t_self, copyTuple);
 			heap_freetuple(copyTuple);
 		}
 	}
@@ -11565,8 +11507,7 @@ ATExecAddOf(Relation rel, const TypeName *ofTypename, LOCKMODE lockmode)
 	if (!HeapTupleIsValid(classtuple))
 		elog(ERROR, "cache lookup failed for relation %u", relid);
 	((Form_pg_class) GETSTRUCT(classtuple))->reloftype = typeid;
-	simple_heap_update(relationRelation, &classtuple->t_self, classtuple);
-	CatalogUpdateIndexes(relationRelation, classtuple);
+	CatalogUpdateHeapAndIndexes(relationRelation, &classtuple->t_self, classtuple);
 
 	InvokeObjectPostAlterHook(RelationRelationId, relid, 0);
 
@@ -11610,8 +11551,7 @@ ATExecDropOf(Relation rel, LOCKMODE lockmode)
 	if (!HeapTupleIsValid(tuple))
 		elog(ERROR, "cache lookup failed for relation %u", relid);
 	((Form_pg_class) GETSTRUCT(tuple))->reloftype = InvalidOid;
-	simple_heap_update(relationRelation, &tuple->t_self, tuple);
-	CatalogUpdateIndexes(relationRelation, tuple);
+	CatalogUpdateHeapAndIndexes(relationRelation, &tuple->t_self, tuple);
 
 	InvokeObjectPostAlterHook(RelationRelationId, relid, 0);
 
@@ -11651,8 +11591,7 @@ relation_mark_replica_identity(Relation rel, char ri_type, Oid indexOid,
 	if (pg_class_form->relreplident != ri_type)
 	{
 		pg_class_form->relreplident = ri_type;
-		simple_heap_update(pg_class, &pg_class_tuple->t_self, pg_class_tuple);
-		CatalogUpdateIndexes(pg_class, pg_class_tuple);
+		CatalogUpdateHeapAndIndexes(pg_class, &pg_class_tuple->t_self, pg_class_tuple);
 	}
 	heap_close(pg_class, RowExclusiveLock);
 	heap_freetuple(pg_class_tuple);
@@ -11711,8 +11650,7 @@ relation_mark_replica_identity(Relation rel, char ri_type, Oid indexOid,
 
 		if (dirty)
 		{
-			simple_heap_update(pg_index, &pg_index_tuple->t_self, pg_index_tuple);
-			CatalogUpdateIndexes(pg_index, pg_index_tuple);
+			CatalogUpdateHeapAndIndexes(pg_index, &pg_index_tuple->t_self, pg_index_tuple);
 			InvokeObjectPostAlterHookArg(IndexRelationId, thisIndexOid, 0,
 										 InvalidOid, is_internal);
 		}
@@ -11861,10 +11799,7 @@ ATExecEnableRowSecurity(Relation rel)
 		elog(ERROR, "cache lookup failed for relation %u", relid);
 
 	((Form_pg_class) GETSTRUCT(tuple))->relrowsecurity = true;
-	simple_heap_update(pg_class, &tuple->t_self, tuple);
-
-	/* keep catalog indexes current */
-	CatalogUpdateIndexes(pg_class, tuple);
+	CatalogUpdateHeapAndIndexes(pg_class, &tuple->t_self, tuple);
 
 	heap_close(pg_class, RowExclusiveLock);
 	heap_freetuple(tuple);
@@ -11888,10 +11823,7 @@ ATExecDisableRowSecurity(Relation rel)
 		elog(ERROR, "cache lookup failed for relation %u", relid);
 
 	((Form_pg_class) GETSTRUCT(tuple))->relrowsecurity = false;
-	simple_heap_update(pg_class, &tuple->t_self, tuple);
-
-	/* keep catalog indexes current */
-	CatalogUpdateIndexes(pg_class, tuple);
+	CatalogUpdateHeapAndIndexes(pg_class, &tuple->t_self, tuple);
 
 	heap_close(pg_class, RowExclusiveLock);
 	heap_freetuple(tuple);
@@ -11917,10 +11849,7 @@ ATExecForceNoForceRowSecurity(Relation rel, bool force_rls)
 		elog(ERROR, "cache lookup failed for relation %u", relid);
 
 	((Form_pg_class) GETSTRUCT(tuple))->relforcerowsecurity = force_rls;
-	simple_heap_update(pg_class, &tuple->t_self, tuple);
-
-	/* keep catalog indexes current */
-	CatalogUpdateIndexes(pg_class, tuple);
+	CatalogUpdateHeapAndIndexes(pg_class, &tuple->t_self, tuple);
 
 	heap_close(pg_class, RowExclusiveLock);
 	heap_freetuple(tuple);
@@ -11988,8 +11917,7 @@ ATExecGenericOptions(Relation rel, List *options)
 	tuple = heap_modify_tuple(tuple, RelationGetDescr(ftrel),
 							  repl_val, repl_null, repl_repl);
 
-	simple_heap_update(ftrel, &tuple->t_self, tuple);
-	CatalogUpdateIndexes(ftrel, tuple);
+	CatalogUpdateHeapAndIndexes(ftrel, &tuple->t_self, tuple);
 
 	/*
 	 * Invalidate relcache so that all sessions will refresh any cached plans
@@ -12284,8 +12212,7 @@ AlterRelationNamespaceInternal(Relation classRel, Oid relOid,
 		/* classTup is a copy, so OK to scribble on */
 		classForm->relnamespace = newNspOid;
 
-		simple_heap_update(classRel, &classTup->t_self, classTup);
-		CatalogUpdateIndexes(classRel, classTup);
+		CatalogUpdateHeapAndIndexes(classRel, &classTup->t_self, classTup);
 
 		/* Update dependency on schema if caller said so */
 		if (hasDependEntry &&
@@ -13520,8 +13447,7 @@ ATExecDetachPartition(Relation rel, RangeVar *name)
 								 new_val, new_null, new_repl);
 
 	((Form_pg_class) GETSTRUCT(newtuple))->relispartition = false;
-	simple_heap_update(classRel, &newtuple->t_self, newtuple);
-	CatalogUpdateIndexes(classRel, newtuple);
+	CatalogUpdateHeapAndIndexes(classRel, &newtuple->t_self, newtuple);
 	heap_freetuple(newtuple);
 	heap_close(classRel, RowExclusiveLock);
 
diff --git a/src/backend/commands/tablespace.c b/src/backend/commands/tablespace.c
index 651e1b3..f3c7436 100644
--- a/src/backend/commands/tablespace.c
+++ b/src/backend/commands/tablespace.c
@@ -344,9 +344,7 @@ CreateTableSpace(CreateTableSpaceStmt *stmt)
 
 	tuple = heap_form_tuple(rel->rd_att, values, nulls);
 
-	tablespaceoid = simple_heap_insert(rel, tuple);
-
-	CatalogUpdateIndexes(rel, tuple);
+	tablespaceoid = CatalogInsertHeapAndIndexes(rel, tuple);
 
 	heap_freetuple(tuple);
 
@@ -971,8 +969,7 @@ RenameTableSpace(const char *oldname, const char *newname)
 	/* OK, update the entry */
 	namestrcpy(&(newform->spcname), newname);
 
-	simple_heap_update(rel, &newtuple->t_self, newtuple);
-	CatalogUpdateIndexes(rel, newtuple);
+	CatalogUpdateHeapAndIndexes(rel, &newtuple->t_self, newtuple);
 
 	InvokeObjectPostAlterHook(TableSpaceRelationId, tspId, 0);
 
@@ -1044,8 +1041,7 @@ AlterTableSpaceOptions(AlterTableSpaceOptionsStmt *stmt)
 								 repl_null, repl_repl);
 
 	/* Update system catalog. */
-	simple_heap_update(rel, &newtuple->t_self, newtuple);
-	CatalogUpdateIndexes(rel, newtuple);
+	CatalogUpdateHeapAndIndexes(rel, &newtuple->t_self, newtuple);
 
 	InvokeObjectPostAlterHook(TableSpaceRelationId, HeapTupleGetOid(tup), 0);
 
diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c
index f067d0a..1cc67ef 100644
--- a/src/backend/commands/trigger.c
+++ b/src/backend/commands/trigger.c
@@ -773,9 +773,7 @@ CreateTrigger(CreateTrigStmt *stmt, const char *queryString,
 	/*
 	 * Insert tuple into pg_trigger.
 	 */
-	simple_heap_insert(tgrel, tuple);
-
-	CatalogUpdateIndexes(tgrel, tuple);
+	CatalogInsertHeapAndIndexes(tgrel, tuple);
 
 	heap_freetuple(tuple);
 	heap_close(tgrel, RowExclusiveLock);
@@ -802,9 +800,7 @@ CreateTrigger(CreateTrigStmt *stmt, const char *queryString,
 
 	((Form_pg_class) GETSTRUCT(tuple))->relhastriggers = true;
 
-	simple_heap_update(pgrel, &tuple->t_self, tuple);
-
-	CatalogUpdateIndexes(pgrel, tuple);
+	CatalogUpdateHeapAndIndexes(pgrel, &tuple->t_self, tuple);
 
 	heap_freetuple(tuple);
 	heap_close(pgrel, RowExclusiveLock);
@@ -1444,10 +1440,7 @@ renametrig(RenameStmt *stmt)
 		namestrcpy(&((Form_pg_trigger) GETSTRUCT(tuple))->tgname,
 				   stmt->newname);
 
-		simple_heap_update(tgrel, &tuple->t_self, tuple);
-
-		/* keep system catalog indexes current */
-		CatalogUpdateIndexes(tgrel, tuple);
+		CatalogUpdateHeapAndIndexes(tgrel, &tuple->t_self, tuple);
 
 		InvokeObjectPostAlterHook(TriggerRelationId,
 								  HeapTupleGetOid(tuple), 0);
@@ -1560,10 +1553,7 @@ EnableDisableTrigger(Relation rel, const char *tgname,
 
 			newtrig->tgenabled = fires_when;
 
-			simple_heap_update(tgrel, &newtup->t_self, newtup);
-
-			/* Keep catalog indexes current */
-			CatalogUpdateIndexes(tgrel, newtup);
+			CatalogUpdateHeapAndIndexes(tgrel, &newtup->t_self, newtup);
 
 			heap_freetuple(newtup);
 
diff --git a/src/backend/commands/tsearchcmds.c b/src/backend/commands/tsearchcmds.c
index 479a160..b9929a5 100644
--- a/src/backend/commands/tsearchcmds.c
+++ b/src/backend/commands/tsearchcmds.c
@@ -271,9 +271,7 @@ DefineTSParser(List *names, List *parameters)
 
 	tup = heap_form_tuple(prsRel->rd_att, values, nulls);
 
-	prsOid = simple_heap_insert(prsRel, tup);
-
-	CatalogUpdateIndexes(prsRel, tup);
+	prsOid = CatalogInsertHeapAndIndexes(prsRel, tup);
 
 	address = makeParserDependencies(tup);
 
@@ -482,9 +480,7 @@ DefineTSDictionary(List *names, List *parameters)
 
 	tup = heap_form_tuple(dictRel->rd_att, values, nulls);
 
-	dictOid = simple_heap_insert(dictRel, tup);
-
-	CatalogUpdateIndexes(dictRel, tup);
+	dictOid = CatalogInsertHeapAndIndexes(dictRel, tup);
 
 	address = makeDictionaryDependencies(tup);
 
@@ -620,9 +616,7 @@ AlterTSDictionary(AlterTSDictionaryStmt *stmt)
 	newtup = heap_modify_tuple(tup, RelationGetDescr(rel),
 							   repl_val, repl_null, repl_repl);
 
-	simple_heap_update(rel, &newtup->t_self, newtup);
-
-	CatalogUpdateIndexes(rel, newtup);
+	CatalogUpdateHeapAndIndexes(rel, &newtup->t_self, newtup);
 
 	InvokeObjectPostAlterHook(TSDictionaryRelationId, dictId, 0);
 
@@ -806,9 +800,7 @@ DefineTSTemplate(List *names, List *parameters)
 
 	tup = heap_form_tuple(tmplRel->rd_att, values, nulls);
 
-	tmplOid = simple_heap_insert(tmplRel, tup);
-
-	CatalogUpdateIndexes(tmplRel, tup);
+	tmplOid = CatalogInsertHeapAndIndexes(tmplRel, tup);
 
 	address = makeTSTemplateDependencies(tup);
 
@@ -1066,9 +1058,7 @@ DefineTSConfiguration(List *names, List *parameters, ObjectAddress *copied)
 
 	tup = heap_form_tuple(cfgRel->rd_att, values, nulls);
 
-	cfgOid = simple_heap_insert(cfgRel, tup);
-
-	CatalogUpdateIndexes(cfgRel, tup);
+	cfgOid = CatalogInsertHeapAndIndexes(cfgRel, tup);
 
 	if (OidIsValid(sourceOid))
 	{
@@ -1106,9 +1096,7 @@ DefineTSConfiguration(List *names, List *parameters, ObjectAddress *copied)
 
 			newmaptup = heap_form_tuple(mapRel->rd_att, mapvalues, mapnulls);
 
-			simple_heap_insert(mapRel, newmaptup);
-
-			CatalogUpdateIndexes(mapRel, newmaptup);
+			CatalogInsertHeapAndIndexes(mapRel, newmaptup);
 
 			heap_freetuple(newmaptup);
 		}
@@ -1409,9 +1397,7 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 				newtup = heap_modify_tuple(maptup,
 										   RelationGetDescr(relMap),
 										   repl_val, repl_null, repl_repl);
-				simple_heap_update(relMap, &newtup->t_self, newtup);
-
-				CatalogUpdateIndexes(relMap, newtup);
+				CatalogUpdateHeapAndIndexes(relMap, &newtup->t_self, newtup);
 			}
 		}
 
@@ -1436,8 +1422,7 @@ MakeConfigurationMapping(AlterTSConfigurationStmt *stmt,
 				values[Anum_pg_ts_config_map_mapdict - 1] = ObjectIdGetDatum(dictIds[j]);
 
 				tup = heap_form_tuple(relMap->rd_att, values, nulls);
-				simple_heap_insert(relMap, tup);
-				CatalogUpdateIndexes(relMap, tup);
+				CatalogInsertHeapAndIndexes(relMap, tup);
 
 				heap_freetuple(tup);
 			}
diff --git a/src/backend/commands/typecmds.c b/src/backend/commands/typecmds.c
index 4c33d55..68e93fc 100644
--- a/src/backend/commands/typecmds.c
+++ b/src/backend/commands/typecmds.c
@@ -2221,9 +2221,7 @@ AlterDomainDefault(List *names, Node *defaultRaw)
 								 new_record, new_record_nulls,
 								 new_record_repl);
 
-	simple_heap_update(rel, &tup->t_self, newtuple);
-
-	CatalogUpdateIndexes(rel, newtuple);
+	CatalogUpdateHeapAndIndexes(rel, &tup->t_self, newtuple);
 
 	/* Rebuild dependencies */
 	GenerateTypeDependencies(typTup->typnamespace,
@@ -2360,9 +2358,7 @@ AlterDomainNotNull(List *names, bool notNull)
 	 */
 	typTup->typnotnull = notNull;
 
-	simple_heap_update(typrel, &tup->t_self, tup);
-
-	CatalogUpdateIndexes(typrel, tup);
+	CatalogUpdateHeapAndIndexes(typrel, &tup->t_self, tup);
 
 	InvokeObjectPostAlterHook(TypeRelationId, domainoid, 0);
 
@@ -2662,8 +2658,7 @@ AlterDomainValidateConstraint(List *names, char *constrName)
 	copyTuple = heap_copytuple(tuple);
 	copy_con = (Form_pg_constraint) GETSTRUCT(copyTuple);
 	copy_con->convalidated = true;
-	simple_heap_update(conrel, &copyTuple->t_self, copyTuple);
-	CatalogUpdateIndexes(conrel, copyTuple);
+	CatalogUpdateHeapAndIndexes(conrel, &copyTuple->t_self, copyTuple);
 
 	InvokeObjectPostAlterHook(ConstraintRelationId,
 							  HeapTupleGetOid(copyTuple), 0);
@@ -3404,9 +3399,7 @@ AlterTypeOwnerInternal(Oid typeOid, Oid newOwnerId)
 	tup = heap_modify_tuple(tup, RelationGetDescr(rel), repl_val, repl_null,
 							repl_repl);
 
-	simple_heap_update(rel, &tup->t_self, tup);
-
-	CatalogUpdateIndexes(rel, tup);
+	CatalogUpdateHeapAndIndexes(rel, &tup->t_self, tup);
 
 	/* If it has an array type, update that too */
 	if (OidIsValid(typTup->typarray))
@@ -3566,8 +3559,7 @@ AlterTypeNamespaceInternal(Oid typeOid, Oid nspOid,
 		/* tup is a copy, so we can scribble directly on it */
 		typform->typnamespace = nspOid;
 
-		simple_heap_update(rel, &tup->t_self, tup);
-		CatalogUpdateIndexes(rel, tup);
+		CatalogUpdateHeapAndIndexes(rel, &tup->t_self, tup);
 	}
 
 	/*
diff --git a/src/backend/commands/user.c b/src/backend/commands/user.c
index b746982..46e3a66 100644
--- a/src/backend/commands/user.c
+++ b/src/backend/commands/user.c
@@ -433,8 +433,7 @@ CreateRole(ParseState *pstate, CreateRoleStmt *stmt)
 	/*
 	 * Insert new record in the pg_authid table
 	 */
-	roleid = simple_heap_insert(pg_authid_rel, tuple);
-	CatalogUpdateIndexes(pg_authid_rel, tuple);
+	roleid = CatalogInsertHeapAndIndexes(pg_authid_rel, tuple);
 
 	/*
 	 * Advance command counter so we can see new record; else tests in
@@ -838,10 +837,7 @@ AlterRole(AlterRoleStmt *stmt)
 
 	new_tuple = heap_modify_tuple(tuple, pg_authid_dsc, new_record,
 								  new_record_nulls, new_record_repl);
-	simple_heap_update(pg_authid_rel, &tuple->t_self, new_tuple);
-
-	/* Update indexes */
-	CatalogUpdateIndexes(pg_authid_rel, new_tuple);
+	CatalogUpdateHeapAndIndexes(pg_authid_rel, &tuple->t_self, new_tuple);
 
 	InvokeObjectPostAlterHook(AuthIdRelationId, roleid, 0);
 
@@ -1243,9 +1239,7 @@ RenameRole(const char *oldname, const char *newname)
 	}
 
 	newtuple = heap_modify_tuple(oldtuple, dsc, repl_val, repl_null, repl_repl);
-	simple_heap_update(rel, &oldtuple->t_self, newtuple);
-
-	CatalogUpdateIndexes(rel, newtuple);
+	CatalogUpdateHeapAndIndexes(rel, &oldtuple->t_self, newtuple);
 
 	InvokeObjectPostAlterHook(AuthIdRelationId, roleid, 0);
 
@@ -1530,16 +1524,14 @@ AddRoleMems(const char *rolename, Oid roleid,
 			tuple = heap_modify_tuple(authmem_tuple, pg_authmem_dsc,
 									  new_record,
 									  new_record_nulls, new_record_repl);
-			simple_heap_update(pg_authmem_rel, &tuple->t_self, tuple);
-			CatalogUpdateIndexes(pg_authmem_rel, tuple);
+			CatalogUpdateHeapAndIndexes(pg_authmem_rel, &tuple->t_self, tuple);
 			ReleaseSysCache(authmem_tuple);
 		}
 		else
 		{
 			tuple = heap_form_tuple(pg_authmem_dsc,
 									new_record, new_record_nulls);
-			simple_heap_insert(pg_authmem_rel, tuple);
-			CatalogUpdateIndexes(pg_authmem_rel, tuple);
+			CatalogInsertHeapAndIndexes(pg_authmem_rel, tuple);
 		}
 
 		/* CCI after each change, in case there are duplicates in list */
@@ -1647,8 +1639,7 @@ DelRoleMems(const char *rolename, Oid roleid,
 			tuple = heap_modify_tuple(authmem_tuple, pg_authmem_dsc,
 									  new_record,
 									  new_record_nulls, new_record_repl);
-			simple_heap_update(pg_authmem_rel, &tuple->t_self, tuple);
-			CatalogUpdateIndexes(pg_authmem_rel, tuple);
+			CatalogUpdateHeapAndIndexes(pg_authmem_rel, &tuple->t_self, tuple);
 		}
 
 		ReleaseSysCache(authmem_tuple);
diff --git a/src/backend/replication/logical/origin.c b/src/backend/replication/logical/origin.c
index d7dda6a..7048f73 100644
--- a/src/backend/replication/logical/origin.c
+++ b/src/backend/replication/logical/origin.c
@@ -299,8 +299,7 @@ replorigin_create(char *roname)
 			values[Anum_pg_replication_origin_roname - 1] = roname_d;
 
 			tuple = heap_form_tuple(RelationGetDescr(rel), values, nulls);
-			simple_heap_insert(rel, tuple);
-			CatalogUpdateIndexes(rel, tuple);
+			CatalogInsertHeapAndIndexes(rel, tuple);
 			CommandCounterIncrement();
 			break;
 		}
diff --git a/src/backend/rewrite/rewriteDefine.c b/src/backend/rewrite/rewriteDefine.c
index 481868b..33d73c2 100644
--- a/src/backend/rewrite/rewriteDefine.c
+++ b/src/backend/rewrite/rewriteDefine.c
@@ -124,7 +124,7 @@ InsertRule(char *rulname,
 		tup = heap_modify_tuple(oldtup, RelationGetDescr(pg_rewrite_desc),
 								values, nulls, replaces);
 
-		simple_heap_update(pg_rewrite_desc, &tup->t_self, tup);
+		CatalogUpdateHeapAndIndexes(pg_rewrite_desc, &tup->t_self, tup);
 
 		ReleaseSysCache(oldtup);
 
@@ -135,11 +135,9 @@ InsertRule(char *rulname,
 	{
 		tup = heap_form_tuple(pg_rewrite_desc->rd_att, values, nulls);
 
-		rewriteObjectId = simple_heap_insert(pg_rewrite_desc, tup);
+		rewriteObjectId = CatalogInsertHeapAndIndexes(pg_rewrite_desc, tup);
 	}
 
-	/* Need to update indexes in either case */
-	CatalogUpdateIndexes(pg_rewrite_desc, tup);
 
 	heap_freetuple(tup);
 
@@ -613,8 +611,7 @@ DefineQueryRewrite(char *rulename,
 		classForm->relminmxid = InvalidMultiXactId;
 		classForm->relreplident = REPLICA_IDENTITY_NOTHING;
 
-		simple_heap_update(relationRelation, &classTup->t_self, classTup);
-		CatalogUpdateIndexes(relationRelation, classTup);
+		CatalogUpdateHeapAndIndexes(relationRelation, &classTup->t_self, classTup);
 
 		heap_freetuple(classTup);
 		heap_close(relationRelation, RowExclusiveLock);
@@ -866,10 +863,7 @@ EnableDisableRule(Relation rel, const char *rulename,
 	{
 		((Form_pg_rewrite) GETSTRUCT(ruletup))->ev_enabled =
 			CharGetDatum(fires_when);
-		simple_heap_update(pg_rewrite_desc, &ruletup->t_self, ruletup);
-
-		/* keep system catalog indexes current */
-		CatalogUpdateIndexes(pg_rewrite_desc, ruletup);
+		CatalogUpdateHeapAndIndexes(pg_rewrite_desc, &ruletup->t_self, ruletup);
 
 		changed = true;
 	}
@@ -985,10 +979,7 @@ RenameRewriteRule(RangeVar *relation, const char *oldName,
 	/* OK, do the update */
 	namestrcpy(&(ruleform->rulename), newName);
 
-	simple_heap_update(pg_rewrite_desc, &ruletup->t_self, ruletup);
-
-	/* keep system catalog indexes current */
-	CatalogUpdateIndexes(pg_rewrite_desc, ruletup);
+	CatalogUpdateHeapAndIndexes(pg_rewrite_desc, &ruletup->t_self, ruletup);
 
 	heap_freetuple(ruletup);
 	heap_close(pg_rewrite_desc, RowExclusiveLock);
diff --git a/src/backend/rewrite/rewriteSupport.c b/src/backend/rewrite/rewriteSupport.c
index 0154072..fc76fab 100644
--- a/src/backend/rewrite/rewriteSupport.c
+++ b/src/backend/rewrite/rewriteSupport.c
@@ -72,10 +72,7 @@ SetRelationRuleStatus(Oid relationId, bool relHasRules)
 		/* Do the update */
 		classForm->relhasrules = relHasRules;
 
-		simple_heap_update(relationRelation, &tuple->t_self, tuple);
-
-		/* Keep the catalog indexes up to date */
-		CatalogUpdateIndexes(relationRelation, tuple);
+		CatalogUpdateHeapAndIndexes(relationRelation, &tuple->t_self, tuple);
 	}
 	else
 	{
diff --git a/src/backend/storage/large_object/inv_api.c b/src/backend/storage/large_object/inv_api.c
index 262b0b2..de35e03 100644
--- a/src/backend/storage/large_object/inv_api.c
+++ b/src/backend/storage/large_object/inv_api.c
@@ -678,8 +678,7 @@ inv_write(LargeObjectDesc *obj_desc, const char *buf, int nbytes)
 			replace[Anum_pg_largeobject_data - 1] = true;
 			newtup = heap_modify_tuple(oldtuple, RelationGetDescr(lo_heap_r),
 									   values, nulls, replace);
-			simple_heap_update(lo_heap_r, &newtup->t_self, newtup);
-			CatalogIndexInsert(indstate, newtup);
+			CatalogUpdateHeapAndIndexes(lo_heap_r, &newtup->t_self, newtup);
 			heap_freetuple(newtup);
 
 			/*
@@ -721,8 +720,7 @@ inv_write(LargeObjectDesc *obj_desc, const char *buf, int nbytes)
 			values[Anum_pg_largeobject_pageno - 1] = Int32GetDatum(pageno);
 			values[Anum_pg_largeobject_data - 1] = PointerGetDatum(&workbuf);
 			newtup = heap_form_tuple(lo_heap_r->rd_att, values, nulls);
-			simple_heap_insert(lo_heap_r, newtup);
-			CatalogIndexInsert(indstate, newtup);
+			CatalogInsertHeapAndIndexes(lo_heap_r, newtup);
 			heap_freetuple(newtup);
 		}
 		pageno++;
@@ -850,8 +848,7 @@ inv_truncate(LargeObjectDesc *obj_desc, int64 len)
 		replace[Anum_pg_largeobject_data - 1] = true;
 		newtup = heap_modify_tuple(oldtuple, RelationGetDescr(lo_heap_r),
 								   values, nulls, replace);
-		simple_heap_update(lo_heap_r, &newtup->t_self, newtup);
-		CatalogIndexInsert(indstate, newtup);
+		CatalogUpdateHeapAndIndexes(lo_heap_r, &newtup->t_self, newtup);
 		heap_freetuple(newtup);
 	}
 	else
@@ -888,8 +885,7 @@ inv_truncate(LargeObjectDesc *obj_desc, int64 len)
 		values[Anum_pg_largeobject_pageno - 1] = Int32GetDatum(pageno);
 		values[Anum_pg_largeobject_data - 1] = PointerGetDatum(&workbuf);
 		newtup = heap_form_tuple(lo_heap_r->rd_att, values, nulls);
-		simple_heap_insert(lo_heap_r, newtup);
-		CatalogIndexInsert(indstate, newtup);
+		CatalogInsertHeapAndIndexes(lo_heap_r, newtup);
 		heap_freetuple(newtup);
 	}
 
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 26ff7e1..fe1ecbc 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -3484,8 +3484,7 @@ RelationSetNewRelfilenode(Relation relation, char persistence,
 	classform->relminmxid = minmulti;
 	classform->relpersistence = persistence;
 
-	simple_heap_update(pg_class, &tuple->t_self, tuple);
-	CatalogUpdateIndexes(pg_class, tuple);
+	CatalogUpdateHeapAndIndexes(pg_class, &tuple->t_self, tuple);
 
 	heap_freetuple(tuple);
 
diff --git a/src/include/catalog/indexing.h b/src/include/catalog/indexing.h
index a3635a4..1620d7a 100644
--- a/src/include/catalog/indexing.h
+++ b/src/include/catalog/indexing.h
@@ -33,6 +33,9 @@ extern void CatalogCloseIndexes(CatalogIndexState indstate);
 extern void CatalogIndexInsert(CatalogIndexState indstate,
 				   HeapTuple heapTuple);
 extern void CatalogUpdateIndexes(Relation heapRel, HeapTuple heapTuple);
+extern void CatalogUpdateHeapAndIndexes(Relation heapRel, ItemPointer otid,
+				   HeapTuple tup);
+extern Oid CatalogInsertHeapAndIndexes(Relation heapRel, HeapTuple tup);
 
 
 /*
#44Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Pavan Deolasee (#43)
Re: Patch: Write Amplification Reduction Method (WARM)

Pavan Deolasee wrote:

Two new APIs added.

- CatalogInsertHeapAndIndexes, which does a simple_heap_insert followed by
catalog index updates
- CatalogUpdateHeapAndIndexes, which does a simple_heap_update followed by
catalog index updates

Only a handful of callers of simple_heap_insert/update remain after
this patch. They are typically working with already-opened indexes and
hence I left them unchanged.

Hmm, I was thinking we would get rid of CatalogUpdateIndexes altogether.
Two of the callers are in the new routines (which I propose to rename to
CatalogTupleInsert and CatalogTupleUpdate); the only remaining one is in
InsertPgAttributeTuple. I propose that we inline the three lines into
all those places and just remove CatalogUpdateIndexes. Half the out-of-
core places that are using this function will be broken as soon as WARM
lands anyway. I see no reason to keep it. (I have already modified the
patch this way -- no need to resend).

Unless there are objections I will push this later this afternoon.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#45Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Alvaro Herrera (#44)
Re: Patch: Write Amplification Reduction Method (WARM)

Alvaro Herrera wrote:

Unless there are objections I will push this later this afternoon.

Done. Let's get on with the show -- please post a rebased WARM.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#46Andres Freund
andres@anarazel.de
In reply to: Alvaro Herrera (#44)
Re: Patch: Write Amplification Reduction Method (WARM)

On 2017-01-31 14:10:01 -0300, Alvaro Herrera wrote:

Pavan Deolasee wrote:

Two new APIs added.

- CatalogInsertHeapAndIndexes, which does a simple_heap_insert followed by
catalog index updates
- CatalogUpdateHeapAndIndexes, which does a simple_heap_update followed by
catalog index updates

Only a handful of callers of simple_heap_insert/update remain after
this patch. They are typically working with already-opened indexes and
hence I left them unchanged.

Hmm, I was thinking we would get rid of CatalogUpdateIndexes altogether.
Two of the callers are in the new routines (which I propose to rename to
CatalogTupleInsert and CatalogTupleUpdate); the only remaining one is in
InsertPgAttributeTuple. I propose that we inline the three lines into
all those places and just remove CatalogUpdateIndexes. Half the out-of-
core places that are using this function will be broken as soon as WARM
lands anyway. I see no reason to keep it. (I have already modified the
patch this way -- no need to resend).

Unless there are objections I will push this later this afternoon.

Hm, sorry for missing this earlier. I think CatalogUpdateIndexes() is
fairly widely used in extensions - it seems like a pretty harsh change
to not leave some backward compatibility layer in place.

Andres
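One way to read "backward compatibility layer" here: keep the old symbol as a thin wrapper over the index-insertion primitives that indexing.h still exports. A hypothetical sketch of such a shim — server-internal code that cannot compile standalone, and not something either patch proposes:

```c
/*
 * Hypothetical compatibility shim: if CatalogUpdateIndexes() were removed,
 * its historical behavior could be reconstructed from the primitives that
 * indexing.h still declares (CatalogOpenIndexes / CatalogIndexInsert /
 * CatalogCloseIndexes).  Sketch only.
 */
static inline void
CatalogUpdateIndexesCompat(Relation heapRel, HeapTuple heapTuple)
{
	CatalogIndexState indstate;

	indstate = CatalogOpenIndexes(heapRel);
	CatalogIndexInsert(indstate, heapTuple);
	CatalogCloseIndexes(indstate);
}
```

Of course, as the discussion below notes, any such shim would still be semantically wrong once WARM changes what index maintenance means for an updated tuple.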

#47Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Andres Freund (#46)
Re: Patch: Write Amplification Reduction Method (WARM)

Andres Freund wrote:

On 2017-01-31 14:10:01 -0300, Alvaro Herrera wrote:

Hmm, I was thinking we would get rid of CatalogUpdateIndexes altogether.
Two of the callers are in the new routines (which I propose to rename to
CatalogTupleInsert and CatalogTupleUpdate); the only remaining one is in
InsertPgAttributeTuple. I propose that we inline the three lines into
all those places and just remove CatalogUpdateIndexes. Half the out-of-
core places that are using this function will be broken as soon as WARM
lands anyway. I see no reason to keep it. (I have already modified the
patch this way -- no need to resend).

Unless there are objections I will push this later this afternoon.

Hm, sorry for missing this earlier. I think CatalogUpdateIndexes() is
fairly widely used in extensions - it seems like a pretty harsh change
to not leave some backward compatibility layer in place.

Yeah, I can put it back if there's pushback about the removal, but I
think it's going to break due to WARM anyway.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

#48Andres Freund
andres@anarazel.de
In reply to: Alvaro Herrera (#47)
Re: Patch: Write Amplification Reduction Method (WARM)

On 2017-01-31 19:10:05 -0300, Alvaro Herrera wrote:

Andres Freund wrote:

On 2017-01-31 14:10:01 -0300, Alvaro Herrera wrote:

Hmm, I was thinking we would get rid of CatalogUpdateIndexes altogether.
Two of the callers are in the new routines (which I propose to rename to
CatalogTupleInsert and CatalogTupleUpdate); the only remaining one is in
InsertPgAttributeTuple. I propose that we inline the three lines into
all those places and just remove CatalogUpdateIndexes. Half the out-of-
core places that are using this function will be broken as soon as WARM
lands anyway. I see no reason to keep it. (I have already modified the
patch this way -- no need to resend).

Unless there are objections I will push this later this afternoon.

Hm, sorry for missing this earlier. I think CatalogUpdateIndexes() is
fairly widely used in extensions - it seems like a pretty harsh change
to not leave some backward compatibility layer in place.

Yeah, I can put it back if there's pushback about the removal, but I
think it's going to break due to WARM anyway.

I'm a bit doubtful (but not extremely so) that that's ok.


#49Tom Lane
tgl@sss.pgh.pa.us
In reply to: Andres Freund (#46)
Re: Patch: Write Amplification Reduction Method (WARM)

Andres Freund <andres@anarazel.de> writes:

Hm, sorry for missing this earlier. I think CatalogUpdateIndexes() is
fairly widely used in extensions - it seems like a pretty harsh change
to not leave some backward compatibility layer in place.

If an extension is doing that, it is probably constructing tuples to put
into the catalog, which means it'd be equally (and much more quietly)
broken by any change to the catalog's schema. We've never considered
such an argument as a reason not to change catalog schemas, though.

In short, I've got mighty little sympathy for that argument.

(I'm a little more concerned by Alvaro's apparent position that WARM
is a done deal; I didn't think so. This particular change seems like
good cleanup anyhow, however.)

regards, tom lane


#50Stephen Frost
sfrost@snowman.net
In reply to: Tom Lane (#49)
Re: Patch: Write Amplification Reduction Method (WARM)

* Tom Lane (tgl@sss.pgh.pa.us) wrote:

Andres Freund <andres@anarazel.de> writes:

Hm, sorry for missing this earlier. I think CatalogUpdateIndexes() is
fairly widely used in extensions - it seems like a pretty harsh change
to not leave some backward compatibility layer in place.

If an extension is doing that, it is probably constructing tuples to put
into the catalog, which means it'd be equally (and much more quietly)
broken by any change to the catalog's schema. We've never considered
such an argument as a reason not to change catalog schemas, though.

In short, I've got mighty little sympathy for that argument.

+1

(I'm a little more concerned by Alvaro's apparent position that WARM
is a done deal; I didn't think so. This particular change seems like
good cleanup anyhow, however.)

Agreed.

Thanks!

Stephen

#51Tom Lane
tgl@sss.pgh.pa.us
In reply to: Stephen Frost (#50)
Re: Patch: Write Amplification Reduction Method (WARM)

Stephen Frost <sfrost@snowman.net> writes:

* Tom Lane (tgl@sss.pgh.pa.us) wrote:

(I'm a little more concerned by Alvaro's apparent position that WARM
is a done deal; I didn't think so. This particular change seems like
good cleanup anyhow, however.)

Agreed.

BTW, the reason I think it's good cleanup is that it's something that my
colleagues at Salesforce also had to do as part of putting PG on top of a
different storage engine that had different ideas about index handling.
Essentially it's providing a bit of abstraction as to whether catalog
storage is exactly heaps or not (a topic I've noticed Robert is starting
to take some interest in, as well). However, the patch misses an
important part of such an abstraction layer by not also converting
catalog-related simple_heap_delete() calls into some sort of
CatalogTupleDelete() operation. It is certainly a peculiarity of
PG heaps that deletions don't require any immediate index work --- most
other storage engines would need that.

I propose that we should finish the job by inventing CatalogTupleDelete(),
which for the moment would be a trivial wrapper around
simple_heap_delete(), maybe just a macro for it.

If there's no objections I'll go make that happen in a day or two.

regards, tom lane


#52Andres Freund
andres@anarazel.de
In reply to: Tom Lane (#49)
Re: Patch: Write Amplification Reduction Method (WARM)

On 2017-01-31 17:21:28 -0500, Tom Lane wrote:

Andres Freund <andres@anarazel.de> writes:

Hm, sorry for missing this earlier. I think CatalogUpdateIndexes() is
fairly widely used in extensions - it seems like a pretty harsh change
to not leave some backward compatibility layer in place.

If an extension is doing that, it is probably constructing tuples to put
into the catalog, which means it'd be equally (and much more quietly)
broken by any change to the catalog's schema. We've never considered
such an argument as a reason not to change catalog schemas, though.

I know of several extensions that use CatalogUpdateIndexes() to update
their own tables. Citus included (It's trivial to change on our side, so
that's not a reason to do or not do something). There really is no
convenient API to do so without it.

(I'm a little more concerned by Alvaro's apparent position that WARM
is a done deal; I didn't think so. This particular change seems like
good cleanup anyhow, however.)

Yea, I don't think we're even close to that either.

Andres


#53Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Tom Lane (#51)
Re: Patch: Write Amplification Reduction Method (WARM)

Tom Lane wrote:

BTW, the reason I think it's good cleanup is that it's something that my
colleagues at Salesforce also had to do as part of putting PG on top of a
different storage engine that had different ideas about index handling.
Essentially it's providing a bit of abstraction as to whether catalog
storage is exactly heaps or not (a topic I've noticed Robert is starting
to take some interest in, as well).

Yeah, I remembered that too. Of course, we'd need to change the whole
idea of mapping tuples to C structs too, but this seemed a nice step
forward. (I renamed Pavan's proposed routine precisely to avoid the
word "Heap" in it.)

However, the patch misses an
important part of such an abstraction layer by not also converting
catalog-related simple_heap_delete() calls into some sort of
CatalogTupleDelete() operation. It is certainly a peculiarity of
PG heaps that deletions don't require any immediate index work --- most
other storage engines would need that.

I propose that we should finish the job by inventing CatalogTupleDelete(),
which for the moment would be a trivial wrapper around
simple_heap_delete(), maybe just a macro for it.

If there's no objections I'll go make that happen in a day or two.

Sounds good.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#54Michael Paquier
michael.paquier@gmail.com
In reply to: Alvaro Herrera (#53)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Feb 1, 2017 at 9:36 AM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:

I propose that we should finish the job by inventing CatalogTupleDelete(),
which for the moment would be a trivial wrapper around
simple_heap_delete(), maybe just a macro for it.

If there's no objections I'll go make that happen in a day or two.

Sounds good.

As you are on it, I have moved the patch to CF 2017-03.
--
Michael


#55Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Alvaro Herrera (#40)
1 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Jan 31, 2017 at 7:21 PM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

+#define HeapTupleHeaderGetNextTid(tup, next_ctid) \
+do { \
+     AssertMacro(!((tup)->t_infomask2 & HEAP_LATEST_TUPLE)); \
+     ItemPointerCopy(&(tup)->t_ctid, (next_ctid)); \
+} while (0)

Actually, I think this macro could just return the TID so that it can be
used as struct assignment, just like ItemPointerCopy does internally --
callers can do
ctid = HeapTupleHeaderGetNextTid(tup);

While I agree with your proposal, I wonder why we have ItemPointerCopy()
in the first place, because we freely copy TIDs as struct assignment. Is
there a reason for that? And if there is, does it impact this specific case?

I dunno. This macro is present in our very first commit d31084e9d1118b.
Maybe it's an artifact from the Lisp to C conversion. Even then, we had
some cases of iptrs being copied by struct assignment, so it's not like
it didn't work. Perhaps somebody envisioned that the internal details
could change, but that hasn't happened in two decades so why should we
worry about it now? If somebody needs it later, it can be changed then.

May I suggest in that case that we apply the attached patch, which removes
all references to ItemPointerCopy as well as its definition? This will
avoid confusion in the future too. No issues noticed in regression tests.

There is one issue that bothers me. The current implementation lacks the
ability to convert WARM chains into HOT chains. The README.WARM has some
proposal to do that. But it requires an additional free bit in the tuple
header (which we don't have) and of course, it needs to be vetted and
implemented.

If the heap ends up with many WARM tuples, then index-only scans will
become ineffective, because an index-only scan cannot skip a heap page if
it contains a WARM tuple. Alternate ideas/suggestions and review of the
design are welcome!

t_infomask2 contains one last unused bit,

Umm, WARM is using 2 unused bits from t_infomask2. You mean there is
another free bit after that too?

and we could reuse vacuum
full's bits (HEAP_MOVED_OUT, HEAP_MOVED_IN), but that will need some
thinking ahead. Maybe now's the time to start versioning relations so
that we can ensure clusters upgraded to pg10 do not contain any of those
bits in any tuple headers.

Yeah, IIRC old VACUUM FULL was removed in 9.0, which is a good 6 years old
now. Obviously, there's still a chance that a pre-9.0 binary-upgraded
cluster exists and upgrades to 10, so we still need to do something about
these bits if we reuse them. I'm surprised to see that we don't have any
mechanism in place to clear those bits, so maybe we should add something
to do that.

I had some other ideas (and a patch too) to reuse bits from t_ctid.ip_pos,
given that offset numbers can be represented in just 13 bits, even with the
maximum block size. I can look at that if it comes to finding more bits.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

remove_itempointercopy.patchapplication/octet-stream; name=remove_itempointercopy.patchDownload
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 5fd7f1e..2bbd59c 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -4642,7 +4642,7 @@ l3:
 		xwait = HeapTupleHeaderGetRawXmax(tuple->t_data);
 		infomask = tuple->t_data->t_infomask;
 		infomask2 = tuple->t_data->t_infomask2;
-		ItemPointerCopy(&tuple->t_data->t_ctid, &t_ctid);
+		t_ctid = tuple->t_data->t_ctid;
 
 		LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
 
@@ -5671,14 +5671,14 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber block;
 
-	ItemPointerCopy(tid, &tupid);
+	tupid = *tid;
 
 	for (;;)
 	{
 		new_infomask = 0;
 		new_xmax = InvalidTransactionId;
 		block = ItemPointerGetBlockNumber(&tupid);
-		ItemPointerCopy(&tupid, &(mytup.t_self));
+		mytup.t_self = tupid;
 
 		if (!heap_fetch(rel, SnapshotAny, &mytup, &buf, false, NULL))
 		{
@@ -5916,7 +5916,7 @@ l4:
 
 		/* tail recursion */
 		priorXmax = HeapTupleHeaderGetUpdateXid(mytup.t_data);
-		ItemPointerCopy(&(mytup.t_data->t_ctid), &tupid);
+		tupid = mytup.t_data->t_ctid;
 		UnlockReleaseBuffer(buf);
 		if (vmbuffer != InvalidBuffer)
 			ReleaseBuffer(vmbuffer);
diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c
index b3e89a4..19b3d80 100644
--- a/src/backend/commands/trigger.c
+++ b/src/backend/commands/trigger.c
@@ -3792,7 +3792,7 @@ AfterTriggerExecute(AfterTriggerEvent event,
 		default:
 			if (ItemPointerIsValid(&(event->ate_ctid1)))
 			{
-				ItemPointerCopy(&(event->ate_ctid1), &(tuple1.t_self));
+				tuple1.t_self = event->ate_ctid1;
 				if (!heap_fetch(rel, SnapshotAny, &tuple1, &buffer1, false, NULL))
 					elog(ERROR, "failed to fetch tuple1 for AFTER trigger");
 				LocTriggerData.tg_trigtuple = &tuple1;
@@ -3809,7 +3809,7 @@ AfterTriggerExecute(AfterTriggerEvent event,
 				AFTER_TRIGGER_2CTID &&
 				ItemPointerIsValid(&(event->ate_ctid2)))
 			{
-				ItemPointerCopy(&(event->ate_ctid2), &(tuple2.t_self));
+				tuple2.t_self = event->ate_ctid2;
 				if (!heap_fetch(rel, SnapshotAny, &tuple2, &buffer2, false, NULL))
 					elog(ERROR, "failed to fetch tuple2 for AFTER trigger");
 				LocTriggerData.tg_newtuple = &tuple2;
@@ -5152,7 +5152,7 @@ AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo,
 			{
 				Assert(oldtup == NULL);
 				Assert(newtup != NULL);
-				ItemPointerCopy(&(newtup->t_self), &(new_event.ate_ctid1));
+				new_event.ate_ctid1 = newtup->t_self;
 				ItemPointerSetInvalid(&(new_event.ate_ctid2));
 			}
 			else
@@ -5169,7 +5169,7 @@ AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo,
 			{
 				Assert(oldtup != NULL);
 				Assert(newtup == NULL);
-				ItemPointerCopy(&(oldtup->t_self), &(new_event.ate_ctid1));
+				new_event.ate_ctid1 = oldtup->t_self;
 				ItemPointerSetInvalid(&(new_event.ate_ctid2));
 			}
 			else
@@ -5186,8 +5186,8 @@ AfterTriggerSaveEvent(EState *estate, ResultRelInfo *relinfo,
 			{
 				Assert(oldtup != NULL);
 				Assert(newtup != NULL);
-				ItemPointerCopy(&(oldtup->t_self), &(new_event.ate_ctid1));
-				ItemPointerCopy(&(newtup->t_self), &(new_event.ate_ctid2));
+				new_event.ate_ctid1 = oldtup->t_self;
+				new_event.ate_ctid2 = newtup->t_self;
 			}
 			else
 			{
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index a8bd583..7ea8e44 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -170,7 +170,7 @@ retry:
 		HTSU_Result res;
 		HeapTupleData locktup;
 
-		ItemPointerCopy(&outslot->tts_tuple->t_self, &locktup.t_self);
+		locktup.t_self = outslot->tts_tuple->t_self;
 
 		PushActiveSnapshot(GetLatestSnapshot());
 
@@ -317,7 +317,7 @@ retry:
 		HTSU_Result res;
 		HeapTupleData locktup;
 
-		ItemPointerCopy(&outslot->tts_tuple->t_self, &locktup.t_self);
+		locktup.t_self = outslot->tts_tuple->t_self;
 
 		PushActiveSnapshot(GetLatestSnapshot());
 
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index d805ef4..3a5d8fc 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -1233,8 +1233,7 @@ ReorderBufferBuildTupleCidHash(ReorderBuffer *rb, ReorderBufferTXN *txn)
 
 		key.relnode = change->data.tuplecid.node;
 
-		ItemPointerCopy(&change->data.tuplecid.tid,
-						&key.tid);
+		key.tid = change->data.tuplecid.tid;
 
 		ent = (ReorderBufferTupleCidEnt *)
 			hash_search(txn->tuplecid_hash,
@@ -3106,9 +3105,7 @@ ApplyLogicalMappingFile(HTAB *tuplecid_data, Oid relid, const char *fname)
 							(int32) sizeof(LogicalRewriteMappingData))));
 
 		key.relnode = map.old_node;
-		ItemPointerCopy(&map.old_tid,
-						&key.tid);
-
+		key.tid = map.old_tid;
 
 		ent = (ReorderBufferTupleCidEnt *)
 			hash_search(tuplecid_data,
@@ -3121,8 +3118,7 @@ ApplyLogicalMappingFile(HTAB *tuplecid_data, Oid relid, const char *fname)
 			continue;
 
 		key.relnode = map.new_node;
-		ItemPointerCopy(&map.new_tid,
-						&key.tid);
+		key.tid = map.new_tid;
 
 		new_ent = (ReorderBufferTupleCidEnt *)
 			hash_search(tuplecid_data,
@@ -3297,8 +3293,7 @@ ResolveCminCmaxDuringDecoding(HTAB *tuplecid_data,
 	Assert(forkno == MAIN_FORKNUM);
 	Assert(blockno == ItemPointerGetBlockNumber(&htup->t_self));
 
-	ItemPointerCopy(&htup->t_self,
-					&key.tid);
+	key.tid = htup->t_self;
 
 restart:
 	ent = (ReorderBufferTupleCidEnt *)
diff --git a/src/backend/utils/adt/tid.c b/src/backend/utils/adt/tid.c
index a3b372f..1eda80d 100644
--- a/src/backend/utils/adt/tid.c
+++ b/src/backend/utils/adt/tid.c
@@ -354,7 +354,7 @@ currtid_byreloid(PG_FUNCTION_ARGS)
 	if (rel->rd_rel->relkind == RELKIND_VIEW)
 		return currtid_for_view(rel, tid);
 
-	ItemPointerCopy(tid, result);
+	*result = *tid;
 
 	snapshot = RegisterSnapshot(GetLatestSnapshot());
 	heap_get_latest_tid(rel, snapshot, result);
@@ -389,7 +389,7 @@ currtid_byrelname(PG_FUNCTION_ARGS)
 		return currtid_for_view(rel, tid);
 
 	result = (ItemPointer) palloc(sizeof(ItemPointerData));
-	ItemPointerCopy(tid, result);
+	*result = *tid;
 
 	snapshot = RegisterSnapshot(GetLatestSnapshot());
 	heap_get_latest_tid(rel, snapshot, result);
#56Tom Lane
tgl@sss.pgh.pa.us
In reply to: Alvaro Herrera (#53)
Re: Patch: Write Amplification Reduction Method (WARM)

Alvaro Herrera <alvherre@2ndquadrant.com> writes:

Tom Lane wrote:

However, the patch misses an
important part of such an abstraction layer by not also converting
catalog-related simple_heap_delete() calls into some sort of
CatalogTupleDelete() operation. It is certainly a peculiarity of
PG heaps that deletions don't require any immediate index work --- most
other storage engines would need that.
I propose that we should finish the job by inventing CatalogTupleDelete(),
which for the moment would be a trivial wrapper around
simple_heap_delete(), maybe just a macro for it.

If there's no objections I'll go make that happen in a day or two.

Sounds good.

So while I was working on this I got quite unhappy with the
already-committed patch: it's a leaky abstraction in more ways than
this, and it's created a possibly-serious performance regression
for large objects (and maybe other places).

The source of both of those problems is that in some places, we
did CatalogOpenIndexes and then used the CatalogIndexState for
multiple tuple inserts/updates before doing CatalogCloseIndexes.
The patch dealt with these either by not touching them, just
leaving the simple_heap_insert/update calls in place (thus failing
to create any abstraction), or by blithely ignoring the optimization
and doing s/simple_heap_insert/CatalogTupleInsert/ anyway. For example,
in inv_api.c we are now doing a CatalogOpenIndexes/CatalogCloseIndexes
cycle for each chunk of the large object ... and just to add insult to
injury, the now-useless open/close calls outside the loop are still there.

I think what we ought to do about this is invent additional API
functions, say

Oid CatalogTupleInsertWithInfo(Relation heapRel, HeapTuple tup,
CatalogIndexState indstate);
void CatalogTupleUpdateWithInfo(Relation heapRel, ItemPointer otid,
HeapTuple tup, CatalogIndexState indstate);

and use these in place of simple_heap_foo plus CatalogIndexInsert
in the places where this optimization had been applied.

An alternative but much more complicated fix would be to get rid of
the necessity for callers to worry about this at all, by caching
a CatalogIndexState in the catalog's relcache entry. That might be
worth doing eventually (because it would allow sharing index info
collection across unrelated operations) but I don't want to do it today.

Objections, better naming ideas?

regards, tom lane


#57Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Tom Lane (#56)
Re: Patch: Write Amplification Reduction Method (WARM)

Tom Lane wrote:

The source of both of those problems is that in some places, we
did CatalogOpenIndexes and then used the CatalogIndexState for
multiple tuple inserts/updates before doing CatalogCloseIndexes.
The patch dealt with these either by not touching them, just
leaving the simple_heap_insert/update calls in place (thus failing
to create any abstraction), or by blithely ignoring the optimization
and doing s/simple_heap_insert/CatalogTupleInsert/ anyway. For example,
in inv_api.c we are now doing a CatalogOpenIndexes/CatalogCloseIndexes
cycle for each chunk of the large object ... and just to add insult to
injury, the now-useless open/close calls outside the loop are still there.

Ouch. You're right, I missed that.

I think what we ought to do about this is invent additional API
functions, say

Oid CatalogTupleInsertWithInfo(Relation heapRel, HeapTuple tup,
CatalogIndexState indstate);
void CatalogTupleUpdateWithInfo(Relation heapRel, ItemPointer otid,
HeapTuple tup, CatalogIndexState indstate);

and use these in place of simple_heap_foo plus CatalogIndexInsert
in the places where this optimization had been applied.

This looks reasonable enough to me.

An alternative but much more complicated fix would be to get rid of
the necessity for callers to worry about this at all, by caching
a CatalogIndexState in the catalog's relcache entry. That might be
worth doing eventually (because it would allow sharing index info
collection across unrelated operations) but I don't want to do it today.

Hmm, interesting idea. No disagreement on postponing.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#58Tom Lane
tgl@sss.pgh.pa.us
In reply to: Alvaro Herrera (#57)
Re: Patch: Write Amplification Reduction Method (WARM)

Alvaro Herrera <alvherre@2ndquadrant.com> writes:

Tom Lane wrote:

I think what we ought to do about this is invent additional API
functions, say

Oid CatalogTupleInsertWithInfo(Relation heapRel, HeapTuple tup,
CatalogIndexState indstate);
void CatalogTupleUpdateWithInfo(Relation heapRel, ItemPointer otid,
HeapTuple tup, CatalogIndexState indstate);

and use these in place of simple_heap_foo plus CatalogIndexInsert
in the places where this optimization had been applied.

This looks reasonable enough to me.

Done.

regards, tom lane


#59Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Tom Lane (#58)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Feb 2, 2017 at 3:49 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

Alvaro Herrera <alvherre@2ndquadrant.com> writes:

Tom Lane wrote:

I think what we ought to do about this is invent additional API
functions, say

Oid CatalogTupleInsertWithInfo(Relation heapRel, HeapTuple tup,
CatalogIndexState indstate);
void CatalogTupleUpdateWithInfo(Relation heapRel, ItemPointer otid,
HeapTuple tup, CatalogIndexState indstate);

and use these in place of simple_heap_foo plus CatalogIndexInsert
in the places where this optimization had been applied.

This looks reasonable enough to me.

Done.

Thanks for taking care of this. Shame that I missed this, because I'd
specifically noted the special casing for large objects etc. But it looks
like while changing 180+ call sites, I forgot my notes.

Thanks again,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#60Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Alvaro Herrera (#45)
3 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Feb 1, 2017 at 3:21 AM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

Alvaro Herrera wrote:

Unless there are objections I will push this later this afternoon.

Done. Let's get on with the show -- please post a rebased WARM.

Please see rebased patches attached. There is not much change other than
the fact the patch now uses new catalog maintenance API.

Do you think we should apply the patch to remove ItemPointerCopy()? I will
rework HeapTupleHeaderGetNextTid() after that. Not that it depends on
removing ItemPointerCopy(), but I decided to postpone it until we make a
call on that patch.

BTW I've now run long stress tests with the patch applied and see no new
issues, even when indexes are dropped and recreated concurrently (this
includes my patch to fix the CIC bug in master, though). In another
24-hour test, WARM could do 274M transactions whereas master did 164M
transactions. I did not drop and recreate indexes during this run.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0002_warm_updates_v11.patchapplication/octet-stream; name=0002_warm_updates_v11.patchDownload
diff --git a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c
index 858798d..7a9a976 100644
--- a/contrib/bloom/blutils.c
+++ b/contrib/bloom/blutils.c
@@ -141,6 +141,7 @@ blhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index b2afdb7..ef3bfa3 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -115,6 +115,7 @@ brinhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c
index c2247ad..2135ae0 100644
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -92,6 +92,7 @@ gisthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c
index ec8ed33..4861957 100644
--- a/src/backend/access/hash/hash.c
+++ b/src/backend/access/hash/hash.c
@@ -89,6 +89,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = hashrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -269,6 +270,8 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 	OffsetNumber offnum;
 	ItemPointer current;
 	bool		res;
+	IndexTuple	itup;
+
 
 	/* Hash indexes are always lossy since we store only the hash code */
 	scan->xs_recheck = true;
@@ -306,8 +309,6 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 			 offnum <= maxoffnum;
 			 offnum = OffsetNumberNext(offnum))
 		{
-			IndexTuple	itup;
-
 			itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 			if (ItemPointerEquals(&(so->hashso_heappos), &(itup->t_tid)))
 				break;
diff --git a/src/backend/access/hash/hashsearch.c b/src/backend/access/hash/hashsearch.c
index a59ad6f..46a334c 100644
--- a/src/backend/access/hash/hashsearch.c
+++ b/src/backend/access/hash/hashsearch.c
@@ -59,6 +59,8 @@ _hash_next(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
 	return true;
 }
 
@@ -408,6 +410,9 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
+
 	return true;
 }
 
diff --git a/src/backend/access/hash/hashutil.c b/src/backend/access/hash/hashutil.c
index c705531..dcba734 100644
--- a/src/backend/access/hash/hashutil.c
+++ b/src/backend/access/hash/hashutil.c
@@ -17,8 +17,12 @@
 #include "access/hash.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
+#include "nodes/execnodes.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 #define CALC_NEW_BUCKET(old_bucket, lowmask) \
 			old_bucket | (lowmask + 1)
@@ -446,3 +450,109 @@ _hash_get_newbucket_from_oldbucket(Relation rel, Bucket old_bucket,
 
 	return new_bucket;
 }
+
+/*
+ * Recheck if the heap tuple satisfies the key stored in the index tuple
+ */
+bool
+hashrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	IndexInfo  *indexInfo;
+	EState	   *estate;
+	ExprContext *econtext;
+	TupleTableSlot *slot;
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	Datum		values2[INDEX_MAX_KEYS];
+	bool		isnull2[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	indexInfo = BuildIndexInfo(indexRel);
+
+	/*
+	 * The heap tuple must be put into a slot for FormIndexDatum.
+	 */
+	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
+
+	ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
+
+	/*
+	 * Typically the index won't have expressions, but if it does we need an
+	 * EState to evaluate them.  We need it for exclusion constraints too,
+	 * even if they are just on simple columns.
+	 */
+	if (indexInfo->ii_Expressions != NIL ||
+			indexInfo->ii_ExclusionOps != NULL)
+	{
+		estate = CreateExecutorState();
+		econtext = GetPerTupleExprContext(estate);
+		econtext->ecxt_scantuple = slot;
+	}
+	else
+		estate = NULL;
+
+	/*
+	 * Form the index values and isnull flags for the index entry that we need
+	 * to check.
+	 *
+	 * Note: if the index uses functions that are not as immutable as they are
+	 * supposed to be, this could produce an index tuple different from the
+	 * original.  The index AM can catch such errors by verifying that it
+	 * finds a matching index entry with the tuple's TID.  For exclusion
+	 * constraints we check this in check_exclusion_constraint().
+	 */
+	FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
+	/*
+	 * HASH indexes compute a hash value of the key and store that in the
+	 * index. So we must first obtain the hash of the value obtained from the
+	 * heap and then do a comparison
+	 */
+	_hash_convert_tuple(indexRel, values, isnull, values2, isnull2);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL then they are equal
+		 */
+		if (isnull2[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If either is NULL then they are not equal
+		 */
+		if (isnull2[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now do a raw memory comparison
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values2[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	if (estate != NULL)
+		FreeExecutorState(estate);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	return equal;
+}
diff --git a/src/backend/access/heap/README.WARM b/src/backend/access/heap/README.WARM
new file mode 100644
index 0000000..7b9a712
--- /dev/null
+++ b/src/backend/access/heap/README.WARM
@@ -0,0 +1,306 @@
+src/backend/access/heap/README.WARM
+
+Write Amplification Reduction Method (WARM)
+===========================================
+
+The Heap Only Tuple (HOT) feature greatly reduced redundant index
+entries and allowed re-use of the dead space occupied by previously
+updated or deleted tuples (see src/backend/access/heap/README.HOT).
+
+One of the necessary conditions for a HOT update is that the update
+must not change any column used in any of the indexes on the table.
+The condition is sometimes hard to meet, especially for complex
+workloads with several indexes on large yet frequently updated tables.
+Worse, sometimes only one or two index columns may be updated, but the
+regular non-HOT update will still insert a new index entry in every
+index on the table, irrespective of whether the key pertaining to the
+index changed or not.
+
+WARM is a technique devised to address these problems.
+
+
+Update Chains With Multiple Index Entries Pointing to the Root
+--------------------------------------------------------------
+
+When a non-HOT update is caused by an index key change, a new index
+entry must be inserted for the changed index. But if the index key
+hasn't changed for other indexes, we don't really need to insert a new
+entry. Even though the existing index entry points to the old tuple,
+the new tuple is reachable via the t_ctid chain. To keep things
+simple, a WARM update requires that the heap block have enough space
+to store the new version of the tuple, the same requirement as for a
+HOT update.
+
+In WARM, we ensure that every index entry always points to the root of
+the WARM chain. In fact, a WARM chain looks exactly like a HOT chain
+except for the fact that there could be multiple index entries pointing
+to the root of the chain. So when a new entry is inserted in an index
+for the updated tuple during a WARM update, the new entry is made to
+point to the root of the WARM chain.
+
+For example, consider a table with two columns and an index on each of
+them. When a tuple is first inserted into the table, each index has
+exactly one entry pointing to the tuple.
+
+	lp [1]
+	[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's entry (aaaa) also points to 1
+
+Now if the tuple's second column is updated and there is room on the
+page, we perform a WARM update. In that case Index1 does not get any
+new entry, while Index2's new entry still points to the root tuple of
+the chain.
+
+	lp [1]  [2]
+	[1111, aaaa]->[1111, bbbb]
+
+	Index1's entry (1111) points to 1
+	Index2's old entry (aaaa) points to 1
+	Index2's new entry (bbbb) also points to 1
+
+"An update chain that has more than one index entry pointing to its
+root line pointer is called a WARM chain, and the action that creates
+a WARM chain is called a WARM update."
+
+Since all indexes always point to the root of the WARM chain, even
+when there is more than one index entry, WARM chains can be pruned and
+dead tuples removed without any corresponding index cleanup.
+
+While this solves the problem of pruning dead tuples from a HOT/WARM
+chain, it also opens up a new technical challenge because now we have a
+situation where a heap tuple is reachable from multiple index entries,
+each having a different index key. While MVCC still ensures that only
+valid tuples are returned, a tuple with a wrong index key may be
+returned because of wrong index entries. In the above example, tuple
+[1111, bbbb] is reachable from both keys (aaaa) and (bbbb). For this
+reason, tuples returned from a WARM chain must always be rechecked for
+an index key match.
+
+Recheck Index Key Against Heap Tuple
+------------------------------------
+
+Since every index AM has its own notion of index tuples, each index AM
+must implement its own method to recheck heap tuples. For example, a
+hash index stores the hash value of the column, so the recheck routine
+for the hash AM must first compute the hash value of the heap
+attribute and then compare it against the value stored in the index
+tuple.
+
+The patch currently implements recheck routines for hash and btree
+indexes. If the table has an index whose AM does not provide a recheck
+routine, WARM updates are disabled on that table.
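To make the recheck described above concrete, here is a standalone C sketch of the comparison step (hypothetical `recheck_index_keys` helper with by-value datums only; the real code goes through FormIndexDatum, index_getattr and datumIsEqual, and the hash AM additionally hashes the heap value first):

```c
#include <stdbool.h>
#include <stdint.h>

typedef uintptr_t Datum;		/* by-value simplification of the real Datum */

/*
 * Compare values recomputed from the heap tuple against the values stored
 * in the index tuple. NULLs are equal only to NULLs; otherwise the datums
 * are compared one by one.
 */
bool
recheck_index_keys(const Datum *heap_values, const bool *heap_isnull,
				   const Datum *index_values, const bool *index_isnull,
				   int natts)
{
	for (int i = 0; i < natts; i++)
	{
		if (heap_isnull[i] && index_isnull[i])
			continue;			/* both NULL: treated as equal */
		if (heap_isnull[i] || index_isnull[i])
			return false;		/* exactly one NULL: not equal */
		if (heap_values[i] != index_values[i])
			return false;		/* by-value datum comparison */
	}
	return true;
}
```

A tuple reached via a stale WARM pointer fails this test and is simply skipped by the scan.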
+
+Problem With Duplicate (key, ctid) Index Entries
+------------------------------------------------
+
+The index-key recheck logic works as long as no two duplicate index
+keys point to the same WARM chain. Otherwise, the same valid tuple
+becomes reachable via multiple index entries, each of which satisfies
+the index key check. In the above example, if the tuple [1111, bbbb] is
+again updated to [1111, aaaa] and if we insert a new index entry (aaaa)
+pointing to the root line pointer, we will end up with the following
+structure:
+
+	lp [1]  [2]  [3]
+	[1111, aaaa]->[1111, bbbb]->[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's oldest entry (aaaa) points to 1
+	Index2's old entry (bbbb) also points to 1
+	Index2's new entry (aaaa) also points to 1
+
+We must solve this problem to ensure that the same tuple is not
+reachable via multiple index pointers. There are a couple of ways to
+address this issue:
+
+1. Do not allow WARM update to a tuple from a WARM chain. This
+guarantees that there can never be duplicate index entries to the same
+root line pointer because we must have checked for old and new index
+keys while doing the first WARM update.
+
+2. Do not allow duplicate (key, ctid) index pointers. In the above
+example, since (aaaa, 1) already exists in the index, we must not insert
+a duplicate index entry.
+
+The patch currently implements option 1, i.e. it does not WARM update
+a tuple that already belongs to a WARM chain. HOT updates are fine
+because they do not add new index entries.
+
+Even with this restriction, WARM is a significant improvement because
+the number of regular (non-HOT) updates can be cut by as much as half.
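The decision rule implied above can be sketched as a standalone predicate (hypothetical names and flags for illustration; the real logic lives in heap_update and also checks page free space, expression indexes, and so on):

```c
#include <stdbool.h>

typedef enum { UPDATE_HOT, UPDATE_WARM, UPDATE_COLD } UpdateKind;

/*
 * Classify an update: HOT if no indexed column changed; WARM if some but
 * not all indexed columns changed and the chain has not already been WARM
 * updated; otherwise a regular (cold) update that inserts a new entry in
 * every index.
 */
UpdateKind
classify_update(bool index_cols_changed, bool all_index_cols_changed,
				bool chain_already_warm)
{
	if (!index_cols_changed)
		return UPDATE_HOT;		/* no index key changed: plain HOT update */
	if (!all_index_cols_changed && !chain_already_warm)
		return UPDATE_WARM;		/* first WARM update on this chain */
	return UPDATE_COLD;			/* new entry in every index */
}
```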
+
+Expression and Partial Indexes
+------------------------------
+
+Expressions may evaluate to the same value even if the underlying
+column values have changed. A simple example is an index on
+"lower(col)", which returns the same value if the new heap value
+differs only in case. So we cannot rely solely on the heap column
+check to decide whether or not to insert a new index entry for
+expression indexes. Similarly, for partial indexes, the predicate
+expression must be evaluated to decide whether or not a new index
+entry is needed when columns referenced in the predicate change.
+
+(Neither is currently implemented; we simply disallow a WARM update if
+a column used in an index expression or predicate has changed.)
+
+
+Efficiently Finding the Root Line Pointer
+-----------------------------------------
+
+During a WARM update, we must be able to find the root line pointer of
+the tuple being updated. The t_ctid field in the heap tuple header is
+normally used to find the next tuple in the update chain. But the
+tuple being updated must be the last tuple in the chain, and in that
+case t_ctid usually points to the tuple itself. So we can use t_ctid
+to store additional information in the last tuple of the update chain,
+provided the fact that it is the last tuple is recorded elsewhere.
+
+We now utilize another bit from t_infomask2 to explicitly identify that
+this is the last tuple in the update chain.
+
+HEAP_LATEST_TUPLE - When this bit is set, the tuple is the last tuple in
+the update chain. The OffsetNumber part of t_ctid points to the root
+line pointer of the chain when HEAP_LATEST_TUPLE flag is set.
+
+If the UPDATE operation aborts, the last tuple in the update chain
+becomes dead, and the root line pointer information stored in it is
+lost; the tuple that remains the last valid tuple in the chain does
+not carry it. In such rare cases, the root line pointer must be found
+the hard way, by scanning the entire heap page.
+
+Tracking WARM Chains
+--------------------
+
+The old tuple and every subsequent tuple in the chain are marked with
+a special HEAP_WARM_TUPLE flag. We use the last remaining bit in
+t_infomask2 to store this information.
+
+When a tuple is returned from a WARM chain, the caller must do
+additional checks to ensure that the tuple matches the index key. Even
+if the tuple precedes the WARM update in the chain, it must still be
+rechecked for an index key match (to cover the case where the old
+tuple is reached via the new index key). So every time, we must follow
+the update chain to the end to check whether this is a WARM chain.
+
+When the old updated tuple is retired and the root line pointer is
+converted into a redirected line pointer, we can copy the information
+about WARM chain to the redirected line pointer by storing a special
+value in the lp_len field of the line pointer. This will handle the most
+common case where a WARM chain is replaced by a redirect line pointer
+and a single tuple in the chain.
+
+Converting WARM chains back to HOT chains (VACUUM ?)
+----------------------------------------------------
+
+The current implementation of WARM allows only one WARM update per
+chain. This simplifies the design and addresses certain issues around
+duplicate scans, but it also implies that the benefit of WARM is
+capped at 50%. That is still significant; if we could return WARM
+chains back to normal status, we could do far more WARM updates.
+
+A distinct property of a WARM chain is that at least one index has
+more than one live index entry pointing to the root of the chain. In
+other words, if we can remove the duplicate entry from every index, or
+conclusively prove that there are no duplicate index entries for the
+root line pointer, the chain can again be marked as HOT.
+
+Here is one idea:
+
+A WARM chain has two parts, separated by the tuple that caused the
+WARM update. All tuples within each part have matching index keys, but
+certain index keys may differ between the two parts. Let's say we mark
+heap tuples in each part with a special Red-Blue flag. The same flag
+is replicated in the index tuples. For example, when new rows are
+inserted into a table, they are marked with the Blue flag and the
+index entries associated with those rows are also marked with the Blue
+flag. When a row is WARM updated, the new version is marked with the
+Red flag and the new index entry created by the update is also marked
+with the Red flag.
+
+
+Heap chain: [1] [2] [3] [4]
+			[aaaa, 1111]B -> [aaaa, 1111]B -> [bbbb, 1111]R -> [bbbb, 1111]R
+
+Index1: 	(aaaa)B points to 1 (satisfies only tuples marked with B)
+			(bbbb)R points to 1 (satisfies only tuples marked with R)
+
+Index2:		(1111)B points to 1 (satisfies both B and R tuples)
+
+
+It's clear that for indexes with both Red and Blue pointers, a heap
+tuple with the Blue flag will be reachable from the Blue pointer and
+one with the Red flag from the Red pointer. But for indexes which did
+not create a new entry, both Blue and Red tuples will be reachable
+from the Blue pointer (there is no Red pointer in such indexes). So,
+as a side note, matching Red and Blue flags alone is not enough from
+an index scan perspective.
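The visibility rule just described can be sketched as a small predicate (hypothetical names; whether an index ever received a Red pointer would have to be tracked per index):

```c
#include <stdbool.h>

typedef enum { COLOR_BLUE, COLOR_RED } ChainColor;

/*
 * A heap tuple satisfies an index pointer if the colors match, or if the
 * index never got a Red pointer, in which case its single Blue pointer
 * must serve tuples of both colors.
 */
bool
pointer_satisfies_tuple(ChainColor pointer_color, ChainColor tuple_color,
						bool index_has_red_pointer)
{
	if (!index_has_red_pointer)
		return true;			/* lone Blue pointer covers both chain parts */
	return pointer_color == tuple_color;
}
```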
+
+During first heap scan of VACUUM, we look for tuples with
+HEAP_WARM_TUPLE set.  If all live tuples in the chain are either marked
+with Blue flag or Red flag (but no mix of Red and Blue), then the chain
+is a candidate for HOT conversion.  We remember the root line pointer
+and Red-Blue flag of the WARM chain in a separate array.
+
+If we have a Red WARM chain, then our goal is to remove the Blue
+pointers, and vice versa. But there is a catch. For Index2 above,
+there is only a Blue pointer, and it must not be removed. In other
+words, we should remove a Blue pointer iff a Red pointer exists. Since
+index vacuum may visit Red and Blue pointers in any order, I think we
+will need another index pass to remove dead index pointers. So in the
+first index pass we check which WARM candidates have two index
+pointers. In the second pass, we remove the dead pointer and reset the
+Red flag if the surviving index pointer is Red.
+
+During the second heap scan, we fix the WARM chains by clearing the
+HEAP_WARM_TUPLE flag and resetting Red flags back to Blue.
+
+There are some more problems around aborted vacuums. For example, if
+vacuum aborts after changing Red index flag to Blue but before removing
+the other Blue pointer, we will end up with two Blue pointers to a Red
+WARM chain. But since the HEAP_WARM_TUPLE flag on the heap tuple is
+still set, further WARM updates to the chain will be blocked. I guess
+we will need some special handling for the case of multiple Blue
+pointers. We can either leave these WARM chains alone and let them die
+with a subsequent non-WARM update, or apply the heap-recheck logic
+during index vacuum to find the dead pointer. Given that vacuum aborts
+are not common, I am inclined to leave this case unhandled. We must
+still check for the presence of multiple Blue pointers and ensure that
+we neither accidentally remove either of the Blue pointers nor clear
+the WARM flag on such chains.
+
+CREATE INDEX CONCURRENTLY
+-------------------------
+
+Currently CREATE INDEX CONCURRENTLY (CIC) is implemented as a 3-phase
+process.  In the first phase, we create a catalog entry for the new
+index so that the index is visible to all other backends, but still
+don't use it for either reads or writes, while ensuring that no new
+broken HOT chains are created by new transactions. In the second
+phase, we build the new index using an MVCC snapshot and then make the
+index available for inserts. We then do another pass over the index
+and insert any missing tuples, each time indexing only the tuple's
+root line pointer. See README.HOT for details about how HOT impacts
+CIC and how various challenges are tackled.
+
+WARM poses another challenge because it allows the creation of HOT
+chains even when an index key changes. Since the index is not ready
+for insertion until the second phase is over, we might end up with a
+situation where the HOT chain has tuples with different index columns,
+yet only one of those values is indexed by the new index. Note that
+during the third phase, we only index tuples whose root line pointer is
+missing from the index. But we can't easily check if the existing index
+tuple is actually indexing the heap tuple visible to the new MVCC
+snapshot. Finding that information will require us to query the index
+again for every tuple in the chain, especially if it's a WARM tuple.
+This would require repeated access to the index. Another option would be
+to return index keys along with the heap TIDs when index is scanned for
+collecting all indexed TIDs during third phase. We can then compare the
+heap tuple against the already indexed key and decide whether or not to
+index the new tuple.
+
+We solve this problem more simply by disallowing WARM updates until
+the index is ready for insertion. We need not disallow WARM updates on
+a wholesale basis: only those updates that change the columns of the
+new index are prevented from being WARM updates.
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 5149c07..8be0137 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -1957,6 +1957,78 @@ heap_fetch(Relation relation,
 }
 
 /*
+ * Check if the HOT chain containing this tid is actually a WARM chain.
+ * Note that even if the WARM update ultimately aborted, we still must do a
+ * recheck because the failing UPDATE may have inserted index entries which
+ * are now stale, but still reference this chain.
+ */
+static bool
+hot_check_warm_chain(Page dp, ItemPointer tid)
+{
+	TransactionId prev_xmax = InvalidTransactionId;
+	OffsetNumber offnum;
+	HeapTupleData heapTuple;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+			break;
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		/*
+		 * Presence of either a WARM or a WARM-updated tuple signals possible
+		 * breakage, and the caller must recheck tuples returned from this
+		 * chain for index satisfaction
+		 */
+		if (HeapTupleHeaderIsHeapWarmTuple(heapTuple.t_data))
+			return true;
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
+		/*
+		 * It can't be a HOT chain if the tuple contains a root line pointer
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	/* All OK. No need to recheck */
+	return false;
+}
+
+/*
  *	heap_hot_search_buffer	- search HOT chain for tuple satisfying snapshot
  *
  * On entry, *tid is the TID of a tuple (either a simple tuple, or the root
@@ -1976,11 +2048,14 @@ heap_fetch(Relation relation,
  * Unlike heap_fetch, the caller must already have pin and (at least) share
  * lock on the buffer; it is still pinned/locked at exit.  Also unlike
  * heap_fetch, we do not report any pgstats count; caller may do so if wanted.
+ *
+ * recheck should be set false on entry by caller, will be set true on exit
+ * if a WARM tuple is encountered.
  */
 bool
 heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 					   Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call)
+					   bool *all_dead, bool first_call, bool *recheck)
 {
 	Page		dp = (Page) BufferGetPage(buffer);
 	TransactionId prev_xmax = InvalidTransactionId;
@@ -2034,9 +2109,12 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		ItemPointerSetOffsetNumber(&heapTuple->t_self, offnum);
 
 		/*
-		 * Shouldn't see a HEAP_ONLY tuple at chain start.
+		 * Shouldn't see a HEAP_ONLY tuple at chain start, unless we are
+		 * dealing with a WARM-updated tuple, in which case deferred triggers
+		 * may request to fetch a WARM tuple from the middle of a chain.
 		 */
-		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple))
+		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple) &&
+				!HeapTupleIsHeapWarmTuple(heapTuple))
 			break;
 
 		/*
@@ -2049,6 +2127,16 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 			break;
 
 		/*
+		 * Check if there exists a WARM tuple somewhere down the chain and set
+		 * recheck to TRUE.
+		 *
+		 * XXX This is not very efficient right now, and we should look for
+		 * possible improvements here
+		 */
+		if (recheck && *recheck == false)
+			*recheck = hot_check_warm_chain(dp, &heapTuple->t_self);
+
+		/*
 		 * When first_call is true (and thus, skip is initially false) we'll
 		 * return the first tuple we find.  But on later passes, heapTuple
 		 * will initially be pointing to the tuple we returned last time.
@@ -2097,7 +2185,8 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		 * Check to see if HOT chain continues past this tuple; if so fetch
 		 * the next offnum and loop around.
 		 */
-		if (HeapTupleIsHotUpdated(heapTuple))
+		if (HeapTupleIsHotUpdated(heapTuple) &&
+			!HeapTupleHeaderHasRootOffset(heapTuple->t_data))
 		{
 			Assert(ItemPointerGetBlockNumber(&heapTuple->t_data->t_ctid) ==
 				   ItemPointerGetBlockNumber(tid));
@@ -2121,18 +2210,41 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
  */
 bool
 heap_hot_search(ItemPointer tid, Relation relation, Snapshot snapshot,
-				bool *all_dead)
+				bool *all_dead, bool *recheck, Buffer *cbuffer,
+				HeapTuple heapTuple)
 {
 	bool		result;
 	Buffer		buffer;
-	HeapTupleData heapTuple;
+	ItemPointerData ret_tid = *tid;
 
 	buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	LockBuffer(buffer, BUFFER_LOCK_SHARE);
-	result = heap_hot_search_buffer(tid, relation, buffer, snapshot,
-									&heapTuple, all_dead, true);
-	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-	ReleaseBuffer(buffer);
+	result = heap_hot_search_buffer(&ret_tid, relation, buffer, snapshot,
+									heapTuple, all_dead, true, recheck);
+
+	/*
+	 * If we are returning a potential candidate tuple from this chain and the
+	 * caller has requested the "recheck" hint, keep the buffer locked and
+	 * pinned. The caller must release the lock and pin on the buffer in all
+	 * such cases.
+	 */
+	if (!result || !recheck || !(*recheck))
+	{
+		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+		ReleaseBuffer(buffer);
+	}
+
+	/*
+	 * Set the caller-supplied tid to the actual location of the tuple being
+	 * returned
+	 */
+	if (result)
+	{
+		*tid = ret_tid;
+		if (cbuffer)
+			*cbuffer = buffer;
+	}
+
 	return result;
 }
 
@@ -3491,15 +3603,18 @@ simple_heap_delete(Relation relation, ItemPointer tid)
 HTSU_Result
 heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode)
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update)
 {
 	HTSU_Result result;
 	TransactionId xid = GetCurrentTransactionId();
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *exprindx_attrs;
 	Bitmapset  *interesting_attrs;
 	Bitmapset  *modified_attrs;
+	Bitmapset  *notready_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3520,6 +3635,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		have_tuple_lock = false;
 	bool		iscombo;
 	bool		use_hot_update = false;
+	bool		use_warm_update = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
 	bool		all_visible_cleared_new = false;
@@ -3544,6 +3660,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot update tuples during a parallel operation")));
 
+	/* Assume a non-WARM update */
+	if (warm_update)
+		*warm_update = false;
+
 	/*
 	 * Fetch the list of attributes to be checked for various operations.
 	 *
@@ -3565,10 +3685,17 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	exprindx_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_EXPR_PREDICATE);
+	notready_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_NOTREADY);
+
+
 	interesting_attrs = bms_add_members(NULL, hot_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
-
+	interesting_attrs = bms_add_members(interesting_attrs, exprindx_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, notready_attrs);
 
 	block = ItemPointerGetBlockNumber(otid);
 	offnum = ItemPointerGetOffsetNumber(otid);
@@ -3620,6 +3747,9 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
 												  &oldtup, newtup);
 
+	if (modified_attrsp)
+		*modified_attrsp = bms_copy(modified_attrs);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3875,6 +4005,7 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(exprindx_attrs);
 		bms_free(modified_attrs);
 		bms_free(interesting_attrs);
 		return result;
@@ -4193,6 +4324,37 @@ l2:
 		 */
 		if (!bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
+		else
+		{
+			/*
+			 * If no WARM updates yet on this chain, let this update be a WARM
+			 * update.
+			 *
+			 * We check for both warm and warm updated tuples since if the
+			 * previous WARM update aborted, we may still have added
+			 * another index entry for this HOT chain. In such situations, we
+			 * must not attempt a WARM update until duplicate (key, CTID) index
+			 * entry issue is sorted out
+			 *
+			 * XXX Later we'll add more checks to ensure WARM chains can
+			 * further be WARM updated. This is probably good enough for a
+			 * first round of tests of the remaining functionality
+			 *
+			 * XXX Disable WARM updates on system tables. There is nothing in
+			 * principle that stops us from supporting this. But it would
+			 * require an API change to propagate the changed columns back to
+			 * the caller so that CatalogUpdateIndexes() can avoid adding new
+			 * entries to indexes that are not changed by the update. This
+			 * will be fixed once the basic patch is tested. !!FIXME
+			 */
+			if (relation->rd_supportswarm &&
+				!bms_overlap(modified_attrs, exprindx_attrs) &&
+				!bms_is_subset(hot_attrs, modified_attrs) &&
+				!IsSystemRelation(relation) &&
+				!bms_overlap(notready_attrs, modified_attrs) &&
+				!HeapTupleIsHeapWarmTuple(&oldtup))
+				use_warm_update = true;
+		}
 	}
 	else
 	{
@@ -4239,6 +4401,22 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+
+		/*
+		 * Even if we are doing a HOT update, we must carry forward the WARM
+		 * flag because we may have already inserted another index entry
+		 * pointing to our root and a third entry may create duplicates
+		 *
+		 * Note: If we ever have a mechanism to avoid duplicate <key, TID> in
+		 * indexes, we could look at relaxing this restriction and allow even
+		 * more WARM updates
+		 */
+		if (HeapTupleIsHeapWarmTuple(&oldtup))
+		{
+			HeapTupleSetHeapWarmTuple(heaptup);
+			HeapTupleSetHeapWarmTuple(newtup);
+		}
+
 		/*
 		 * For HOT (or WARM) updated tuples, we store the offset of the root
 		 * line pointer of this chain in the ip_posid field of the new tuple.
@@ -4251,12 +4429,35 @@ l2:
 		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
 			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
+	else if (use_warm_update)
+	{
+		/* Mark the old tuple as HOT-updated */
+		HeapTupleSetHotUpdated(&oldtup);
+		HeapTupleSetHeapWarmTuple(&oldtup);
+		/* And mark the new tuple as heap-only */
+		HeapTupleSetHeapOnly(heaptup);
+		HeapTupleSetHeapWarmTuple(heaptup);
+		/* Mark the caller's copy too, in case different from heaptup */
+		HeapTupleSetHeapOnly(newtup);
+		HeapTupleSetHeapWarmTuple(newtup);
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
+		/* Let the caller know we did a WARM update */
+		if (warm_update)
+			*warm_update = true;
+	}
 	else
 	{
 		/* Make sure tuples are correctly marked as not-HOT */
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		HeapTupleClearHeapWarmTuple(heaptup);
+		HeapTupleClearHeapWarmTuple(newtup);
 		root_offnum = InvalidOffsetNumber;
 	}
 
@@ -4366,7 +4567,10 @@ l2:
 	if (have_tuple_lock)
 		UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode);
 
-	pgstat_count_heap_update(relation, use_hot_update);
+	/*
+	 * Count HOT and WARM updates separately
+	 */
+	pgstat_count_heap_update(relation, use_hot_update, use_warm_update);
 
 	/*
 	 * If heaptup is a private copy, release it.  Don't forget to copy t_self
@@ -4506,7 +4710,8 @@ HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
  * via ereport().
  */
 void
-simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
+simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup,
+		Bitmapset **modified_attrs, bool *warm_update)
 {
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
@@ -4515,7 +4720,7 @@ simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
 	result = heap_update(relation, otid, tup,
 						 GetCurrentCommandId(true), InvalidSnapshot,
 						 true /* wait for commit */ ,
-						 &hufd, &lockmode);
+						 &hufd, &lockmode, modified_attrs, warm_update);
 	switch (result)
 	{
 		case HeapTupleSelfUpdated:
@@ -7567,6 +7772,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	bool		need_tuple_data = RelationIsLogicallyLogged(reln);
 	bool		init;
 	int			bufflags;
+	bool		warm_update = false;
 
 	/* Caller should not call me on a non-WAL-logged relation */
 	Assert(RelationNeedsWAL(reln));
@@ -7578,6 +7784,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	else
 		info = XLOG_HEAP_UPDATE;
 
+	if (HeapTupleIsHeapWarmTuple(newtup))
+		warm_update = true;
+
 	/*
 	 * If the old and new tuple are on the same page, we only need to log the
 	 * parts of the new tuple that were changed.  That saves on the amount of
@@ -7651,6 +7860,8 @@ log_heap_update(Relation reln, Buffer oldbuf,
 				xlrec.flags |= XLH_UPDATE_CONTAINS_OLD_KEY;
 		}
 	}
+	if (warm_update)
+		xlrec.flags |= XLH_UPDATE_WARM_UPDATE;
 
 	/* If new tuple is the single and first tuple on page... */
 	if (ItemPointerGetOffsetNumber(&(newtup->t_self)) == FirstOffsetNumber &&
@@ -8628,16 +8839,22 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 	Size		freespace = 0;
 	XLogRedoAction oldaction;
 	XLogRedoAction newaction;
+	bool		warm_update = false;
 
 	/* initialize to keep the compiler quiet */
 	oldtup.t_data = NULL;
 	oldtup.t_len = 0;
 
+	if (xlrec->flags & XLH_UPDATE_WARM_UPDATE)
+		warm_update = true;
+
 	XLogRecGetBlockTag(record, 0, &rnode, NULL, &newblk);
 	if (XLogRecGetBlockTag(record, 1, NULL, NULL, &oldblk))
 	{
 		/* HOT updates are never done across pages */
 		Assert(!hot_update);
+		/* WARM updates are never done across pages */
+		Assert(!warm_update);
 	}
 	else
 		oldblk = newblk;
@@ -8697,6 +8914,11 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 								   &htup->t_infomask2);
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
+
+		/* Mark the old tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetHeapWarmTuple(htup);
+
 		/* Set forward chain link in t_ctid */
 		HeapTupleHeaderSetNextTid(htup, &newtid);
 
@@ -8832,6 +9054,10 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
 
+		/* Mark the new tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetHeapWarmTuple(htup);
+
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index f54337c..c2bd7d6 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -834,6 +834,13 @@ heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				continue;
 
+			/* 
+			 * If the tuple has a root line pointer, it must be the end of the
+			 * chain
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			/* Set up to scan the HOT-chain */
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
diff --git a/src/backend/access/index/indexam.c b/src/backend/access/index/indexam.c
index ba27c1e..3cbe1d0 100644
--- a/src/backend/access/index/indexam.c
+++ b/src/backend/access/index/indexam.c
@@ -75,10 +75,12 @@
 #include "access/xlog.h"
 #include "catalog/catalog.h"
 #include "catalog/index.h"
+#include "executor/executor.h"
 #include "pgstat.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
+#include "utils/datum.h"
 #include "utils/snapmgr.h"
 #include "utils/tqual.h"
 
@@ -233,6 +235,21 @@ index_beginscan(Relation heapRelation,
 	scan->heapRelation = heapRelation;
 	scan->xs_snapshot = snapshot;
 
+	/*
+	 * If the index supports recheck, make sure that the index tuple is saved
+	 * during index scans.
+	 *
+	 * XXX Ideally, we should look at all indexes on the table and check if
+	 * WARM is at all supported on the base table. If WARM is not supported
+	 * then we don't need to do any recheck. RelationGetIndexAttrBitmap() does
+	 * do that and sets rd_supportswarm after looking at all indexes. But we
+	 * do that and sets rd_supportswarm after looking at all indexes. But we
+	 * don't know whether that happened earlier in this session, and we
+	 * can't call it here because of the risk of causing a deadlock.
+	 */
+	if (indexRelation->rd_amroutine->amrecheck)
+		scan->xs_want_itup = true;
+
 	return scan;
 }
 
@@ -534,7 +551,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
 	/*
 	 * The AM's amgettuple proc finds the next index entry matching the scan
 	 * keys, and puts the TID into scan->xs_ctup.t_self.  It should also set
-	 * scan->xs_recheck and possibly scan->xs_itup, though we pay no attention
+	 * scan->xs_tuple_recheck and possibly scan->xs_itup, though we pay no attention
 	 * to those fields here.
 	 */
 	found = scan->indexRelation->rd_amroutine->amgettuple(scan, direction);
@@ -573,7 +590,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
  * dropped in a future index_getnext_tid, index_fetch_heap or index_endscan
  * call).
  *
- * Note: caller must check scan->xs_recheck, and perform rechecking of the
+ * Note: caller must check scan->xs_tuple_recheck, and perform rechecking of the
  * scan keys if required.  We do not do that here because we don't have
  * enough information to do it efficiently in the general case.
  * ----------------
@@ -600,6 +617,12 @@ index_fetch_heap(IndexScanDesc scan)
 		 */
 		if (prev_buf != scan->xs_cbuf)
 			heap_page_prune_opt(scan->heapRelation, scan->xs_cbuf);
+
+		/*
+		 * Reset the per-tuple recheck flag from the per-scan default. If the
+		 * scan always rechecks, then every tuple must be rechecked.
+		 */
+		scan->xs_tuple_recheck = scan->xs_recheck;
 	}
 
 	/* Obtain share-lock on the buffer so we can examine visibility */
@@ -609,32 +632,64 @@ index_fetch_heap(IndexScanDesc scan)
 											scan->xs_snapshot,
 											&scan->xs_ctup,
 											&all_dead,
-											!scan->xs_continue_hot);
+											!scan->xs_continue_hot,
+											&scan->xs_tuple_recheck);
 	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
 
 	if (got_heap_tuple)
 	{
+		bool res = true;
+
+		/*
+		 * OK, we got a tuple which satisfies the snapshot, but if it's part of
+		 * a WARM chain, we must do additional checks to ensure that we are
+		 * indeed returning a correct tuple. Note that if the index AM does not
+		 * implement the amrecheck method, then we don't do any additional
+		 * checks, since WARM must have been disabled on such tables.
+		 *
+		 * XXX What happens when a new index which does not support amrecheck
+		 * is added to the table? Do we need to handle this case, or are CREATE
+		 * INDEX and CREATE INDEX CONCURRENTLY smart enough to handle this
+		 * issue?
+		 */
+		if (scan->xs_tuple_recheck &&
+				scan->xs_itup &&
+				scan->indexRelation->rd_amroutine->amrecheck)
+		{
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_SHARE);
+			res = scan->indexRelation->rd_amroutine->amrecheck(
+						scan->indexRelation,
+						scan->xs_itup,
+						scan->heapRelation,
+						&scan->xs_ctup);
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+		}
+
 		/*
 		 * Only in a non-MVCC snapshot can more than one member of the HOT
 		 * chain be visible.
 		 */
 		scan->xs_continue_hot = !IsMVCCSnapshot(scan->xs_snapshot);
 		pgstat_count_heap_fetch(scan->indexRelation);
-		return &scan->xs_ctup;
-	}
 
-	/* We've reached the end of the HOT chain. */
-	scan->xs_continue_hot = false;
+		if (res)
+			return &scan->xs_ctup;
+	}
+	else
+	{
+		/* We've reached the end of the HOT chain. */
+		scan->xs_continue_hot = false;
 
-	/*
-	 * If we scanned a whole HOT chain and found only dead tuples, tell index
-	 * AM to kill its entry for that TID (this will take effect in the next
-	 * amgettuple call, in index_getnext_tid).  We do not do this when in
-	 * recovery because it may violate MVCC to do so.  See comments in
-	 * RelationGetIndexScan().
-	 */
-	if (!scan->xactStartedInRecovery)
-		scan->kill_prior_tuple = all_dead;
+		/*
+		 * If we scanned a whole HOT chain and found only dead tuples, tell index
+		 * AM to kill its entry for that TID (this will take effect in the next
+		 * amgettuple call, in index_getnext_tid).  We do not do this when in
+		 * recovery because it may violate MVCC to do so.  See comments in
+		 * RelationGetIndexScan().
+		 */
+		if (!scan->xactStartedInRecovery)
+			scan->kill_prior_tuple = all_dead;
+	}
 
 	return NULL;
 }
diff --git a/src/backend/access/nbtree/nbtinsert.c b/src/backend/access/nbtree/nbtinsert.c
index 883d70d..6efccf7 100644
--- a/src/backend/access/nbtree/nbtinsert.c
+++ b/src/backend/access/nbtree/nbtinsert.c
@@ -19,11 +19,14 @@
 #include "access/nbtree.h"
 #include "access/transam.h"
 #include "access/xloginsert.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
 #include "miscadmin.h"
+#include "nodes/execnodes.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
 #include "utils/tqual.h"
-
+#include "utils/datum.h"
 
 typedef struct
 {
@@ -249,6 +252,9 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 	BTPageOpaque opaque;
 	Buffer		nbuf = InvalidBuffer;
 	bool		found = false;
+	Buffer		buffer;
+	HeapTupleData	heapTuple;
+	bool		recheck = false;
 
 	/* Assume unique until we find a duplicate */
 	*is_unique = true;
@@ -308,6 +314,8 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				curitup = (IndexTuple) PageGetItem(page, curitemid);
 				htid = curitup->t_tid;
 
+				recheck = false;
+
 				/*
 				 * If we are doing a recheck, we expect to find the tuple we
 				 * are rechecking.  It's not a duplicate, but we have to keep
@@ -325,112 +333,153 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				 * have just a single index entry for the entire chain.
 				 */
 				else if (heap_hot_search(&htid, heapRel, &SnapshotDirty,
-										 &all_dead))
+							&all_dead, &recheck, &buffer,
+							&heapTuple))
 				{
 					TransactionId xwait;
+					bool result = true;
 
 					/*
-					 * It is a duplicate. If we are only doing a partial
-					 * check, then don't bother checking if the tuple is being
-					 * updated in another transaction. Just return the fact
-					 * that it is a potential conflict and leave the full
-					 * check till later.
+					 * If the tuple was WARM updated, we may again see our own
+					 * tuple. Since WARM updates don't create new index
+					 * entries, our own tuple is only reachable via the old
+					 * index pointer.
 					 */
-					if (checkUnique == UNIQUE_CHECK_PARTIAL)
+					if (checkUnique == UNIQUE_CHECK_EXISTING &&
+							ItemPointerCompare(&htid, &itup->t_tid) == 0)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						*is_unique = false;
-						return InvalidTransactionId;
+						found = true;
+						result = false;
+						if (recheck)
+							UnlockReleaseBuffer(buffer);
 					}
-
-					/*
-					 * If this tuple is being updated by other transaction
-					 * then we have to wait for its commit/abort.
-					 */
-					xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
-						SnapshotDirty.xmin : SnapshotDirty.xmax;
-
-					if (TransactionIdIsValid(xwait))
+					else if (recheck)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						/* Tell _bt_doinsert to wait... */
-						*speculativeToken = SnapshotDirty.speculativeToken;
-						return xwait;
+						result = btrecheck(rel, curitup, heapRel, &heapTuple);
+						UnlockReleaseBuffer(buffer);
 					}
 
-					/*
-					 * Otherwise we have a definite conflict.  But before
-					 * complaining, look to see if the tuple we want to insert
-					 * is itself now committed dead --- if so, don't complain.
-					 * This is a waste of time in normal scenarios but we must
-					 * do it to support CREATE INDEX CONCURRENTLY.
-					 *
-					 * We must follow HOT-chains here because during
-					 * concurrent index build, we insert the root TID though
-					 * the actual tuple may be somewhere in the HOT-chain.
-					 * While following the chain we might not stop at the
-					 * exact tuple which triggered the insert, but that's OK
-					 * because if we find a live tuple anywhere in this chain,
-					 * we have a unique key conflict.  The other live tuple is
-					 * not part of this chain because it had a different index
-					 * entry.
-					 */
-					htid = itup->t_tid;
-					if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL))
-					{
-						/* Normal case --- it's still live */
-					}
-					else
+					if (result)
 					{
 						/*
-						 * It's been deleted, so no error, and no need to
-						 * continue searching
+						 * It is a duplicate. If we are only doing a partial
+						 * check, then don't bother checking if the tuple is being
+						 * updated in another transaction. Just return the fact
+						 * that it is a potential conflict and leave the full
+						 * check till later.
 						 */
-						break;
-					}
+						if (checkUnique == UNIQUE_CHECK_PARTIAL)
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							*is_unique = false;
+							return InvalidTransactionId;
+						}
 
-					/*
-					 * Check for a conflict-in as we would if we were going to
-					 * write to this page.  We aren't actually going to write,
-					 * but we want a chance to report SSI conflicts that would
-					 * otherwise be masked by this unique constraint
-					 * violation.
-					 */
-					CheckForSerializableConflictIn(rel, NULL, buf);
+						/*
+						 * If this tuple is being updated by other transaction
+						 * then we have to wait for its commit/abort.
+						 */
+						xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
+							SnapshotDirty.xmin : SnapshotDirty.xmax;
+
+						if (TransactionIdIsValid(xwait))
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							/* Tell _bt_doinsert to wait... */
+							*speculativeToken = SnapshotDirty.speculativeToken;
+							return xwait;
+						}
 
-					/*
-					 * This is a definite conflict.  Break the tuple down into
-					 * datums and report the error.  But first, make sure we
-					 * release the buffer locks we're holding ---
-					 * BuildIndexValueDescription could make catalog accesses,
-					 * which in the worst case might touch this same index and
-					 * cause deadlocks.
-					 */
-					if (nbuf != InvalidBuffer)
-						_bt_relbuf(rel, nbuf);
-					_bt_relbuf(rel, buf);
+						/*
+						 * Otherwise we have a definite conflict.  But before
+						 * complaining, look to see if the tuple we want to insert
+						 * is itself now committed dead --- if so, don't complain.
+						 * This is a waste of time in normal scenarios but we must
+						 * do it to support CREATE INDEX CONCURRENTLY.
+						 *
+						 * We must follow HOT-chains here because during
+						 * concurrent index build, we insert the root TID though
+						 * the actual tuple may be somewhere in the HOT-chain.
+						 * While following the chain we might not stop at the
+						 * exact tuple which triggered the insert, but that's OK
+						 * because if we find a live tuple anywhere in this chain,
+						 * we have a unique key conflict.  The other live tuple is
+						 * not part of this chain because it had a different index
+						 * entry.
+						 */
+						recheck = false;
+						ItemPointerCopy(&itup->t_tid, &htid);
+						if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL,
+									&recheck, &buffer, &heapTuple))
+						{
+							bool result = true;
+							if (recheck)
+							{
+								/*
+								 * Recheck if the tuple actually satisfies the
+								 * index key. Otherwise, we might be following
+								 * a wrong index pointer and mustn't entertain
+								 * this tuple.
+								 */
+								result = btrecheck(rel, itup, heapRel, &heapTuple);
+								UnlockReleaseBuffer(buffer);
+							}
+							if (!result)
+								break;
+							/* Normal case --- it's still live */
+						}
+						else
+						{
+							/*
+							 * It's been deleted, so no error, and no need to
+							 * continue searching
+							 */
+							break;
+						}
 
-					{
-						Datum		values[INDEX_MAX_KEYS];
-						bool		isnull[INDEX_MAX_KEYS];
-						char	   *key_desc;
-
-						index_deform_tuple(itup, RelationGetDescr(rel),
-										   values, isnull);
-
-						key_desc = BuildIndexValueDescription(rel, values,
-															  isnull);
-
-						ereport(ERROR,
-								(errcode(ERRCODE_UNIQUE_VIOLATION),
-								 errmsg("duplicate key value violates unique constraint \"%s\"",
-										RelationGetRelationName(rel)),
-							   key_desc ? errdetail("Key %s already exists.",
-													key_desc) : 0,
-								 errtableconstraint(heapRel,
-											 RelationGetRelationName(rel))));
+						/*
+						 * Check for a conflict-in as we would if we were going to
+						 * write to this page.  We aren't actually going to write,
+						 * but we want a chance to report SSI conflicts that would
+						 * otherwise be masked by this unique constraint
+						 * violation.
+						 */
+						CheckForSerializableConflictIn(rel, NULL, buf);
+
+						/*
+						 * This is a definite conflict.  Break the tuple down into
+						 * datums and report the error.  But first, make sure we
+						 * release the buffer locks we're holding ---
+						 * BuildIndexValueDescription could make catalog accesses,
+						 * which in the worst case might touch this same index and
+						 * cause deadlocks.
+						 */
+						if (nbuf != InvalidBuffer)
+							_bt_relbuf(rel, nbuf);
+						_bt_relbuf(rel, buf);
+
+						{
+							Datum		values[INDEX_MAX_KEYS];
+							bool		isnull[INDEX_MAX_KEYS];
+							char	   *key_desc;
+
+							index_deform_tuple(itup, RelationGetDescr(rel),
+									values, isnull);
+
+							key_desc = BuildIndexValueDescription(rel, values,
+									isnull);
+
+							ereport(ERROR,
+									(errcode(ERRCODE_UNIQUE_VIOLATION),
+									 errmsg("duplicate key value violates unique constraint \"%s\"",
+										 RelationGetRelationName(rel)),
+									 key_desc ? errdetail("Key %s already exists.",
+										 key_desc) : 0,
+									 errtableconstraint(heapRel,
+										 RelationGetRelationName(rel))));
+						}
 					}
 				}
 				else if (all_dead)
diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c
index 469e7ab..27013f4 100644
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -23,6 +23,7 @@
 #include "access/xlog.h"
 #include "catalog/index.h"
 #include "commands/vacuum.h"
+#include "executor/nodeIndexscan.h"
 #include "storage/indexfsm.h"
 #include "storage/ipc.h"
 #include "storage/lmgr.h"
@@ -121,6 +122,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = btrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -301,8 +303,9 @@ btgettuple(IndexScanDesc scan, ScanDirection dir)
 	BTScanOpaque so = (BTScanOpaque) scan->opaque;
 	bool		res;
 
-	/* btree indexes are never lossy */
+	/* btree indexes are never lossy, except for WARM tuples */
 	scan->xs_recheck = false;
+	scan->xs_tuple_recheck = false;
 
 	/*
 	 * If we have any array keys, initialize them during first call for a
diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c
index da0f330..9becaeb 100644
--- a/src/backend/access/nbtree/nbtutils.c
+++ b/src/backend/access/nbtree/nbtutils.c
@@ -20,11 +20,15 @@
 #include "access/nbtree.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
 #include "miscadmin.h"
+#include "nodes/execnodes.h"
 #include "utils/array.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 
 typedef struct BTSortArrayContext
@@ -2065,3 +2069,103 @@ btproperty(Oid index_oid, int attno,
 			return false;		/* punt to generic code */
 	}
 }
+
+/*
+ * Check if the index tuple's key matches the one computed from the given heap
+ * tuple's attributes.
+ */
+bool
+btrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	IndexInfo  *indexInfo;
+	EState	   *estate;
+	ExprContext *econtext;
+	TupleTableSlot *slot;
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	/* Get IndexInfo for this index */
+	indexInfo = BuildIndexInfo(indexRel);
+
+	/*
+	 * The heap tuple must be put into a slot for FormIndexDatum.
+	 */
+	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
+
+	ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
+
+	/*
+	 * Typically the index won't have expressions, but if it does we need an
+	 * EState to evaluate them.  We need it for exclusion constraints too,
+	 * even if they are just on simple columns.
+	 */
+	if (indexInfo->ii_Expressions != NIL ||
+			indexInfo->ii_ExclusionOps != NULL)
+	{
+		estate = CreateExecutorState();
+		econtext = GetPerTupleExprContext(estate);
+		econtext->ecxt_scantuple = slot;
+	}
+	else
+		estate = NULL;
+
+	/*
+	 * Form the index values and isnull flags for the index entry that we need
+	 * to check.
+	 *
+	 * Note: if the index uses functions that are not as immutable as they are
+	 * supposed to be, this could produce an index tuple different from the
+	 * original.  The index AM can catch such errors by verifying that it
+	 * finds a matching index entry with the tuple's TID.  For exclusion
+	 * constraints we check this in check_exclusion_constraint().
+	 */
+	FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL, then they are equal
+		 */
+		if (isnull[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If just one is NULL, then they are not equal
+		 */
+		if (isnull[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now just do a raw memory comparison. If the index tuple was formed
+		 * using this heap tuple, the computed index values must match
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	if (estate != NULL)
+		FreeExecutorState(estate);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	return equal;
+}
diff --git a/src/backend/access/spgist/spgutils.c b/src/backend/access/spgist/spgutils.c
index 78846be..2236f02 100644
--- a/src/backend/access/spgist/spgutils.c
+++ b/src/backend/access/spgist/spgutils.c
@@ -71,6 +71,7 @@ spghandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 815a694..1e8cdbd 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -54,6 +54,7 @@
 #include "nodes/makefuncs.h"
 #include "nodes/nodeFuncs.h"
 #include "optimizer/clauses.h"
+#include "optimizer/var.h"
 #include "parser/parser.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
@@ -1687,6 +1688,20 @@ BuildIndexInfo(Relation index)
 	ii->ii_Concurrent = false;
 	ii->ii_BrokenHotChain = false;
 
+	/* build a bitmap of all table attributes referred by this index */
+	for (i = 0; i < ii->ii_NumIndexAttrs; i++)
+	{
+		AttrNumber attr = ii->ii_KeyAttrNumbers[i];
+		ii->ii_indxattrs = bms_add_member(ii->ii_indxattrs, attr -
+				FirstLowInvalidHeapAttributeNumber);
+	}
+
+	/* Collect all attributes used in expressions, too */
+	pull_varattnos((Node *) ii->ii_Expressions, 1, &ii->ii_indxattrs);
+
+	/* Collect all attributes in the index predicate, too */
+	pull_varattnos((Node *) ii->ii_Predicate, 1, &ii->ii_indxattrs);
+
 	return ii;
 }
 
diff --git a/src/backend/catalog/indexing.c b/src/backend/catalog/indexing.c
index 76268e1..b2bfa10 100644
--- a/src/backend/catalog/indexing.c
+++ b/src/backend/catalog/indexing.c
@@ -66,10 +66,15 @@ CatalogCloseIndexes(CatalogIndexState indstate)
  *
  * This should be called for each inserted or updated catalog tuple.
  *
+ * If the tuple was WARM updated, modified_attrs contains the set of
+ * columns changed by the update. We must not insert new index entries for
+ * indexes which do not refer to any of the modified columns.
+ *
  * This is effectively a cut-down version of ExecInsertIndexTuples.
  */
 static void
-CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
+CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple,
+		Bitmapset *modified_attrs, bool warm_update)
 {
 	int			i;
 	int			numIndexes;
@@ -79,12 +84,28 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 	IndexInfo **indexInfoArray;
 	Datum		values[INDEX_MAX_KEYS];
 	bool		isnull[INDEX_MAX_KEYS];
+	ItemPointerData root_tid;
 
-	/* HOT update does not require index inserts */
-	if (HeapTupleIsHeapOnly(heapTuple))
+	/*
+	 * A HOT update does not require index inserts, but a WARM update may
+	 * still require them for some indexes.
+	 */
+	if (HeapTupleIsHeapOnly(heapTuple) && !warm_update)
 		return;
 
 	/*
+	 * If we've done a WARM update, then we must index the TID of the root line
+	 * pointer and not the actual TID of the new tuple.
+	 */
+	if (warm_update)
+		ItemPointerSet(&root_tid,
+				ItemPointerGetBlockNumber(&(heapTuple->t_self)),
+				HeapTupleHeaderGetRootOffset(heapTuple->t_data));
+	else
+		ItemPointerCopy(&heapTuple->t_self, &root_tid);
+
+
+	/*
 	 * Get information from the state structure.  Fall out if nothing to do.
 	 */
 	numIndexes = indstate->ri_NumIndices;
@@ -112,6 +133,17 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 			continue;
 
 		/*
+		 * If we've done a WARM update, then we must not insert a new index
+		 * tuple if none of the index keys have changed. This is not just an
+		 * optimization, but a requirement for WARM to work correctly.
+		 */
+		if (warm_update)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
+		/*
 		 * Expressional and partial indexes on system catalogs are not
 		 * supported, nor exclusion constraints, nor deferred uniqueness
 		 */
@@ -136,7 +168,7 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 		index_insert(relationDescs[i],	/* index relation */
 					 values,	/* array of index Datums */
 					 isnull,	/* is-null flags */
-					 &(heapTuple->t_self),		/* tid of heap tuple */
+					 &root_tid,		/* TID of the root line pointer */
 					 heapRelation,
 					 relationDescs[i]->rd_index->indisunique ?
 					 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO);
@@ -167,7 +199,7 @@ CatalogTupleInsert(Relation heapRel, HeapTuple tup)
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 	CatalogCloseIndexes(indstate);
 
 	return oid;
@@ -189,7 +221,7 @@ CatalogTupleInsertWithInfo(Relation heapRel, HeapTuple tup,
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 
 	return oid;
 }
@@ -209,12 +241,14 @@ void
 CatalogTupleUpdate(Relation heapRel, ItemPointer otid, HeapTuple tup)
 {
 	CatalogIndexState indstate;
+	bool	warm_update;
+	Bitmapset	*modified_attrs;
 
 	indstate = CatalogOpenIndexes(heapRel);
 
-	simple_heap_update(heapRel, otid, tup);
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, modified_attrs, warm_update);
 	CatalogCloseIndexes(indstate);
 }
 
@@ -230,9 +264,12 @@ void
 CatalogTupleUpdateWithInfo(Relation heapRel, ItemPointer otid, HeapTuple tup,
 						   CatalogIndexState indstate)
 {
-	simple_heap_update(heapRel, otid, tup);
+	Bitmapset  *modified_attrs;
+	bool		warm_update;
+
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, modified_attrs, warm_update);
 }
 
 /*
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 28be27a..92fa6e0 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -493,6 +493,7 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_tuples_deleted(C.oid) AS n_tup_del,
             pg_stat_get_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_tuples_warm_updated(C.oid) AS n_tup_warm_upd,
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
@@ -523,7 +524,8 @@ CREATE VIEW pg_stat_xact_all_tables AS
             pg_stat_get_xact_tuples_inserted(C.oid) AS n_tup_ins,
             pg_stat_get_xact_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_xact_tuples_deleted(C.oid) AS n_tup_del,
-            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd
+            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_xact_tuples_warm_updated(C.oid) AS n_tup_warm_upd
     FROM pg_class C LEFT JOIN
          pg_index I ON C.oid = I.indrelid
          LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
diff --git a/src/backend/commands/constraint.c b/src/backend/commands/constraint.c
index e9eeacd..f199074 100644
--- a/src/backend/commands/constraint.c
+++ b/src/backend/commands/constraint.c
@@ -40,6 +40,7 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	TriggerData *trigdata = castNode(TriggerData, fcinfo->context);
 	const char *funcname = "unique_key_recheck";
 	HeapTuple	new_row;
+	HeapTupleData heapTuple;
 	ItemPointerData tmptid;
 	Relation	indexRel;
 	IndexInfo  *indexInfo;
@@ -102,7 +103,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	 * removed.
 	 */
 	tmptid = new_row->t_self;
-	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL))
+	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL,
+				NULL, NULL, &heapTuple))
 	{
 		/*
 		 * All rows in the HOT chain are dead, so skip the check.
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index 949844d..38702e5 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -2680,6 +2680,8 @@ CopyFrom(CopyState cstate)
 					if (resultRelInfo->ri_NumIndices > 0)
 						recheckIndexes = ExecInsertIndexTuples(slot,
 															&(tuple->t_self),
+															&(tuple->t_self),
+															NULL,
 															   estate,
 															   false,
 															   NULL,
@@ -2834,6 +2836,7 @@ CopyFromInsertBatch(CopyState cstate, EState *estate, CommandId mycid,
 			ExecStoreTuple(bufferedTuples[i], myslot, InvalidBuffer, false);
 			recheckIndexes =
 				ExecInsertIndexTuples(myslot, &(bufferedTuples[i]->t_self),
+									  &(bufferedTuples[i]->t_self), NULL,
 									  estate, false, NULL, NIL);
 			ExecARInsertTriggers(estate, resultRelInfo,
 								 bufferedTuples[i],
diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c
index ed6136c..0fc77b6 100644
--- a/src/backend/commands/indexcmds.c
+++ b/src/backend/commands/indexcmds.c
@@ -694,7 +694,14 @@ DefineIndex(Oid relationId,
 	 * visible to other transactions before we start to build the index. That
 	 * will prevent them from making incompatible HOT updates.  The new index
 	 * will be marked not indisready and not indisvalid, so that no one else
-	 * tries to either insert into it or use it for queries.
+	 * tries to either insert into it or use it for queries. In addition,
+	 * WARM updates will be disallowed if an update modifies one of the
+	 * columns used by this new index. This is necessary to ensure that we
+	 * don't create WARM tuples which lack a corresponding entry in this
+	 * index. Note that during the second phase, we will index only those
+	 * heap tuples whose root line pointer is not already in the index, so
+	 * it's important that all tuples in a given chain have the same value
+	 * for every indexed column (including those of this new index).
 	 *
 	 * We must commit our current transaction so that the index becomes
 	 * visible; then start another.  Note that all the data structures we just
@@ -742,7 +749,10 @@ DefineIndex(Oid relationId,
 	 * marked as "not-ready-for-inserts".  The index is consulted while
 	 * deciding HOT-safety though.  This arrangement ensures that no new HOT
 	 * chains can be created where the new tuple and the old tuple in the
-	 * chain have different index keys.
+	 * chain have different index keys. Also, the new index is consulted for
+	 * deciding whether a WARM update is possible, and WARM update is not done
+	 * if a column used by this index is being updated. This ensures that we
+	 * don't create WARM tuples which are not indexed by this index.
 	 *
 	 * We now take a new snapshot, and build the index using all tuples that
 	 * are visible in this snapshot.  We can be sure that any HOT updates to
@@ -777,7 +787,8 @@ DefineIndex(Oid relationId,
 	/*
 	 * Update the pg_index row to mark the index as ready for inserts. Once we
 	 * commit this transaction, any new transactions that open the table must
-	 * insert new entries into the index for insertions and non-HOT updates.
+	 * insert new entries into the index for insertions and non-HOT updates,
+	 * as well as for WARM updates where this index needs a new entry.
 	 */
 	index_set_state_flags(indexRelationId, INDEX_CREATE_SET_READY);
 
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 005440e..1388be1 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -1032,6 +1032,19 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 							break;
 						}
 
+						/*
+						 * If this tuple was ever WARM updated or is a WARM
+						 * tuple, there could be multiple index entries
+						 * pointing to the root of this chain. We can't do
+						 * index-only scans for such tuples without
+						 * rechecking the index keys, so mark the page as
+						 * !all_visible.
+						 */
+						if (HeapTupleHeaderIsHeapWarmTuple(tuple.t_data))
+						{
+							all_visible = false;
+							break;
+						}
+
 						/* Track newest xmin on page. */
 						if (TransactionIdFollows(xmin, visibility_cutoff_xid))
 							visibility_cutoff_xid = xmin;
@@ -2158,6 +2171,18 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
 						break;
 					}
 
+					/*
+					 * If this tuple was ever WARM updated or is a WARM tuple,
+					 * there could be multiple index entries pointing to the
+					 * root of this chain. We can't do index-only scans for
+					 * such tuples without rechecking the index keys, so mark
+					 * the page as !all_visible.
+					 */
+					if (HeapTupleHeaderIsHeapWarmTuple(tuple.t_data))
+					{
+						all_visible = false;
+					}
+
 					/* Track newest xmin on page. */
 					if (TransactionIdFollows(xmin, *visibility_cutoff_xid))
 						*visibility_cutoff_xid = xmin;
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 9920f48..94cf92f 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -270,6 +270,8 @@ ExecCloseIndices(ResultRelInfo *resultRelInfo)
 List *
 ExecInsertIndexTuples(TupleTableSlot *slot,
 					  ItemPointer tupleid,
+					  ItemPointer root_tid,
+					  Bitmapset *modified_attrs,
 					  EState *estate,
 					  bool noDupErr,
 					  bool *specConflict,
@@ -324,6 +326,17 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 		if (!indexInfo->ii_ReadyForInserts)
 			continue;
 
+		/*
+		 * If modified_attrs is set, we only insert index entries for those
+		 * indexes whose columns have changed. All other indexes can reach
+		 * the new tuple through their existing index pointers.
+		 */
+		if (modified_attrs)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
 		/* Check for partial index */
 		if (indexInfo->ii_Predicate != NIL)
 		{
@@ -389,7 +402,7 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 			index_insert(indexRelation, /* index relation */
 						 values,	/* array of index Datums */
 						 isnull,	/* null flags */
-						 tupleid,		/* tid of heap tuple */
+						 root_tid,		/* tid of heap or root tuple */
 						 heapRelation,	/* heap relation */
 						 checkUnique);	/* type of uniqueness check to do */
 
@@ -790,6 +803,9 @@ retry:
 		{
 			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
 				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
+			else
+				ItemPointerCopy(&tup->t_self, &ctid_wait);
+
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index a8bd583..b6c115d 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -399,6 +399,8 @@ ExecSimpleRelationInsert(EState *estate, TupleTableSlot *slot)
 
 		if (resultRelInfo->ri_NumIndices > 0)
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &(tuple->t_self),
+												   NULL,
 												   estate, false, NULL,
 												   NIL);
 
@@ -445,6 +447,8 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 	if (!skip_tuple)
 	{
 		List	   *recheckIndexes = NIL;
+		bool		warm_update;
+		Bitmapset  *modified_attrs;
 
 		/* Check the constraints of the tuple */
 		if (rel->rd_att->constr)
@@ -455,13 +459,30 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 
 		/* OK, update the tuple and index entries for it */
 		simple_heap_update(rel, &searchslot->tts_tuple->t_self,
-						   slot->tts_tuple);
+						   slot->tts_tuple, &modified_attrs, &warm_update);
 
 		if (resultRelInfo->ri_NumIndices > 0 &&
-			!HeapTupleIsHeapOnly(slot->tts_tuple))
+			(!HeapTupleIsHeapOnly(slot->tts_tuple) || warm_update))
+		{
+			ItemPointerData root_tid;
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self,
+						&root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
+
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL,
 												   NIL);
+		}
 
 		/* AFTER ROW UPDATE Triggers */
 		ExecARUpdateTriggers(estate, resultRelInfo,
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index f18827d..f81d290 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -37,6 +37,7 @@
 
 #include "access/relscan.h"
 #include "access/transam.h"
+#include "access/valid.h"
 #include "executor/execdebug.h"
 #include "executor/nodeBitmapHeapscan.h"
 #include "pgstat.h"
@@ -362,11 +363,27 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 			OffsetNumber offnum = tbmres->offsets[curslot];
 			ItemPointerData tid;
 			HeapTupleData heapTuple;
+			bool recheck = false;
 
 			ItemPointerSet(&tid, page, offnum);
 			if (heap_hot_search_buffer(&tid, scan->rs_rd, buffer, snapshot,
-									   &heapTuple, NULL, true))
-				scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+									   &heapTuple, NULL, true, &recheck))
+			{
+				bool valid = true;
+
+				if (scan->rs_key)
+					HeapKeyTest(&heapTuple, RelationGetDescr(scan->rs_rd),
+							scan->rs_nkeys, scan->rs_key, valid);
+				if (valid)
+					scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+
+				/*
+				 * If the heap tuple needs a recheck because of a WARM update,
+				 * treat the bitmap page as lossy so the quals get rechecked.
+				 */
+				if (recheck)
+					tbmres->recheck = true;
+			}
 		}
 	}
 	else
diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c
index 5734550..c7be366 100644
--- a/src/backend/executor/nodeIndexscan.c
+++ b/src/backend/executor/nodeIndexscan.c
@@ -115,10 +115,10 @@ IndexNext(IndexScanState *node)
 					   false);	/* don't pfree */
 
 		/*
-		 * If the index was lossy, we have to recheck the index quals using
-		 * the fetched tuple.
+		 * If the index was lossy or the tuple was WARM, we have to recheck
+		 * the index quals using the fetched tuple.
 		 */
-		if (scandesc->xs_recheck)
+		if (scandesc->xs_recheck || scandesc->xs_tuple_recheck)
 		{
 			econtext->ecxt_scantuple = slot;
 			ResetExprContext(econtext);
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index 95e1589..a1f3440 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -512,6 +512,7 @@ ExecInsert(ModifyTableState *mtstate,
 
 			/* insert index entries for tuple */
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												 &(tuple->t_self), NULL,
 												 estate, true, &specConflict,
 												   arbiterIndexes);
 
@@ -558,6 +559,7 @@ ExecInsert(ModifyTableState *mtstate,
 			/* insert index entries for tuple */
 			if (resultRelInfo->ri_NumIndices > 0)
 				recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+													   &(tuple->t_self), NULL,
 													   estate, false, NULL,
 													   arbiterIndexes);
 		}
@@ -891,6 +893,9 @@ ExecUpdate(ItemPointer tupleid,
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
 	List	   *recheckIndexes = NIL;
+	Bitmapset  *modified_attrs = NULL;
+	ItemPointerData	root_tid;
+	bool		warm_update;
 
 	/*
 	 * abort the operation if not running transactions
@@ -1007,7 +1012,7 @@ lreplace:;
 							 estate->es_output_cid,
 							 estate->es_crosscheck_snapshot,
 							 true /* wait for commit */ ,
-							 &hufd, &lockmode);
+							 &hufd, &lockmode, &modified_attrs, &warm_update);
 		switch (result)
 		{
 			case HeapTupleSelfUpdated:
@@ -1094,10 +1099,28 @@ lreplace:;
 		 * the t_self field.
 		 *
 		 * If it's a HOT update, we mustn't insert new index entries.
+		 *
+		 * If it's a WARM update, then we must insert new entries with TID
+		 * pointing to the root of the WARM chain.
 		 */
-		if (resultRelInfo->ri_NumIndices > 0 && !HeapTupleIsHeapOnly(tuple))
+		if (resultRelInfo->ri_NumIndices > 0 &&
+			(!HeapTupleIsHeapOnly(tuple) || warm_update))
+		{
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self, &root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL, NIL);
+		}
 	}
 
 	if (canSetTag)
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index 7176cf1..432dd4b 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -1823,7 +1823,7 @@ pgstat_count_heap_insert(Relation rel, int n)
  * pgstat_count_heap_update - count a tuple update
  */
 void
-pgstat_count_heap_update(Relation rel, bool hot)
+pgstat_count_heap_update(Relation rel, bool hot, bool warm)
 {
 	PgStat_TableStatus *pgstat_info = rel->pgstat_info;
 
@@ -1841,6 +1841,8 @@ pgstat_count_heap_update(Relation rel, bool hot)
 		/* t_tuples_hot_updated is nontransactional, so just advance it */
 		if (hot)
 			pgstat_info->t_counts.t_tuples_hot_updated++;
+		else if (warm)
+			pgstat_info->t_counts.t_tuples_warm_updated++;
 	}
 }
 
@@ -4085,6 +4087,7 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->tuples_updated = 0;
 		result->tuples_deleted = 0;
 		result->tuples_hot_updated = 0;
+		result->tuples_warm_updated = 0;
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
 		result->changes_since_analyze = 0;
@@ -5194,6 +5197,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated = tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted = tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated = tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated = tabmsg->t_counts.t_tuples_warm_updated;
 			tabentry->n_live_tuples = tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples = tabmsg->t_counts.t_delta_dead_tuples;
 			tabentry->changes_since_analyze = tabmsg->t_counts.t_changed_tuples;
@@ -5221,6 +5225,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated += tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted += tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated += tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated += tabmsg->t_counts.t_tuples_warm_updated;
 			/* If table was truncated, first reset the live/dead counters */
 			if (tabmsg->t_counts.t_truncated)
 			{
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index a987d0d..b8677f3 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -145,6 +145,22 @@ pg_stat_get_tuples_hot_updated(PG_FUNCTION_ARGS)
 
 
 Datum
+pg_stat_get_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_StatTabEntry *tabentry;
+
+	if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+
+Datum
 pg_stat_get_live_tuples(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
@@ -1644,6 +1660,21 @@ pg_stat_get_xact_tuples_hot_updated(PG_FUNCTION_ARGS)
 }
 
 Datum
+pg_stat_get_xact_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_TableStatus *tabentry;
+
+	if ((tabentry = find_tabstat_entry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->t_counts.t_tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+Datum
 pg_stat_get_xact_blocks_fetched(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 8a7c560..5801703 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -2338,6 +2338,7 @@ RelationDestroyRelation(Relation relation, bool remember_tupdesc)
 	list_free_deep(relation->rd_fkeylist);
 	list_free(relation->rd_indexlist);
 	bms_free(relation->rd_indexattr);
+	bms_free(relation->rd_exprindexattr);
 	bms_free(relation->rd_keyattr);
 	bms_free(relation->rd_pkattr);
 	bms_free(relation->rd_idattr);
@@ -4351,6 +4352,13 @@ RelationGetIndexList(Relation relation)
 		return list_copy(relation->rd_indexlist);
 
 	/*
+	 * If the index list was invalidated, we must also invalidate the index
+	 * attribute list (which should automatically invalidate other dependent
+	 * bitmaps, such as the primary key and replica identity sets)
+	 */
+	relation->rd_indexattr = NULL;
+
+	/*
 	 * We build the list we intend to return (in the caller's context) while
 	 * doing the scan.  After successfully completing the scan, we copy that
 	 * list into the relcache entry.  This avoids cache-context memory leakage
@@ -4756,14 +4764,18 @@ Bitmapset *
 RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 {
 	Bitmapset  *indexattrs;		/* indexed columns */
+	Bitmapset  *exprindexattrs;	/* indexed columns in expression/predicate
+									 indexes */
 	Bitmapset  *uindexattrs;	/* columns in unique indexes */
 	Bitmapset  *pkindexattrs;	/* columns in the primary index */
 	Bitmapset  *idindexattrs;	/* columns in the replica identity */
+	Bitmapset  *indxnotreadyattrs;	/* columns in not-yet-ready indexes */
 	List	   *indexoidlist;
 	Oid			relpkindex;
 	Oid			relreplindex;
 	ListCell   *l;
 	MemoryContext oldcxt;
+	bool		supportswarm = true;/* True if the table can be WARM updated */
 
 	/* Quick exit if we already computed the result. */
 	if (relation->rd_indexattr != NULL)
@@ -4778,6 +4790,10 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				return bms_copy(relation->rd_pkattr);
 			case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 				return bms_copy(relation->rd_idattr);
+			case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+				return bms_copy(relation->rd_exprindexattr);
+			case INDEX_ATTR_BITMAP_NOTREADY:
+				return bms_copy(relation->rd_indxnotreadyattr);
 			default:
 				elog(ERROR, "unknown attrKind %u", attrKind);
 		}
@@ -4818,9 +4834,11 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 	 * won't be returned at all by RelationGetIndexList.
 	 */
 	indexattrs = NULL;
+	exprindexattrs = NULL;
 	uindexattrs = NULL;
 	pkindexattrs = NULL;
 	idindexattrs = NULL;
+	indxnotreadyattrs = NULL;
 	foreach(l, indexoidlist)
 	{
 		Oid			indexOid = lfirst_oid(l);
@@ -4857,6 +4875,10 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				indexattrs = bms_add_member(indexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
 
+				if (!indexInfo->ii_ReadyForInserts)
+					indxnotreadyattrs = bms_add_member(indxnotreadyattrs,
+							   attrnum - FirstLowInvalidHeapAttributeNumber);
+
 				if (isKey)
 					uindexattrs = bms_add_member(uindexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
@@ -4872,25 +4894,51 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 		}
 
 		/* Collect all attributes used in expressions, too */
-		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &exprindexattrs);
 
 		/* Collect all attributes in the index predicate, too */
-		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &exprindexattrs);
+
+		/*
+		 * indexattrs should include attributes referenced in index expressions
+		 * and predicates too
+		 */
+		indexattrs = bms_add_members(indexattrs, exprindexattrs);
+
+		if (!indexInfo->ii_ReadyForInserts)
+			indxnotreadyattrs = bms_add_members(indxnotreadyattrs,
+					exprindexattrs);
+
+		/*
+		 * Check if the index AM provides an amrecheck method. If it does
+		 * not, the index cannot support WARM, so completely disable WARM
+		 * updates on such tables.
+		 */
+		if (!indexDesc->rd_amroutine->amrecheck)
+			supportswarm = false;
+
 
 		index_close(indexDesc, AccessShareLock);
 	}
 
 	list_free(indexoidlist);
 
+	/* Remember if the table can do WARM updates */
+	relation->rd_supportswarm = supportswarm;
+
 	/* Don't leak the old values of these bitmaps, if any */
 	bms_free(relation->rd_indexattr);
 	relation->rd_indexattr = NULL;
+	bms_free(relation->rd_exprindexattr);
+	relation->rd_exprindexattr = NULL;
 	bms_free(relation->rd_keyattr);
 	relation->rd_keyattr = NULL;
 	bms_free(relation->rd_pkattr);
 	relation->rd_pkattr = NULL;
 	bms_free(relation->rd_idattr);
 	relation->rd_idattr = NULL;
+	bms_free(relation->rd_indxnotreadyattr);
+	relation->rd_indxnotreadyattr = NULL;
 
 	/*
 	 * Now save copies of the bitmaps in the relcache entry.  We intentionally
@@ -4903,7 +4951,9 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 	relation->rd_keyattr = bms_copy(uindexattrs);
 	relation->rd_pkattr = bms_copy(pkindexattrs);
 	relation->rd_idattr = bms_copy(idindexattrs);
-	relation->rd_indexattr = bms_copy(indexattrs);
+	relation->rd_exprindexattr = bms_copy(exprindexattrs);
+	relation->rd_indexattr = bms_copy(bms_union(indexattrs, exprindexattrs));
+	relation->rd_indxnotreadyattr = bms_copy(indxnotreadyattrs);
 	MemoryContextSwitchTo(oldcxt);
 
 	/* We return our original working copy for caller to play with */
@@ -4917,6 +4967,10 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 			return bms_copy(relation->rd_pkattr);
 		case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 			return idindexattrs;
+		case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+			return exprindexattrs;
+		case INDEX_ATTR_BITMAP_NOTREADY:
+			return indxnotreadyattrs;
 		default:
 			elog(ERROR, "unknown attrKind %u", attrKind);
 			return NULL;
@@ -5529,6 +5583,7 @@ load_relcache_init_file(bool shared)
 		rel->rd_keyattr = NULL;
 		rel->rd_pkattr = NULL;
 		rel->rd_idattr = NULL;
+		rel->rd_indxnotreadyattr = NULL;
 		rel->rd_pubactions = NULL;
 		rel->rd_createSubid = InvalidSubTransactionId;
 		rel->rd_newRelfilenodeSubid = InvalidSubTransactionId;
diff --git a/src/include/access/amapi.h b/src/include/access/amapi.h
index e91e41d..34430a9 100644
--- a/src/include/access/amapi.h
+++ b/src/include/access/amapi.h
@@ -13,6 +13,7 @@
 #define AMAPI_H
 
 #include "access/genam.h"
+#include "access/itup.h"
 
 /*
  * We don't wish to include planner header files here, since most of an index
@@ -150,6 +151,10 @@ typedef void (*aminitparallelscan_function) (void *target);
 /* (re)start parallel index scan */
 typedef void (*amparallelrescan_function) (IndexScanDesc scan);
 
+/* recheck index tuple and heap tuple match */
+typedef bool (*amrecheck_function) (Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+
 /*
  * API struct for an index AM.  Note this must be stored in a single palloc'd
  * chunk of memory.
@@ -213,6 +218,9 @@ typedef struct IndexAmRoutine
 	amestimateparallelscan_function amestimateparallelscan;		/* can be NULL */
 	aminitparallelscan_function aminitparallelscan;		/* can be NULL */
 	amparallelrescan_function amparallelrescan; /* can be NULL */
+
+	/* interface function to support WARM */
+	amrecheck_function amrecheck;		/* can be NULL */
 } IndexAmRoutine;
 
 
diff --git a/src/include/access/hash.h b/src/include/access/hash.h
index 69a3873..3e14023 100644
--- a/src/include/access/hash.h
+++ b/src/include/access/hash.h
@@ -364,4 +364,8 @@ extern void hashbucketcleanup(Relation rel, Bucket cur_bucket,
 				  bool bucket_has_garbage,
 				  IndexBulkDeleteCallback callback, void *callback_state);
 
+/* hash.c */
+extern bool hashrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+
 #endif   /* HASH_H */
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 95aa976..9412c3a 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -137,9 +137,10 @@ extern bool heap_fetch(Relation relation, Snapshot snapshot,
 		   Relation stats_relation);
 extern bool heap_hot_search_buffer(ItemPointer tid, Relation relation,
 					   Buffer buffer, Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call);
+					   bool *all_dead, bool first_call, bool *recheck);
 extern bool heap_hot_search(ItemPointer tid, Relation relation,
-				Snapshot snapshot, bool *all_dead);
+				Snapshot snapshot, bool *all_dead,
+				bool *recheck, Buffer *buffer, HeapTuple heapTuple);
 
 extern void heap_get_latest_tid(Relation relation, Snapshot snapshot,
 					ItemPointer tid);
@@ -161,7 +162,8 @@ extern void heap_abort_speculative(Relation relation, HeapTuple tuple);
 extern HTSU_Result heap_update(Relation relation, ItemPointer otid,
 			HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode);
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update);
 extern HTSU_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
 				CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
 				bool follow_update,
@@ -176,7 +178,9 @@ extern bool heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple);
 extern Oid	simple_heap_insert(Relation relation, HeapTuple tup);
 extern void simple_heap_delete(Relation relation, ItemPointer tid);
 extern void simple_heap_update(Relation relation, ItemPointer otid,
-				   HeapTuple tup);
+				   HeapTuple tup,
+				   Bitmapset **modified_attrs,
+				   bool *warm_update);
 
 extern void heap_sync(Relation relation);
 
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index a4a1fe1..b4238e5 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -80,6 +80,7 @@
 #define XLH_UPDATE_CONTAINS_NEW_TUPLE			(1<<4)
 #define XLH_UPDATE_PREFIX_FROM_OLD				(1<<5)
 #define XLH_UPDATE_SUFFIX_FROM_OLD				(1<<6)
+#define XLH_UPDATE_WARM_UPDATE					(1<<7)
 
 /* convenience macro for checking whether any form of old tuple was logged */
 #define XLH_UPDATE_CONTAINS_OLD						\
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 7552186..ddbdbcd 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,7 +260,8 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x0800 are available */
+#define HEAP_WARM_TUPLE			0x0800	/* this tuple is part of a WARM
+										 * chain */
 #define HEAP_LATEST_TUPLE		0x1000	/*
 										 * This is the last tuple in chain and
 										 * ip_posid points to the root line
@@ -271,7 +272,7 @@ struct HeapTupleHeaderData
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF800	/* visibility-related bits */
 
 
 /*
@@ -510,6 +511,21 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+#define HeapTupleHeaderSetHeapWarmTuple(tup) \
+do { \
+	(tup)->t_infomask2 |= HEAP_WARM_TUPLE; \
+} while (0)
+
+#define HeapTupleHeaderClearHeapWarmTuple(tup) \
+do { \
+	(tup)->t_infomask2 &= ~HEAP_WARM_TUPLE; \
+} while (0)
+
+#define HeapTupleHeaderIsHeapWarmTuple(tup) \
+( \
+  ((tup)->t_infomask2 & HEAP_WARM_TUPLE) != 0 \
+)
+
 /*
  * Mark this as the last tuple in the HOT chain. Before PG v10 we used to store
  * the TID of the tuple itself in t_ctid field to mark the end of the chain.
@@ -785,6 +801,15 @@ struct MinimalTupleData
 #define HeapTupleClearHeapOnly(tuple) \
 		HeapTupleHeaderClearHeapOnly((tuple)->t_data)
 
+#define HeapTupleIsHeapWarmTuple(tuple) \
+		HeapTupleHeaderIsHeapWarmTuple((tuple)->t_data)
+
+#define HeapTupleSetHeapWarmTuple(tuple) \
+		HeapTupleHeaderSetHeapWarmTuple((tuple)->t_data)
+
+#define HeapTupleClearHeapWarmTuple(tuple) \
+		HeapTupleHeaderClearHeapWarmTuple((tuple)->t_data)
+
 #define HeapTupleGetOid(tuple) \
 		HeapTupleHeaderGetOid((tuple)->t_data)
 
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index 011a72e..98129d6 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -750,6 +750,8 @@ extern bytea *btoptions(Datum reloptions, bool validate);
 extern bool btproperty(Oid index_oid, int attno,
 		   IndexAMProperty prop, const char *propname,
 		   bool *res, bool *isnull);
+extern bool btrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * prototypes for functions in nbtvalidate.c
diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h
index ce3ca8d..12d3b0c 100644
--- a/src/include/access/relscan.h
+++ b/src/include/access/relscan.h
@@ -112,7 +112,8 @@ typedef struct IndexScanDescData
 	HeapTupleData xs_ctup;		/* current heap tuple, if any */
 	Buffer		xs_cbuf;		/* current heap buffer in scan, if any */
 	/* NB: if xs_cbuf is not InvalidBuffer, we hold a pin on that buffer */
-	bool		xs_recheck;		/* T means scan keys must be rechecked */
+	bool		xs_recheck;		/* T means scan keys must be rechecked for each tuple */
+	bool		xs_tuple_recheck;	/* T means scan keys must be rechecked for current tuple */
 
 	/*
 	 * When fetching with an ordering operator, the values of the ORDER BY
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index 05652e8..c132b10 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -2740,6 +2740,8 @@ DATA(insert OID = 1933 (  pg_stat_get_tuples_deleted	PGNSP PGUID 12 1 0 0 0 f f
 DESCR("statistics: number of tuples deleted");
 DATA(insert OID = 1972 (  pg_stat_get_tuples_hot_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated");
+DATA(insert OID = 3353 (  pg_stat_get_tuples_warm_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated");
 DATA(insert OID = 2878 (  pg_stat_get_live_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_live_tuples _null_ _null_ _null_ ));
 DESCR("statistics: number of live tuples");
 DATA(insert OID = 2879 (  pg_stat_get_dead_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_dead_tuples _null_ _null_ _null_ ));
@@ -2892,6 +2894,8 @@ DATA(insert OID = 3042 (  pg_stat_get_xact_tuples_deleted		PGNSP PGUID 12 1 0 0
 DESCR("statistics: number of tuples deleted in current transaction");
 DATA(insert OID = 3043 (  pg_stat_get_xact_tuples_hot_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated in current transaction");
+DATA(insert OID = 3354 (  pg_stat_get_xact_tuples_warm_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated in current transaction");
 DATA(insert OID = 3044 (  pg_stat_get_xact_blocks_fetched		PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_fetched _null_ _null_ _null_ ));
 DESCR("statistics: number of blocks fetched in current transaction");
 DATA(insert OID = 3045 (  pg_stat_get_xact_blocks_hit			PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_hit _null_ _null_ _null_ ));
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 02dbe7b..c4495a3 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -382,6 +382,7 @@ extern void UnregisterExprContextCallback(ExprContext *econtext,
 extern void ExecOpenIndices(ResultRelInfo *resultRelInfo, bool speculative);
 extern void ExecCloseIndices(ResultRelInfo *resultRelInfo);
 extern List *ExecInsertIndexTuples(TupleTableSlot *slot, ItemPointer tupleid,
+					  ItemPointer root_tid, Bitmapset *modified_attrs,
 					  EState *estate, bool noDupErr, bool *specConflict,
 					  List *arbiterIndexes);
 extern bool ExecCheckIndexConstraints(TupleTableSlot *slot, EState *estate,
diff --git a/src/include/executor/nodeIndexscan.h b/src/include/executor/nodeIndexscan.h
index 46d6f45..2c4d884 100644
--- a/src/include/executor/nodeIndexscan.h
+++ b/src/include/executor/nodeIndexscan.h
@@ -37,5 +37,4 @@ extern void ExecIndexEvalRuntimeKeys(ExprContext *econtext,
 extern bool ExecIndexEvalArrayKeys(ExprContext *econtext,
 					   IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 extern bool ExecIndexAdvanceArrayKeys(IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
-
 #endif   /* NODEINDEXSCAN_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index f9bcdd6..07f2900 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -62,6 +62,7 @@ typedef struct IndexInfo
 	NodeTag		type;
 	int			ii_NumIndexAttrs;
 	AttrNumber	ii_KeyAttrNumbers[INDEX_MAX_KEYS];
+	Bitmapset  *ii_indxattrs;	/* bitmap of all columns used in this index */
 	List	   *ii_Expressions; /* list of Expr */
 	List	   *ii_ExpressionsState;	/* list of ExprState */
 	List	   *ii_Predicate;	/* list of Expr */
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index de8225b..ee635be 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -105,6 +105,7 @@ typedef struct PgStat_TableCounts
 	PgStat_Counter t_tuples_updated;
 	PgStat_Counter t_tuples_deleted;
 	PgStat_Counter t_tuples_hot_updated;
+	PgStat_Counter t_tuples_warm_updated;
 	bool		t_truncated;
 
 	PgStat_Counter t_delta_live_tuples;
@@ -625,6 +626,7 @@ typedef struct PgStat_StatTabEntry
 	PgStat_Counter tuples_updated;
 	PgStat_Counter tuples_deleted;
 	PgStat_Counter tuples_hot_updated;
+	PgStat_Counter tuples_warm_updated;
 
 	PgStat_Counter n_live_tuples;
 	PgStat_Counter n_dead_tuples;
@@ -1177,7 +1179,7 @@ pgstat_report_wait_end(void)
 	(pgStatBlockWriteTime += (n))
 
 extern void pgstat_count_heap_insert(Relation rel, int n);
-extern void pgstat_count_heap_update(Relation rel, bool hot);
+extern void pgstat_count_heap_update(Relation rel, bool hot, bool warm);
 extern void pgstat_count_heap_delete(Relation rel);
 extern void pgstat_count_truncate(Relation rel);
 extern void pgstat_update_heap_dead_tuples(Relation rel, int delta);
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index a617a7c..fbac7c0 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -138,9 +138,14 @@ typedef struct RelationData
 
 	/* data managed by RelationGetIndexAttrBitmap: */
 	Bitmapset  *rd_indexattr;	/* identifies columns used in indexes */
+	Bitmapset  *rd_exprindexattr; /* identifies columns used in expression or
+									 predicate indexes */
+	Bitmapset  *rd_indxnotreadyattr;	/* columns used by indexes not yet
+										   ready */
 	Bitmapset  *rd_keyattr;		/* cols that can be ref'd by foreign keys */
 	Bitmapset  *rd_pkattr;		/* cols included in primary key */
 	Bitmapset  *rd_idattr;		/* included in replica identity index */
+	bool		rd_supportswarm;/* True if the table can be WARM updated */
 
 	PublicationActions  *rd_pubactions;	/* publication actions */
 
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index da36b67..d18bd09 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -50,7 +50,9 @@ typedef enum IndexAttrBitmapKind
 	INDEX_ATTR_BITMAP_ALL,
 	INDEX_ATTR_BITMAP_KEY,
 	INDEX_ATTR_BITMAP_PRIMARY_KEY,
-	INDEX_ATTR_BITMAP_IDENTITY_KEY
+	INDEX_ATTR_BITMAP_IDENTITY_KEY,
+	INDEX_ATTR_BITMAP_EXPR_PREDICATE,
+	INDEX_ATTR_BITMAP_NOTREADY
 } IndexAttrBitmapKind;
 
 extern Bitmapset *RelationGetIndexAttrBitmap(Relation relation,
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index de5ae00..7656e6e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1728,6 +1728,7 @@ pg_stat_all_tables| SELECT c.oid AS relid,
     pg_stat_get_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_tuples_deleted(c.oid) AS n_tup_del,
     pg_stat_get_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_tuples_warm_updated(c.oid) AS n_tup_warm_upd,
     pg_stat_get_live_tuples(c.oid) AS n_live_tup,
     pg_stat_get_dead_tuples(c.oid) AS n_dead_tup,
     pg_stat_get_mod_since_analyze(c.oid) AS n_mod_since_analyze,
@@ -1871,6 +1872,7 @@ pg_stat_sys_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1914,6 +1916,7 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1951,7 +1954,8 @@ pg_stat_xact_all_tables| SELECT c.oid AS relid,
     pg_stat_get_xact_tuples_inserted(c.oid) AS n_tup_ins,
     pg_stat_get_xact_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_xact_tuples_deleted(c.oid) AS n_tup_del,
-    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd
+    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_xact_tuples_warm_updated(c.oid) AS n_tup_warm_upd
    FROM ((pg_class c
      LEFT JOIN pg_index i ON ((c.oid = i.indrelid)))
      LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
@@ -1967,7 +1971,8 @@ pg_stat_xact_sys_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname = ANY (ARRAY['pg_catalog'::name, 'information_schema'::name])) OR (pg_stat_xact_all_tables.schemaname ~ '^pg_toast'::text));
 pg_stat_xact_user_functions| SELECT p.oid AS funcid,
@@ -1989,7 +1994,8 @@ pg_stat_xact_user_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname <> ALL (ARRAY['pg_catalog'::name, 'information_schema'::name])) AND (pg_stat_xact_all_tables.schemaname !~ '^pg_toast'::text));
 pg_statio_all_indexes| SELECT c.oid AS relid,
diff --git a/src/test/regress/expected/warm.out b/src/test/regress/expected/warm.out
new file mode 100644
index 0000000..0aa3bb7
--- /dev/null
+++ b/src/test/regress/expected/warm.out
@@ -0,0 +1,367 @@
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+-- This is a candidate for a HOT update since only a non-index column
+-- changes, but the page has no free space, so it likely goes non-HOT
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on updtst_tab1  (cost=4.45..47.23 rows=22 width=72)
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1  (cost=0.00..4.45 rows=22 width=0)
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Check if index only scan works correctly
+EXPLAIN SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on updtst_tab1  (cost=4.45..47.23 rows=22 width=4)
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1  (cost=0.00..4.45 rows=22 width=0)
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+                                      QUERY PLAN                                      
+--------------------------------------------------------------------------------------
+ Index Only Scan using updtst_indx1 on updtst_tab1  (cost=0.29..9.16 rows=50 width=4)
+   Index Cond: (b = 140001)
+(2 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab1;
+------------------
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+EXPLAIN SELECT * FROM updtst_tab2 WHERE b = 701;
+                                QUERY PLAN                                 
+---------------------------------------------------------------------------
+ Bitmap Heap Scan on updtst_tab2  (cost=4.18..12.64 rows=4 width=72)
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE a = 1;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+SET enable_seqscan = false;
+EXPLAIN SELECT * FROM updtst_tab2 WHERE b = 701;
+                                QUERY PLAN                                 
+---------------------------------------------------------------------------
+ Bitmap Heap Scan on updtst_tab2  (cost=4.18..12.64 rows=4 width=72)
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE b = 701;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+VACUUM updtst_tab2;
+EXPLAIN SELECT b FROM updtst_tab2 WHERE b = 701;
+                                     QUERY PLAN                                      
+-------------------------------------------------------------------------------------
+ Index Only Scan using updtst_indx2 on updtst_tab2  (cost=0.14..4.16 rows=1 width=4)
+   Index Cond: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab2 WHERE b = 701;
+  b  
+-----
+ 701
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab2;
+------------------
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 1;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN SELECT b FROM updtst_tab3 WHERE b = 701;
+                        QUERY PLAN                         
+-----------------------------------------------------------
+ Seq Scan on updtst_tab3  (cost=0.00..2.25 rows=1 width=4)
+   Filter: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 701;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+  b   
+------
+ 1421
+(1 row)
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+SET enable_seqscan = false;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    98
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 2;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN SELECT b FROM updtst_tab3 WHERE b = 702;
+                                     QUERY PLAN                                      
+-------------------------------------------------------------------------------------
+ Index Only Scan using updtst_indx3 on updtst_tab3  (cost=0.14..8.16 rows=1 width=4)
+   Index Cond: (b = 702)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 702;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+  b   
+------
+ 1422
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab3;
+------------------
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+explain select * from test_warm where lower(a) = 'test';
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on test_warm  (cost=4.18..12.65 rows=4 width=64)
+   Recheck Cond: (lower(a) = 'test'::text)
+   ->  Bitmap Index Scan on test_warmindx  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (lower(a) = 'test'::text)
+(4 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+select *, ctid from test_warm where a = 'test';
+ a | b | ctid 
+---+---+------
+(0 rows)
+
+select *, ctid from test_warm where a = 'TEST';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+                                   QUERY PLAN                                    
+---------------------------------------------------------------------------------
+ Index Scan using test_warmindx on test_warm  (cost=0.15..20.22 rows=4 width=64)
+   Index Cond: (lower(a) = 'test'::text)
+(2 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+DROP TABLE test_warm;
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index edeb2d6..2268705 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -42,6 +42,8 @@ test: create_type
 test: create_table
 test: create_function_2
 
+test: warm
+
 # ----------
 # Load huge amounts of data
 # We should split the data files into single files and then
diff --git a/src/test/regress/sql/warm.sql b/src/test/regress/sql/warm.sql
new file mode 100644
index 0000000..b73c278
--- /dev/null
+++ b/src/test/regress/sql/warm.sql
@@ -0,0 +1,172 @@
+
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+
+-- This is a candidate for a HOT update since only a non-index column
+-- changes, but the page has no free space, so it likely goes non-HOT
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Check if index only scan works correctly
+EXPLAIN SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab1;
+
+------------------
+
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+
+EXPLAIN SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE a = 1;
+
+SET enable_seqscan = false;
+EXPLAIN SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE b = 701;
+
+VACUUM updtst_tab2;
+EXPLAIN SELECT b FROM updtst_tab2 WHERE b = 701;
+SELECT b FROM updtst_tab2 WHERE b = 701;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab2;
+------------------
+
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+SELECT * FROM updtst_tab3 WHERE a = 1;
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+
+VACUUM updtst_tab3;
+EXPLAIN SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+SET enable_seqscan = false;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+SELECT * FROM updtst_tab3 WHERE a = 2;
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+
+VACUUM updtst_tab3;
+EXPLAIN SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab3;
+------------------
+
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where a = 'test';
+select *, ctid from test_warm where a = 'TEST';
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+DROP TABLE test_warm;
+
+
Attachment: 0001_track_root_lp_v11.patch (application/octet-stream)
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 84447f0..5149c07 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -93,7 +93,8 @@ static HeapTuple heap_prepare_insert(Relation relation, HeapTuple tup,
 					TransactionId xid, CommandId cid, int options);
 static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
-				HeapTuple newtup, HeapTuple old_key_tup,
+				HeapTuple newtup, OffsetNumber root_offnum,
+				HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
 static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
 							 Bitmapset *interesting_cols,
@@ -2247,13 +2248,13 @@ heap_get_latest_tid(Relation relation,
 		 */
 		if ((tp.t_data->t_infomask & HEAP_XMAX_INVALID) ||
 			HeapTupleHeaderIsOnlyLocked(tp.t_data) ||
-			ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid))
+			HeapTupleHeaderIsHeapLatest(tp.t_data, &ctid))
 		{
 			UnlockReleaseBuffer(buffer);
 			break;
 		}
 
-		ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tp.t_data, &ctid);
 		priorXmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		UnlockReleaseBuffer(buffer);
 	}							/* end of loop */
@@ -2384,6 +2385,7 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	bool		all_visible_cleared = false;
+	OffsetNumber	root_offnum;
 
 	/*
 	 * Fill in tuple header fields, assign an OID, and toast the tuple if
@@ -2422,8 +2424,13 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
-	RelationPutHeapTuple(relation, buffer, heaptup,
-						 (options & HEAP_INSERT_SPECULATIVE) != 0);
+	root_offnum = RelationPutHeapTuple(relation, buffer, heaptup,
+						 (options & HEAP_INSERT_SPECULATIVE) != 0,
+						 InvalidOffsetNumber);
+
+	/* We must not overwrite the speculative insertion token. */
+	if ((options & HEAP_INSERT_SPECULATIVE) == 0)
+		HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 	if (PageIsAllVisible(BufferGetPage(buffer)))
 	{
@@ -2651,6 +2658,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 	Size		saveFreeSpace;
 	bool		need_tuple_data = RelationIsLogicallyLogged(relation);
 	bool		need_cids = RelationIsAccessibleInLogicalDecoding(relation);
+	OffsetNumber	root_offnum;
 
 	needwal = !(options & HEAP_INSERT_SKIP_WAL) && RelationNeedsWAL(relation);
 	saveFreeSpace = RelationGetTargetPageFreeSpace(relation,
@@ -2721,7 +2729,12 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		 * RelationGetBufferForTuple has ensured that the first tuple fits.
 		 * Put that on the page, and then as many other tuples as fit.
 		 */
-		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false);
+		root_offnum = RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false,
+				InvalidOffsetNumber);
+
+		/* Mark this tuple as the latest and also set root offset. */
+		HeapTupleHeaderSetHeapLatest(heaptuples[ndone]->t_data, root_offnum);
+
 		for (nthispage = 1; ndone + nthispage < ntuples; nthispage++)
 		{
 			HeapTuple	heaptup = heaptuples[ndone + nthispage];
@@ -2729,7 +2742,10 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			if (PageGetHeapFreeSpace(page) < MAXALIGN(heaptup->t_len) + saveFreeSpace)
 				break;
 
-			RelationPutHeapTuple(relation, buffer, heaptup, false);
+			root_offnum = RelationPutHeapTuple(relation, buffer, heaptup, false,
+					InvalidOffsetNumber);
+			/* Mark each tuple as the latest and also set root offset. */
+			HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 			/*
 			 * We don't use heap_multi_insert for catalog tuples yet, but
@@ -3001,6 +3017,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	HeapTupleData tp;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	TransactionId new_xmax;
@@ -3011,6 +3028,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	bool		all_visible_cleared = false;
 	HeapTuple	old_key_tuple = NULL;	/* replica identity of the tuple */
 	bool		old_key_copied = false;
+	OffsetNumber	root_offnum;
 
 	Assert(ItemPointerIsValid(tid));
 
@@ -3052,7 +3070,8 @@ heap_delete(Relation relation, ItemPointer tid,
 		LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 	}
 
-	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid));
+	offnum = ItemPointerGetOffsetNumber(tid);
+	lp = PageGetItemId(page, offnum);
 	Assert(ItemIdIsNormal(lp));
 
 	tp.t_tableOid = RelationGetRelid(relation);
@@ -3182,7 +3201,17 @@ l1:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tp.t_data->t_ctid;
+
+		/*
+		 * If we're at the end of the chain, return the tuple's own TID. The
+		 * caller treats a returned TID equal to the one it passed in as a
+		 * hint that it has reached the end of the chain.
+		 */
+		if (!HeapTupleHeaderIsHeapLatest(tp.t_data, &tp.t_self))
+			HeapTupleHeaderGetNextTid(tp.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&tp.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tp.t_data);
@@ -3231,6 +3260,22 @@ l1:
 							  xid, LockTupleExclusive, true,
 							  &new_xmax, &new_infomask, &new_infomask2);
 
+	/*
+	 * heap_get_root_tuple() may call palloc, which is disallowed once we
+	 * enter the critical section. So check whether the root offset is
+	 * cached in the tuple and, if not, fetch it the hard way before
+	 * entering the critical section.
+	 *
+	 * Unless we are dealing with a pg_upgrade'd cluster, the root offset
+	 * should almost always be cached, so fetching it the hard way is a
+	 * rare expense. Moreover, once a tuple is updated, the information is
+	 * copied to the new version, so we do not keep paying this price
+	 * forever.
+	 */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&tp.t_self));
+
 	START_CRIT_SECTION();
 
 	/*
@@ -3258,8 +3303,10 @@ l1:
 	HeapTupleHeaderClearHotUpdated(tp.t_data);
 	HeapTupleHeaderSetXmax(tp.t_data, new_xmax);
 	HeapTupleHeaderSetCmax(tp.t_data, cid, iscombo);
-	/* Make sure there is no forward chain link in t_ctid */
-	tp.t_data->t_ctid = tp.t_self;
+
+	/* Mark this tuple as the latest tuple in the update chain. */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		HeapTupleHeaderSetHeapLatest(tp.t_data, root_offnum);
 
 	MarkBufferDirty(buffer);
 
@@ -3460,6 +3507,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		old_key_copied = false;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
+	OffsetNumber	root_offnum;
 	MultiXactStatus mxact_status;
 	Buffer		buffer,
 				newbuf,
@@ -3522,6 +3571,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 
 
 	block = ItemPointerGetBlockNumber(otid);
+	offnum = ItemPointerGetOffsetNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
@@ -3806,7 +3856,12 @@ l2:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(oldtup.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = oldtup.t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(oldtup.t_data, &oldtup.t_self))
+			HeapTupleHeaderGetNextTid(oldtup.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&oldtup.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(oldtup.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(oldtup.t_data);
@@ -3946,6 +4001,7 @@ l2:
 		uint16		infomask_lock_old_tuple,
 					infomask2_lock_old_tuple;
 		bool		cleared_all_frozen = false;
+		OffsetNumber	root_offnum;
 
 		/*
 		 * To prevent concurrent sessions from updating the tuple, we have to
@@ -3973,6 +4029,14 @@ l2:
 
 		Assert(HEAP_XMAX_IS_LOCKED_ONLY(infomask_lock_old_tuple));
 
+		/*
+		 * Fetch root offset before entering the critical section. We do this
+		 * only if the information is not already available.
+		 */
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&oldtup.t_self));
+
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
@@ -3987,7 +4051,8 @@ l2:
 		HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 		/* temporarily make it look not-updated, but locked */
-		oldtup.t_data->t_ctid = oldtup.t_self;
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			HeapTupleHeaderSetHeapLatest(oldtup.t_data, root_offnum);
 
 		/*
 		 * Clear all-frozen bit on visibility map if needed. We could
@@ -4145,6 +4210,10 @@ l2:
 										   bms_overlap(modified_attrs, id_attrs),
 										   &old_key_copied);
 
+	if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
@@ -4170,6 +4239,17 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+		/*
+		 * For HOT (or WARM) updated tuples, we store the offset of the root
+		 * line pointer of this chain in the ip_posid field of the new tuple.
+		 * Usually this information is available in the corresponding field
+		 * of the old tuple. But for aborted updates or pg_upgrade'd
+		 * databases, we might be looking at old-style CTID chains, in which
+		 * case the information must be obtained the hard way (we should
+		 * have done that before entering the critical section above).
+		 */
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
 	else
 	{
@@ -4177,10 +4257,22 @@ l2:
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		root_offnum = InvalidOffsetNumber;
 	}
 
-	RelationPutHeapTuple(relation, newbuf, heaptup, false);		/* insert new tuple */
-
+	/* insert new tuple */
+	root_offnum = RelationPutHeapTuple(relation, newbuf, heaptup, false,
+									   root_offnum);
+	/*
+	 * Also mark both copies as latest and set the root offset information.
+	 * For a HOT/WARM update, we copy the information from the old tuple if
+	 * it is available there, or use the value computed above. For regular
+	 * updates, RelationPutHeapTuple has returned the actual offset number
+	 * at which the new version was inserted, and we store that value since
+	 * the update starts a new HOT chain.
+	 */
+	HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
+	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
 	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
@@ -4193,7 +4285,7 @@ l2:
 	HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 	/* record address of new tuple in t_ctid of old one */
-	oldtup.t_data->t_ctid = heaptup->t_self;
+	HeapTupleHeaderSetNextTid(oldtup.t_data, &(heaptup->t_self));
 
 	/* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */
 	if (PageIsAllVisible(BufferGetPage(buffer)))
@@ -4232,6 +4324,7 @@ l2:
 
 		recptr = log_heap_update(relation, buffer,
 								 newbuf, &oldtup, heaptup,
+								 root_offnum,
 								 old_key_tuple,
 								 all_visible_cleared,
 								 all_visible_cleared_new);
@@ -4512,7 +4605,8 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	ItemId		lp;
 	Page		page;
 	Buffer		vmbuffer = InvalidBuffer;
-	BlockNumber block;
+	BlockNumber	block;
+	OffsetNumber	offnum;
 	TransactionId xid,
 				xmax;
 	uint16		old_infomask,
@@ -4521,9 +4615,11 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	bool		first_time = true;
 	bool		have_tuple_lock = false;
 	bool		cleared_all_frozen = false;
+	OffsetNumber	root_offnum;
 
 	*buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	block = ItemPointerGetBlockNumber(tid);
+	offnum = ItemPointerGetOffsetNumber(tid);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -4543,6 +4639,7 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	tuple->t_data = (HeapTupleHeader) PageGetItem(page, lp);
 	tuple->t_len = ItemIdGetLength(lp);
 	tuple->t_tableOid = RelationGetRelid(relation);
+	tuple->t_self = *tid;
 
 l3:
 	result = HeapTupleSatisfiesUpdate(tuple, cid, *buffer);
@@ -4570,7 +4667,11 @@ l3:
 		xwait = HeapTupleHeaderGetRawXmax(tuple->t_data);
 		infomask = tuple->t_data->t_infomask;
 		infomask2 = tuple->t_data->t_infomask2;
-		ItemPointerCopy(&tuple->t_data->t_ctid, &t_ctid);
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &t_ctid);
+		else
+			ItemPointerCopy(tid, &t_ctid);
 
 		LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
 
@@ -5008,7 +5109,12 @@ failed:
 		Assert(result == HeapTupleSelfUpdated || result == HeapTupleUpdated ||
 			   result == HeapTupleWouldBlock);
 		Assert(!(tuple->t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tuple->t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(tid, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tuple->t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tuple->t_data);
@@ -5056,6 +5162,10 @@ failed:
 							  GetCurrentTransactionId(), mode, false,
 							  &xid, &new_infomask, &new_infomask2);
 
+	if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&tuple->t_self));
+
 	START_CRIT_SECTION();
 
 	/*
@@ -5084,7 +5194,10 @@ failed:
 	 * the tuple as well.
 	 */
 	if (HEAP_XMAX_IS_LOCKED_ONLY(new_infomask))
-		tuple->t_data->t_ctid = *tid;
+	{
+		if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+			HeapTupleHeaderSetHeapLatest(tuple->t_data, root_offnum);
+	}
 
 	/* Clear only the all-frozen bit on visibility map if needed */
 	if (PageIsAllVisible(page) &&
@@ -5598,6 +5711,7 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 	bool		cleared_all_frozen = false;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber block;
+	OffsetNumber offnum;
 
 	ItemPointerCopy(tid, &tupid);
 
@@ -5606,6 +5720,8 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 		new_infomask = 0;
 		new_xmax = InvalidTransactionId;
 		block = ItemPointerGetBlockNumber(&tupid);
+		offnum = ItemPointerGetOffsetNumber(&tupid);
+
 		ItemPointerCopy(&tupid, &(mytup.t_self));
 
 		if (!heap_fetch(rel, SnapshotAny, &mytup, &buf, false, NULL))
@@ -5835,7 +5951,7 @@ l4:
 
 		/* if we find the end of update chain, we're done. */
 		if (mytup.t_data->t_infomask & HEAP_XMAX_INVALID ||
-			ItemPointerEquals(&mytup.t_self, &mytup.t_data->t_ctid) ||
+			HeapTupleHeaderIsHeapLatest(mytup.t_data, &mytup.t_self) ||
 			HeapTupleHeaderIsOnlyLocked(mytup.t_data))
 		{
 			result = HeapTupleMayBeUpdated;
@@ -5844,7 +5960,7 @@ l4:
 
 		/* tail recursion */
 		priorXmax = HeapTupleHeaderGetUpdateXid(mytup.t_data);
-		ItemPointerCopy(&(mytup.t_data->t_ctid), &tupid);
+		HeapTupleHeaderGetNextTid(mytup.t_data, &tupid);
 		UnlockReleaseBuffer(buf);
 		if (vmbuffer != InvalidBuffer)
 			ReleaseBuffer(vmbuffer);
@@ -5961,7 +6077,7 @@ heap_finish_speculative(Relation relation, HeapTuple tuple)
 	 * Replace the speculative insertion token with a real t_ctid, pointing to
 	 * itself like it does on regular tuples.
 	 */
-	htup->t_ctid = tuple->t_self;
+	HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 	/* XLOG stuff */
 	if (RelationNeedsWAL(relation))
@@ -6087,8 +6203,7 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	HeapTupleHeaderSetXmin(tp.t_data, InvalidTransactionId);
 
 	/* Clear the speculative insertion token too */
-	tp.t_data->t_ctid = tp.t_self;
-
+	HeapTupleHeaderSetHeapLatest(tp.t_data, ItemPointerGetOffsetNumber(tid));
 	MarkBufferDirty(buffer);
 
 	/*
@@ -7436,6 +7551,7 @@ log_heap_visible(RelFileNode rnode, Buffer heap_buffer, Buffer vm_buffer,
 static XLogRecPtr
 log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup, HeapTuple newtup,
+				OffsetNumber root_offnum,
 				HeapTuple old_key_tuple,
 				bool all_visible_cleared, bool new_all_visible_cleared)
 {
@@ -7556,6 +7672,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	xlrec.new_offnum = ItemPointerGetOffsetNumber(&newtup->t_self);
 	xlrec.new_xmax = HeapTupleHeaderGetRawXmax(newtup->t_data);
 
+	Assert(OffsetNumberIsValid(root_offnum));
+	xlrec.root_offnum = root_offnum;
+
 	bufflags = REGBUF_STANDARD;
 	if (init)
 		bufflags |= REGBUF_WILL_INIT;
@@ -8210,7 +8329,13 @@ heap_xlog_delete(XLogReaderState *record)
 			PageClearAllVisible(page);
 
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = target_tid;
+		if (!HeapTupleHeaderHasRootOffset(htup))
+		{
+			OffsetNumber	root_offnum;
+			root_offnum = heap_get_root_tuple(page, xlrec->offnum); 
+			HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+		}
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8300,7 +8425,8 @@ heap_xlog_insert(XLogReaderState *record)
 		htup->t_hoff = xlhdr.t_hoff;
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
-		htup->t_ctid = target_tid;
+
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->offnum);
 
 		if (PageAddItem(page, (Item) htup, newlen, xlrec->offnum,
 						true, true) == InvalidOffsetNumber)
@@ -8435,8 +8561,8 @@ heap_xlog_multi_insert(XLogReaderState *record)
 			htup->t_hoff = xlhdr->t_hoff;
 			HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 			HeapTupleHeaderSetCmin(htup, FirstCommandId);
-			ItemPointerSetBlockNumber(&htup->t_ctid, blkno);
-			ItemPointerSetOffsetNumber(&htup->t_ctid, offnum);
+
+			HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 			offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 			if (offnum == InvalidOffsetNumber)
@@ -8572,7 +8698,7 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
 		/* Set forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetNextTid(htup, &newtid);
 
 		/* Mark the page as a candidate for pruning */
 		PageSetPrunable(page, XLogRecGetXid(record));
@@ -8705,13 +8831,17 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
-		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = newtid;
 
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
 
+		/*
+		 * Make sure the tuple is marked as the latest and root offset
+		 * information is restored.
+		 */
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->root_offnum);
+
 		if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED)
 			PageClearAllVisible(page);
 
@@ -8774,6 +8904,9 @@ heap_xlog_confirm(XLogReaderState *record)
 		 */
 		ItemPointerSet(&htup->t_ctid, BufferGetBlockNumber(buffer), offnum);
 
+		/* For newly inserted tuple, set root offset to itself. */
+		HeapTupleHeaderSetHeapLatest(htup, offnum);
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8837,11 +8970,17 @@ heap_xlog_lock(XLogReaderState *record)
 		 */
 		if (HEAP_XMAX_IS_LOCKED_ONLY(htup->t_infomask))
 		{
+			ItemPointerData	target_tid;
+
+			ItemPointerSet(&target_tid, BufferGetBlockNumber(buffer), offnum);
 			HeapTupleHeaderClearHotUpdated(htup);
 			/* Make sure there is no forward chain link in t_ctid */
-			ItemPointerSet(&htup->t_ctid,
-						   BufferGetBlockNumber(buffer),
-						   offnum);
+			if (!HeapTupleHeaderHasRootOffset(htup))
+			{
+				OffsetNumber	root_offnum;
+				root_offnum = heap_get_root_tuple(page, offnum);
+				HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+			}
 		}
 		HeapTupleHeaderSetXmax(htup, xlrec->locking_xid);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index 6529fe3..8052519 100644
--- a/src/backend/access/heap/hio.c
+++ b/src/backend/access/heap/hio.c
@@ -31,12 +31,20 @@
  * !!! EREPORT(ERROR) IS DISALLOWED HERE !!!  Must PANIC on failure!!!
  *
  * Note - caller must hold BUFFER_LOCK_EXCLUSIVE on the buffer.
+ *
+ * The caller can optionally tell us to set the root offset to the given
+ * value.  Otherwise, the root offset is set to the offset of the new
+ * location once it's known.  The former is used while updating an existing
+ * tuple, where the caller tells us the root line pointer of the chain.  The
+ * latter is used during insertion of a new row, in which case the root line
+ * pointer is set to the offset at which the tuple is inserted.
  */
-void
+OffsetNumber
 RelationPutHeapTuple(Relation relation,
 					 Buffer buffer,
 					 HeapTuple tuple,
-					 bool token)
+					 bool token,
+					 OffsetNumber root_offnum)
 {
 	Page		pageHeader;
 	OffsetNumber offnum;
@@ -60,17 +68,24 @@ RelationPutHeapTuple(Relation relation,
 	ItemPointerSet(&(tuple->t_self), BufferGetBlockNumber(buffer), offnum);
 
 	/*
-	 * Insert the correct position into CTID of the stored tuple, too (unless
-	 * this is a speculative insertion, in which case the token is held in
-	 * CTID field instead)
+	 * Set block number and the root offset into CTID of the stored tuple, too
+	 * (unless this is a speculative insertion, in which case the token is held
+	 * in CTID field instead).
 	 */
 	if (!token)
 	{
 		ItemId		itemId = PageGetItemId(pageHeader, offnum);
 		Item		item = PageGetItem(pageHeader, itemId);
 
+		/* Copy t_ctid to set the correct block number. */
 		((HeapTupleHeader) item)->t_ctid = tuple->t_self;
+
+		if (!OffsetNumberIsValid(root_offnum))
+			root_offnum = offnum;
+		HeapTupleHeaderSetHeapLatest((HeapTupleHeader) item, root_offnum);
 	}
+
+	return root_offnum;
 }
 
 /*
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index d69a266..f54337c 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -55,6 +55,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
 
+static void heap_get_root_tuples_internal(Page page,
+				OffsetNumber target_offnum, OffsetNumber *root_offsets);
 
 /*
  * Optionally prune and repair fragmentation in the specified page.
@@ -553,6 +555,17 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
 		if (!HeapTupleHeaderIsHotUpdated(htup))
 			break;
 
+
+		/*
+		 * If the tuple was HOT-updated and the update was later aborted,
+		 * someone could have marked this tuple as the last tuple in the
+		 * chain without clearing the HOT-updated flag.  So we must check
+		 * whether this is the last tuple in the chain and stop following
+		 * the CTID, else we risk getting into an infinite recursion
+		 * (though prstate->marked[] currently protects against that).
+		 */
+		if (HeapTupleHeaderHasRootOffset(htup))
+			break;
 		/*
 		 * Advance to next chain member.
 		 */
@@ -726,27 +739,47 @@ heap_page_prune_execute(Buffer buffer,
 
 
 /*
- * For all items in this page, find their respective root line pointers.
- * If item k is part of a HOT-chain with root at item j, then we set
- * root_offsets[k - 1] = j.
+ * Either for all items in this page or for the given item, find their
+ * respective root line pointers.
+ *
+ * When target_offnum is a valid offset number, the caller is interested in
+ * just one item. In that case, the root line pointer is returned in
+ * root_offsets.
  *
- * The passed-in root_offsets array must have MaxHeapTuplesPerPage entries.
- * We zero out all unused entries.
+ * When target_offnum is InvalidOffsetNumber, the caller wants to know the
+ * root line pointers of all the items in this page. The root_offsets array
+ * must have MaxHeapTuplesPerPage entries in that case. If item k is part of a
+ * HOT-chain with root at item j, then we set root_offsets[k - 1] = j. We zero
+ * out all unused entries.
  *
  * The function must be called with at least share lock on the buffer, to
  * prevent concurrent prune operations.
  *
+ * This is not a cheap function, since it must scan through all line pointers
+ * and tuples on the page in order to find the root line pointers. To minimize
+ * the cost, we break early when target_offnum is specified and its root line
+ * pointer has been found.
+ *
  * Note: The information collected here is valid only as long as the caller
  * holds a pin on the buffer. Once pin is released, a tuple might be pruned
  * and reused by a completely unrelated tuple.
+ *
+ * Note: This function must not be called inside a critical section because it
+ * internally calls HeapTupleHeaderGetUpdateXid which somewhere down the stack
+ * may try to allocate heap memory. Memory allocation is disallowed in a
+ * critical section.
  */
-void
-heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+static void
+heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offsets)
 {
 	OffsetNumber offnum,
 				maxoff;
 
-	MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
+	if (OffsetNumberIsValid(target_offnum))
+		*root_offsets = InvalidOffsetNumber;
+	else
+		MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
 
 	maxoff = PageGetMaxOffsetNumber(page);
 	for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum = OffsetNumberNext(offnum))
@@ -774,9 +807,28 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 
 			/*
 			 * This is either a plain tuple or the root of a HOT-chain.
-			 * Remember it in the mapping.
+			 *
+			 * If the target_offnum is specified and if we found its mapping,
+			 * return.
 			 */
-			root_offsets[offnum - 1] = offnum;
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (target_offnum == offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember the mapping for any other item. The
+				 * root_offsets array may not even have room for them, so be
+				 * careful not to write past the end of the array.
+				 */
+			}
+			else
+			{
+				/* Remember it in the mapping. */
+				root_offsets[offnum - 1] = offnum;
+			}
 
 			/* If it's not the start of a HOT-chain, we're done with it */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
@@ -817,15 +869,65 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 				!TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(htup)))
 				break;
 
-			/* Remember the root line pointer for this item */
-			root_offsets[nextoffnum - 1] = offnum;
+			/*
+			 * If target_offnum is specified and we found its mapping, return.
+			 */
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (nextoffnum == target_offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember the mapping for any other item. The
+				 * root_offsets array may not even have room for them, so be
+				 * careful not to write past the end of the array.
+				 */
+			}
+			else
+			{
+				/* Remember the root line pointer for this item. */
+				root_offsets[nextoffnum - 1] = offnum;
+			}
 
 			/* Advance to next chain member, if any */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				break;
 
+			/*
+			 * If the tuple was HOT-updated and the update was later aborted,
+			 * someone could have marked this tuple as the last in the chain
+			 * and stored the root offset in its CTID, without clearing the
+			 * HOT-updated flag. So we must check whether the CTID actually
+			 * holds the root offset, and break to avoid infinite recursion.
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
 		}
 	}
 }
+
+/*
+ * Get root line pointer for the given tuple.
+ */
+OffsetNumber
+heap_get_root_tuple(Page page, OffsetNumber target_offnum)
+{
+	OffsetNumber offnum = InvalidOffsetNumber;
+	heap_get_root_tuples_internal(page, target_offnum, &offnum);
+	return offnum;
+}
+
+/*
+ * Get root line pointers for all tuples in the page
+ */
+void
+heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+{
+	heap_get_root_tuples_internal(page, InvalidOffsetNumber,
+			root_offsets);
+}
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index 90ab6f2..e11b4a2 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -419,14 +419,18 @@ rewrite_heap_tuple(RewriteState state,
 	 */
 	if (!((old_tuple->t_data->t_infomask & HEAP_XMAX_INVALID) ||
 		  HeapTupleHeaderIsOnlyLocked(old_tuple->t_data)) &&
-		!(ItemPointerEquals(&(old_tuple->t_self),
-							&(old_tuple->t_data->t_ctid))))
+		!(HeapTupleHeaderIsHeapLatest(old_tuple->t_data, &old_tuple->t_self)))
 	{
 		OldToNewMapping mapping;
 
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
-		hashkey.tid = old_tuple->t_data->t_ctid;
+
+		/* 
+		 * We've already checked that this is not the last tuple in the chain,
+		 * so fetch the next TID in the chain.
+		 */
+		HeapTupleHeaderGetNextTid(old_tuple->t_data, &hashkey.tid);
 
 		mapping = (OldToNewMapping)
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -439,7 +443,7 @@ rewrite_heap_tuple(RewriteState state,
 			 * set the ctid of this tuple to point to the new location, and
 			 * insert it right away.
 			 */
-			new_tuple->t_data->t_ctid = mapping->new_tid;
+			HeapTupleHeaderSetNextTid(new_tuple->t_data, &mapping->new_tid);
 
 			/* We don't need the mapping entry anymore */
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -525,7 +529,7 @@ rewrite_heap_tuple(RewriteState state,
 				new_tuple = unresolved->tuple;
 				free_new = true;
 				old_tid = unresolved->old_tid;
-				new_tuple->t_data->t_ctid = new_tid;
+				HeapTupleHeaderSetNextTid(new_tuple->t_data, &new_tid);
 
 				/*
 				 * We don't need the hash entry anymore, but don't free its
@@ -731,7 +735,12 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		onpage_tup->t_ctid = tup->t_self;
+		/* 
+		 * Set t_ctid just to ensure that block number is copied correctly, but
+		 * then immediately mark the tuple as the latest.
+		 */
+		HeapTupleHeaderSetNextTid(onpage_tup, &tup->t_self);
+		HeapTupleHeaderSetHeapLatest(onpage_tup, newoff);
 	}
 
 	/* If heaptup is a private copy, release it. */
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 8d119f6..9920f48 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -788,7 +788,8 @@ retry:
 			  DirtySnapshot.speculativeToken &&
 			  TransactionIdPrecedes(GetCurrentTransactionId(), xwait))))
 		{
-			ctid_wait = tup->t_data->t_ctid;
+			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
+				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 3a5b5b2..12476e7 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -2589,7 +2589,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		 * As above, it should be safe to examine xmax and t_ctid without the
 		 * buffer content lock, because they can't be changing.
 		 */
-		if (ItemPointerEquals(&tuple.t_self, &tuple.t_data->t_ctid))
+		if (HeapTupleHeaderIsHeapLatest(tuple.t_data, &tuple.t_self))
 		{
 			/* deleted, so forget about it */
 			ReleaseBuffer(buffer);
@@ -2597,7 +2597,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		}
 
 		/* updated, so look at the updated row */
-		tuple.t_self = tuple.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tuple.t_data, &tuple.t_self);
 		/* updated row should have xmin matching this xmax */
 		priorXmax = HeapTupleHeaderGetUpdateXid(tuple.t_data);
 		ReleaseBuffer(buffer);
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index a864f78..95aa976 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -189,6 +189,7 @@ extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
+extern OffsetNumber heap_get_root_tuple(Page page, OffsetNumber target_offnum);
 extern void heap_get_root_tuples(Page page, OffsetNumber *root_offsets);
 
 /* in heap/syncscan.c */
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index 52f28b8..a4a1fe1 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -193,6 +193,8 @@ typedef struct xl_heap_update
 	uint8		flags;
 	TransactionId new_xmax;		/* xmax of the new tuple */
 	OffsetNumber new_offnum;	/* new tuple's offset */
+	OffsetNumber root_offnum;	/* offset of the root line pointer in case of
+								   HOT or WARM update */
 
 	/*
 	 * If XLOG_HEAP_CONTAINS_OLD_TUPLE or XLOG_HEAP_CONTAINS_OLD_KEY flags are
@@ -200,7 +202,7 @@ typedef struct xl_heap_update
 	 */
 } xl_heap_update;
 
-#define SizeOfHeapUpdate	(offsetof(xl_heap_update, new_offnum) + sizeof(OffsetNumber))
+#define SizeOfHeapUpdate	(offsetof(xl_heap_update, root_offnum) + sizeof(OffsetNumber))
 
 /*
  * This is what we need to know about vacuum page cleanup/redirect
diff --git a/src/include/access/hio.h b/src/include/access/hio.h
index 2824f23..921cb37 100644
--- a/src/include/access/hio.h
+++ b/src/include/access/hio.h
@@ -35,8 +35,8 @@ typedef struct BulkInsertStateData
 }	BulkInsertStateData;
 
 
-extern void RelationPutHeapTuple(Relation relation, Buffer buffer,
-					 HeapTuple tuple, bool token);
+extern OffsetNumber RelationPutHeapTuple(Relation relation, Buffer buffer,
+					 HeapTuple tuple, bool token, OffsetNumber root_offnum);
 extern Buffer RelationGetBufferForTuple(Relation relation, Size len,
 						  Buffer otherBuffer, int options,
 						  BulkInsertState bistate,
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index a6c7e31..7552186 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,13 +260,19 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x1800 are available */
+/* bit 0x0800 is available */
+#define HEAP_LATEST_TUPLE		0x1000	/*
+										 * This is the last tuple in the
+										 * chain, and ip_posid points to
+										 * the root line pointer
+										 */
 #define HEAP_KEYS_UPDATED		0x2000	/* tuple was updated and key cols
 										 * modified, or tuple deleted */
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xE000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+
 
 /*
  * HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins.  It is
@@ -504,6 +510,43 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+/*
+ * Mark this as the last tuple in the HOT chain. Before PG v10 we used to
+ * store the TID of the tuple itself in the t_ctid field to mark the end of
+ * the chain. Starting with PG v10, we instead use the special flag
+ * HEAP_LATEST_TUPLE to identify the last tuple, and store the root line
+ * pointer of the HOT chain in the t_ctid field.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderSetHeapLatest(tup, offnum) \
+do { \
+	AssertMacro(OffsetNumberIsValid(offnum)); \
+	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE; \
+	ItemPointerSetOffsetNumber(&(tup)->t_ctid, (offnum)); \
+} while (0)
+
+#define HeapTupleHeaderClearHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 &= ~HEAP_LATEST_TUPLE \
+)
+
+/*
+ * Starting from PostgreSQL 10, the latest tuple in an update chain has
+ * HEAP_LATEST_TUPLE set; but tuples upgraded from earlier versions do not.
+ * For those, we determine whether a tuple is latest by testing that its t_ctid
+ * points to itself.
+ *
+ * Note: beware of multiple evaluations of "tup" and "tid" arguments.
+ */
+#define HeapTupleHeaderIsHeapLatest(tup, tid) \
+( \
+  (((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0) || \
+  ((ItemPointerGetBlockNumber(&(tup)->t_ctid) == ItemPointerGetBlockNumber(tid)) && \
+   (ItemPointerGetOffsetNumber(&(tup)->t_ctid) == ItemPointerGetOffsetNumber(tid))) \
+)
+
+
 #define HeapTupleHeaderSetHeapOnly(tup) \
 ( \
   (tup)->t_infomask2 |= HEAP_ONLY_TUPLE \
@@ -542,6 +585,56 @@ do { \
 
 
 /*
+ * Set the t_ctid chain and also clear the HEAP_LATEST_TUPLE flag since we
+ * now have a new tuple in the chain and this is no longer the last tuple of
+ * the chain.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderSetNextTid(tup, tid) \
+do { \
+		ItemPointerCopy((tid), &((tup)->t_ctid)); \
+		HeapTupleHeaderClearHeapLatest((tup)); \
+} while (0)
+
+/*
+ * Get TID of next tuple in the update chain. Caller must have checked that
+ * we are not already at the end of the chain because in that case t_ctid may
+ * actually store the root line pointer of the HOT chain.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderGetNextTid(tup, next_ctid) \
+do { \
+	AssertMacro(!((tup)->t_infomask2 & HEAP_LATEST_TUPLE)); \
+	ItemPointerCopy(&(tup)->t_ctid, (next_ctid)); \
+} while (0)
+
+/*
+ * Get the root line pointer of the HOT chain. The caller should have confirmed
+ * that the root offset is cached before calling this macro.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderGetRootOffset(tup) \
+( \
+	AssertMacro(((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0), \
+	ItemPointerGetOffsetNumber(&(tup)->t_ctid) \
+)
+
+/*
+ * Return whether the tuple has a cached root offset.  We don't use
+ * HeapTupleHeaderIsHeapLatest because that macro also considers the case of
+ * t_ctid pointing to itself, for tuples migrated from pre-v10 clusters.
+ * Here we are interested only in tuples marked with the HEAP_LATEST_TUPLE
+ * flag.
+ */
+#define HeapTupleHeaderHasRootOffset(tup) \
+( \
+	((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0 \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
Attachment: 0000_interesting_attrs.patch (application/octet-stream)
diff --git b/src/backend/access/heap/heapam.c a/src/backend/access/heap/heapam.c
index 5fd7f1e..84447f0 100644
--- b/src/backend/access/heap/heapam.c
+++ a/src/backend/access/heap/heapam.c
@@ -95,11 +95,8 @@ static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
 				HeapTuple newtup, HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
-static void HeapSatisfiesHOTandKeyUpdate(Relation relation,
-							 Bitmapset *hot_attrs,
-							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
-							 bool *satisfies_id,
+static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
+							 Bitmapset *interesting_cols,
 							 HeapTuple oldtup, HeapTuple newtup);
 static bool heap_acquire_tuplock(Relation relation, ItemPointer tid,
 					 LockTupleMode mode, LockWaitPolicy wait_policy,
@@ -3454,6 +3451,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *interesting_attrs;
+	Bitmapset  *modified_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3471,9 +3470,6 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				pagefree;
 	bool		have_tuple_lock = false;
 	bool		iscombo;
-	bool		satisfies_hot;
-	bool		satisfies_key;
-	bool		satisfies_id;
 	bool		use_hot_update = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
@@ -3500,21 +3496,30 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				 errmsg("cannot update tuples during a parallel operation")));
 
 	/*
-	 * Fetch the list of attributes to be checked for HOT update.  This is
-	 * wasted effort if we fail to update or have to put the new tuple on a
-	 * different page.  But we must compute the list before obtaining buffer
-	 * lock --- in the worst case, if we are doing an update on one of the
-	 * relevant system catalogs, we could deadlock if we try to fetch the list
-	 * later.  In any case, the relcache caches the data so this is usually
-	 * pretty cheap.
+	 * Fetch the list of attributes to be checked for various operations.
 	 *
-	 * Note that we get a copy here, so we need not worry about relcache flush
-	 * happening midway through.
+	 * For HOT considerations, this is wasted effort if we fail to update or
+	 * have to put the new tuple on a different page.  But we must compute the
+	 * list before obtaining buffer lock --- in the worst case, if we are doing
+	 * an update on one of the relevant system catalogs, we could deadlock if
+	 * we try to fetch the list later.  In any case, the relcache caches the
+	 * data so this is usually pretty cheap.
+	 *
+	 * We also need columns used by the replica identity, the columns that
+	 * are considered the "key" of rows in the table, and columns that are
+	 * part of indirect indexes.
+	 *
+	 * Note that we get copies of each bitmap, so we need not worry about
+	 * relcache flush happening midway through.
 	 */
 	hot_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_ALL);
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	interesting_attrs = bms_add_members(NULL, hot_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
 
 	block = ItemPointerGetBlockNumber(otid);
 	buffer = ReadBuffer(relation, block);
@@ -3535,7 +3540,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Assert(ItemIdIsNormal(lp));
 
 	/*
-	 * Fill in enough data in oldtup for HeapSatisfiesHOTandKeyUpdate to work
+	 * Fill in enough data in oldtup for HeapDetermineModifiedColumns to work
 	 * properly.
 	 */
 	oldtup.t_tableOid = RelationGetRelid(relation);
@@ -3561,6 +3566,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 		Assert(!(newtup->t_data->t_infomask & HEAP_HASOID));
 	}
 
+	/* Determine columns modified by the update. */
+	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
+												  &oldtup, newtup);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3572,10 +3581,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	 * is updates that don't manipulate key columns, not those that
 	 * serendipitiously arrive at the same key values.
 	 */
-	HeapSatisfiesHOTandKeyUpdate(relation, hot_attrs, key_attrs, id_attrs,
-								 &satisfies_hot, &satisfies_key,
-								 &satisfies_id, &oldtup, newtup);
-	if (satisfies_key)
+	if (!bms_overlap(modified_attrs, key_attrs))
 	{
 		*lockmode = LockTupleNoKeyExclusive;
 		mxact_status = MultiXactStatusNoKeyUpdate;
@@ -3814,6 +3820,8 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(modified_attrs);
+		bms_free(interesting_attrs);
 		return result;
 	}
 
@@ -4118,7 +4126,7 @@ l2:
 		 * to do a HOT update.  Check if any of the index columns have been
 		 * changed.  If not, then HOT update is possible.
 		 */
-		if (satisfies_hot)
+		if (!bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
 	}
 	else
@@ -4133,7 +4141,9 @@ l2:
 	 * ExtractReplicaIdentity() will return NULL if nothing needs to be
 	 * logged.
 	 */
-	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup, !satisfies_id, &old_key_copied);
+	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup,
+										   bms_overlap(modified_attrs, id_attrs),
+										   &old_key_copied);
 
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
@@ -4281,13 +4291,15 @@ l2:
 	bms_free(hot_attrs);
 	bms_free(key_attrs);
 	bms_free(id_attrs);
+	bms_free(modified_attrs);
+	bms_free(interesting_attrs);
 
 	return HeapTupleMayBeUpdated;
 }
 
 /*
  * Check if the specified attribute's value is same in both given tuples.
- * Subroutine for HeapSatisfiesHOTandKeyUpdate.
+ * Subroutine for HeapDetermineModifiedColumns.
  */
 static bool
 heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
@@ -4321,7 +4333,7 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 
 	/*
 	 * Extract the corresponding values.  XXX this is pretty inefficient if
-	 * there are many indexed columns.  Should HeapSatisfiesHOTandKeyUpdate do
+	 * there are many indexed columns.  Should HeapDetermineModifiedColumns do
 	 * a single heap_deform_tuple call on each tuple, instead?	But that
 	 * doesn't work for system columns ...
 	 */
@@ -4366,114 +4378,30 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 /*
  * Check which columns are being updated.
  *
- * This simultaneously checks conditions for HOT updates, for FOR KEY
- * SHARE updates, and REPLICA IDENTITY concerns.  Since much of the time they
- * will be checking very similar sets of columns, and doing the same tests on
- * them, it makes sense to optimize and do them together.
- *
- * We receive three bitmapsets comprising the three sets of columns we're
- * interested in.  Note these are destructively modified; that is OK since
- * this is invoked at most once in heap_update.
+ * Given an updated tuple, determine (and return into the output bitmapset),
+ * from those listed as interesting, the set of columns that changed.
  *
- * hot_result is set to TRUE if it's okay to do a HOT update (i.e. it does not
- * modified indexed columns); key_result is set to TRUE if the update does not
- * modify columns used in the key; id_result is set to TRUE if the update does
- * not modify columns in any index marked as the REPLICA IDENTITY.
+ * The input bitmapset is destructively modified; that is OK since this is
+ * invoked at most once in heap_update.
  */
-static void
-HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
-							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
-							 bool *satisfies_id,
+static Bitmapset *
+HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
 							 HeapTuple oldtup, HeapTuple newtup)
 {
-	int			next_hot_attnum;
-	int			next_key_attnum;
-	int			next_id_attnum;
-	bool		hot_result = true;
-	bool		key_result = true;
-	bool		id_result = true;
-
-	/* If REPLICA IDENTITY is set to FULL, id_attrs will be empty. */
-	Assert(bms_is_subset(id_attrs, key_attrs));
-	Assert(bms_is_subset(key_attrs, hot_attrs));
-
-	/*
-	 * If one of these sets contains no remaining bits, bms_first_member will
-	 * return -1, and after adding FirstLowInvalidHeapAttributeNumber (which
-	 * is negative!)  we'll get an attribute number that can't possibly be
-	 * real, and thus won't match any actual attribute number.
-	 */
-	next_hot_attnum = bms_first_member(hot_attrs);
-	next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
-	next_key_attnum = bms_first_member(key_attrs);
-	next_key_attnum += FirstLowInvalidHeapAttributeNumber;
-	next_id_attnum = bms_first_member(id_attrs);
-	next_id_attnum += FirstLowInvalidHeapAttributeNumber;
+	int		attnum;
+	Bitmapset *modified = NULL;
 
-	for (;;)
+	while ((attnum = bms_first_member(interesting_cols)) >= 0)
 	{
-		bool		changed;
-		int			check_now;
-
-		/*
-		 * Since the HOT attributes are a superset of the key attributes and
-		 * the key attributes are a superset of the id attributes, this logic
-		 * is guaranteed to identify the next column that needs to be checked.
-		 */
-		if (hot_result && next_hot_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_hot_attnum;
-		else if (key_result && next_key_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_key_attnum;
-		else if (id_result && next_id_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_id_attnum;
-		else
-			break;
+		attnum += FirstLowInvalidHeapAttributeNumber;
 
-		/* See whether it changed. */
-		changed = !heap_tuple_attr_equals(RelationGetDescr(relation),
-										  check_now, oldtup, newtup);
-		if (changed)
-		{
-			if (check_now == next_hot_attnum)
-				hot_result = false;
-			if (check_now == next_key_attnum)
-				key_result = false;
-			if (check_now == next_id_attnum)
-				id_result = false;
-
-			/* if all are false now, we can stop checking */
-			if (!hot_result && !key_result && !id_result)
-				break;
-		}
-
-		/*
-		 * Advance the next attribute numbers for the sets that contain the
-		 * attribute we just checked.  As we work our way through the columns,
-		 * the next_attnum values will rise; but when each set becomes empty,
-		 * bms_first_member() will return -1 and the attribute number will end
-		 * up with a value less than FirstLowInvalidHeapAttributeNumber.
-		 */
-		if (hot_result && check_now == next_hot_attnum)
-		{
-			next_hot_attnum = bms_first_member(hot_attrs);
-			next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
-		if (key_result && check_now == next_key_attnum)
-		{
-			next_key_attnum = bms_first_member(key_attrs);
-			next_key_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
-		if (id_result && check_now == next_id_attnum)
-		{
-			next_id_attnum = bms_first_member(id_attrs);
-			next_id_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
+		if (!heap_tuple_attr_equals(RelationGetDescr(relation),
+								   attnum, oldtup, newtup))
+			modified = bms_add_member(modified,
+									  attnum - FirstLowInvalidHeapAttributeNumber);
 	}
 
-	*satisfies_hot = hot_result;
-	*satisfies_key = key_result;
-	*satisfies_id = id_result;
+	return modified;
 }
 
 /*
#61Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Pavan Deolasee (#60)
Re: Patch: Write Amplification Reduction Method (WARM)

Pavan Deolasee wrote:

Do you think we should apply the patch to remove ItemPointerCopy()? I will
rework the HeapTupleHeaderGetNextTid() after that. Not that it depends on
removing ItemPointerCopy(), but decided to postpone it until we make a call
on that patch.

My inclination is not to. We don't really know where we are going with
storage layer reworks in the near future, and we might end up changing
this in other ways. We might find ourselves needing this kind of
abstraction again. I don't think this means we need to follow it
completely in new code, since it's already broken in other places, but
let's not destroy it completely just yet.

BTW, I've now run long stress tests with the patch applied and see no new
issues, even when indexes are dropped and recreated concurrently (this
includes my patch to fix the CIC bug in master, though). In another 24-hour
test, WARM could do 274M transactions whereas master did 164M transactions.
I did not drop and recreate indexes during this run.

Eh, that's a 67% performance improvement. Nice.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#62Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Pavan Deolasee (#60)
4 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Feb 2, 2017 at 6:17 PM, Pavan Deolasee <pavan.deolasee@gmail.com>
wrote:

Please see rebased patches attached. There is not much change other than
the fact the patch now uses new catalog maintenance API.

Another rebase on current master.

This time I am also attaching a proof-of-concept patch to demonstrate chain
conversion. The proposed algorithm is described in README.WARM, but I'll
briefly explain it here.

The chain conversion works in two phases and requires another index pass
during vacuum. During the first heap scan, we collect candidate chains for
conversion. A chain qualifies for conversion if all its tuples have matching
index keys with respect to all current indexes (i.e. the chain has
effectively become HOT). WARM chains become HOT as and when old versions retire (or new
versions retire in case of aborts). But before we can mark them HOT again,
we must first remove duplicate (and potentially wrong) index pointers. This
algorithm deals with that.

When a WARM update occurs and we insert a new index entry in one or more
indexes, we mark the new index pointer with a special RED flag. The heap
tuple created by this UPDATE is also marked as RED. If the tuple is then
HOT-updated, subsequent versions will be marked RED as well. IOW each WARM
chain has two HOT chains inside it and these chains are identified as BLUE
and RED chains. The index pointer whose key matches the RED chain is
marked RED too.
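The RED/BLUE marking above can be sketched in a few lines of C. This is illustrative only: the real patch stores these flags in heap tuple infomask bits, and the names and values below are hypothetical.

```c
#include <stdint.h>

/* Hypothetical flag bits -- the real patch uses heap infomask bits,
 * not these values. */
#define TUP_WARM	0x0001	/* tuple is part of a WARM chain */
#define TUP_RED		0x0002	/* tuple belongs to the RED sub-chain */

typedef struct DemoTuple
{
	uint16_t	flags;
} DemoTuple;

/* A WARM update marks both versions WARM; only the new version starts
 * (and belongs to) the RED chain. */
static void
demo_warm_update(DemoTuple *oldtup, DemoTuple *newtup)
{
	oldtup->flags |= TUP_WARM;
	newtup->flags |= TUP_WARM | TUP_RED;
}

/* A later HOT update propagates both flags, so every version created
 * after the WARM update stays in the RED sub-chain. */
static void
demo_hot_update(const DemoTuple *oldtup, DemoTuple *newtup)
{
	newtup->flags |= oldtup->flags & (TUP_WARM | TUP_RED);
}
```

This mirrors the heap_update hunk in the attached patch, where the old tuple gets only the WARM flag while heaptup/newtup additionally get the Red flag, and where a pre-existing Red flag is copied to new versions.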

When we collect candidate WARM chains in the first heap scan, we also
remember the color of the chain.

During the first index scan we delete all known-dead index pointers (same as
lazy_tid_reaped). We also count the number of RED and BLUE pointers to each
candidate chain.

The next index scan will either (1) remove an index pointer which is known
to be useless or (2) color a RED pointer BLUE.
- A BLUE pointer to a RED chain is removed when there exists a RED pointer
to the chain. If there is no RED pointer, we can't remove the BLUE pointer
because that is the only path to the heap tuple (the case where the WARM
update did not create a new index entry); instead we color the heap tuples BLUE
- A BLUE pointer to a BLUE chain is always retained
- A RED pointer to a BLUE chain is always removed (aborted updates)
- A RED pointer to a RED chain is colored BLUE (we will color the heap
tuples BLUE in the second heap scan)
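The four rules above amount to a small decision table. Here is a sketch of it in C, with hypothetical names; the patch itself expresses the outcomes through its IndexBulkDeleteCallbackResult values (IBDCR_KEEP, IBDCR_DELETE, IBDCR_COLOR_BLUE).

```c
#include <stdbool.h>

/* Outcomes, modeled after the patch's IndexBulkDeleteCallbackResult. */
typedef enum
{
	DEMO_KEEP,
	DEMO_DELETE,
	DEMO_COLOR_BLUE
} DemoAction;

/*
 * Decide the fate of one index pointer during the second index scan.
 *
 * pointer_is_red:        the index tuple carries the RED flag.
 * chain_is_red:          the candidate heap chain was found to be RED.
 * chain_has_red_pointer: the first index scan counted at least one RED
 *                        pointer to this chain.
 */
static DemoAction
demo_second_scan(bool pointer_is_red, bool chain_is_red,
				 bool chain_has_red_pointer)
{
	if (!pointer_is_red && chain_is_red)
	{
		/* BLUE pointer to a RED chain: removable only if a RED pointer
		 * also exists; otherwise it is the sole path to the tuple. */
		return chain_has_red_pointer ? DEMO_DELETE : DEMO_KEEP;
	}
	if (!pointer_is_red && !chain_is_red)
		return DEMO_KEEP;		/* BLUE pointer to a BLUE chain */
	if (pointer_is_red && !chain_is_red)
		return DEMO_DELETE;		/* RED pointer to a BLUE chain (abort) */
	return DEMO_COLOR_BLUE;		/* RED pointer to a RED chain */
}
```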

Once the index pointers are taken care of such that there exists exactly
one pointer to a chain, the chain can be converted into HOT chains by
clearing WARM and RED flags.

There is one remaining problem with aborted vacuums. If a crash happens after
coloring a RED pointer BLUE, but before we can clear the heap tuples, we might
end up with two BLUE pointers to a RED chain. This case will require the
recheck logic and is not yet implemented.

The POC only works with BTREEs because the unused bit in IndexTuple's
t_info is already used by HASH indexes. For heap tuples, we can reuse one
of the HEAP_MOVED_IN/OFF bits for marking tuples RED, since this is only
required for WARM tuples; the bit can then be checked along with the WARM bit.
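Reusing a HEAP_MOVED bit this way works because the bit's old meaning need only apply to non-WARM tuples. A sketch of the multiplexing, with made-up bit values (the real infomask constants differ); this is also why the attached patch changes raw `t_infomask & HEAP_MOVED` tests into `HeapTupleHeaderIsMoved()` calls:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative bit values only; not the actual infomask constants. */
#define DEMO_MOVED_OFF	0x4000	/* old-style VACUUM FULL "moved" bit */
#define DEMO_WARM		0x0800	/* hypothetical WARM flag */

/* The MOVED_OFF bit keeps its historical meaning only when the tuple
 * is not WARM... */
static bool
demo_is_moved(uint16_t infomask)
{
	return (infomask & DEMO_MOVED_OFF) && !(infomask & DEMO_WARM);
}

/* ...and for WARM tuples the very same bit means "RED chain member". */
static bool
demo_is_red(uint16_t infomask)
{
	return (infomask & DEMO_MOVED_OFF) && (infomask & DEMO_WARM);
}
```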

Unless there is an objection to the design or someone thinks it cannot
work, I'll look at some alternate mechanism to free up more bits in tuple
header or at least in the index tuples. One idea is to free up 3 bits from
ip_posid knowing that OffsetNumber can never really need more than 13 bits
with the other constraints in place. We could use some bit-field magic to
do that with minimal changes. The thing that concerns me is whether there
will be a guaranteed way to make that work on all hardware without
breaking the on-disk layout.
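For illustration, here is how three flag bits could be carved out of a 16-bit OffsetNumber, given the premise above that offsets never need more than 13 bits. The macro names are made up and this is only a sketch of the idea, not proposed code:

```c
#include <stdint.h>

typedef uint16_t DemoOffsetNumber;

#define DEMO_OFFSET_MASK	0x1FFF	/* low 13 bits: the real offset */
#define DEMO_FLAG_SHIFT		13		/* high 3 bits: spare flags */

/* Pack an offset and up to three flag bits into one 16-bit value. */
static DemoOffsetNumber
demo_pack(uint16_t offset, uint16_t flags)
{
	return (DemoOffsetNumber) ((offset & DEMO_OFFSET_MASK) |
							   ((flags & 0x7) << DEMO_FLAG_SHIFT));
}

static uint16_t
demo_get_offset(DemoOffsetNumber packed)
{
	return packed & DEMO_OFFSET_MASK;
}

static uint16_t
demo_get_flags(DemoOffsetNumber packed)
{
	return (packed >> DEMO_FLAG_SHIFT) & 0x7;
}
```

The on-disk-layout concern in the paragraph above is exactly that such packing must decode identically on every platform that can read the page.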

Comments/suggestions?

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0003_convert_chains_v12.patch (application/octet-stream)
diff --git a/contrib/bloom/blvacuum.c b/contrib/bloom/blvacuum.c
index 04abd0f..ff50361 100644
--- a/contrib/bloom/blvacuum.c
+++ b/contrib/bloom/blvacuum.c
@@ -88,7 +88,7 @@ blbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 		while (itup < itupEnd)
 		{
 			/* Do we have to delete this tuple? */
-			if (callback(&itup->heapPtr, callback_state))
+			if (callback(&itup->heapPtr, false, callback_state) == IBDCR_DELETE)
 			{
 				/* Yes; adjust count of tuples that will be left on page */
 				BloomPageGetOpaque(page)->maxoff--;
diff --git a/src/backend/access/gin/ginvacuum.c b/src/backend/access/gin/ginvacuum.c
index c9ccfee..8ed71c5 100644
--- a/src/backend/access/gin/ginvacuum.c
+++ b/src/backend/access/gin/ginvacuum.c
@@ -56,7 +56,8 @@ ginVacuumItemPointers(GinVacuumState *gvs, ItemPointerData *items,
 	 */
 	for (i = 0; i < nitem; i++)
 	{
-		if (gvs->callback(items + i, gvs->callback_state))
+		if (gvs->callback(items + i, false, gvs->callback_state) ==
+				IBDCR_DELETE)
 		{
 			gvs->result->tuples_removed += 1;
 			if (!tmpitems)
diff --git a/src/backend/access/gist/gistvacuum.c b/src/backend/access/gist/gistvacuum.c
index 77d9d12..0955db6 100644
--- a/src/backend/access/gist/gistvacuum.c
+++ b/src/backend/access/gist/gistvacuum.c
@@ -202,7 +202,8 @@ gistbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 				iid = PageGetItemId(page, i);
 				idxtuple = (IndexTuple) PageGetItem(page, iid);
 
-				if (callback(&(idxtuple->t_tid), callback_state))
+				if (callback(&(idxtuple->t_tid), false, callback_state) ==
+						IBDCR_DELETE)
 					todelete[ntodelete++] = i;
 				else
 					stats->num_index_tuples += 1;
diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c
index 6645160..8b7a8aa 100644
--- a/src/backend/access/hash/hash.c
+++ b/src/backend/access/hash/hash.c
@@ -764,7 +764,8 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 			 * To remove the dead tuples, we strictly want to rely on results
 			 * of callback function.  refer btvacuumpage for detailed reason.
 			 */
-			if (callback && callback(htup, callback_state))
+			if (callback && callback(htup, false, callback_state) ==
+					IBDCR_DELETE)
 			{
 				kill_tuple = true;
 				if (tuples_removed)
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 9c4522a..1f8f3eb 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -1958,17 +1958,32 @@ heap_fetch(Relation relation,
 }
 
 /*
- * Check if the HOT chain containing this tid is actually a WARM chain.
- * Note that even if the WARM update ultimately aborted, we still must do a
- * recheck because the failing UPDATE when have inserted created index entries
- * which are now stale, but still referencing this chain.
+ * Check status of a (possibly) WARM chain.
+ *
+ * This function looks at a HOT/WARM chain starting at tid and returns a bitmask
+ * of information. We only follow the chain as long as it's known to be a valid
+ * HOT chain. Information returned by the function consists of:
+ *
+ *  HCWC_WARM_TUPLE - a warm tuple is found somewhere in the chain. Note that
+ *  				  when a tuple is WARM updated, both old and new versions
+ *  				  of the tuple are treated as WARM tuple
+ *
+ *  HCWC_RED_TUPLE  - a warm tuple part of the Red chain is found somewhere in
+ *					  the chain.
+ *
+ *  HCWC_BLUE_TUPLE - a warm tuple part of the Blue chain is found somewhere in
+ *					  the chain.
+ *
+ *	If stop_at_warm is true, we stop when the first WARM tuple is found and
+ *	return information collected so far.
  */
-static bool
-hot_check_warm_chain(Page dp, ItemPointer tid)
+HeapCheckWarmChainStatus
+heap_check_warm_chain(Page dp, ItemPointer tid, bool stop_at_warm)
 {
-	TransactionId prev_xmax = InvalidTransactionId;
-	OffsetNumber offnum;
-	HeapTupleData heapTuple;
+	TransactionId				prev_xmax = InvalidTransactionId;
+	OffsetNumber				offnum;
+	HeapTupleData				heapTuple;
+	HeapCheckWarmChainStatus	status = 0;
 
 	offnum = ItemPointerGetOffsetNumber(tid);
 	heapTuple.t_self = *tid;
@@ -1985,7 +2000,16 @@ hot_check_warm_chain(Page dp, ItemPointer tid)
 
 		/* check for unused, dead, or redirected items */
 		if (!ItemIdIsNormal(lp))
+		{
+			if (ItemIdIsRedirected(lp))
+			{
+				/* Follow the redirect */
+				offnum = ItemIdGetRedirect(lp);
+				continue;
+			}
+			/* else must be end of chain */
 			break;
+		}
 
 		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
 		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
@@ -2000,13 +2024,113 @@ hot_check_warm_chain(Page dp, ItemPointer tid)
 			break;
 
 
+		if (HeapTupleHeaderIsHeapWarmTuple(heapTuple.t_data))
+		{
+			/* We found a WARM tuple */
+			status |= HCWC_WARM_TUPLE;
+
+			/* 
+			 * If we've been told to stop at the first WARM tuple, just return
+			 * whatever information collected so far.
+			 */
+			if (stop_at_warm)
+				return status;
+
+			/* We found a tuple belonging to the Red chain */
+			if (HeapTupleHeaderIsWarmRed(heapTuple.t_data))
+				status |= HCWC_RED_TUPLE;
+		}
+		else
+			/* Must be a tuple belonging to the Blue chain */
+			status |= HCWC_BLUE_TUPLE;
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
 		/*
-		 * Presence of either WARM or WARM updated tuple signals possible
-		 * breakage and the caller must recheck tuple returned from this chain
-		 * for index satisfaction
+		 * It can't be a HOT chain if the tuple contains root line pointer
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	/* Done following the chain; return the status collected so far */
+	return status;
+}
+
+/*
+ * Scan through the WARM chain starting at tid and reset all WARM related
+ * flags. At the end, the chain will have all characteristics of a regular HOT
+ * chain.
+ *
+ * Return the number of cleared offnums. Cleared offnums are returned in the
+ * passed-in cleared_offnums array. The caller must ensure that the array is
+ * large enough to hold the maximum number of offnums that can be cleared
+ * by this invocation of heap_clear_warm_chain().
+ */
+int
+heap_clear_warm_chain(Page dp, ItemPointer tid, OffsetNumber *cleared_offnums)
+{
+	TransactionId				prev_xmax = InvalidTransactionId;
+	OffsetNumber				offnum;
+	HeapTupleData				heapTuple;
+	int							num_cleared = 0;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+		{
+			if (ItemIdIsRedirected(lp))
+			{
+				/* Follow the redirect */
+				offnum = ItemIdGetRedirect(lp);
+				continue;
+			}
+			/* else must be end of chain */
+			break;
+		}
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		/*
+		 * Clear WARM and Red flags
 		 */
 		if (HeapTupleHeaderIsHeapWarmTuple(heapTuple.t_data))
-			return true;
+		{
+			HeapTupleHeaderClearHeapWarmTuple(heapTuple.t_data);
+			HeapTupleHeaderClearWarmRed(heapTuple.t_data);
+			cleared_offnums[num_cleared++] = offnum;
+		}
 
 		/*
 		 * Check to see if HOT chain continues past this tuple; if so fetch
@@ -2025,8 +2149,7 @@ hot_check_warm_chain(Page dp, ItemPointer tid)
 		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
 	}
 
-	/* All OK. No need to recheck */
-	return false;
+	return num_cleared;
 }
 
 /*
@@ -2135,7 +2258,11 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		 * possible improvements here
 		 */
 		if (recheck && *recheck == false)
-			*recheck = hot_check_warm_chain(dp, &heapTuple->t_self);
+		{
+			HeapCheckWarmChainStatus status;
+			status = heap_check_warm_chain(dp, &heapTuple->t_self, true);
+			*recheck = HCWC_IS_WARM(status);
+		}
 
 		/*
 		 * When first_call is true (and thus, skip is initially false) we'll
@@ -3409,7 +3536,9 @@ l1:
 	}
 
 	/* store transaction information of xact deleting the tuple */
-	tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	tp.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(tp.t_data))
+		tp.t_data->t_infomask &= ~HEAP_MOVED;
 	tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 	tp.t_data->t_infomask |= new_infomask;
 	tp.t_data->t_infomask2 |= new_infomask2;
@@ -4172,7 +4301,9 @@ l2:
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
-		oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		oldtup.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(oldtup.t_data))
+			oldtup.t_data->t_infomask &= ~HEAP_MOVED;
 		oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		HeapTupleClearHotUpdated(&oldtup);
 		/* ... and store info about transaction updating this tuple */
@@ -4419,6 +4550,16 @@ l2:
 		}
 
 		/*
+		 * If the old tuple is already a member of the Red chain, mark the new
+		 * tuple with the same flag
+		 */
+		if (HeapTupleIsHeapWarmTupleRed(&oldtup))
+		{
+			HeapTupleSetHeapWarmTupleRed(heaptup);
+			HeapTupleSetHeapWarmTupleRed(newtup);
+		}
+
+		/*
 		 * For HOT (or WARM) updated tuples, we store the offset of the root
 		 * line pointer of this chain in the ip_posid field of the new tuple.
 		 * Usually this information will be available in the corresponding
@@ -4435,12 +4576,20 @@ l2:
 		/* Mark the old tuple as HOT-updated */
 		HeapTupleSetHotUpdated(&oldtup);
 		HeapTupleSetHeapWarmTuple(&oldtup);
+		
 		/* And mark the new tuple as heap-only */
 		HeapTupleSetHeapOnly(heaptup);
+		/* Mark the new tuple as WARM tuple */ 
 		HeapTupleSetHeapWarmTuple(heaptup);
+		/* This update also starts a Red chain */
+		HeapTupleSetHeapWarmTupleRed(heaptup);
+		Assert(!HeapTupleIsHeapWarmTupleRed(&oldtup));
+
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
 		HeapTupleSetHeapWarmTuple(newtup);
+		HeapTupleSetHeapWarmTupleRed(newtup);
+
 		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
 			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 		else
@@ -4459,6 +4608,8 @@ l2:
 		HeapTupleClearHeapOnly(newtup);
 		HeapTupleClearHeapWarmTuple(heaptup);
 		HeapTupleClearHeapWarmTuple(newtup);
+		HeapTupleClearHeapWarmTupleRed(heaptup);
+		HeapTupleClearHeapWarmTupleRed(newtup);
 		root_offnum = InvalidOffsetNumber;
 	}
 
@@ -4477,7 +4628,9 @@ l2:
 	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
-	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	oldtup.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(oldtup.t_data))
+		oldtup.t_data->t_infomask &= ~HEAP_MOVED;
 	oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 	/* ... and store info about transaction updating this tuple */
 	Assert(TransactionIdIsValid(xmax_old_tuple));
@@ -6398,7 +6551,9 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	PageSetPrunable(page, RecentGlobalXmin);
 
 	/* store transaction information of xact deleting the tuple */
-	tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	tp.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(tp.t_data))
+		tp.t_data->t_infomask &= ~HEAP_MOVED;
 	tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 
 	/*
@@ -6972,7 +7127,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid,
 	 * Old-style VACUUM FULL is gone, but we have to keep this code as long as
 	 * we support having MOVED_OFF/MOVED_IN tuples in the database.
 	 */
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 
@@ -6991,7 +7146,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid,
 			 * have failed; whereas a non-dead MOVED_IN tuple must mean the
 			 * xvac transaction succeeded.
 			 */
-			if (tuple->t_infomask & HEAP_MOVED_OFF)
+			if (HeapTupleHeaderIsMovedOff(tuple))
 				frz->frzflags |= XLH_INVALID_XVAC;
 			else
 				frz->frzflags |= XLH_FREEZE_XVAC;
@@ -7461,7 +7616,7 @@ heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple)
 			return true;
 	}
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 		if (TransactionIdIsNormal(xid))
@@ -7544,7 +7699,7 @@ heap_tuple_needs_freeze(HeapTupleHeader tuple, TransactionId cutoff_xid,
 			return true;
 	}
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 		if (TransactionIdIsNormal(xid) &&
@@ -7570,7 +7725,7 @@ HeapTupleHeaderAdvanceLatestRemovedXid(HeapTupleHeader tuple,
 	TransactionId xmax = HeapTupleHeaderGetUpdateXid(tuple);
 	TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		if (TransactionIdPrecedes(*latestRemovedXid, xvac))
 			*latestRemovedXid = xvac;
@@ -8523,7 +8678,9 @@ heap_xlog_delete(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		HeapTupleHeaderClearHotUpdated(htup);
 		fix_infomask_from_infobits(xlrec->infobits_set,
@@ -9186,7 +9343,9 @@ heap_xlog_lock(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		fix_infomask_from_infobits(xlrec->infobits_set, &htup->t_infomask,
 								   &htup->t_infomask2);
@@ -9265,7 +9424,9 @@ heap_xlog_lock_updated(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		fix_infomask_from_infobits(xlrec->infobits_set, &htup->t_infomask,
 								   &htup->t_infomask2);
diff --git a/src/backend/access/heap/tuptoaster.c b/src/backend/access/heap/tuptoaster.c
index 19e7048..47b01eb 100644
--- a/src/backend/access/heap/tuptoaster.c
+++ b/src/backend/access/heap/tuptoaster.c
@@ -1620,7 +1620,8 @@ toast_save_datum(Relation rel, Datum value,
 							 toastrel,
 							 toastidxs[i]->rd_index->indisunique ?
 							 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-							 NULL);
+							 NULL,
+							 false);
 		}
 
 		/*
diff --git a/src/backend/access/index/indexam.c b/src/backend/access/index/indexam.c
index f56c58f..e8027f8 100644
--- a/src/backend/access/index/indexam.c
+++ b/src/backend/access/index/indexam.c
@@ -199,7 +199,8 @@ index_insert(Relation indexRelation,
 			 ItemPointer heap_t_ctid,
 			 Relation heapRelation,
 			 IndexUniqueCheck checkUnique,
-			 IndexInfo *indexInfo)
+			 IndexInfo *indexInfo,
+			 bool warm_update)
 {
 	RELATION_CHECKS;
 	CHECK_REL_PROCEDURE(aminsert);
@@ -209,6 +210,12 @@ index_insert(Relation indexRelation,
 									   (HeapTuple) NULL,
 									   InvalidBuffer);
 
+	if (warm_update)
+	{
+		Assert(indexRelation->rd_amroutine->amwarminsert != NULL);
+		return indexRelation->rd_amroutine->amwarminsert(indexRelation, values,
+				isnull, heap_t_ctid, heapRelation, checkUnique, indexInfo);
+	}
 	return indexRelation->rd_amroutine->aminsert(indexRelation, values, isnull,
 												 heap_t_ctid, heapRelation,
 												 checkUnique, indexInfo);
diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c
index 952ed8f..4988f47 100644
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -147,6 +147,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->ambuild = btbuild;
 	amroutine->ambuildempty = btbuildempty;
 	amroutine->aminsert = btinsert;
+	amroutine->amwarminsert = btwarminsert;
 	amroutine->ambulkdelete = btbulkdelete;
 	amroutine->amvacuumcleanup = btvacuumcleanup;
 	amroutine->amcanreturn = btcanreturn;
@@ -317,11 +318,12 @@ btbuildempty(Relation index)
  *		Descend the tree recursively, find the appropriate location for our
  *		new tuple, and put it there.
  */
-bool
-btinsert(Relation rel, Datum *values, bool *isnull,
+static bool
+btinsert_internal(Relation rel, Datum *values, bool *isnull,
 		 ItemPointer ht_ctid, Relation heapRel,
 		 IndexUniqueCheck checkUnique,
-		 IndexInfo *indexInfo)
+		 IndexInfo *indexInfo,
+		 bool warm_update)
 {
 	bool		result;
 	IndexTuple	itup;
@@ -330,6 +332,11 @@ btinsert(Relation rel, Datum *values, bool *isnull,
 	itup = index_form_tuple(RelationGetDescr(rel), values, isnull);
 	itup->t_tid = *ht_ctid;
 
+	if (warm_update)
+		itup->t_info |= INDEX_RED_CHAIN;
+	else
+		itup->t_info &= ~INDEX_RED_CHAIN;
+
 	result = _bt_doinsert(rel, itup, checkUnique, heapRel);
 
 	pfree(itup);
@@ -337,6 +344,26 @@ btinsert(Relation rel, Datum *values, bool *isnull,
 	return result;
 }
 
+bool
+btinsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 IndexInfo *indexInfo)
+{
+	return btinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, false);
+}
+
+bool
+btwarminsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 IndexInfo *indexInfo)
+{
+	return btinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, true);
+}
+
 /*
  *	btgettuple() -- Get the next tuple in the scan.
  */
@@ -1253,6 +1280,8 @@ restart:
 			{
 				IndexTuple	itup;
 				ItemPointer htup;
+				IndexBulkDeleteCallbackResult	result;
+				bool		is_red = false;
 
 				itup = (IndexTuple) PageGetItem(page,
 												PageGetItemId(page, offnum));
@@ -1279,8 +1308,29 @@ restart:
 				 * applies to *any* type of index that marks index tuples as
 				 * killed.
 				 */
-				if (callback(htup, callback_state))
+				if (itup->t_info & INDEX_RED_CHAIN)
+					is_red = true;
+
+				if (is_red)
+					stats->num_red_pointers++;
+				else
+					stats->num_blue_pointers++;
+
+				result = callback(htup, is_red, callback_state);
+				if (result == IBDCR_DELETE)
+				{
+					if (is_red)
+						stats->red_pointers_removed++;
+					else
+						stats->blue_pointers_removed++;
 					deletable[ndeletable++] = offnum;
+				}
+				else if (result == IBDCR_COLOR_BLUE)
+				{
+					stats->pointers_colored++;
+					itup->t_info &= ~INDEX_RED_CHAIN;
+				}
+				/* XXX XLOG stuff for converted pointers */
 			}
 		}
 
diff --git a/src/backend/access/spgist/spgvacuum.c b/src/backend/access/spgist/spgvacuum.c
index cce9b3f..5343b10 100644
--- a/src/backend/access/spgist/spgvacuum.c
+++ b/src/backend/access/spgist/spgvacuum.c
@@ -155,7 +155,8 @@ vacuumLeafPage(spgBulkDeleteState *bds, Relation index, Buffer buffer,
 		{
 			Assert(ItemPointerIsValid(&lt->heapPtr));
 
-			if (bds->callback(&lt->heapPtr, bds->callback_state))
+			if (bds->callback(&lt->heapPtr, false, bds->callback_state) ==
+					IBDCR_DELETE)
 			{
 				bds->stats->tuples_removed += 1;
 				deletable[i] = true;
@@ -425,7 +426,8 @@ vacuumLeafRoot(spgBulkDeleteState *bds, Relation index, Buffer buffer)
 		{
 			Assert(ItemPointerIsValid(&lt->heapPtr));
 
-			if (bds->callback(&lt->heapPtr, bds->callback_state))
+			if (bds->callback(&lt->heapPtr, false, bds->callback_state) ==
+					IBDCR_DELETE)
 			{
 				bds->stats->tuples_removed += 1;
 				toDelete[xlrec.nDelete] = i;
@@ -902,10 +904,10 @@ spgbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 }
 
 /* Dummy callback to delete no tuples during spgvacuumcleanup */
-static bool
-dummy_callback(ItemPointer itemptr, void *state)
+static IndexBulkDeleteCallbackResult
+dummy_callback(ItemPointer itemptr, bool is_red, void *state)
 {
-	return false;
+	return IBDCR_KEEP;
 }
 
 /*
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index bba52ec..ab37b43 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -115,7 +115,7 @@ static void IndexCheckExclusion(Relation heapRelation,
 					IndexInfo *indexInfo);
 static inline int64 itemptr_encode(ItemPointer itemptr);
 static inline void itemptr_decode(ItemPointer itemptr, int64 encoded);
-static bool validate_index_callback(ItemPointer itemptr, void *opaque);
+static IndexBulkDeleteCallbackResult validate_index_callback(ItemPointer itemptr, bool is_red, void *opaque);
 static void validate_index_heapscan(Relation heapRelation,
 						Relation indexRelation,
 						IndexInfo *indexInfo,
@@ -2949,15 +2949,15 @@ itemptr_decode(ItemPointer itemptr, int64 encoded)
 /*
  * validate_index_callback - bulkdelete callback to collect the index TIDs
  */
-static bool
-validate_index_callback(ItemPointer itemptr, void *opaque)
+static IndexBulkDeleteCallbackResult
+validate_index_callback(ItemPointer itemptr, bool is_red, void *opaque)
 {
 	v_i_state  *state = (v_i_state *) opaque;
 	int64		encoded = itemptr_encode(itemptr);
 
 	tuplesort_putdatum(state->tuplesort, Int64GetDatum(encoded), false);
 	state->itups += 1;
-	return false;				/* never actually delete anything */
+	return IBDCR_KEEP;				/* never actually delete anything */
 }
 
 /*
@@ -3178,7 +3178,8 @@ validate_index_heapscan(Relation heapRelation,
 						 heapRelation,
 						 indexInfo->ii_Unique ?
 						 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-						 indexInfo);
+						 indexInfo,
+						 false);
 
 			state->tups_inserted += 1;
 		}
diff --git a/src/backend/catalog/indexing.c b/src/backend/catalog/indexing.c
index e5355a8..5b6efcf 100644
--- a/src/backend/catalog/indexing.c
+++ b/src/backend/catalog/indexing.c
@@ -172,7 +172,8 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple,
 					 heapRelation,
 					 relationDescs[i]->rd_index->indisunique ?
 					 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-					 indexInfo);
+					 indexInfo,
+					 warm_update);
 	}
 
 	ExecDropSingleTupleTableSlot(slot);
@@ -222,7 +223,7 @@ CatalogTupleInsertWithInfo(Relation heapRel, HeapTuple tup,
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup, false, NULL);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 
 	return oid;
 }
diff --git a/src/backend/commands/constraint.c b/src/backend/commands/constraint.c
index d9c0fe7..330b661 100644
--- a/src/backend/commands/constraint.c
+++ b/src/backend/commands/constraint.c
@@ -168,7 +168,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 		 */
 		index_insert(indexRel, values, isnull, &(new_row->t_self),
 					 trigdata->tg_relation, UNIQUE_CHECK_EXISTING,
-					 indexInfo);
+					 indexInfo,
+					 false);
 	}
 	else
 	{
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 1388be1..33b1ac3 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -104,6 +104,25 @@
  */
 #define PREFETCH_SIZE			((BlockNumber) 32)
 
+/*
+ * Structure to track WARM chains that can be converted into HOT chains during
+ * this run.
+ *
+ * To reduce the space requirement, we use bitfields. But the way things are
+ * laid out, we still waste one byte per candidate chain.
+ */
+typedef struct LVRedBlueChain
+{
+	ItemPointerData	chain_tid;			/* root of the chain */
+	uint8			is_red_chain:1;		/* is the WARM chain completely red? */
+	uint8			keep_warm_chain:1;	/* this chain can't be cleared of WARM
+										 * tuples */
+	uint8			num_blue_pointers:2;/* number of blue pointers found so
+										 * far */
+	uint8			num_red_pointers:2; /* number of red pointers found so far
+										 * in the current index */
+} LVRedBlueChain;
+
 typedef struct LVRelStats
 {
 	/* hasindex = true means two-pass strategy; false means one-pass */
@@ -121,6 +140,16 @@ typedef struct LVRelStats
 	BlockNumber pages_removed;
 	double		tuples_deleted;
 	BlockNumber nonempty_pages; /* actually, last nonempty page + 1 */
+	
+	double			num_warm_chains; /* number of warm chains seen so far */
+
+	/* List of WARM chains that can be converted into HOT chains */
+	/* NB: this list is ordered by TID of the root pointers */
+	int				num_redblue_chains;	/* current # of entries */
+	int				max_redblue_chains;	/* # slots allocated in array */
+	LVRedBlueChain *redblue_chains;	/* array of LVRedBlueChain */
+	double			num_non_convertible_warm_chains;
+
 	/* List of TIDs of tuples we intend to delete */
 	/* NB: this list is ordered by TID address */
 	int			num_dead_tuples;	/* current # of entries */
@@ -149,6 +178,7 @@ static void lazy_scan_heap(Relation onerel, int options,
 static void lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats);
 static bool lazy_check_needs_freeze(Buffer buf, bool *hastup);
 static void lazy_vacuum_index(Relation indrel,
+				  bool clear_warm,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats);
 static void lazy_cleanup_index(Relation indrel,
@@ -156,6 +186,10 @@ static void lazy_cleanup_index(Relation indrel,
 				   LVRelStats *vacrelstats);
 static int lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 				 int tupindex, LVRelStats *vacrelstats, Buffer *vmbuffer);
+static int lazy_warmclear_page(Relation onerel, BlockNumber blkno,
+				 Buffer buffer, int chainindex, LVRelStats *vacrelstats,
+				 Buffer *vmbuffer);
+static void lazy_reset_redblue_pointer_count(LVRelStats *vacrelstats);
 static bool should_attempt_truncation(LVRelStats *vacrelstats);
 static void lazy_truncate_heap(Relation onerel, LVRelStats *vacrelstats);
 static BlockNumber count_nondeletable_pages(Relation onerel,
@@ -163,8 +197,15 @@ static BlockNumber count_nondeletable_pages(Relation onerel,
 static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks);
 static void lazy_record_dead_tuple(LVRelStats *vacrelstats,
 					   ItemPointer itemptr);
-static bool lazy_tid_reaped(ItemPointer itemptr, void *state);
+static void lazy_record_red_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr);
+static void lazy_record_blue_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr);
+static IndexBulkDeleteCallbackResult lazy_tid_reaped(ItemPointer itemptr, bool is_red, void *state);
+static IndexBulkDeleteCallbackResult lazy_indexvac_phase1(ItemPointer itemptr, bool is_red, void *state);
+static IndexBulkDeleteCallbackResult lazy_indexvac_phase2(ItemPointer itemptr, bool is_red, void *state);
 static int	vac_cmp_itemptr(const void *left, const void *right);
+static int vac_cmp_redblue_chain(const void *left, const void *right);
 static bool heap_page_is_all_visible(Relation rel, Buffer buf,
 					 TransactionId *visibility_cutoff_xid, bool *all_frozen);
 
@@ -683,8 +724,10 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		 * If we are close to overrunning the available space for dead-tuple
 		 * TIDs, pause and do a cycle of vacuuming before we tackle this page.
 		 */
-		if ((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
-			vacrelstats->num_dead_tuples > 0)
+		if (((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
+			vacrelstats->num_dead_tuples > 0) ||
+			((vacrelstats->max_redblue_chains - vacrelstats->num_redblue_chains) < MaxHeapTuplesPerPage &&
+			 vacrelstats->num_redblue_chains > 0))
 		{
 			const int	hvp_index[] = {
 				PROGRESS_VACUUM_PHASE,
@@ -714,6 +757,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			/* Remove index entries */
 			for (i = 0; i < nindexes; i++)
 				lazy_vacuum_index(Irel[i],
+								  (vacrelstats->num_redblue_chains > 0),
 								  &indstats[i],
 								  vacrelstats);
 
@@ -736,6 +780,9 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			 * valid.
 			 */
 			vacrelstats->num_dead_tuples = 0;
+			vacrelstats->num_redblue_chains = 0;
+			memset(vacrelstats->redblue_chains, 0,
+					vacrelstats->max_redblue_chains * sizeof (LVRedBlueChain));
 			vacrelstats->num_index_scans++;
 
 			/* Report that we are once again scanning the heap */
@@ -939,15 +986,33 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 				continue;
 			}
 
+			ItemPointerSet(&(tuple.t_self), blkno, offnum);
+
 			/* Redirect items mustn't be touched */
 			if (ItemIdIsRedirected(itemid))
 			{
+				HeapCheckWarmChainStatus status = heap_check_warm_chain(page,
+						&tuple.t_self, false);
+				if (HCWC_IS_WARM(status))
+				{
+					vacrelstats->num_warm_chains++;
+
+					/*
+					 * A chain which is either completely Red or completely
+					 * Blue is a candidate for chain conversion. Remember the
+					 * chain and its color.
+					 */
+					if (HCWC_IS_ALL_RED(status))
+						lazy_record_red_chain(vacrelstats, &tuple.t_self);
+					else if (HCWC_IS_ALL_BLUE(status))
+						lazy_record_blue_chain(vacrelstats, &tuple.t_self);
+					else
+						vacrelstats->num_non_convertible_warm_chains++;
+				}
 				hastup = true;	/* this page won't be truncatable */
 				continue;
 			}
 
-			ItemPointerSet(&(tuple.t_self), blkno, offnum);
-
 			/*
 			 * DEAD item pointers are to be vacuumed normally; but we don't
 			 * count them in tups_vacuumed, else we'd be double-counting (at
@@ -967,6 +1032,28 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			tuple.t_len = ItemIdGetLength(itemid);
 			tuple.t_tableOid = RelationGetRelid(onerel);
 
+			if (!HeapTupleIsHeapOnly(&tuple))
+			{
+				HeapCheckWarmChainStatus status = heap_check_warm_chain(page,
+						&tuple.t_self, false);
+				if (HCWC_IS_WARM(status))
+				{
+					vacrelstats->num_warm_chains++;
+
+					/*
+					 * A chain which is either completely Red or completely
+					 * Blue is a candidate for chain conversion. Remember the
+					 * chain and its color.
+					 */
+					if (HCWC_IS_ALL_RED(status))
+						lazy_record_red_chain(vacrelstats, &tuple.t_self);
+					else if (HCWC_IS_ALL_BLUE(status))
+						lazy_record_blue_chain(vacrelstats, &tuple.t_self);
+					else
+						vacrelstats->num_non_convertible_warm_chains++;
+				}
+			}
+
 			tupgone = false;
 
 			switch (HeapTupleSatisfiesVacuum(&tuple, OldestXmin, buf))
@@ -1287,7 +1374,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 
 	/* If any tuples need to be deleted, perform final vacuum cycle */
 	/* XXX put a threshold on min number of tuples here? */
-	if (vacrelstats->num_dead_tuples > 0)
+	if (vacrelstats->num_dead_tuples > 0 || vacrelstats->num_redblue_chains > 0)
 	{
 		const int	hvp_index[] = {
 			PROGRESS_VACUUM_PHASE,
@@ -1305,6 +1392,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		/* Remove index entries */
 		for (i = 0; i < nindexes; i++)
 			lazy_vacuum_index(Irel[i],
+							  (vacrelstats->num_redblue_chains > 0),
 							  &indstats[i],
 							  vacrelstats);
 
@@ -1372,7 +1460,10 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
  *
  *		This routine marks dead tuples as unused and compacts out free
  *		space on their pages.  Pages not having dead tuples recorded from
- *		lazy_scan_heap are not visited at all.
+ *		lazy_scan_heap are not visited at all. This routine also converts
+ *		candidate WARM chains to HOT chains by clearing WARM-related flags. The
+ *		candidate chains are determined by the preceding index scans after
+ *		looking at the data collected by the first heap scan.
  *
  * Note: the reason for doing this as a second pass is we cannot remove
  * the tuples until we've removed their index entries, and we want to
@@ -1381,7 +1472,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 static void
 lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 {
-	int			tupindex;
+	int			tupindex, chainindex;
 	int			npages;
 	PGRUsage	ru0;
 	Buffer		vmbuffer = InvalidBuffer;
@@ -1390,33 +1481,66 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 	npages = 0;
 
 	tupindex = 0;
-	while (tupindex < vacrelstats->num_dead_tuples)
+	chainindex = 0;
+	while (tupindex < vacrelstats->num_dead_tuples ||
+		   chainindex < vacrelstats->num_redblue_chains)
 	{
-		BlockNumber tblk;
+		BlockNumber tblk, chainblk, vacblk;
 		Buffer		buf;
 		Page		page;
 		Size		freespace;
 
 		vacuum_delay_point();
 
-		tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
-		buf = ReadBufferExtended(onerel, MAIN_FORKNUM, tblk, RBM_NORMAL,
+		tblk = chainblk = InvalidBlockNumber;
+		if (chainindex < vacrelstats->num_redblue_chains)
+			chainblk =
+				ItemPointerGetBlockNumber(&(vacrelstats->redblue_chains[chainindex].chain_tid));
+		
+		if (tupindex < vacrelstats->num_dead_tuples)
+			tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
+
+		if (tblk == InvalidBlockNumber)
+			vacblk = chainblk;
+		else if (chainblk == InvalidBlockNumber)
+			vacblk = tblk;
+		else
+			vacblk = Min(chainblk, tblk);
+
+		Assert(vacblk != InvalidBlockNumber);
+
+		buf = ReadBufferExtended(onerel, MAIN_FORKNUM, vacblk, RBM_NORMAL,
 								 vac_strategy);
-		if (!ConditionalLockBufferForCleanup(buf))
+
+
+		if (vacblk == chainblk)
+			LockBufferForCleanup(buf);
+		else if (!ConditionalLockBufferForCleanup(buf))
 		{
 			ReleaseBuffer(buf);
 			++tupindex;
 			continue;
 		}
-		tupindex = lazy_vacuum_page(onerel, tblk, buf, tupindex, vacrelstats,
-									&vmbuffer);
+
+		/*
+		 * Convert WARM chains on this page. This should be done before
+		 * vacuuming the page to ensure that we can correctly set visibility
+		 * bits after clearing the WARM chains.
+		 */
+		if (vacblk == chainblk)
+			chainindex = lazy_warmclear_page(onerel, chainblk, buf, chainindex,
+					vacrelstats, &vmbuffer);
+
+		if (vacblk == tblk)
+			tupindex = lazy_vacuum_page(onerel, tblk, buf, tupindex, vacrelstats,
+					&vmbuffer);
 
 		/* Now that we've compacted the page, record its available space */
 		page = BufferGetPage(buf);
 		freespace = PageGetHeapFreeSpace(page);
 
 		UnlockReleaseBuffer(buf);
-		RecordPageWithFreeSpace(onerel, tblk, freespace);
+		RecordPageWithFreeSpace(onerel, vacblk, freespace);
 		npages++;
 	}
 
@@ -1435,6 +1559,63 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 }
 
 /*
+ *	lazy_warmclear_page() -- clear WARM flag and mark chains blue when possible
+ *
+ * Caller must hold pin and buffer cleanup lock on the buffer.
+ *
+ * chainindex is the index in vacrelstats->redblue_chains of the first
+ * candidate chain on this page.  We assume the rest follow sequentially.
+ * The return value is the first chainindex after the chains of this page.
+ */
+static int
+lazy_warmclear_page(Relation onerel, BlockNumber blkno, Buffer buffer,
+				 int chainindex, LVRelStats *vacrelstats, Buffer *vmbuffer)
+{
+	Page			page = BufferGetPage(buffer);
+	OffsetNumber	cleared_offnums[MaxHeapTuplesPerPage];
+	int				num_cleared = 0;
+
+	pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_WARMCLEARED, blkno);
+
+	START_CRIT_SECTION();
+
+	for (; chainindex < vacrelstats->num_redblue_chains ; chainindex++)
+	{
+		BlockNumber tblk;
+		LVRedBlueChain	*chain;
+
+		chain = &vacrelstats->redblue_chains[chainindex];
+
+		tblk = ItemPointerGetBlockNumber(&chain->chain_tid);
+		if (tblk != blkno)
+			break;				/* past end of tuples for this block */
+
+		/*
+		 * Since a heap page can have no more than MaxHeapTuplesPerPage
+		 * offnums and we process each offnum only once, an array of
+		 * MaxHeapTuplesPerPage entries is enough to hold all tuples cleared
+		 * on this page.
+		 */
+		if (!chain->keep_warm_chain)
+			num_cleared += heap_clear_warm_chain(page, &chain->chain_tid,
+					cleared_offnums + num_cleared);
+	}
+
+	/*
+	 * Mark buffer dirty before we write WAL.
+	 */
+	MarkBufferDirty(buffer);
+
+	/* XLOG stuff */
+	if (RelationNeedsWAL(onerel))
+	{
+	}
+
+	END_CRIT_SECTION();
+
+	return chainindex;
+}
+
+/*
  *	lazy_vacuum_page() -- free dead tuples on a page
  *					 and repair its fragmentation.
  *
@@ -1587,6 +1768,16 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
 	return false;
 }
 
+static void
+lazy_reset_redblue_pointer_count(LVRelStats *vacrelstats)
+{
+	int i;
+	for (i = 0; i < vacrelstats->num_redblue_chains; i++)
+	{
+		LVRedBlueChain *chain = &vacrelstats->redblue_chains[i];
+		chain->num_blue_pointers = chain->num_red_pointers = 0;
+	}
+}
 
 /*
  *	lazy_vacuum_index() -- vacuum one index relation.
@@ -1596,6 +1787,7 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
  */
 static void
 lazy_vacuum_index(Relation indrel,
+				  bool clear_warm,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats)
 {
@@ -1611,15 +1803,81 @@ lazy_vacuum_index(Relation indrel,
 	ivinfo.num_heap_tuples = vacrelstats->old_rel_tuples;
 	ivinfo.strategy = vac_strategy;
 
-	/* Do bulk deletion */
-	*stats = index_bulk_delete(&ivinfo, *stats,
-							   lazy_tid_reaped, (void *) vacrelstats);
+	/*
+	 * If told, convert WARM chains into HOT chains.
+	 *
+	 * We must have already collected the candidate WARM chains, i.e. chains
+	 * which have either only Red or only Blue tuples, but not a mix of both.
+	 *
+	 * This works in two phases. In the first phase, we do a complete index
+	 * scan and collect information about index pointers to the candidate
+	 * chains, but we don't do conversion. To be precise, we count the number
+	 * of Blue and Red index pointers to each candidate chain and use that
+	 * knowledge to arrive at a decision and do the actual conversion during
+	 * the second phase (though we do kill known-dead pointers in this phase).
+	 *
+	 * In the second phase, for each Red chain we check if we have seen a Red
+	 * index pointer. For such chains, we kill the Blue pointer and color the
+	 * index pointer. For such chains, we kill the Blue pointer and color the
+	 * Red pointer Blue. The heap tuples are marked Blue in the second heap
+	 * scan. If we did not find any Red pointer to a Red chain, that means that
+	 * the chain is reachable from the Blue pointer (because, say, the WARM
+	 * update did not add a new entry for this index). In that case, we do
+	 * nothing. There is a third case where we find more than one Blue pointer
+	 * to a Red chain. This can happen because of aborted vacuums. We don't
+	 * handle that case yet, but it should be possible to apply the same
+	 * recheck logic to find which of the Blue pointers is redundant and
+	 * remove it.
+	 * For Blue chains, we just kill the Red pointer, if it exists and keep the
+	 * Blue pointer.
+	 */
+	if (clear_warm)
+	{
+		lazy_reset_redblue_pointer_count(vacrelstats);
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_indexvac_phase1, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to remove %d row versions, found "
+						"%0.f red pointers, %0.f blue pointers, removed "
+						"%0.f red pointers, removed %0.f blue pointers",
+						RelationGetRelationName(indrel),
+						vacrelstats->num_dead_tuples,
+						(*stats)->num_red_pointers,
+						(*stats)->num_blue_pointers,
+						(*stats)->red_pointers_removed,
+						(*stats)->blue_pointers_removed)));
+
+		(*stats)->num_red_pointers = 0;
+		(*stats)->num_blue_pointers = 0;
+		(*stats)->red_pointers_removed = 0;
+		(*stats)->blue_pointers_removed = 0;
+		(*stats)->pointers_colored = 0;
+
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_indexvac_phase2, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to convert red pointers, found "
+						"%0.f red pointers, %0.f blue pointers, removed "
+						"%0.f red pointers, removed %0.f blue pointers, "
+						"colored %0.f red pointers blue",
+						RelationGetRelationName(indrel),
+						(*stats)->num_red_pointers,
+						(*stats)->num_blue_pointers,
+						(*stats)->red_pointers_removed,
+						(*stats)->blue_pointers_removed,
+						(*stats)->pointers_colored)));
+	}
+	else
+	{
+		/* Do bulk deletion */
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_tid_reaped, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to remove %d row versions",
+						RelationGetRelationName(indrel),
+						vacrelstats->num_dead_tuples),
+				 errdetail("%s.", pg_rusage_show(&ru0))));
+	}
 
-	ereport(elevel,
-			(errmsg("scanned index \"%s\" to remove %d row versions",
-					RelationGetRelationName(indrel),
-					vacrelstats->num_dead_tuples),
-			 errdetail("%s.", pg_rusage_show(&ru0))));
 }
 
 /*
@@ -1993,9 +2251,11 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 
 	if (vacrelstats->hasindex)
 	{
-		maxtuples = (vac_work_mem * 1024L) / sizeof(ItemPointerData);
+		maxtuples = (vac_work_mem * 1024L) / (sizeof(ItemPointerData) +
+				sizeof(LVRedBlueChain));
 		maxtuples = Min(maxtuples, INT_MAX);
-		maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));
+		maxtuples = Min(maxtuples, MaxAllocSize / (sizeof(ItemPointerData) +
+					sizeof(LVRedBlueChain)));
 
 		/* curious coding here to ensure the multiplication can't overflow */
 		if ((BlockNumber) (maxtuples / LAZY_ALLOC_TUPLES) > relblocks)
@@ -2013,6 +2273,57 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 	vacrelstats->max_dead_tuples = (int) maxtuples;
 	vacrelstats->dead_tuples = (ItemPointer)
 		palloc(maxtuples * sizeof(ItemPointerData));
+
+	/*
+	 * XXX Cheat for now and allocate the same size array for tracking blue and
+	 * red chains. maxtuples must have been already adjusted above to ensure we
+	 * don't cross vac_work_mem.
+	 */
+	vacrelstats->num_redblue_chains = 0;
+	vacrelstats->max_redblue_chains = (int) maxtuples;
+	vacrelstats->redblue_chains = (LVRedBlueChain *)
+		palloc0(maxtuples * sizeof(LVRedBlueChain));
+
+}
+
+/*
+ * lazy_record_blue_chain - remember one blue chain
+ */
+static void
+lazy_record_blue_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr)
+{
+	/*
+	 * The array shouldn't overflow under normal behavior, but perhaps it
+	 * could if we are given a really small maintenance_work_mem. In that
+	 * case, just forget the last few tuples (we'll get 'em next time).
+	 */
+	if (vacrelstats->num_redblue_chains < vacrelstats->max_redblue_chains)
+	{
+		vacrelstats->redblue_chains[vacrelstats->num_redblue_chains].chain_tid = *itemptr;
+		vacrelstats->redblue_chains[vacrelstats->num_redblue_chains].is_red_chain = 0;
+		vacrelstats->num_redblue_chains++;
+	}
+}
+
+/*
+ * lazy_record_red_chain - remember one red chain
+ */
+static void
+lazy_record_red_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr)
+{
+	/*
+	 * The array shouldn't overflow under normal behavior, but perhaps it
+	 * could if we are given a really small maintenance_work_mem. In that
+	 * case, just forget the last few tuples (we'll get 'em next time).
+	 */
+	if (vacrelstats->num_redblue_chains < vacrelstats->max_redblue_chains)
+	{
+		vacrelstats->redblue_chains[vacrelstats->num_redblue_chains].chain_tid = *itemptr;
+		vacrelstats->redblue_chains[vacrelstats->num_redblue_chains].is_red_chain = 1;
+		vacrelstats->num_redblue_chains++;
+	}
 }
 
 /*
@@ -2043,8 +2354,8 @@ lazy_record_dead_tuple(LVRelStats *vacrelstats,
  *
  *		Assumes dead_tuples array is in sorted order.
  */
-static bool
-lazy_tid_reaped(ItemPointer itemptr, void *state)
+static IndexBulkDeleteCallbackResult
+lazy_tid_reaped(ItemPointer itemptr, bool is_red, void *state)
 {
 	LVRelStats *vacrelstats = (LVRelStats *) state;
 	ItemPointer res;
@@ -2055,7 +2366,152 @@ lazy_tid_reaped(ItemPointer itemptr, void *state)
 								sizeof(ItemPointerData),
 								vac_cmp_itemptr);
 
-	return (res != NULL);
+	return (res != NULL) ? IBDCR_DELETE : IBDCR_KEEP;
+}
+
+/*
+ *	lazy_indexvac_phase1() -- run first pass of index vacuum
+ *
+ *		This has the right signature to be an IndexBulkDeleteCallback.
+ */
+static IndexBulkDeleteCallbackResult
+lazy_indexvac_phase1(ItemPointer itemptr, bool is_red, void *state)
+{
+	LVRelStats		*vacrelstats = (LVRelStats *) state;
+	ItemPointer		res;
+	LVRedBlueChain	*chain;
+
+	res = (ItemPointer) bsearch((void *) itemptr,
+								(void *) vacrelstats->dead_tuples,
+								vacrelstats->num_dead_tuples,
+								sizeof(ItemPointerData),
+								vac_cmp_itemptr);
+
+	if (res != NULL)
+		return IBDCR_DELETE;
+
+	chain = (LVRedBlueChain *) bsearch((void *) itemptr,
+								(void *) vacrelstats->redblue_chains,
+								vacrelstats->num_redblue_chains,
+								sizeof(LVRedBlueChain),
+								vac_cmp_redblue_chain);
+	if (chain != NULL)
+	{
+		if (is_red)
+			chain->num_red_pointers++;
+		else
+			chain->num_blue_pointers++;
+	}
+	return IBDCR_KEEP;
+}
+
+/*
+ *	lazy_indexvac_phase2() -- run second pass of index vacuum
+ *
+ *		This has the right signature to be an IndexBulkDeleteCallback.
+ */
+static IndexBulkDeleteCallbackResult
+lazy_indexvac_phase2(ItemPointer itemptr, bool is_red, void *state)
+{
+	LVRelStats		*vacrelstats = (LVRelStats *) state;
+	LVRedBlueChain	*chain;
+
+	chain = (LVRedBlueChain *) bsearch((void *) itemptr,
+								(void *) vacrelstats->redblue_chains,
+								vacrelstats->num_redblue_chains,
+								sizeof(LVRedBlueChain),
+								vac_cmp_redblue_chain);
+
+	if (chain != NULL && (chain->keep_warm_chain != 1))
+	{
+		if (chain->is_red_chain == 1)
+		{
+			/*
+			 * For Red chains, color Red index pointer Blue and kill the Blue
+			 * pointer if we have a Red index pointer.
+			 */
+			if (is_red)
+			{
+				Assert(chain->num_red_pointers == 1);
+				chain->keep_warm_chain = 0;
+				return IBDCR_COLOR_BLUE;
+			}
+			else
+			{
+				if (chain->num_red_pointers > 0)
+				{
+					chain->keep_warm_chain = 0;
+					return IBDCR_DELETE;
+				}
+				else if (chain->num_blue_pointers == 1)
+				{
+					chain->keep_warm_chain = 0;
+					return IBDCR_KEEP;
+				}
+			}
+		}
+		else
+		{
+			/*
+			 * For Blue chains, kill the Red pointer
+			 */
+			if (chain->num_red_pointers > 0)
+			{
+				chain->keep_warm_chain = 0;
+				return IBDCR_DELETE;
+			}
+			
+			/*
+			 * If this is the only surviving Blue pointer, keep it but convert
+			 * the chain.
+			 */
+			if (chain->num_blue_pointers == 1)
+			{
+				chain->keep_warm_chain = 0;
+				return IBDCR_KEEP;
+			}
+
+			/*
+			 * If there is more than one Blue pointer to this chain, we can
+			 * apply the recheck logic, kill the redundant Blue pointer and
+			 * convert the chain. But that's not yet done.
+			 */
+		}
+		chain->keep_warm_chain = 1;
+		return IBDCR_KEEP;
+	}
+	return IBDCR_KEEP;
+}
+
+/*
+ * Comparator routine for use with qsort() and bsearch(). Similar to
+ * vac_cmp_itemptr, but the right-hand argument is an LVRedBlueChain pointer.
+ */
+static int
+vac_cmp_redblue_chain(const void *left, const void *right)
+{
+	BlockNumber lblk,
+				rblk;
+	OffsetNumber loff,
+				roff;
+
+	lblk = ItemPointerGetBlockNumber((ItemPointer) left);
+	rblk = ItemPointerGetBlockNumber(&((LVRedBlueChain *) right)->chain_tid);
+
+	if (lblk < rblk)
+		return -1;
+	if (lblk > rblk)
+		return 1;
+
+	loff = ItemPointerGetOffsetNumber((ItemPointer) left);
+	roff = ItemPointerGetOffsetNumber(&((LVRedBlueChain *) right)->chain_tid);
+
+	if (loff < roff)
+		return -1;
+	if (loff > roff)
+		return 1;
+
+	return 0;
 }
 
 /*
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index d62d2de..3e49a8f 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -405,7 +405,8 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 						 root_tid,		/* tid of heap or root tuple */
 						 heapRelation,	/* heap relation */
 						 checkUnique,	/* type of uniqueness check to do */
-						 indexInfo);	/* index AM may need this */
+						 indexInfo,	/* index AM may need this */
+						 (modified_attrs != NULL));	/* is this a WARM update? */
 
 		/*
 		 * If the index has an associated exclusion constraint, check that.
diff --git a/src/backend/utils/time/combocid.c b/src/backend/utils/time/combocid.c
index baff998..6a2e2f2 100644
--- a/src/backend/utils/time/combocid.c
+++ b/src/backend/utils/time/combocid.c
@@ -106,7 +106,7 @@ HeapTupleHeaderGetCmin(HeapTupleHeader tup)
 {
 	CommandId	cid = HeapTupleHeaderGetRawCommandId(tup);
 
-	Assert(!(tup->t_infomask & HEAP_MOVED));
+	Assert(!(HeapTupleHeaderIsMoved(tup)));
 	Assert(TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXmin(tup)));
 
 	if (tup->t_infomask & HEAP_COMBOCID)
@@ -120,7 +120,7 @@ HeapTupleHeaderGetCmax(HeapTupleHeader tup)
 {
 	CommandId	cid = HeapTupleHeaderGetRawCommandId(tup);
 
-	Assert(!(tup->t_infomask & HEAP_MOVED));
+	Assert(!(HeapTupleHeaderIsMoved(tup)));
 
 	/*
 	 * Because GetUpdateXid() performs memory allocations if xmax is a
diff --git a/src/backend/utils/time/tqual.c b/src/backend/utils/time/tqual.c
index 703bdce..0df5a44 100644
--- a/src/backend/utils/time/tqual.c
+++ b/src/backend/utils/time/tqual.c
@@ -186,7 +186,7 @@ HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer)
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -205,7 +205,7 @@ HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer)
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -377,7 +377,7 @@ HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -396,7 +396,7 @@ HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -471,7 +471,7 @@ HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid,
 			return HeapTupleInvisible;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -490,7 +490,7 @@ HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -753,7 +753,7 @@ HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -772,7 +772,7 @@ HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -974,7 +974,7 @@ HeapTupleSatisfiesMVCC(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -993,7 +993,7 @@ HeapTupleSatisfiesMVCC(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -1180,7 +1180,7 @@ HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin,
 		if (HeapTupleHeaderXminInvalid(tuple))
 			return HEAPTUPLE_DEAD;
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_OFF)
+		else if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -1198,7 +1198,7 @@ HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin,
 						InvalidTransactionId);
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
diff --git a/src/include/access/amapi.h b/src/include/access/amapi.h
index d7702e5..68859f2 100644
--- a/src/include/access/amapi.h
+++ b/src/include/access/amapi.h
@@ -75,6 +75,14 @@ typedef bool (*aminsert_function) (Relation indexRelation,
 											   Relation heapRelation,
 											   IndexUniqueCheck checkUnique,
 											   struct IndexInfo *indexInfo);
+/* insert this WARM tuple */
+typedef bool (*amwarminsert_function) (Relation indexRelation,
+											   Datum *values,
+											   bool *isnull,
+											   ItemPointer heap_tid,
+											   Relation heapRelation,
+											   IndexUniqueCheck checkUnique,
+											   struct IndexInfo *indexInfo);
 
 /* bulk delete */
 typedef IndexBulkDeleteResult *(*ambulkdelete_function) (IndexVacuumInfo *info,
@@ -203,6 +211,7 @@ typedef struct IndexAmRoutine
 	ambuild_function ambuild;
 	ambuildempty_function ambuildempty;
 	aminsert_function aminsert;
+	amwarminsert_function amwarminsert;
 	ambulkdelete_function ambulkdelete;
 	amvacuumcleanup_function amvacuumcleanup;
 	amcanreturn_function amcanreturn;	/* can be NULL */
diff --git a/src/include/access/genam.h b/src/include/access/genam.h
index f467b18..bf1e6bd 100644
--- a/src/include/access/genam.h
+++ b/src/include/access/genam.h
@@ -75,12 +75,29 @@ typedef struct IndexBulkDeleteResult
 	bool		estimated_count;	/* num_index_tuples is an estimate */
 	double		num_index_tuples;		/* tuples remaining */
 	double		tuples_removed; /* # removed during vacuum operation */
+	double		num_red_pointers;	/* # red pointers found */
+	double		num_blue_pointers;	/* # blue pointers found */
+	double		pointers_colored;	/* # red pointers colored blue */
+	double		red_pointers_removed;	/* # red pointers removed */
+	double		blue_pointers_removed;	/* # blue pointers removed */
 	BlockNumber pages_deleted;	/* # unused pages in index */
 	BlockNumber pages_free;		/* # pages available for reuse */
 } IndexBulkDeleteResult;
 
+/*
+ * IndexBulkDeleteCallback should return one of the following
+ */
+typedef enum IndexBulkDeleteCallbackResult
+{
+	IBDCR_KEEP,			/* index tuple should be preserved */
+	IBDCR_DELETE,		/* index tuple should be deleted */
+	IBDCR_COLOR_BLUE	/* index tuple should be colored blue */
+} IndexBulkDeleteCallbackResult;
+
 /* Typedef for callback function to determine if a tuple is bulk-deletable */
-typedef bool (*IndexBulkDeleteCallback) (ItemPointer itemptr, void *state);
+typedef IndexBulkDeleteCallbackResult (*IndexBulkDeleteCallback) (
+										 ItemPointer itemptr,
+										 bool is_red, void *state);
 
 /* struct definitions appear in relscan.h */
 typedef struct IndexScanDescData *IndexScanDesc;
@@ -135,7 +152,8 @@ extern bool index_insert(Relation indexRelation,
 			 ItemPointer heap_t_ctid,
 			 Relation heapRelation,
 			 IndexUniqueCheck checkUnique,
-			 struct IndexInfo *indexInfo);
+			 struct IndexInfo *indexInfo,
+			 bool warm_update);
 
 extern IndexScanDesc index_beginscan(Relation heapRelation,
 				Relation indexRelation,
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 9412c3a..719a725 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -72,6 +72,20 @@ typedef struct HeapUpdateFailureData
 	CommandId	cmax;
 } HeapUpdateFailureData;
 
+typedef int HeapCheckWarmChainStatus;
+
+#define HCWC_BLUE_TUPLE	0x0001
+#define	HCWC_RED_TUPLE	0x0002
+#define HCWC_WARM_TUPLE	0x0004
+
+#define HCWC_IS_MIXED(status) \
+	(((status) & (HCWC_BLUE_TUPLE | HCWC_RED_TUPLE)) == \
+	 (HCWC_BLUE_TUPLE | HCWC_RED_TUPLE))
+#define HCWC_IS_ALL_RED(status) \
+	(((status) & HCWC_BLUE_TUPLE) == 0)
+#define HCWC_IS_ALL_BLUE(status) \
+	(((status) & HCWC_RED_TUPLE) == 0)
+#define HCWC_IS_WARM(status) \
+	(((status) & HCWC_WARM_TUPLE) != 0)
 
 /* ----------------
  *		function prototypes for heap access method
@@ -183,6 +197,10 @@ extern void simple_heap_update(Relation relation, ItemPointer otid,
 				   bool *warm_update);
 
 extern void heap_sync(Relation relation);
+extern HeapCheckWarmChainStatus heap_check_warm_chain(Page dp,
+				   ItemPointer tid, bool stop_at_warm);
+extern int heap_clear_warm_chain(Page dp, ItemPointer tid,
+				   OffsetNumber *cleared_offnums);
 
 /* in heap/pruneheap.c */
 extern void heap_page_prune_opt(Relation relation, Buffer buffer);
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index ddbdbcd..45fe12c 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -201,6 +201,21 @@ struct HeapTupleHeaderData
 										 * upgrade support */
 #define HEAP_MOVED (HEAP_MOVED_OFF | HEAP_MOVED_IN)
 
+/*
+ * A WARM chain usually consists of two parts, each of which is a HOT chain
+ * in itself, i.e. all indexed columns have the same value within it, but a
+ * WARM update separates the two parts. We call these parts the Blue chain
+ * and the Red chain, and we need a mechanism to identify which part a tuple
+ * belongs to. We can't just check HeapTupleHeaderIsHeapWarmTuple() because
+ * during a WARM update, both the old and the new tuple are marked as WARM
+ * tuples.
+ *
+ * We need another infomask bit for this, so we reuse the infomask bit that
+ * was earlier used by old-style VACUUM FULL. This is safe because
+ * HEAP_WARM_RED is only ever set along with the HEAP_WARM_TUPLE flag. So if
+ * both HEAP_WARM_TUPLE and HEAP_WARM_RED are set, we know the tuple belongs
+ * to the red part of the WARM chain.
+ */
+#define HEAP_WARM_RED			0x4000
 #define HEAP_XACT_MASK			0xFFF0	/* visibility-related bits */
 
 /*
@@ -397,7 +412,7 @@ struct HeapTupleHeaderData
 /* SetCmin is reasonably simple since we never need a combo CID */
 #define HeapTupleHeaderSetCmin(tup, cid) \
 do { \
-	Assert(!((tup)->t_infomask & HEAP_MOVED)); \
+	Assert(!HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_cid = (cid); \
 	(tup)->t_infomask &= ~HEAP_COMBOCID; \
 } while (0)
@@ -405,7 +420,7 @@ do { \
 /* SetCmax must be used after HeapTupleHeaderAdjustCmax; see combocid.c */
 #define HeapTupleHeaderSetCmax(tup, cid, iscombo) \
 do { \
-	Assert(!((tup)->t_infomask & HEAP_MOVED)); \
+	Assert(!HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_cid = (cid); \
 	if (iscombo) \
 		(tup)->t_infomask |= HEAP_COMBOCID; \
@@ -415,7 +430,7 @@ do { \
 
 #define HeapTupleHeaderGetXvac(tup) \
 ( \
-	((tup)->t_infomask & HEAP_MOVED) ? \
+	HeapTupleHeaderIsMoved(tup) ? \
 		(tup)->t_choice.t_heap.t_field3.t_xvac \
 	: \
 		InvalidTransactionId \
@@ -423,7 +438,7 @@ do { \
 
 #define HeapTupleHeaderSetXvac(tup, xid) \
 do { \
-	Assert((tup)->t_infomask & HEAP_MOVED); \
+	Assert(HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_xvac = (xid); \
 } while (0)
 
@@ -651,6 +666,58 @@ do { \
 )
 
 /*
+ * Macros to check if a tuple is a moved-off/in tuple left behind by an
+ * old-style VACUUM FULL from the pre-9.0 era. Such a tuple must not have
+ * the HEAP_WARM_TUPLE flag set.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderIsMovedOff(tuple) \
+( \
+ 	!HeapTupleHeaderIsHeapWarmTuple((tuple)) && \
+  	((tuple)->t_infomask & HEAP_MOVED_OFF) \
+)
+
+#define HeapTupleHeaderIsMovedIn(tuple) \
+( \
+ 	!HeapTupleHeaderIsHeapWarmTuple((tuple)) && \
+  	((tuple)->t_infomask & HEAP_MOVED_IN) \
+)
+
+#define HeapTupleHeaderIsMoved(tuple) \
+( \
+ 	!HeapTupleHeaderIsHeapWarmTuple((tuple)) && \
+  	((tuple)->t_infomask & HEAP_MOVED) \
+)
+
+/*
+ * Check if tuple belongs to the Red part of the WARM chain.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderIsWarmRed(tuple) \
+( \
+	HeapTupleHeaderIsHeapWarmTuple(tuple) && \
+    (((tuple)->t_infomask & HEAP_WARM_RED) != 0) \
+)
+
+/*
+ * Mark the tuple as a member of the Red chain. This must only be done on a
+ * tuple that is already marked as a WARM tuple.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderSetWarmRed(tuple) \
+( \
+  	AssertMacro(HeapTupleHeaderIsHeapWarmTuple(tuple)), \
+	(tuple)->t_infomask |= HEAP_WARM_RED \
+)
+
+#define HeapTupleHeaderClearWarmRed(tuple) \
+( \
+	(tuple)->t_infomask &= ~HEAP_WARM_RED \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
@@ -810,6 +877,15 @@ struct MinimalTupleData
 #define HeapTupleClearHeapWarmTuple(tuple) \
 		HeapTupleHeaderClearHeapWarmTuple((tuple)->t_data)
 
+#define HeapTupleIsHeapWarmTupleRed(tuple) \
+		HeapTupleHeaderIsWarmRed((tuple)->t_data)
+
+#define HeapTupleSetHeapWarmTupleRed(tuple) \
+		HeapTupleHeaderSetWarmRed((tuple)->t_data)
+
+#define HeapTupleClearHeapWarmTupleRed(tuple) \
+		HeapTupleHeaderClearWarmRed((tuple)->t_data)
+
 #define HeapTupleGetOid(tuple) \
 		HeapTupleHeaderGetOid((tuple)->t_data)
 
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index 08d056d..40be895 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -427,6 +427,9 @@ typedef BTScanOpaqueData *BTScanOpaque;
 #define SK_BT_DESC			(INDOPTION_DESC << SK_BT_INDOPTION_SHIFT)
 #define SK_BT_NULLS_FIRST	(INDOPTION_NULLS_FIRST << SK_BT_INDOPTION_SHIFT)
 
+/* This index tuple points to the red part of the WARM chain */
+#define INDEX_RED_CHAIN	0x2000
+
 /*
  * external entry points for btree, in nbtree.c
  */
@@ -437,6 +440,10 @@ extern bool btinsert(Relation rel, Datum *values, bool *isnull,
 		 ItemPointer ht_ctid, Relation heapRel,
 		 IndexUniqueCheck checkUnique,
 		 struct IndexInfo *indexInfo);
+extern bool btwarminsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 struct IndexInfo *indexInfo);
 extern IndexScanDesc btbeginscan(Relation rel, int nkeys, int norderbys);
 extern Size btestimateparallelscan(void);
 extern void btinitparallelscan(void *target);
diff --git a/src/include/commands/progress.h b/src/include/commands/progress.h
index 9472ecc..b355b61 100644
--- a/src/include/commands/progress.h
+++ b/src/include/commands/progress.h
@@ -25,6 +25,7 @@
 #define PROGRESS_VACUUM_NUM_INDEX_VACUUMS		4
 #define PROGRESS_VACUUM_MAX_DEAD_TUPLES			5
 #define PROGRESS_VACUUM_NUM_DEAD_TUPLES			6
+#define PROGRESS_VACUUM_HEAP_BLKS_WARMCLEARED	7
 
 /* Phases of vacuum (as advertised via PROGRESS_VACUUM_PHASE) */
 #define PROGRESS_VACUUM_PHASE_SCAN_HEAP			1
diff --git a/src/test/regress/expected/warm.out b/src/test/regress/expected/warm.out
index 0aa3bb7..6391891 100644
--- a/src/test/regress/expected/warm.out
+++ b/src/test/regress/expected/warm.out
@@ -26,12 +26,12 @@ SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
 
 -- Even when seqscan is disabled and indexscan is forced
 SET enable_seqscan = false;
-EXPLAIN SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
-                                 QUERY PLAN                                 
-----------------------------------------------------------------------------
- Bitmap Heap Scan on updtst_tab1  (cost=4.45..47.23 rows=22 width=72)
+EXPLAIN (costs off) SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab1
    Recheck Cond: (b = 140001)
-   ->  Bitmap Index Scan on updtst_indx1  (cost=0.00..4.45 rows=22 width=0)
+   ->  Bitmap Index Scan on updtst_indx1
          Index Cond: (b = 140001)
 (4 rows)
 
@@ -42,12 +42,12 @@ SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
 (1 row)
 
 -- Check if index only scan works correctly
-EXPLAIN SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
-                                 QUERY PLAN                                 
-----------------------------------------------------------------------------
- Bitmap Heap Scan on updtst_tab1  (cost=4.45..47.23 rows=22 width=4)
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab1
    Recheck Cond: (b = 140001)
-   ->  Bitmap Index Scan on updtst_indx1  (cost=0.00..4.45 rows=22 width=0)
+   ->  Bitmap Index Scan on updtst_indx1
          Index Cond: (b = 140001)
 (4 rows)
 
@@ -59,10 +59,10 @@ SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
 
 -- Table must be vacuumed to force index-only scan
 VACUUM updtst_tab1;
-EXPLAIN SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
-                                      QUERY PLAN                                      
---------------------------------------------------------------------------------------
- Index Only Scan using updtst_indx1 on updtst_tab1  (cost=0.29..9.16 rows=50 width=4)
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx1 on updtst_tab1
    Index Cond: (b = 140001)
 (2 rows)
 
@@ -99,12 +99,12 @@ SELECT * FROM updtst_tab2 WHERE c = 'foo6';
  1 | 701 | foo6 | bar
 (1 row)
 
-EXPLAIN SELECT * FROM updtst_tab2 WHERE b = 701;
-                                QUERY PLAN                                 
----------------------------------------------------------------------------
- Bitmap Heap Scan on updtst_tab2  (cost=4.18..12.64 rows=4 width=72)
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab2
    Recheck Cond: (b = 701)
-   ->  Bitmap Index Scan on updtst_indx2  (cost=0.00..4.18 rows=4 width=0)
+   ->  Bitmap Index Scan on updtst_indx2
          Index Cond: (b = 701)
 (4 rows)
 
@@ -115,12 +115,12 @@ SELECT * FROM updtst_tab2 WHERE a = 1;
 (1 row)
 
 SET enable_seqscan = false;
-EXPLAIN SELECT * FROM updtst_tab2 WHERE b = 701;
-                                QUERY PLAN                                 
----------------------------------------------------------------------------
- Bitmap Heap Scan on updtst_tab2  (cost=4.18..12.64 rows=4 width=72)
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab2
    Recheck Cond: (b = 701)
-   ->  Bitmap Index Scan on updtst_indx2  (cost=0.00..4.18 rows=4 width=0)
+   ->  Bitmap Index Scan on updtst_indx2
          Index Cond: (b = 701)
 (4 rows)
 
@@ -131,10 +131,10 @@ SELECT * FROM updtst_tab2 WHERE b = 701;
 (1 row)
 
 VACUUM updtst_tab2;
-EXPLAIN SELECT b FROM updtst_tab2 WHERE b = 701;
-                                     QUERY PLAN                                      
--------------------------------------------------------------------------------------
- Index Only Scan using updtst_indx2 on updtst_tab2  (cost=0.14..4.16 rows=1 width=4)
+EXPLAIN (costs off) SELECT b FROM updtst_tab2 WHERE b = 701;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx2 on updtst_tab2
    Index Cond: (b = 701)
 (2 rows)
 
@@ -212,10 +212,10 @@ SELECT * FROM updtst_tab3 WHERE b = 1421;
 (1 row)
 
 VACUUM updtst_tab3;
-EXPLAIN SELECT b FROM updtst_tab3 WHERE b = 701;
-                        QUERY PLAN                         
------------------------------------------------------------
- Seq Scan on updtst_tab3  (cost=0.00..2.25 rows=1 width=4)
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 701;
+       QUERY PLAN        
+-------------------------
+ Seq Scan on updtst_tab3
    Filter: (b = 701)
 (2 rows)
 
@@ -293,10 +293,10 @@ SELECT * FROM updtst_tab3 WHERE b = 1422;
 (1 row)
 
 VACUUM updtst_tab3;
-EXPLAIN SELECT b FROM updtst_tab3 WHERE b = 702;
-                                     QUERY PLAN                                      
--------------------------------------------------------------------------------------
- Index Only Scan using updtst_indx3 on updtst_tab3  (cost=0.14..8.16 rows=1 width=4)
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 702;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx3 on updtst_tab3
    Index Cond: (b = 702)
 (2 rows)
 
diff --git a/src/test/regress/sql/warm.sql b/src/test/regress/sql/warm.sql
index b73c278..f31127c 100644
--- a/src/test/regress/sql/warm.sql
+++ b/src/test/regress/sql/warm.sql
@@ -23,16 +23,16 @@ SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
 
 -- Even when seqscan is disabled and indexscan is forced
 SET enable_seqscan = false;
-EXPLAIN SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+EXPLAIN (costs off) SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
 SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
 
 -- Check if index only scan works correctly
-EXPLAIN SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
 SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
 
 -- Table must be vacuumed to force index-only scan
 VACUUM updtst_tab1;
-EXPLAIN SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
 SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
 
 SET enable_seqscan = true;
@@ -58,15 +58,15 @@ UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
 SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
 SELECT * FROM updtst_tab2 WHERE c = 'foo6';
 
-EXPLAIN SELECT * FROM updtst_tab2 WHERE b = 701;
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
 SELECT * FROM updtst_tab2 WHERE a = 1;
 
 SET enable_seqscan = false;
-EXPLAIN SELECT * FROM updtst_tab2 WHERE b = 701;
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
 SELECT * FROM updtst_tab2 WHERE b = 701;
 
 VACUUM updtst_tab2;
-EXPLAIN SELECT b FROM updtst_tab2 WHERE b = 701;
+EXPLAIN (costs off) SELECT b FROM updtst_tab2 WHERE b = 701;
 SELECT b FROM updtst_tab2 WHERE b = 701;
 
 SET enable_seqscan = true;
@@ -109,7 +109,7 @@ SELECT * FROM updtst_tab3 WHERE b = 701;
 SELECT * FROM updtst_tab3 WHERE b = 1421;
 
 VACUUM updtst_tab3;
-EXPLAIN SELECT b FROM updtst_tab3 WHERE b = 701;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 701;
 SELECT b FROM updtst_tab3 WHERE b = 701;
 SELECT b FROM updtst_tab3 WHERE b = 1421;
 
@@ -146,7 +146,7 @@ SELECT * FROM updtst_tab3 WHERE b = 702;
 SELECT * FROM updtst_tab3 WHERE b = 1422;
 
 VACUUM updtst_tab3;
-EXPLAIN SELECT b FROM updtst_tab3 WHERE b = 702;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 702;
 SELECT b FROM updtst_tab3 WHERE b = 702;
 SELECT b FROM updtst_tab3 WHERE b = 1422;
 
0002_warm_updates_v12.patch (application/octet-stream)
diff --git b/contrib/bloom/blutils.c a/contrib/bloom/blutils.c
index f2eda67..b356e2b 100644
--- b/contrib/bloom/blutils.c
+++ a/contrib/bloom/blutils.c
@@ -142,6 +142,7 @@ blhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git b/src/backend/access/brin/brin.c a/src/backend/access/brin/brin.c
index b22563b..b4a1465 100644
--- b/src/backend/access/brin/brin.c
+++ a/src/backend/access/brin/brin.c
@@ -116,6 +116,7 @@ brinhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git b/src/backend/access/gist/gist.c a/src/backend/access/gist/gist.c
index 6593771..843389b 100644
--- b/src/backend/access/gist/gist.c
+++ a/src/backend/access/gist/gist.c
@@ -94,6 +94,7 @@ gisthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git b/src/backend/access/hash/hash.c a/src/backend/access/hash/hash.c
index 24510e7..6645160 100644
--- b/src/backend/access/hash/hash.c
+++ a/src/backend/access/hash/hash.c
@@ -90,6 +90,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = hashrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -271,6 +272,8 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 	OffsetNumber offnum;
 	ItemPointer current;
 	bool		res;
+	IndexTuple	itup;
+
 
 	/* Hash indexes are always lossy since we store only the hash code */
 	scan->xs_recheck = true;
@@ -308,8 +311,6 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 			 offnum <= maxoffnum;
 			 offnum = OffsetNumberNext(offnum))
 		{
-			IndexTuple	itup;
-
 			itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 			if (ItemPointerEquals(&(so->hashso_heappos), &(itup->t_tid)))
 				break;
diff --git b/src/backend/access/hash/hashsearch.c a/src/backend/access/hash/hashsearch.c
index 9e5d7e4..60e941d 100644
--- b/src/backend/access/hash/hashsearch.c
+++ a/src/backend/access/hash/hashsearch.c
@@ -59,6 +59,8 @@ _hash_next(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
 	return true;
 }
 
@@ -363,6 +365,9 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
+
 	return true;
 }
 
diff --git b/src/backend/access/hash/hashutil.c a/src/backend/access/hash/hashutil.c
index c705531..dcba734 100644
--- b/src/backend/access/hash/hashutil.c
+++ a/src/backend/access/hash/hashutil.c
@@ -17,8 +17,12 @@
 #include "access/hash.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
+#include "nodes/execnodes.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 #define CALC_NEW_BUCKET(old_bucket, lowmask) \
 			old_bucket | (lowmask + 1)
@@ -446,3 +450,109 @@ _hash_get_newbucket_from_oldbucket(Relation rel, Bucket old_bucket,
 
 	return new_bucket;
 }
+
+/*
+ * Recheck if the heap tuple satisfies the key stored in the index tuple
+ */
+bool
+hashrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	IndexInfo  *indexInfo;
+	EState	   *estate;
+	ExprContext *econtext;
+	TupleTableSlot *slot;
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	Datum		values2[INDEX_MAX_KEYS];
+	bool		isnull2[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	indexInfo = BuildIndexInfo(indexRel);
+
+	/*
+	 * The heap tuple must be put into a slot for FormIndexDatum.
+	 */
+	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
+
+	ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
+
+	/*
+	 * Typically the index won't have expressions, but if it does we need an
+	 * EState to evaluate them.  We need it for exclusion constraints too,
+	 * even if they are just on simple columns.
+	 */
+	if (indexInfo->ii_Expressions != NIL ||
+			indexInfo->ii_ExclusionOps != NULL)
+	{
+		estate = CreateExecutorState();
+		econtext = GetPerTupleExprContext(estate);
+		econtext->ecxt_scantuple = slot;
+	}
+	else
+		estate = NULL;
+
+	/*
+	 * Form the index values and isnull flags for the index entry that we need
+	 * to check.
+	 *
+	 * Note: if the index uses functions that are not as immutable as they are
+	 * supposed to be, this could produce an index tuple different from the
+	 * original.  The index AM can catch such errors by verifying that it
+	 * finds a matching index entry with the tuple's TID.  For exclusion
+	 * constraints we check this in check_exclusion_constraint().
+	 */
+	FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
+	/*
+	 * HASH indexes compute a hash value of the key and store that in the
+	 * index. So we must first obtain the hash of the value obtained from the
+	 * heap and then do a comparison
+	 */
+	_hash_convert_tuple(indexRel, values, isnull, values2, isnull2);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL then they are equal
+		 */
+		if (isnull2[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If either is NULL then they are not equal
+		 */
+		if (isnull2[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now do a raw memory comparison
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values2[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	if (estate != NULL)
+		FreeExecutorState(estate);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	return equal;
+}
diff --git b/src/backend/access/heap/README.WARM a/src/backend/access/heap/README.WARM
new file mode 100644
index 0000000..7b9a712
--- /dev/null
+++ a/src/backend/access/heap/README.WARM
@@ -0,0 +1,306 @@
+src/backend/access/heap/README.WARM
+
+Write Amplification Reduction Method (WARM)
+===========================================
+
+The Heap Only Tuple (HOT) feature greatly reduced redundant index
+entries and allowed re-use of the dead space occupied by previously
+updated or deleted tuples (see src/backend/access/heap/README.HOT).
+
+One of the necessary conditions for a HOT update is that it must not
+change any column used in an index on the table. This condition is
+sometimes hard to meet, especially for complex workloads with several
+indexes on large yet frequently updated tables. Worse, even when only
+one or two indexed columns are updated, a regular non-HOT update still
+inserts a new index entry in every index on the table, irrespective of
+whether the key pertaining to that index changed or not.
+WARM is a technique devised to address these problems.
+
+
+Update Chains With Multiple Index Entries Pointing to the Root
+--------------------------------------------------------------
+
+When a non-HOT update is caused by an index key change, a new index
+entry must be inserted for the changed index. But if the index key
+hasn't changed for the other indexes, we don't really need to insert
+new entries for them. Even though the existing index entries point to
+the old tuple, the new tuple is reachable via the t_ctid chain. To keep
+things simple, a WARM update requires that the heap block have enough
+space to store the new version of the tuple, the same requirement as
+for HOT updates.
+
+In WARM, we ensure that every index entry always points to the root of
+the WARM chain. In fact, a WARM chain looks exactly like a HOT chain
+except that there can be multiple index entries pointing to the root of
+the chain. So when a WARM update inserts a new entry into an index for
+the updated tuple, that entry is made to point to the root of the WARM
+chain.
+
+For example, consider a table with two columns and an index on each of
+them. When a tuple is first inserted into the table, we have exactly
+one index entry pointing to the tuple from each index.
+
+	lp [1]
+	[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's entry (aaaa) also points to 1
+
+Now if the tuple's second column is updated and there is room on the
+page, we perform a WARM update. Index1 gets no new entry, and Index2's
+new entry still points to the root tuple of the chain.
+
+	lp [1]  [2]
+	[1111, aaaa]->[1111, bbbb]
+
+	Index1's entry (1111) points to 1
+	Index2's old entry (aaaa) points to 1
+	Index2's new entry (bbbb) also points to 1
+
+"An update chain that has more than one index entry pointing to its
+root line pointer is called a WARM chain, and the action that creates
+a WARM chain is called a WARM update."
+
+Since all indexes always point to the root of the WARM chain, even when
+there is more than one index entry, WARM chains can be pruned and dead
+tuples removed without any corresponding index cleanup.
+
+While this solves the problem of pruning dead tuples from a HOT/WARM
+chain, it opens up a new technical challenge: we now have a situation
+where a heap tuple is reachable from multiple index entries, each with
+a different index key. While MVCC still ensures that only valid tuples
+are returned, a tuple may be returned via an index entry whose key it
+no longer matches. In the above example, the tuple [1111, bbbb] is
+reachable from both key (aaaa) and key (bbbb). For this reason, tuples
+returned from a WARM chain must always be rechecked for an index
+key-match.
+
+Recheck Index Key Against Heap Tuple
+------------------------------------
+
+Since every index AM has its own notion of index tuples, each AM must
+implement its own method to recheck heap tuples. For example, a hash
+index stores the hash value of the column, so the recheck routine for
+the hash AM must first compute the hash value of the heap attribute
+and then compare it against the value stored in the index tuple.
+
+The patch currently implements recheck routines for hash and btree
+indexes. If a table has an index whose AM doesn't provide a recheck
+routine, WARM updates are disabled on that table.
+
+Problem With Duplicate (key, ctid) Index Entries
+------------------------------------------------
+
+The index-key recheck logic works only as long as no two index entries
+with the same key point to the same WARM chain. Otherwise, the same
+valid tuple is reachable via multiple index entries, each of which
+satisfies the key check. In the above example, if the tuple
+[1111, bbbb] is again updated to [1111, aaaa] and we insert a new index
+entry (aaaa) pointing to the root line pointer, we end up with the
+following structure:
+
+	lp [1]  [2]  [3]
+	[1111, aaaa]->[1111, bbbb]->[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's oldest entry (aaaa) points to 1
+	Index2's old entry (bbbb) also points to 1
+	Index2's new entry (aaaa) also points to 1
+
+We must solve this problem to ensure that the same tuple is not
+reachable via multiple index pointers. There are a couple of ways to
+address this issue:
+
+1. Do not allow WARM update to a tuple from a WARM chain. This
+guarantees that there can never be duplicate index entries to the same
+root line pointer because we must have checked for old and new index
+keys while doing the first WARM update.
+
+2. Do not allow duplicate (key, ctid) index pointers. In the above
+example, since (aaaa, 1) already exists in the index, we must not insert
+a duplicate index entry.
+
+The patch currently implements option 1, i.e. it does not WARM update a
+tuple that already belongs to a WARM chain. HOT updates are still fine
+because they do not add new index entries.
+
+Even with this restriction, WARM is a significant improvement because
+the number of regular (non-HOT) UPDATEs is cut roughly in half.
+
+Expression and Partial Indexes
+------------------------------
+
+Expressions may evaluate to the same value even if the underlying column
+values have changed. A simple example is an index on "lower(col)", which
+will return the same value if the new heap value differs only in case.
+So we cannot rely solely on the heap column check to decide whether or
+not to insert a new index entry for expression indexes. Similarly, for
+partial indexes, the predicate expression must be evaluated to decide
+whether or not to create a new index entry when columns referred to in
+the predicate expression change.
+
+(None of this is currently implemented; we simply disallow a WARM
+update if a column used in an expression index or an index predicate
+has changed.)
+
+
+Efficiently Finding the Root Line Pointer
+-----------------------------------------
+
+During a WARM update, we must be able to find the root line pointer of
+the tuple being updated. Normally, the t_ctid field in the heap tuple
+header is used to find the next tuple in the update chain. But the tuple
+we are updating must be the last tuple in the chain, and in that case
+t_ctid points to the tuple itself. So we can use t_ctid to store
+additional information in the last tuple of the update chain, as long as
+the fact that the tuple is the last one is recorded elsewhere.
+
+We now utilize another bit from t_infomask2 to explicitly identify that
+this is the last tuple in the update chain.
+
+HEAP_LATEST_TUPLE - When this bit is set, the tuple is the last tuple in
+the update chain. The OffsetNumber part of t_ctid points to the root
+line pointer of the chain when HEAP_LATEST_TUPLE flag is set.
+
+If the UPDATE operation aborts, the last tuple in the update chain
+becomes dead. The tuple that then remains the last valid tuple in the
+chain does not carry the root line pointer information. In such rare
+cases, the root line pointer must be found the hard way, by scanning
+the entire heap page.
+
+Tracking WARM Chains
+--------------------
+
+The old tuple and every subsequent tuple in the chain are marked with a
+special HEAP_WARM_TUPLE flag. We use the last remaining bit in
+t_infomask2 to store this information.
+
+When a tuple is returned from a WARM chain, the caller must do
+additional checks to ensure that the tuple matches the index key. Even
+if the tuple precedes the WARM update in the chain, it must still be
+rechecked for an index key match (this covers the case where the old
+tuple is returned via the new index key). So we must follow the update
+chain all the way to the end every time to check whether this is a WARM
+chain.
+
+When the old updated tuple is retired and the root line pointer is
+converted into a redirected line pointer, we can copy the WARM-chain
+information to the redirected line pointer by storing a special value in
+the lp_len field of the line pointer. This handles the most common case,
+where a WARM chain is reduced to a redirect line pointer and a single
+tuple.
+
+Converting WARM chains back to HOT chains (VACUUM ?)
+----------------------------------------------------
+
+The current implementation of WARM allows only one WARM update per
+chain. This simplifies the design and addresses certain issues around
+duplicate scans. But it also implies that the benefit of WARM can be no
+more than 50%, which is still significant; if we could convert WARM
+chains back to normal status, we could do far more WARM updates.
+
+A distinct property of a WARM chain is that at least one index has more
+than one live index entry pointing to the root of the chain. In other
+words, if we can remove the duplicate entry from every index, or
+conclusively prove that there are no duplicate index entries for the
+root line pointer, the chain can again be marked as HOT.
+
+Here is one idea:
+
+A WARM chain has two parts, separated by the tuple that caused the WARM
+update. All tuples in each part have matching index keys, but certain
+index keys may not match between the two parts. Let's say we mark heap
+tuples in each part with a special Red-Blue flag. The same flag is
+replicated in the index tuples. For example, when new rows are inserted
+in a table, they are marked with the Blue flag, and the index entries
+associated with those rows are also marked with the Blue flag. When a
+row is WARM updated, the new version is marked with the Red flag and the
+new index entry created by the update is also marked with the Red flag.
+
+
+Heap chain: [1] [2] [3] [4]
+			[aaaa, 1111]B -> [aaaa, 1111]B -> [bbbb, 1111]R -> [bbbb, 1111]R
+
+Index1: 	(aaaa)B points to 1 (satisfies only tuples marked with B)
+			(bbbb)R points to 1 (satisfies only tuples marked with R)
+
+Index2:		(1111)B points to 1 (satisfies both B and R tuples)
+
+
+It's clear that for indexes with both Red and Blue pointers, a heap
+tuple with the Blue flag will be reachable from the Blue pointer and one
+with the Red flag from the Red pointer. But for indexes which did not
+create a new entry, both Blue and Red tuples will be reachable from the
+Blue pointer (there is no Red pointer in such indexes). So, as a side
+note, matching Red and Blue flags alone is not enough from the index
+scan perspective.
+
+During the first heap scan of VACUUM, we look for tuples with
+HEAP_WARM_TUPLE set.  If all live tuples in the chain are marked either
+with the Blue flag or with the Red flag (but not a mix of Red and Blue),
+then the chain is a candidate for HOT conversion.  We remember the root
+line pointer and the Red-Blue flag of the WARM chain in a separate
+array.
+
+If we have a Red WARM chain, then our goal is to remove the Blue
+pointers, and vice versa. But there is a catch. For Index2 above, there
+is only a Blue pointer, and it must not be removed. IOW we should remove
+the Blue pointer iff a Red pointer exists. Since index vacuum may visit
+Red and Blue pointers in any order, I think we will need another index
+pass to remove dead index pointers. So in the first index pass we check
+which WARM candidates have two index pointers. In the second pass, we
+remove the dead pointer and reset the Red flag if the surviving index
+pointer is Red.
+
+During the second heap scan, we fix the WARM chain by clearing the
+HEAP_WARM_TUPLE flag and also resetting the Red flag to Blue.
+
+There are some more problems around aborted vacuums. For example, if
+vacuum aborts after changing a Red index flag to Blue but before
+removing the other Blue pointer, we will end up with two Blue pointers
+to a Red WARM chain. But since the HEAP_WARM_TUPLE flag on the heap
+tuple is still set, further WARM updates to the chain will be blocked. I
+guess we will need some special handling for the case of multiple Blue
+pointers. We can either leave these WARM chains alone and let them die
+with a subsequent non-WARM update, or we must apply the heap-recheck
+logic during index vacuum to find the dead pointer. Given that vacuum
+aborts are not common, I am inclined to leave this case unhandled. We
+must still check for the presence of multiple Blue pointers and ensure
+that we don't accidentally remove either of the Blue pointers, and that
+we don't clear such WARM chains either.
+
+CREATE INDEX CONCURRENTLY
+-------------------------
+
+Currently CREATE INDEX CONCURRENTLY (CIC) is implemented as a 3-phase
+process.  In the first phase, we create the catalog entry for the new
+index so that the index is visible to all other backends, but we still
+don't use it for either reads or writes.  We do, however, ensure that no
+new broken HOT chains are created by new transactions. In the second
+phase, we build the new index using an MVCC snapshot and then make the
+index available for inserts. We then do another pass over the table and
+insert any missing tuples, each time indexing only the tuple's root line
+pointer. See README.HOT for details about how HOT impacts CIC and how
+the various challenges are tackled.
+
+WARM poses another challenge because it allows creation of HOT chains
+even when an index key is changed. But since the index is not ready for
+insertion until the second phase is over, we might end up with a
+situation where the HOT chain has tuples with different index column
+values, yet only one of these values is indexed by the new index. Note
+that during the third phase, we only index tuples whose root line
+pointer is missing from the index. But we can't easily check whether the
+existing index tuple actually indexes the heap tuple visible to the new
+MVCC snapshot; finding that out would require querying the index again
+for every tuple in the chain, especially if it's a WARM tuple. Another
+option would be to return index keys along with the heap TIDs when the
+index is scanned for collecting all indexed TIDs during the third phase.
+We could then compare the heap tuple against the already-indexed key and
+decide whether or not to index the new tuple.
+
+We solve this problem more simply by disallowing WARM updates until the
+index is ready for insertion. We don't need to disallow WARM wholesale;
+only those updates that change the columns of the new index are
+prevented from being WARM updates.
diff --git b/src/backend/access/heap/heapam.c a/src/backend/access/heap/heapam.c
index 064909a..9c4522a 100644
--- b/src/backend/access/heap/heapam.c
+++ a/src/backend/access/heap/heapam.c
@@ -1958,6 +1958,78 @@ heap_fetch(Relation relation,
 }
 
 /*
+ * Check if the HOT chain containing this tid is actually a WARM chain.
+ * Note that even if the WARM update ultimately aborted, we must still do a
+ * recheck because the failed UPDATE may have inserted index entries
+ * which are now stale, but still reference this chain.
+ */
+static bool
+hot_check_warm_chain(Page dp, ItemPointer tid)
+{
+	TransactionId prev_xmax = InvalidTransactionId;
+	OffsetNumber offnum;
+	HeapTupleData heapTuple;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+			break;
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		/*
+		 * The presence of either a WARM or a WARM-updated tuple signals
+		 * possible breakage, and the caller must recheck any tuple returned
+		 * from this chain for index satisfaction.
+		 */
+		if (HeapTupleHeaderIsHeapWarmTuple(heapTuple.t_data))
+			return true;
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
+		/*
+		 * It can't be a HOT chain if the tuple contains root line pointer
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	/* All OK. No need to recheck */
+	return false;
+}
+
+/*
  *	heap_hot_search_buffer	- search HOT chain for tuple satisfying snapshot
  *
  * On entry, *tid is the TID of a tuple (either a simple tuple, or the root
@@ -1977,11 +2049,14 @@ heap_fetch(Relation relation,
  * Unlike heap_fetch, the caller must already have pin and (at least) share
  * lock on the buffer; it is still pinned/locked at exit.  Also unlike
  * heap_fetch, we do not report any pgstats count; caller may do so if wanted.
+ *
+ * recheck should be set false on entry by caller, will be set true on exit
+ * if a WARM tuple is encountered.
  */
 bool
 heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 					   Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call)
+					   bool *all_dead, bool first_call, bool *recheck)
 {
 	Page		dp = (Page) BufferGetPage(buffer);
 	TransactionId prev_xmax = InvalidTransactionId;
@@ -2035,9 +2110,12 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		ItemPointerSetOffsetNumber(&heapTuple->t_self, offnum);
 
 		/*
-		 * Shouldn't see a HEAP_ONLY tuple at chain start.
+		 * Shouldn't see a HEAP_ONLY tuple at chain start, unless we are
+		 * dealing with a WARM-updated tuple, in which case deferred triggers
+		 * may request to fetch a WARM tuple from the middle of a chain.
 		 */
-		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple))
+		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple) &&
+				!HeapTupleIsHeapWarmTuple(heapTuple))
 			break;
 
 		/*
@@ -2050,6 +2128,16 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 			break;
 
 		/*
+		 * Check if there exists a WARM tuple somewhere down the chain and set
+		 * recheck to TRUE.
+		 *
+		 * XXX This is not very efficient right now, and we should look for
+		 * possible improvements here
+		 */
+		if (recheck && *recheck == false)
+			*recheck = hot_check_warm_chain(dp, &heapTuple->t_self);
+
+		/*
 		 * When first_call is true (and thus, skip is initially false) we'll
 		 * return the first tuple we find.  But on later passes, heapTuple
 		 * will initially be pointing to the tuple we returned last time.
@@ -2098,7 +2186,8 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		 * Check to see if HOT chain continues past this tuple; if so fetch
 		 * the next offnum and loop around.
 		 */
-		if (HeapTupleIsHotUpdated(heapTuple))
+		if (HeapTupleIsHotUpdated(heapTuple) &&
+			!HeapTupleHeaderHasRootOffset(heapTuple->t_data))
 		{
 			Assert(ItemPointerGetBlockNumber(&heapTuple->t_data->t_ctid) ==
 				   ItemPointerGetBlockNumber(tid));
@@ -2122,18 +2211,41 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
  */
 bool
 heap_hot_search(ItemPointer tid, Relation relation, Snapshot snapshot,
-				bool *all_dead)
+				bool *all_dead, bool *recheck, Buffer *cbuffer,
+				HeapTuple heapTuple)
 {
 	bool		result;
 	Buffer		buffer;
-	HeapTupleData heapTuple;
+	ItemPointerData ret_tid = *tid;
 
 	buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	LockBuffer(buffer, BUFFER_LOCK_SHARE);
-	result = heap_hot_search_buffer(tid, relation, buffer, snapshot,
-									&heapTuple, all_dead, true);
-	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-	ReleaseBuffer(buffer);
+	result = heap_hot_search_buffer(&ret_tid, relation, buffer, snapshot,
+									heapTuple, all_dead, true, recheck);
+
+	/*
+	 * If we are returning a potential candidate tuple from this chain and
+	 * the caller has requested the "recheck" hint, keep the buffer locked
+	 * and pinned. The caller must release the lock and pin on the buffer in
+	 * all such cases.
+	 */
+	if (!result || !recheck || !(*recheck))
+	{
+		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+		ReleaseBuffer(buffer);
+	}
+
+	/*
+	 * Set the caller-supplied tid to the actual location of the tuple being
+	 * returned.
+	 */
+	if (result)
+	{
+		*tid = ret_tid;
+		if (cbuffer)
+			*cbuffer = buffer;
+	}
+
 	return result;
 }
 
@@ -3492,15 +3604,18 @@ simple_heap_delete(Relation relation, ItemPointer tid)
 HTSU_Result
 heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode)
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update)
 {
 	HTSU_Result result;
 	TransactionId xid = GetCurrentTransactionId();
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *exprindx_attrs;
 	Bitmapset  *interesting_attrs;
 	Bitmapset  *modified_attrs;
+	Bitmapset  *notready_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3521,6 +3636,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		have_tuple_lock = false;
 	bool		iscombo;
 	bool		use_hot_update = false;
+	bool		use_warm_update = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
 	bool		all_visible_cleared_new = false;
@@ -3545,6 +3661,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot update tuples during a parallel operation")));
 
+	/* Assume no-warm update */
+	if (warm_update)
+		*warm_update = false;
+
 	/*
 	 * Fetch the list of attributes to be checked for various operations.
 	 *
@@ -3566,10 +3686,17 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	exprindx_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_EXPR_PREDICATE);
+	notready_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_NOTREADY);
+
+
 	interesting_attrs = bms_add_members(NULL, hot_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
-
+	interesting_attrs = bms_add_members(interesting_attrs, exprindx_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, notready_attrs);
 
 	block = ItemPointerGetBlockNumber(otid);
 	offnum = ItemPointerGetOffsetNumber(otid);
@@ -3621,6 +3748,9 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
 												  &oldtup, newtup);
 
+	if (modified_attrsp)
+		*modified_attrsp = bms_copy(modified_attrs);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3876,6 +4006,7 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(exprindx_attrs);
 		bms_free(modified_attrs);
 		bms_free(interesting_attrs);
 		return result;
@@ -4194,6 +4325,37 @@ l2:
 		 */
 		if (!bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
+		else
+		{
+			/*
+			 * If no WARM updates yet on this chain, let this update be a WARM
+			 * update.
+			 *
+			 * We check for both warm and warm updated tuples since if the
+			 * previous WARM update aborted, we may still have added
+			 * another index entry for this HOT chain. In such situations, we
+			 * must not attempt a WARM update until duplicate (key, CTID) index
+			 * entry issue is sorted out
+			 *
+			 * XXX Later we'll add more checks to ensure WARM chains can
+			 * further be WARM updated. This is probably good enough for a
+			 * first round of tests of the remaining functionality.
+			 *
+			 * XXX Disable WARM updates on system tables. There is nothing in
+			 * principle that stops us from supporting this. But it would
+			 * require an API change to propagate the changed columns back to
+			 * the caller so that CatalogUpdateIndexes() can avoid adding new
+			 * entries to indexes that are not changed by the update. This
+			 * will be fixed once the basic patch is tested. !!FIXME
+			 */
+			if (relation->rd_supportswarm &&
+				!bms_overlap(modified_attrs, exprindx_attrs) &&
+				!bms_is_subset(hot_attrs, modified_attrs) &&
+				!IsSystemRelation(relation) &&
+				!bms_overlap(notready_attrs, modified_attrs) &&
+				!HeapTupleIsHeapWarmTuple(&oldtup))
+				use_warm_update = true;
+		}
 	}
 	else
 	{
@@ -4240,6 +4402,22 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+
+		/*
+		 * Even if we are doing a HOT update, we must carry forward the WARM
+		 * flag because we may have already inserted another index entry
+		 * pointing to our root and a third entry may create duplicates
+		 *
+		 * Note: If we ever have a mechanism to avoid duplicate <key, TID>
+		 * entries in indexes, we could look at relaxing this restriction and
+		 * allow even more WARM updates.
+		 */
+		if (HeapTupleIsHeapWarmTuple(&oldtup))
+		{
+			HeapTupleSetHeapWarmTuple(heaptup);
+			HeapTupleSetHeapWarmTuple(newtup);
+		}
+
 		/*
 		 * For HOT (or WARM) updated tuples, we store the offset of the root
 		 * line pointer of this chain in the ip_posid field of the new tuple.
@@ -4252,12 +4430,35 @@ l2:
 		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
 			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
+	else if (use_warm_update)
+	{
+		/* Mark the old tuple as HOT-updated */
+		HeapTupleSetHotUpdated(&oldtup);
+		HeapTupleSetHeapWarmTuple(&oldtup);
+		/* And mark the new tuple as heap-only */
+		HeapTupleSetHeapOnly(heaptup);
+		HeapTupleSetHeapWarmTuple(heaptup);
+		/* Mark the caller's copy too, in case different from heaptup */
+		HeapTupleSetHeapOnly(newtup);
+		HeapTupleSetHeapWarmTuple(newtup);
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
+		/* Let the caller know we did a WARM update */
+		if (warm_update)
+			*warm_update = true;
+	}
 	else
 	{
 		/* Make sure tuples are correctly marked as not-HOT */
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		HeapTupleClearHeapWarmTuple(heaptup);
+		HeapTupleClearHeapWarmTuple(newtup);
 		root_offnum = InvalidOffsetNumber;
 	}
 
@@ -4367,7 +4568,10 @@ l2:
 	if (have_tuple_lock)
 		UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode);
 
-	pgstat_count_heap_update(relation, use_hot_update);
+	/*
+	 * Count HOT and WARM updates separately
+	 */
+	pgstat_count_heap_update(relation, use_hot_update, use_warm_update);
 
 	/*
 	 * If heaptup is a private copy, release it.  Don't forget to copy t_self
@@ -4507,7 +4711,8 @@ HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
  * via ereport().
  */
 void
-simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
+simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup,
+		Bitmapset **modified_attrs, bool *warm_update)
 {
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
@@ -4516,7 +4721,7 @@ simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
 	result = heap_update(relation, otid, tup,
 						 GetCurrentCommandId(true), InvalidSnapshot,
 						 true /* wait for commit */ ,
-						 &hufd, &lockmode);
+						 &hufd, &lockmode, modified_attrs, warm_update);
 	switch (result)
 	{
 		case HeapTupleSelfUpdated:
@@ -7568,6 +7773,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	bool		need_tuple_data = RelationIsLogicallyLogged(reln);
 	bool		init;
 	int			bufflags;
+	bool		warm_update = false;
 
 	/* Caller should not call me on a non-WAL-logged relation */
 	Assert(RelationNeedsWAL(reln));
@@ -7579,6 +7785,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	else
 		info = XLOG_HEAP_UPDATE;
 
+	if (HeapTupleIsHeapWarmTuple(newtup))
+		warm_update = true;
+
 	/*
 	 * If the old and new tuple are on the same page, we only need to log the
 	 * parts of the new tuple that were changed.  That saves on the amount of
@@ -7652,6 +7861,8 @@ log_heap_update(Relation reln, Buffer oldbuf,
 				xlrec.flags |= XLH_UPDATE_CONTAINS_OLD_KEY;
 		}
 	}
+	if (warm_update)
+		xlrec.flags |= XLH_UPDATE_WARM_UPDATE;
 
 	/* If new tuple is the single and first tuple on page... */
 	if (ItemPointerGetOffsetNumber(&(newtup->t_self)) == FirstOffsetNumber &&
@@ -8629,16 +8840,22 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 	Size		freespace = 0;
 	XLogRedoAction oldaction;
 	XLogRedoAction newaction;
+	bool		warm_update = false;
 
 	/* initialize to keep the compiler quiet */
 	oldtup.t_data = NULL;
 	oldtup.t_len = 0;
 
+	if (xlrec->flags & XLH_UPDATE_WARM_UPDATE)
+		warm_update = true;
+
 	XLogRecGetBlockTag(record, 0, &rnode, NULL, &newblk);
 	if (XLogRecGetBlockTag(record, 1, NULL, NULL, &oldblk))
 	{
 		/* HOT updates are never done across pages */
 		Assert(!hot_update);
+		/* WARM updates are never done across pages */
+		Assert(!warm_update);
 	}
 	else
 		oldblk = newblk;
@@ -8698,6 +8915,11 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 								   &htup->t_infomask2);
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
+
+		/* Mark the old tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetHeapWarmTuple(htup);
+
 		/* Set forward chain link in t_ctid */
 		HeapTupleHeaderSetNextTid(htup, &newtid);
 
@@ -8833,6 +9055,10 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
 
+		/* Mark the new tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetHeapWarmTuple(htup);
+
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
diff --git b/src/backend/access/heap/pruneheap.c a/src/backend/access/heap/pruneheap.c
index f54337c..c2bd7d6 100644
--- b/src/backend/access/heap/pruneheap.c
+++ a/src/backend/access/heap/pruneheap.c
@@ -834,6 +834,13 @@ heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				continue;
 
+			/*
+			 * If the tuple has a root line pointer, it must be the end of
+			 * the chain.
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			/* Set up to scan the HOT-chain */
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
diff --git b/src/backend/access/index/indexam.c a/src/backend/access/index/indexam.c
index 4e7eca7..f56c58f 100644
--- b/src/backend/access/index/indexam.c
+++ a/src/backend/access/index/indexam.c
@@ -75,10 +75,12 @@
 #include "access/xlog.h"
 #include "catalog/catalog.h"
 #include "catalog/index.h"
+#include "executor/executor.h"
 #include "pgstat.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
+#include "utils/datum.h"
 #include "utils/snapmgr.h"
 #include "utils/tqual.h"
 
@@ -234,6 +236,21 @@ index_beginscan(Relation heapRelation,
 	scan->heapRelation = heapRelation;
 	scan->xs_snapshot = snapshot;
 
+	/*
+	 * If the index supports recheck, make sure that index tuple is saved
+	 * during index scans.
+	 *
+	 * XXX Ideally, we should look at all indexes on the table and check if
+	 * WARM is at all supported on the base table. If WARM is not supported
+	 * then we don't need to do any recheck. RelationGetIndexAttrBitmap() does
+	 * do that and sets rd_supportswarm after looking at all indexes. But we
+	 * don't know if the function was called earlier in the session when we're
+	 * here. We can't call it now because there exists a risk of causing
+	 * deadlock.
+	 */
+	if (indexRelation->rd_amroutine->amrecheck)
+		scan->xs_want_itup = true;
+
 	return scan;
 }
 
@@ -535,7 +552,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
 	/*
 	 * The AM's amgettuple proc finds the next index entry matching the scan
 	 * keys, and puts the TID into scan->xs_ctup.t_self.  It should also set
-	 * scan->xs_recheck and possibly scan->xs_itup, though we pay no attention
+	 * scan->xs_tuple_recheck and possibly scan->xs_itup, though we pay no attention
 	 * to those fields here.
 	 */
 	found = scan->indexRelation->rd_amroutine->amgettuple(scan, direction);
@@ -574,7 +591,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
  * dropped in a future index_getnext_tid, index_fetch_heap or index_endscan
  * call).
  *
- * Note: caller must check scan->xs_recheck, and perform rechecking of the
+ * Note: caller must check scan->xs_tuple_recheck, and perform rechecking of the
  * scan keys if required.  We do not do that here because we don't have
  * enough information to do it efficiently in the general case.
  * ----------------
@@ -601,6 +618,12 @@ index_fetch_heap(IndexScanDesc scan)
 		 */
 		if (prev_buf != scan->xs_cbuf)
 			heap_page_prune_opt(scan->heapRelation, scan->xs_cbuf);
+
+		/*
+		 * If we're not always re-checking, reset recheck for this tuple.
+		 * Otherwise we must recheck every tuple.
+		 */
+		scan->xs_tuple_recheck = scan->xs_recheck;
 	}
 
 	/* Obtain share-lock on the buffer so we can examine visibility */
@@ -610,32 +633,64 @@ index_fetch_heap(IndexScanDesc scan)
 											scan->xs_snapshot,
 											&scan->xs_ctup,
 											&all_dead,
-											!scan->xs_continue_hot);
+											!scan->xs_continue_hot,
+											&scan->xs_tuple_recheck);
 	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
 
 	if (got_heap_tuple)
 	{
+		bool res = true;
+
+		/*
+		 * OK, we got a tuple which satisfies the snapshot, but if it's part
+		 * of a WARM chain, we must do additional checks to ensure that we
+		 * are indeed returning a correct tuple. Note that if the index AM
+		 * does not implement the amrecheck method, then we don't do any
+		 * additional checks, since WARM must have been disabled on such
+		 * tables.
+		 *
+		 * XXX What happens when a new index which does not support amrecheck
+		 * is added to the table? Do we need to handle this case or is CREATE
+		 * INDEX and CREATE INDEX CONCURRENTLY smart enough to handle this
+		 * issue?
+		 */
+		if (scan->xs_tuple_recheck &&
+				scan->xs_itup &&
+				scan->indexRelation->rd_amroutine->amrecheck)
+		{
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_SHARE);
+			res = scan->indexRelation->rd_amroutine->amrecheck(
+						scan->indexRelation,
+						scan->xs_itup,
+						scan->heapRelation,
+						&scan->xs_ctup);
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+		}
+
 		/*
 		 * Only in a non-MVCC snapshot can more than one member of the HOT
 		 * chain be visible.
 		 */
 		scan->xs_continue_hot = !IsMVCCSnapshot(scan->xs_snapshot);
 		pgstat_count_heap_fetch(scan->indexRelation);
-		return &scan->xs_ctup;
-	}
 
-	/* We've reached the end of the HOT chain. */
-	scan->xs_continue_hot = false;
+		if (res)
+			return &scan->xs_ctup;
+	}
+	else
+	{
+		/* We've reached the end of the HOT chain. */
+		scan->xs_continue_hot = false;
 
-	/*
-	 * If we scanned a whole HOT chain and found only dead tuples, tell index
-	 * AM to kill its entry for that TID (this will take effect in the next
-	 * amgettuple call, in index_getnext_tid).  We do not do this when in
-	 * recovery because it may violate MVCC to do so.  See comments in
-	 * RelationGetIndexScan().
-	 */
-	if (!scan->xactStartedInRecovery)
-		scan->kill_prior_tuple = all_dead;
+		/*
+		 * If we scanned a whole HOT chain and found only dead tuples, tell index
+		 * AM to kill its entry for that TID (this will take effect in the next
+		 * amgettuple call, in index_getnext_tid).  We do not do this when in
+		 * recovery because it may violate MVCC to do so.  See comments in
+		 * RelationGetIndexScan().
+		 */
+		if (!scan->xactStartedInRecovery)
+			scan->kill_prior_tuple = all_dead;
+	}
 
 	return NULL;
 }
diff --git b/src/backend/access/nbtree/nbtinsert.c a/src/backend/access/nbtree/nbtinsert.c
index 6dca810..b5cb619 100644
--- b/src/backend/access/nbtree/nbtinsert.c
+++ a/src/backend/access/nbtree/nbtinsert.c
@@ -20,11 +20,14 @@
 #include "access/nbtxlog.h"
 #include "access/transam.h"
 #include "access/xloginsert.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
 #include "miscadmin.h"
+#include "nodes/execnodes.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
 #include "utils/tqual.h"
-
+#include "utils/datum.h"
 
 typedef struct
 {
@@ -250,6 +253,9 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 	BTPageOpaque opaque;
 	Buffer		nbuf = InvalidBuffer;
 	bool		found = false;
+	Buffer		buffer;
+	HeapTupleData	heapTuple;
+	bool		recheck = false;
 
 	/* Assume unique until we find a duplicate */
 	*is_unique = true;
@@ -309,6 +315,8 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				curitup = (IndexTuple) PageGetItem(page, curitemid);
 				htid = curitup->t_tid;
 
+				recheck = false;
+
 				/*
 				 * If we are doing a recheck, we expect to find the tuple we
 				 * are rechecking.  It's not a duplicate, but we have to keep
@@ -326,112 +334,153 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				 * have just a single index entry for the entire chain.
 				 */
 				else if (heap_hot_search(&htid, heapRel, &SnapshotDirty,
-										 &all_dead))
+							&all_dead, &recheck, &buffer,
+							&heapTuple))
 				{
 					TransactionId xwait;
+					bool result = true;
 
 					/*
-					 * It is a duplicate. If we are only doing a partial
-					 * check, then don't bother checking if the tuple is being
-					 * updated in another transaction. Just return the fact
-					 * that it is a potential conflict and leave the full
-					 * check till later.
+					 * If the tuple was WARM updated, we may again see our own
+					 * tuple. Since WARM updates don't create new index
+					 * entries, our own tuple is only reachable via the old
+					 * index pointer.
 					 */
-					if (checkUnique == UNIQUE_CHECK_PARTIAL)
+					if (checkUnique == UNIQUE_CHECK_EXISTING &&
+							ItemPointerCompare(&htid, &itup->t_tid) == 0)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						*is_unique = false;
-						return InvalidTransactionId;
+						found = true;
+						result = false;
+						if (recheck)
+							UnlockReleaseBuffer(buffer);
 					}
-
-					/*
-					 * If this tuple is being updated by other transaction
-					 * then we have to wait for its commit/abort.
-					 */
-					xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
-						SnapshotDirty.xmin : SnapshotDirty.xmax;
-
-					if (TransactionIdIsValid(xwait))
+					else if (recheck)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						/* Tell _bt_doinsert to wait... */
-						*speculativeToken = SnapshotDirty.speculativeToken;
-						return xwait;
+						result = btrecheck(rel, curitup, heapRel, &heapTuple);
+						UnlockReleaseBuffer(buffer);
 					}
 
-					/*
-					 * Otherwise we have a definite conflict.  But before
-					 * complaining, look to see if the tuple we want to insert
-					 * is itself now committed dead --- if so, don't complain.
-					 * This is a waste of time in normal scenarios but we must
-					 * do it to support CREATE INDEX CONCURRENTLY.
-					 *
-					 * We must follow HOT-chains here because during
-					 * concurrent index build, we insert the root TID though
-					 * the actual tuple may be somewhere in the HOT-chain.
-					 * While following the chain we might not stop at the
-					 * exact tuple which triggered the insert, but that's OK
-					 * because if we find a live tuple anywhere in this chain,
-					 * we have a unique key conflict.  The other live tuple is
-					 * not part of this chain because it had a different index
-					 * entry.
-					 */
-					htid = itup->t_tid;
-					if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL))
-					{
-						/* Normal case --- it's still live */
-					}
-					else
+					if (result)
 					{
 						/*
-						 * It's been deleted, so no error, and no need to
-						 * continue searching
+						 * It is a duplicate. If we are only doing a partial
+						 * check, then don't bother checking if the tuple is being
+						 * updated in another transaction. Just return the fact
+						 * that it is a potential conflict and leave the full
+						 * check till later.
 						 */
-						break;
-					}
+						if (checkUnique == UNIQUE_CHECK_PARTIAL)
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							*is_unique = false;
+							return InvalidTransactionId;
+						}
 
-					/*
-					 * Check for a conflict-in as we would if we were going to
-					 * write to this page.  We aren't actually going to write,
-					 * but we want a chance to report SSI conflicts that would
-					 * otherwise be masked by this unique constraint
-					 * violation.
-					 */
-					CheckForSerializableConflictIn(rel, NULL, buf);
+						/*
+						 * If this tuple is being updated by other transaction
+						 * then we have to wait for its commit/abort.
+						 */
+						xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
+							SnapshotDirty.xmin : SnapshotDirty.xmax;
+
+						if (TransactionIdIsValid(xwait))
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							/* Tell _bt_doinsert to wait... */
+							*speculativeToken = SnapshotDirty.speculativeToken;
+							return xwait;
+						}
 
-					/*
-					 * This is a definite conflict.  Break the tuple down into
-					 * datums and report the error.  But first, make sure we
-					 * release the buffer locks we're holding ---
-					 * BuildIndexValueDescription could make catalog accesses,
-					 * which in the worst case might touch this same index and
-					 * cause deadlocks.
-					 */
-					if (nbuf != InvalidBuffer)
-						_bt_relbuf(rel, nbuf);
-					_bt_relbuf(rel, buf);
+						/*
+						 * Otherwise we have a definite conflict.  But before
+						 * complaining, look to see if the tuple we want to insert
+						 * is itself now committed dead --- if so, don't complain.
+						 * This is a waste of time in normal scenarios but we must
+						 * do it to support CREATE INDEX CONCURRENTLY.
+						 *
+						 * We must follow HOT-chains here because during
+						 * concurrent index build, we insert the root TID though
+						 * the actual tuple may be somewhere in the HOT-chain.
+						 * While following the chain we might not stop at the
+						 * exact tuple which triggered the insert, but that's OK
+						 * because if we find a live tuple anywhere in this chain,
+						 * we have a unique key conflict.  The other live tuple is
+						 * not part of this chain because it had a different index
+						 * entry.
+						 */
+						recheck = false;
+						ItemPointerCopy(&itup->t_tid, &htid);
+						if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL,
+									&recheck, &buffer, &heapTuple))
+						{
+							bool result = true;
+							if (recheck)
+							{
+								/*
+								 * Recheck whether the tuple actually satisfies
+								 * the index key. Otherwise, we might be
+								 * following a wrong index pointer and must not
+								 * return this tuple.
+								 */
+								result = btrecheck(rel, itup, heapRel, &heapTuple);
+								UnlockReleaseBuffer(buffer);
+							}
+							if (!result)
+								break;
+							/* Normal case --- it's still live */
+						}
+						else
+						{
+							/*
+							 * It's been deleted, so no error, and no need to
+							 * continue searching
+							 */
+							break;
+						}
 
-					{
-						Datum		values[INDEX_MAX_KEYS];
-						bool		isnull[INDEX_MAX_KEYS];
-						char	   *key_desc;
-
-						index_deform_tuple(itup, RelationGetDescr(rel),
-										   values, isnull);
-
-						key_desc = BuildIndexValueDescription(rel, values,
-															  isnull);
-
-						ereport(ERROR,
-								(errcode(ERRCODE_UNIQUE_VIOLATION),
-								 errmsg("duplicate key value violates unique constraint \"%s\"",
-										RelationGetRelationName(rel)),
-							   key_desc ? errdetail("Key %s already exists.",
-													key_desc) : 0,
-								 errtableconstraint(heapRel,
-											 RelationGetRelationName(rel))));
+						/*
+						 * Check for a conflict-in as we would if we were going to
+						 * write to this page.  We aren't actually going to write,
+						 * but we want a chance to report SSI conflicts that would
+						 * otherwise be masked by this unique constraint
+						 * violation.
+						 */
+						CheckForSerializableConflictIn(rel, NULL, buf);
+
+						/*
+						 * This is a definite conflict.  Break the tuple down into
+						 * datums and report the error.  But first, make sure we
+						 * release the buffer locks we're holding ---
+						 * BuildIndexValueDescription could make catalog accesses,
+						 * which in the worst case might touch this same index and
+						 * cause deadlocks.
+						 */
+						if (nbuf != InvalidBuffer)
+							_bt_relbuf(rel, nbuf);
+						_bt_relbuf(rel, buf);
+
+						{
+							Datum		values[INDEX_MAX_KEYS];
+							bool		isnull[INDEX_MAX_KEYS];
+							char	   *key_desc;
+
+							index_deform_tuple(itup, RelationGetDescr(rel),
+									values, isnull);
+
+							key_desc = BuildIndexValueDescription(rel, values,
+									isnull);
+
+							ereport(ERROR,
+									(errcode(ERRCODE_UNIQUE_VIOLATION),
+									 errmsg("duplicate key value violates unique constraint \"%s\"",
+										 RelationGetRelationName(rel)),
+									 key_desc ? errdetail("Key %s already exists.",
+										 key_desc) : 0,
+									 errtableconstraint(heapRel,
+										 RelationGetRelationName(rel))));
+						}
 					}
 				}
 				else if (all_dead)
diff --git b/src/backend/access/nbtree/nbtree.c a/src/backend/access/nbtree/nbtree.c
index 775f2ff..952ed8f 100644
--- b/src/backend/access/nbtree/nbtree.c
+++ a/src/backend/access/nbtree/nbtree.c
@@ -23,6 +23,7 @@
 #include "access/xlog.h"
 #include "catalog/index.h"
 #include "commands/vacuum.h"
+#include "executor/nodeIndexscan.h"
 #include "pgstat.h"
 #include "storage/condition_variable.h"
 #include "storage/indexfsm.h"
@@ -163,6 +164,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = btestimateparallelscan;
 	amroutine->aminitparallelscan = btinitparallelscan;
 	amroutine->amparallelrescan = btparallelrescan;
+	amroutine->amrecheck = btrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -344,8 +346,9 @@ btgettuple(IndexScanDesc scan, ScanDirection dir)
 	BTScanOpaque so = (BTScanOpaque) scan->opaque;
 	bool		res;
 
-	/* btree indexes are never lossy */
+	/* btree indexes are never lossy, except for WARM tuples */
 	scan->xs_recheck = false;
+	scan->xs_tuple_recheck = false;
 
 	/*
 	 * If we have any array keys, initialize them during first call for a
diff --git b/src/backend/access/nbtree/nbtutils.c a/src/backend/access/nbtree/nbtutils.c
index 5b259a3..c376c1b 100644
--- b/src/backend/access/nbtree/nbtutils.c
+++ a/src/backend/access/nbtree/nbtutils.c
@@ -20,11 +20,15 @@
 #include "access/nbtree.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
 #include "miscadmin.h"
+#include "nodes/execnodes.h"
 #include "utils/array.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 
 typedef struct BTSortArrayContext
@@ -2069,3 +2073,103 @@ btproperty(Oid index_oid, int attno,
 			return false;		/* punt to generic code */
 	}
 }
+
+/*
+ * Check if the index tuple's key matches the one computed from the given heap
+ * tuple's attributes.
+ */
+bool
+btrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	IndexInfo  *indexInfo;
+	EState	   *estate;
+	ExprContext *econtext;
+	TupleTableSlot *slot;
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	/* Get IndexInfo for this index */
+	indexInfo = BuildIndexInfo(indexRel);
+
+	/*
+	 * The heap tuple must be put into a slot for FormIndexDatum.
+	 */
+	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
+
+	ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
+
+	/*
+	 * Typically the index won't have expressions, but if it does we need an
+	 * EState to evaluate them.  We need it for exclusion constraints too,
+	 * even if they are just on simple columns.
+	 */
+	if (indexInfo->ii_Expressions != NIL ||
+			indexInfo->ii_ExclusionOps != NULL)
+	{
+		estate = CreateExecutorState();
+		econtext = GetPerTupleExprContext(estate);
+		econtext->ecxt_scantuple = slot;
+	}
+	else
+		estate = NULL;
+
+	/*
+	 * Form the index values and isnull flags for the index entry that we need
+	 * to check.
+	 *
+	 * Note: if the index uses functions that are not as immutable as they are
+	 * supposed to be, this could produce an index tuple different from the
+	 * original.  The index AM can catch such errors by verifying that it
+	 * finds a matching index entry with the tuple's TID.  For exclusion
+	 * constraints we check this in check_exclusion_constraint().
+	 */
+	FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL, then they are equal
+		 */
+		if (isnull[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If just one is NULL, then they are not equal
+		 */
+		if (isnull[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now just do a raw memory comparison. If the index tuple was formed
+		 * using this heap tuple, the computed index values must match.
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	if (estate != NULL)
+		FreeExecutorState(estate);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	return equal;
+}
diff --git b/src/backend/access/spgist/spgutils.c a/src/backend/access/spgist/spgutils.c
index e57ac49..59ef7f3 100644
--- b/src/backend/access/spgist/spgutils.c
+++ a/src/backend/access/spgist/spgutils.c
@@ -72,6 +72,7 @@ spghandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git b/src/backend/catalog/index.c a/src/backend/catalog/index.c
index f8d9214..bba52ec 100644
--- b/src/backend/catalog/index.c
+++ a/src/backend/catalog/index.c
@@ -54,6 +54,7 @@
 #include "nodes/makefuncs.h"
 #include "nodes/nodeFuncs.h"
 #include "optimizer/clauses.h"
+#include "optimizer/var.h"
 #include "parser/parser.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
@@ -1691,6 +1692,20 @@ BuildIndexInfo(Relation index)
 	ii->ii_AmCache = NULL;
 	ii->ii_Context = CurrentMemoryContext;
 
+	/* build a bitmap of all table attributes referred by this index */
+	for (i = 0; i < ii->ii_NumIndexAttrs; i++)
+	{
+		AttrNumber attr = ii->ii_KeyAttrNumbers[i];
+		ii->ii_indxattrs = bms_add_member(ii->ii_indxattrs, attr -
+				FirstLowInvalidHeapAttributeNumber);
+	}
+
+	/* Collect all attributes used in expressions, too */
+	pull_varattnos((Node *) ii->ii_Expressions, 1, &ii->ii_indxattrs);
+
+	/* Collect all attributes in the index predicate, too */
+	pull_varattnos((Node *) ii->ii_Predicate, 1, &ii->ii_indxattrs);
+
 	return ii;
 }
 
diff --git b/src/backend/catalog/indexing.c a/src/backend/catalog/indexing.c
index abc344a..e5355a8 100644
--- b/src/backend/catalog/indexing.c
+++ a/src/backend/catalog/indexing.c
@@ -66,10 +66,15 @@ CatalogCloseIndexes(CatalogIndexState indstate)
  *
  * This should be called for each inserted or updated catalog tuple.
  *
+ * If the tuple was WARM updated, modified_attrs contains the set of columns
+ * changed by the update. We must not insert new index entries for
+ * indexes which do not refer to any of the modified columns.
+ *
  * This is effectively a cut-down version of ExecInsertIndexTuples.
  */
 static void
-CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
+CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple,
+		Bitmapset *modified_attrs, bool warm_update)
 {
 	int			i;
 	int			numIndexes;
@@ -79,12 +84,28 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 	IndexInfo **indexInfoArray;
 	Datum		values[INDEX_MAX_KEYS];
 	bool		isnull[INDEX_MAX_KEYS];
+	ItemPointerData root_tid;
 
-	/* HOT update does not require index inserts */
-	if (HeapTupleIsHeapOnly(heapTuple))
+	/*
+	 * A HOT update does not require index inserts, but a WARM update may
+	 * still need them for some indexes.
+	 */
+	if (HeapTupleIsHeapOnly(heapTuple) && !warm_update)
 		return;
 
 	/*
+	 * If we've done a WARM update, then we must index the TID of the root line
+	 * pointer and not the actual TID of the new tuple.
+	 */
+	if (warm_update)
+		ItemPointerSet(&root_tid,
+				ItemPointerGetBlockNumber(&(heapTuple->t_self)),
+				HeapTupleHeaderGetRootOffset(heapTuple->t_data));
+	else
+		ItemPointerCopy(&heapTuple->t_self, &root_tid);
+
+	/*
 	 * Get information from the state structure.  Fall out if nothing to do.
 	 */
 	numIndexes = indstate->ri_NumIndices;
@@ -112,6 +133,17 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 			continue;
 
 		/*
+		 * If we've done a WARM update, then we must not insert a new index
+		 * tuple if none of the index keys have changed. This is not just an
+		 * optimization, but a requirement for WARM to work correctly.
+		 */
+		if (warm_update)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
+		/*
 		 * Expressional and partial indexes on system catalogs are not
 		 * supported, nor exclusion constraints, nor deferred uniqueness
 		 */
@@ -136,7 +168,7 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 		index_insert(relationDescs[i],	/* index relation */
 					 values,	/* array of index Datums */
 					 isnull,	/* is-null flags */
-					 &(heapTuple->t_self),		/* tid of heap tuple */
+					 &root_tid,
 					 heapRelation,
 					 relationDescs[i]->rd_index->indisunique ?
 					 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
@@ -168,7 +200,7 @@ CatalogTupleInsert(Relation heapRel, HeapTuple tup)
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 	CatalogCloseIndexes(indstate);
 
 	return oid;
@@ -190,7 +222,7 @@ CatalogTupleInsertWithInfo(Relation heapRel, HeapTuple tup,
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 
 	return oid;
 }
@@ -210,12 +242,14 @@ void
 CatalogTupleUpdate(Relation heapRel, ItemPointer otid, HeapTuple tup)
 {
 	CatalogIndexState indstate;
+	bool	warm_update;
+	Bitmapset	*modified_attrs;
 
 	indstate = CatalogOpenIndexes(heapRel);
 
-	simple_heap_update(heapRel, otid, tup);
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, modified_attrs, warm_update);
 	CatalogCloseIndexes(indstate);
 }
 
@@ -231,9 +265,12 @@ void
 CatalogTupleUpdateWithInfo(Relation heapRel, ItemPointer otid, HeapTuple tup,
 						   CatalogIndexState indstate)
 {
-	simple_heap_update(heapRel, otid, tup);
+	Bitmapset  *modified_attrs;
+	bool		warm_update;
+
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, modified_attrs, warm_update);
 }
 
 /*
diff --git b/src/backend/catalog/system_views.sql a/src/backend/catalog/system_views.sql
index 38be9cf..7fb1295 100644
--- b/src/backend/catalog/system_views.sql
+++ a/src/backend/catalog/system_views.sql
@@ -498,6 +498,7 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_tuples_deleted(C.oid) AS n_tup_del,
             pg_stat_get_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_tuples_warm_updated(C.oid) AS n_tup_warm_upd,
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
@@ -528,7 +529,8 @@ CREATE VIEW pg_stat_xact_all_tables AS
             pg_stat_get_xact_tuples_inserted(C.oid) AS n_tup_ins,
             pg_stat_get_xact_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_xact_tuples_deleted(C.oid) AS n_tup_del,
-            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd
+            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_xact_tuples_warm_updated(C.oid) AS n_tup_warm_upd
     FROM pg_class C LEFT JOIN
          pg_index I ON C.oid = I.indrelid
          LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
diff --git b/src/backend/commands/constraint.c a/src/backend/commands/constraint.c
index e2544e5..d9c0fe7 100644
--- b/src/backend/commands/constraint.c
+++ a/src/backend/commands/constraint.c
@@ -40,6 +40,7 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	TriggerData *trigdata = castNode(TriggerData, fcinfo->context);
 	const char *funcname = "unique_key_recheck";
 	HeapTuple	new_row;
+	HeapTupleData heapTuple;
 	ItemPointerData tmptid;
 	Relation	indexRel;
 	IndexInfo  *indexInfo;
@@ -102,7 +103,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	 * removed.
 	 */
 	tmptid = new_row->t_self;
-	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL))
+	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL,
+				NULL, NULL, &heapTuple))
 	{
 		/*
 		 * All rows in the HOT chain are dead, so skip the check.
diff --git b/src/backend/commands/copy.c a/src/backend/commands/copy.c
index 949844d..38702e5 100644
--- b/src/backend/commands/copy.c
+++ a/src/backend/commands/copy.c
@@ -2680,6 +2680,8 @@ CopyFrom(CopyState cstate)
 					if (resultRelInfo->ri_NumIndices > 0)
 						recheckIndexes = ExecInsertIndexTuples(slot,
 															&(tuple->t_self),
+															&(tuple->t_self),
+															NULL,
 															   estate,
 															   false,
 															   NULL,
@@ -2834,6 +2836,7 @@ CopyFromInsertBatch(CopyState cstate, EState *estate, CommandId mycid,
 			ExecStoreTuple(bufferedTuples[i], myslot, InvalidBuffer, false);
 			recheckIndexes =
 				ExecInsertIndexTuples(myslot, &(bufferedTuples[i]->t_self),
+									  &(bufferedTuples[i]->t_self), NULL,
 									  estate, false, NULL, NIL);
 			ExecARInsertTriggers(estate, resultRelInfo,
 								 bufferedTuples[i],
diff --git b/src/backend/commands/indexcmds.c a/src/backend/commands/indexcmds.c
index 72bb06c..d8f033d 100644
--- b/src/backend/commands/indexcmds.c
+++ a/src/backend/commands/indexcmds.c
@@ -699,7 +699,14 @@ DefineIndex(Oid relationId,
 	 * visible to other transactions before we start to build the index. That
 	 * will prevent them from making incompatible HOT updates.  The new index
 	 * will be marked not indisready and not indisvalid, so that no one else
-	 * tries to either insert into it or use it for queries.
+	 * tries to either insert into it or use it for queries. In addition,
+	 * WARM updates will be disallowed if an update modifies one of the
+	 * columns used by this new index. This is necessary to ensure that we
+	 * don't create WARM tuples which do not have a corresponding entry in
+	 * this index. Note that during the second phase, we will index only
+	 * those heap tuples whose root line pointer is not already in the index,
+	 * hence it's important that all tuples in a given chain have the same
+	 * value for any indexed column (including this new index).
 	 *
 	 * We must commit our current transaction so that the index becomes
 	 * visible; then start another.  Note that all the data structures we just
@@ -747,7 +754,10 @@ DefineIndex(Oid relationId,
 	 * marked as "not-ready-for-inserts".  The index is consulted while
 	 * deciding HOT-safety though.  This arrangement ensures that no new HOT
 	 * chains can be created where the new tuple and the old tuple in the
-	 * chain have different index keys.
+	 * chain have different index keys. Also, the new index is consulted for
+	 * deciding whether a WARM update is possible; a WARM update is not done
+	 * if a column used by this index is being updated. This ensures that we
+	 * don't create WARM tuples which are not indexed by this index.
 	 *
 	 * We now take a new snapshot, and build the index using all tuples that
 	 * are visible in this snapshot.  We can be sure that any HOT updates to
@@ -782,7 +792,8 @@ DefineIndex(Oid relationId,
 	/*
 	 * Update the pg_index row to mark the index as ready for inserts. Once we
 	 * commit this transaction, any new transactions that open the table must
-	 * insert new entries into the index for insertions and non-HOT updates.
+	 * insert new entries into the index for insertions and non-HOT updates or
+	 * WARM updates where this index needs a new entry.
 	 */
 	index_set_state_flags(indexRelationId, INDEX_CREATE_SET_READY);
 
diff --git b/src/backend/commands/vacuumlazy.c a/src/backend/commands/vacuumlazy.c
index 005440e..1388be1 100644
--- b/src/backend/commands/vacuumlazy.c
+++ a/src/backend/commands/vacuumlazy.c
@@ -1032,6 +1032,19 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 							break;
 						}
 
+						/*
+						 * If this tuple was ever WARM updated or is a WARM
+						 * tuple, there could be multiple index entries
+						 * pointing to the root of this chain. We can't do
+						 * index-only scans for such tuples without rechecking
+						 * the index keys, so mark the page as !all_visible.
+						 */
+						if (HeapTupleHeaderIsHeapWarmTuple(tuple.t_data))
+						{
+							all_visible = false;
+							break;
+						}
+
 						/* Track newest xmin on page. */
 						if (TransactionIdFollows(xmin, visibility_cutoff_xid))
 							visibility_cutoff_xid = xmin;
@@ -2158,6 +2171,18 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
 						break;
 					}
 
+					/*
+					 * If this tuple was ever WARM updated or is a WARM tuple,
+					 * there could be multiple index entries pointing to the
+					 * root of this chain. We can't do index-only scans for
+					 * such tuples without rechecking the index keys, so mark
+					 * the page as !all_visible.
+					 */
+					if (HeapTupleHeaderIsHeapWarmTuple(tuple.t_data))
+					{
+						all_visible = false;
+					}
+
 					/* Track newest xmin on page. */
 					if (TransactionIdFollows(xmin, *visibility_cutoff_xid))
 						*visibility_cutoff_xid = xmin;
diff --git b/src/backend/executor/execIndexing.c a/src/backend/executor/execIndexing.c
index 2142273..d62d2de 100644
--- b/src/backend/executor/execIndexing.c
+++ a/src/backend/executor/execIndexing.c
@@ -270,6 +270,8 @@ ExecCloseIndices(ResultRelInfo *resultRelInfo)
 List *
 ExecInsertIndexTuples(TupleTableSlot *slot,
 					  ItemPointer tupleid,
+					  ItemPointer root_tid,
+					  Bitmapset *modified_attrs,
 					  EState *estate,
 					  bool noDupErr,
 					  bool *specConflict,
@@ -324,6 +326,17 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 		if (!indexInfo->ii_ReadyForInserts)
 			continue;
 
+		/*
+		 * If modified_attrs is set, we only insert index entries for those
+		 * indexes whose columns have changed. All other indexes can use their
+		 * existing index pointers to look up the new tuple.
+		 */
+		if (modified_attrs)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
 		/* Check for partial index */
 		if (indexInfo->ii_Predicate != NIL)
 		{
@@ -389,7 +402,7 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 			index_insert(indexRelation, /* index relation */
 						 values,	/* array of index Datums */
 						 isnull,	/* null flags */
-						 tupleid,		/* tid of heap tuple */
+						 root_tid,		/* tid of heap or root tuple */
 						 heapRelation,	/* heap relation */
 						 checkUnique,	/* type of uniqueness check to do */
 						 indexInfo);	/* index AM may need this */
@@ -791,6 +804,9 @@ retry:
 		{
 			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
 				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
+			else
+				ItemPointerCopy(&tup->t_self, &ctid_wait);
+
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git b/src/backend/executor/execReplication.c a/src/backend/executor/execReplication.c
index ebf3f6b..1fa13a5 100644
--- b/src/backend/executor/execReplication.c
+++ a/src/backend/executor/execReplication.c
@@ -399,6 +399,8 @@ ExecSimpleRelationInsert(EState *estate, TupleTableSlot *slot)
 
 		if (resultRelInfo->ri_NumIndices > 0)
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &(tuple->t_self),
+												   NULL,
 												   estate, false, NULL,
 												   NIL);
 
@@ -445,6 +447,8 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 	if (!skip_tuple)
 	{
 		List	   *recheckIndexes = NIL;
+		bool		warm_update;
+		Bitmapset  *modified_attrs;
 
 		/* Check the constraints of the tuple */
 		if (rel->rd_att->constr)
@@ -455,13 +459,30 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 
 		/* OK, update the tuple and index entries for it */
 		simple_heap_update(rel, &searchslot->tts_tuple->t_self,
-						   slot->tts_tuple);
+						   slot->tts_tuple, &modified_attrs, &warm_update);
 
 		if (resultRelInfo->ri_NumIndices > 0 &&
-			!HeapTupleIsHeapOnly(slot->tts_tuple))
+			(!HeapTupleIsHeapOnly(slot->tts_tuple) || warm_update))
+		{
+			ItemPointerData root_tid;
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self,
+						&root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
+
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL,
 												   NIL);
+		}
 
 		/* AFTER ROW UPDATE Triggers */
 		ExecARUpdateTriggers(estate, resultRelInfo,
diff --git b/src/backend/executor/nodeBitmapHeapscan.c a/src/backend/executor/nodeBitmapHeapscan.c
index f18827d..f81d290 100644
--- b/src/backend/executor/nodeBitmapHeapscan.c
+++ a/src/backend/executor/nodeBitmapHeapscan.c
@@ -37,6 +37,7 @@
 
 #include "access/relscan.h"
 #include "access/transam.h"
+#include "access/valid.h"
 #include "executor/execdebug.h"
 #include "executor/nodeBitmapHeapscan.h"
 #include "pgstat.h"
@@ -362,11 +363,27 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 			OffsetNumber offnum = tbmres->offsets[curslot];
 			ItemPointerData tid;
 			HeapTupleData heapTuple;
+			bool recheck = false;
 
 			ItemPointerSet(&tid, page, offnum);
 			if (heap_hot_search_buffer(&tid, scan->rs_rd, buffer, snapshot,
-									   &heapTuple, NULL, true))
-				scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+									   &heapTuple, NULL, true, &recheck))
+			{
+				bool valid = true;
+
+				if (scan->rs_key)
+					HeapKeyTest(&heapTuple, RelationGetDescr(scan->rs_rd),
+							scan->rs_nkeys, scan->rs_key, valid);
+				if (valid)
+					scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+
+				/*
+				 * If the heap tuple needs a recheck because of a WARM update,
+				 * treat the page as lossy so the quals are rechecked
+				 */
+				if (recheck)
+					tbmres->recheck = true;
+			}
 		}
 	}
 	else
diff --git b/src/backend/executor/nodeIndexscan.c a/src/backend/executor/nodeIndexscan.c
index 0a9dfdb..38c7827 100644
--- b/src/backend/executor/nodeIndexscan.c
+++ a/src/backend/executor/nodeIndexscan.c
@@ -118,10 +118,10 @@ IndexNext(IndexScanState *node)
 					   false);	/* don't pfree */
 
 		/*
-		 * If the index was lossy, we have to recheck the index quals using
-		 * the fetched tuple.
+		 * If the index was lossy or the tuple was WARM, we have to recheck
+		 * the index quals using the fetched tuple.
 		 */
-		if (scandesc->xs_recheck)
+		if (scandesc->xs_recheck || scandesc->xs_tuple_recheck)
 		{
 			econtext->ecxt_scantuple = slot;
 			ResetExprContext(econtext);
diff --git b/src/backend/executor/nodeModifyTable.c a/src/backend/executor/nodeModifyTable.c
index 95e1589..a1f3440 100644
--- b/src/backend/executor/nodeModifyTable.c
+++ a/src/backend/executor/nodeModifyTable.c
@@ -512,6 +512,7 @@ ExecInsert(ModifyTableState *mtstate,
 
 			/* insert index entries for tuple */
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												 &(tuple->t_self), NULL,
 												 estate, true, &specConflict,
 												   arbiterIndexes);
 
@@ -558,6 +559,7 @@ ExecInsert(ModifyTableState *mtstate,
 			/* insert index entries for tuple */
 			if (resultRelInfo->ri_NumIndices > 0)
 				recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+													   &(tuple->t_self), NULL,
 													   estate, false, NULL,
 													   arbiterIndexes);
 		}
@@ -891,6 +893,9 @@ ExecUpdate(ItemPointer tupleid,
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
 	List	   *recheckIndexes = NIL;
+	Bitmapset  *modified_attrs = NULL;
+	ItemPointerData	root_tid;
+	bool		warm_update;
 
 	/*
 	 * abort the operation if not running transactions
@@ -1007,7 +1012,7 @@ lreplace:;
 							 estate->es_output_cid,
 							 estate->es_crosscheck_snapshot,
 							 true /* wait for commit */ ,
-							 &hufd, &lockmode);
+							 &hufd, &lockmode, &modified_attrs, &warm_update);
 		switch (result)
 		{
 			case HeapTupleSelfUpdated:
@@ -1094,10 +1099,28 @@ lreplace:;
 		 * the t_self field.
 		 *
 		 * If it's a HOT update, we mustn't insert new index entries.
+		 *
+		 * If it's a WARM update, then we must insert new entries with TID
+		 * pointing to the root of the WARM chain.
 		 */
-		if (resultRelInfo->ri_NumIndices > 0 && !HeapTupleIsHeapOnly(tuple))
+		if (resultRelInfo->ri_NumIndices > 0 &&
+			(!HeapTupleIsHeapOnly(tuple) || warm_update))
+		{
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self, &root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL, NIL);
+		}
 	}
 
 	if (canSetTag)
diff --git b/src/backend/postmaster/pgstat.c a/src/backend/postmaster/pgstat.c
index ada374c..308ae8c 100644
--- b/src/backend/postmaster/pgstat.c
+++ a/src/backend/postmaster/pgstat.c
@@ -1823,7 +1823,7 @@ pgstat_count_heap_insert(Relation rel, int n)
  * pgstat_count_heap_update - count a tuple update
  */
 void
-pgstat_count_heap_update(Relation rel, bool hot)
+pgstat_count_heap_update(Relation rel, bool hot, bool warm)
 {
 	PgStat_TableStatus *pgstat_info = rel->pgstat_info;
 
@@ -1841,6 +1841,8 @@ pgstat_count_heap_update(Relation rel, bool hot)
 		/* t_tuples_hot_updated is nontransactional, so just advance it */
 		if (hot)
 			pgstat_info->t_counts.t_tuples_hot_updated++;
+		else if (warm)
+			pgstat_info->t_counts.t_tuples_warm_updated++;
 	}
 }
 
@@ -4088,6 +4090,7 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->tuples_updated = 0;
 		result->tuples_deleted = 0;
 		result->tuples_hot_updated = 0;
+		result->tuples_warm_updated = 0;
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
 		result->changes_since_analyze = 0;
@@ -5197,6 +5200,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated = tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted = tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated = tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated = tabmsg->t_counts.t_tuples_warm_updated;
 			tabentry->n_live_tuples = tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples = tabmsg->t_counts.t_delta_dead_tuples;
 			tabentry->changes_since_analyze = tabmsg->t_counts.t_changed_tuples;
@@ -5224,6 +5228,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated += tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted += tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated += tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated += tabmsg->t_counts.t_tuples_warm_updated;
 			/* If table was truncated, first reset the live/dead counters */
 			if (tabmsg->t_counts.t_truncated)
 			{
diff --git b/src/backend/utils/adt/pgstatfuncs.c a/src/backend/utils/adt/pgstatfuncs.c
index a987d0d..b8677f3 100644
--- b/src/backend/utils/adt/pgstatfuncs.c
+++ a/src/backend/utils/adt/pgstatfuncs.c
@@ -145,6 +145,22 @@ pg_stat_get_tuples_hot_updated(PG_FUNCTION_ARGS)
 
 
 Datum
+pg_stat_get_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_StatTabEntry *tabentry;
+
+	if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+
+Datum
 pg_stat_get_live_tuples(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
@@ -1644,6 +1660,21 @@ pg_stat_get_xact_tuples_hot_updated(PG_FUNCTION_ARGS)
 }
 
 Datum
+pg_stat_get_xact_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_TableStatus *tabentry;
+
+	if ((tabentry = find_tabstat_entry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->t_counts.t_tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+Datum
 pg_stat_get_xact_blocks_fetched(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
diff --git b/src/backend/utils/cache/relcache.c a/src/backend/utils/cache/relcache.c
index 9001e20..c85898c 100644
--- b/src/backend/utils/cache/relcache.c
+++ a/src/backend/utils/cache/relcache.c
@@ -2338,6 +2338,7 @@ RelationDestroyRelation(Relation relation, bool remember_tupdesc)
 	list_free_deep(relation->rd_fkeylist);
 	list_free(relation->rd_indexlist);
 	bms_free(relation->rd_indexattr);
+	bms_free(relation->rd_exprindexattr);
 	bms_free(relation->rd_keyattr);
 	bms_free(relation->rd_pkattr);
 	bms_free(relation->rd_idattr);
@@ -4352,6 +4353,13 @@ RelationGetIndexList(Relation relation)
 		return list_copy(relation->rd_indexlist);
 
 	/*
+	 * If the index list was invalidated, we must also invalidate the index
+	 * attribute bitmap (which in turn invalidates dependent bitmaps such as
+	 * the primary key and replica identity attributes)
+	 */
+	relation->rd_indexattr = NULL;
+
+	/*
 	 * We build the list we intend to return (in the caller's context) while
 	 * doing the scan.  After successfully completing the scan, we copy that
 	 * list into the relcache entry.  This avoids cache-context memory leakage
@@ -4759,15 +4767,19 @@ Bitmapset *
 RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 {
 	Bitmapset  *indexattrs;		/* indexed columns */
+	Bitmapset  *exprindexattrs; /* indexed columns in expression or
+								 * predicate indexes */
 	Bitmapset  *uindexattrs;	/* columns in unique indexes */
 	Bitmapset  *pkindexattrs;	/* columns in the primary index */
 	Bitmapset  *idindexattrs;	/* columns in the replica identity */
+	Bitmapset  *indxnotreadyattrs;	/* columns in not-ready indexes */
 	List	   *indexoidlist;
 	List	   *newindexoidlist;
 	Oid			relpkindex;
 	Oid			relreplindex;
 	ListCell   *l;
 	MemoryContext oldcxt;
+	bool		supportswarm = true;	/* true if the table can be WARM updated */
 
 	/* Quick exit if we already computed the result. */
 	if (relation->rd_indexattr != NULL)
@@ -4782,6 +4794,10 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				return bms_copy(relation->rd_pkattr);
 			case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 				return bms_copy(relation->rd_idattr);
+			case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+				return bms_copy(relation->rd_exprindexattr);
+			case INDEX_ATTR_BITMAP_NOTREADY:
+				return bms_copy(relation->rd_indxnotreadyattr);
 			default:
 				elog(ERROR, "unknown attrKind %u", attrKind);
 		}
@@ -4822,9 +4838,11 @@ restart:
 	 * won't be returned at all by RelationGetIndexList.
 	 */
 	indexattrs = NULL;
+	exprindexattrs = NULL;
 	uindexattrs = NULL;
 	pkindexattrs = NULL;
 	idindexattrs = NULL;
+	indxnotreadyattrs = NULL;
 	foreach(l, indexoidlist)
 	{
 		Oid			indexOid = lfirst_oid(l);
@@ -4861,6 +4879,10 @@ restart:
 				indexattrs = bms_add_member(indexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
 
+				if (!indexInfo->ii_ReadyForInserts)
+					indxnotreadyattrs = bms_add_member(indxnotreadyattrs,
+							   attrnum - FirstLowInvalidHeapAttributeNumber);
+
 				if (isKey)
 					uindexattrs = bms_add_member(uindexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
@@ -4876,10 +4898,29 @@ restart:
 		}
 
 		/* Collect all attributes used in expressions, too */
-		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &exprindexattrs);
 
 		/* Collect all attributes in the index predicate, too */
-		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &exprindexattrs);
+
+		/*
+		 * indexattrs should include attributes referenced in index expressions
+		 * and predicates too
+		 */
+		indexattrs = bms_add_members(indexattrs, exprindexattrs);
+
+		if (!indexInfo->ii_ReadyForInserts)
+			indxnotreadyattrs = bms_add_members(indxnotreadyattrs,
+					exprindexattrs);
+
+		/*
+		 * Check whether the index AM provides an amrecheck method.  If it
+		 * does not, the index cannot support WARM updates, so disable WARM
+		 * updates for the whole table.
+		 */
+		if (!indexDesc->rd_amroutine->amrecheck)
+			supportswarm = false;
+
 
 		index_close(indexDesc, AccessShareLock);
 	}
@@ -4912,15 +4953,22 @@ restart:
 		goto restart;
 	}
 
+	/* Remember if the table can do WARM updates */
+	relation->rd_supportswarm = supportswarm;
+
 	/* Don't leak the old values of these bitmaps, if any */
 	bms_free(relation->rd_indexattr);
 	relation->rd_indexattr = NULL;
+	bms_free(relation->rd_exprindexattr);
+	relation->rd_exprindexattr = NULL;
 	bms_free(relation->rd_keyattr);
 	relation->rd_keyattr = NULL;
 	bms_free(relation->rd_pkattr);
 	relation->rd_pkattr = NULL;
 	bms_free(relation->rd_idattr);
 	relation->rd_idattr = NULL;
+	bms_free(relation->rd_indxnotreadyattr);
+	relation->rd_indxnotreadyattr = NULL;
 
 	/*
 	 * Now save copies of the bitmaps in the relcache entry.  We intentionally
@@ -4933,7 +4981,9 @@ restart:
 	relation->rd_keyattr = bms_copy(uindexattrs);
 	relation->rd_pkattr = bms_copy(pkindexattrs);
 	relation->rd_idattr = bms_copy(idindexattrs);
-	relation->rd_indexattr = bms_copy(indexattrs);
+	relation->rd_exprindexattr = bms_copy(exprindexattrs);
+	relation->rd_indexattr = bms_copy(bms_union(indexattrs, exprindexattrs));
+	relation->rd_indxnotreadyattr = bms_copy(indxnotreadyattrs);
 	MemoryContextSwitchTo(oldcxt);
 
 	/* We return our original working copy for caller to play with */
@@ -4947,6 +4997,10 @@ restart:
 			return bms_copy(relation->rd_pkattr);
 		case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 			return idindexattrs;
+		case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+			return exprindexattrs;
+		case INDEX_ATTR_BITMAP_NOTREADY:
+			return indxnotreadyattrs;
 		default:
 			elog(ERROR, "unknown attrKind %u", attrKind);
 			return NULL;
@@ -5559,6 +5613,7 @@ load_relcache_init_file(bool shared)
 		rel->rd_keyattr = NULL;
 		rel->rd_pkattr = NULL;
 		rel->rd_idattr = NULL;
+		rel->rd_indxnotreadyattr = NULL;
 		rel->rd_pubactions = NULL;
 		rel->rd_createSubid = InvalidSubTransactionId;
 		rel->rd_newRelfilenodeSubid = InvalidSubTransactionId;
diff --git b/src/include/access/amapi.h a/src/include/access/amapi.h
index f919cf8..d7702e5 100644
--- b/src/include/access/amapi.h
+++ a/src/include/access/amapi.h
@@ -13,6 +13,7 @@
 #define AMAPI_H
 
 #include "access/genam.h"
+#include "access/itup.h"
 
 /*
  * We don't wish to include planner header files here, since most of an index
@@ -152,6 +153,10 @@ typedef void (*aminitparallelscan_function) (void *target);
 /* (re)start parallel index scan */
 typedef void (*amparallelrescan_function) (IndexScanDesc scan);
 
+/* recheck index tuple and heap tuple match */
+typedef bool (*amrecheck_function) (Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+
 /*
  * API struct for an index AM.  Note this must be stored in a single palloc'd
  * chunk of memory.
@@ -217,6 +222,9 @@ typedef struct IndexAmRoutine
 	amestimateparallelscan_function amestimateparallelscan;		/* can be NULL */
 	aminitparallelscan_function aminitparallelscan;		/* can be NULL */
 	amparallelrescan_function amparallelrescan; /* can be NULL */
+
+	/* interface function to support WARM */
+	amrecheck_function amrecheck;		/* can be NULL */
 } IndexAmRoutine;
 
 
diff --git b/src/include/access/hash.h a/src/include/access/hash.h
index 3bf587b..bc9c8fe 100644
--- b/src/include/access/hash.h
+++ a/src/include/access/hash.h
@@ -385,4 +385,8 @@ extern void hashbucketcleanup(Relation rel, Bucket cur_bucket,
 				  bool bucket_has_garbage,
 				  IndexBulkDeleteCallback callback, void *callback_state);
 
+/* hash.c */
+extern bool hashrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+
 #endif   /* HASH_H */
diff --git b/src/include/access/heapam.h a/src/include/access/heapam.h
index 95aa976..9412c3a 100644
--- b/src/include/access/heapam.h
+++ a/src/include/access/heapam.h
@@ -137,9 +137,10 @@ extern bool heap_fetch(Relation relation, Snapshot snapshot,
 		   Relation stats_relation);
 extern bool heap_hot_search_buffer(ItemPointer tid, Relation relation,
 					   Buffer buffer, Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call);
+					   bool *all_dead, bool first_call, bool *recheck);
 extern bool heap_hot_search(ItemPointer tid, Relation relation,
-				Snapshot snapshot, bool *all_dead);
+				Snapshot snapshot, bool *all_dead,
+				bool *recheck, Buffer *buffer, HeapTuple heapTuple);
 
 extern void heap_get_latest_tid(Relation relation, Snapshot snapshot,
 					ItemPointer tid);
@@ -161,7 +162,8 @@ extern void heap_abort_speculative(Relation relation, HeapTuple tuple);
 extern HTSU_Result heap_update(Relation relation, ItemPointer otid,
 			HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode);
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update);
 extern HTSU_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
 				CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
 				bool follow_update,
@@ -176,7 +178,9 @@ extern bool heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple);
 extern Oid	simple_heap_insert(Relation relation, HeapTuple tup);
 extern void simple_heap_delete(Relation relation, ItemPointer tid);
 extern void simple_heap_update(Relation relation, ItemPointer otid,
-				   HeapTuple tup);
+				   HeapTuple tup,
+				   Bitmapset **modified_attrs,
+				   bool *warm_update);
 
 extern void heap_sync(Relation relation);
 
diff --git b/src/include/access/heapam_xlog.h a/src/include/access/heapam_xlog.h
index e6019d5..9b081bf 100644
--- b/src/include/access/heapam_xlog.h
+++ a/src/include/access/heapam_xlog.h
@@ -80,6 +80,7 @@
 #define XLH_UPDATE_CONTAINS_NEW_TUPLE			(1<<4)
 #define XLH_UPDATE_PREFIX_FROM_OLD				(1<<5)
 #define XLH_UPDATE_SUFFIX_FROM_OLD				(1<<6)
+#define XLH_UPDATE_WARM_UPDATE					(1<<7)
 
 /* convenience macro for checking whether any form of old tuple was logged */
 #define XLH_UPDATE_CONTAINS_OLD						\
diff --git b/src/include/access/htup_details.h a/src/include/access/htup_details.h
index 7552186..ddbdbcd 100644
--- b/src/include/access/htup_details.h
+++ a/src/include/access/htup_details.h
@@ -260,7 +260,8 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x0800 are available */
+#define HEAP_WARM_TUPLE			0x0800	/* tuple is a member of a WARM
+										 * chain */
 #define HEAP_LATEST_TUPLE		0x1000	/*
 										 * This is the last tuple in chain and
 										 * ip_posid points to the root line
@@ -271,7 +272,7 @@ struct HeapTupleHeaderData
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF800	/* visibility-related bits */
 
 
 /*
@@ -510,6 +511,21 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+#define HeapTupleHeaderSetHeapWarmTuple(tup) \
+do { \
+	(tup)->t_infomask2 |= HEAP_WARM_TUPLE; \
+} while (0)
+
+#define HeapTupleHeaderClearHeapWarmTuple(tup) \
+do { \
+	(tup)->t_infomask2 &= ~HEAP_WARM_TUPLE; \
+} while (0)
+
+#define HeapTupleHeaderIsHeapWarmTuple(tup) \
+( \
+  ((tup)->t_infomask2 & HEAP_WARM_TUPLE) != 0 \
+)
+
 /*
  * Mark this as the last tuple in the HOT chain. Before PG v10 we used to store
  * the TID of the tuple itself in t_ctid field to mark the end of the chain.
@@ -785,6 +801,15 @@ struct MinimalTupleData
 #define HeapTupleClearHeapOnly(tuple) \
 		HeapTupleHeaderClearHeapOnly((tuple)->t_data)
 
+#define HeapTupleIsHeapWarmTuple(tuple) \
+		HeapTupleHeaderIsHeapWarmTuple((tuple)->t_data)
+
+#define HeapTupleSetHeapWarmTuple(tuple) \
+		HeapTupleHeaderSetHeapWarmTuple((tuple)->t_data)
+
+#define HeapTupleClearHeapWarmTuple(tuple) \
+		HeapTupleHeaderClearHeapWarmTuple((tuple)->t_data)
+
 #define HeapTupleGetOid(tuple) \
 		HeapTupleHeaderGetOid((tuple)->t_data)
 
diff --git b/src/include/access/nbtree.h a/src/include/access/nbtree.h
index 6289ffa..08d056d 100644
--- b/src/include/access/nbtree.h
+++ a/src/include/access/nbtree.h
@@ -538,6 +538,8 @@ extern bytea *btoptions(Datum reloptions, bool validate);
 extern bool btproperty(Oid index_oid, int attno,
 		   IndexAMProperty prop, const char *propname,
 		   bool *res, bool *isnull);
+extern bool btrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * prototypes for functions in nbtvalidate.c
diff --git b/src/include/access/relscan.h a/src/include/access/relscan.h
index ce3ca8d..12d3b0c 100644
--- b/src/include/access/relscan.h
+++ a/src/include/access/relscan.h
@@ -112,7 +112,8 @@ typedef struct IndexScanDescData
 	HeapTupleData xs_ctup;		/* current heap tuple, if any */
 	Buffer		xs_cbuf;		/* current heap buffer in scan, if any */
 	/* NB: if xs_cbuf is not InvalidBuffer, we hold a pin on that buffer */
-	bool		xs_recheck;		/* T means scan keys must be rechecked */
+	bool		xs_recheck;		/* T means scan keys must be rechecked for each tuple */
+	bool		xs_tuple_recheck;	/* T means scan keys must be rechecked for current tuple */
 
 	/*
 	 * When fetching with an ordering operator, the values of the ORDER BY
diff --git b/src/include/catalog/pg_proc.h a/src/include/catalog/pg_proc.h
index bb7053a..21d0789 100644
--- b/src/include/catalog/pg_proc.h
+++ a/src/include/catalog/pg_proc.h
@@ -2740,6 +2740,8 @@ DATA(insert OID = 1933 (  pg_stat_get_tuples_deleted	PGNSP PGUID 12 1 0 0 0 f f
 DESCR("statistics: number of tuples deleted");
 DATA(insert OID = 1972 (  pg_stat_get_tuples_hot_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated");
+DATA(insert OID = 3353 (  pg_stat_get_tuples_warm_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated");
 DATA(insert OID = 2878 (  pg_stat_get_live_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_live_tuples _null_ _null_ _null_ ));
 DESCR("statistics: number of live tuples");
 DATA(insert OID = 2879 (  pg_stat_get_dead_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_dead_tuples _null_ _null_ _null_ ));
@@ -2892,6 +2894,8 @@ DATA(insert OID = 3042 (  pg_stat_get_xact_tuples_deleted		PGNSP PGUID 12 1 0 0
 DESCR("statistics: number of tuples deleted in current transaction");
 DATA(insert OID = 3043 (  pg_stat_get_xact_tuples_hot_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated in current transaction");
+DATA(insert OID = 3354 (  pg_stat_get_xact_tuples_warm_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated in current transaction");
 DATA(insert OID = 3044 (  pg_stat_get_xact_blocks_fetched		PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_fetched _null_ _null_ _null_ ));
 DESCR("statistics: number of blocks fetched in current transaction");
 DATA(insert OID = 3045 (  pg_stat_get_xact_blocks_hit			PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_hit _null_ _null_ _null_ ));
diff --git b/src/include/executor/executor.h a/src/include/executor/executor.h
index 02dbe7b..c4495a3 100644
--- b/src/include/executor/executor.h
+++ a/src/include/executor/executor.h
@@ -382,6 +382,7 @@ extern void UnregisterExprContextCallback(ExprContext *econtext,
 extern void ExecOpenIndices(ResultRelInfo *resultRelInfo, bool speculative);
 extern void ExecCloseIndices(ResultRelInfo *resultRelInfo);
 extern List *ExecInsertIndexTuples(TupleTableSlot *slot, ItemPointer tupleid,
+					  ItemPointer root_tid, Bitmapset *modified_attrs,
 					  EState *estate, bool noDupErr, bool *specConflict,
 					  List *arbiterIndexes);
 extern bool ExecCheckIndexConstraints(TupleTableSlot *slot, EState *estate,
diff --git b/src/include/executor/nodeIndexscan.h a/src/include/executor/nodeIndexscan.h
index ea3f3a5..ebeec74 100644
--- b/src/include/executor/nodeIndexscan.h
+++ a/src/include/executor/nodeIndexscan.h
@@ -41,5 +41,4 @@ extern void ExecIndexEvalRuntimeKeys(ExprContext *econtext,
 extern bool ExecIndexEvalArrayKeys(ExprContext *econtext,
 					   IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 extern bool ExecIndexAdvanceArrayKeys(IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
-
 #endif   /* NODEINDEXSCAN_H */
diff --git b/src/include/nodes/execnodes.h a/src/include/nodes/execnodes.h
index 1c1cb80..fb00b96 100644
--- b/src/include/nodes/execnodes.h
+++ a/src/include/nodes/execnodes.h
@@ -64,6 +64,7 @@ typedef struct IndexInfo
 	NodeTag		type;
 	int			ii_NumIndexAttrs;
 	AttrNumber	ii_KeyAttrNumbers[INDEX_MAX_KEYS];
+	Bitmapset  *ii_indxattrs;	/* bitmap of all columns used in this index */
 	List	   *ii_Expressions; /* list of Expr */
 	List	   *ii_ExpressionsState;	/* list of ExprState */
 	List	   *ii_Predicate;	/* list of Expr */
diff --git b/src/include/pgstat.h a/src/include/pgstat.h
index 8b710ec..2ee690b 100644
--- b/src/include/pgstat.h
+++ a/src/include/pgstat.h
@@ -105,6 +105,7 @@ typedef struct PgStat_TableCounts
 	PgStat_Counter t_tuples_updated;
 	PgStat_Counter t_tuples_deleted;
 	PgStat_Counter t_tuples_hot_updated;
+	PgStat_Counter t_tuples_warm_updated;
 	bool		t_truncated;
 
 	PgStat_Counter t_delta_live_tuples;
@@ -625,6 +626,7 @@ typedef struct PgStat_StatTabEntry
 	PgStat_Counter tuples_updated;
 	PgStat_Counter tuples_deleted;
 	PgStat_Counter tuples_hot_updated;
+	PgStat_Counter tuples_warm_updated;
 
 	PgStat_Counter n_live_tuples;
 	PgStat_Counter n_dead_tuples;
@@ -1178,7 +1180,7 @@ pgstat_report_wait_end(void)
 	(pgStatBlockWriteTime += (n))
 
 extern void pgstat_count_heap_insert(Relation rel, int n);
-extern void pgstat_count_heap_update(Relation rel, bool hot);
+extern void pgstat_count_heap_update(Relation rel, bool hot, bool warm);
 extern void pgstat_count_heap_delete(Relation rel);
 extern void pgstat_count_truncate(Relation rel);
 extern void pgstat_update_heap_dead_tuples(Relation rel, int delta);
diff --git b/src/include/utils/rel.h a/src/include/utils/rel.h
index a617a7c..fbac7c0 100644
--- b/src/include/utils/rel.h
+++ a/src/include/utils/rel.h
@@ -138,9 +138,14 @@ typedef struct RelationData
 
 	/* data managed by RelationGetIndexAttrBitmap: */
 	Bitmapset  *rd_indexattr;	/* identifies columns used in indexes */
+	Bitmapset  *rd_exprindexattr; /* identifies columns used in expression
+								   * or predicate indexes */
+	Bitmapset  *rd_indxnotreadyattr;	/* columns used by indexes not yet
+										   ready */
 	Bitmapset  *rd_keyattr;		/* cols that can be ref'd by foreign keys */
 	Bitmapset  *rd_pkattr;		/* cols included in primary key */
 	Bitmapset  *rd_idattr;		/* included in replica identity index */
+	bool		rd_supportswarm;	/* true if the table can be WARM updated */
 
 	PublicationActions  *rd_pubactions;	/* publication actions */
 
diff --git b/src/include/utils/relcache.h a/src/include/utils/relcache.h
index da36b67..d18bd09 100644
--- b/src/include/utils/relcache.h
+++ a/src/include/utils/relcache.h
@@ -50,7 +50,9 @@ typedef enum IndexAttrBitmapKind
 	INDEX_ATTR_BITMAP_ALL,
 	INDEX_ATTR_BITMAP_KEY,
 	INDEX_ATTR_BITMAP_PRIMARY_KEY,
-	INDEX_ATTR_BITMAP_IDENTITY_KEY
+	INDEX_ATTR_BITMAP_IDENTITY_KEY,
+	INDEX_ATTR_BITMAP_EXPR_PREDICATE,
+	INDEX_ATTR_BITMAP_NOTREADY
 } IndexAttrBitmapKind;
 
 extern Bitmapset *RelationGetIndexAttrBitmap(Relation relation,
diff --git b/src/test/regress/expected/rules.out a/src/test/regress/expected/rules.out
index c661f1d..561d9579 100644
--- b/src/test/regress/expected/rules.out
+++ a/src/test/regress/expected/rules.out
@@ -1732,6 +1732,7 @@ pg_stat_all_tables| SELECT c.oid AS relid,
     pg_stat_get_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_tuples_deleted(c.oid) AS n_tup_del,
     pg_stat_get_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_tuples_warm_updated(c.oid) AS n_tup_warm_upd,
     pg_stat_get_live_tuples(c.oid) AS n_live_tup,
     pg_stat_get_dead_tuples(c.oid) AS n_dead_tup,
     pg_stat_get_mod_since_analyze(c.oid) AS n_mod_since_analyze,
@@ -1875,6 +1876,7 @@ pg_stat_sys_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1918,6 +1920,7 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1955,7 +1958,8 @@ pg_stat_xact_all_tables| SELECT c.oid AS relid,
     pg_stat_get_xact_tuples_inserted(c.oid) AS n_tup_ins,
     pg_stat_get_xact_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_xact_tuples_deleted(c.oid) AS n_tup_del,
-    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd
+    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_xact_tuples_warm_updated(c.oid) AS n_tup_warm_upd
    FROM ((pg_class c
      LEFT JOIN pg_index i ON ((c.oid = i.indrelid)))
      LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
@@ -1971,7 +1975,8 @@ pg_stat_xact_sys_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname = ANY (ARRAY['pg_catalog'::name, 'information_schema'::name])) OR (pg_stat_xact_all_tables.schemaname ~ '^pg_toast'::text));
 pg_stat_xact_user_functions| SELECT p.oid AS funcid,
@@ -1993,7 +1998,8 @@ pg_stat_xact_user_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname <> ALL (ARRAY['pg_catalog'::name, 'information_schema'::name])) AND (pg_stat_xact_all_tables.schemaname !~ '^pg_toast'::text));
 pg_statio_all_indexes| SELECT c.oid AS relid,
diff --git b/src/test/regress/expected/warm.out a/src/test/regress/expected/warm.out
new file mode 100644
index 0000000..0aa3bb7
--- /dev/null
+++ a/src/test/regress/expected/warm.out
@@ -0,0 +1,367 @@
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+-- This update touches no index key, but the page has no free space,
+-- so it will most likely be a non-HOT update
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on updtst_tab1  (cost=4.45..47.23 rows=22 width=72)
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1  (cost=0.00..4.45 rows=22 width=0)
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Check if index only scan works correctly
+EXPLAIN SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on updtst_tab1  (cost=4.45..47.23 rows=22 width=4)
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1  (cost=0.00..4.45 rows=22 width=0)
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+                                      QUERY PLAN                                      
+--------------------------------------------------------------------------------------
+ Index Only Scan using updtst_indx1 on updtst_tab1  (cost=0.29..9.16 rows=50 width=4)
+   Index Cond: (b = 140001)
+(2 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab1;
+------------------
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+EXPLAIN SELECT * FROM updtst_tab2 WHERE b = 701;
+                                QUERY PLAN                                 
+---------------------------------------------------------------------------
+ Bitmap Heap Scan on updtst_tab2  (cost=4.18..12.64 rows=4 width=72)
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE a = 1;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+SET enable_seqscan = false;
+EXPLAIN SELECT * FROM updtst_tab2 WHERE b = 701;
+                                QUERY PLAN                                 
+---------------------------------------------------------------------------
+ Bitmap Heap Scan on updtst_tab2  (cost=4.18..12.64 rows=4 width=72)
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE b = 701;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+VACUUM updtst_tab2;
+EXPLAIN SELECT b FROM updtst_tab2 WHERE b = 701;
+                                     QUERY PLAN                                      
+-------------------------------------------------------------------------------------
+ Index Only Scan using updtst_indx2 on updtst_tab2  (cost=0.14..4.16 rows=1 width=4)
+   Index Cond: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab2 WHERE b = 701;
+  b  
+-----
+ 701
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab2;
+------------------
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 1;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN SELECT b FROM updtst_tab3 WHERE b = 701;
+                        QUERY PLAN                         
+-----------------------------------------------------------
+ Seq Scan on updtst_tab3  (cost=0.00..2.25 rows=1 width=4)
+   Filter: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 701;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+  b   
+------
+ 1421
+(1 row)
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+SET enable_seqscan = false;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    98
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 2;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN SELECT b FROM updtst_tab3 WHERE b = 702;
+                                     QUERY PLAN                                      
+-------------------------------------------------------------------------------------
+ Index Only Scan using updtst_indx3 on updtst_tab3  (cost=0.14..8.16 rows=1 width=4)
+   Index Cond: (b = 702)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 702;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+  b   
+------
+ 1422
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab3;
+------------------
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+explain select * from test_warm where lower(a) = 'test';
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on test_warm  (cost=4.18..12.65 rows=4 width=64)
+   Recheck Cond: (lower(a) = 'test'::text)
+   ->  Bitmap Index Scan on test_warmindx  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (lower(a) = 'test'::text)
+(4 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+select *, ctid from test_warm where a = 'test';
+ a | b | ctid 
+---+---+------
+(0 rows)
+
+select *, ctid from test_warm where a = 'TEST';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+                                   QUERY PLAN                                    
+---------------------------------------------------------------------------------
+ Index Scan using test_warmindx on test_warm  (cost=0.15..20.22 rows=4 width=64)
+   Index Cond: (lower(a) = 'test'::text)
+(2 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+DROP TABLE test_warm;
diff --git b/src/test/regress/parallel_schedule a/src/test/regress/parallel_schedule
index edeb2d6..2268705 100644
--- b/src/test/regress/parallel_schedule
+++ a/src/test/regress/parallel_schedule
@@ -42,6 +42,8 @@ test: create_type
 test: create_table
 test: create_function_2
 
+test: warm
+
 # ----------
 # Load huge amounts of data
 # We should split the data files into single files and then
diff --git b/src/test/regress/sql/warm.sql a/src/test/regress/sql/warm.sql
new file mode 100644
index 0000000..b73c278
--- /dev/null
+++ a/src/test/regress/sql/warm.sql
@@ -0,0 +1,172 @@
+
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+
+-- This should be a HOT update as non-index key is updated, but the
+-- page won't have any free space, so probably a non-HOT update
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Check if index only scan works correctly
+EXPLAIN SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab1;
+
+------------------
+
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+
+EXPLAIN SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE a = 1;
+
+SET enable_seqscan = false;
+EXPLAIN SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE b = 701;
+
+VACUUM updtst_tab2;
+EXPLAIN SELECT b FROM updtst_tab2 WHERE b = 701;
+SELECT b FROM updtst_tab2 WHERE b = 701;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab2;
+------------------
+
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+SELECT * FROM updtst_tab3 WHERE a = 1;
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+
+VACUUM updtst_tab3;
+EXPLAIN SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+SET enable_seqscan = false;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+SELECT * FROM updtst_tab3 WHERE a = 2;
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+
+VACUUM updtst_tab3;
+EXPLAIN SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab3;
+------------------
+
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where a = 'test';
+select *, ctid from test_warm where a = 'TEST';
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+DROP TABLE test_warm;
+
+
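Outside the regression suite, whether an update actually took the HOT path can be checked from the per-table statistics counters. A minimal standalone sketch (not part of the patch; `warm_demo` is a hypothetical scratch table, and note the counters in `pg_stat_user_tables` are updated asynchronously, so a short delay or a new transaction may be needed before they reflect the updates):

```sql
-- Hypothetical demo table, mirroring the shape used by the warm tests.
CREATE TABLE warm_demo (a integer PRIMARY KEY, b integer, c text)
    WITH (fillfactor = 80);
CREATE INDEX warm_demo_b ON warm_demo (b);
INSERT INTO warm_demo SELECT g, g + 100, 'foo' FROM generate_series(1, 100) g;

UPDATE warm_demo SET c = 'bar'   WHERE a = 1;  -- no index key changed: HOT-eligible
UPDATE warm_demo SET b = b + 1   WHERE a = 1;  -- index key changed: the WARM case

-- n_tup_upd counts all updates; n_tup_hot_upd only those done via HOT.
SELECT n_tup_upd, n_tup_hot_upd
  FROM pg_stat_user_tables
 WHERE relname = 'warm_demo';
```

On an unpatched server the second update cannot be HOT; with WARM applied the interesting question is how the chain is represented on the page, which `SELECT ctid, * FROM warm_demo WHERE a = 1` helps observe.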
Attachment: 0001_track_root_lp_v12.patch (application/octet-stream)
diff --git b/src/backend/access/heap/heapam.c a/src/backend/access/heap/heapam.c
index 74fb09c..064909a 100644
--- b/src/backend/access/heap/heapam.c
+++ a/src/backend/access/heap/heapam.c
@@ -94,7 +94,8 @@ static HeapTuple heap_prepare_insert(Relation relation, HeapTuple tup,
 					TransactionId xid, CommandId cid, int options);
 static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
-				HeapTuple newtup, HeapTuple old_key_tup,
+				HeapTuple newtup, OffsetNumber root_offnum,
+				HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
 static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
 							 Bitmapset *interesting_cols,
@@ -2248,13 +2249,13 @@ heap_get_latest_tid(Relation relation,
 		 */
 		if ((tp.t_data->t_infomask & HEAP_XMAX_INVALID) ||
 			HeapTupleHeaderIsOnlyLocked(tp.t_data) ||
-			ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid))
+			HeapTupleHeaderIsHeapLatest(tp.t_data, &ctid))
 		{
 			UnlockReleaseBuffer(buffer);
 			break;
 		}
 
-		ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tp.t_data, &ctid);
 		priorXmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		UnlockReleaseBuffer(buffer);
 	}							/* end of loop */
@@ -2385,6 +2386,7 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	bool		all_visible_cleared = false;
+	OffsetNumber	root_offnum;
 
 	/*
 	 * Fill in tuple header fields, assign an OID, and toast the tuple if
@@ -2423,8 +2425,13 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
-	RelationPutHeapTuple(relation, buffer, heaptup,
-						 (options & HEAP_INSERT_SPECULATIVE) != 0);
+	root_offnum = RelationPutHeapTuple(relation, buffer, heaptup,
+						 (options & HEAP_INSERT_SPECULATIVE) != 0,
+						 InvalidOffsetNumber);
+
+	/* We must not overwrite the speculative insertion token. */
+	if ((options & HEAP_INSERT_SPECULATIVE) == 0)
+		HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 	if (PageIsAllVisible(BufferGetPage(buffer)))
 	{
@@ -2652,6 +2659,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 	Size		saveFreeSpace;
 	bool		need_tuple_data = RelationIsLogicallyLogged(relation);
 	bool		need_cids = RelationIsAccessibleInLogicalDecoding(relation);
+	OffsetNumber	root_offnum;
 
 	needwal = !(options & HEAP_INSERT_SKIP_WAL) && RelationNeedsWAL(relation);
 	saveFreeSpace = RelationGetTargetPageFreeSpace(relation,
@@ -2722,7 +2730,12 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		 * RelationGetBufferForTuple has ensured that the first tuple fits.
 		 * Put that on the page, and then as many other tuples as fit.
 		 */
-		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false);
+		root_offnum = RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false,
+				InvalidOffsetNumber);
+
+		/* Mark this tuple as the latest and also set root offset. */
+		HeapTupleHeaderSetHeapLatest(heaptuples[ndone]->t_data, root_offnum);
+
 		for (nthispage = 1; ndone + nthispage < ntuples; nthispage++)
 		{
 			HeapTuple	heaptup = heaptuples[ndone + nthispage];
@@ -2730,7 +2743,10 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			if (PageGetHeapFreeSpace(page) < MAXALIGN(heaptup->t_len) + saveFreeSpace)
 				break;
 
-			RelationPutHeapTuple(relation, buffer, heaptup, false);
+			root_offnum = RelationPutHeapTuple(relation, buffer, heaptup, false,
+					InvalidOffsetNumber);
+			/* Mark each tuple as the latest and also set root offset. */
+			HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 			/*
 			 * We don't use heap_multi_insert for catalog tuples yet, but
@@ -3002,6 +3018,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	HeapTupleData tp;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	TransactionId new_xmax;
@@ -3012,6 +3029,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	bool		all_visible_cleared = false;
 	HeapTuple	old_key_tuple = NULL;	/* replica identity of the tuple */
 	bool		old_key_copied = false;
+	OffsetNumber	root_offnum;
 
 	Assert(ItemPointerIsValid(tid));
 
@@ -3053,7 +3071,8 @@ heap_delete(Relation relation, ItemPointer tid,
 		LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 	}
 
-	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid));
+	offnum = ItemPointerGetOffsetNumber(tid);
+	lp = PageGetItemId(page, offnum);
 	Assert(ItemIdIsNormal(lp));
 
 	tp.t_tableOid = RelationGetRelid(relation);
@@ -3183,7 +3202,17 @@ l1:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tp.t_data->t_ctid;
+
+		/*
+		 * If we're at the end of the chain, then just return the same TID back
+		 * to the caller. The caller uses that as a hint to know if we have hit
+		 * the end of the chain.
+		 */
+		if (!HeapTupleHeaderIsHeapLatest(tp.t_data, &tp.t_self))
+			HeapTupleHeaderGetNextTid(tp.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&tp.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tp.t_data);
@@ -3232,6 +3261,22 @@ l1:
 							  xid, LockTupleExclusive, true,
 							  &new_xmax, &new_infomask, &new_infomask2);
 
+	/*
+	 * heap_get_root_tuple() may call palloc, which is disallowed once we
+	 * enter the critical section. So check if the root offset is cached in
+	 * the tuple and, if not, fetch that information the hard way before
+	 * entering the critical section.
+	 *
+	 * Unless we are dealing with a pg_upgraded cluster, the root offset
+	 * information should most often be cached, so there should not be much
+	 * overhead in fetching it. Also, once a tuple is updated, the
+	 * information is copied to the new version, so we don't pay this price
+	 * forever.
+	 */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&tp.t_self));
+
 	START_CRIT_SECTION();
 
 	/*
@@ -3259,8 +3304,10 @@ l1:
 	HeapTupleHeaderClearHotUpdated(tp.t_data);
 	HeapTupleHeaderSetXmax(tp.t_data, new_xmax);
 	HeapTupleHeaderSetCmax(tp.t_data, cid, iscombo);
-	/* Make sure there is no forward chain link in t_ctid */
-	tp.t_data->t_ctid = tp.t_self;
+
+	/* Mark this tuple as the latest tuple in the update chain. */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		HeapTupleHeaderSetHeapLatest(tp.t_data, root_offnum);
 
 	MarkBufferDirty(buffer);
 
@@ -3461,6 +3508,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		old_key_copied = false;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
+	OffsetNumber	root_offnum;
 	MultiXactStatus mxact_status;
 	Buffer		buffer,
 				newbuf,
@@ -3523,6 +3572,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 
 
 	block = ItemPointerGetBlockNumber(otid);
+	offnum = ItemPointerGetOffsetNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
@@ -3807,7 +3857,12 @@ l2:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(oldtup.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = oldtup.t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(oldtup.t_data, &oldtup.t_self))
+			HeapTupleHeaderGetNextTid(oldtup.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&oldtup.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(oldtup.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(oldtup.t_data);
@@ -3947,6 +4002,7 @@ l2:
 		uint16		infomask_lock_old_tuple,
 					infomask2_lock_old_tuple;
 		bool		cleared_all_frozen = false;
+		OffsetNumber	root_offnum;
 
 		/*
 		 * To prevent concurrent sessions from updating the tuple, we have to
@@ -3974,6 +4030,14 @@ l2:
 
 		Assert(HEAP_XMAX_IS_LOCKED_ONLY(infomask_lock_old_tuple));
 
+		/*
+		 * Fetch root offset before entering the critical section. We do this
+		 * only if the information is not already available.
+		 */
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&oldtup.t_self));
+
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
@@ -3988,7 +4052,8 @@ l2:
 		HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 		/* temporarily make it look not-updated, but locked */
-		oldtup.t_data->t_ctid = oldtup.t_self;
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			HeapTupleHeaderSetHeapLatest(oldtup.t_data, root_offnum);
 
 		/*
 		 * Clear all-frozen bit on visibility map if needed. We could
@@ -4146,6 +4211,10 @@ l2:
 										   bms_overlap(modified_attrs, id_attrs),
 										   &old_key_copied);
 
+	if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
@@ -4171,6 +4240,17 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+		/*
+		 * For HOT (or WARM) updated tuples, we store the offset of the root
+		 * line pointer of this chain in the ip_posid field of the new tuple.
+		 * Usually this information will be available in the corresponding
+		 * field of the old tuple. But for aborted updates or pg_upgraded
+		 * databases, we might be seeing the old-style CTID chains and hence
+		 * the information must be obtained the hard way (we should have
+		 * done that before entering the critical section above).
+		 */
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
 	else
 	{
@@ -4178,10 +4258,22 @@ l2:
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		root_offnum = InvalidOffsetNumber;
 	}
 
-	RelationPutHeapTuple(relation, newbuf, heaptup, false);		/* insert new tuple */
-
+	/* insert new tuple */
+	root_offnum = RelationPutHeapTuple(relation, newbuf, heaptup, false,
+									   root_offnum);
+	/*
+	 * Also mark both copies as latest and set the root offset information. If
+	 * we're doing a HOT/WARM update, we copy the information from the old
+	 * tuple if available, or use the value computed above. For regular
+	 * updates, RelationPutHeapTuple returns the actual offset number where
+	 * the new version was inserted, and we store that value since the
+	 * update started a new HOT chain.
+	 */
+	HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
+	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
 	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
@@ -4194,7 +4286,7 @@ l2:
 	HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 	/* record address of new tuple in t_ctid of old one */
-	oldtup.t_data->t_ctid = heaptup->t_self;
+	HeapTupleHeaderSetNextTid(oldtup.t_data, &(heaptup->t_self));
 
 	/* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */
 	if (PageIsAllVisible(BufferGetPage(buffer)))
@@ -4233,6 +4325,7 @@ l2:
 
 		recptr = log_heap_update(relation, buffer,
 								 newbuf, &oldtup, heaptup,
+								 root_offnum,
 								 old_key_tuple,
 								 all_visible_cleared,
 								 all_visible_cleared_new);
@@ -4513,7 +4606,8 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	ItemId		lp;
 	Page		page;
 	Buffer		vmbuffer = InvalidBuffer;
-	BlockNumber block;
+	BlockNumber	block;
+	OffsetNumber	offnum;
 	TransactionId xid,
 				xmax;
 	uint16		old_infomask,
@@ -4522,9 +4616,11 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	bool		first_time = true;
 	bool		have_tuple_lock = false;
 	bool		cleared_all_frozen = false;
+	OffsetNumber	root_offnum;
 
 	*buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	block = ItemPointerGetBlockNumber(tid);
+	offnum = ItemPointerGetOffsetNumber(tid);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -4544,6 +4640,7 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	tuple->t_data = (HeapTupleHeader) PageGetItem(page, lp);
 	tuple->t_len = ItemIdGetLength(lp);
 	tuple->t_tableOid = RelationGetRelid(relation);
+	tuple->t_self = *tid;
 
 l3:
 	result = HeapTupleSatisfiesUpdate(tuple, cid, *buffer);
@@ -4571,7 +4668,11 @@ l3:
 		xwait = HeapTupleHeaderGetRawXmax(tuple->t_data);
 		infomask = tuple->t_data->t_infomask;
 		infomask2 = tuple->t_data->t_infomask2;
-		ItemPointerCopy(&tuple->t_data->t_ctid, &t_ctid);
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &t_ctid);
+		else
+			ItemPointerCopy(tid, &t_ctid);
 
 		LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
 
@@ -5009,7 +5110,12 @@ failed:
 		Assert(result == HeapTupleSelfUpdated || result == HeapTupleUpdated ||
 			   result == HeapTupleWouldBlock);
 		Assert(!(tuple->t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tuple->t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(tid, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tuple->t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tuple->t_data);
@@ -5057,6 +5163,10 @@ failed:
 							  GetCurrentTransactionId(), mode, false,
 							  &xid, &new_infomask, &new_infomask2);
 
+	if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&tuple->t_self));
+
 	START_CRIT_SECTION();
 
 	/*
@@ -5085,7 +5195,10 @@ failed:
 	 * the tuple as well.
 	 */
 	if (HEAP_XMAX_IS_LOCKED_ONLY(new_infomask))
-		tuple->t_data->t_ctid = *tid;
+	{
+		if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+			HeapTupleHeaderSetHeapLatest(tuple->t_data, root_offnum);
+	}
 
 	/* Clear only the all-frozen bit on visibility map if needed */
 	if (PageIsAllVisible(page) &&
@@ -5599,6 +5712,7 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 	bool		cleared_all_frozen = false;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber block;
+	OffsetNumber offnum;
 
 	ItemPointerCopy(tid, &tupid);
 
@@ -5607,6 +5721,8 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 		new_infomask = 0;
 		new_xmax = InvalidTransactionId;
 		block = ItemPointerGetBlockNumber(&tupid);
+		offnum = ItemPointerGetOffsetNumber(&tupid);
+
 		ItemPointerCopy(&tupid, &(mytup.t_self));
 
 		if (!heap_fetch(rel, SnapshotAny, &mytup, &buf, false, NULL))
@@ -5836,7 +5952,7 @@ l4:
 
 		/* if we find the end of update chain, we're done. */
 		if (mytup.t_data->t_infomask & HEAP_XMAX_INVALID ||
-			ItemPointerEquals(&mytup.t_self, &mytup.t_data->t_ctid) ||
+			HeapTupleHeaderIsHeapLatest(mytup.t_data, &mytup.t_self) ||
 			HeapTupleHeaderIsOnlyLocked(mytup.t_data))
 		{
 			result = HeapTupleMayBeUpdated;
@@ -5845,7 +5961,7 @@ l4:
 
 		/* tail recursion */
 		priorXmax = HeapTupleHeaderGetUpdateXid(mytup.t_data);
-		ItemPointerCopy(&(mytup.t_data->t_ctid), &tupid);
+		HeapTupleHeaderGetNextTid(mytup.t_data, &tupid);
 		UnlockReleaseBuffer(buf);
 		if (vmbuffer != InvalidBuffer)
 			ReleaseBuffer(vmbuffer);
@@ -5962,7 +6078,7 @@ heap_finish_speculative(Relation relation, HeapTuple tuple)
 	 * Replace the speculative insertion token with a real t_ctid, pointing to
 	 * itself like it does on regular tuples.
 	 */
-	htup->t_ctid = tuple->t_self;
+	HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 	/* XLOG stuff */
 	if (RelationNeedsWAL(relation))
@@ -6088,8 +6204,7 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	HeapTupleHeaderSetXmin(tp.t_data, InvalidTransactionId);
 
 	/* Clear the speculative insertion token too */
-	tp.t_data->t_ctid = tp.t_self;
-
+	HeapTupleHeaderSetHeapLatest(tp.t_data, ItemPointerGetOffsetNumber(tid));
 	MarkBufferDirty(buffer);
 
 	/*
@@ -7437,6 +7552,7 @@ log_heap_visible(RelFileNode rnode, Buffer heap_buffer, Buffer vm_buffer,
 static XLogRecPtr
 log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup, HeapTuple newtup,
+				OffsetNumber root_offnum,
 				HeapTuple old_key_tuple,
 				bool all_visible_cleared, bool new_all_visible_cleared)
 {
@@ -7557,6 +7673,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	xlrec.new_offnum = ItemPointerGetOffsetNumber(&newtup->t_self);
 	xlrec.new_xmax = HeapTupleHeaderGetRawXmax(newtup->t_data);
 
+	Assert(OffsetNumberIsValid(root_offnum));
+	xlrec.root_offnum = root_offnum;
+
 	bufflags = REGBUF_STANDARD;
 	if (init)
 		bufflags |= REGBUF_WILL_INIT;
@@ -8211,7 +8330,13 @@ heap_xlog_delete(XLogReaderState *record)
 			PageClearAllVisible(page);
 
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = target_tid;
+		if (!HeapTupleHeaderHasRootOffset(htup))
+		{
+			OffsetNumber	root_offnum;
+			root_offnum = heap_get_root_tuple(page, xlrec->offnum);
+			HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+		}
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8301,7 +8426,8 @@ heap_xlog_insert(XLogReaderState *record)
 		htup->t_hoff = xlhdr.t_hoff;
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
-		htup->t_ctid = target_tid;
+
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->offnum);
 
 		if (PageAddItem(page, (Item) htup, newlen, xlrec->offnum,
 						true, true) == InvalidOffsetNumber)
@@ -8436,8 +8562,8 @@ heap_xlog_multi_insert(XLogReaderState *record)
 			htup->t_hoff = xlhdr->t_hoff;
 			HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 			HeapTupleHeaderSetCmin(htup, FirstCommandId);
-			ItemPointerSetBlockNumber(&htup->t_ctid, blkno);
-			ItemPointerSetOffsetNumber(&htup->t_ctid, offnum);
+
+			HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 			offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 			if (offnum == InvalidOffsetNumber)
@@ -8573,7 +8699,7 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
 		/* Set forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetNextTid(htup, &newtid);
 
 		/* Mark the page as a candidate for pruning */
 		PageSetPrunable(page, XLogRecGetXid(record));
@@ -8706,13 +8832,17 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
-		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = newtid;
 
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
 
+		/*
+		 * Make sure the tuple is marked as the latest and root offset
+		 * information is restored.
+		 */
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->root_offnum);
+
 		if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED)
 			PageClearAllVisible(page);
 
@@ -8775,6 +8905,9 @@ heap_xlog_confirm(XLogReaderState *record)
 		 */
 		ItemPointerSet(&htup->t_ctid, BufferGetBlockNumber(buffer), offnum);
 
+		/* For newly inserted tuple, set root offset to itself. */
+		HeapTupleHeaderSetHeapLatest(htup, offnum);
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8838,11 +8971,17 @@ heap_xlog_lock(XLogReaderState *record)
 		 */
 		if (HEAP_XMAX_IS_LOCKED_ONLY(htup->t_infomask))
 		{
+			ItemPointerData	target_tid;
+
+			ItemPointerSet(&target_tid, BufferGetBlockNumber(buffer), offnum);
 			HeapTupleHeaderClearHotUpdated(htup);
 			/* Make sure there is no forward chain link in t_ctid */
-			ItemPointerSet(&htup->t_ctid,
-						   BufferGetBlockNumber(buffer),
-						   offnum);
+			if (!HeapTupleHeaderHasRootOffset(htup))
+			{
+				OffsetNumber	root_offnum;
+				root_offnum = heap_get_root_tuple(page, offnum);
+				HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+			}
 		}
 		HeapTupleHeaderSetXmax(htup, xlrec->locking_xid);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
diff --git b/src/backend/access/heap/hio.c a/src/backend/access/heap/hio.c
index 6529fe3..8052519 100644
--- b/src/backend/access/heap/hio.c
+++ a/src/backend/access/heap/hio.c
@@ -31,12 +31,20 @@
  * !!! EREPORT(ERROR) IS DISALLOWED HERE !!!  Must PANIC on failure!!!
  *
  * Note - caller must hold BUFFER_LOCK_EXCLUSIVE on the buffer.
+ *
+ * The caller can optionally tell us the root offset to store in the tuple.
+ * Otherwise, the root offset is set to the offset of the new location once
+ * it's known.  The former is used while updating an existing tuple, where the
+ * caller tells us the root line pointer of the chain.  The latter is used
+ * while inserting a new row, so the root line pointer is set to the offset
+ * where this tuple is inserted.
  */
-void
+OffsetNumber
 RelationPutHeapTuple(Relation relation,
 					 Buffer buffer,
 					 HeapTuple tuple,
-					 bool token)
+					 bool token,
+					 OffsetNumber root_offnum)
 {
 	Page		pageHeader;
 	OffsetNumber offnum;
@@ -60,17 +68,24 @@ RelationPutHeapTuple(Relation relation,
 	ItemPointerSet(&(tuple->t_self), BufferGetBlockNumber(buffer), offnum);
 
 	/*
-	 * Insert the correct position into CTID of the stored tuple, too (unless
-	 * this is a speculative insertion, in which case the token is held in
-	 * CTID field instead)
+	 * Set block number and the root offset into CTID of the stored tuple, too
+	 * (unless this is a speculative insertion, in which case the token is held
+	 * in CTID field instead).
 	 */
 	if (!token)
 	{
 		ItemId		itemId = PageGetItemId(pageHeader, offnum);
 		Item		item = PageGetItem(pageHeader, itemId);
 
+		/* Copy t_ctid to set the correct block number. */
 		((HeapTupleHeader) item)->t_ctid = tuple->t_self;
+
+		if (!OffsetNumberIsValid(root_offnum))
+			root_offnum = offnum;
+		HeapTupleHeaderSetHeapLatest((HeapTupleHeader) item, root_offnum);
 	}
+
+	return root_offnum;
 }
 
 /*
diff --git b/src/backend/access/heap/pruneheap.c a/src/backend/access/heap/pruneheap.c
index d69a266..f54337c 100644
--- b/src/backend/access/heap/pruneheap.c
+++ a/src/backend/access/heap/pruneheap.c
@@ -55,6 +55,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
 
+static void heap_get_root_tuples_internal(Page page,
+				OffsetNumber target_offnum, OffsetNumber *root_offsets);
 
 /*
  * Optionally prune and repair fragmentation in the specified page.
@@ -553,6 +555,17 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
 		if (!HeapTupleHeaderIsHotUpdated(htup))
 			break;
 
+
+		/*
+		 * If the tuple was HOT-updated and the update was later
+		 * aborted, someone could mark this tuple as the last tuple in
+		 * the chain without clearing the HOT-updated flag. So we must
+		 * check whether this is the last tuple in the chain and stop
+		 * following the CTID, else we risk infinite recursion (though
+		 * prstate->marked[] currently protects against that).
+		 */
+		if (HeapTupleHeaderHasRootOffset(htup))
+			break;
 		/*
 		 * Advance to next chain member.
 		 */
@@ -726,27 +739,47 @@ heap_page_prune_execute(Buffer buffer,
 
 
 /*
- * For all items in this page, find their respective root line pointers.
- * If item k is part of a HOT-chain with root at item j, then we set
- * root_offsets[k - 1] = j.
+ * Either for all items in this page or for the given item, find their
+ * respective root line pointers.
+ *
+ * When target_offnum is a valid offset number, the caller is interested in
+ * just one item. In that case, the root line pointer is returned in
+ * root_offsets.
  *
- * The passed-in root_offsets array must have MaxHeapTuplesPerPage entries.
- * We zero out all unused entries.
+ * When target_offnum is InvalidOffsetNumber, the caller wants to know the
+ * root line pointers of all items on this page. The root_offsets array must
+ * have MaxHeapTuplesPerPage entries in that case. If item k is part of a
+ * HOT-chain with root at item j, then we set root_offsets[k - 1] = j. We zero
+ * out all unused entries.
  *
  * The function must be called with at least share lock on the buffer, to
  * prevent concurrent prune operations.
  *
+ * This is not a cheap function since it must scan through all line pointers
+ * and tuples on the page in order to find the root line pointers. To minimize
+ * the cost, we return early as soon as the root line pointer for
+ * target_offnum is found.
+ *
  * Note: The information collected here is valid only as long as the caller
  * holds a pin on the buffer. Once pin is released, a tuple might be pruned
  * and reused by a completely unrelated tuple.
+ *
+ * Note: This function must not be called inside a critical section because it
+ * internally calls HeapTupleHeaderGetUpdateXid which somewhere down the stack
+ * may try to allocate heap memory. Memory allocation is disallowed in a
+ * critical section.
  */
-void
-heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+static void
+heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offsets)
 {
 	OffsetNumber offnum,
 				maxoff;
 
-	MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
+	if (OffsetNumberIsValid(target_offnum))
+		*root_offsets = InvalidOffsetNumber;
+	else
+		MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
 
 	maxoff = PageGetMaxOffsetNumber(page);
 	for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum = OffsetNumberNext(offnum))
@@ -774,9 +807,28 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 
 			/*
 			 * This is either a plain tuple or the root of a HOT-chain.
-			 * Remember it in the mapping.
+			 *
+			 * If the target_offnum is specified and if we found its mapping,
+			 * return.
 			 */
-			root_offsets[offnum - 1] = offnum;
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (target_offnum == offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember the mapping for any other item; the
+				 * root_offsets array may not even have room for them, so be
+				 * careful not to write past the array.
+				 */
+			}
+			else
+			{
+				/* Remember it in the mapping. */
+				root_offsets[offnum - 1] = offnum;
+			}
 
 			/* If it's not the start of a HOT-chain, we're done with it */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
@@ -817,15 +869,65 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 				!TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(htup)))
 				break;
 
-			/* Remember the root line pointer for this item */
-			root_offsets[nextoffnum - 1] = offnum;
+			/*
+			 * If target_offnum is specified and we found its mapping, return.
+			 */
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (nextoffnum == target_offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember the mapping for any other item; the
+				 * root_offsets array may not even have room for them, so be
+				 * careful not to write past the array.
+				 */
+			}
+			else
+			{
+				/* Remember the root line pointer for this item. */
+				root_offsets[nextoffnum - 1] = offnum;
+			}
 
 			/* Advance to next chain member, if any */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				break;
 
+			/*
+			 * If the tuple was HOT-updated and the update was later aborted,
+			 * someone could mark this tuple as the last tuple in the chain
+			 * and store the root offset in CTID without clearing the
+			 * HOT-updated flag. So we must check whether CTID actually holds
+			 * the root offset, and break to avoid infinite recursion.
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
 		}
 	}
 }
+
+/*
+ * Get root line pointer for the given tuple.
+ */
+OffsetNumber
+heap_get_root_tuple(Page page, OffsetNumber target_offnum)
+{
+	OffsetNumber offnum = InvalidOffsetNumber;
+	heap_get_root_tuples_internal(page, target_offnum, &offnum);
+	return offnum;
+}
+
+/*
+ * Get root line pointers for all tuples in the page
+ */
+void
+heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+{
+	heap_get_root_tuples_internal(page, InvalidOffsetNumber, root_offsets);
+}
diff --git b/src/backend/access/heap/rewriteheap.c a/src/backend/access/heap/rewriteheap.c
index c7b283c..6ced1e7 100644
--- b/src/backend/access/heap/rewriteheap.c
+++ a/src/backend/access/heap/rewriteheap.c
@@ -419,14 +419,18 @@ rewrite_heap_tuple(RewriteState state,
 	 */
 	if (!((old_tuple->t_data->t_infomask & HEAP_XMAX_INVALID) ||
 		  HeapTupleHeaderIsOnlyLocked(old_tuple->t_data)) &&
-		!(ItemPointerEquals(&(old_tuple->t_self),
-							&(old_tuple->t_data->t_ctid))))
+		!(HeapTupleHeaderIsHeapLatest(old_tuple->t_data, &old_tuple->t_self)))
 	{
 		OldToNewMapping mapping;
 
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
-		hashkey.tid = old_tuple->t_data->t_ctid;
+
+		/* 
+		 * We've already checked that this is not the last tuple in the chain,
+		 * so fetch the next TID in the chain.
+		 */
+		HeapTupleHeaderGetNextTid(old_tuple->t_data, &hashkey.tid);
 
 		mapping = (OldToNewMapping)
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -439,7 +443,7 @@ rewrite_heap_tuple(RewriteState state,
 			 * set the ctid of this tuple to point to the new location, and
 			 * insert it right away.
 			 */
-			new_tuple->t_data->t_ctid = mapping->new_tid;
+			HeapTupleHeaderSetNextTid(new_tuple->t_data, &mapping->new_tid);
 
 			/* We don't need the mapping entry anymore */
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -525,7 +529,7 @@ rewrite_heap_tuple(RewriteState state,
 				new_tuple = unresolved->tuple;
 				free_new = true;
 				old_tid = unresolved->old_tid;
-				new_tuple->t_data->t_ctid = new_tid;
+				HeapTupleHeaderSetNextTid(new_tuple->t_data, &new_tid);
 
 				/*
 				 * We don't need the hash entry anymore, but don't free its
@@ -731,7 +735,12 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		onpage_tup->t_ctid = tup->t_self;
+		/* 
+		 * Set t_ctid just to ensure that block number is copied correctly, but
+		 * then immediately mark the tuple as the latest.
+		 */
+		HeapTupleHeaderSetNextTid(onpage_tup, &tup->t_self);
+		HeapTupleHeaderSetHeapLatest(onpage_tup, newoff);
 	}
 
 	/* If heaptup is a private copy, release it. */
diff --git b/src/backend/executor/execIndexing.c a/src/backend/executor/execIndexing.c
index 5242dee..2142273 100644
--- b/src/backend/executor/execIndexing.c
+++ a/src/backend/executor/execIndexing.c
@@ -789,7 +789,8 @@ retry:
 			  DirtySnapshot.speculativeToken &&
 			  TransactionIdPrecedes(GetCurrentTransactionId(), xwait))))
 		{
-			ctid_wait = tup->t_data->t_ctid;
+			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
+				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git b/src/backend/executor/execMain.c a/src/backend/executor/execMain.c
index a666391..bd72ad3 100644
--- b/src/backend/executor/execMain.c
+++ a/src/backend/executor/execMain.c
@@ -2585,7 +2585,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		 * As above, it should be safe to examine xmax and t_ctid without the
 		 * buffer content lock, because they can't be changing.
 		 */
-		if (ItemPointerEquals(&tuple.t_self, &tuple.t_data->t_ctid))
+		if (HeapTupleHeaderIsHeapLatest(tuple.t_data, &tuple.t_self))
 		{
 			/* deleted, so forget about it */
 			ReleaseBuffer(buffer);
@@ -2593,7 +2593,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		}
 
 		/* updated, so look at the updated row */
-		tuple.t_self = tuple.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tuple.t_data, &tuple.t_self);
 		/* updated row should have xmin matching this xmax */
 		priorXmax = HeapTupleHeaderGetUpdateXid(tuple.t_data);
 		ReleaseBuffer(buffer);
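To make the changed chain-following loops (EvalPlanQualFetch above, heap_lock_updated_tuple_rec earlier) concrete, here is a toy sketch of the new termination test. ToyTuple, toy_is_latest and toy_follow_chain are made-up names, and a single-page chain is assumed for simplicity; the point is that "latest" is now the flag, with the old t_ctid-points-to-itself test kept only for tuples written by pre-v10 servers:

```c
#include <assert.h>
#include <stdbool.h>

/* One version in an update chain on a single page: 'next' stands in for the
 * t_ctid offset, 'latest' for the HEAP_LATEST_TUPLE flag. */
typedef struct
{
    int  next;
    bool latest;
} ToyTuple;

/* HeapTupleHeaderIsHeapLatest analogue: flag set, or (pre-v10 tuple)
 * t_ctid pointing at itself. */
static bool
toy_is_latest(const ToyTuple *t, int self)
{
    return t->latest || t->next == self;
}

/* Follow the chain from 'start' (1-based) to its newest version, the way
 * EvalPlanQualFetch follows t_ctid. */
static int
toy_follow_chain(const ToyTuple *tuples, int start)
{
    int cur = start;

    while (!toy_is_latest(&tuples[cur - 1], cur))
        cur = tuples[cur - 1].next;    /* HeapTupleHeaderGetNextTid analogue */
    return cur;
}
```

Note that once the flag is set, 'next' no longer means "successor" (it holds the root offset), which is exactly why the patch asserts the flag is clear in HeapTupleHeaderGetNextTid.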
diff --git b/src/include/access/heapam.h a/src/include/access/heapam.h
index a864f78..95aa976 100644
--- b/src/include/access/heapam.h
+++ a/src/include/access/heapam.h
@@ -189,6 +189,7 @@ extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
+extern OffsetNumber heap_get_root_tuple(Page page, OffsetNumber target_offnum);
 extern void heap_get_root_tuples(Page page, OffsetNumber *root_offsets);
 
 /* in heap/syncscan.c */
diff --git b/src/include/access/heapam_xlog.h a/src/include/access/heapam_xlog.h
index b285f17..e6019d5 100644
--- b/src/include/access/heapam_xlog.h
+++ a/src/include/access/heapam_xlog.h
@@ -193,6 +193,8 @@ typedef struct xl_heap_update
 	uint8		flags;
 	TransactionId new_xmax;		/* xmax of the new tuple */
 	OffsetNumber new_offnum;	/* new tuple's offset */
+	OffsetNumber root_offnum;	/* offset of the root line pointer in case of
+								   HOT or WARM update */
 
 	/*
 	 * If XLOG_HEAP_CONTAINS_OLD_TUPLE or XLOG_HEAP_CONTAINS_OLD_KEY flags are
@@ -200,7 +202,7 @@ typedef struct xl_heap_update
 	 */
 } xl_heap_update;
 
-#define SizeOfHeapUpdate	(offsetof(xl_heap_update, new_offnum) + sizeof(OffsetNumber))
+#define SizeOfHeapUpdate	(offsetof(xl_heap_update, root_offnum) + sizeof(OffsetNumber))
 
 /*
  * This is what we need to know about vacuum page cleanup/redirect
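For readers unfamiliar with the SizeOf* convention this hunk touches: the macro names the last fixed-size member of the WAL record, which is why appending root_offnum requires bumping it from new_offnum to root_offnum. A simplified, hypothetical mirror of the layout (the leading fields are assumed here purely for illustration; the real struct is in heapam_xlog.h) shows why leaving the macro on the old member would undercount the record's fixed part:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint32_t TransactionId;
typedef uint16_t OffsetNumber;

/* Hypothetical stand-in for the fixed-size prefix of xl_heap_update. */
typedef struct
{
    TransactionId old_xmax;
    OffsetNumber  old_offnum;
    uint8_t       old_infobits_set;
    uint8_t       flags;
    TransactionId new_xmax;
    OffsetNumber  new_offnum;
    OffsetNumber  root_offnum;    /* the field the patch appends */
} toy_xl_heap_update;

/* Mirrors the patched SizeOfHeapUpdate: offset of the LAST fixed member
 * plus its size, so trailing struct padding is not written to WAL. */
#define SizeOfToyHeapUpdate \
    (offsetof(toy_xl_heap_update, root_offnum) + sizeof(OffsetNumber))
```

Had the macro stayed at new_offnum, the redo side would read two bytes of garbage where root_offnum was expected.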
diff --git b/src/include/access/hio.h a/src/include/access/hio.h
index 2824f23..921cb37 100644
--- b/src/include/access/hio.h
+++ a/src/include/access/hio.h
@@ -35,8 +35,8 @@ typedef struct BulkInsertStateData
 }	BulkInsertStateData;
 
 
-extern void RelationPutHeapTuple(Relation relation, Buffer buffer,
-					 HeapTuple tuple, bool token);
+extern OffsetNumber RelationPutHeapTuple(Relation relation, Buffer buffer,
+					 HeapTuple tuple, bool token, OffsetNumber root_offnum);
 extern Buffer RelationGetBufferForTuple(Relation relation, Size len,
 						  Buffer otherBuffer, int options,
 						  BulkInsertState bistate,
diff --git b/src/include/access/htup_details.h a/src/include/access/htup_details.h
index a6c7e31..7552186 100644
--- b/src/include/access/htup_details.h
+++ a/src/include/access/htup_details.h
@@ -260,13 +260,19 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x1800 are available */
+/* bit 0x0800 is available */
+#define HEAP_LATEST_TUPLE		0x1000	/*
+										 * This is the last tuple in chain and
+										 * ip_posid points to the root line
+										 * pointer
+										 */
 #define HEAP_KEYS_UPDATED		0x2000	/* tuple was updated and key cols
 										 * modified, or tuple deleted */
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xE000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+
 
 /*
  * HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins.  It is
@@ -504,6 +510,43 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+/*
+ * Mark this as the last tuple in the HOT chain. Before PostgreSQL v10 we
+ * stored the TID of the tuple itself in the t_ctid field to mark the end of
+ * the chain. Starting with v10, we instead use the HEAP_LATEST_TUPLE flag to
+ * identify the last tuple, and store the root line pointer of the HOT chain
+ * in the t_ctid field.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderSetHeapLatest(tup, offnum) \
+do { \
+	AssertMacro(OffsetNumberIsValid(offnum)); \
+	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE; \
+	ItemPointerSetOffsetNumber(&(tup)->t_ctid, (offnum)); \
+} while (0)
+
+#define HeapTupleHeaderClearHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 &= ~HEAP_LATEST_TUPLE \
+)
+
+/*
+ * Starting with PostgreSQL 10, the latest tuple in an update chain has
+ * HEAP_LATEST_TUPLE set; but tuples upgraded from earlier versions do not.
+ * For those, we determine whether a tuple is the latest by testing that its
+ * t_ctid points to itself.
+ *
+ * Note: beware of multiple evaluations of "tup" and "tid" arguments.
+ */
+#define HeapTupleHeaderIsHeapLatest(tup, tid) \
+( \
+  (((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0) || \
+  ((ItemPointerGetBlockNumber(&(tup)->t_ctid) == ItemPointerGetBlockNumber(tid)) && \
+   (ItemPointerGetOffsetNumber(&(tup)->t_ctid) == ItemPointerGetOffsetNumber(tid))) \
+)
+
+
 #define HeapTupleHeaderSetHeapOnly(tup) \
 ( \
   (tup)->t_infomask2 |= HEAP_ONLY_TUPLE \
@@ -542,6 +585,56 @@ do { \
 
 
 /*
+ * Set the t_ctid chain and also clear the HEAP_LATEST_TUPLE flag since we
+ * now have a new tuple in the chain and this is no longer the last tuple of
+ * the chain.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderSetNextTid(tup, tid) \
+do { \
+		ItemPointerCopy((tid), &((tup)->t_ctid)); \
+		HeapTupleHeaderClearHeapLatest((tup)); \
+} while (0)
+
+/*
+ * Get TID of next tuple in the update chain. Caller must have checked that
+ * we are not already at the end of the chain because in that case t_ctid may
+ * actually store the root line pointer of the HOT chain.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderGetNextTid(tup, next_ctid) \
+do { \
+	AssertMacro(!((tup)->t_infomask2 & HEAP_LATEST_TUPLE)); \
+	ItemPointerCopy(&(tup)->t_ctid, (next_ctid)); \
+} while (0)
+
+/*
+ * Get the root line pointer of the HOT chain. The caller should have confirmed
+ * that the root offset is cached before calling this macro.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderGetRootOffset(tup) \
+( \
+	AssertMacro(((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0), \
+	ItemPointerGetOffsetNumber(&(tup)->t_ctid) \
+)
+
+/*
+ * Return whether the tuple has a cached root offset.  We don't use
+ * HeapTupleHeaderIsHeapLatest because that one also considers the case of
+ * t_ctid pointing to itself, for tuples migrated from pre-v10 clusters. Here
+ * we are only interested in tuples marked with the HEAP_LATEST_TUPLE flag.
+ */
+#define HeapTupleHeaderHasRootOffset(tup) \
+( \
+	((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0 \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
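The macros above overload the offset part of t_ctid: it is either a forward link to the next version or a cached root line pointer, with HEAP_LATEST_TUPLE disambiguating the two. A minimal stand-in (ToyHeader and the toy_* helpers are invented names; the real header is HeapTupleHeaderData) sketches the intended state transitions and why the flag must be checked before reading the field:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define TOY_LATEST_TUPLE 0x1000

/* Minimal stand-in for the tuple header: just infomask2 and the offset
 * part of t_ctid (ip_posid), which the patch overloads. */
typedef struct
{
    uint16_t infomask2;
    uint16_t ctid_posid;   /* next tuple's offset, or the root offset */
} ToyHeader;

/* HeapTupleHeaderSetHeapLatest analogue: mark end of chain, cache root. */
static void
toy_set_latest(ToyHeader *h, uint16_t root_offnum)
{
    assert(root_offnum != 0);
    h->infomask2 |= TOY_LATEST_TUPLE;
    h->ctid_posid = root_offnum;
}

/* HeapTupleHeaderSetNextTid analogue: the chain grew, so the field is a
 * forward link again and the flag must be cleared. */
static void
toy_set_next(ToyHeader *h, uint16_t next_offnum)
{
    h->ctid_posid = next_offnum;
    h->infomask2 &= ~TOY_LATEST_TUPLE;
}

/* HeapTupleHeaderHasRootOffset analogue. */
static bool
toy_has_root_offset(const ToyHeader *h)
{
    return (h->infomask2 & TOY_LATEST_TUPLE) != 0;
}

/* HeapTupleHeaderGetRootOffset analogue: only valid when the flag is set. */
static uint16_t
toy_get_root_offset(const ToyHeader *h)
{
    assert(toy_has_root_offset(h));
    return h->ctid_posid;
}
```

Reading ctid_posid without consulting the flag would confuse a root offset with a successor's offset, which is the bug class the Get/Set assertions in the patch are there to catch.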
Attachment: 0000_interesting_attrs.patch (application/octet-stream)
commit 2c2e2be0a6459521ad1aebb285a3555649cc02ba
Author: Pavan Deolasee <pavan.deolasee@gmail.com>
Date:   Sun Jan 1 16:29:10 2017 +0530

    Alvaro's patch on interesting attrs

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index af25836..74fb09c 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -96,11 +96,8 @@ static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
 				HeapTuple newtup, HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
-static void HeapSatisfiesHOTandKeyUpdate(Relation relation,
-							 Bitmapset *hot_attrs,
-							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
-							 bool *satisfies_id,
+static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
+							 Bitmapset *interesting_cols,
 							 HeapTuple oldtup, HeapTuple newtup);
 static bool heap_acquire_tuplock(Relation relation, ItemPointer tid,
 					 LockTupleMode mode, LockWaitPolicy wait_policy,
@@ -3455,6 +3452,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *interesting_attrs;
+	Bitmapset  *modified_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3472,9 +3471,6 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				pagefree;
 	bool		have_tuple_lock = false;
 	bool		iscombo;
-	bool		satisfies_hot;
-	bool		satisfies_key;
-	bool		satisfies_id;
 	bool		use_hot_update = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
@@ -3501,21 +3497,30 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				 errmsg("cannot update tuples during a parallel operation")));
 
 	/*
-	 * Fetch the list of attributes to be checked for HOT update.  This is
-	 * wasted effort if we fail to update or have to put the new tuple on a
-	 * different page.  But we must compute the list before obtaining buffer
-	 * lock --- in the worst case, if we are doing an update on one of the
-	 * relevant system catalogs, we could deadlock if we try to fetch the list
-	 * later.  In any case, the relcache caches the data so this is usually
-	 * pretty cheap.
+	 * Fetch the list of attributes to be checked for various operations.
 	 *
-	 * Note that we get a copy here, so we need not worry about relcache flush
-	 * happening midway through.
+	 * For HOT considerations, this is wasted effort if we fail to update or
+	 * have to put the new tuple on a different page.  But we must compute the
+	 * list before obtaining buffer lock --- in the worst case, if we are doing
+	 * an update on one of the relevant system catalogs, we could deadlock if
+	 * we try to fetch the list later.  In any case, the relcache caches the
+	 * data so this is usually pretty cheap.
+	 *
+	 * We also need columns used by the replica identity, the columns that
+	 * are considered the "key" of rows in the table, and columns that are
+	 * part of indirect indexes.
+	 *
+	 * Note that we get copies of each bitmap, so we need not worry about
+	 * relcache flush happening midway through.
 	 */
 	hot_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_ALL);
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	interesting_attrs = bms_add_members(NULL, hot_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
+
 
 	block = ItemPointerGetBlockNumber(otid);
 	buffer = ReadBuffer(relation, block);
@@ -3536,7 +3541,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Assert(ItemIdIsNormal(lp));
 
 	/*
-	 * Fill in enough data in oldtup for HeapSatisfiesHOTandKeyUpdate to work
+	 * Fill in enough data in oldtup for HeapDetermineModifiedColumns to work
 	 * properly.
 	 */
 	oldtup.t_tableOid = RelationGetRelid(relation);
@@ -3562,6 +3567,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 		Assert(!(newtup->t_data->t_infomask & HEAP_HASOID));
 	}
 
+	/* Determine columns modified by the update. */
+	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
+												  &oldtup, newtup);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3573,10 +3582,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	 * is updates that don't manipulate key columns, not those that
 	 * serendipitiously arrive at the same key values.
 	 */
-	HeapSatisfiesHOTandKeyUpdate(relation, hot_attrs, key_attrs, id_attrs,
-								 &satisfies_hot, &satisfies_key,
-								 &satisfies_id, &oldtup, newtup);
-	if (satisfies_key)
+	if (!bms_overlap(modified_attrs, key_attrs))
 	{
 		*lockmode = LockTupleNoKeyExclusive;
 		mxact_status = MultiXactStatusNoKeyUpdate;
@@ -3815,6 +3821,8 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(modified_attrs);
+		bms_free(interesting_attrs);
 		return result;
 	}
 
@@ -4119,7 +4127,7 @@ l2:
 		 * to do a HOT update.  Check if any of the index columns have been
 		 * changed.  If not, then HOT update is possible.
 		 */
-		if (satisfies_hot)
+		if (!bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
 	}
 	else
@@ -4134,7 +4142,9 @@ l2:
 	 * ExtractReplicaIdentity() will return NULL if nothing needs to be
 	 * logged.
 	 */
-	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup, !satisfies_id, &old_key_copied);
+	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup,
+										   bms_overlap(modified_attrs, id_attrs),
+										   &old_key_copied);
 
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
@@ -4282,13 +4292,15 @@ l2:
 	bms_free(hot_attrs);
 	bms_free(key_attrs);
 	bms_free(id_attrs);
+	bms_free(modified_attrs);
+	bms_free(interesting_attrs);
 
 	return HeapTupleMayBeUpdated;
 }
 
 /*
  * Check if the specified attribute's value is same in both given tuples.
- * Subroutine for HeapSatisfiesHOTandKeyUpdate.
+ * Subroutine for HeapDetermineModifiedColumns.
  */
 static bool
 heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
@@ -4322,7 +4334,7 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 
 	/*
 	 * Extract the corresponding values.  XXX this is pretty inefficient if
-	 * there are many indexed columns.  Should HeapSatisfiesHOTandKeyUpdate do
+	 * there are many indexed columns.  Should HeapDetermineModifiedColumns do
 	 * a single heap_deform_tuple call on each tuple, instead?	But that
 	 * doesn't work for system columns ...
 	 */
@@ -4367,114 +4379,30 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 /*
  * Check which columns are being updated.
  *
- * This simultaneously checks conditions for HOT updates, for FOR KEY
- * SHARE updates, and REPLICA IDENTITY concerns.  Since much of the time they
- * will be checking very similar sets of columns, and doing the same tests on
- * them, it makes sense to optimize and do them together.
- *
- * We receive three bitmapsets comprising the three sets of columns we're
- * interested in.  Note these are destructively modified; that is OK since
- * this is invoked at most once in heap_update.
+ * Given an updated tuple, determine (and return into the output bitmapset),
+ * from those listed as interesting, the set of columns that changed.
  *
- * hot_result is set to TRUE if it's okay to do a HOT update (i.e. it does not
- * modified indexed columns); key_result is set to TRUE if the update does not
- * modify columns used in the key; id_result is set to TRUE if the update does
- * not modify columns in any index marked as the REPLICA IDENTITY.
+ * The input bitmapset is destructively modified; that is OK since this is
+ * invoked at most once in heap_update.
  */
-static void
-HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
-							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
-							 bool *satisfies_id,
+static Bitmapset *
+HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
 							 HeapTuple oldtup, HeapTuple newtup)
 {
-	int			next_hot_attnum;
-	int			next_key_attnum;
-	int			next_id_attnum;
-	bool		hot_result = true;
-	bool		key_result = true;
-	bool		id_result = true;
-
-	/* If REPLICA IDENTITY is set to FULL, id_attrs will be empty. */
-	Assert(bms_is_subset(id_attrs, key_attrs));
-	Assert(bms_is_subset(key_attrs, hot_attrs));
-
-	/*
-	 * If one of these sets contains no remaining bits, bms_first_member will
-	 * return -1, and after adding FirstLowInvalidHeapAttributeNumber (which
-	 * is negative!)  we'll get an attribute number that can't possibly be
-	 * real, and thus won't match any actual attribute number.
-	 */
-	next_hot_attnum = bms_first_member(hot_attrs);
-	next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
-	next_key_attnum = bms_first_member(key_attrs);
-	next_key_attnum += FirstLowInvalidHeapAttributeNumber;
-	next_id_attnum = bms_first_member(id_attrs);
-	next_id_attnum += FirstLowInvalidHeapAttributeNumber;
+	int		attnum;
+	Bitmapset *modified = NULL;
 
-	for (;;)
+	while ((attnum = bms_first_member(interesting_cols)) >= 0)
 	{
-		bool		changed;
-		int			check_now;
-
-		/*
-		 * Since the HOT attributes are a superset of the key attributes and
-		 * the key attributes are a superset of the id attributes, this logic
-		 * is guaranteed to identify the next column that needs to be checked.
-		 */
-		if (hot_result && next_hot_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_hot_attnum;
-		else if (key_result && next_key_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_key_attnum;
-		else if (id_result && next_id_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_id_attnum;
-		else
-			break;
+		attnum += FirstLowInvalidHeapAttributeNumber;
 
-		/* See whether it changed. */
-		changed = !heap_tuple_attr_equals(RelationGetDescr(relation),
-										  check_now, oldtup, newtup);
-		if (changed)
-		{
-			if (check_now == next_hot_attnum)
-				hot_result = false;
-			if (check_now == next_key_attnum)
-				key_result = false;
-			if (check_now == next_id_attnum)
-				id_result = false;
-
-			/* if all are false now, we can stop checking */
-			if (!hot_result && !key_result && !id_result)
-				break;
-		}
-
-		/*
-		 * Advance the next attribute numbers for the sets that contain the
-		 * attribute we just checked.  As we work our way through the columns,
-		 * the next_attnum values will rise; but when each set becomes empty,
-		 * bms_first_member() will return -1 and the attribute number will end
-		 * up with a value less than FirstLowInvalidHeapAttributeNumber.
-		 */
-		if (hot_result && check_now == next_hot_attnum)
-		{
-			next_hot_attnum = bms_first_member(hot_attrs);
-			next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
-		if (key_result && check_now == next_key_attnum)
-		{
-			next_key_attnum = bms_first_member(key_attrs);
-			next_key_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
-		if (id_result && check_now == next_id_attnum)
-		{
-			next_id_attnum = bms_first_member(id_attrs);
-			next_id_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
+		if (!heap_tuple_attr_equals(RelationGetDescr(relation),
+								   attnum, oldtup, newtup))
+			modified = bms_add_member(modified,
+									  attnum - FirstLowInvalidHeapAttributeNumber);
 	}
 
-	*satisfies_hot = hot_result;
-	*satisfies_key = key_result;
-	*satisfies_id = id_result;
+	return modified;
 }
 
 /*
#63Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Tom Lane (#49)
Re: Patch: Write Amplification Reduction Method (WARM)

Hi Tom,

On Wed, Feb 1, 2017 at 3:51 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

(I'm a little more concerned by Alvaro's apparent position that WARM
is a done deal; I didn't think so.

Are there any specific aspects of the design that you're not comfortable
with? I'm sure there could be some rough edges in the implementation that
I'm hoping will get handled during the further review process. But if there
are some obvious things I'm overlooking please let me know.

Probably the same question to Andres/Robert, who have flagged concerns. On my
side, I've run some very long tests with data validation and haven't found
any new issues with the most recent patches.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#64Bruce Momjian
bruce@momjian.us
In reply to: Pavan Deolasee (#39)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Jan 31, 2017 at 04:52:39PM +0530, Pavan Deolasee wrote:

The other critical bug I found, which unfortunately exists in the master too,
is the index corruption during CIC. The patch includes the same fix that I've
proposed on the other thread. With these changes, WARM stress is running fine
for last 24 hours on a decently powerful box. Multiple CREATE/DROP INDEX cycles
and updates via different indexed columns, with a mix of FOR SHARE/UPDATE and
rollbacks did not produce any consistency issues. A side note: while
performance measurement wasn't a goal of the stress tests, WARM has done about 67%
more transactions than master in a 24 hour period (95M in master vs 156M in WARM,
to be precise, on a 30GB table including indexes). I believe the numbers would
be far better had the test not been dropping and recreating the indexes, which
effectively cleans up all index bloat. Also, the table is small enough to fit
in the shared buffers. I'll rerun these tests with a much larger scale factor and
without dropping indexes.

Thanks for setting up the test harness. I know it is hard but
in this case it has found an existing bug and given good performance
numbers. :-)

I have what might be a stupid question. As I remember, WARM only allows
a single index-column change in the chain. Why are you seeing such a
large performance improvement? I would have expected an improvement that
high only if we allowed an unlimited number of index changes in the chain.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#65Bruce Momjian
bruce@momjian.us
In reply to: Pavan Deolasee (#55)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Feb 1, 2017 at 10:46:45AM +0530, Pavan Deolasee wrote:

contains a WARM tuple. Alternate ideas/suggestions and review of the design
are welcome!

t_infomask2 contains one last unused bit,

Umm, WARM is using 2 unused bits from t_infomask2. You mean there is another
free bit after that too?

We are obviously going to use several heap or item pointer bits for
WARM, and once we do that it is going to be hard to undo that. Pavan,
are you saying you could do more with WARM if you had more bits? Are we
sure we have given you all the bits we can? Do we want to commit to a
lesser feature because the bits are not available?

and we could reuse vacuum
full's bits (HEAP_MOVED_OUT, HEAP_MOVED_IN), but that will need some
thinking ahead.  Maybe now's the time to start versioning relations so
that we can ensure clusters upgraded to pg10 do not contain any of those
bits in any tuple headers.

Yeah, IIRC old VACUUM FULL was removed in 9.0, which is a good 6 years old.
Obviously, there is still a chance that a pre-9.0 binary-upgraded cluster exists
and upgrades to 10. So we still need to do something about them if we reuse
these bits. I'm surprised to see that we don't have any mechanism in place to
clear those bits, so maybe we should add something to do that.

Yeah, good question. :-( We have talked about adding some page,
table, or cluster-level version number so we could identify if a given
tuple _could_ be using those bits, but never did it.

I had some other ideas (and a patch too) to reuse bits from t_ctid.ip_posid given
that offset numbers can be represented in just 13 bits, even with the maximum
block size. I can look at that if it comes to finding more bits.

OK, so it seems more bits is not a blocker to enhancements, yet.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +


#66Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Bruce Momjian (#64)
Re: Patch: Write Amplification Reduction Method (WARM)

Bruce Momjian wrote:

As I remember, WARM only allows
a single index-column change in the chain. Why are you seeing such a
large performance improvement? I would have thought it would be that
high if we allowed an unlimited number of index changes in the chain.

The second update in a chain creates another non-warm-updated tuple, so
the third update can be a warm update again, and so on.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#67Bruce Momjian
bruce@momjian.us
In reply to: Alvaro Herrera (#66)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Feb 23, 2017 at 03:03:39PM -0300, Alvaro Herrera wrote:

Bruce Momjian wrote:

As I remember, WARM only allows
a single index-column change in the chain. Why are you seeing such a
large performance improvement? I would have thought it would be that
high if we allowed an unlimited number of index changes in the chain.

The second update in a chain creates another non-warm-updated tuple, so
the third update can be a warm update again, and so on.

Right, before this patch they would be two independent HOT chains. It
still seems like an unexpectedly-high performance win. Are two
independent HOT chains that much more expensive than joining them via
WARM?

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +


#68Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Bruce Momjian (#67)
Re: Patch: Write Amplification Reduction Method (WARM)

Bruce Momjian wrote:

On Thu, Feb 23, 2017 at 03:03:39PM -0300, Alvaro Herrera wrote:

Bruce Momjian wrote:

As I remember, WARM only allows
a single index-column change in the chain. Why are you seeing such a
large performance improvement? I would have thought it would be that
high if we allowed an unlimited number of index changes in the chain.

The second update in a chain creates another non-warm-updated tuple, so
the third update can be a warm update again, and so on.

Right, before this patch they would be two independent HOT chains.

No, they would be a regular update chain, not HOT updates.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#69Bruce Momjian
bruce@momjian.us
In reply to: Alvaro Herrera (#68)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Feb 23, 2017 at 03:26:09PM -0300, Alvaro Herrera wrote:

Bruce Momjian wrote:

On Thu, Feb 23, 2017 at 03:03:39PM -0300, Alvaro Herrera wrote:

Bruce Momjian wrote:

As I remember, WARM only allows
a single index-column change in the chain. Why are you seeing such a
large performance improvement? I would have thought it would be that
high if we allowed an unlimited number of index changes in the chain.

The second update in a chain creates another non-warm-updated tuple, so
the third update can be a warm update again, and so on.

Right, before this patch they would be two independent HOT chains.

No, they would be a regular update chain, not HOT updates.

Well, let's walk through this. Let's suppose you have three updates
that stay on the same page and don't update any indexed columns --- that
would produce a HOT chain of four tuples. If you then do an update that
changes an indexed column, prior to this patch, you get a normal update,
and more HOT updates can be added to this. With WARM, we can join those
chains and potentially trim the first HOT chain as those tuples become
invisible.

Am I missing something?

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +


#70Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Bruce Momjian (#69)
Re: Patch: Write Amplification Reduction Method (WARM)

Bruce Momjian wrote:

Well, let's walk through this. Let's suppose you have three updates
that stay on the same page and don't update any indexed columns --- that
would produce a HOT chain of four tuples. If you then do an update that
changes an indexed column, prior to this patch, you get a normal update,
and more HOT updates can be added to this. With WARM, we can join those
chains

With WARM, what happens is that the first three updates are HOT updates
just like currently, and the fourth one is a WARM update.

and potentially trim the first HOT chain as those tuples become
invisible.

That can already happen even without WARM, no?

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#71Bruce Momjian
bruce@momjian.us
In reply to: Alvaro Herrera (#70)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Feb 23, 2017 at 03:45:24PM -0300, Alvaro Herrera wrote:

Bruce Momjian wrote:

Well, let's walk through this. Let's suppose you have three updates
that stay on the same page and don't update any indexed columns --- that
would produce a HOT chain of four tuples. If you then do an update that
changes an indexed column, prior to this patch, you get a normal update,
and more HOT updates can be added to this. With WARM, we can join those
chains

With WARM, what happens is that the first three updates are HOT updates
just like currently, and the fourth one is a WARM update.

Right.

and potentially trim the first HOT chain as those tuples become
invisible.

That can already happen even without WARM, no?

Uh, the point is that with WARM those four early tuples can be removed
via a prune, rather than requiring a VACUUM. Without WARM, the fourth
tuple can't be removed until the index is cleared by VACUUM.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +


#72Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Bruce Momjian (#71)
Re: Patch: Write Amplification Reduction Method (WARM)

Bruce Momjian wrote:

On Thu, Feb 23, 2017 at 03:45:24PM -0300, Alvaro Herrera wrote:

and potentially trim the first HOT chain as those tuples become
invisible.

That can already happen even without WARM, no?

Uh, the point is that with WARM those four early tuples can be removed
via a prune, rather than requiring a VACUUM. Without WARM, the fourth
tuple can't be removed until the index is cleared by VACUUM.

I *think* that the WARM-updated one cannot be pruned either, because
it's pointed to by at least one index (otherwise it'd have been a HOT
update). The ones prior to that can be removed either way.

I think the part you want (be able to prune the WARM updated tuple) is
part of what Pavan calls "turning the WARM chain into a HOT chain", so
not part of the initial patch. Pavan can explain this part better, and
also set me straight in case I'm wrong in the above :-)

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#73Bruce Momjian
bruce@momjian.us
In reply to: Alvaro Herrera (#72)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Feb 23, 2017 at 03:58:59PM -0300, Alvaro Herrera wrote:

Bruce Momjian wrote:

On Thu, Feb 23, 2017 at 03:45:24PM -0300, Alvaro Herrera wrote:

and potentially trim the first HOT chain as those tuples become
invisible.

That can already happen even without WARM, no?

Uh, the point is that with WARM those four early tuples can be removed
via a prune, rather than requiring a VACUUM. Without WARM, the fourth
tuple can't be removed until the index is cleared by VACUUM.

I *think* that the WARM-updated one cannot be pruned either, because
it's pointed to by at least one index (otherwise it'd have been a HOT
update). The ones prior to that can be removed either way.

Well, if you can't prune across index-column changes, how is a WARM
update different than just two HOT chains with no WARM linkage?

I think the part you want (be able to prune the WARM updated tuple) is
part of what Pavan calls "turning the WARM chain into a HOT chain", so
not part of the initial patch. Pavan can explain this part better, and
also set me straight in case I'm wrong in the above :-)

VACUUM can already remove entire HOT chains that have expired. What
his VACUUM patch does, I think, is to remove the index entries that no
longer point to values in the HOT/WARM chain, turning the chain into a
fully HOT one, so another WARM addition to the chain can happen.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +


#74Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Bruce Momjian (#65)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Feb 23, 2017 at 11:30 PM, Bruce Momjian <bruce@momjian.us> wrote:

On Wed, Feb 1, 2017 at 10:46:45AM +0530, Pavan Deolasee wrote:

contains a WARM tuple. Alternate ideas/suggestions and review of the design
are welcome!

t_infomask2 contains one last unused bit,

Umm, WARM is using 2 unused bits from t_infomask2. You mean there is another
free bit after that too?

We are obviously going to use several heap or item pointer bits for
WARM, and once we do that it is going to be hard to undo that. Pavan,
are you saying you could do more with WARM if you had more bits? Are we
sure we have given you all the bits we can? Do we want to commit to a
lesser feature because the bits are not available?

The btree implementation is as complete as I would like (there are a few
TODOs, but no show stoppers), at least for the first release. There is a
free bit in btree index tuple header that I could use for chain conversion.
In the heap tuples, I can reuse HEAP_MOVED_OFF because that bit will only
be set along with HEAP_WARM_TUPLE bit. Since none of the upgraded clusters
can have HEAP_WARM_TUPLE bit set, I think we are safe.

WARM currently also supports hash indexes, but there is no free bit left in
hash index tuple header. I think I can work around that by using a bit from
ip_posid (not yet implemented/tested, but seems doable).

IMHO if we can do that i.e. support btree and hash indexes to start with,
we should be good to go for the first release. We can try to support other
popular index AMs in the subsequent release.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#75Robert Haas
robertmhaas@gmail.com
In reply to: Bruce Momjian (#64)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Feb 23, 2017 at 9:21 PM, Bruce Momjian <bruce@momjian.us> wrote:

On Tue, Jan 31, 2017 at 04:52:39PM +0530, Pavan Deolasee wrote:

The other critical bug I found, which unfortunately exists in the master too,
is the index corruption during CIC. The patch includes the same fix that I've
proposed on the other thread. With these changes, WARM stress is running fine
for last 24 hours on a decently powerful box. Multiple CREATE/DROP INDEX cycles
and updates via different indexed columns, with a mix of FOR SHARE/UPDATE and
rollbacks did not produce any consistency issues. A side note: while
performance measurement wasn't a goal of the stress tests, WARM has done about 67%
more transactions than master in a 24 hour period (95M in master vs 156M in WARM,
to be precise, on a 30GB table including indexes). I believe the numbers would
be far better had the test not been dropping and recreating the indexes, which
effectively cleans up all index bloat. Also, the table is small enough to fit
in the shared buffers. I'll rerun these tests with a much larger scale factor and
without dropping indexes.

Thanks for setting up the test harness. I know it is hard but
in this case it has found an existing bug and given good performance
numbers. :-)

I have what might be a stupid question. As I remember, WARM only allows
a single index-column change in the chain. Why are you seeing such a
large performance improvement? I would have expected an improvement that
high only if we allowed an unlimited number of index changes in the chain.

I'm not sure how the test case is set up. If the table has multiple
indexes, each on a different column, and only one of the indexes is
updated, then you figure to win because now the other indexes need
less maintenance (and get less bloated). If you have only a single
index, then I don't see how WARM can be any better than HOT, but maybe
I just don't understand the situation.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#76Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Bruce Momjian (#67)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Feb 23, 2017 at 11:53 PM, Bruce Momjian <bruce@momjian.us> wrote:

On Thu, Feb 23, 2017 at 03:03:39PM -0300, Alvaro Herrera wrote:

Bruce Momjian wrote:

As I remember, WARM only allows
a single index-column change in the chain. Why are you seeing such a
large performance improvement? I would have thought it would be that
high if we allowed an unlimited number of index changes in the chain.

The second update in a chain creates another non-warm-updated tuple, so
the third update can be a warm update again, and so on.

Right, before this patch they would be two independent HOT chains. It
still seems like an unexpectedly-high performance win. Are two
independent HOT chains that much more expensive than joining them via
WARM?

In these tests, there are zero HOT updates, since every update modifies
some indexed column. With WARM, we could reduce regular updates by half, even
when we allow only one WARM update per chain (the chain really has a single
tuple for this discussion). IOW, approximately half the updates insert a new
index entry in *every* index and half insert a new index entry *only* in the
affected index. That by itself does a good bit for performance.

So to answer your question: yes, joining two HOT chains via WARM is much
cheaper because it results in creating new index entries just for affected
indexes.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#77Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Robert Haas (#75)
Re: Patch: Write Amplification Reduction Method (WARM)

On Fri, Feb 24, 2017 at 2:13 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Thu, Feb 23, 2017 at 9:21 PM, Bruce Momjian <bruce@momjian.us> wrote:

I have what might be a supid question. As I remember, WARM only allows
a single index-column change in the chain. Why are you seeing such a
large performance improvement? I would have thought it would be that
high if we allowed an unlimited number of index changes in the chain.

I'm not sure how the test case is set up. If the table has multiple
indexes, each on a different column, and only one of the indexes is
updated, then you figure to win because now the other indexes need
less maintenance (and get less bloated). If you have only a single
index, then I don't see how WARM can be any better than HOT, but maybe
I just don't understand the situation.

That's correct. If you have just one index and the UPDATE modifies the
indexed column, the UPDATE won't be a WARM update and the patch gives you
no benefit. OTOH if the UPDATE doesn't modify any indexed columns, then it
will be a HOT update and again the patch gives you no benefit. It might be
worthwhile to see if the patch causes any regression in these scenarios, though
I think it will be minimal or zero.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#78Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Alvaro Herrera (#72)
Re: Patch: Write Amplification Reduction Method (WARM)

On Fri, Feb 24, 2017 at 12:28 AM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

Bruce Momjian wrote:

On Thu, Feb 23, 2017 at 03:45:24PM -0300, Alvaro Herrera wrote:

and potentially trim the first HOT chain as those tuples become
invisible.

That can already happen even without WARM, no?

Uh, the point is that with WARM those four early tuples can be removed
via a prune, rather than requiring a VACUUM. Without WARM, the fourth
tuple can't be removed until the index is cleared by VACUUM.

I *think* that the WARM-updated one cannot be pruned either, because
it's pointed to by at least one index (otherwise it'd have been a HOT
update). The ones prior to that can be removed either way.

No, even the WARM-updated tuple can be pruned, and if there are further HOT
updates, those can be pruned too. All indexes, and even multiple pointers
from the same index, always point to the root of the WARM chain, and
that line pointer does not go away unless the entire chain becomes dead. The
only material difference between HOT and WARM is that since there are two
index pointers from the same index to the same root line pointer, we must
do a recheck. But HOT-pruning and all such things remain the same.

Let's take an example. Say, we have a table (a int, b int, c text) and two
indexes on first two columns.

                H                      W                      H
(1, 100, 'foo') -----> (1, 100, 'bar') -----> (1, 200, 'bar') -----> (1, 200, 'foo')

The first update will be a HOT update, the second update will be a WARM
update, and the third update will again be a HOT update. The first and third
updates do not create any new index entry, though the second update will
create a new index entry in the second index. Any further WARM updates to
this chain are not allowed, but further HOT updates are OK.

If all but the last version become DEAD, HOT-prune will remove all of them
and turn the first line pointer into a REDIRECT line pointer. At this point,
the first index has one index pointer and the second index has two index
pointers, all pointing to the same root line pointer, which has now
become a REDIRECT line pointer.

Redirect
o-----------------------> (1, 200, 'foo')

I think the part you want (be able to prune the WARM updated tuple) is
part of what Pavan calls "turning the WARM chain into a HOT chain", so
not part of the initial patch. Pavan can explain this part better, and
also set me straight in case I'm wrong in the above :-)

Umm.. it's a bit different. Without chain conversion, we still don't allow
further WARM updates to the above chain because that might create a third
index pointer and our recheck logic can't cope with duplicate scans. HOT
updates are allowed though.

The latest patch that I proposed will handle this case and convert such
chains into regular HOT-pruned chains. To do that, we must remove the
duplicate (and now wrong) index pointer to the chain. Once we do that and
change the state on the heap tuple, we can once again do a WARM update to
this tuple. Note that in this example the chain has just one tuple, which
will be the case typically, but the algorithm can deal with the case where
there are multiple tuples but with matching index keys.

Hope this helps.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#79Robert Haas
robertmhaas@gmail.com
In reply to: Pavan Deolasee (#78)
Re: Patch: Write Amplification Reduction Method (WARM)

On Fri, Feb 24, 2017 at 2:42 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

Let's take an example. Say, we have a table (a int, b int, c text) and two
indexes on first two columns.

                H                      W                      H
(1, 100, 'foo') -----> (1, 100, 'bar') -----> (1, 200, 'bar') -----> (1, 200, 'foo')

The first update will be a HOT update, the second update will be a WARM
update and the third update will again be a HOT update. The first and third
update do not create any new index entry, though the second update will
create a new index entry in the second index. Any further WARM updates to
this chain are not allowed, but further HOT updates are OK.

If all but the last version become DEAD, HOT-prune will remove all of them
and turn the first line pointer into REDIRECT line pointer.

So, when you do the WARM update, the new index entries still point at
the original root, which they don't match, not the version where that
new value first appeared?

I don't immediately see how this will work with index-only scans. If
the tuple is HOT updated several times, HOT-pruned back to a single
version, and then the page is all-visible, the index entries are
guaranteed to agree with the remaining tuple, so it's fine to believe
the data in the index tuple. But with WARM, that would no longer be
true, unless you have some trick for that...

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#80Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Robert Haas (#79)
Re: Patch: Write Amplification Reduction Method (WARM)

On Fri, Feb 24, 2017 at 3:23 PM, Robert Haas <robertmhaas@gmail.com> wrote:

I don't immediately see how this will work with index-only scans. If
the tuple is HOT updated several times, HOT-pruned back to a single
version, and then the page is all-visible, the index entries are
guaranteed to agree with the remaining tuple, so it's fine to believe
the data in the index tuple. But with WARM, that would no longer be
true, unless you have some trick for that...

Well the trick is to not allow index-only scans on such pages by not
marking them all-visible. That's why when a tuple is WARM updated, we carry
that information in the subsequent versions even when later updates are HOT
updates. The chain conversion algorithm will handle this by clearing those
bits and thus allowing index-only scans again.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#81Robert Haas
robertmhaas@gmail.com
In reply to: Pavan Deolasee (#80)
Re: Patch: Write Amplification Reduction Method (WARM)

On Fri, Feb 24, 2017 at 3:31 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Fri, Feb 24, 2017 at 3:23 PM, Robert Haas <robertmhaas@gmail.com> wrote:

I don't immediately see how this will work with index-only scans. If
the tuple is HOT updated several times, HOT-pruned back to a single
version, and then the page is all-visible, the index entries are
guaranteed to agree with the remaining tuple, so it's fine to believe
the data in the index tuple. But with WARM, that would no longer be
true, unless you have some trick for that...

Well the trick is to not allow index-only scans on such pages by not marking
them all-visible. That's why when a tuple is WARM updated, we carry that
information in the subsequent versions even when later updates are HOT
updates. The chain conversion algorithm will handle this by clearing those
bits and thus allowing index-only scans again.

Wow, OK. In my view, that makes the chain conversion code pretty much
essential, because if you had WARM without chain conversion then the
visibility map gets more or less irrevocably less effective over time,
which sounds terrible. But it sounds to me like even with the chain
conversion, it might take multiple vacuum passes before all visibility
map bits are set, which isn't such a great property (thus e.g.
fdf9e21196a6f58c6021c967dc5776a16190f295).

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#82Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Robert Haas (#81)
Re: Patch: Write Amplification Reduction Method (WARM)

On Fri, Feb 24, 2017 at 3:42 PM, Robert Haas <robertmhaas@gmail.com> wrote:

Wow, OK. In my view, that makes the chain conversion code pretty much
essential, because if you had WARM without chain conversion then the
visibility map gets more or less irrevocably less effective over time,
which sounds terrible.

Yes. I decided to complete the chain conversion patch when I realised that
IOS will otherwise become completely useless if a large percentage of rows
are updated just once. So I agree. It's not an optional patch and should get
in with the main WARM patch.

But it sounds to me like even with the chain
conversion, it might take multiple vacuum passes before all visibility
map bits are set, which isn't such a great property (thus e.g.
fdf9e21196a6f58c6021c967dc5776a16190f295).

The chain conversion algorithm first converts the chains during vacuum and
then checks if the page can be set all-visible. So I'm not sure why it
would take multiple vacuums before a page is set all-visible. The commit
you quote was written to ensure that we make another attempt to set the
page all-visible after all dead tuples are removed from the page. Similarly,
we will convert all WARM chains to HOT chains and then check for
all-visibility of the page.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#83Robert Haas
robertmhaas@gmail.com
In reply to: Pavan Deolasee (#82)
Re: Patch: Write Amplification Reduction Method (WARM)

On Fri, Feb 24, 2017 at 4:06 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

Wow, OK. In my view, that makes the chain conversion code pretty much
essential, because if you had WARM without chain conversion then the
visibility map gets more or less irrevocably less effective over time,
which sounds terrible.

Yes. I decided to complete the chain conversion patch when I realised that IOS
will otherwise become completely useless if a large percentage of rows are
updated just once. So I agree. It's not an optional patch and should get in
with the main WARM patch.

Right, and it's not just index-only scans. VACUUM gets permanently
more expensive, too, which is probably a much worse problem.

But it sounds to me like even with the chain
conversion, it might take multiple vacuum passes before all visibility
map bits are set, which isn't such a great property (thus e.g.
fdf9e21196a6f58c6021c967dc5776a16190f295).

The chain conversion algorithm first converts the chains during vacuum and
then checks if the page can be set all-visible. So I'm not sure why it would
take multiple vacuums before a page is set all-visible. The commit you quote
was written to ensure that we make another attempt to set the page
all-visible after all dead tuples are removed from the page. Similarly, we
will convert all WARM chains to HOT chains and then check for all-visibility
of the page.

OK, that sounds good. And there are no bugs, right? :-)

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#84Bruce Momjian
bruce@momjian.us
In reply to: Pavan Deolasee (#76)
Re: Patch: Write Amplification Reduction Method (WARM)

On Fri, Feb 24, 2017 at 02:14:23PM +0530, Pavan Deolasee wrote:

On Thu, Feb 23, 2017 at 11:53 PM, Bruce Momjian <bruce@momjian.us> wrote:

On Thu, Feb 23, 2017 at 03:03:39PM -0300, Alvaro Herrera wrote:

Bruce Momjian wrote:

As I remember, WARM only allows
a single index-column change in the chain. Why are you seeing such a
large performance improvement?� I would have thought it would be that
high if we allowed an unlimited number of index changes in the chain.

The second update in a chain creates another non-warm-updated tuple, so
the third update can be a warm update again, and so on.

Right, before this patch they would be two independent HOT chains. It
still seems like an unexpectedly-high performance win. Are two
independent HOT chains that much more expensive than joining them via
WARM?

In these tests, there are zero HOT updates, since every update modifies some
index column. With WARM, we could reduce regular updates to half, even when we
allow only one WARM update per chain (the chain really has a single tuple for
this discussion). IOW approximately half the updates insert a new index entry
in *every* index and half insert a new index entry *only* in the affected
index. That by itself does a good bit for performance.

So to answer your question: yes, joining two HOT chains via WARM is much
cheaper because it results in creating new index entries just for affected
indexes.

OK, all my questions have been answered, including the use of flag bits.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +


#85Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Robert Haas (#83)
Re: Patch: Write Amplification Reduction Method (WARM)

On Fri, Feb 24, 2017 at 9:47 PM, Robert Haas <robertmhaas@gmail.com> wrote:

And there are no bugs, right? :-)

Yeah yeah absolutely nothing. Just like any other feature committed to
Postgres so far ;-)

I need to polish the chain conversion patch a bit and also add missing
support for redo, hash indexes etc. Support for hash indexes will need
overloading of ip_posid bits in the index tuple (since there are no free
bits left in hash tuples). I plan to work on that next and submit a fully
functional patch, hopefully before the commit-fest starts.

(I have mentioned the idea of overloading ip_posid bits a few times now and
haven't heard any objection so far. Well, that could either mean that
nobody has read those emails seriously or there is general acceptance to
that idea. I am assuming the latter :-))

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#86Bruce Momjian
bruce@momjian.us
In reply to: Pavan Deolasee (#85)
Re: Patch: Write Amplification Reduction Method (WARM)

On Sat, Feb 25, 2017 at 10:50:57AM +0530, Pavan Deolasee wrote:

On Fri, Feb 24, 2017 at 9:47 PM, Robert Haas <robertmhaas@gmail.com> wrote:
And there are no bugs, right? :-)

Yeah yeah absolutely nothing. Just like any other feature committed to Postgres
so far ;-)

I need to polish the chain conversion patch a bit and also add missing support
for redo, hash indexes etc. Support for hash indexes will need overloading of
ip_posid bits in the index tuple (since there are no free bits left in hash
tuples). I plan to work on that next and submit a fully functional patch,
hopefully before the commit-fest starts.

(I have mentioned the idea of overloading ip_posid bits a few times now and
haven't heard any objection so far. Well, that could either mean that nobody
has read those emails seriously or there is general acceptance to that idea.. I
am assuming latter :-))

Yes, I think it is the latter.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +


#87Robert Haas
robertmhaas@gmail.com
In reply to: Pavan Deolasee (#85)
Re: Patch: Write Amplification Reduction Method (WARM)

On Sat, Feb 25, 2017 at 10:50 AM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Fri, Feb 24, 2017 at 9:47 PM, Robert Haas <robertmhaas@gmail.com> wrote:

And there are no bugs, right? :-)

Yeah yeah absolutely nothing. Just like any other feature committed to
Postgres so far ;-)

Fair point, but I've already said why I think the stakes for this
particular feature are pretty high.

I need to polish the chain conversion patch a bit and also add missing
support for redo, hash indexes etc. Support for hash indexes will need
overloading of ip_posid bits in the index tuple (since there are no free
bits left in hash tuples). I plan to work on that next and submit a fully
functional patch, hopefully before the commit-fest starts.

(I have mentioned the idea of overloading ip_posid bits a few times now and
haven't heard any objection so far. Well, that could either mean that nobody
has read those emails seriously or there is general acceptance to that
idea.. I am assuming latter :-))

I'm not sure about that. I'm not really sure I have an opinion on
that yet, without seeing the patch. The discussion upthread was a bit
vague:

"One idea is to free up 3 bits from ip_posid knowing that OffsetNumber
can never really need more than 13 bits with the other constraints in
place."

Not sure what "the other constraints" are, exactly.

/me goes off, tries to figure it out.

If I'm reading the definition of MaxIndexTuplesPerPage correctly, it
thinks that the minimum number of bytes per index tuple is at least
16: I think sizeof(IndexTupleData) will be 8, so when you add 1 and
MAXALIGN, you get to 12, and then ItemIdData is another 4. So an 8k
page (2^13 bits) could have, on a platform with MAXIMUM_ALIGNOF == 4,
as many as 2^9 tuples. To store more than 2^13 tuples, we'd need a
block size > 128k, but it seems 32k is the most we support. So that
seems OK, if I haven't gotten confused about the logic.

I suppose the only other point of concern about stealing some bits
there is that it might make some operations a little more expensive,
because they've got to start masking out the high bits. But that's
*probably* negligible.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#88Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Robert Haas (#87)
6 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Sun, Feb 26, 2017 at 2:14 PM, Robert Haas <robertmhaas@gmail.com> wrote:

Fair point, but I've already said why I think the stakes for this
particular feature are pretty high.

I understand your concerns and am not trying to downplay them. I'm doing my
best to test the patch in different ways to ensure we can catch most of the
bugs before the patch is committed. Hopefully with additional reviews and
tests we can plug remaining holes, if any, and be in a comfortable state.

(I have mentioned the idea of overloading ip_posid bits a few times now

and

haven't heard any objection so far. Well, that could either mean that

nobody

has read those emails seriously or there is general acceptance to that
idea.. I am assuming latter :-))

I'm not sure about that. I'm not really sure I have an opinion on
that yet, without seeing the patch. The discussion upthread was a bit
vague:

Attached is a complete set of rebased and finished patches. Patches 0002
and 0003 do what I have in mind as far as the OffsetNumber bits go.

AFAICS this version is a fully functional implementation of WARM, ready for
serious review/test. The chain conversion is now fully functional and
tested with btrees. I've also added support for chain conversion in hash
indexes by overloading ip_posid high order bits. Even though there is a
free bit available in btree index tuple, the patch now uses the same
ip_posid bit even for btree indexes.

A short summary of all attached patches.

0000_interesting_attrs_v15.patch:

This is Alvaro's patch to refactor HeapSatisfiesHOTandKeyUpdate. We now
return a set of modified attributes and let the caller consume that
information in a way it wants. The main WARM patch uses this refactored API.

0001_track_root_lp_v15.patch:

This implements the logic to store the root offset of the HOT chain in the
t_ctid.ip_posid field. We use a free bit in heap tuple header to mark that
a particular tuple is at the end of the chain and store the root offset in
the ip_posid. For pg_upgraded clusters, this information could be missing
and we do the hard work of going through the page tuples to find the root
offset.

0002_clear_ip_posid_blkid_refs_v15.patch:

This is mostly a cleanup patch which removes direct references to ip_posid
and ip_blkid from various places and replace them with appropriate
ItemPointer[Get|Set][Offset|Block]Number macros.

0003_freeup_3bits_ip_posid_v15.patch:

This patch frees up the high order 3 bits from ip_posid and makes them
available for other uses. As noted, we only need 13 bits to represent
OffsetNumber and hence the high order bits are unused. This patch should
only be applied along with 0002_clear_ip_posid_blkid_refs_v15.patch

0004_warm_updates_v15.patch:

This implements the main WARM logic, except for chain conversion (which is
implemented in the last patch of the series). It uses another free bit in
the heap tuple header to identify the WARM tuples. When the first WARM
update happens, the old and new versions of the tuple are marked with this
flag. All subsequent HOT tuples in the chain are also marked with this flag
so we never lose information about WARM updates, irrespective of whether it
commits or aborts. We then implement recheck logic to decide which index
pointer should return a tuple from the HOT chain.

WARM is currently supported for hash and btree indexes. If a table has an
index of any other type, WARM is disabled.

0005_warm_chain_conversion_v15.patch:

This patch implements the WARM chain conversion as discussed upthread and
also noted in the README.WARM. This patch requires yet another bit in the
heap tuple header. But since the bit is only set along with the
HEAP_WARM_TUPLE bit, we can safely reuse HEAP_MOVED_OFF bit for this
purpose. We also need a bit to distinguish two copies of index pointers to
know which pointer points to the pre-WARM-update HOT chain (Blue chain) and
which pointer points to post-WARM-update HOT chain (Red chain). We steal
this bit from t_tid.ip_posid field in the index tuple headers. As part of
this patch, I moved XLOG_HEAP2_MULTI_INSERT to RM_HEAP_ID (and renamed it
to XLOG_HEAP_MULTI_INSERT). While it's not necessary, I thought it would
allow us to restrict XLOG_HEAP_INIT_PAGE to RM_HEAP_ID and make that bit
available to define additional opcodes in RM_HEAP2_ID.

I've done some elaborate tests with these patches applied. I've primarily
used make-world, pgbench with additional indexes and the WARM stress test
(which was useful in catching the CIC bug) to test the feature. While it does
not mean there are no additional bugs, all bugs that were known to me are
fixed in this version. I'll continue to run more tests, especially around
crash recovery, when indexes are dropped and recreated and also do more
performance tests.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0000_interesting_attrs_v15.patch (application/octet-stream)
commit 8b8bf7805d661c7450f87e237bb9b68eeab465bc
Author: Pavan Deolasee <pavan.deolasee@gmail.com>
Date:   Sun Jan 1 16:29:10 2017 +0530

    Alvaro's patch on interesting attrs

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index af25836..74fb09c 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -96,11 +96,8 @@ static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
 				HeapTuple newtup, HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
-static void HeapSatisfiesHOTandKeyUpdate(Relation relation,
-							 Bitmapset *hot_attrs,
-							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
-							 bool *satisfies_id,
+static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
+							 Bitmapset *interesting_cols,
 							 HeapTuple oldtup, HeapTuple newtup);
 static bool heap_acquire_tuplock(Relation relation, ItemPointer tid,
 					 LockTupleMode mode, LockWaitPolicy wait_policy,
@@ -3455,6 +3452,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *interesting_attrs;
+	Bitmapset  *modified_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3472,9 +3471,6 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				pagefree;
 	bool		have_tuple_lock = false;
 	bool		iscombo;
-	bool		satisfies_hot;
-	bool		satisfies_key;
-	bool		satisfies_id;
 	bool		use_hot_update = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
@@ -3501,21 +3497,30 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				 errmsg("cannot update tuples during a parallel operation")));
 
 	/*
-	 * Fetch the list of attributes to be checked for HOT update.  This is
-	 * wasted effort if we fail to update or have to put the new tuple on a
-	 * different page.  But we must compute the list before obtaining buffer
-	 * lock --- in the worst case, if we are doing an update on one of the
-	 * relevant system catalogs, we could deadlock if we try to fetch the list
-	 * later.  In any case, the relcache caches the data so this is usually
-	 * pretty cheap.
+	 * Fetch the list of attributes to be checked for various operations.
 	 *
-	 * Note that we get a copy here, so we need not worry about relcache flush
-	 * happening midway through.
+	 * For HOT considerations, this is wasted effort if we fail to update or
+	 * have to put the new tuple on a different page.  But we must compute the
+	 * list before obtaining buffer lock --- in the worst case, if we are doing
+	 * an update on one of the relevant system catalogs, we could deadlock if
+	 * we try to fetch the list later.  In any case, the relcache caches the
+	 * data so this is usually pretty cheap.
+	 *
+	 * We also need columns used by the replica identity, the columns that
+	 * are considered the "key" of rows in the table, and columns that are
+	 * part of indirect indexes.
+	 *
+	 * Note that we get copies of each bitmap, so we need not worry about
+	 * relcache flush happening midway through.
 	 */
 	hot_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_ALL);
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	interesting_attrs = bms_add_members(NULL, hot_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
+
 
 	block = ItemPointerGetBlockNumber(otid);
 	buffer = ReadBuffer(relation, block);
@@ -3536,7 +3541,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Assert(ItemIdIsNormal(lp));
 
 	/*
-	 * Fill in enough data in oldtup for HeapSatisfiesHOTandKeyUpdate to work
+	 * Fill in enough data in oldtup for HeapDetermineModifiedColumns to work
 	 * properly.
 	 */
 	oldtup.t_tableOid = RelationGetRelid(relation);
@@ -3562,6 +3567,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 		Assert(!(newtup->t_data->t_infomask & HEAP_HASOID));
 	}
 
+	/* Determine columns modified by the update. */
+	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
+												  &oldtup, newtup);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3573,10 +3582,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	 * is updates that don't manipulate key columns, not those that
 	 * serendipitiously arrive at the same key values.
 	 */
-	HeapSatisfiesHOTandKeyUpdate(relation, hot_attrs, key_attrs, id_attrs,
-								 &satisfies_hot, &satisfies_key,
-								 &satisfies_id, &oldtup, newtup);
-	if (satisfies_key)
+	if (!bms_overlap(modified_attrs, key_attrs))
 	{
 		*lockmode = LockTupleNoKeyExclusive;
 		mxact_status = MultiXactStatusNoKeyUpdate;
@@ -3815,6 +3821,8 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(modified_attrs);
+		bms_free(interesting_attrs);
 		return result;
 	}
 
@@ -4119,7 +4127,7 @@ l2:
 		 * to do a HOT update.  Check if any of the index columns have been
 		 * changed.  If not, then HOT update is possible.
 		 */
-		if (satisfies_hot)
+		if (!bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
 	}
 	else
@@ -4134,7 +4142,9 @@ l2:
 	 * ExtractReplicaIdentity() will return NULL if nothing needs to be
 	 * logged.
 	 */
-	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup, !satisfies_id, &old_key_copied);
+	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup,
+										   bms_overlap(modified_attrs, id_attrs),
+										   &old_key_copied);
 
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
@@ -4282,13 +4292,15 @@ l2:
 	bms_free(hot_attrs);
 	bms_free(key_attrs);
 	bms_free(id_attrs);
+	bms_free(modified_attrs);
+	bms_free(interesting_attrs);
 
 	return HeapTupleMayBeUpdated;
 }
 
 /*
  * Check if the specified attribute's value is same in both given tuples.
- * Subroutine for HeapSatisfiesHOTandKeyUpdate.
+ * Subroutine for HeapDetermineModifiedColumns.
  */
 static bool
 heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
@@ -4322,7 +4334,7 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 
 	/*
 	 * Extract the corresponding values.  XXX this is pretty inefficient if
-	 * there are many indexed columns.  Should HeapSatisfiesHOTandKeyUpdate do
+	 * there are many indexed columns.  Should HeapDetermineModifiedColumns do
 	 * a single heap_deform_tuple call on each tuple, instead?	But that
 	 * doesn't work for system columns ...
 	 */
@@ -4367,114 +4379,30 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 /*
  * Check which columns are being updated.
  *
- * This simultaneously checks conditions for HOT updates, for FOR KEY
- * SHARE updates, and REPLICA IDENTITY concerns.  Since much of the time they
- * will be checking very similar sets of columns, and doing the same tests on
- * them, it makes sense to optimize and do them together.
- *
- * We receive three bitmapsets comprising the three sets of columns we're
- * interested in.  Note these are destructively modified; that is OK since
- * this is invoked at most once in heap_update.
+ * Given an updated tuple, determine (and return into the output bitmapset),
+ * from those listed as interesting, the set of columns that changed.
  *
- * hot_result is set to TRUE if it's okay to do a HOT update (i.e. it does not
- * modified indexed columns); key_result is set to TRUE if the update does not
- * modify columns used in the key; id_result is set to TRUE if the update does
- * not modify columns in any index marked as the REPLICA IDENTITY.
+ * The input bitmapset is destructively modified; that is OK since this is
+ * invoked at most once in heap_update.
  */
-static void
-HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
-							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
-							 bool *satisfies_id,
+static Bitmapset *
+HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
 							 HeapTuple oldtup, HeapTuple newtup)
 {
-	int			next_hot_attnum;
-	int			next_key_attnum;
-	int			next_id_attnum;
-	bool		hot_result = true;
-	bool		key_result = true;
-	bool		id_result = true;
-
-	/* If REPLICA IDENTITY is set to FULL, id_attrs will be empty. */
-	Assert(bms_is_subset(id_attrs, key_attrs));
-	Assert(bms_is_subset(key_attrs, hot_attrs));
-
-	/*
-	 * If one of these sets contains no remaining bits, bms_first_member will
-	 * return -1, and after adding FirstLowInvalidHeapAttributeNumber (which
-	 * is negative!)  we'll get an attribute number that can't possibly be
-	 * real, and thus won't match any actual attribute number.
-	 */
-	next_hot_attnum = bms_first_member(hot_attrs);
-	next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
-	next_key_attnum = bms_first_member(key_attrs);
-	next_key_attnum += FirstLowInvalidHeapAttributeNumber;
-	next_id_attnum = bms_first_member(id_attrs);
-	next_id_attnum += FirstLowInvalidHeapAttributeNumber;
+	int		attnum;
+	Bitmapset *modified = NULL;
 
-	for (;;)
+	while ((attnum = bms_first_member(interesting_cols)) >= 0)
 	{
-		bool		changed;
-		int			check_now;
-
-		/*
-		 * Since the HOT attributes are a superset of the key attributes and
-		 * the key attributes are a superset of the id attributes, this logic
-		 * is guaranteed to identify the next column that needs to be checked.
-		 */
-		if (hot_result && next_hot_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_hot_attnum;
-		else if (key_result && next_key_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_key_attnum;
-		else if (id_result && next_id_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_id_attnum;
-		else
-			break;
+		attnum += FirstLowInvalidHeapAttributeNumber;
 
-		/* See whether it changed. */
-		changed = !heap_tuple_attr_equals(RelationGetDescr(relation),
-										  check_now, oldtup, newtup);
-		if (changed)
-		{
-			if (check_now == next_hot_attnum)
-				hot_result = false;
-			if (check_now == next_key_attnum)
-				key_result = false;
-			if (check_now == next_id_attnum)
-				id_result = false;
-
-			/* if all are false now, we can stop checking */
-			if (!hot_result && !key_result && !id_result)
-				break;
-		}
-
-		/*
-		 * Advance the next attribute numbers for the sets that contain the
-		 * attribute we just checked.  As we work our way through the columns,
-		 * the next_attnum values will rise; but when each set becomes empty,
-		 * bms_first_member() will return -1 and the attribute number will end
-		 * up with a value less than FirstLowInvalidHeapAttributeNumber.
-		 */
-		if (hot_result && check_now == next_hot_attnum)
-		{
-			next_hot_attnum = bms_first_member(hot_attrs);
-			next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
-		if (key_result && check_now == next_key_attnum)
-		{
-			next_key_attnum = bms_first_member(key_attrs);
-			next_key_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
-		if (id_result && check_now == next_id_attnum)
-		{
-			next_id_attnum = bms_first_member(id_attrs);
-			next_id_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
+		if (!heap_tuple_attr_equals(RelationGetDescr(relation),
+								   attnum, oldtup, newtup))
+			modified = bms_add_member(modified,
+									  attnum - FirstLowInvalidHeapAttributeNumber);
 	}
 
-	*satisfies_hot = hot_result;
-	*satisfies_key = key_result;
-	*satisfies_id = id_result;
+	return modified;
 }
 
 /*
0005_warm_chain_conversion_v15.patch (application/octet-stream)
commit fb4bc555adc078f4c0fb6b808d1046d8212a90ee
Author: Pavan Deolasee <pavan.deolasee@gmail.com>
Date:   Tue Feb 28 10:39:01 2017 +0530

    Warm chain conversion - v15

diff --git a/contrib/bloom/blvacuum.c b/contrib/bloom/blvacuum.c
index 04abd0f..ff50361 100644
--- a/contrib/bloom/blvacuum.c
+++ b/contrib/bloom/blvacuum.c
@@ -88,7 +88,7 @@ blbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 		while (itup < itupEnd)
 		{
 			/* Do we have to delete this tuple? */
-			if (callback(&itup->heapPtr, callback_state))
+			if (callback(&itup->heapPtr, false, callback_state) == IBDCR_DELETE)
 			{
 				/* Yes; adjust count of tuples that will be left on page */
 				BloomPageGetOpaque(page)->maxoff--;
diff --git a/src/backend/access/gin/ginvacuum.c b/src/backend/access/gin/ginvacuum.c
index c9ccfee..8ed71c5 100644
--- a/src/backend/access/gin/ginvacuum.c
+++ b/src/backend/access/gin/ginvacuum.c
@@ -56,7 +56,8 @@ ginVacuumItemPointers(GinVacuumState *gvs, ItemPointerData *items,
 	 */
 	for (i = 0; i < nitem; i++)
 	{
-		if (gvs->callback(items + i, gvs->callback_state))
+		if (gvs->callback(items + i, false, gvs->callback_state) ==
+				IBDCR_DELETE)
 		{
 			gvs->result->tuples_removed += 1;
 			if (!tmpitems)
diff --git a/src/backend/access/gist/gistvacuum.c b/src/backend/access/gist/gistvacuum.c
index 77d9d12..0955db6 100644
--- a/src/backend/access/gist/gistvacuum.c
+++ b/src/backend/access/gist/gistvacuum.c
@@ -202,7 +202,8 @@ gistbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 				iid = PageGetItemId(page, i);
 				idxtuple = (IndexTuple) PageGetItem(page, iid);
 
-				if (callback(&(idxtuple->t_tid), callback_state))
+				if (callback(&(idxtuple->t_tid), false, callback_state) ==
+						IBDCR_DELETE)
 					todelete[ntodelete++] = i;
 				else
 					stats->num_index_tuples += 1;
diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c
index 6645160..c8a1f43 100644
--- a/src/backend/access/hash/hash.c
+++ b/src/backend/access/hash/hash.c
@@ -73,6 +73,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->ambuild = hashbuild;
 	amroutine->ambuildempty = hashbuildempty;
 	amroutine->aminsert = hashinsert;
+	amroutine->amwarminsert = hashwarminsert;
 	amroutine->ambulkdelete = hashbulkdelete;
 	amroutine->amvacuumcleanup = hashvacuumcleanup;
 	amroutine->amcanreturn = NULL;
@@ -231,11 +232,11 @@ hashbuildCallback(Relation index,
  *	Hash on the heap tuple's key, form an index tuple with hash code.
  *	Find the appropriate location for the new tuple, and put it there.
  */
-bool
-hashinsert(Relation rel, Datum *values, bool *isnull,
+static bool
+hashinsert_internal(Relation rel, Datum *values, bool *isnull,
 		   ItemPointer ht_ctid, Relation heapRel,
 		   IndexUniqueCheck checkUnique,
-		   IndexInfo *indexInfo)
+		   IndexInfo *indexInfo, bool warm_update)
 {
 	Datum		index_values[1];
 	bool		index_isnull[1];
@@ -251,6 +252,11 @@ hashinsert(Relation rel, Datum *values, bool *isnull,
 	itup = index_form_tuple(RelationGetDescr(rel), index_values, index_isnull);
 	itup->t_tid = *ht_ctid;
 
+	if (warm_update)
+		ItemPointerSetFlags(&itup->t_tid, HASH_INDEX_RED_POINTER);
+	else
+		ItemPointerClearFlags(&itup->t_tid);
+
 	_hash_doinsert(rel, itup);
 
 	pfree(itup);
@@ -258,6 +264,25 @@ hashinsert(Relation rel, Datum *values, bool *isnull,
 	return false;
 }
 
+bool
+hashinsert(Relation rel, Datum *values, bool *isnull,
+		   ItemPointer ht_ctid, Relation heapRel,
+		   IndexUniqueCheck checkUnique,
+		   IndexInfo *indexInfo)
+{
+	return hashinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, false);
+}
+
+bool
+hashwarminsert(Relation rel, Datum *values, bool *isnull,
+		   ItemPointer ht_ctid, Relation heapRel,
+		   IndexUniqueCheck checkUnique,
+		   IndexInfo *indexInfo)
+{
+	return hashinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, true);
+}
 
 /*
  *	hashgettuple() -- Get the next tuple in the scan.
@@ -738,6 +764,8 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 		Page		page;
 		OffsetNumber deletable[MaxOffsetNumber];
 		int			ndeletable = 0;
+		OffsetNumber colorblue[MaxOffsetNumber];
+		int			ncolorblue = 0;
 		bool		retain_pin = false;
 
 		vacuum_delay_point();
@@ -755,20 +783,35 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 			IndexTuple	itup;
 			Bucket		bucket;
 			bool		kill_tuple = false;
+			bool		color_tuple = false;
+			int			flags;
+			bool		is_red;
+			IndexBulkDeleteCallbackResult	result;
 
 			itup = (IndexTuple) PageGetItem(page,
 											PageGetItemId(page, offno));
 			htup = &(itup->t_tid);
 
+			flags = ItemPointerGetFlags(&itup->t_tid);
+			is_red = ((flags & HASH_INDEX_RED_POINTER) != 0);
+
 			/*
 			 * To remove the dead tuples, we strictly want to rely on results
 			 * of callback function.  refer btvacuumpage for detailed reason.
 			 */
-			if (callback && callback(htup, callback_state))
+			if (callback)
 			{
-				kill_tuple = true;
-				if (tuples_removed)
-					*tuples_removed += 1;
+				result = callback(htup, is_red, callback_state);
+				if (result == IBDCR_DELETE)
+				{
+					kill_tuple = true;
+					if (tuples_removed)
+						*tuples_removed += 1;
+				}
+				else if (result == IBDCR_COLOR_BLUE)
+				{
+					color_tuple = true;
+				}
 			}
 			else if (split_cleanup)
 			{
@@ -791,6 +834,12 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 				}
 			}
 
+			if (color_tuple)
+			{
+				/* mark the item for coloring Blue */
+				colorblue[ncolorblue++] = offno;
+			}
+
 			if (kill_tuple)
 			{
 				/* mark the item for deletion */
@@ -815,9 +864,24 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 		/*
 		 * Apply deletions, advance to next page and write page if needed.
 		 */
-		if (ndeletable > 0)
+		if (ndeletable > 0 || ncolorblue > 0)
 		{
-			PageIndexMultiDelete(page, deletable, ndeletable);
+			/*
+			 * Color the Red pointers Blue.
+			 *
+			 * We must do this before dealing with the dead items because
+			 * PageIndexMultiDelete may move items around to compactify the
+			 * array, and hence offnums recorded earlier won't make any sense
+			 * after PageIndexMultiDelete is called.
+			 */
+			if (ncolorblue > 0)
+				_hash_color_items(page, colorblue, ncolorblue);
+
+			/*
+			 * And delete the deletable items
+			 */
+			if (ndeletable > 0)
+				PageIndexMultiDelete(page, deletable, ndeletable);
 			bucket_dirty = true;
 			MarkBufferDirty(buf);
 		}
diff --git a/src/backend/access/hash/hashpage.c b/src/backend/access/hash/hashpage.c
index 00f3ea8..6af1fb9 100644
--- a/src/backend/access/hash/hashpage.c
+++ b/src/backend/access/hash/hashpage.c
@@ -1333,3 +1333,18 @@ _hash_getbucketbuf_from_hashkey(Relation rel, uint32 hashkey, int access,
 
 	return buf;
 }
+
+void
+_hash_color_items(Page page, OffsetNumber *coloritemnos,
+				  uint16 ncoloritems)
+{
+	int			i;
+	IndexTuple	itup;
+
+	for (i = 0; i < ncoloritems; i++)
+	{
+		itup = (IndexTuple) PageGetItem(page,
+				PageGetItemId(page, coloritemnos[i]));
+		ItemPointerClearFlags(&itup->t_tid);
+	}
+}
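A note on `_hash_color_items` above: "coloring" is nothing more than flag manipulation on the TID stored in the index tuple. The set/test/clear pattern can be sketched in isolation like this, using a bare `uint16_t` in place of the real `ItemPointerData` flag bits and a stand-in constant for `HASH_INDEX_RED_POINTER`:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the patch's HASH_INDEX_RED_POINTER flag bit */
#define INDEX_RED_POINTER 0x0001

/* WARM insert path: mark the new pointer Red */
static uint16_t
pointer_set_red(uint16_t flags)
{
	return (uint16_t) (flags | INDEX_RED_POINTER);
}

/* Vacuum path: color the pointer Blue by clearing all flag bits,
 * as ItemPointerClearFlags does in _hash_color_items */
static uint16_t
pointer_color_blue(uint16_t flags)
{
	(void) flags;
	return 0;
}

/* Test used to classify a pointer as Red or Blue */
static int
pointer_is_red(uint16_t flags)
{
	return (flags & INDEX_RED_POINTER) != 0;
}
```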
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 9c4522a..efdd439 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -1958,17 +1958,32 @@ heap_fetch(Relation relation,
 }
 
 /*
- * Check if the HOT chain containing this tid is actually a WARM chain.
- * Note that even if the WARM update ultimately aborted, we still must do a
- * recheck because the failing UPDATE when have inserted created index entries
- * which are now stale, but still referencing this chain.
+ * Check status of a (possibly) WARM chain.
+ *
+ * This function looks at a HOT/WARM chain starting at tid and returns a
+ * bitmask of information. We only follow the chain as long as it's known to be
+ * a valid HOT chain. The information returned by the function consists of:
+ *
+ *  HCWC_WARM_TUPLE - a WARM tuple is found somewhere in the chain. Note that
+ *  				  when a tuple is WARM updated, both old and new versions
+ *  				  of the tuple are treated as WARM tuples.
+ *
+ *  HCWC_RED_TUPLE  - a WARM tuple belonging to the Red chain is found
+ *					  somewhere in the chain.
+ *
+ *  HCWC_BLUE_TUPLE - a tuple belonging to the Blue chain is found somewhere in
+ *					  the chain.
+ *
+ *	If stop_at_warm is true, we stop when the first WARM tuple is found and
+ *	return the information collected so far.
  */
-static bool
-hot_check_warm_chain(Page dp, ItemPointer tid)
+HeapCheckWarmChainStatus
+heap_check_warm_chain(Page dp, ItemPointer tid, bool stop_at_warm)
 {
-	TransactionId prev_xmax = InvalidTransactionId;
-	OffsetNumber offnum;
-	HeapTupleData heapTuple;
+	TransactionId				prev_xmax = InvalidTransactionId;
+	OffsetNumber				offnum;
+	HeapTupleData				heapTuple;
+	HeapCheckWarmChainStatus	status = 0;
 
 	offnum = ItemPointerGetOffsetNumber(tid);
 	heapTuple.t_self = *tid;
@@ -1985,7 +2000,16 @@ hot_check_warm_chain(Page dp, ItemPointer tid)
 
 		/* check for unused, dead, or redirected items */
 		if (!ItemIdIsNormal(lp))
+		{
+			if (ItemIdIsRedirected(lp))
+			{
+				/* Follow the redirect */
+				offnum = ItemIdGetRedirect(lp);
+				continue;
+			}
+			/* else must be end of chain */
 			break;
+		}
 
 		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
 		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
@@ -2000,13 +2024,118 @@ hot_check_warm_chain(Page dp, ItemPointer tid)
 			break;
 
 
+		if (HeapTupleHeaderIsHeapWarmTuple(heapTuple.t_data))
+		{
+			/* We found a WARM tuple */
+			status |= HCWC_WARM_TUPLE;
+
+			/*
+			 * If we've been told to stop at the first WARM tuple, just return
+			 * whatever information we have collected so far.
+			 */
+			if (stop_at_warm)
+				return status;
+
+			/*
+			 * If it's not a Red tuple, then it's definitely a Blue tuple. Set
+			 * the appropriate bit.
+			 */
+			if (HeapTupleHeaderIsWarmRed(heapTuple.t_data))
+				status |= HCWC_RED_TUPLE;
+			else
+				status |= HCWC_BLUE_TUPLE;
+		}
+		else
+			/* Must be a tuple belonging to the Blue chain */
+			status |= HCWC_BLUE_TUPLE;
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
+		/*
+		 * It can't be a HOT chain if the tuple contains a root line pointer
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	/* Return the information collected while walking the chain */
+	return status;
+}
+
+/*
+ * Scan through the WARM chain starting at tid and reset all WARM-related
+ * flags. At the end, the chain will have all the characteristics of a regular
+ * HOT chain.
+ *
+ * Return the number of cleared offnums. Cleared offnums are returned in the
+ * passed-in cleared_offnums array. The caller must ensure that the array is
+ * large enough to hold the maximum number of offnums that can be cleared by
+ * this invocation of heap_clear_warm_chain().
+ */
+int
+heap_clear_warm_chain(Page dp, ItemPointer tid, OffsetNumber *cleared_offnums)
+{
+	TransactionId				prev_xmax = InvalidTransactionId;
+	OffsetNumber				offnum;
+	HeapTupleData				heapTuple;
+	int							num_cleared = 0;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+		{
+			if (ItemIdIsRedirected(lp))
+			{
+				/* Follow the redirect */
+				offnum = ItemIdGetRedirect(lp);
+				continue;
+			}
+			/* else must be end of chain */
+			break;
+		}
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
 		/*
-		 * Presence of either WARM or WARM updated tuple signals possible
-		 * breakage and the caller must recheck tuple returned from this chain
-		 * for index satisfaction
+		 * The xmin should match the previous xmax value, else the chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		/*
+		 * Clear WARM and Red flags
 		 */
 		if (HeapTupleHeaderIsHeapWarmTuple(heapTuple.t_data))
-			return true;
+		{
+			HeapTupleHeaderClearHeapWarmTuple(heapTuple.t_data);
+			HeapTupleHeaderClearWarmRed(heapTuple.t_data);
+			cleared_offnums[num_cleared++] = offnum;
+		}
 
 		/*
 		 * Check to see if HOT chain continues past this tuple; if so fetch
@@ -2025,8 +2154,7 @@ hot_check_warm_chain(Page dp, ItemPointer tid)
 		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
 	}
 
-	/* All OK. No need to recheck */
-	return false;
+	return num_cleared;
 }
 
 /*
@@ -2135,7 +2263,11 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		 * possible improvements here
 		 */
 		if (recheck && *recheck == false)
-			*recheck = hot_check_warm_chain(dp, &heapTuple->t_self);
+		{
+			HeapCheckWarmChainStatus status;
+			status = heap_check_warm_chain(dp, &heapTuple->t_self, true);
+			*recheck = HCWC_IS_WARM(status);
+		}
 
 		/*
 		 * When first_call is true (and thus, skip is initially false) we'll
@@ -2888,7 +3020,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		{
 			XLogRecPtr	recptr;
 			xl_heap_multi_insert *xlrec;
-			uint8		info = XLOG_HEAP2_MULTI_INSERT;
+			uint8		info = XLOG_HEAP_MULTI_INSERT;
 			char	   *tupledata;
 			int			totaldatalen;
 			char	   *scratchptr = scratch;
@@ -2985,7 +3117,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			/* filtering by origin on a row level is much more efficient */
 			XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN);
 
-			recptr = XLogInsert(RM_HEAP2_ID, info);
+			recptr = XLogInsert(RM_HEAP_ID, info);
 
 			PageSetLSN(page, recptr);
 		}
@@ -3409,7 +3541,9 @@ l1:
 	}
 
 	/* store transaction information of xact deleting the tuple */
-	tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	tp.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(tp.t_data))
+		tp.t_data->t_infomask &= ~HEAP_MOVED;
 	tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 	tp.t_data->t_infomask |= new_infomask;
 	tp.t_data->t_infomask2 |= new_infomask2;
@@ -4172,7 +4306,9 @@ l2:
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
-		oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		oldtup.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(oldtup.t_data))
+			oldtup.t_data->t_infomask &= ~HEAP_MOVED;
 		oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		HeapTupleClearHotUpdated(&oldtup);
 		/* ... and store info about transaction updating this tuple */
@@ -4419,6 +4555,16 @@ l2:
 		}
 
 		/*
+		 * If the old tuple is already a member of the Red chain, mark the new
+		 * tuple with the same flag.
+		 */
+		if (HeapTupleIsHeapWarmTupleRed(&oldtup))
+		{
+			HeapTupleSetHeapWarmTupleRed(heaptup);
+			HeapTupleSetHeapWarmTupleRed(newtup);
+		}
+
+		/*
 		 * For HOT (or WARM) updated tuples, we store the offset of the root
 		 * line pointer of this chain in the ip_posid field of the new tuple.
 		 * Usually this information will be available in the corresponding
@@ -4435,12 +4581,20 @@ l2:
 		/* Mark the old tuple as HOT-updated */
 		HeapTupleSetHotUpdated(&oldtup);
 		HeapTupleSetHeapWarmTuple(&oldtup);
+
 		/* And mark the new tuple as heap-only */
 		HeapTupleSetHeapOnly(heaptup);
+		/* Mark the new tuple as a WARM tuple */
 		HeapTupleSetHeapWarmTuple(heaptup);
+		/* This update also starts a Red chain */
+		HeapTupleSetHeapWarmTupleRed(heaptup);
+		Assert(!HeapTupleIsHeapWarmTupleRed(&oldtup));
+
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
 		HeapTupleSetHeapWarmTuple(newtup);
+		HeapTupleSetHeapWarmTupleRed(newtup);
+
 		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
 			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 		else
@@ -4459,6 +4613,8 @@ l2:
 		HeapTupleClearHeapOnly(newtup);
 		HeapTupleClearHeapWarmTuple(heaptup);
 		HeapTupleClearHeapWarmTuple(newtup);
+		HeapTupleClearHeapWarmTupleRed(heaptup);
+		HeapTupleClearHeapWarmTupleRed(newtup);
 		root_offnum = InvalidOffsetNumber;
 	}
 
@@ -4477,7 +4633,9 @@ l2:
 	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
-	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	oldtup.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(oldtup.t_data))
+		oldtup.t_data->t_infomask &= ~HEAP_MOVED;
 	oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 	/* ... and store info about transaction updating this tuple */
 	Assert(TransactionIdIsValid(xmax_old_tuple));
@@ -6398,7 +6556,9 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	PageSetPrunable(page, RecentGlobalXmin);
 
 	/* store transaction information of xact deleting the tuple */
-	tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	tp.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(tp.t_data))
+		tp.t_data->t_infomask &= ~HEAP_MOVED;
 	tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 
 	/*
@@ -6972,7 +7132,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid,
 	 * Old-style VACUUM FULL is gone, but we have to keep this code as long as
 	 * we support having MOVED_OFF/MOVED_IN tuples in the database.
 	 */
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 
@@ -6991,7 +7151,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid,
 			 * have failed; whereas a non-dead MOVED_IN tuple must mean the
 			 * xvac transaction succeeded.
 			 */
-			if (tuple->t_infomask & HEAP_MOVED_OFF)
+			if (HeapTupleHeaderIsMovedOff(tuple))
 				frz->frzflags |= XLH_INVALID_XVAC;
 			else
 				frz->frzflags |= XLH_FREEZE_XVAC;
@@ -7461,7 +7621,7 @@ heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple)
 			return true;
 	}
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 		if (TransactionIdIsNormal(xid))
@@ -7544,7 +7704,7 @@ heap_tuple_needs_freeze(HeapTupleHeader tuple, TransactionId cutoff_xid,
 			return true;
 	}
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 		if (TransactionIdIsNormal(xid) &&
@@ -7570,7 +7730,7 @@ HeapTupleHeaderAdvanceLatestRemovedXid(HeapTupleHeader tuple,
 	TransactionId xmax = HeapTupleHeaderGetUpdateXid(tuple);
 	TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		if (TransactionIdPrecedes(*latestRemovedXid, xvac))
 			*latestRemovedXid = xvac;
@@ -7619,6 +7779,36 @@ log_heap_cleanup_info(RelFileNode rnode, TransactionId latestRemovedXid)
 }
 
 /*
+ * Perform XLogInsert for a heap-warm-clear operation.  Caller must already
+ * have modified the buffer and marked it dirty.
+ */
+XLogRecPtr
+log_heap_warmclear(Relation reln, Buffer buffer,
+			   OffsetNumber *cleared, int ncleared)
+{
+	xl_heap_warmclear	xlrec;
+	XLogRecPtr			recptr;
+
+	/* Caller should not call me on a non-WAL-logged relation */
+	Assert(RelationNeedsWAL(reln));
+
+	xlrec.ncleared = ncleared;
+
+	XLogBeginInsert();
+	XLogRegisterData((char *) &xlrec, SizeOfHeapWarmClear);
+
+	XLogRegisterBuffer(0, buffer, REGBUF_STANDARD);
+
+	if (ncleared > 0)
+		XLogRegisterBufData(0, (char *) cleared,
+							ncleared * sizeof(OffsetNumber));
+
+	recptr = XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_WARMCLEAR);
+
+	return recptr;
+}
+
+/*
  * Perform XLogInsert for a heap-clean operation.  Caller must already
  * have modified the buffer and marked it dirty.
  *
@@ -8277,6 +8467,60 @@ heap_xlog_clean(XLogReaderState *record)
 		XLogRecordPageWithFreeSpace(rnode, blkno, freespace);
 }
 
+
+/*
+ * Handles the XLOG_HEAP2_WARMCLEAR record type.
+ */
+static void
+heap_xlog_warmclear(XLogReaderState *record)
+{
+	XLogRecPtr	lsn = record->EndRecPtr;
+	xl_heap_warmclear	*xlrec = (xl_heap_warmclear *) XLogRecGetData(record);
+	Buffer		buffer;
+	RelFileNode rnode;
+	BlockNumber blkno;
+	XLogRedoAction action;
+
+	XLogRecGetBlockTag(record, 0, &rnode, NULL, &blkno);
+
+	/*
+	 * If we have a full-page image, restore it (using a cleanup lock) and
+	 * we're done.
+	 */
+	action = XLogReadBufferForRedoExtended(record, 0, RBM_NORMAL, true,
+										   &buffer);
+	if (action == BLK_NEEDS_REDO)
+	{
+		Page		page = (Page) BufferGetPage(buffer);
+		OffsetNumber *cleared;
+		int			ncleared;
+		Size		datalen;
+		int			i;
+
+		cleared = (OffsetNumber *) XLogRecGetBlockData(record, 0, &datalen);
+
+		ncleared = xlrec->ncleared;
+
+		for (i = 0; i < ncleared; i++)
+		{
+			ItemId			lp;
+			OffsetNumber	offnum = cleared[i];
+			HeapTupleData	heapTuple;
+
+			lp = PageGetItemId(page, offnum);
+			heapTuple.t_data = (HeapTupleHeader) PageGetItem(page, lp);
+
+			HeapTupleHeaderClearHeapWarmTuple(heapTuple.t_data);
+			HeapTupleHeaderClearWarmRed(heapTuple.t_data);
+		}
+
+		PageSetLSN(page, lsn);
+		MarkBufferDirty(buffer);
+	}
+	if (BufferIsValid(buffer))
+		UnlockReleaseBuffer(buffer);
+}
+
 /*
  * Replay XLOG_HEAP2_VISIBLE record.
  *
@@ -8523,7 +8767,9 @@ heap_xlog_delete(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		HeapTupleHeaderClearHotUpdated(htup);
 		fix_infomask_from_infobits(xlrec->infobits_set,
@@ -9186,7 +9432,9 @@ heap_xlog_lock(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		fix_infomask_from_infobits(xlrec->infobits_set, &htup->t_infomask,
 								   &htup->t_infomask2);
@@ -9265,7 +9513,9 @@ heap_xlog_lock_updated(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		fix_infomask_from_infobits(xlrec->infobits_set, &htup->t_infomask,
 								   &htup->t_infomask2);
@@ -9334,6 +9584,9 @@ heap_redo(XLogReaderState *record)
 		case XLOG_HEAP_INSERT:
 			heap_xlog_insert(record);
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			heap_xlog_multi_insert(record);
+			break;
 		case XLOG_HEAP_DELETE:
 			heap_xlog_delete(record);
 			break;
@@ -9362,7 +9615,7 @@ heap2_redo(XLogReaderState *record)
 {
 	uint8		info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
 
-	switch (info & XLOG_HEAP_OPMASK)
+	switch (info & XLOG_HEAP2_OPMASK)
 	{
 		case XLOG_HEAP2_CLEAN:
 			heap_xlog_clean(record);
@@ -9376,9 +9629,6 @@ heap2_redo(XLogReaderState *record)
 		case XLOG_HEAP2_VISIBLE:
 			heap_xlog_visible(record);
 			break;
-		case XLOG_HEAP2_MULTI_INSERT:
-			heap_xlog_multi_insert(record);
-			break;
 		case XLOG_HEAP2_LOCK_UPDATED:
 			heap_xlog_lock_updated(record);
 			break;
@@ -9392,6 +9642,9 @@ heap2_redo(XLogReaderState *record)
 		case XLOG_HEAP2_REWRITE:
 			heap_xlog_logical_rewrite(record);
 			break;
+		case XLOG_HEAP2_WARMCLEAR:
+			heap_xlog_warmclear(record);
+			break;
 		default:
 			elog(PANIC, "heap2_redo: unknown op code %u", info);
 	}
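The tri-state chain summary computed by `heap_check_warm_chain` lends itself to a tiny, self-contained model. The sketch below reproduces only the bit-accumulation logic applied to each tuple while walking the chain; the `HCWC_*` names mirror the patch's flags, but their numeric values and everything else here are simplifications:

```c
#include <assert.h>
#include <stdint.h>

/* Bit values standing in for the patch's HeapCheckWarmChainStatus flags */
#define HCWC_WARM_TUPLE 0x01
#define HCWC_RED_TUPLE  0x02
#define HCWC_BLUE_TUPLE 0x04

/* Fold one tuple's properties into the running chain status, the way
 * heap_check_warm_chain accumulates bits while following the chain:
 * a WARM tuple contributes the WARM bit plus its chain color; a
 * non-WARM tuple always belongs to the Blue chain. */
static uint8_t
fold_tuple(uint8_t status, int is_warm, int is_red)
{
	if (is_warm)
	{
		status |= HCWC_WARM_TUPLE;
		status |= is_red ? HCWC_RED_TUPLE : HCWC_BLUE_TUPLE;
	}
	else
		status |= HCWC_BLUE_TUPLE;
	return status;
}
```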
diff --git a/src/backend/access/heap/tuptoaster.c b/src/backend/access/heap/tuptoaster.c
index 19e7048..47b01eb 100644
--- a/src/backend/access/heap/tuptoaster.c
+++ b/src/backend/access/heap/tuptoaster.c
@@ -1620,7 +1620,8 @@ toast_save_datum(Relation rel, Datum value,
 							 toastrel,
 							 toastidxs[i]->rd_index->indisunique ?
 							 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-							 NULL);
+							 NULL,
+							 false);
 		}
 
 		/*
diff --git a/src/backend/access/index/indexam.c b/src/backend/access/index/indexam.c
index f56c58f..e8027f8 100644
--- a/src/backend/access/index/indexam.c
+++ b/src/backend/access/index/indexam.c
@@ -199,7 +199,8 @@ index_insert(Relation indexRelation,
 			 ItemPointer heap_t_ctid,
 			 Relation heapRelation,
 			 IndexUniqueCheck checkUnique,
-			 IndexInfo *indexInfo)
+			 IndexInfo *indexInfo,
+			 bool warm_update)
 {
 	RELATION_CHECKS;
 	CHECK_REL_PROCEDURE(aminsert);
@@ -209,6 +210,12 @@ index_insert(Relation indexRelation,
 									   (HeapTuple) NULL,
 									   InvalidBuffer);
 
+	if (warm_update)
+	{
+		Assert(indexRelation->rd_amroutine->amwarminsert != NULL);
+		return indexRelation->rd_amroutine->amwarminsert(indexRelation, values,
+				isnull, heap_t_ctid, heapRelation, checkUnique, indexInfo);
+	}
 	return indexRelation->rd_amroutine->aminsert(indexRelation, values, isnull,
 												 heap_t_ctid, heapRelation,
 												 checkUnique, indexInfo);
diff --git a/src/backend/access/nbtree/nbtpage.c b/src/backend/access/nbtree/nbtpage.c
index f815fd4..7959155 100644
--- a/src/backend/access/nbtree/nbtpage.c
+++ b/src/backend/access/nbtree/nbtpage.c
@@ -766,11 +766,12 @@ _bt_page_recyclable(Page page)
 }
 
 /*
- * Delete item(s) from a btree page during VACUUM.
+ * Delete item(s) and color item(s) blue on a btree page during VACUUM.
  *
  * This must only be used for deleting leaf items.  Deleting an item on a
  * non-leaf page has to be done as part of an atomic action that includes
- * deleting the page it points to.
+ * deleting the page it points to. We don't ever color pointers on a non-leaf
+ * page.
  *
  * This routine assumes that the caller has pinned and locked the buffer.
  * Also, the given itemnos *must* appear in increasing order in the array.
@@ -786,9 +787,9 @@ _bt_page_recyclable(Page page)
  * ensure correct locking.
  */
 void
-_bt_delitems_vacuum(Relation rel, Buffer buf,
-					OffsetNumber *itemnos, int nitems,
-					BlockNumber lastBlockVacuumed)
+_bt_handleitems_vacuum(Relation rel, Buffer buf,
+					OffsetNumber *delitemnos, int ndelitems,
+					OffsetNumber *coloritemnos, int ncoloritems)
 {
 	Page		page = BufferGetPage(buf);
 	BTPageOpaque opaque;
@@ -796,9 +797,20 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 	/* No ereport(ERROR) until changes are logged */
 	START_CRIT_SECTION();
 
+	/*
+	 * Color the Red pointers Blue.
+	 *
+	 * We must do this before dealing with the dead items because
+	 * PageIndexMultiDelete may move items around to compactify the array, and
+	 * hence offnums recorded earlier won't make any sense after
+	 * PageIndexMultiDelete is called.
+	 */
+	if (ncoloritems > 0)
+		_bt_color_items(page, coloritemnos, ncoloritems);
+
 	/* Fix the page */
-	if (nitems > 0)
-		PageIndexMultiDelete(page, itemnos, nitems);
+	if (ndelitems > 0)
+		PageIndexMultiDelete(page, delitemnos, ndelitems);
 
 	/*
 	 * We can clear the vacuum cycle ID since this page has certainly been
@@ -824,7 +836,8 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 		XLogRecPtr	recptr;
 		xl_btree_vacuum xlrec_vacuum;
 
-		xlrec_vacuum.lastBlockVacuumed = lastBlockVacuumed;
+		xlrec_vacuum.ndelitems = ndelitems;
+		xlrec_vacuum.ncoloritems = ncoloritems;
 
 		XLogBeginInsert();
 		XLogRegisterBuffer(0, buf, REGBUF_STANDARD);
@@ -835,8 +848,11 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 		 * is.  When XLogInsert stores the whole buffer, the offsets array
 		 * need not be stored too.
 		 */
-		if (nitems > 0)
-			XLogRegisterBufData(0, (char *) itemnos, nitems * sizeof(OffsetNumber));
+		if (ndelitems > 0)
+			XLogRegisterBufData(0, (char *) delitemnos, ndelitems * sizeof(OffsetNumber));
+
+		if (ncoloritems > 0)
+			XLogRegisterBufData(0, (char *) coloritemnos, ncoloritems * sizeof(OffsetNumber));
 
 		recptr = XLogInsert(RM_BTREE_ID, XLOG_BTREE_VACUUM);
 
@@ -1882,3 +1898,18 @@ _bt_unlink_halfdead_page(Relation rel, Buffer leafbuf, bool *rightsib_empty)
 
 	return true;
 }
+
+void
+_bt_color_items(Page page, OffsetNumber *coloritemnos, uint16 ncoloritems)
+{
+	int			i;
+	ItemId		itemid;
+	IndexTuple	itup;
+
+	for (i = 0; i < ncoloritems; i++)
+	{
+		itemid = PageGetItemId(page, coloritemnos[i]);
+		itup = (IndexTuple) PageGetItem(page, itemid);
+		ItemPointerClearFlags(&itup->t_tid);
+	}
+}
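The ordering constraint in `_bt_handleitems_vacuum` (color first, delete second) exists because a multi-delete compacts the item array and invalidates previously recorded offsets. A toy illustration with a plain array, not real page code, makes the hazard concrete:

```c
#include <assert.h>

/* Toy analogue of PageIndexMultiDelete: remove the items at the given
 * (ascending) positions, shifting later items down to fill the gaps. */
static int
multi_delete(char *items, int nitems, const int *del, int ndel)
{
	int		i;
	int		d = 0;		/* next position to delete */
	int		out = 0;	/* write cursor for surviving items */

	for (i = 0; i < nitems; i++)
	{
		if (d < ndel && i == del[d])
			d++;				/* drop this item */
		else
			items[out++] = items[i];
	}
	return out;
}
```

After the delete, every item past the first deleted position has moved, so an offset recorded before the call no longer names the same item. That is exactly why the Red-to-Blue coloring is applied before `PageIndexMultiDelete`.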
diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c
index 952ed8f..92f490e 100644
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -147,6 +147,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->ambuild = btbuild;
 	amroutine->ambuildempty = btbuildempty;
 	amroutine->aminsert = btinsert;
+	amroutine->amwarminsert = btwarminsert;
 	amroutine->ambulkdelete = btbulkdelete;
 	amroutine->amvacuumcleanup = btvacuumcleanup;
 	amroutine->amcanreturn = btcanreturn;
@@ -317,11 +318,12 @@ btbuildempty(Relation index)
  *		Descend the tree recursively, find the appropriate location for our
  *		new tuple, and put it there.
  */
-bool
-btinsert(Relation rel, Datum *values, bool *isnull,
+static bool
+btinsert_internal(Relation rel, Datum *values, bool *isnull,
 		 ItemPointer ht_ctid, Relation heapRel,
 		 IndexUniqueCheck checkUnique,
-		 IndexInfo *indexInfo)
+		 IndexInfo *indexInfo,
+		 bool warm_update)
 {
 	bool		result;
 	IndexTuple	itup;
@@ -330,6 +332,11 @@ btinsert(Relation rel, Datum *values, bool *isnull,
 	itup = index_form_tuple(RelationGetDescr(rel), values, isnull);
 	itup->t_tid = *ht_ctid;
 
+	if (warm_update)
+		ItemPointerSetFlags(&itup->t_tid, BTREE_INDEX_RED_POINTER);
+	else
+		ItemPointerClearFlags(&itup->t_tid);
+
 	result = _bt_doinsert(rel, itup, checkUnique, heapRel);
 
 	pfree(itup);
@@ -337,6 +344,26 @@ btinsert(Relation rel, Datum *values, bool *isnull,
 	return result;
 }
 
+bool
+btinsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 IndexInfo *indexInfo)
+{
+	return btinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, false);
+}
+
+bool
+btwarminsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 IndexInfo *indexInfo)
+{
+	return btinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, true);
+}
+
 /*
  *	btgettuple() -- Get the next tuple in the scan.
  */
@@ -1106,7 +1133,7 @@ btvacuumscan(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 								 RBM_NORMAL, info->strategy);
 		LockBufferForCleanup(buf);
 		_bt_checkpage(rel, buf);
-		_bt_delitems_vacuum(rel, buf, NULL, 0, vstate.lastBlockVacuumed);
+		_bt_handleitems_vacuum(rel, buf, NULL, 0, NULL, 0);
 		_bt_relbuf(rel, buf);
 	}
 
@@ -1204,6 +1231,8 @@ restart:
 	{
 		OffsetNumber deletable[MaxOffsetNumber];
 		int			ndeletable;
+		OffsetNumber colorblue[MaxOffsetNumber];
+		int			ncolorblue;
 		OffsetNumber offnum,
 					minoff,
 					maxoff;
@@ -1242,7 +1271,7 @@ restart:
 		 * Scan over all items to see which ones need deleted according to the
 		 * callback function.
 		 */
-		ndeletable = 0;
+		ndeletable = ncolorblue = 0;
 		minoff = P_FIRSTDATAKEY(opaque);
 		maxoff = PageGetMaxOffsetNumber(page);
 		if (callback)
@@ -1253,6 +1282,9 @@ restart:
 			{
 				IndexTuple	itup;
 				ItemPointer htup;
+				int			flags;
+				bool		is_red = false;
+				IndexBulkDeleteCallbackResult	result;
 
 				itup = (IndexTuple) PageGetItem(page,
 												PageGetItemId(page, offnum));
@@ -1279,16 +1311,36 @@ restart:
 				 * applies to *any* type of index that marks index tuples as
 				 * killed.
 				 */
-				if (callback(htup, callback_state))
+				flags = ItemPointerGetFlags(&itup->t_tid);
+				is_red = ((flags & BTREE_INDEX_RED_POINTER) != 0);
+
+				if (is_red)
+					stats->num_red_pointers++;
+				else
+					stats->num_blue_pointers++;
+
+				result = callback(htup, is_red, callback_state);
+				if (result == IBDCR_DELETE)
+				{
+					if (is_red)
+						stats->red_pointers_removed++;
+					else
+						stats->blue_pointers_removed++;
 					deletable[ndeletable++] = offnum;
+				}
+				else if (result == IBDCR_COLOR_BLUE)
+				{
+					colorblue[ncolorblue++] = offnum;
+				}
 			}
 		}
 
 		/*
-		 * Apply any needed deletes.  We issue just one _bt_delitems_vacuum()
-		 * call per page, so as to minimize WAL traffic.
+		 * Apply any needed deletes and coloring.  We issue just one
+		 * _bt_handleitems_vacuum() call per page, so as to minimize WAL
+		 * traffic.
 		 */
-		if (ndeletable > 0)
+		if (ndeletable > 0 || ncolorblue > 0)
 		{
 			/*
 			 * Notice that the issued XLOG_BTREE_VACUUM WAL record includes
@@ -1304,8 +1356,8 @@ restart:
 			 * doesn't seem worth the amount of bookkeeping it'd take to avoid
 			 * that.
 			 */
-			_bt_delitems_vacuum(rel, buf, deletable, ndeletable,
-								vstate->lastBlockVacuumed);
+			_bt_handleitems_vacuum(rel, buf, deletable, ndeletable,
+								colorblue, ncolorblue);
 
 			/*
 			 * Remember highest leaf page number we've issued a
@@ -1315,6 +1367,7 @@ restart:
 				vstate->lastBlockVacuumed = blkno;
 
 			stats->tuples_removed += ndeletable;
+			stats->pointers_colored += ncolorblue;
 			/* must recompute maxoff */
 			maxoff = PageGetMaxOffsetNumber(page);
 		}
diff --git a/src/backend/access/nbtree/nbtxlog.c b/src/backend/access/nbtree/nbtxlog.c
index ac60db0..916c76e 100644
--- a/src/backend/access/nbtree/nbtxlog.c
+++ b/src/backend/access/nbtree/nbtxlog.c
@@ -390,83 +390,9 @@ btree_xlog_vacuum(XLogReaderState *record)
 	Buffer		buffer;
 	Page		page;
 	BTPageOpaque opaque;
-#ifdef UNUSED
 	xl_btree_vacuum *xlrec = (xl_btree_vacuum *) XLogRecGetData(record);
 
 	/*
-	 * This section of code is thought to be no longer needed, after analysis
-	 * of the calling paths. It is retained to allow the code to be reinstated
-	 * if a flaw is revealed in that thinking.
-	 *
-	 * If we are running non-MVCC scans using this index we need to do some
-	 * additional work to ensure correctness, which is known as a "pin scan"
-	 * described in more detail in next paragraphs. We used to do the extra
-	 * work in all cases, whereas we now avoid that work in most cases. If
-	 * lastBlockVacuumed is set to InvalidBlockNumber then we skip the
-	 * additional work required for the pin scan.
-	 *
-	 * Avoiding this extra work is important since it requires us to touch
-	 * every page in the index, so is an O(N) operation. Worse, it is an
-	 * operation performed in the foreground during redo, so it delays
-	 * replication directly.
-	 *
-	 * If queries might be active then we need to ensure every leaf page is
-	 * unpinned between the lastBlockVacuumed and the current block, if there
-	 * are any.  This prevents replay of the VACUUM from reaching the stage of
-	 * removing heap tuples while there could still be indexscans "in flight"
-	 * to those particular tuples for those scans which could be confused by
-	 * finding new tuples at the old TID locations (see nbtree/README).
-	 *
-	 * It might be worth checking if there are actually any backends running;
-	 * if not, we could just skip this.
-	 *
-	 * Since VACUUM can visit leaf pages out-of-order, it might issue records
-	 * with lastBlockVacuumed >= block; that's not an error, it just means
-	 * nothing to do now.
-	 *
-	 * Note: since we touch all pages in the range, we will lock non-leaf
-	 * pages, and also any empty (all-zero) pages that may be in the index. It
-	 * doesn't seem worth the complexity to avoid that.  But it's important
-	 * that HotStandbyActiveInReplay() will not return true if the database
-	 * isn't yet consistent; so we need not fear reading still-corrupt blocks
-	 * here during crash recovery.
-	 */
-	if (HotStandbyActiveInReplay() && BlockNumberIsValid(xlrec->lastBlockVacuumed))
-	{
-		RelFileNode thisrnode;
-		BlockNumber thisblkno;
-		BlockNumber blkno;
-
-		XLogRecGetBlockTag(record, 0, &thisrnode, NULL, &thisblkno);
-
-		for (blkno = xlrec->lastBlockVacuumed + 1; blkno < thisblkno; blkno++)
-		{
-			/*
-			 * We use RBM_NORMAL_NO_LOG mode because it's not an error
-			 * condition to see all-zero pages.  The original btvacuumpage
-			 * scan would have skipped over all-zero pages, noting them in FSM
-			 * but not bothering to initialize them just yet; so we mustn't
-			 * throw an error here.  (We could skip acquiring the cleanup lock
-			 * if PageIsNew, but it's probably not worth the cycles to test.)
-			 *
-			 * XXX we don't actually need to read the block, we just need to
-			 * confirm it is unpinned. If we had a special call into the
-			 * buffer manager we could optimise this so that if the block is
-			 * not in shared_buffers we confirm it as unpinned. Optimizing
-			 * this is now moot, since in most cases we avoid the scan.
-			 */
-			buffer = XLogReadBufferExtended(thisrnode, MAIN_FORKNUM, blkno,
-											RBM_NORMAL_NO_LOG);
-			if (BufferIsValid(buffer))
-			{
-				LockBufferForCleanup(buffer);
-				UnlockReleaseBuffer(buffer);
-			}
-		}
-	}
-#endif
-
-	/*
 	 * Like in btvacuumpage(), we need to take a cleanup lock on every leaf
 	 * page. See nbtree/README for details.
 	 */
@@ -482,19 +408,30 @@ btree_xlog_vacuum(XLogReaderState *record)
 
 		if (len > 0)
 		{
-			OffsetNumber *unused;
-			OffsetNumber *unend;
+			OffsetNumber *offnums = (OffsetNumber *) ptr;
 
-			unused = (OffsetNumber *) ptr;
-			unend = (OffsetNumber *) ((char *) ptr + len);
+			/*
+			 * Color the Red pointers Blue.
+			 *
+			 * We must do this before dealing with the dead items because
+			 * PageIndexMultiDelete may move items around to compact the
+			 * array, and hence offnums recorded earlier won't make any
+			 * sense after PageIndexMultiDelete is called.
+			 */
+			if (xlrec->ncoloritems > 0)
+				_bt_color_items(page, offnums + xlrec->ndelitems,
+						xlrec->ncoloritems);
 
-			if ((unend - unused) > 0)
-				PageIndexMultiDelete(page, unused, unend - unused);
+			/*
+			 * And handle the deleted items too
+			 */
+			if (xlrec->ndelitems > 0)
+				PageIndexMultiDelete(page, offnums, xlrec->ndelitems);
 		}
 
 		/*
 		 * Mark the page as not containing any LP_DEAD items --- see comments
-		 * in _bt_delitems_vacuum().
+		 * in _bt_handleitems_vacuum().
 		 */
 		opaque = (BTPageOpaque) PageGetSpecialPointer(page);
 		opaque->btpo_flags &= ~BTP_HAS_GARBAGE;
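Reviewer note: the ordering constraint in btree_xlog_vacuum() (color before PageIndexMultiDelete) is easy to see with a toy model: once a compacting delete runs, offsets recorded beforehand point at different items. This is illustrative C, not PostgreSQL code:

```c
#include <assert.h>

/* Toy "page": an array of item ids.  compacting_delete() removes the items
 * at the given offsets and shifts the survivors down, the way
 * PageIndexMultiDelete compacts a page. */
static int
compacting_delete(int *items, int nitems, const int *deloffs, int ndel)
{
	int		out = 0;

	for (int i = 0; i < nitems; i++)
	{
		int		dead = 0;

		for (int j = 0; j < ndel; j++)
			if (deloffs[j] == i)
				dead = 1;
		if (!dead)
			items[out++] = items[i];
	}
	return out;
}
```

With items {10, 20, 30, 40} and offset 1 deleted, an offset of 2 recorded beforehand referred to 30, but afterwards lands on 40 — which is why the redo code colors first and deletes second.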
diff --git a/src/backend/access/rmgrdesc/heapdesc.c b/src/backend/access/rmgrdesc/heapdesc.c
index 44d2d63..d373e61 100644
--- a/src/backend/access/rmgrdesc/heapdesc.c
+++ b/src/backend/access/rmgrdesc/heapdesc.c
@@ -44,6 +44,12 @@ heap_desc(StringInfo buf, XLogReaderState *record)
 
 		appendStringInfo(buf, "off %u", xlrec->offnum);
 	}
+	else if (info == XLOG_HEAP_MULTI_INSERT)
+	{
+		xl_heap_multi_insert *xlrec = (xl_heap_multi_insert *) rec;
+
+		appendStringInfo(buf, "%d tuples", xlrec->ntuples);
+	}
 	else if (info == XLOG_HEAP_DELETE)
 	{
 		xl_heap_delete *xlrec = (xl_heap_delete *) rec;
@@ -102,7 +108,7 @@ heap2_desc(StringInfo buf, XLogReaderState *record)
 	char	   *rec = XLogRecGetData(record);
 	uint8		info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
 
-	info &= XLOG_HEAP_OPMASK;
+	info &= XLOG_HEAP2_OPMASK;
 	if (info == XLOG_HEAP2_CLEAN)
 	{
 		xl_heap_clean *xlrec = (xl_heap_clean *) rec;
@@ -129,12 +135,6 @@ heap2_desc(StringInfo buf, XLogReaderState *record)
 		appendStringInfo(buf, "cutoff xid %u flags %d",
 						 xlrec->cutoff_xid, xlrec->flags);
 	}
-	else if (info == XLOG_HEAP2_MULTI_INSERT)
-	{
-		xl_heap_multi_insert *xlrec = (xl_heap_multi_insert *) rec;
-
-		appendStringInfo(buf, "%d tuples", xlrec->ntuples);
-	}
 	else if (info == XLOG_HEAP2_LOCK_UPDATED)
 	{
 		xl_heap_lock_updated *xlrec = (xl_heap_lock_updated *) rec;
@@ -171,6 +171,12 @@ heap_identify(uint8 info)
 		case XLOG_HEAP_INSERT | XLOG_HEAP_INIT_PAGE:
 			id = "INSERT+INIT";
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			id = "MULTI_INSERT";
+			break;
+		case XLOG_HEAP_MULTI_INSERT | XLOG_HEAP_INIT_PAGE:
+			id = "MULTI_INSERT+INIT";
+			break;
 		case XLOG_HEAP_DELETE:
 			id = "DELETE";
 			break;
@@ -219,12 +225,6 @@ heap2_identify(uint8 info)
 		case XLOG_HEAP2_VISIBLE:
 			id = "VISIBLE";
 			break;
-		case XLOG_HEAP2_MULTI_INSERT:
-			id = "MULTI_INSERT";
-			break;
-		case XLOG_HEAP2_MULTI_INSERT | XLOG_HEAP_INIT_PAGE:
-			id = "MULTI_INSERT+INIT";
-			break;
 		case XLOG_HEAP2_LOCK_UPDATED:
 			id = "LOCK_UPDATED";
 			break;
diff --git a/src/backend/access/rmgrdesc/nbtdesc.c b/src/backend/access/rmgrdesc/nbtdesc.c
index fbde9d6..0e9a2eb 100644
--- a/src/backend/access/rmgrdesc/nbtdesc.c
+++ b/src/backend/access/rmgrdesc/nbtdesc.c
@@ -48,8 +48,8 @@ btree_desc(StringInfo buf, XLogReaderState *record)
 			{
 				xl_btree_vacuum *xlrec = (xl_btree_vacuum *) rec;
 
-				appendStringInfo(buf, "lastBlockVacuumed %u",
-								 xlrec->lastBlockVacuumed);
+				appendStringInfo(buf, "ndelitems %u, ncoloritems %u",
+								 xlrec->ndelitems, xlrec->ncoloritems);
 				break;
 			}
 		case XLOG_BTREE_DELETE:
diff --git a/src/backend/access/spgist/spgvacuum.c b/src/backend/access/spgist/spgvacuum.c
index cce9b3f..5343b10 100644
--- a/src/backend/access/spgist/spgvacuum.c
+++ b/src/backend/access/spgist/spgvacuum.c
@@ -155,7 +155,8 @@ vacuumLeafPage(spgBulkDeleteState *bds, Relation index, Buffer buffer,
 		{
 			Assert(ItemPointerIsValid(&lt->heapPtr));
 
-			if (bds->callback(&lt->heapPtr, bds->callback_state))
+			if (bds->callback(&lt->heapPtr, false, bds->callback_state) ==
+					IBDCR_DELETE)
 			{
 				bds->stats->tuples_removed += 1;
 				deletable[i] = true;
@@ -425,7 +426,8 @@ vacuumLeafRoot(spgBulkDeleteState *bds, Relation index, Buffer buffer)
 		{
 			Assert(ItemPointerIsValid(&lt->heapPtr));
 
-			if (bds->callback(&lt->heapPtr, bds->callback_state))
+			if (bds->callback(&lt->heapPtr, false, bds->callback_state) ==
+					IBDCR_DELETE)
 			{
 				bds->stats->tuples_removed += 1;
 				toDelete[xlrec.nDelete] = i;
@@ -902,10 +904,10 @@ spgbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 }
 
 /* Dummy callback to delete no tuples during spgvacuumcleanup */
-static bool
-dummy_callback(ItemPointer itemptr, void *state)
+static IndexBulkDeleteCallbackResult
+dummy_callback(ItemPointer itemptr, bool is_red, void *state)
 {
-	return false;
+	return IBDCR_KEEP;
 }
 
 /*
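Reviewer note: the callback signature change running through the spgist and catalog hunks replaces the old boolean "reap this TID?" contract with an enum result plus an is_red flag. A simplified sketch of how an AM's bulk-delete loop consumes the new contract; only the two result values visible in this excerpt are modeled, and the scan-loop shape is an assumption, not patch code:

```c
#include <assert.h>

/* The two callback results visible in this excerpt; the full patch may
 * define further actions (e.g. for coloring pointers). */
typedef enum
{
	IBDCR_KEEP,					/* leave the index pointer in place */
	IBDCR_DELETE				/* remove the index pointer */
} IndexBulkDeleteCallbackResult;

typedef IndexBulkDeleteCallbackResult
			(*BulkDeleteCallback) (int tid, int is_red, void *state);

/* Simplified AM scan loop: ask the callback about every pointer and count
 * how many it told us to remove. */
static int
bulk_delete_scan(const int *tids, const int *is_red, int n,
				 BulkDeleteCallback cb, void *state)
{
	int		ndeleted = 0;

	for (int i = 0; i < n; i++)
		if (cb(tids[i], is_red[i], state) == IBDCR_DELETE)
			ndeleted++;
	return ndeleted;
}

/* Example callback in the style of dummy_callback: keep everything. */
static IndexBulkDeleteCallbackResult
keep_all(int tid, int is_red, void *state)
{
	return IBDCR_KEEP;
}
```

The enum makes call sites like `== IBDCR_DELETE` self-documenting where the old `if (callback(...))` was not.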
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index bba52ec..ab37b43 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -115,7 +115,7 @@ static void IndexCheckExclusion(Relation heapRelation,
 					IndexInfo *indexInfo);
 static inline int64 itemptr_encode(ItemPointer itemptr);
 static inline void itemptr_decode(ItemPointer itemptr, int64 encoded);
-static bool validate_index_callback(ItemPointer itemptr, void *opaque);
+static IndexBulkDeleteCallbackResult validate_index_callback(ItemPointer itemptr, bool is_red, void *opaque);
 static void validate_index_heapscan(Relation heapRelation,
 						Relation indexRelation,
 						IndexInfo *indexInfo,
@@ -2949,15 +2949,15 @@ itemptr_decode(ItemPointer itemptr, int64 encoded)
 /*
  * validate_index_callback - bulkdelete callback to collect the index TIDs
  */
-static bool
-validate_index_callback(ItemPointer itemptr, void *opaque)
+static IndexBulkDeleteCallbackResult
+validate_index_callback(ItemPointer itemptr, bool is_red, void *opaque)
 {
 	v_i_state  *state = (v_i_state *) opaque;
 	int64		encoded = itemptr_encode(itemptr);
 
 	tuplesort_putdatum(state->tuplesort, Int64GetDatum(encoded), false);
 	state->itups += 1;
-	return false;				/* never actually delete anything */
+	return IBDCR_KEEP;				/* never actually delete anything */
 }
 
 /*
@@ -3178,7 +3178,8 @@ validate_index_heapscan(Relation heapRelation,
 						 heapRelation,
 						 indexInfo->ii_Unique ?
 						 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-						 indexInfo);
+						 indexInfo,
+						 false);
 
 			state->tups_inserted += 1;
 		}
diff --git a/src/backend/catalog/indexing.c b/src/backend/catalog/indexing.c
index e5355a8..5b6efcf 100644
--- a/src/backend/catalog/indexing.c
+++ b/src/backend/catalog/indexing.c
@@ -172,7 +172,8 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple,
 					 heapRelation,
 					 relationDescs[i]->rd_index->indisunique ?
 					 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-					 indexInfo);
+					 indexInfo,
+					 warm_update);
 	}
 
 	ExecDropSingleTupleTableSlot(slot);
@@ -222,7 +223,7 @@ CatalogTupleInsertWithInfo(Relation heapRel, HeapTuple tup,
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup, false, NULL);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 
 	return oid;
 }
diff --git a/src/backend/commands/constraint.c b/src/backend/commands/constraint.c
index d9c0fe7..330b661 100644
--- a/src/backend/commands/constraint.c
+++ b/src/backend/commands/constraint.c
@@ -168,7 +168,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 		 */
 		index_insert(indexRel, values, isnull, &(new_row->t_self),
 					 trigdata->tg_relation, UNIQUE_CHECK_EXISTING,
-					 indexInfo);
+					 indexInfo,
+					 false);
 	}
 	else
 	{
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 1388be1..e5d5ca0 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -104,6 +104,25 @@
  */
 #define PREFETCH_SIZE			((BlockNumber) 32)
 
+/*
+ * Structure to track WARM chains that can be converted into HOT chains during
+ * this run.
+ *
+ * To reduce the space requirement, we use bitfields. But the way things are
+ * laid out, we still waste one byte per candidate chain.
+ */
+typedef struct LVRedBlueChain
+{
+	ItemPointerData	chain_tid;			/* root of the chain */
+	uint8			is_red_chain:2;		/* is the WARM chain completely Red? */
+	uint8			keep_warm_chain:2;	/* this chain can't be cleared of WARM
+										 * tuples */
+	uint8			num_blue_pointers:2;/* number of blue pointers found so
+										 * far */
+	uint8			num_red_pointers:2; /* number of red pointers found so far
+										 * in the current index */
+} LVRedBlueChain;
+
 typedef struct LVRelStats
 {
 	/* hasindex = true means two-pass strategy; false means one-pass */
@@ -121,6 +140,16 @@ typedef struct LVRelStats
 	BlockNumber pages_removed;
 	double		tuples_deleted;
 	BlockNumber nonempty_pages; /* actually, last nonempty page + 1 */
+	
+	double			num_warm_chains; /* number of warm chains seen so far */
+
+	/* List of WARM chains that can be converted into HOT chains */
+	/* NB: this list is ordered by TID of the root pointers */
+	int				num_redblue_chains;	/* current # of entries */
+	int				max_redblue_chains;	/* # slots allocated in array */
+	LVRedBlueChain *redblue_chains;	/* array of LVRedBlueChain */
+	double			num_non_convertible_warm_chains;
+
 	/* List of TIDs of tuples we intend to delete */
 	/* NB: this list is ordered by TID address */
 	int			num_dead_tuples;	/* current # of entries */
@@ -149,6 +178,7 @@ static void lazy_scan_heap(Relation onerel, int options,
 static void lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats);
 static bool lazy_check_needs_freeze(Buffer buf, bool *hastup);
 static void lazy_vacuum_index(Relation indrel,
+				  bool clear_warm,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats);
 static void lazy_cleanup_index(Relation indrel,
@@ -156,6 +186,10 @@ static void lazy_cleanup_index(Relation indrel,
 				   LVRelStats *vacrelstats);
 static int lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 				 int tupindex, LVRelStats *vacrelstats, Buffer *vmbuffer);
+static int lazy_warmclear_page(Relation onerel, BlockNumber blkno,
+				 Buffer buffer, int chainindex, LVRelStats *vacrelstats,
+				 Buffer *vmbuffer, bool check_all_visible);
+static void lazy_reset_redblue_pointer_count(LVRelStats *vacrelstats);
 static bool should_attempt_truncation(LVRelStats *vacrelstats);
 static void lazy_truncate_heap(Relation onerel, LVRelStats *vacrelstats);
 static BlockNumber count_nondeletable_pages(Relation onerel,
@@ -163,8 +197,15 @@ static BlockNumber count_nondeletable_pages(Relation onerel,
 static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks);
 static void lazy_record_dead_tuple(LVRelStats *vacrelstats,
 					   ItemPointer itemptr);
-static bool lazy_tid_reaped(ItemPointer itemptr, void *state);
+static void lazy_record_red_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr);
+static void lazy_record_blue_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr);
+static IndexBulkDeleteCallbackResult lazy_tid_reaped(ItemPointer itemptr, bool is_red, void *state);
+static IndexBulkDeleteCallbackResult lazy_indexvac_phase1(ItemPointer itemptr, bool is_red, void *state);
+static IndexBulkDeleteCallbackResult lazy_indexvac_phase2(ItemPointer itemptr, bool is_red, void *state);
 static int	vac_cmp_itemptr(const void *left, const void *right);
+static int vac_cmp_redblue_chain(const void *left, const void *right);
 static bool heap_page_is_all_visible(Relation rel, Buffer buf,
 					 TransactionId *visibility_cutoff_xid, bool *all_frozen);
 
@@ -683,8 +724,10 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		 * If we are close to overrunning the available space for dead-tuple
 		 * TIDs, pause and do a cycle of vacuuming before we tackle this page.
 		 */
-		if ((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
-			vacrelstats->num_dead_tuples > 0)
+		if (((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
+			vacrelstats->num_dead_tuples > 0) ||
+			((vacrelstats->max_redblue_chains - vacrelstats->num_redblue_chains) < MaxHeapTuplesPerPage &&
+			 vacrelstats->num_redblue_chains > 0))
 		{
 			const int	hvp_index[] = {
 				PROGRESS_VACUUM_PHASE,
@@ -714,6 +757,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			/* Remove index entries */
 			for (i = 0; i < nindexes; i++)
 				lazy_vacuum_index(Irel[i],
+								  (vacrelstats->num_redblue_chains > 0),
 								  &indstats[i],
 								  vacrelstats);
 
@@ -736,6 +780,9 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			 * valid.
 			 */
 			vacrelstats->num_dead_tuples = 0;
+			vacrelstats->num_redblue_chains = 0;
+			memset(vacrelstats->redblue_chains, 0,
+					vacrelstats->max_redblue_chains * sizeof (LVRedBlueChain));
 			vacrelstats->num_index_scans++;
 
 			/* Report that we are once again scanning the heap */
@@ -939,15 +986,33 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 				continue;
 			}
 
+			ItemPointerSet(&(tuple.t_self), blkno, offnum);
+
 			/* Redirect items mustn't be touched */
 			if (ItemIdIsRedirected(itemid))
 			{
+				HeapCheckWarmChainStatus status = heap_check_warm_chain(page,
+						&tuple.t_self, false);
+				if (HCWC_IS_WARM(status))
+				{
+					vacrelstats->num_warm_chains++;
+
+					/*
+					 * A chain which is either completely Red or completely
+					 * Blue is a candidate for chain conversion. Remember the
+					 * chain and its color.
+					 */
+					if (HCWC_IS_ALL_RED(status))
+						lazy_record_red_chain(vacrelstats, &tuple.t_self);
+					else if (HCWC_IS_ALL_BLUE(status))
+						lazy_record_blue_chain(vacrelstats, &tuple.t_self);
+					else
+						vacrelstats->num_non_convertible_warm_chains++;
+				}
 				hastup = true;	/* this page won't be truncatable */
 				continue;
 			}
 
-			ItemPointerSet(&(tuple.t_self), blkno, offnum);
-
 			/*
 			 * DEAD item pointers are to be vacuumed normally; but we don't
 			 * count them in tups_vacuumed, else we'd be double-counting (at
@@ -967,6 +1032,28 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			tuple.t_len = ItemIdGetLength(itemid);
 			tuple.t_tableOid = RelationGetRelid(onerel);
 
+			if (!HeapTupleIsHeapOnly(&tuple))
+			{
+				HeapCheckWarmChainStatus status = heap_check_warm_chain(page,
+						&tuple.t_self, false);
+				if (HCWC_IS_WARM(status))
+				{
+					vacrelstats->num_warm_chains++;
+
+					/*
+					 * A chain which is either completely Red or completely
+					 * Blue is a candidate for chain conversion. Remember the
+					 * chain and its color.
+					 */
+					if (HCWC_IS_ALL_RED(status))
+						lazy_record_red_chain(vacrelstats, &tuple.t_self);
+					else if (HCWC_IS_ALL_BLUE(status))
+						lazy_record_blue_chain(vacrelstats, &tuple.t_self);
+					else
+						vacrelstats->num_non_convertible_warm_chains++;
+				}
+			}
+
 			tupgone = false;
 
 			switch (HeapTupleSatisfiesVacuum(&tuple, OldestXmin, buf))
@@ -1287,7 +1374,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 
 	/* If any tuples need to be deleted, perform final vacuum cycle */
 	/* XXX put a threshold on min number of tuples here? */
-	if (vacrelstats->num_dead_tuples > 0)
+	if (vacrelstats->num_dead_tuples > 0 || vacrelstats->num_redblue_chains > 0)
 	{
 		const int	hvp_index[] = {
 			PROGRESS_VACUUM_PHASE,
@@ -1305,6 +1392,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		/* Remove index entries */
 		for (i = 0; i < nindexes; i++)
 			lazy_vacuum_index(Irel[i],
+							  (vacrelstats->num_redblue_chains > 0),
 							  &indstats[i],
 							  vacrelstats);
 
@@ -1372,7 +1460,10 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
  *
  *		This routine marks dead tuples as unused and compacts out free
  *		space on their pages.  Pages not having dead tuples recorded from
- *		lazy_scan_heap are not visited at all.
+ *		lazy_scan_heap are not visited at all. This routine also converts
+ *		candidate WARM chains to HOT chains by clearing WARM-related flags. The
+ *		candidate chains are determined by the preceding index scans after
+ *		looking at the data collected by the first heap scan.
  *
  * Note: the reason for doing this as a second pass is we cannot remove
  * the tuples until we've removed their index entries, and we want to
@@ -1381,7 +1472,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 static void
 lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 {
-	int			tupindex;
+	int			tupindex, chainindex;
 	int			npages;
 	PGRUsage	ru0;
 	Buffer		vmbuffer = InvalidBuffer;
@@ -1390,33 +1481,69 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 	npages = 0;
 
 	tupindex = 0;
-	while (tupindex < vacrelstats->num_dead_tuples)
+	chainindex = 0;
+	while (tupindex < vacrelstats->num_dead_tuples ||
+		   chainindex < vacrelstats->num_redblue_chains)
 	{
-		BlockNumber tblk;
+		BlockNumber tblk, chainblk, vacblk;
 		Buffer		buf;
 		Page		page;
 		Size		freespace;
 
 		vacuum_delay_point();
 
-		tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
-		buf = ReadBufferExtended(onerel, MAIN_FORKNUM, tblk, RBM_NORMAL,
+		tblk = chainblk = InvalidBlockNumber;
+		if (chainindex < vacrelstats->num_redblue_chains)
+			chainblk =
+				ItemPointerGetBlockNumber(&(vacrelstats->redblue_chains[chainindex].chain_tid));
+		
+		if (tupindex < vacrelstats->num_dead_tuples)
+			tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
+
+		if (tblk == InvalidBlockNumber)
+			vacblk = chainblk;
+		else if (chainblk == InvalidBlockNumber)
+			vacblk = tblk;
+		else
+			vacblk = Min(chainblk, tblk);
+
+		Assert(vacblk != InvalidBlockNumber);
+
+		buf = ReadBufferExtended(onerel, MAIN_FORKNUM, vacblk, RBM_NORMAL,
 								 vac_strategy);
-		if (!ConditionalLockBufferForCleanup(buf))
+
+
+		if (vacblk == chainblk)
+			LockBufferForCleanup(buf);
+		else if (!ConditionalLockBufferForCleanup(buf))
 		{
 			ReleaseBuffer(buf);
 			++tupindex;
 			continue;
 		}
-		tupindex = lazy_vacuum_page(onerel, tblk, buf, tupindex, vacrelstats,
-									&vmbuffer);
+
+		/*
+		 * Convert WARM chains on this page.  This must be done before
+		 * vacuuming the page so that we can correctly set visibility bits
+		 * after clearing WARM chains.
+		 *
+		 * If we are going to vacuum this page then don't check for
+		 * all-visibility just yet.
+		 */
+		if (vacblk == chainblk)
+			chainindex = lazy_warmclear_page(onerel, chainblk, buf, chainindex,
+					vacrelstats, &vmbuffer, chainblk != tblk);
+
+		if (vacblk == tblk)
+			tupindex = lazy_vacuum_page(onerel, tblk, buf, tupindex, vacrelstats,
+					&vmbuffer);
 
 		/* Now that we've compacted the page, record its available space */
 		page = BufferGetPage(buf);
 		freespace = PageGetHeapFreeSpace(page);
 
 		UnlockReleaseBuffer(buf);
-		RecordPageWithFreeSpace(onerel, tblk, freespace);
+		RecordPageWithFreeSpace(onerel, vacblk, freespace);
 		npages++;
 	}
 
@@ -1435,6 +1562,107 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 }
 
 /*
+ *	lazy_warmclear_page() -- clear WARM flag and mark chains blue when possible
+ *
+ * Caller must hold pin and buffer cleanup lock on the buffer.
+ *
+ * chainindex is the index in vacrelstats->redblue_chains of the first
+ * candidate chain on this page.  We assume the rest follow sequentially.
+ * The return value is the first chainindex after the chains of this page.
+ *
+ * If check_all_visible is set then we also check if the page has now become
+ * all visible and update visibility map.
+ */
+static int
+lazy_warmclear_page(Relation onerel, BlockNumber blkno, Buffer buffer,
+				 int chainindex, LVRelStats *vacrelstats, Buffer *vmbuffer,
+				 bool check_all_visible)
+{
+	Page			page = BufferGetPage(buffer);
+	OffsetNumber	cleared_offnums[MaxHeapTuplesPerPage];
+	int				num_cleared = 0;
+	TransactionId	visibility_cutoff_xid;
+	bool			all_frozen;
+
+	pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_WARMCLEARED, blkno);
+
+	START_CRIT_SECTION();
+
+	for (; chainindex < vacrelstats->num_redblue_chains; chainindex++)
+	{
+		BlockNumber tblk;
+		LVRedBlueChain	*chain;
+
+		chain = &vacrelstats->redblue_chains[chainindex];
+
+		tblk = ItemPointerGetBlockNumber(&chain->chain_tid);
+		if (tblk != blkno)
+			break;				/* past end of tuples for this block */
+
+		/*
+		 * Since a heap page can have no more than MaxHeapTuplesPerPage
+		 * offnums and we process each offnum only once, MaxHeapTuplesPerPage
+		 * size array should be enough to hold all cleared tuples in this page.
+		 */
+		if (!chain->keep_warm_chain)
+			num_cleared += heap_clear_warm_chain(page, &chain->chain_tid,
+					cleared_offnums + num_cleared);
+	}
+
+	/*
+	 * Mark buffer dirty before we write WAL.
+	 */
+	MarkBufferDirty(buffer);
+
+	/* XLOG stuff */
+	if (RelationNeedsWAL(onerel))
+	{
+		XLogRecPtr	recptr;
+
+		recptr = log_heap_warmclear(onerel, buffer,
+								cleared_offnums, num_cleared);
+		PageSetLSN(page, recptr);
+	}
+
+	END_CRIT_SECTION();
+
+	/* If not checking for all-visibility then we're done */
+	if (!check_all_visible)
+		return chainindex;
+
+	/*
+	 * The following code should match the corresponding code in
+	 * lazy_vacuum_page().
+	 */
+	if (heap_page_is_all_visible(onerel, buffer, &visibility_cutoff_xid,
+								 &all_frozen))
+		PageSetAllVisible(page);
+
+	/*
+	 * All the changes to the heap page have been done. If the all-visible
+	 * flag is now set, also set the VM all-visible bit (and, if possible, the
+	 * all-frozen bit) unless this has already been done previously.
+	 */
+	if (PageIsAllVisible(page))
+	{
+		uint8		vm_status = visibilitymap_get_status(onerel, blkno, vmbuffer);
+		uint8		flags = 0;
+
+		/* Set the VM all-frozen bit to flag, if needed */
+		if ((vm_status & VISIBILITYMAP_ALL_VISIBLE) == 0)
+			flags |= VISIBILITYMAP_ALL_VISIBLE;
+		if ((vm_status & VISIBILITYMAP_ALL_FROZEN) == 0 && all_frozen)
+			flags |= VISIBILITYMAP_ALL_FROZEN;
+
+		Assert(BufferIsValid(*vmbuffer));
+		if (flags != 0)
+			visibilitymap_set(onerel, blkno, buffer, InvalidXLogRecPtr,
+							  *vmbuffer, visibility_cutoff_xid, flags);
+	}
+	return chainindex;
+}
+
+/*
  *	lazy_vacuum_page() -- free dead tuples on a page
  *					 and repair its fragmentation.
  *
@@ -1587,6 +1815,16 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
 	return false;
 }
 
+static void
+lazy_reset_redblue_pointer_count(LVRelStats *vacrelstats)
+{
+	int i;
+	for (i = 0; i < vacrelstats->num_redblue_chains; i++)
+	{
+		LVRedBlueChain *chain = &vacrelstats->redblue_chains[i];
+		chain->num_blue_pointers = chain->num_red_pointers = 0;
+	}
+}
 
 /*
  *	lazy_vacuum_index() -- vacuum one index relation.
@@ -1596,6 +1834,7 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
  */
 static void
 lazy_vacuum_index(Relation indrel,
+				  bool clear_warm,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats)
 {
@@ -1611,15 +1850,81 @@ lazy_vacuum_index(Relation indrel,
 	ivinfo.num_heap_tuples = vacrelstats->old_rel_tuples;
 	ivinfo.strategy = vac_strategy;
 
-	/* Do bulk deletion */
-	*stats = index_bulk_delete(&ivinfo, *stats,
-							   lazy_tid_reaped, (void *) vacrelstats);
+	/*
+	 * If told, convert WARM chains into HOT chains.
+	 *
+	 * We must have already collected candidate WARM chains, i.e. chains which
+	 * have either only Red or only Blue tuples, but not a mix of both.
+	 *
+	 * This works in two phases. In the first phase, we do a complete index
+	 * scan and collect information about index pointers to the candidate
+	 * chains, but we don't do any conversion. To be precise, we count the
+	 * number of Blue and Red index pointers to each candidate chain and use
+	 * that knowledge to arrive at a decision and do the actual conversion
+	 * during the second phase (we do kill known dead pointers in this phase).
+	 *
+	 * In the second phase, for each Red chain we check whether we have seen a
+	 * Red index pointer. For such chains, we kill the Blue pointer and color
+	 * the Red pointer Blue. The heap tuples are marked Blue in the second
+	 * heap scan. If we did not find any Red pointer to a Red chain, the chain
+	 * is reachable only from the Blue pointer (because, say, the WARM update
+	 * did not add a new entry for this index). In that case, we do nothing.
+	 * There is a third case where we find more than one Blue pointer to a Red
+	 * chain. This can happen because of aborted vacuums. We don't handle that
+	 * case yet, but it should be possible to apply the same recheck logic and
+	 * find which of the Blue pointers is redundant and should be removed.
+	 *
+	 * For Blue chains, we just kill the Red pointer if it exists, and keep
+	 * the Blue pointer.
+	 */
+	if (clear_warm)
+	{
+		lazy_reset_redblue_pointer_count(vacrelstats);
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_indexvac_phase1, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to remove %d row versions, found "
+						"%.0f red pointers, %.0f blue pointers, removed "
+						"%.0f red pointers, removed %.0f blue pointers",
+						RelationGetRelationName(indrel),
+						vacrelstats->num_dead_tuples,
+						(*stats)->num_red_pointers,
+						(*stats)->num_blue_pointers,
+						(*stats)->red_pointers_removed,
+						(*stats)->blue_pointers_removed)));
+
+		(*stats)->num_red_pointers = 0;
+		(*stats)->num_blue_pointers = 0;
+		(*stats)->red_pointers_removed = 0;
+		(*stats)->blue_pointers_removed = 0;
+		(*stats)->pointers_colored = 0;
+
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_indexvac_phase2, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to convert red pointers, found "
+						"%.0f red pointers, %.0f blue pointers, removed "
+						"%.0f red pointers, removed %.0f blue pointers, "
+						"colored %.0f red pointers blue",
+						RelationGetRelationName(indrel),
+						(*stats)->num_red_pointers,
+						(*stats)->num_blue_pointers,
+						(*stats)->red_pointers_removed,
+						(*stats)->blue_pointers_removed,
+						(*stats)->pointers_colored)));
+	}
+	else
+	{
+		/* Do bulk deletion */
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_tid_reaped, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to remove %d row versions",
+						RelationGetRelationName(indrel),
+						vacrelstats->num_dead_tuples),
+				 errdetail("%s.", pg_rusage_show(&ru0))));
+	}
 
-	ereport(elevel,
-			(errmsg("scanned index \"%s\" to remove %d row versions",
-					RelationGetRelationName(indrel),
-					vacrelstats->num_dead_tuples),
-			 errdetail("%s.", pg_rusage_show(&ru0))));
 }
 
 /*
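Reviewer note: the phase-2 rules described in the big comment in lazy_vacuum_index() can be restated compactly as a decision function. This is an illustrative sketch; the enum and function names are hypothetical, not from the patch:

```c
#include <assert.h>

/* Hypothetical per-pointer decision for an all-Red candidate chain, driven
 * by whether phase 1 saw a Red index pointer to that chain. */
typedef enum
{
	ACT_KEEP,					/* leave the pointer alone */
	ACT_DELETE,					/* remove the now-redundant Blue pointer */
	ACT_COLOR_BLUE				/* keep the Red pointer, but color it Blue */
} PointerAction;

static PointerAction
red_chain_phase2(int pointer_is_red, int saw_red_pointer)
{
	if (saw_red_pointer)
		return pointer_is_red ? ACT_COLOR_BLUE : ACT_DELETE;

	/*
	 * No Red pointer was seen in phase 1: the chain is reachable only via
	 * the Blue pointer (the WARM update added no new entry for this index),
	 * so leave everything as it is.
	 */
	return ACT_KEEP;
}
```

The unhandled multiple-Blue-pointer case from aborted vacuums would need the recheck logic mentioned in the comment before it could be folded into a function like this.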
@@ -1993,9 +2298,11 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 
 	if (vacrelstats->hasindex)
 	{
-		maxtuples = (vac_work_mem * 1024L) / sizeof(ItemPointerData);
+		maxtuples = (vac_work_mem * 1024L) / (sizeof(ItemPointerData) +
+				sizeof(LVRedBlueChain));
 		maxtuples = Min(maxtuples, INT_MAX);
-		maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));
+		maxtuples = Min(maxtuples, MaxAllocSize / (sizeof(ItemPointerData) +
+					sizeof(LVRedBlueChain)));
 
 		/* curious coding here to ensure the multiplication can't overflow */
 		if ((BlockNumber) (maxtuples / LAZY_ALLOC_TUPLES) > relblocks)
@@ -2013,6 +2320,57 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 	vacrelstats->max_dead_tuples = (int) maxtuples;
 	vacrelstats->dead_tuples = (ItemPointer)
 		palloc(maxtuples * sizeof(ItemPointerData));
+
+	/*
+	 * XXX Cheat for now and allocate the same size array for tracking blue
+	 * and red chains. maxtuples must already have been adjusted above to
+	 * ensure we don't cross vac_work_mem.
+	 */
+	vacrelstats->num_redblue_chains = 0;
+	vacrelstats->max_redblue_chains = (int) maxtuples;
+	vacrelstats->redblue_chains = (LVRedBlueChain *)
+		palloc0(maxtuples * sizeof(LVRedBlueChain));
+
+}
+
+/*
+ * lazy_record_blue_chain - remember one blue chain
+ */
+static void
+lazy_record_blue_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr)
+{
+	/*
+	 * The array shouldn't overflow under normal behavior, but perhaps it
+	 * could if we are given a really small maintenance_work_mem. In that
+	 * case, just forget the last few tuples (we'll get 'em next time).
+	 */
+	if (vacrelstats->num_redblue_chains < vacrelstats->max_redblue_chains)
+	{
+		vacrelstats->redblue_chains[vacrelstats->num_redblue_chains].chain_tid = *itemptr;
+		vacrelstats->redblue_chains[vacrelstats->num_redblue_chains].is_red_chain = 0;
+		vacrelstats->num_redblue_chains++;
+	}
+}
+
+/*
+ * lazy_record_red_chain - remember one red chain
+ */
+static void
+lazy_record_red_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr)
+{
+	/*
+	 * The array shouldn't overflow under normal behavior, but perhaps it
+	 * could if we are given a really small maintenance_work_mem. In that
+	 * case, just forget the last few tuples (we'll get 'em next time).
+	 */
+	if (vacrelstats->num_redblue_chains < vacrelstats->max_redblue_chains)
+	{
+		vacrelstats->redblue_chains[vacrelstats->num_redblue_chains].chain_tid = *itemptr;
+		vacrelstats->redblue_chains[vacrelstats->num_redblue_chains].is_red_chain = 1;
+		vacrelstats->num_redblue_chains++;
+	}
 }
 
 /*
@@ -2043,8 +2401,8 @@ lazy_record_dead_tuple(LVRelStats *vacrelstats,
  *
  *		Assumes dead_tuples array is in sorted order.
  */
-static bool
-lazy_tid_reaped(ItemPointer itemptr, void *state)
+static IndexBulkDeleteCallbackResult
+lazy_tid_reaped(ItemPointer itemptr, bool is_red, void *state)
 {
 	LVRelStats *vacrelstats = (LVRelStats *) state;
 	ItemPointer res;
@@ -2055,7 +2413,193 @@ lazy_tid_reaped(ItemPointer itemptr, void *state)
 								sizeof(ItemPointerData),
 								vac_cmp_itemptr);
 
-	return (res != NULL);
+	return (res != NULL) ? IBDCR_DELETE : IBDCR_KEEP;
+}
+
+/*
+ *	lazy_indexvac_phase1() -- run first pass of index vacuum
+ *
+ *		This has the right signature to be an IndexBulkDeleteCallback.
+ */
+static IndexBulkDeleteCallbackResult
+lazy_indexvac_phase1(ItemPointer itemptr, bool is_red, void *state)
+{
+	LVRelStats		*vacrelstats = (LVRelStats *) state;
+	ItemPointer		res;
+	LVRedBlueChain	*chain;
+
+	res = (ItemPointer) bsearch((void *) itemptr,
+								(void *) vacrelstats->dead_tuples,
+								vacrelstats->num_dead_tuples,
+								sizeof(ItemPointerData),
+								vac_cmp_itemptr);
+
+	if (res != NULL)
+		return IBDCR_DELETE;
+
+	chain = (LVRedBlueChain *) bsearch((void *) itemptr,
+								(void *) vacrelstats->redblue_chains,
+								vacrelstats->num_redblue_chains,
+								sizeof(LVRedBlueChain),
+								vac_cmp_redblue_chain);
+	if (chain != NULL)
+	{
+		if (is_red)
+			chain->num_red_pointers++;
+		else
+			chain->num_blue_pointers++;
+	}
+	return IBDCR_KEEP;
+}
+
+/*
+ *	lazy_indexvac_phase2() -- run second pass of index vacuum
+ *
+ *		This has the right signature to be an IndexBulkDeleteCallback.
+ */
+static IndexBulkDeleteCallbackResult
+lazy_indexvac_phase2(ItemPointer itemptr, bool is_red, void *state)
+{
+	LVRelStats		*vacrelstats = (LVRelStats *) state;
+	LVRedBlueChain	*chain;
+
+	chain = (LVRedBlueChain *) bsearch((void *) itemptr,
+								(void *) vacrelstats->redblue_chains,
+								vacrelstats->num_redblue_chains,
+								sizeof(LVRedBlueChain),
+								vac_cmp_redblue_chain);
+
+	if (chain != NULL && (chain->keep_warm_chain != 1))
+	{
+		/*
+		 * At no point can a chain have more than one Red pointer, nor more
+		 * than two Blue pointers.
+		 */
+		Assert(chain->num_red_pointers <= 1);
+		Assert(chain->num_blue_pointers <= 2);
+
+		if (chain->is_red_chain == 1)
+		{
+			if (is_red)
+			{
+				/*
+				 * A Red pointer pointing to a Red chain.
+				 *
+				 * Color the Red pointer Blue (and delete the Blue pointer). We
+				 * may have already seen the Blue pointer in the scan and
+				 * deleted that or we may see it later in the scan. It doesn't
+				 * matter if we fail at any point because we won't clear up
+				 * WARM bits on the heap tuples until we have dealt with the
+				 * index pointers cleanly.
+				 */
+				return IBDCR_COLOR_BLUE;
+			}
+			else
+			{
+				/*
+				 * Blue pointer to a Red chain.
+				 */
+				if (chain->num_red_pointers > 0)
+				{
+					/*
+					 * If there exists a Red pointer to the chain, we can
+					 * delete the Blue pointer and clear the WARM bits on the
+					 * heap tuples.
+					 */
+					return IBDCR_DELETE;
+				}
+				else if (chain->num_blue_pointers == 1)
+				{
+					/*
+					 * If this is the only pointer to a Red chain, we must keep the
+					 * Blue pointer.
+					 *
+					 * The presence of a Red chain indicates that the WARM
+					 * update must have committed. But during the update
+					 * this index was probably not updated and hence it
+					 * contains just one, original Blue pointer to the chain.
+					 * We should be able to clear the WARM bits on heap tuples
+					 * unless we later find another index which prevents the
+					 * cleanup.
+					 */
+					return IBDCR_KEEP;
+				}
+			}
+		}
+		else
+		{
+			/*
+			 * This is a Blue chain.
+			 */
+			if (is_red)
+			{
+				/*
+				 * A Red pointer to a Blue chain.
+				 *
+				 * This can happen when a WARM update is aborted. Later the HOT
+				 * chain is pruned leaving behind only Blue tuples in the
+				 * chain. But the Red index pointer inserted in the index
+				 * remains and it must now be deleted before we clear WARM bits
+				 * from the heap tuple.
+				 */
+				return IBDCR_DELETE;
+			}
+			
+			/*
+			 * Blue pointer to a Blue chain.
+			 *
+			 * If this is the only surviving Blue pointer, keep it and clear
+			 * the WARM bits from the heap tuples.
+			 */
+			if (chain->num_blue_pointers == 1)
+				return IBDCR_KEEP;
+
+			/*
+			 * If there is more than one Blue pointer to this chain, we could
+			 * apply the recheck logic, kill the redundant Blue pointer and
+			 * convert the chain. But that's not done yet.
+			 */
+		}
+
+		/*
+		 * For everything else, we must keep the WARM bits and also keep the
+		 * index pointers.
+		 */
+		chain->keep_warm_chain = 1;
+		return IBDCR_KEEP;
+	}
+	return IBDCR_KEEP;
+}
+
+/*
+ * Comparator routines for use with qsort() and bsearch(). Similar to
+ * vac_cmp_itemptr, but the right-hand argument is an LVRedBlueChain pointer.
+ */
+static int
+vac_cmp_redblue_chain(const void *left, const void *right)
+{
+	BlockNumber lblk,
+				rblk;
+	OffsetNumber loff,
+				roff;
+
+	lblk = ItemPointerGetBlockNumber((ItemPointer) left);
+	rblk = ItemPointerGetBlockNumber(&((LVRedBlueChain *) right)->chain_tid);
+
+	if (lblk < rblk)
+		return -1;
+	if (lblk > rblk)
+		return 1;
+
+	loff = ItemPointerGetOffsetNumber((ItemPointer) left);
+	roff = ItemPointerGetOffsetNumber(&((LVRedBlueChain *) right)->chain_tid);
+
+	if (loff < roff)
+		return -1;
+	if (loff > roff)
+		return 1;
+
+	return 0;
 }
 
 /*
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index d62d2de..3e49a8f 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -405,7 +405,8 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 						 root_tid,		/* tid of heap or root tuple */
 						 heapRelation,	/* heap relation */
 						 checkUnique,	/* type of uniqueness check to do */
-						 indexInfo);	/* index AM may need this */
+						 indexInfo,	/* index AM may need this */
+						 (modified_attrs != NULL));	/* is this a WARM update? */
 
 		/*
 		 * If the index has an associated exclusion constraint, check that.
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index 5c13d26..7a9b48a 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -347,7 +347,7 @@ DecodeStandbyOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 static void
 DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 {
-	uint8		info = XLogRecGetInfo(buf->record) & XLOG_HEAP_OPMASK;
+	uint8		info = XLogRecGetInfo(buf->record) & XLOG_HEAP2_OPMASK;
 	TransactionId xid = XLogRecGetXid(buf->record);
 	SnapBuild  *builder = ctx->snapshot_builder;
 
@@ -359,10 +359,6 @@ DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 
 	switch (info)
 	{
-		case XLOG_HEAP2_MULTI_INSERT:
-			if (SnapBuildProcessChange(builder, xid, buf->origptr))
-				DecodeMultiInsert(ctx, buf);
-			break;
 		case XLOG_HEAP2_NEW_CID:
 			{
 				xl_heap_new_cid *xlrec;
@@ -390,6 +386,7 @@ DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 		case XLOG_HEAP2_CLEANUP_INFO:
 		case XLOG_HEAP2_VISIBLE:
 		case XLOG_HEAP2_LOCK_UPDATED:
+		case XLOG_HEAP2_WARMCLEAR:
 			break;
 		default:
 			elog(ERROR, "unexpected RM_HEAP2_ID record type: %u", info);
@@ -418,6 +415,10 @@ DecodeHeapOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 			if (SnapBuildProcessChange(builder, xid, buf->origptr))
 				DecodeInsert(ctx, buf);
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			if (SnapBuildProcessChange(builder, xid, buf->origptr))
+				DecodeMultiInsert(ctx, buf);
+			break;
 
 			/*
 			 * Treat HOT update as normal updates. There is no useful
@@ -809,7 +810,7 @@ DecodeDelete(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 }
 
 /*
- * Decode XLOG_HEAP2_MULTI_INSERT_insert record into multiple tuplebufs.
+ * Decode XLOG_HEAP_MULTI_INSERT record into multiple tuplebufs.
  *
  * Currently MULTI_INSERT will always contain the full tuples.
  */
diff --git a/src/backend/utils/time/combocid.c b/src/backend/utils/time/combocid.c
index baff998..6a2e2f2 100644
--- a/src/backend/utils/time/combocid.c
+++ b/src/backend/utils/time/combocid.c
@@ -106,7 +106,7 @@ HeapTupleHeaderGetCmin(HeapTupleHeader tup)
 {
 	CommandId	cid = HeapTupleHeaderGetRawCommandId(tup);
 
-	Assert(!(tup->t_infomask & HEAP_MOVED));
+	Assert(!(HeapTupleHeaderIsMoved(tup)));
 	Assert(TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXmin(tup)));
 
 	if (tup->t_infomask & HEAP_COMBOCID)
@@ -120,7 +120,7 @@ HeapTupleHeaderGetCmax(HeapTupleHeader tup)
 {
 	CommandId	cid = HeapTupleHeaderGetRawCommandId(tup);
 
-	Assert(!(tup->t_infomask & HEAP_MOVED));
+	Assert(!(HeapTupleHeaderIsMoved(tup)));
 
 	/*
 	 * Because GetUpdateXid() performs memory allocations if xmax is a
diff --git a/src/backend/utils/time/tqual.c b/src/backend/utils/time/tqual.c
index 703bdce..0df5a44 100644
--- a/src/backend/utils/time/tqual.c
+++ b/src/backend/utils/time/tqual.c
@@ -186,7 +186,7 @@ HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer)
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -205,7 +205,7 @@ HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer)
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -377,7 +377,7 @@ HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -396,7 +396,7 @@ HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -471,7 +471,7 @@ HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid,
 			return HeapTupleInvisible;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -490,7 +490,7 @@ HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -753,7 +753,7 @@ HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -772,7 +772,7 @@ HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -974,7 +974,7 @@ HeapTupleSatisfiesMVCC(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -993,7 +993,7 @@ HeapTupleSatisfiesMVCC(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -1180,7 +1180,7 @@ HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin,
 		if (HeapTupleHeaderXminInvalid(tuple))
 			return HEAPTUPLE_DEAD;
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_OFF)
+		else if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -1198,7 +1198,7 @@ HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin,
 						InvalidTransactionId);
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
diff --git a/src/include/access/amapi.h b/src/include/access/amapi.h
index d7702e5..68859f2 100644
--- a/src/include/access/amapi.h
+++ b/src/include/access/amapi.h
@@ -75,6 +75,14 @@ typedef bool (*aminsert_function) (Relation indexRelation,
 											   Relation heapRelation,
 											   IndexUniqueCheck checkUnique,
 											   struct IndexInfo *indexInfo);
+/* insert this WARM tuple */
+typedef bool (*amwarminsert_function) (Relation indexRelation,
+											   Datum *values,
+											   bool *isnull,
+											   ItemPointer heap_tid,
+											   Relation heapRelation,
+											   IndexUniqueCheck checkUnique,
+											   struct IndexInfo *indexInfo);
 
 /* bulk delete */
 typedef IndexBulkDeleteResult *(*ambulkdelete_function) (IndexVacuumInfo *info,
@@ -203,6 +211,7 @@ typedef struct IndexAmRoutine
 	ambuild_function ambuild;
 	ambuildempty_function ambuildempty;
 	aminsert_function aminsert;
+	amwarminsert_function amwarminsert;
 	ambulkdelete_function ambulkdelete;
 	amvacuumcleanup_function amvacuumcleanup;
 	amcanreturn_function amcanreturn;	/* can be NULL */
diff --git a/src/include/access/genam.h b/src/include/access/genam.h
index f467b18..bf1e6bd 100644
--- a/src/include/access/genam.h
+++ b/src/include/access/genam.h
@@ -75,12 +75,29 @@ typedef struct IndexBulkDeleteResult
 	bool		estimated_count;	/* num_index_tuples is an estimate */
 	double		num_index_tuples;		/* tuples remaining */
 	double		tuples_removed; /* # removed during vacuum operation */
+	double		num_red_pointers;	/* # red pointers found */
+	double		num_blue_pointers;	/* # blue pointers found */
+	double		pointers_colored;	/* # red pointers colored blue */
+	double		red_pointers_removed;	/* # red pointers removed */
+	double		blue_pointers_removed;	/* # blue pointers removed */
 	BlockNumber pages_deleted;	/* # unused pages in index */
 	BlockNumber pages_free;		/* # pages available for reuse */
 } IndexBulkDeleteResult;
 
+/*
+ * IndexBulkDeleteCallback should return one of the following
+ */
+typedef enum IndexBulkDeleteCallbackResult
+{
+	IBDCR_KEEP,			/* index tuple should be preserved */
+	IBDCR_DELETE,		/* index tuple should be deleted */
+	IBDCR_COLOR_BLUE	/* index tuple should be colored blue */
+} IndexBulkDeleteCallbackResult;
+
 /* Typedef for callback function to determine if a tuple is bulk-deletable */
-typedef bool (*IndexBulkDeleteCallback) (ItemPointer itemptr, void *state);
+typedef IndexBulkDeleteCallbackResult (*IndexBulkDeleteCallback) (
+										 ItemPointer itemptr,
+										 bool is_red, void *state);
 
 /* struct definitions appear in relscan.h */
 typedef struct IndexScanDescData *IndexScanDesc;
@@ -135,7 +152,8 @@ extern bool index_insert(Relation indexRelation,
 			 ItemPointer heap_t_ctid,
 			 Relation heapRelation,
 			 IndexUniqueCheck checkUnique,
-			 struct IndexInfo *indexInfo);
+			 struct IndexInfo *indexInfo,
+			 bool warm_update);
 
 extern IndexScanDesc index_beginscan(Relation heapRelation,
 				Relation indexRelation,
diff --git a/src/include/access/hash.h b/src/include/access/hash.h
index e76a7aa..a2720c1 100644
--- a/src/include/access/hash.h
+++ b/src/include/access/hash.h
@@ -269,6 +269,11 @@ typedef HashMetaPageData *HashMetaPage;
 #define HASHPROC		1
 #define HASHNProcs		1
 
+/*
+ * Flags overloaded on the t_tid.ip_posid field. They are managed by
+ * ItemPointerSetFlags and corresponding routines.
+ */
+#define HASH_INDEX_RED_POINTER	0x01
 
 /* public routines */
 
@@ -279,6 +284,10 @@ extern bool hashinsert(Relation rel, Datum *values, bool *isnull,
 		   ItemPointer ht_ctid, Relation heapRel,
 		   IndexUniqueCheck checkUnique,
 		   struct IndexInfo *indexInfo);
+extern bool hashwarminsert(Relation rel, Datum *values, bool *isnull,
+		   ItemPointer ht_ctid, Relation heapRel,
+		   IndexUniqueCheck checkUnique,
+		   struct IndexInfo *indexInfo);
 extern bool hashgettuple(IndexScanDesc scan, ScanDirection dir);
 extern int64 hashgetbitmap(IndexScanDesc scan, TIDBitmap *tbm);
 extern IndexScanDesc hashbeginscan(Relation rel, int nkeys, int norderbys);
@@ -346,6 +355,8 @@ extern void _hash_expandtable(Relation rel, Buffer metabuf);
 extern void _hash_finish_split(Relation rel, Buffer metabuf, Buffer obuf,
 				   Bucket obucket, uint32 maxbucket, uint32 highmask,
 				   uint32 lowmask);
+extern void _hash_color_items(Page page, OffsetNumber *coloritemsno,
+				   uint16 ncoloritems);
 
 /* hashsearch.c */
 extern bool _hash_next(IndexScanDesc scan, ScanDirection dir);
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 9412c3a..719a725 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -72,6 +72,20 @@ typedef struct HeapUpdateFailureData
 	CommandId	cmax;
 } HeapUpdateFailureData;
 
+typedef int HeapCheckWarmChainStatus;
+
+#define HCWC_BLUE_TUPLE	0x0001
+#define	HCWC_RED_TUPLE	0x0002
+#define HCWC_WARM_TUPLE	0x0004
+
+#define HCWC_IS_MIXED(status) \
+	(((status) & (HCWC_BLUE_TUPLE | HCWC_RED_TUPLE)) == \
+	 (HCWC_BLUE_TUPLE | HCWC_RED_TUPLE))
+#define HCWC_IS_ALL_RED(status) \
+	(((status) & HCWC_BLUE_TUPLE) == 0)
+#define HCWC_IS_ALL_BLUE(status) \
+	(((status) & HCWC_RED_TUPLE) == 0)
+#define HCWC_IS_WARM(status) \
+	(((status) & HCWC_WARM_TUPLE) != 0)
 
 /* ----------------
  *		function prototypes for heap access method
@@ -183,6 +197,10 @@ extern void simple_heap_update(Relation relation, ItemPointer otid,
 				   bool *warm_update);
 
 extern void heap_sync(Relation relation);
+extern HeapCheckWarmChainStatus heap_check_warm_chain(Page dp,
+				   ItemPointer tid, bool stop_at_warm);
+extern int heap_clear_warm_chain(Page dp, ItemPointer tid,
+				   OffsetNumber *cleared_offnums);
 
 /* in heap/pruneheap.c */
 extern void heap_page_prune_opt(Relation relation, Buffer buffer);
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index 9b081bf..66fd0ea 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -32,7 +32,7 @@
 #define XLOG_HEAP_INSERT		0x00
 #define XLOG_HEAP_DELETE		0x10
 #define XLOG_HEAP_UPDATE		0x20
-/* 0x030 is free, was XLOG_HEAP_MOVE */
+#define XLOG_HEAP_MULTI_INSERT	0x30
 #define XLOG_HEAP_HOT_UPDATE	0x40
 #define XLOG_HEAP_CONFIRM		0x50
 #define XLOG_HEAP_LOCK			0x60
@@ -47,18 +47,23 @@
 /*
  * We ran out of opcodes, so heapam.c now has a second RmgrId.  These opcodes
  * are associated with RM_HEAP2_ID, but are not logically different from
- * the ones above associated with RM_HEAP_ID.  XLOG_HEAP_OPMASK applies to
- * these, too.
+ * the ones above associated with RM_HEAP_ID.
+ *
+ * In PG 10, we moved XLOG_HEAP2_MULTI_INSERT to RM_HEAP_ID. That frees up the
+ * 0x80 bit in RM_HEAP2_ID, potentially making room for another 8 opcodes
+ * there.
  */
 #define XLOG_HEAP2_REWRITE		0x00
 #define XLOG_HEAP2_CLEAN		0x10
 #define XLOG_HEAP2_FREEZE_PAGE	0x20
 #define XLOG_HEAP2_CLEANUP_INFO 0x30
 #define XLOG_HEAP2_VISIBLE		0x40
-#define XLOG_HEAP2_MULTI_INSERT 0x50
+#define XLOG_HEAP2_WARMCLEAR	0x50
 #define XLOG_HEAP2_LOCK_UPDATED 0x60
 #define XLOG_HEAP2_NEW_CID		0x70
 
+#define XLOG_HEAP2_OPMASK		0x70
+
 /*
  * xl_heap_insert/xl_heap_multi_insert flag values, 8 bits are available.
  */
@@ -226,6 +231,14 @@ typedef struct xl_heap_clean
 
 #define SizeOfHeapClean (offsetof(xl_heap_clean, ndead) + sizeof(uint16))
 
+typedef struct xl_heap_warmclear
+{
+	uint16		ncleared;
+	/* OFFSET NUMBERS are in the block reference 0 */
+} xl_heap_warmclear;
+
+#define SizeOfHeapWarmClear (offsetof(xl_heap_warmclear, ncleared) + sizeof(uint16))
+
 /*
  * Cleanup_info is required in some cases during a lazy VACUUM.
  * Used for reporting the results of HeapTupleHeaderAdvanceLatestRemovedXid()
@@ -389,6 +402,8 @@ extern XLogRecPtr log_heap_clean(Relation reln, Buffer buffer,
 			   OffsetNumber *nowdead, int ndead,
 			   OffsetNumber *nowunused, int nunused,
 			   TransactionId latestRemovedXid);
+extern XLogRecPtr log_heap_warmclear(Relation reln, Buffer buffer,
+			   OffsetNumber *cleared, int ncleared);
 extern XLogRecPtr log_heap_freeze(Relation reln, Buffer buffer,
 				TransactionId cutoff_xid, xl_heap_freeze_tuple *tuples,
 				int ntuples);
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index b5891ca..1f6ab0d 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -201,6 +201,21 @@ struct HeapTupleHeaderData
 										 * upgrade support */
 #define HEAP_MOVED (HEAP_MOVED_OFF | HEAP_MOVED_IN)
 
+/*
+ * A WARM chain usually consists of two parts. Each of these parts is a HOT
+ * chain in itself, i.e. all indexed columns have the same value, but a WARM
+ * update separates the two parts. We call these parts the Blue chain and the
+ * Red chain, and we need a mechanism to identify which part a tuple belongs
+ * to. We can't just check HeapTupleHeaderIsHeapWarmTuple() because during a
+ * WARM update, both old and new tuples are marked as WARM tuples.
+ *
+ * We need another infomask bit for this, so we reuse the infomask bit that
+ * was earlier used by old-style VACUUM FULL. This is safe because the
+ * HEAP_WARM_TUPLE flag will always be set along with HEAP_WARM_RED. So if
+ * both HEAP_WARM_TUPLE and HEAP_WARM_RED are set, we know the tuple belongs
+ * to the Red part of the WARM chain.
+ */
+#define HEAP_WARM_RED			0x4000
 #define HEAP_XACT_MASK			0xFFF0	/* visibility-related bits */
 
 /*
@@ -397,7 +412,7 @@ struct HeapTupleHeaderData
 /* SetCmin is reasonably simple since we never need a combo CID */
 #define HeapTupleHeaderSetCmin(tup, cid) \
 do { \
-	Assert(!((tup)->t_infomask & HEAP_MOVED)); \
+	Assert(!HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_cid = (cid); \
 	(tup)->t_infomask &= ~HEAP_COMBOCID; \
 } while (0)
@@ -405,7 +420,7 @@ do { \
 /* SetCmax must be used after HeapTupleHeaderAdjustCmax; see combocid.c */
 #define HeapTupleHeaderSetCmax(tup, cid, iscombo) \
 do { \
-	Assert(!((tup)->t_infomask & HEAP_MOVED)); \
+	Assert(!HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_cid = (cid); \
 	if (iscombo) \
 		(tup)->t_infomask |= HEAP_COMBOCID; \
@@ -415,7 +430,7 @@ do { \
 
 #define HeapTupleHeaderGetXvac(tup) \
 ( \
-	((tup)->t_infomask & HEAP_MOVED) ? \
+	HeapTupleHeaderIsMoved(tup) ? \
 		(tup)->t_choice.t_heap.t_field3.t_xvac \
 	: \
 		InvalidTransactionId \
@@ -423,7 +438,7 @@ do { \
 
 #define HeapTupleHeaderSetXvac(tup, xid) \
 do { \
-	Assert((tup)->t_infomask & HEAP_MOVED); \
+	Assert(HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_xvac = (xid); \
 } while (0)
 
@@ -651,6 +666,58 @@ do { \
 )
 
 /*
+ * Macros to check if a tuple was moved off/in by VACUUM FULL from the
+ * pre-9.0 era. Such tuples must not have the HEAP_WARM_TUPLE flag set.
+ *
+ * Beware of multiple evaluations of the argument.
+ */ 
+#define HeapTupleHeaderIsMovedOff(tuple) \
+( \
+ 	!HeapTupleHeaderIsHeapWarmTuple((tuple)) && \
+  	((tuple)->t_infomask & HEAP_MOVED_OFF) \
+)
+
+#define HeapTupleHeaderIsMovedIn(tuple) \
+( \
+ 	!HeapTupleHeaderIsHeapWarmTuple((tuple)) && \
+  	((tuple)->t_infomask & HEAP_MOVED_IN) \
+)
+
+#define HeapTupleHeaderIsMoved(tuple) \
+( \
+ 	!HeapTupleHeaderIsHeapWarmTuple((tuple)) && \
+  	((tuple)->t_infomask & HEAP_MOVED) \
+)
+
+/*
+ * Check if tuple belongs to the Red part of the WARM chain.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderIsWarmRed(tuple) \
+( \
+	HeapTupleHeaderIsHeapWarmTuple(tuple) && \
+    (((tuple)->t_infomask & HEAP_WARM_RED) != 0) \
+)
+
+/*
+ * Mark the tuple as a member of the Red chain. This must only be done on a
+ * tuple which is already marked as a WARM tuple.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderSetWarmRed(tuple) \
+( \
+  	AssertMacro(HeapTupleHeaderIsHeapWarmTuple(tuple)), \
+	(tuple)->t_infomask |= HEAP_WARM_RED \
+)
+
+#define HeapTupleHeaderClearWarmRed(tuple) \
+( \
+	(tuple)->t_infomask &= ~HEAP_WARM_RED \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
@@ -810,6 +877,15 @@ struct MinimalTupleData
 #define HeapTupleClearHeapWarmTuple(tuple) \
 		HeapTupleHeaderClearHeapWarmTuple((tuple)->t_data)
 
+#define HeapTupleIsHeapWarmTupleRed(tuple) \
+		HeapTupleHeaderIsWarmRed((tuple)->t_data)
+
+#define HeapTupleSetHeapWarmTupleRed(tuple) \
+		HeapTupleHeaderSetWarmRed((tuple)->t_data)
+
+#define HeapTupleClearHeapWarmTupleRed(tuple) \
+		HeapTupleHeaderClearWarmRed((tuple)->t_data)
+
 #define HeapTupleGetOid(tuple) \
 		HeapTupleHeaderGetOid((tuple)->t_data)
 
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index d4b35ca..1f4f0bd 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -427,6 +427,12 @@ typedef BTScanOpaqueData *BTScanOpaque;
 #define SK_BT_NULLS_FIRST	(INDOPTION_NULLS_FIRST << SK_BT_INDOPTION_SHIFT)
 
 /*
+ * Flags overloaded on the t_tid.ip_posid field. They are managed by
+ * ItemPointerSetFlags and corresponding routines.
+ */
+#define BTREE_INDEX_RED_POINTER	0x01
+
+/*
  * external entry points for btree, in nbtree.c
  */
 extern IndexBuildResult *btbuild(Relation heap, Relation index,
@@ -436,6 +442,10 @@ extern bool btinsert(Relation rel, Datum *values, bool *isnull,
 		 ItemPointer ht_ctid, Relation heapRel,
 		 IndexUniqueCheck checkUnique,
 		 struct IndexInfo *indexInfo);
+extern bool btwarminsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 struct IndexInfo *indexInfo);
 extern IndexScanDesc btbeginscan(Relation rel, int nkeys, int norderbys);
 extern Size btestimateparallelscan(void);
 extern void btinitparallelscan(void *target);
@@ -487,10 +497,12 @@ extern void _bt_pageinit(Page page, Size size);
 extern bool _bt_page_recyclable(Page page);
 extern void _bt_delitems_delete(Relation rel, Buffer buf,
 					OffsetNumber *itemnos, int nitems, Relation heapRel);
-extern void _bt_delitems_vacuum(Relation rel, Buffer buf,
-					OffsetNumber *itemnos, int nitems,
-					BlockNumber lastBlockVacuumed);
+extern void _bt_handleitems_vacuum(Relation rel, Buffer buf,
+					OffsetNumber *delitemnos, int ndelitems,
+					OffsetNumber *coloritemnos, int ncoloritems);
 extern int	_bt_pagedel(Relation rel, Buffer buf);
+extern void	_bt_color_items(Page page, OffsetNumber *coloritemnos,
+					uint16 ncoloritems);
 
 /*
  * prototypes for functions in nbtsearch.c
diff --git a/src/include/access/nbtxlog.h b/src/include/access/nbtxlog.h
index d6a3085..5555742 100644
--- a/src/include/access/nbtxlog.h
+++ b/src/include/access/nbtxlog.h
@@ -142,34 +142,20 @@ typedef struct xl_btree_reuse_page
 /*
  * This is what we need to know about vacuum of individual leaf index tuples.
  * The WAL record can represent deletion of any number of index tuples on a
- * single index page when executed by VACUUM.
- *
- * For MVCC scans, lastBlockVacuumed will be set to InvalidBlockNumber.
- * For a non-MVCC index scans there is an additional correctness requirement
- * for applying these changes during recovery, which is that we must do one
- * of these two things for every block in the index:
- *		* lock the block for cleanup and apply any required changes
- *		* EnsureBlockUnpinned()
- * The purpose of this is to ensure that no index scans started before we
- * finish scanning the index are still running by the time we begin to remove
- * heap tuples.
- *
- * Any changes to any one block are registered on just one WAL record. All
- * blocks that we need to run EnsureBlockUnpinned() are listed as a block range
- * starting from the last block vacuumed through until this one. Individual
- * block numbers aren't given.
+ * single index page when executed by VACUUM. It also includes tuples whose
+ * color is changed from red to blue by VACUUM.
  *
  * Note that the *last* WAL record in any vacuum of an index is allowed to
  * have a zero length array of offsets. Earlier records must have at least one.
  */
 typedef struct xl_btree_vacuum
 {
-	BlockNumber lastBlockVacuumed;
-
-	/* TARGET OFFSET NUMBERS FOLLOW */
+	uint16		ndelitems;
+	uint16		ncoloritems;
+	/* ndelitems + ncoloritems TARGET OFFSET NUMBERS FOLLOW */
 } xl_btree_vacuum;
 
-#define SizeOfBtreeVacuum	(offsetof(xl_btree_vacuum, lastBlockVacuumed) + sizeof(BlockNumber))
+#define SizeOfBtreeVacuum	(offsetof(xl_btree_vacuum, ncoloritems) + sizeof(uint16))
 
 /*
  * This is what we need to know about marking an empty branch for deletion.
diff --git a/src/include/commands/progress.h b/src/include/commands/progress.h
index 9472ecc..b355b61 100644
--- a/src/include/commands/progress.h
+++ b/src/include/commands/progress.h
@@ -25,6 +25,7 @@
 #define PROGRESS_VACUUM_NUM_INDEX_VACUUMS		4
 #define PROGRESS_VACUUM_MAX_DEAD_TUPLES			5
 #define PROGRESS_VACUUM_NUM_DEAD_TUPLES			6
+#define PROGRESS_VACUUM_HEAP_BLKS_WARMCLEARED	7
 
 /* Phases of vacuum (as advertised via PROGRESS_VACUUM_PHASE) */
 #define PROGRESS_VACUUM_PHASE_SCAN_HEAP			1
Attachment: 0004_warm_updates_v15.patch (application/octet-stream)
commit d1dd6d5fdab8c4d6d2dbb574ac3f8a339ba7cde0
Author: Pavan Deolasee <pavan.deolasee@gmail.com>
Date:   Tue Feb 28 10:37:15 2017 +0530

    Main warm patch - v15

diff --git a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c
index f2eda67..b356e2b 100644
--- a/contrib/bloom/blutils.c
+++ b/contrib/bloom/blutils.c
@@ -142,6 +142,7 @@ blhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index b22563b..b4a1465 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -116,6 +116,7 @@ brinhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c
index 6593771..843389b 100644
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -94,6 +94,7 @@ gisthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c
index 24510e7..6645160 100644
--- a/src/backend/access/hash/hash.c
+++ b/src/backend/access/hash/hash.c
@@ -90,6 +90,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = hashrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -271,6 +272,8 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 	OffsetNumber offnum;
 	ItemPointer current;
 	bool		res;
+	IndexTuple	itup;
+
 
 	/* Hash indexes are always lossy since we store only the hash code */
 	scan->xs_recheck = true;
@@ -308,8 +311,6 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 			 offnum <= maxoffnum;
 			 offnum = OffsetNumberNext(offnum))
 		{
-			IndexTuple	itup;
-
 			itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 			if (ItemPointerEquals(&(so->hashso_heappos), &(itup->t_tid)))
 				break;
diff --git a/src/backend/access/hash/hashsearch.c b/src/backend/access/hash/hashsearch.c
index 9e5d7e4..60e941d 100644
--- a/src/backend/access/hash/hashsearch.c
+++ b/src/backend/access/hash/hashsearch.c
@@ -59,6 +59,8 @@ _hash_next(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
 	return true;
 }
 
@@ -363,6 +365,9 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
+
 	return true;
 }
 
diff --git a/src/backend/access/hash/hashutil.c b/src/backend/access/hash/hashutil.c
index c705531..dcba734 100644
--- a/src/backend/access/hash/hashutil.c
+++ b/src/backend/access/hash/hashutil.c
@@ -17,8 +17,12 @@
 #include "access/hash.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
+#include "nodes/execnodes.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 #define CALC_NEW_BUCKET(old_bucket, lowmask) \
 			old_bucket | (lowmask + 1)
@@ -446,3 +450,109 @@ _hash_get_newbucket_from_oldbucket(Relation rel, Bucket old_bucket,
 
 	return new_bucket;
 }
+
+/*
+ * Recheck if the heap tuple satisfies the key stored in the index tuple
+ */
+bool
+hashrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	IndexInfo  *indexInfo;
+	EState	   *estate;
+	ExprContext *econtext;
+	TupleTableSlot *slot;
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	Datum		values2[INDEX_MAX_KEYS];
+	bool		isnull2[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	indexInfo = BuildIndexInfo(indexRel);
+
+	/*
+	 * The heap tuple must be put into a slot for FormIndexDatum.
+	 */
+	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
+
+	ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
+
+	/*
+	 * Typically the index won't have expressions, but if it does we need an
+	 * EState to evaluate them.  We need it for exclusion constraints too,
+	 * even if they are just on simple columns.
+	 */
+	if (indexInfo->ii_Expressions != NIL ||
+			indexInfo->ii_ExclusionOps != NULL)
+	{
+		estate = CreateExecutorState();
+		econtext = GetPerTupleExprContext(estate);
+		econtext->ecxt_scantuple = slot;
+	}
+	else
+		estate = NULL;
+
+	/*
+	 * Form the index values and isnull flags for the index entry that we need
+	 * to check.
+	 *
+	 * Note: if the index uses functions that are not as immutable as they are
+	 * supposed to be, this could produce an index tuple different from the
+	 * original.  The index AM can catch such errors by verifying that it
+	 * finds a matching index entry with the tuple's TID.  For exclusion
+	 * constraints we check this in check_exclusion_constraint().
+	 */
+	FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
+	/*
+	 * HASH indexes compute a hash value of the key and store that in the
+	 * index. So we must first obtain the hash of the value obtained from the
+	 * heap and then do a comparison
+	 */
+	_hash_convert_tuple(indexRel, values, isnull, values2, isnull2);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL then they are equal
+		 */
+		if (isnull2[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If either is NULL then they are not equal
+		 */
+		if (isnull2[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now do a raw memory comparison
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values2[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	if (estate != NULL)
+		FreeExecutorState(estate);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	return equal;
+}
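The hashrecheck logic above — re-hash the heap value, then compare it with what the index stores — can be illustrated with a standalone sketch. This is only an analogy: `toy_hash` is a stand-in mixer (the real code goes through `_hash_convert_tuple` and `datumIsEqual` on Datums):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for PostgreSQL's hash function: any deterministic,
 * well-mixing 32-bit hash works for the sketch (this is the
 * invertible murmur3 finalizer, so distinct inputs give distinct
 * outputs). */
static uint32_t toy_hash(uint32_t key)
{
    key ^= key >> 16;
    key *= 0x85ebca6bU;
    key ^= key >> 13;
    key *= 0xc2b2ae35U;
    key ^= key >> 16;
    return key;
}

/* A hash index stores only the hash of the key, so rechecking a heap
 * tuple against an index tuple means re-hashing the heap value and
 * comparing it with the stored hash. */
static int hash_recheck(uint32_t stored_hash, uint32_t heap_value)
{
    return toy_hash(heap_value) == stored_hash;
}
```

Note this is why a hash recheck can only prove mismatch, never full equality of the original keys; the scan remains lossy (`xs_recheck = true`).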
diff --git a/src/backend/access/heap/README.WARM b/src/backend/access/heap/README.WARM
new file mode 100644
index 0000000..7b9a712
--- /dev/null
+++ b/src/backend/access/heap/README.WARM
@@ -0,0 +1,306 @@
+src/backend/access/heap/README.WARM
+
+Write Amplification Reduction Method (WARM)
+===========================================
+
+The Heap Only Tuple (HOT) feature greatly reduced redundant index
+entries and allowed re-use of the dead space occupied by previously
+updated or deleted tuples (see src/backend/access/heap/README.HOT).
+
+One of the necessary conditions for a HOT update is that the update
+must not change a column used in any of the indexes on the table.
+This condition is sometimes hard to meet, especially for complex
+workloads with several indexes on large yet frequently updated tables.
+Worse, even when only one or two indexed columns are updated, the
+regular non-HOT update still inserts a new index entry in every index
+on the table, irrespective of whether the key pertaining to that index
+changed or not.
+
+WARM is a technique devised to address these problems.
+
+
+Update Chains With Multiple Index Entries Pointing to the Root
+--------------------------------------------------------------
+
+When a non-HOT update is caused by an index key change, a new index
+entry must be inserted for the changed index. But if the index key
+hasn't changed for other indexes, we don't really need to insert a new
+entry. Even though the existing index entry is pointing to the old
+tuple, the new tuple is reachable via the t_ctid chain. To keep things
+simple, a WARM update requires that the heap block have enough space
+to store the new version of the tuple; this is the same requirement as
+for HOT updates.
+
+In WARM, we ensure that every index entry always points to the root of
+the WARM chain. In fact, a WARM chain looks exactly like a HOT chain
+except that there can be multiple index entries pointing to the root of
+the chain. So when a WARM update inserts a new entry into an index for
+the updated tuple, the new entry is made to point to the root of the
+WARM chain.
+
+For example, take a table with two columns and an index on each of
+them. When a tuple is first inserted into the table, each index has
+exactly one entry pointing to the tuple.
+
+	lp [1]
+	[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's entry (aaaa) also points to 1
+
+Now if the tuple's second column is updated and there is room on the
+page, we perform a WARM update. Index1 does not get any new entry, and
+Index2's new entry still points to the root of the chain.
+
+	lp [1]  [2]
+	[1111, aaaa]->[1111, bbbb]
+
+	Index1's entry (1111) points to 1
+	Index2's old entry (aaaa) points to 1
+	Index2's new entry (bbbb) also points to 1
+
+"An update chain that has more than one index entry pointing to its
+root line pointer is called a WARM chain, and the action that creates
+a WARM chain is called a WARM update."
+
+Since all index entries always point to the root of the WARM chain,
+even when there is more than one of them, WARM chains can be pruned
+and dead tuples removed without any corresponding index cleanup.
+
+While this solves the problem of pruning dead tuples from a HOT/WARM
+chain, it also opens up a new technical challenge: a heap tuple is now
+reachable from multiple index entries, each having a different index
+key. While MVCC still ensures that only valid tuples are returned, a
+tuple that does not match the scan key may be returned via a stale
+index entry. In the above example, tuple [1111, bbbb] is reachable from
+both keys (aaaa) and (bbbb). For this reason, tuples returned from a
+WARM chain must always be rechecked for an index key match.
+
+Recheck Index Key Against Heap Tuple
+------------------------------------
+
+Since every index AM has its own notion of index tuples, each AM must
+implement its own method to recheck heap tuples. For example, a hash
+index stores the hash value of the column, so the recheck routine for
+the hash AM must first compute the hash of the heap attribute and then
+compare it against the value stored in the index tuple.
+
+The patch currently implements recheck routines for hash and btree
+indexes. If the table has an index whose AM does not provide a recheck
+routine, WARM updates are disabled on that table.
+
+Problem With Duplicate (key, ctid) Index Entries
+------------------------------------------------
+
+The index-key recheck logic works as long as no two index entries with
+the same key point to the same WARM chain. If duplicates exist, the
+same valid tuple is reachable via multiple index entries, each of which
+satisfies the index key check, so an index scan could return the tuple
+more than once. In the above example, if the tuple [1111, bbbb] is
+again updated to [1111, aaaa] and we insert a new index entry (aaaa)
+pointing to the root line pointer, we end up with the following
+structure:
+
+	lp [1]  [2]  [3]
+	[1111, aaaa]->[1111, bbbb]->[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's oldest entry (aaaa) points to 1
+	Index2's old entry (bbbb) also points to 1
+	Index2's new entry (aaaa) also points to 1
+
+We must solve this problem to ensure that the same tuple is not
+reachable via multiple index pointers. There are a couple of ways to
+address this issue:
+
+1. Do not allow a WARM update to a tuple from a WARM chain. This
+guarantees that there can never be duplicate index entries for the
+same root line pointer, because we must have checked the old and new
+index keys while doing the first WARM update.
+
+2. Do not allow duplicate (key, ctid) index pointers. In the above
+example, since (aaaa, 1) already exists in the index, we must not insert
+a duplicate index entry.
+
+The patch currently implements option 1, i.e. it does not allow WARM
+updates to a tuple that already belongs to a WARM chain. HOT updates
+are fine because they do not add a new index entry.
+
+Even with this restriction, the gain is significant because the number
+of regular updates can be cut by as much as half.
+
+Expression and Partial Indexes
+------------------------------
+
+Expressions may evaluate to the same value even if the underlying
+column values have changed. A simple example is an index on
+"lower(col)", which returns the same value if the new heap value
+differs only in case. So we cannot rely solely on the heap column check
+to decide whether or not to insert a new index entry for expression
+indexes. Similarly, for partial indexes the predicate expression must
+be evaluated to decide whether or not to insert a new index entry when
+columns referred to in the predicate change.
+
+(Neither of these is currently implemented; we simply disallow a WARM
+update if a column used in an expression index or an index predicate
+has changed.)
+
+
+Efficiently Finding the Root Line Pointer
+-----------------------------------------
+
+During a WARM update, we must be able to find the root line pointer of
+the tuple being updated. Normally the t_ctid field in the heap tuple
+header is used to find the next tuple in the update chain. But the
+tuple being updated must be the last tuple in the chain, and in that
+case t_ctid usually points to the tuple itself. So in theory we can
+use t_ctid to store additional information in the last tuple of the
+chain, as long as the fact that it is the last tuple is recorded
+elsewhere.
+
+We now utilize another bit from t_infomask2 to explicitly identify that
+this is the last tuple in the update chain.
+
+HEAP_LATEST_TUPLE - When this bit is set, the tuple is the last tuple in
+the update chain. The OffsetNumber part of t_ctid points to the root
+line pointer of the chain when HEAP_LATEST_TUPLE flag is set.
+
+If the UPDATE operation aborts, the last tuple in the update chain
+becomes dead, and the tuple that remains the last valid tuple in the
+chain no longer carries the root line pointer information. In such
+rare cases, the root line pointer must be found the hard way, by
+scanning the entire heap page.
+
+Tracking WARM Chains
+--------------------
+
+The old tuple and every subsequent tuple in the chain are marked with
+a special HEAP_WARM_TUPLE flag. We use the last remaining bit in
+t_infomask2 to store this information.
+
+When a tuple is returned from a WARM chain, the caller must do
+additional checks to ensure that the tuple matches the index key. Even
+a tuple that precedes the WARM update in the chain must be rechecked
+for an index key match (the case where an old tuple is reached via the
+new index key). So we must always follow the update chain to the end
+to check whether this is a WARM chain.
+
+When the old updated tuple is retired and the root line pointer is
+converted into a redirected line pointer, we can copy the information
+about WARM chain to the redirected line pointer by storing a special
+value in the lp_len field of the line pointer. This will handle the most
+common case where a WARM chain is replaced by a redirect line pointer
+and a single tuple in the chain.
+
+Converting WARM chains back to HOT chains (VACUUM ?)
+----------------------------------------------------
+
+The current implementation of WARM allows only one WARM update per
+chain. This simplifies the design and addresses certain issues around
+duplicate scans. But it also implies that the benefit of WARM is
+capped at 50%. That is still significant, but if we could convert WARM
+chains back to normal status, we could do far more WARM updates.
+
+A distinct property of a WARM chain is that at least one index has
+more than one live index entry pointing to the root of the chain. In
+other words, if we can remove the duplicate entry from every index, or
+conclusively prove that there are no duplicate index entries for the
+root line pointer, the chain can again be marked as HOT.
+
+Here is one idea:
+
+A WARM chain has two parts, separated by the tuple that caused the
+WARM update. All tuples within each part have matching index keys, but
+certain index keys may differ between the two parts. Let's say we mark
+heap tuples in each part with a special Red-Blue flag. The same flag is
+replicated in the index tuples. For example, when new rows are inserted
+into a table, they are marked with the Blue flag, and the index entries
+associated with those rows are also marked Blue. When a row is WARM
+updated, the new version is marked with the Red flag, and the new
+index entry created by the update is also marked Red.
+
+
+Heap chain: [1] [2] [3] [4]
+			[aaaa, 1111]B -> [aaaa, 1111]B -> [bbbb, 1111]R -> [bbbb, 1111]R
+
+Index1: 	(aaaa)B points to 1 (satisfies only tuples marked with B)
+			(bbbb)R points to 1 (satisfies only tuples marked with R)
+
+Index2:		(1111)B points to 1 (satisfies both B and R tuples)
+
+
+It's clear that for indexes with both Red and Blue pointers, a heap
+tuple with the Blue flag is reachable from the Blue pointer and one
+with the Red flag from the Red pointer. But for indexes which did not
+create a new entry, both Blue and Red tuples are reachable from the
+Blue pointer (there is no Red pointer in such indexes). So, as a side
+note, matching Red and Blue flags alone is not enough from the index
+scan perspective.
+
+During the first heap scan of VACUUM, we look for tuples with
+HEAP_WARM_TUPLE set.  If all live tuples in the chain are marked with
+either the Blue flag or the Red flag (but not a mix of the two), then
+the chain is a candidate for HOT conversion.  We remember the root
+line pointer and the Red-Blue flag of the WARM chain in a separate
+array.
+
+If we have a Red WARM chain, then our goal is to remove the Blue
+pointers, and vice versa. But there is a catch. For Index2 above,
+there is only a Blue pointer, and it must not be removed. In other
+words, we should remove a Blue pointer only if a Red pointer exists.
+Since index vacuum may visit Red and Blue pointers in any order, I
+think we will need another index pass to remove dead index pointers.
+So in the first index pass we check which WARM candidates have two
+index pointers. In the second pass, we remove the dead pointer and
+reset the Red flag if the surviving index pointer is Red.
+
+During the second heap scan, we fix the WARM chain by clearing the
+HEAP_WARM_TUPLE flag and resetting the Red flag to Blue.
+
+There are some more problems around aborted vacuums. For example, if
+vacuum aborts after changing a Red index flag to Blue but before
+removing the other Blue pointer, we end up with two Blue pointers to a
+Red WARM chain. But since the HEAP_WARM_TUPLE flag on the heap tuple
+is still set, further WARM updates to the chain will be blocked. I
+guess we will need some special handling for the case of multiple Blue
+pointers. We can either leave these WARM chains alone and let them die
+with a subsequent non-WARM update, or apply the heap-recheck logic
+during index vacuum to find the dead pointer. Given that vacuum aborts
+are not common, I am inclined to leave this case unhandled, but we
+must still check for the presence of multiple Blue pointers, make sure
+we don't accidentally remove either of them, and refrain from clearing
+the WARM flag on such chains.
+
+CREATE INDEX CONCURRENTLY
+-------------------------
+
+Currently CREATE INDEX CONCURRENTLY (CIC) is implemented as a 3-phase
+process.  In the first phase, we create a catalog entry for the new
+index so that the index is visible to all other backends, but we still
+don't use it for either reads or writes; we do, however, ensure that
+no new broken HOT chains are created by new transactions. In the
+second phase, we build the new index using an MVCC snapshot and then
+make the index available for inserts. We then do another validation
+pass and insert any missing tuples, each time indexing only the
+tuple's root line pointer. See README.HOT for details about how HOT
+impacts CIC and how various challenges are tackled.
+
+WARM poses another challenge because it allows creation of HOT chains
+even when an index key is changed. But since the index is not ready
+for insertion until the second phase is over, we might end up with a
+situation where the HOT chain has tuples with different index columns,
+yet only one of these values is indexed by the new index. Note that
+during the third phase, we only index tuples whose root line pointer
+is missing from the index. But we can't easily check whether the
+existing index tuple actually indexes the heap tuple visible to the
+new MVCC snapshot. Finding that out would require querying the index
+again for every tuple in the chain, especially if it's a WARM tuple,
+meaning repeated index accesses. Another option would be to return
+index keys along with the heap TIDs when the index is scanned for
+collecting all indexed TIDs during the third phase. We could then
+compare the heap tuple against the already indexed key and decide
+whether or not to index the new tuple.
+
+We solve this problem more simply by disallowing WARM updates until
+the index is ready for insertion. We don't need to disallow WARM
+wholesale; only updates that change the columns of the new index are
+prevented from being WARM updates.
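The recheck rule at the heart of the README — a tuple fetched through a WARM chain must be re-verified against the index key before the scan returns it — can be sketched outside PostgreSQL with toy structures. `ToyTuple` and its fields are illustrative only, standing in for the heap tuple header, HEAP_WARM_TUPLE, and the t_ctid link:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy model of an update chain entry. */
typedef struct ToyTuple {
    const char *key;           /* indexed column value */
    int         is_warm;       /* analogue of HEAP_WARM_TUPLE */
    struct ToyTuple *next;     /* analogue of following t_ctid */
} ToyTuple;

/* An index entry points at the root of the chain. If any tuple in the
 * chain is WARM, an entry reaching this chain might be stale, so the
 * visible tuple must be rechecked against the scan's index key. */
static ToyTuple *fetch_with_recheck(ToyTuple *root, ToyTuple *visible,
                                    const char *index_key)
{
    int need_recheck = 0;
    ToyTuple *t;

    /* Walk the whole chain looking for a WARM tuple. */
    for (t = root; t != NULL; t = t->next)
    {
        if (t->is_warm)
        {
            need_recheck = 1;
            break;
        }
    }

    if (need_recheck && strcmp(visible->key, index_key) != 0)
        return NULL;           /* stale index entry: skip this tuple */
    return visible;
}
```

In the README's example, an old entry (aaaa) and a new entry (bbbb) both point to the root; whichever entry the scan arrives through, only the tuple whose key actually matches is returned.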
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 064909a..9c4522a 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -1958,6 +1958,78 @@ heap_fetch(Relation relation,
 }
 
 /*
+ * Check if the HOT chain containing this tid is actually a WARM chain.
+ * Note that even if the WARM update ultimately aborted, we must still do a
+ * recheck, because the failed UPDATE may have inserted index entries
+ * which are now stale but still reference this chain.
+ */
+static bool
+hot_check_warm_chain(Page dp, ItemPointer tid)
+{
+	TransactionId prev_xmax = InvalidTransactionId;
+	OffsetNumber offnum;
+	HeapTupleData heapTuple;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+			break;
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		/*
+		 * Presence of either WARM or WARM updated tuple signals possible
+		 * breakage and the caller must recheck tuple returned from this chain
+		 * for index satisfaction
+		 */
+		if (HeapTupleHeaderIsHeapWarmTuple(heapTuple.t_data))
+			return true;
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
+		/*
+		 * It can't be a HOT chain if the tuple contains the root line pointer
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	/* All OK. No need to recheck */
+	return false;
+}
+
+/*
  *	heap_hot_search_buffer	- search HOT chain for tuple satisfying snapshot
  *
  * On entry, *tid is the TID of a tuple (either a simple tuple, or the root
@@ -1977,11 +2049,14 @@ heap_fetch(Relation relation,
  * Unlike heap_fetch, the caller must already have pin and (at least) share
  * lock on the buffer; it is still pinned/locked at exit.  Also unlike
  * heap_fetch, we do not report any pgstats count; caller may do so if wanted.
+ *
+ * recheck should be set false on entry by caller, will be set true on exit
+ * if a WARM tuple is encountered.
  */
 bool
 heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 					   Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call)
+					   bool *all_dead, bool first_call, bool *recheck)
 {
 	Page		dp = (Page) BufferGetPage(buffer);
 	TransactionId prev_xmax = InvalidTransactionId;
@@ -2035,9 +2110,12 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		ItemPointerSetOffsetNumber(&heapTuple->t_self, offnum);
 
 		/*
-		 * Shouldn't see a HEAP_ONLY tuple at chain start.
+		 * Shouldn't see a HEAP_ONLY tuple at chain start, unless we are
+		 * dealing with a WARM updated tuple, in which case deferred triggers
+		 * may request to fetch a WARM tuple from the middle of a chain.
 		 */
-		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple))
+		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple) &&
+				!HeapTupleIsHeapWarmTuple(heapTuple))
 			break;
 
 		/*
@@ -2050,6 +2128,16 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 			break;
 
 		/*
+		 * Check if there exists a WARM tuple somewhere down the chain and set
+		 * recheck to TRUE.
+		 *
+		 * XXX This is not very efficient right now, and we should look for
+		 * possible improvements here
+		 */
+		if (recheck && *recheck == false)
+			*recheck = hot_check_warm_chain(dp, &heapTuple->t_self);
+
+		/*
 		 * When first_call is true (and thus, skip is initially false) we'll
 		 * return the first tuple we find.  But on later passes, heapTuple
 		 * will initially be pointing to the tuple we returned last time.
@@ -2098,7 +2186,8 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		 * Check to see if HOT chain continues past this tuple; if so fetch
 		 * the next offnum and loop around.
 		 */
-		if (HeapTupleIsHotUpdated(heapTuple))
+		if (HeapTupleIsHotUpdated(heapTuple) &&
+			!HeapTupleHeaderHasRootOffset(heapTuple->t_data))
 		{
 			Assert(ItemPointerGetBlockNumber(&heapTuple->t_data->t_ctid) ==
 				   ItemPointerGetBlockNumber(tid));
@@ -2122,18 +2211,41 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
  */
 bool
 heap_hot_search(ItemPointer tid, Relation relation, Snapshot snapshot,
-				bool *all_dead)
+				bool *all_dead, bool *recheck, Buffer *cbuffer,
+				HeapTuple heapTuple)
 {
 	bool		result;
 	Buffer		buffer;
-	HeapTupleData heapTuple;
+	ItemPointerData ret_tid = *tid;
 
 	buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	LockBuffer(buffer, BUFFER_LOCK_SHARE);
-	result = heap_hot_search_buffer(tid, relation, buffer, snapshot,
-									&heapTuple, all_dead, true);
-	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-	ReleaseBuffer(buffer);
+	result = heap_hot_search_buffer(&ret_tid, relation, buffer, snapshot,
+									heapTuple, all_dead, true, recheck);
+
+	/*
+	 * If we are returning a potential candidate tuple from this chain and the
+	 * caller has requested the "recheck" hint, keep the buffer locked and
+	 * pinned. The caller must release the lock and pin on the buffer in all
+	 * such cases.
+	 */
+	if (!result || !recheck || !(*recheck))
+	{
+		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+		ReleaseBuffer(buffer);
+	}
+
+	/*
+	 * Set the caller-supplied tid to the actual location of the tuple being
+	 * returned.
+	 */
+	if (result)
+	{
+		*tid = ret_tid;
+		if (cbuffer)
+			*cbuffer = buffer;
+	}
+
 	return result;
 }
 
@@ -3492,15 +3604,18 @@ simple_heap_delete(Relation relation, ItemPointer tid)
 HTSU_Result
 heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode)
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update)
 {
 	HTSU_Result result;
 	TransactionId xid = GetCurrentTransactionId();
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *exprindx_attrs;
 	Bitmapset  *interesting_attrs;
 	Bitmapset  *modified_attrs;
+	Bitmapset  *notready_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3521,6 +3636,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		have_tuple_lock = false;
 	bool		iscombo;
 	bool		use_hot_update = false;
+	bool		use_warm_update = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
 	bool		all_visible_cleared_new = false;
@@ -3545,6 +3661,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot update tuples during a parallel operation")));
 
+	/* Assume no-warm update */
+	if (warm_update)
+		*warm_update = false;
+
 	/*
 	 * Fetch the list of attributes to be checked for various operations.
 	 *
@@ -3566,10 +3686,17 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	exprindx_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_EXPR_PREDICATE);
+	notready_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_NOTREADY);
+
+
 	interesting_attrs = bms_add_members(NULL, hot_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
-
+	interesting_attrs = bms_add_members(interesting_attrs, exprindx_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, notready_attrs);
 
 	block = ItemPointerGetBlockNumber(otid);
 	offnum = ItemPointerGetOffsetNumber(otid);
@@ -3621,6 +3748,9 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
 												  &oldtup, newtup);
 
+	if (modified_attrsp)
+		*modified_attrsp = bms_copy(modified_attrs);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3876,6 +4006,7 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(exprindx_attrs);
 		bms_free(modified_attrs);
 		bms_free(interesting_attrs);
 		return result;
@@ -4194,6 +4325,37 @@ l2:
 		 */
 		if (!bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
+		else
+		{
+			/*
+			 * If no WARM updates yet on this chain, let this update be a WARM
+			 * update.
+			 *
+			 * We check for both WARM and WARM-updated tuples since, even if a
+			 * previous WARM update aborted, we may still have added
+			 * another index entry for this HOT chain. In such situations, we
+			 * must not attempt a WARM update until the duplicate (key, CTID)
+			 * index entry issue is sorted out.
+			 *
+			 * XXX Later we'll add more checks to ensure WARM chains can
+			 * be further WARM updated. That is probably best done after a
+			 * first round of tests of the remaining functionality.
+			 *
+			 * XXX Disable WARM updates on system tables. There is nothing in
+			 * principle that stops us from supporting this. But it would
+			 * require an API change to propagate the changed columns back to the
+			 * caller so that CatalogUpdateIndexes() can avoid adding new
+			 * entries to indexes that are not changed by update. This will be
+			 * fixed once basic patch is tested. !!FIXME
+			 */
+			if (relation->rd_supportswarm &&
+				!bms_overlap(modified_attrs, exprindx_attrs) &&
+				!bms_is_subset(hot_attrs, modified_attrs) &&
+				!IsSystemRelation(relation) &&
+				!bms_overlap(notready_attrs, modified_attrs) &&
+				!HeapTupleIsHeapWarmTuple(&oldtup))
+				use_warm_update = true;
+		}
 	}
 	else
 	{
@@ -4240,6 +4402,22 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+
+		/*
+		 * Even if we are doing a HOT update, we must carry forward the WARM
+		 * flag because we may have already inserted another index entry
+		 * pointing to our root, and a third entry could create duplicates.
+		 *
+		 * Note: If we ever have a mechanism to avoid duplicate <key, TID> in
+		 * indexes, we could look at relaxing this restriction and allow even
+		 * more WARM updates.
+		 */
+		if (HeapTupleIsHeapWarmTuple(&oldtup))
+		{
+			HeapTupleSetHeapWarmTuple(heaptup);
+			HeapTupleSetHeapWarmTuple(newtup);
+		}
+
 		/*
 		 * For HOT (or WARM) updated tuples, we store the offset of the root
 		 * line pointer of this chain in the ip_posid field of the new tuple.
@@ -4252,12 +4430,35 @@ l2:
 		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
 			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
+	else if (use_warm_update)
+	{
+		/* Mark the old tuple as HOT-updated */
+		HeapTupleSetHotUpdated(&oldtup);
+		HeapTupleSetHeapWarmTuple(&oldtup);
+		/* And mark the new tuple as heap-only */
+		HeapTupleSetHeapOnly(heaptup);
+		HeapTupleSetHeapWarmTuple(heaptup);
+		/* Mark the caller's copy too, in case different from heaptup */
+		HeapTupleSetHeapOnly(newtup);
+		HeapTupleSetHeapWarmTuple(newtup);
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
+		/* Let the caller know we did a WARM update */
+		if (warm_update)
+			*warm_update = true;
+	}
 	else
 	{
 		/* Make sure tuples are correctly marked as not-HOT */
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		HeapTupleClearHeapWarmTuple(heaptup);
+		HeapTupleClearHeapWarmTuple(newtup);
 		root_offnum = InvalidOffsetNumber;
 	}
 
@@ -4367,7 +4568,10 @@ l2:
 	if (have_tuple_lock)
 		UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode);
 
-	pgstat_count_heap_update(relation, use_hot_update);
+	/*
+	 * Count HOT and WARM updates separately
+	 */
+	pgstat_count_heap_update(relation, use_hot_update, use_warm_update);
 
 	/*
 	 * If heaptup is a private copy, release it.  Don't forget to copy t_self
@@ -4507,7 +4711,8 @@ HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
  * via ereport().
  */
 void
-simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
+simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup,
+		Bitmapset **modified_attrs, bool *warm_update)
 {
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
@@ -4516,7 +4721,7 @@ simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
 	result = heap_update(relation, otid, tup,
 						 GetCurrentCommandId(true), InvalidSnapshot,
 						 true /* wait for commit */ ,
-						 &hufd, &lockmode);
+						 &hufd, &lockmode, modified_attrs, warm_update);
 	switch (result)
 	{
 		case HeapTupleSelfUpdated:
@@ -7568,6 +7773,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	bool		need_tuple_data = RelationIsLogicallyLogged(reln);
 	bool		init;
 	int			bufflags;
+	bool		warm_update = false;
 
 	/* Caller should not call me on a non-WAL-logged relation */
 	Assert(RelationNeedsWAL(reln));
@@ -7579,6 +7785,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	else
 		info = XLOG_HEAP_UPDATE;
 
+	if (HeapTupleIsHeapWarmTuple(newtup))
+		warm_update = true;
+
 	/*
 	 * If the old and new tuple are on the same page, we only need to log the
 	 * parts of the new tuple that were changed.  That saves on the amount of
@@ -7652,6 +7861,8 @@ log_heap_update(Relation reln, Buffer oldbuf,
 				xlrec.flags |= XLH_UPDATE_CONTAINS_OLD_KEY;
 		}
 	}
+	if (warm_update)
+		xlrec.flags |= XLH_UPDATE_WARM_UPDATE;
 
 	/* If new tuple is the single and first tuple on page... */
 	if (ItemPointerGetOffsetNumber(&(newtup->t_self)) == FirstOffsetNumber &&
@@ -8629,16 +8840,22 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 	Size		freespace = 0;
 	XLogRedoAction oldaction;
 	XLogRedoAction newaction;
+	bool		warm_update = false;
 
 	/* initialize to keep the compiler quiet */
 	oldtup.t_data = NULL;
 	oldtup.t_len = 0;
 
+	if (xlrec->flags & XLH_UPDATE_WARM_UPDATE)
+		warm_update = true;
+
 	XLogRecGetBlockTag(record, 0, &rnode, NULL, &newblk);
 	if (XLogRecGetBlockTag(record, 1, NULL, NULL, &oldblk))
 	{
 		/* HOT updates are never done across pages */
 		Assert(!hot_update);
+		/* WARM updates are never done across pages */
+		Assert(!warm_update);
 	}
 	else
 		oldblk = newblk;
@@ -8698,6 +8915,11 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 								   &htup->t_infomask2);
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
+
+		/* Mark the old tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetHeapWarmTuple(htup);
+
 		/* Set forward chain link in t_ctid */
 		HeapTupleHeaderSetNextTid(htup, &newtid);
 
@@ -8833,6 +9055,10 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
 
+		/* Mark the new tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetHeapWarmTuple(htup);
+
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index f54337c..c2bd7d6 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -834,6 +834,13 @@ heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				continue;
 
+			/*
+			 * If the tuple has a root line pointer, it must be the end of
+			 * the chain.
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			/* Set up to scan the HOT-chain */
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
diff --git a/src/backend/access/index/indexam.c b/src/backend/access/index/indexam.c
index 4e7eca7..f56c58f 100644
--- a/src/backend/access/index/indexam.c
+++ b/src/backend/access/index/indexam.c
@@ -75,10 +75,12 @@
 #include "access/xlog.h"
 #include "catalog/catalog.h"
 #include "catalog/index.h"
+#include "executor/executor.h"
 #include "pgstat.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
+#include "utils/datum.h"
 #include "utils/snapmgr.h"
 #include "utils/tqual.h"
 
@@ -234,6 +236,21 @@ index_beginscan(Relation heapRelation,
 	scan->heapRelation = heapRelation;
 	scan->xs_snapshot = snapshot;
 
+	/*
+	 * If the index supports recheck, make sure that index tuple is saved
+	 * during index scans.
+	 *
+	 * XXX Ideally, we should look at all indexes on the table and check if
+	 * WARM is at all supported on the base table. If WARM is not supported
+	 * then we don't need to do any recheck. RelationGetIndexAttrBitmap() does
+	 * do that and sets rd_supportswarm after looking at all indexes. But we
+	 * don't know if the function was called earlier in the session when we're
+	 * here. We can't call it now because of the risk of causing a
+	 * deadlock.
+	 */
+	if (indexRelation->rd_amroutine->amrecheck)
+		scan->xs_want_itup = true;
+
 	return scan;
 }
 
@@ -535,7 +552,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
 	/*
 	 * The AM's amgettuple proc finds the next index entry matching the scan
 	 * keys, and puts the TID into scan->xs_ctup.t_self.  It should also set
-	 * scan->xs_recheck and possibly scan->xs_itup, though we pay no attention
+	 * scan->xs_tuple_recheck and possibly scan->xs_itup, though we pay no attention
 	 * to those fields here.
 	 */
 	found = scan->indexRelation->rd_amroutine->amgettuple(scan, direction);
@@ -574,7 +591,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
  * dropped in a future index_getnext_tid, index_fetch_heap or index_endscan
  * call).
  *
- * Note: caller must check scan->xs_recheck, and perform rechecking of the
+ * Note: caller must check scan->xs_tuple_recheck, and perform rechecking of the
  * scan keys if required.  We do not do that here because we don't have
  * enough information to do it efficiently in the general case.
  * ----------------
@@ -601,6 +618,12 @@ index_fetch_heap(IndexScanDesc scan)
 		 */
 		if (prev_buf != scan->xs_cbuf)
 			heap_page_prune_opt(scan->heapRelation, scan->xs_cbuf);
+
+		/*
+		 * Reset the per-tuple recheck flag to the scan-level default. If
+		 * the whole scan is lossy, every tuple must be rechecked.
+		 */
+		scan->xs_tuple_recheck = scan->xs_recheck;
 	}
 
 	/* Obtain share-lock on the buffer so we can examine visibility */
@@ -610,32 +633,64 @@ index_fetch_heap(IndexScanDesc scan)
 											scan->xs_snapshot,
 											&scan->xs_ctup,
 											&all_dead,
-											!scan->xs_continue_hot);
+											!scan->xs_continue_hot,
+											&scan->xs_tuple_recheck);
 	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
 
 	if (got_heap_tuple)
 	{
+		bool res = true;
+
+		/*
+		 * OK, we got a tuple which satisfies the snapshot, but if it's part
+		 * of a WARM chain, we must do additional checks to ensure that we
+		 * are indeed returning a correct tuple. Note that if the index AM
+		 * does not implement the amrecheck method, we skip the additional
+		 * checks, since WARM must have been disabled on such tables.
+		 *
+		 * XXX What happens when a new index which does not support amrecheck
+		 * is added to the table? Do we need to handle this case, or are
+		 * CREATE INDEX and CREATE INDEX CONCURRENTLY smart enough to handle
+		 * this issue?
+		 */
+		if (scan->xs_tuple_recheck &&
+				scan->xs_itup &&
+				scan->indexRelation->rd_amroutine->amrecheck)
+		{
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_SHARE);
+			res = scan->indexRelation->rd_amroutine->amrecheck(
+						scan->indexRelation,
+						scan->xs_itup,
+						scan->heapRelation,
+						&scan->xs_ctup);
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+		}
+
 		/*
 		 * Only in a non-MVCC snapshot can more than one member of the HOT
 		 * chain be visible.
 		 */
 		scan->xs_continue_hot = !IsMVCCSnapshot(scan->xs_snapshot);
 		pgstat_count_heap_fetch(scan->indexRelation);
-		return &scan->xs_ctup;
-	}
 
-	/* We've reached the end of the HOT chain. */
-	scan->xs_continue_hot = false;
+		if (res)
+			return &scan->xs_ctup;
+	}
+	else
+	{
+		/* We've reached the end of the HOT chain. */
+		scan->xs_continue_hot = false;
 
-	/*
-	 * If we scanned a whole HOT chain and found only dead tuples, tell index
-	 * AM to kill its entry for that TID (this will take effect in the next
-	 * amgettuple call, in index_getnext_tid).  We do not do this when in
-	 * recovery because it may violate MVCC to do so.  See comments in
-	 * RelationGetIndexScan().
-	 */
-	if (!scan->xactStartedInRecovery)
-		scan->kill_prior_tuple = all_dead;
+		/*
+		 * If we scanned a whole HOT chain and found only dead tuples, tell index
+		 * AM to kill its entry for that TID (this will take effect in the next
+		 * amgettuple call, in index_getnext_tid).  We do not do this when in
+		 * recovery because it may violate MVCC to do so.  See comments in
+		 * RelationGetIndexScan().
+		 */
+		if (!scan->xactStartedInRecovery)
+			scan->kill_prior_tuple = all_dead;
+	}
 
 	return NULL;
 }
diff --git a/src/backend/access/nbtree/nbtinsert.c b/src/backend/access/nbtree/nbtinsert.c
index 6dca810..b5cb619 100644
--- a/src/backend/access/nbtree/nbtinsert.c
+++ b/src/backend/access/nbtree/nbtinsert.c
@@ -20,11 +20,14 @@
 #include "access/nbtxlog.h"
 #include "access/transam.h"
 #include "access/xloginsert.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
 #include "miscadmin.h"
+#include "nodes/execnodes.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
 #include "utils/tqual.h"
-
+#include "utils/datum.h"
 
 typedef struct
 {
@@ -250,6 +253,9 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 	BTPageOpaque opaque;
 	Buffer		nbuf = InvalidBuffer;
 	bool		found = false;
+	Buffer		buffer;
+	HeapTupleData	heapTuple;
+	bool		recheck = false;
 
 	/* Assume unique until we find a duplicate */
 	*is_unique = true;
@@ -309,6 +315,8 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				curitup = (IndexTuple) PageGetItem(page, curitemid);
 				htid = curitup->t_tid;
 
+				recheck = false;
+
 				/*
 				 * If we are doing a recheck, we expect to find the tuple we
 				 * are rechecking.  It's not a duplicate, but we have to keep
@@ -326,112 +334,153 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				 * have just a single index entry for the entire chain.
 				 */
 				else if (heap_hot_search(&htid, heapRel, &SnapshotDirty,
-										 &all_dead))
+							&all_dead, &recheck, &buffer,
+							&heapTuple))
 				{
 					TransactionId xwait;
+					bool result = true;
 
 					/*
-					 * It is a duplicate. If we are only doing a partial
-					 * check, then don't bother checking if the tuple is being
-					 * updated in another transaction. Just return the fact
-					 * that it is a potential conflict and leave the full
-					 * check till later.
+					 * If the tuple was WARM updated, we may see our own
+					 * tuple again. Since WARM updates don't create new index
+					 * entries, our own tuple is only reachable via the old
+					 * index pointer.
 					 */
-					if (checkUnique == UNIQUE_CHECK_PARTIAL)
+					if (checkUnique == UNIQUE_CHECK_EXISTING &&
+							ItemPointerCompare(&htid, &itup->t_tid) == 0)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						*is_unique = false;
-						return InvalidTransactionId;
+						found = true;
+						result = false;
+						if (recheck)
+							UnlockReleaseBuffer(buffer);
 					}
-
-					/*
-					 * If this tuple is being updated by other transaction
-					 * then we have to wait for its commit/abort.
-					 */
-					xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
-						SnapshotDirty.xmin : SnapshotDirty.xmax;
-
-					if (TransactionIdIsValid(xwait))
+					else if (recheck)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						/* Tell _bt_doinsert to wait... */
-						*speculativeToken = SnapshotDirty.speculativeToken;
-						return xwait;
+						result = btrecheck(rel, curitup, heapRel, &heapTuple);
+						UnlockReleaseBuffer(buffer);
 					}
 
-					/*
-					 * Otherwise we have a definite conflict.  But before
-					 * complaining, look to see if the tuple we want to insert
-					 * is itself now committed dead --- if so, don't complain.
-					 * This is a waste of time in normal scenarios but we must
-					 * do it to support CREATE INDEX CONCURRENTLY.
-					 *
-					 * We must follow HOT-chains here because during
-					 * concurrent index build, we insert the root TID though
-					 * the actual tuple may be somewhere in the HOT-chain.
-					 * While following the chain we might not stop at the
-					 * exact tuple which triggered the insert, but that's OK
-					 * because if we find a live tuple anywhere in this chain,
-					 * we have a unique key conflict.  The other live tuple is
-					 * not part of this chain because it had a different index
-					 * entry.
-					 */
-					htid = itup->t_tid;
-					if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL))
-					{
-						/* Normal case --- it's still live */
-					}
-					else
+					if (result)
 					{
 						/*
-						 * It's been deleted, so no error, and no need to
-						 * continue searching
+						 * It is a duplicate. If we are only doing a partial
+						 * check, then don't bother checking if the tuple is being
+						 * updated in another transaction. Just return the fact
+						 * that it is a potential conflict and leave the full
+						 * check till later.
 						 */
-						break;
-					}
+						if (checkUnique == UNIQUE_CHECK_PARTIAL)
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							*is_unique = false;
+							return InvalidTransactionId;
+						}
 
-					/*
-					 * Check for a conflict-in as we would if we were going to
-					 * write to this page.  We aren't actually going to write,
-					 * but we want a chance to report SSI conflicts that would
-					 * otherwise be masked by this unique constraint
-					 * violation.
-					 */
-					CheckForSerializableConflictIn(rel, NULL, buf);
+						/*
+						 * If this tuple is being updated by other transaction
+						 * then we have to wait for its commit/abort.
+						 */
+						xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
+							SnapshotDirty.xmin : SnapshotDirty.xmax;
+
+						if (TransactionIdIsValid(xwait))
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							/* Tell _bt_doinsert to wait... */
+							*speculativeToken = SnapshotDirty.speculativeToken;
+							return xwait;
+						}
 
-					/*
-					 * This is a definite conflict.  Break the tuple down into
-					 * datums and report the error.  But first, make sure we
-					 * release the buffer locks we're holding ---
-					 * BuildIndexValueDescription could make catalog accesses,
-					 * which in the worst case might touch this same index and
-					 * cause deadlocks.
-					 */
-					if (nbuf != InvalidBuffer)
-						_bt_relbuf(rel, nbuf);
-					_bt_relbuf(rel, buf);
+						/*
+						 * Otherwise we have a definite conflict.  But before
+						 * complaining, look to see if the tuple we want to insert
+						 * is itself now committed dead --- if so, don't complain.
+						 * This is a waste of time in normal scenarios but we must
+						 * do it to support CREATE INDEX CONCURRENTLY.
+						 *
+						 * We must follow HOT-chains here because during
+						 * concurrent index build, we insert the root TID though
+						 * the actual tuple may be somewhere in the HOT-chain.
+						 * While following the chain we might not stop at the
+						 * exact tuple which triggered the insert, but that's OK
+						 * because if we find a live tuple anywhere in this chain,
+						 * we have a unique key conflict.  The other live tuple is
+						 * not part of this chain because it had a different index
+						 * entry.
+						 */
+						recheck = false;
+						ItemPointerCopy(&itup->t_tid, &htid);
+						if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL,
+									&recheck, &buffer, &heapTuple))
+						{
+							bool result = true;
+							if (recheck)
+							{
+								/*
+								 * Recheck that the tuple actually satisfies
+								 * the index key. Otherwise, we might be
+								 * following a wrong index pointer and must
+								 * not entertain this tuple.
+								 */
+								result = btrecheck(rel, itup, heapRel, &heapTuple);
+								UnlockReleaseBuffer(buffer);
+							}
+							if (!result)
+								break;
+							/* Normal case --- it's still live */
+						}
+						else
+						{
+							/*
+							 * It's been deleted, so no error, and no need to
+							 * continue searching
+							 */
+							break;
+						}
 
-					{
-						Datum		values[INDEX_MAX_KEYS];
-						bool		isnull[INDEX_MAX_KEYS];
-						char	   *key_desc;
-
-						index_deform_tuple(itup, RelationGetDescr(rel),
-										   values, isnull);
-
-						key_desc = BuildIndexValueDescription(rel, values,
-															  isnull);
-
-						ereport(ERROR,
-								(errcode(ERRCODE_UNIQUE_VIOLATION),
-								 errmsg("duplicate key value violates unique constraint \"%s\"",
-										RelationGetRelationName(rel)),
-							   key_desc ? errdetail("Key %s already exists.",
-													key_desc) : 0,
-								 errtableconstraint(heapRel,
-											 RelationGetRelationName(rel))));
+						/*
+						 * Check for a conflict-in as we would if we were going to
+						 * write to this page.  We aren't actually going to write,
+						 * but we want a chance to report SSI conflicts that would
+						 * otherwise be masked by this unique constraint
+						 * violation.
+						 */
+						CheckForSerializableConflictIn(rel, NULL, buf);
+
+						/*
+						 * This is a definite conflict.  Break the tuple down into
+						 * datums and report the error.  But first, make sure we
+						 * release the buffer locks we're holding ---
+						 * BuildIndexValueDescription could make catalog accesses,
+						 * which in the worst case might touch this same index and
+						 * cause deadlocks.
+						 */
+						if (nbuf != InvalidBuffer)
+							_bt_relbuf(rel, nbuf);
+						_bt_relbuf(rel, buf);
+
+						{
+							Datum		values[INDEX_MAX_KEYS];
+							bool		isnull[INDEX_MAX_KEYS];
+							char	   *key_desc;
+
+							index_deform_tuple(itup, RelationGetDescr(rel),
+									values, isnull);
+
+							key_desc = BuildIndexValueDescription(rel, values,
+									isnull);
+
+							ereport(ERROR,
+									(errcode(ERRCODE_UNIQUE_VIOLATION),
+									 errmsg("duplicate key value violates unique constraint \"%s\"",
+										 RelationGetRelationName(rel)),
+									 key_desc ? errdetail("Key %s already exists.",
+										 key_desc) : 0,
+									 errtableconstraint(heapRel,
+										 RelationGetRelationName(rel))));
+						}
 					}
 				}
 				else if (all_dead)
diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c
index 775f2ff..952ed8f 100644
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -23,6 +23,7 @@
 #include "access/xlog.h"
 #include "catalog/index.h"
 #include "commands/vacuum.h"
+#include "executor/nodeIndexscan.h"
 #include "pgstat.h"
 #include "storage/condition_variable.h"
 #include "storage/indexfsm.h"
@@ -163,6 +164,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = btestimateparallelscan;
 	amroutine->aminitparallelscan = btinitparallelscan;
 	amroutine->amparallelrescan = btparallelrescan;
+	amroutine->amrecheck = btrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -344,8 +346,9 @@ btgettuple(IndexScanDesc scan, ScanDirection dir)
 	BTScanOpaque so = (BTScanOpaque) scan->opaque;
 	bool		res;
 
-	/* btree indexes are never lossy */
+	/* btree indexes are never lossy, except for WARM tuples */
 	scan->xs_recheck = false;
+	scan->xs_tuple_recheck = false;
 
 	/*
 	 * If we have any array keys, initialize them during first call for a
diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c
index 5b259a3..c376c1b 100644
--- a/src/backend/access/nbtree/nbtutils.c
+++ b/src/backend/access/nbtree/nbtutils.c
@@ -20,11 +20,15 @@
 #include "access/nbtree.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
 #include "miscadmin.h"
+#include "nodes/execnodes.h"
 #include "utils/array.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 
 typedef struct BTSortArrayContext
@@ -2069,3 +2073,103 @@ btproperty(Oid index_oid, int attno,
 			return false;		/* punt to generic code */
 	}
 }
+
+/*
+ * Check if the index tuple's key matches the one computed from the given heap
+ * tuple's attributes.
+ */
+bool
+btrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	IndexInfo  *indexInfo;
+	EState	   *estate;
+	ExprContext *econtext;
+	TupleTableSlot *slot;
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	/* Get IndexInfo for this index */
+	indexInfo = BuildIndexInfo(indexRel);
+
+	/*
+	 * The heap tuple must be put into a slot for FormIndexDatum.
+	 */
+	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
+
+	ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
+
+	/*
+	 * Typically the index won't have expressions, but if it does we need an
+	 * EState to evaluate them.  We need it for exclusion constraints too,
+	 * even if they are just on simple columns.
+	 */
+	if (indexInfo->ii_Expressions != NIL ||
+			indexInfo->ii_ExclusionOps != NULL)
+	{
+		estate = CreateExecutorState();
+		econtext = GetPerTupleExprContext(estate);
+		econtext->ecxt_scantuple = slot;
+	}
+	else
+		estate = NULL;
+
+	/*
+	 * Form the index values and isnull flags for the index entry that we need
+	 * to check.
+	 *
+	 * Note: if the index uses functions that are not as immutable as they are
+	 * supposed to be, this could produce an index tuple different from the
+	 * original.  The index AM can catch such errors by verifying that it
+	 * finds a matching index entry with the tuple's TID.  For exclusion
+	 * constraints we check this in check_exclusion_constraint().
+	 */
+	FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL, then they are equal
+		 */
+		if (isnull[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If just one is NULL, then they are not equal
+		 */
+		if (isnull[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now just do a raw memory comparison. If the index tuple was formed
+		 * using this heap tuple, the computed index values must match
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	if (estate != NULL)
+		FreeExecutorState(estate);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	return equal;
+}
diff --git a/src/backend/access/spgist/spgutils.c b/src/backend/access/spgist/spgutils.c
index e57ac49..59ef7f3 100644
--- a/src/backend/access/spgist/spgutils.c
+++ b/src/backend/access/spgist/spgutils.c
@@ -72,6 +72,7 @@ spghandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index f8d9214..bba52ec 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -54,6 +54,7 @@
 #include "nodes/makefuncs.h"
 #include "nodes/nodeFuncs.h"
 #include "optimizer/clauses.h"
+#include "optimizer/var.h"
 #include "parser/parser.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
@@ -1691,6 +1692,20 @@ BuildIndexInfo(Relation index)
 	ii->ii_AmCache = NULL;
 	ii->ii_Context = CurrentMemoryContext;
 
+	/* build a bitmap of all table attributes referred by this index */
+	for (i = 0; i < ii->ii_NumIndexAttrs; i++)
+	{
+		AttrNumber attr = ii->ii_KeyAttrNumbers[i];
+		ii->ii_indxattrs = bms_add_member(ii->ii_indxattrs, attr -
+				FirstLowInvalidHeapAttributeNumber);
+	}
+
+	/* Collect all attributes used in expressions, too */
+	pull_varattnos((Node *) ii->ii_Expressions, 1, &ii->ii_indxattrs);
+
+	/* Collect all attributes in the index predicate, too */
+	pull_varattnos((Node *) ii->ii_Predicate, 1, &ii->ii_indxattrs);
+
 	return ii;
 }
 
diff --git a/src/backend/catalog/indexing.c b/src/backend/catalog/indexing.c
index abc344a..e5355a8 100644
--- a/src/backend/catalog/indexing.c
+++ b/src/backend/catalog/indexing.c
@@ -66,10 +66,15 @@ CatalogCloseIndexes(CatalogIndexState indstate)
  *
  * This should be called for each inserted or updated catalog tuple.
  *
+ * If the tuple was WARM updated, modified_attrs contains the set of columns
+ * modified by the update. We must not insert new index entries for indexes
+ * which do not refer to any of the modified columns.
+ *
  * This is effectively a cut-down version of ExecInsertIndexTuples.
  */
 static void
-CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
+CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple,
+		Bitmapset *modified_attrs, bool warm_update)
 {
 	int			i;
 	int			numIndexes;
@@ -79,12 +84,28 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 	IndexInfo **indexInfoArray;
 	Datum		values[INDEX_MAX_KEYS];
 	bool		isnull[INDEX_MAX_KEYS];
+	ItemPointerData root_tid;
 
-	/* HOT update does not require index inserts */
-	if (HeapTupleIsHeapOnly(heapTuple))
+	/*
+	 * A HOT update does not require index inserts, but a WARM update may
+	 * still need them for some indexes.
+	 */
+	if (HeapTupleIsHeapOnly(heapTuple) && !warm_update)
 		return;
 
 	/*
+	 * If we've done a WARM update, then we must index the TID of the root line
+	 * pointer and not the actual TID of the new tuple.
+	 */
+	if (warm_update)
+		ItemPointerSet(&root_tid,
+				ItemPointerGetBlockNumber(&(heapTuple->t_self)),
+				HeapTupleHeaderGetRootOffset(heapTuple->t_data));
+	else
+		ItemPointerCopy(&heapTuple->t_self, &root_tid);
+
+
+	/*
 	 * Get information from the state structure.  Fall out if nothing to do.
 	 */
 	numIndexes = indstate->ri_NumIndices;
@@ -112,6 +133,17 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 			continue;
 
 		/*
+		 * If we've done a WARM update, then we must not insert a new index tuple
+		 * if none of the index keys have changed. This is not just an
+		 * optimization, but a requirement for WARM to work correctly.
+		 */
+		if (warm_update)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
+		/*
 		 * Expressional and partial indexes on system catalogs are not
 		 * supported, nor exclusion constraints, nor deferred uniqueness
 		 */
@@ -136,7 +168,7 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 		index_insert(relationDescs[i],	/* index relation */
 					 values,	/* array of index Datums */
 					 isnull,	/* is-null flags */
-					 &(heapTuple->t_self),		/* tid of heap tuple */
+					 &root_tid,
 					 heapRelation,
 					 relationDescs[i]->rd_index->indisunique ?
 					 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
@@ -168,7 +200,7 @@ CatalogTupleInsert(Relation heapRel, HeapTuple tup)
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 	CatalogCloseIndexes(indstate);
 
 	return oid;
@@ -190,7 +222,7 @@ CatalogTupleInsertWithInfo(Relation heapRel, HeapTuple tup,
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 
 	return oid;
 }
@@ -210,12 +242,14 @@ void
 CatalogTupleUpdate(Relation heapRel, ItemPointer otid, HeapTuple tup)
 {
 	CatalogIndexState indstate;
+	bool	warm_update;
+	Bitmapset	*modified_attrs;
 
 	indstate = CatalogOpenIndexes(heapRel);
 
-	simple_heap_update(heapRel, otid, tup);
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, modified_attrs, warm_update);
 	CatalogCloseIndexes(indstate);
 }
 
@@ -231,9 +265,12 @@ void
 CatalogTupleUpdateWithInfo(Relation heapRel, ItemPointer otid, HeapTuple tup,
 						   CatalogIndexState indstate)
 {
-	simple_heap_update(heapRel, otid, tup);
+	Bitmapset  *modified_attrs;
+	bool		warm_update;
+
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, modified_attrs, warm_update);
 }
 
 /*
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 38be9cf..7fb1295 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -498,6 +498,7 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_tuples_deleted(C.oid) AS n_tup_del,
             pg_stat_get_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_tuples_warm_updated(C.oid) AS n_tup_warm_upd,
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
@@ -528,7 +529,8 @@ CREATE VIEW pg_stat_xact_all_tables AS
             pg_stat_get_xact_tuples_inserted(C.oid) AS n_tup_ins,
             pg_stat_get_xact_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_xact_tuples_deleted(C.oid) AS n_tup_del,
-            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd
+            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_xact_tuples_warm_updated(C.oid) AS n_tup_warm_upd
     FROM pg_class C LEFT JOIN
          pg_index I ON C.oid = I.indrelid
          LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
diff --git a/src/backend/commands/constraint.c b/src/backend/commands/constraint.c
index e2544e5..d9c0fe7 100644
--- a/src/backend/commands/constraint.c
+++ b/src/backend/commands/constraint.c
@@ -40,6 +40,7 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	TriggerData *trigdata = castNode(TriggerData, fcinfo->context);
 	const char *funcname = "unique_key_recheck";
 	HeapTuple	new_row;
+	HeapTupleData heapTuple;
 	ItemPointerData tmptid;
 	Relation	indexRel;
 	IndexInfo  *indexInfo;
@@ -102,7 +103,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	 * removed.
 	 */
 	tmptid = new_row->t_self;
-	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL))
+	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL,
+				NULL, NULL, &heapTuple))
 	{
 		/*
 		 * All rows in the HOT chain are dead, so skip the check.
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index 01a63c8..f078a5d 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -2680,6 +2680,8 @@ CopyFrom(CopyState cstate)
 					if (resultRelInfo->ri_NumIndices > 0)
 						recheckIndexes = ExecInsertIndexTuples(slot,
 															&(tuple->t_self),
+															&(tuple->t_self),
+															NULL,
 															   estate,
 															   false,
 															   NULL,
@@ -2834,6 +2836,7 @@ CopyFromInsertBatch(CopyState cstate, EState *estate, CommandId mycid,
 			ExecStoreTuple(bufferedTuples[i], myslot, InvalidBuffer, false);
 			recheckIndexes =
 				ExecInsertIndexTuples(myslot, &(bufferedTuples[i]->t_self),
+									  &(bufferedTuples[i]->t_self), NULL,
 									  estate, false, NULL, NIL);
 			ExecARInsertTriggers(estate, resultRelInfo,
 								 bufferedTuples[i],
diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c
index 72bb06c..d8f033d 100644
--- a/src/backend/commands/indexcmds.c
+++ b/src/backend/commands/indexcmds.c
@@ -699,7 +699,14 @@ DefineIndex(Oid relationId,
 	 * visible to other transactions before we start to build the index. That
 	 * will prevent them from making incompatible HOT updates.  The new index
 	 * will be marked not indisready and not indisvalid, so that no one else
-	 * tries to either insert into it or use it for queries.
+	 * tries to either insert into it or use it for queries. In addition,
+	 * WARM updates will be disallowed if an update modifies any of the
+	 * columns used by this new index. This is necessary to ensure that we
+	 * don't create WARM tuples which lack a corresponding entry in this
+	 * index. Note that during the second phase we will index only those
+	 * heap tuples whose root line pointer is not already in the index;
+	 * hence it's important that all tuples in a given chain have the same
+	 * value for every indexed column (including this new index).
 	 *
 	 * We must commit our current transaction so that the index becomes
 	 * visible; then start another.  Note that all the data structures we just
@@ -747,7 +754,10 @@ DefineIndex(Oid relationId,
 	 * marked as "not-ready-for-inserts".  The index is consulted while
 	 * deciding HOT-safety though.  This arrangement ensures that no new HOT
 	 * chains can be created where the new tuple and the old tuple in the
-	 * chain have different index keys.
+	 * chain have different index keys. Also, the new index is consulted when
+	 * deciding whether a WARM update is possible, and a WARM update is not
+	 * done if a column used by this index is being updated. This ensures
+	 * that we don't create WARM tuples which are not indexed by this index.
 	 *
 	 * We now take a new snapshot, and build the index using all tuples that
 	 * are visible in this snapshot.  We can be sure that any HOT updates to
@@ -782,7 +792,8 @@ DefineIndex(Oid relationId,
 	/*
 	 * Update the pg_index row to mark the index as ready for inserts. Once we
 	 * commit this transaction, any new transactions that open the table must
-	 * insert new entries into the index for insertions and non-HOT updates.
+	 * insert new entries into the index for insertions and non-HOT updates,
+	 * or WARM updates for which this index needs a new entry.
 	 */
 	index_set_state_flags(indexRelationId, INDEX_CREATE_SET_READY);
 
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 005440e..1388be1 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -1032,6 +1032,19 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 							break;
 						}
 
+						/*
+						 * If this tuple was ever WARM updated or is itself a
+						 * WARM tuple, there could be multiple index entries
+						 * pointing to the root of this chain. We can't do
+						 * index-only scans for such tuples without performing
+						 * the index key check, so mark the page !all_visible.
+						 */
+						if (HeapTupleHeaderIsHeapWarmTuple(tuple.t_data))
+						{
+							all_visible = false;
+							break;
+						}
+
 						/* Track newest xmin on page. */
 						if (TransactionIdFollows(xmin, visibility_cutoff_xid))
 							visibility_cutoff_xid = xmin;
@@ -2158,6 +2171,18 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
 						break;
 					}
 
+					/*
+					 * If this tuple was ever WARM updated or is itself a WARM
+					 * tuple, there could be multiple index entries pointing to
+					 * the root of this chain. We can't do index-only scans for
+					 * such tuples without performing the index key check, so
+					 * mark the page !all_visible.
+					 */
+					if (HeapTupleHeaderIsHeapWarmTuple(tuple.t_data))
+					{
+						all_visible = false;
+					}
+
 					/* Track newest xmin on page. */
 					if (TransactionIdFollows(xmin, *visibility_cutoff_xid))
 						*visibility_cutoff_xid = xmin;
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 2142273..d62d2de 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -270,6 +270,8 @@ ExecCloseIndices(ResultRelInfo *resultRelInfo)
 List *
 ExecInsertIndexTuples(TupleTableSlot *slot,
 					  ItemPointer tupleid,
+					  ItemPointer root_tid,
+					  Bitmapset *modified_attrs,
 					  EState *estate,
 					  bool noDupErr,
 					  bool *specConflict,
@@ -324,6 +326,17 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 		if (!indexInfo->ii_ReadyForInserts)
 			continue;
 
+		/*
+		 * If modified_attrs is set, we insert index entries only for those
+		 * indexes whose columns have changed. All other indexes can use
+		 * their existing index pointers to look up the new tuple.
+		 */
+		if (modified_attrs)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
 		/* Check for partial index */
 		if (indexInfo->ii_Predicate != NIL)
 		{
@@ -389,7 +402,7 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 			index_insert(indexRelation, /* index relation */
 						 values,	/* array of index Datums */
 						 isnull,	/* null flags */
-						 tupleid,		/* tid of heap tuple */
+						 root_tid,		/* tid of heap or root tuple */
 						 heapRelation,	/* heap relation */
 						 checkUnique,	/* type of uniqueness check to do */
 						 indexInfo);	/* index AM may need this */
@@ -791,6 +804,9 @@ retry:
 		{
 			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
 				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
+			else
+				ItemPointerCopy(&tup->t_self, &ctid_wait);
+
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index ebf3f6b..1fa13a5 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -399,6 +399,8 @@ ExecSimpleRelationInsert(EState *estate, TupleTableSlot *slot)
 
 		if (resultRelInfo->ri_NumIndices > 0)
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &(tuple->t_self),
+												   NULL,
 												   estate, false, NULL,
 												   NIL);
 
@@ -445,6 +447,8 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 	if (!skip_tuple)
 	{
 		List	   *recheckIndexes = NIL;
+		bool		warm_update;
+		Bitmapset  *modified_attrs;
 
 		/* Check the constraints of the tuple */
 		if (rel->rd_att->constr)
@@ -455,13 +459,30 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 
 		/* OK, update the tuple and index entries for it */
 		simple_heap_update(rel, &searchslot->tts_tuple->t_self,
-						   slot->tts_tuple);
+						   slot->tts_tuple, &modified_attrs, &warm_update);
 
 		if (resultRelInfo->ri_NumIndices > 0 &&
-			!HeapTupleIsHeapOnly(slot->tts_tuple))
+			(!HeapTupleIsHeapOnly(slot->tts_tuple) || warm_update))
+		{
+			ItemPointerData root_tid;
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self,
+						&root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
+
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL,
 												   NIL);
+		}
 
 		/* AFTER ROW UPDATE Triggers */
 		ExecARUpdateTriggers(estate, resultRelInfo,
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index c871aa0..eb98b2d 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -39,6 +39,7 @@
 
 #include "access/relscan.h"
 #include "access/transam.h"
+#include "access/valid.h"
 #include "executor/execdebug.h"
 #include "executor/nodeBitmapHeapscan.h"
 #include "pgstat.h"
@@ -364,11 +365,27 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 			OffsetNumber offnum = tbmres->offsets[curslot];
 			ItemPointerData tid;
 			HeapTupleData heapTuple;
+			bool recheck = false;
 
 			ItemPointerSet(&tid, page, offnum);
 			if (heap_hot_search_buffer(&tid, scan->rs_rd, buffer, snapshot,
-									   &heapTuple, NULL, true))
-				scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+									   &heapTuple, NULL, true, &recheck))
+			{
+				bool valid = true;
+
+				if (scan->rs_key)
+					HeapKeyTest(&heapTuple, RelationGetDescr(scan->rs_rd),
+							scan->rs_nkeys, scan->rs_key, valid);
+				if (valid)
+					scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+
+				/*
+				 * If the heap tuple needs a recheck because of a WARM update,
+				 * it's a lossy case
+				 */
+				if (recheck)
+					tbmres->recheck = true;
+			}
 		}
 	}
 	else
diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c
index 0a9dfdb..38c7827 100644
--- a/src/backend/executor/nodeIndexscan.c
+++ b/src/backend/executor/nodeIndexscan.c
@@ -118,10 +118,10 @@ IndexNext(IndexScanState *node)
 					   false);	/* don't pfree */
 
 		/*
-		 * If the index was lossy, we have to recheck the index quals using
-		 * the fetched tuple.
+		 * If the index was lossy or the tuple was WARM, we have to recheck
+		 * the index quals using the fetched tuple.
 		 */
-		if (scandesc->xs_recheck)
+		if (scandesc->xs_recheck || scandesc->xs_tuple_recheck)
 		{
 			econtext->ecxt_scantuple = slot;
 			ResetExprContext(econtext);
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index 95e1589..a1f3440 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -512,6 +512,7 @@ ExecInsert(ModifyTableState *mtstate,
 
 			/* insert index entries for tuple */
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												 &(tuple->t_self), NULL,
 												 estate, true, &specConflict,
 												   arbiterIndexes);
 
@@ -558,6 +559,7 @@ ExecInsert(ModifyTableState *mtstate,
 			/* insert index entries for tuple */
 			if (resultRelInfo->ri_NumIndices > 0)
 				recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+													   &(tuple->t_self), NULL,
 													   estate, false, NULL,
 													   arbiterIndexes);
 		}
@@ -891,6 +893,9 @@ ExecUpdate(ItemPointer tupleid,
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
 	List	   *recheckIndexes = NIL;
+	Bitmapset  *modified_attrs = NULL;
+	ItemPointerData	root_tid;
+	bool		warm_update;
 
 	/*
 	 * abort the operation if not running transactions
@@ -1007,7 +1012,7 @@ lreplace:;
 							 estate->es_output_cid,
 							 estate->es_crosscheck_snapshot,
 							 true /* wait for commit */ ,
-							 &hufd, &lockmode);
+							 &hufd, &lockmode, &modified_attrs, &warm_update);
 		switch (result)
 		{
 			case HeapTupleSelfUpdated:
@@ -1094,10 +1099,28 @@ lreplace:;
 		 * the t_self field.
 		 *
 		 * If it's a HOT update, we mustn't insert new index entries.
+		 *
+		 * If it's a WARM update, then we must insert new entries with TID
+		 * pointing to the root of the WARM chain.
 		 */
-		if (resultRelInfo->ri_NumIndices > 0 && !HeapTupleIsHeapOnly(tuple))
+		if (resultRelInfo->ri_NumIndices > 0 &&
+			(!HeapTupleIsHeapOnly(tuple) || warm_update))
+		{
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self, &root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL, NIL);
+		}
 	}
 
 	if (canSetTag)
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index ada374c..308ae8c 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -1823,7 +1823,7 @@ pgstat_count_heap_insert(Relation rel, int n)
  * pgstat_count_heap_update - count a tuple update
  */
 void
-pgstat_count_heap_update(Relation rel, bool hot)
+pgstat_count_heap_update(Relation rel, bool hot, bool warm)
 {
 	PgStat_TableStatus *pgstat_info = rel->pgstat_info;
 
@@ -1841,6 +1841,8 @@ pgstat_count_heap_update(Relation rel, bool hot)
 		/* t_tuples_hot_updated is nontransactional, so just advance it */
 		if (hot)
 			pgstat_info->t_counts.t_tuples_hot_updated++;
+		else if (warm)
+			pgstat_info->t_counts.t_tuples_warm_updated++;
 	}
 }
 
@@ -4088,6 +4090,7 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->tuples_updated = 0;
 		result->tuples_deleted = 0;
 		result->tuples_hot_updated = 0;
+		result->tuples_warm_updated = 0;
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
 		result->changes_since_analyze = 0;
@@ -5197,6 +5200,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated = tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted = tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated = tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated = tabmsg->t_counts.t_tuples_warm_updated;
 			tabentry->n_live_tuples = tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples = tabmsg->t_counts.t_delta_dead_tuples;
 			tabentry->changes_since_analyze = tabmsg->t_counts.t_changed_tuples;
@@ -5224,6 +5228,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated += tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted += tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated += tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated += tabmsg->t_counts.t_tuples_warm_updated;
 			/* If table was truncated, first reset the live/dead counters */
 			if (tabmsg->t_counts.t_truncated)
 			{
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index a987d0d..b8677f3 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -145,6 +145,22 @@ pg_stat_get_tuples_hot_updated(PG_FUNCTION_ARGS)
 
 
 Datum
+pg_stat_get_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_StatTabEntry *tabentry;
+
+	if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+
+Datum
 pg_stat_get_live_tuples(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
@@ -1644,6 +1660,21 @@ pg_stat_get_xact_tuples_hot_updated(PG_FUNCTION_ARGS)
 }
 
 Datum
+pg_stat_get_xact_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_TableStatus *tabentry;
+
+	if ((tabentry = find_tabstat_entry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->t_counts.t_tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+Datum
 pg_stat_get_xact_blocks_fetched(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 9001e20..c85898c 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -2338,6 +2338,7 @@ RelationDestroyRelation(Relation relation, bool remember_tupdesc)
 	list_free_deep(relation->rd_fkeylist);
 	list_free(relation->rd_indexlist);
 	bms_free(relation->rd_indexattr);
+	bms_free(relation->rd_exprindexattr);
 	bms_free(relation->rd_keyattr);
 	bms_free(relation->rd_pkattr);
 	bms_free(relation->rd_idattr);
@@ -4352,6 +4353,13 @@ RelationGetIndexList(Relation relation)
 		return list_copy(relation->rd_indexlist);
 
 	/*
+	 * If the index list was invalidated, we must also invalidate the index
+	 * attribute list (which should automatically invalidate other derived
+	 * attributes such as the primary key and replica identity)
+	 */
+	relation->rd_indexattr = NULL;
+
+	/*
 	 * We build the list we intend to return (in the caller's context) while
 	 * doing the scan.  After successfully completing the scan, we copy that
 	 * list into the relcache entry.  This avoids cache-context memory leakage
@@ -4759,15 +4767,19 @@ Bitmapset *
 RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 {
 	Bitmapset  *indexattrs;		/* indexed columns */
+	Bitmapset  *exprindexattrs;	/* indexed columns in expression/predicate
+									 indexes */
 	Bitmapset  *uindexattrs;	/* columns in unique indexes */
 	Bitmapset  *pkindexattrs;	/* columns in the primary index */
 	Bitmapset  *idindexattrs;	/* columns in the replica identity */
+	Bitmapset  *indxnotreadyattrs;	/* columns in not ready indexes */
 	List	   *indexoidlist;
 	List	   *newindexoidlist;
 	Oid			relpkindex;
 	Oid			relreplindex;
 	ListCell   *l;
 	MemoryContext oldcxt;
+	bool		supportswarm = true;/* True if the table can be WARM updated */
 
 	/* Quick exit if we already computed the result. */
 	if (relation->rd_indexattr != NULL)
@@ -4782,6 +4794,10 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				return bms_copy(relation->rd_pkattr);
 			case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 				return bms_copy(relation->rd_idattr);
+			case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+				return bms_copy(relation->rd_exprindexattr);
+			case INDEX_ATTR_BITMAP_NOTREADY:
+				return bms_copy(relation->rd_indxnotreadyattr);
 			default:
 				elog(ERROR, "unknown attrKind %u", attrKind);
 		}
@@ -4822,9 +4838,11 @@ restart:
 	 * won't be returned at all by RelationGetIndexList.
 	 */
 	indexattrs = NULL;
+	exprindexattrs = NULL;
 	uindexattrs = NULL;
 	pkindexattrs = NULL;
 	idindexattrs = NULL;
+	indxnotreadyattrs = NULL;
 	foreach(l, indexoidlist)
 	{
 		Oid			indexOid = lfirst_oid(l);
@@ -4861,6 +4879,10 @@ restart:
 				indexattrs = bms_add_member(indexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
 
+				if (!indexInfo->ii_ReadyForInserts)
+					indxnotreadyattrs = bms_add_member(indxnotreadyattrs,
+							   attrnum - FirstLowInvalidHeapAttributeNumber);
+
 				if (isKey)
 					uindexattrs = bms_add_member(uindexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
@@ -4876,10 +4898,29 @@ restart:
 		}
 
 		/* Collect all attributes used in expressions, too */
-		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &exprindexattrs);
 
 		/* Collect all attributes in the index predicate, too */
-		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &exprindexattrs);
+
+		/*
+		 * indexattrs should include attributes referenced in index expressions
+		 * and predicates too
+		 */
+		indexattrs = bms_add_members(indexattrs, exprindexattrs);
+
+		if (!indexInfo->ii_ReadyForInserts)
+			indxnotreadyattrs = bms_add_members(indxnotreadyattrs,
+					exprindexattrs);
+
+		/*
+		 * Check whether the index AM defines an amrecheck method. If it does
+		 * not, the index does not support WARM updates, so completely disable
+		 * WARM updates on such tables.
+		 */
+		if (!indexDesc->rd_amroutine->amrecheck)
+			supportswarm = false;
+
 
 		index_close(indexDesc, AccessShareLock);
 	}
@@ -4912,15 +4953,22 @@ restart:
 		goto restart;
 	}
 
+	/* Remember if the table can do WARM updates */
+	relation->rd_supportswarm = supportswarm;
+
 	/* Don't leak the old values of these bitmaps, if any */
 	bms_free(relation->rd_indexattr);
 	relation->rd_indexattr = NULL;
+	bms_free(relation->rd_exprindexattr);
+	relation->rd_exprindexattr = NULL;
 	bms_free(relation->rd_keyattr);
 	relation->rd_keyattr = NULL;
 	bms_free(relation->rd_pkattr);
 	relation->rd_pkattr = NULL;
 	bms_free(relation->rd_idattr);
 	relation->rd_idattr = NULL;
+	bms_free(relation->rd_indxnotreadyattr);
+	relation->rd_indxnotreadyattr = NULL;
 
 	/*
 	 * Now save copies of the bitmaps in the relcache entry.  We intentionally
@@ -4933,7 +4981,9 @@ restart:
 	relation->rd_keyattr = bms_copy(uindexattrs);
 	relation->rd_pkattr = bms_copy(pkindexattrs);
 	relation->rd_idattr = bms_copy(idindexattrs);
-	relation->rd_indexattr = bms_copy(indexattrs);
+	relation->rd_exprindexattr = bms_copy(exprindexattrs);
+	relation->rd_indexattr = bms_copy(bms_union(indexattrs, exprindexattrs));
+	relation->rd_indxnotreadyattr = bms_copy(indxnotreadyattrs);
 	MemoryContextSwitchTo(oldcxt);
 
 	/* We return our original working copy for caller to play with */
@@ -4947,6 +4997,10 @@ restart:
 			return bms_copy(relation->rd_pkattr);
 		case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 			return idindexattrs;
+		case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+			return exprindexattrs;
+		case INDEX_ATTR_BITMAP_NOTREADY:
+			return indxnotreadyattrs;
 		default:
 			elog(ERROR, "unknown attrKind %u", attrKind);
 			return NULL;
@@ -5559,6 +5613,7 @@ load_relcache_init_file(bool shared)
 		rel->rd_keyattr = NULL;
 		rel->rd_pkattr = NULL;
 		rel->rd_idattr = NULL;
+		rel->rd_indxnotreadyattr = NULL;
 		rel->rd_pubactions = NULL;
 		rel->rd_createSubid = InvalidSubTransactionId;
 		rel->rd_newRelfilenodeSubid = InvalidSubTransactionId;
diff --git a/src/include/access/amapi.h b/src/include/access/amapi.h
index f919cf8..d7702e5 100644
--- a/src/include/access/amapi.h
+++ b/src/include/access/amapi.h
@@ -13,6 +13,7 @@
 #define AMAPI_H
 
 #include "access/genam.h"
+#include "access/itup.h"
 
 /*
  * We don't wish to include planner header files here, since most of an index
@@ -152,6 +153,10 @@ typedef void (*aminitparallelscan_function) (void *target);
 /* (re)start parallel index scan */
 typedef void (*amparallelrescan_function) (IndexScanDesc scan);
 
+/* recheck index tuple and heap tuple match */
+typedef bool (*amrecheck_function) (Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+
 /*
  * API struct for an index AM.  Note this must be stored in a single palloc'd
  * chunk of memory.
@@ -217,6 +222,9 @@ typedef struct IndexAmRoutine
 	amestimateparallelscan_function amestimateparallelscan;		/* can be NULL */
 	aminitparallelscan_function aminitparallelscan;		/* can be NULL */
 	amparallelrescan_function amparallelrescan; /* can be NULL */
+
+	/* interface function to support WARM */
+	amrecheck_function amrecheck;		/* can be NULL */
 } IndexAmRoutine;
 
 
diff --git a/src/include/access/hash.h b/src/include/access/hash.h
index 9c0b79f..e76a7aa 100644
--- a/src/include/access/hash.h
+++ b/src/include/access/hash.h
@@ -389,4 +389,8 @@ extern void hashbucketcleanup(Relation rel, Bucket cur_bucket,
 				  bool bucket_has_garbage,
 				  IndexBulkDeleteCallback callback, void *callback_state);
 
+/* hash.c */
+extern bool hashrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+
 #endif   /* HASH_H */
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 95aa976..9412c3a 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -137,9 +137,10 @@ extern bool heap_fetch(Relation relation, Snapshot snapshot,
 		   Relation stats_relation);
 extern bool heap_hot_search_buffer(ItemPointer tid, Relation relation,
 					   Buffer buffer, Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call);
+					   bool *all_dead, bool first_call, bool *recheck);
 extern bool heap_hot_search(ItemPointer tid, Relation relation,
-				Snapshot snapshot, bool *all_dead);
+				Snapshot snapshot, bool *all_dead,
+				bool *recheck, Buffer *buffer, HeapTuple heapTuple);
 
 extern void heap_get_latest_tid(Relation relation, Snapshot snapshot,
 					ItemPointer tid);
@@ -161,7 +162,8 @@ extern void heap_abort_speculative(Relation relation, HeapTuple tuple);
 extern HTSU_Result heap_update(Relation relation, ItemPointer otid,
 			HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode);
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update);
 extern HTSU_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
 				CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
 				bool follow_update,
@@ -176,7 +178,9 @@ extern bool heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple);
 extern Oid	simple_heap_insert(Relation relation, HeapTuple tup);
 extern void simple_heap_delete(Relation relation, ItemPointer tid);
 extern void simple_heap_update(Relation relation, ItemPointer otid,
-				   HeapTuple tup);
+				   HeapTuple tup,
+				   Bitmapset **modified_attrs,
+				   bool *warm_update);
 
 extern void heap_sync(Relation relation);
 
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index e6019d5..9b081bf 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -80,6 +80,7 @@
 #define XLH_UPDATE_CONTAINS_NEW_TUPLE			(1<<4)
 #define XLH_UPDATE_PREFIX_FROM_OLD				(1<<5)
 #define XLH_UPDATE_SUFFIX_FROM_OLD				(1<<6)
+#define XLH_UPDATE_WARM_UPDATE					(1<<7)
 
 /* convenience macro for checking whether any form of old tuple was logged */
 #define XLH_UPDATE_CONTAINS_OLD						\
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 4d614b7..b5891ca 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,7 +260,8 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x0800 are available */
+#define HEAP_WARM_TUPLE			0x0800	/* tuple is a member of a WARM
+										 * chain */
 #define HEAP_LATEST_TUPLE		0x1000	/*
 										 * This is the last tuple in chain and
 										 * ip_posid points to the root line
@@ -271,7 +272,7 @@ struct HeapTupleHeaderData
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF800	/* visibility-related bits */
 
 
 /*
@@ -510,6 +511,21 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+#define HeapTupleHeaderSetHeapWarmTuple(tup) \
+do { \
+	(tup)->t_infomask2 |= HEAP_WARM_TUPLE; \
+} while (0)
+
+#define HeapTupleHeaderClearHeapWarmTuple(tup) \
+do { \
+	(tup)->t_infomask2 &= ~HEAP_WARM_TUPLE; \
+} while (0)
+
+#define HeapTupleHeaderIsHeapWarmTuple(tup) \
+( \
+  ((tup)->t_infomask2 & HEAP_WARM_TUPLE) != 0 \
+)
+
 /*
  * Mark this as the last tuple in the HOT chain. Before PG v10 we used to store
  * the TID of the tuple itself in t_ctid field to mark the end of the chain.
@@ -785,6 +801,15 @@ struct MinimalTupleData
 #define HeapTupleClearHeapOnly(tuple) \
 		HeapTupleHeaderClearHeapOnly((tuple)->t_data)
 
+#define HeapTupleIsHeapWarmTuple(tuple) \
+		HeapTupleHeaderIsHeapWarmTuple((tuple)->t_data)
+
+#define HeapTupleSetHeapWarmTuple(tuple) \
+		HeapTupleHeaderSetHeapWarmTuple((tuple)->t_data)
+
+#define HeapTupleClearHeapWarmTuple(tuple) \
+		HeapTupleHeaderClearHeapWarmTuple((tuple)->t_data)
+
 #define HeapTupleGetOid(tuple) \
 		HeapTupleHeaderGetOid((tuple)->t_data)
 
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index f9304db..d4b35ca 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -537,6 +537,8 @@ extern bytea *btoptions(Datum reloptions, bool validate);
 extern bool btproperty(Oid index_oid, int attno,
 		   IndexAMProperty prop, const char *propname,
 		   bool *res, bool *isnull);
+extern bool btrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * prototypes for functions in nbtvalidate.c
diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h
index ce3ca8d..12d3b0c 100644
--- a/src/include/access/relscan.h
+++ b/src/include/access/relscan.h
@@ -112,7 +112,8 @@ typedef struct IndexScanDescData
 	HeapTupleData xs_ctup;		/* current heap tuple, if any */
 	Buffer		xs_cbuf;		/* current heap buffer in scan, if any */
 	/* NB: if xs_cbuf is not InvalidBuffer, we hold a pin on that buffer */
-	bool		xs_recheck;		/* T means scan keys must be rechecked */
+	bool		xs_recheck;		/* T means scan keys must be rechecked for each tuple */
+	bool		xs_tuple_recheck;	/* T means scan keys must be rechecked for current tuple */
 
 	/*
 	 * When fetching with an ordering operator, the values of the ORDER BY
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index a4cc86d..aec9c89 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -2740,6 +2740,8 @@ DATA(insert OID = 1933 (  pg_stat_get_tuples_deleted	PGNSP PGUID 12 1 0 0 0 f f
 DESCR("statistics: number of tuples deleted");
 DATA(insert OID = 1972 (  pg_stat_get_tuples_hot_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated");
+DATA(insert OID = 3353 (  pg_stat_get_tuples_warm_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated");
 DATA(insert OID = 2878 (  pg_stat_get_live_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_live_tuples _null_ _null_ _null_ ));
 DESCR("statistics: number of live tuples");
 DATA(insert OID = 2879 (  pg_stat_get_dead_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_dead_tuples _null_ _null_ _null_ ));
@@ -2892,6 +2894,8 @@ DATA(insert OID = 3042 (  pg_stat_get_xact_tuples_deleted		PGNSP PGUID 12 1 0 0
 DESCR("statistics: number of tuples deleted in current transaction");
 DATA(insert OID = 3043 (  pg_stat_get_xact_tuples_hot_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated in current transaction");
+DATA(insert OID = 3354 (  pg_stat_get_xact_tuples_warm_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated in current transaction");
 DATA(insert OID = 3044 (  pg_stat_get_xact_blocks_fetched		PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_fetched _null_ _null_ _null_ ));
 DESCR("statistics: number of blocks fetched in current transaction");
 DATA(insert OID = 3045 (  pg_stat_get_xact_blocks_hit			PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_hit _null_ _null_ _null_ ));
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 02dbe7b..c4495a3 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -382,6 +382,7 @@ extern void UnregisterExprContextCallback(ExprContext *econtext,
 extern void ExecOpenIndices(ResultRelInfo *resultRelInfo, bool speculative);
 extern void ExecCloseIndices(ResultRelInfo *resultRelInfo);
 extern List *ExecInsertIndexTuples(TupleTableSlot *slot, ItemPointer tupleid,
+					  ItemPointer root_tid, Bitmapset *modified_attrs,
 					  EState *estate, bool noDupErr, bool *specConflict,
 					  List *arbiterIndexes);
 extern bool ExecCheckIndexConstraints(TupleTableSlot *slot, EState *estate,
diff --git a/src/include/executor/nodeIndexscan.h b/src/include/executor/nodeIndexscan.h
index ea3f3a5..ebeec74 100644
--- a/src/include/executor/nodeIndexscan.h
+++ b/src/include/executor/nodeIndexscan.h
@@ -41,5 +41,4 @@ extern void ExecIndexEvalRuntimeKeys(ExprContext *econtext,
 extern bool ExecIndexEvalArrayKeys(ExprContext *econtext,
 					   IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 extern bool ExecIndexAdvanceArrayKeys(IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
-
 #endif   /* NODEINDEXSCAN_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 6332ea0..41e270b 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -64,6 +64,7 @@ typedef struct IndexInfo
 	NodeTag		type;
 	int			ii_NumIndexAttrs;
 	AttrNumber	ii_KeyAttrNumbers[INDEX_MAX_KEYS];
+	Bitmapset  *ii_indxattrs;	/* bitmap of all columns used in this index */
 	List	   *ii_Expressions; /* list of Expr */
 	List	   *ii_ExpressionsState;	/* list of ExprState */
 	List	   *ii_Predicate;	/* list of Expr */
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 8b710ec..2ee690b 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -105,6 +105,7 @@ typedef struct PgStat_TableCounts
 	PgStat_Counter t_tuples_updated;
 	PgStat_Counter t_tuples_deleted;
 	PgStat_Counter t_tuples_hot_updated;
+	PgStat_Counter t_tuples_warm_updated;
 	bool		t_truncated;
 
 	PgStat_Counter t_delta_live_tuples;
@@ -625,6 +626,7 @@ typedef struct PgStat_StatTabEntry
 	PgStat_Counter tuples_updated;
 	PgStat_Counter tuples_deleted;
 	PgStat_Counter tuples_hot_updated;
+	PgStat_Counter tuples_warm_updated;
 
 	PgStat_Counter n_live_tuples;
 	PgStat_Counter n_dead_tuples;
@@ -1178,7 +1180,7 @@ pgstat_report_wait_end(void)
 	(pgStatBlockWriteTime += (n))
 
 extern void pgstat_count_heap_insert(Relation rel, int n);
-extern void pgstat_count_heap_update(Relation rel, bool hot);
+extern void pgstat_count_heap_update(Relation rel, bool hot, bool warm);
 extern void pgstat_count_heap_delete(Relation rel);
 extern void pgstat_count_truncate(Relation rel);
 extern void pgstat_update_heap_dead_tuples(Relation rel, int delta);
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index a617a7c..fbac7c0 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -138,9 +138,14 @@ typedef struct RelationData
 
 	/* data managed by RelationGetIndexAttrBitmap: */
 	Bitmapset  *rd_indexattr;	/* identifies columns used in indexes */
+	Bitmapset  *rd_exprindexattr; /* identifies columns used in expression or
+									 predicate indexes */
+	Bitmapset  *rd_indxnotreadyattr;	/* columns used by indexes not yet
+										   ready */
 	Bitmapset  *rd_keyattr;		/* cols that can be ref'd by foreign keys */
 	Bitmapset  *rd_pkattr;		/* cols included in primary key */
 	Bitmapset  *rd_idattr;		/* included in replica identity index */
+	bool		rd_supportswarm;/* True if the table can be WARM updated */
 
 	PublicationActions  *rd_pubactions;	/* publication actions */
 
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index da36b67..d18bd09 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -50,7 +50,9 @@ typedef enum IndexAttrBitmapKind
 	INDEX_ATTR_BITMAP_ALL,
 	INDEX_ATTR_BITMAP_KEY,
 	INDEX_ATTR_BITMAP_PRIMARY_KEY,
-	INDEX_ATTR_BITMAP_IDENTITY_KEY
+	INDEX_ATTR_BITMAP_IDENTITY_KEY,
+	INDEX_ATTR_BITMAP_EXPR_PREDICATE,
+	INDEX_ATTR_BITMAP_NOTREADY
 } IndexAttrBitmapKind;
 
 extern Bitmapset *RelationGetIndexAttrBitmap(Relation relation,
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index c661f1d..561d9579 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1732,6 +1732,7 @@ pg_stat_all_tables| SELECT c.oid AS relid,
     pg_stat_get_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_tuples_deleted(c.oid) AS n_tup_del,
     pg_stat_get_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_tuples_warm_updated(c.oid) AS n_tup_warm_upd,
     pg_stat_get_live_tuples(c.oid) AS n_live_tup,
     pg_stat_get_dead_tuples(c.oid) AS n_dead_tup,
     pg_stat_get_mod_since_analyze(c.oid) AS n_mod_since_analyze,
@@ -1875,6 +1876,7 @@ pg_stat_sys_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1918,6 +1920,7 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1955,7 +1958,8 @@ pg_stat_xact_all_tables| SELECT c.oid AS relid,
     pg_stat_get_xact_tuples_inserted(c.oid) AS n_tup_ins,
     pg_stat_get_xact_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_xact_tuples_deleted(c.oid) AS n_tup_del,
-    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd
+    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_xact_tuples_warm_updated(c.oid) AS n_tup_warm_upd
    FROM ((pg_class c
      LEFT JOIN pg_index i ON ((c.oid = i.indrelid)))
      LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
@@ -1971,7 +1975,8 @@ pg_stat_xact_sys_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname = ANY (ARRAY['pg_catalog'::name, 'information_schema'::name])) OR (pg_stat_xact_all_tables.schemaname ~ '^pg_toast'::text));
 pg_stat_xact_user_functions| SELECT p.oid AS funcid,
@@ -1993,7 +1998,8 @@ pg_stat_xact_user_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname <> ALL (ARRAY['pg_catalog'::name, 'information_schema'::name])) AND (pg_stat_xact_all_tables.schemaname !~ '^pg_toast'::text));
 pg_statio_all_indexes| SELECT c.oid AS relid,
diff --git a/src/test/regress/expected/warm.out b/src/test/regress/expected/warm.out
new file mode 100644
index 0000000..6391891
--- /dev/null
+++ b/src/test/regress/expected/warm.out
@@ -0,0 +1,367 @@
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+-- This should be a HOT update as non-index key is updated, but the
+-- page won't have any free space, so probably a non-HOT update
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab1
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Check if index only scan works correctly
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab1
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx1 on updtst_tab1
+   Index Cond: (b = 140001)
+(2 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab1;
+------------------
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab2
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE a = 1;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab2
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE b = 701;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+VACUUM updtst_tab2;
+EXPLAIN (costs off) SELECT b FROM updtst_tab2 WHERE b = 701;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx2 on updtst_tab2
+   Index Cond: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab2 WHERE b = 701;
+  b  
+-----
+ 701
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab2;
+------------------
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 1;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 701;
+       QUERY PLAN        
+-------------------------
+ Seq Scan on updtst_tab3
+   Filter: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 701;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+  b   
+------
+ 1421
+(1 row)
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+SET enable_seqscan = false;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    98
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 2;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 702;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx3 on updtst_tab3
+   Index Cond: (b = 702)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 702;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+  b   
+------
+ 1422
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab3;
+------------------
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+explain select * from test_warm where lower(a) = 'test';
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on test_warm  (cost=4.18..12.65 rows=4 width=64)
+   Recheck Cond: (lower(a) = 'test'::text)
+   ->  Bitmap Index Scan on test_warmindx  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (lower(a) = 'test'::text)
+(4 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+select *, ctid from test_warm where a = 'test';
+ a | b | ctid 
+---+---+------
+(0 rows)
+
+select *, ctid from test_warm where a = 'TEST';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+                                   QUERY PLAN                                    
+---------------------------------------------------------------------------------
+ Index Scan using test_warmindx on test_warm  (cost=0.15..20.22 rows=4 width=64)
+   Index Cond: (lower(a) = 'test'::text)
+(2 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+DROP TABLE test_warm;
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index edeb2d6..2268705 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -42,6 +42,8 @@ test: create_type
 test: create_table
 test: create_function_2
 
+test: warm
+
 # ----------
 # Load huge amounts of data
 # We should split the data files into single files and then
diff --git a/src/test/regress/sql/warm.sql b/src/test/regress/sql/warm.sql
new file mode 100644
index 0000000..f31127c
--- /dev/null
+++ b/src/test/regress/sql/warm.sql
@@ -0,0 +1,172 @@
+
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+
+-- This should be a HOT update as non-index key is updated, but the
+-- page won't have any free space, so probably a non-HOT update
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Check if index only scan works correctly
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab1;
+
+------------------
+
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE a = 1;
+
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE b = 701;
+
+VACUUM updtst_tab2;
+EXPLAIN (costs off) SELECT b FROM updtst_tab2 WHERE b = 701;
+SELECT b FROM updtst_tab2 WHERE b = 701;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab2;
+------------------
+
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+SELECT * FROM updtst_tab3 WHERE a = 1;
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+SET enable_seqscan = false;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+SELECT * FROM updtst_tab3 WHERE a = 2;
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab3;
+------------------
+
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where a = 'test';
+select *, ctid from test_warm where a = 'TEST';
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+DROP TABLE test_warm;
+
+
Attachment: 0003_freeup_3bits_ip_posid_v15.patch (application/octet-stream)
commit a5838065cefe12c84c97eaaa1f1d9b571641273a
Author: Pavan Deolasee <pavan.deolasee@gmail.com>
Date:   Fri Feb 24 10:41:31 2017 +0530

    Free up 3 bits from ip_posid

diff --git a/src/backend/access/gin/ginget.c b/src/backend/access/gin/ginget.c
index b4e9fec..9c7e6ea 100644
--- a/src/backend/access/gin/ginget.c
+++ b/src/backend/access/gin/ginget.c
@@ -928,7 +928,7 @@ keyGetItem(GinState *ginstate, MemoryContext tempCtx, GinScanKey key,
 	 * Find the minimum item > advancePast among the active entry streams.
 	 *
 	 * Note: a lossy-page entry is encoded by a ItemPointer with max value for
-	 * offset (0xffff), so that it will sort after any exact entries for the
+	 * offset (0x1fff), so that it will sort after any exact entries for the
 	 * same page.  So we'll prefer to return exact pointers not lossy
 	 * pointers, which is good.
 	 */
diff --git a/src/backend/access/gin/ginpostinglist.c b/src/backend/access/gin/ginpostinglist.c
index 8d2d31a..b22b9f5 100644
--- a/src/backend/access/gin/ginpostinglist.c
+++ b/src/backend/access/gin/ginpostinglist.c
@@ -253,7 +253,7 @@ ginCompressPostingList(const ItemPointer ipd, int nipd, int maxsize,
 
 		Assert(ndecoded == totalpacked);
 		for (i = 0; i < ndecoded; i++)
-			Assert(memcmp(&tmp[i], &ipd[i], sizeof(ItemPointerData)) == 0);
+			Assert(ItemPointerEquals(&tmp[i], &ipd[i]));
 		pfree(tmp);
 	}
 #endif
diff --git a/src/include/access/ginblock.h b/src/include/access/ginblock.h
index 438912c..3f7a3f0 100644
--- a/src/include/access/ginblock.h
+++ b/src/include/access/ginblock.h
@@ -160,14 +160,14 @@ typedef struct GinMetaPageData
 	(GinItemPointerGetOffsetNumber(p) == (OffsetNumber)0 && \
 	 GinItemPointerGetBlockNumber(p) == (BlockNumber)0)
 #define ItemPointerSetMax(p)  \
-	ItemPointerSet((p), InvalidBlockNumber, (OffsetNumber)0xffff)
+	ItemPointerSet((p), InvalidBlockNumber, (OffsetNumber)OffsetNumberMask)
 #define ItemPointerIsMax(p)  \
-	(GinItemPointerGetOffsetNumber(p) == (OffsetNumber)0xffff && \
+	(GinItemPointerGetOffsetNumber(p) == (OffsetNumber)OffsetNumberMask && \
 	 GinItemPointerGetBlockNumber(p) == InvalidBlockNumber)
 #define ItemPointerSetLossyPage(p, b)  \
-	ItemPointerSet((p), (b), (OffsetNumber)0xffff)
+	ItemPointerSet((p), (b), (OffsetNumber)OffsetNumberMask)
 #define ItemPointerIsLossyPage(p)  \
-	(GinItemPointerGetOffsetNumber(p) == (OffsetNumber)0xffff && \
+	(GinItemPointerGetOffsetNumber(p) == (OffsetNumber)OffsetNumberMask && \
 	 GinItemPointerGetBlockNumber(p) != InvalidBlockNumber)
 
 /*
@@ -218,7 +218,7 @@ typedef signed char GinNullCategory;
  */
 #define GinGetNPosting(itup)	GinItemPointerGetOffsetNumber(&(itup)->t_tid)
 #define GinSetNPosting(itup,n)	ItemPointerSetOffsetNumber(&(itup)->t_tid,n)
-#define GIN_TREE_POSTING		((OffsetNumber)0xffff)
+#define GIN_TREE_POSTING		((OffsetNumber)OffsetNumberMask)
 #define GinIsPostingTree(itup)	(GinGetNPosting(itup) == GIN_TREE_POSTING)
 #define GinSetPostingTree(itup, blkno)	( GinSetNPosting((itup),GIN_TREE_POSTING), ItemPointerSetBlockNumber(&(itup)->t_tid, blkno) )
 #define GinGetPostingTree(itup) GinItemPointerGetBlockNumber(&(itup)->t_tid)
diff --git a/src/include/access/gist_private.h b/src/include/access/gist_private.h
index 5b33030..c532dc3 100644
--- a/src/include/access/gist_private.h
+++ b/src/include/access/gist_private.h
@@ -269,8 +269,8 @@ typedef struct
  * invalid tuples in an index, so throwing an error is as far as we go with
  * supporting that.
  */
-#define TUPLE_IS_VALID		0xffff
-#define TUPLE_IS_INVALID	0xfffe
+#define TUPLE_IS_VALID		OffsetNumberMask
+#define TUPLE_IS_INVALID	OffsetNumberPrev(OffsetNumberMask)
 
 #define  GistTupleIsInvalid(itup)	( ItemPointerGetOffsetNumber( &((itup)->t_tid) ) == TUPLE_IS_INVALID )
 #define  GistTupleSetValid(itup)	ItemPointerSetOffsetNumber( &((itup)->t_tid), TUPLE_IS_VALID )
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 24433c7..4d614b7 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -288,7 +288,7 @@ struct HeapTupleHeaderData
  * than MaxOffsetNumber, so that it can be distinguished from a valid
  * offset number in a regular item pointer.
  */
-#define SpecTokenOffsetNumber		0xfffe
+#define SpecTokenOffsetNumber		OffsetNumberPrev(OffsetNumberMask)
 
 /*
  * HeapTupleHeader accessor macros
diff --git a/src/include/storage/itemptr.h b/src/include/storage/itemptr.h
index 60d0070..3144bdd 100644
--- a/src/include/storage/itemptr.h
+++ b/src/include/storage/itemptr.h
@@ -57,7 +57,7 @@ typedef ItemPointerData *ItemPointer;
  *		True iff the disk item pointer is not NULL.
  */
 #define ItemPointerIsValid(pointer) \
-	((bool) (PointerIsValid(pointer) && ((pointer)->ip_posid != 0)))
+	((bool) (PointerIsValid(pointer) && (((pointer)->ip_posid & OffsetNumberMask) != 0)))
 
 /*
  * ItemPointerGetBlockNumber
@@ -82,13 +82,37 @@ typedef ItemPointerData *ItemPointer;
 #define ItemPointerGetOffsetNumber(pointer) \
 ( \
 	AssertMacro(ItemPointerIsValid(pointer)), \
-	(pointer)->ip_posid \
+	((pointer)->ip_posid & OffsetNumberMask) \
 )
 
 /* Same as ItemPointerGetOffsetNumber but without any assert-checks */
 #define ItemPointerGetOffsetNumberNoCheck(pointer) \
 ( \
-	(pointer)->ip_posid \
+	((pointer)->ip_posid & OffsetNumberMask) \
+)
+
+/*
+ * Get the flags stored in high order bits in the OffsetNumber.
+ */
+#define ItemPointerGetFlags(pointer) \
+( \
+	((pointer)->ip_posid & ~OffsetNumberMask) >> OffsetNumberBits \
+)
+
+/*
+ * Set the flag bits. We first left-shift since flags are defined starting 0x01
+ */
+#define ItemPointerSetFlags(pointer, flags) \
+( \
+	((pointer)->ip_posid |= ((flags) << OffsetNumberBits)) \
+)
+
+/*
+ * Clear all flags.
+ */
+#define ItemPointerClearFlags(pointer) \
+( \
+	((pointer)->ip_posid &= OffsetNumberMask) \
 )
 
 /*
@@ -99,7 +123,7 @@ typedef ItemPointerData *ItemPointer;
 ( \
 	AssertMacro(PointerIsValid(pointer)), \
 	BlockIdSet(&((pointer)->ip_blkid), blockNumber), \
-	(pointer)->ip_posid = offNum \
+	(pointer)->ip_posid = (offNum) \
 )
 
 /*
diff --git a/src/include/storage/off.h b/src/include/storage/off.h
index fe8638f..fe1834c 100644
--- a/src/include/storage/off.h
+++ b/src/include/storage/off.h
@@ -26,8 +26,15 @@ typedef uint16 OffsetNumber;
 #define InvalidOffsetNumber		((OffsetNumber) 0)
 #define FirstOffsetNumber		((OffsetNumber) 1)
 #define MaxOffsetNumber			((OffsetNumber) (BLCKSZ / sizeof(ItemIdData)))
-#define OffsetNumberMask		(0xffff)		/* valid uint16 bits */
 
+/*
+ * Currently we support maxinum 32kB blocks and each ItemId takes 6 bytes. That
+ * limits the number of line pointers to (32kB/6 = 5461). 13 bits are enought o
+ * represent all line pointers. Hence we can reuse the high order bits in
+ * OffsetNumber for other purposes.
+ */
+#define OffsetNumberMask		(0x1fff)		/* valid uint16 bits */
+#define OffsetNumberBits		13	/* number of valid bits in OffsetNumber */
 /* ----------------
  *		support macros
  * ----------------
Attachment: 0002_clear_ip_posid_blkid_refs_v15.patch (application/octet-stream)
commit 93e9160e3f85159c4f58d57ef6c3ba30c421db4b
Author: Pavan Deolasee <pavan.deolasee@gmail.com>
Date:   Thu Feb 23 10:12:17 2017 +0530

    Remove direct references to ip_posid and ip_blkid - same as
    remove_ip_posid_blkid_ref_v3 submitted to hackers

diff --git a/contrib/pageinspect/btreefuncs.c b/contrib/pageinspect/btreefuncs.c
index d50ec3a..2ec265e 100644
--- a/contrib/pageinspect/btreefuncs.c
+++ b/contrib/pageinspect/btreefuncs.c
@@ -363,8 +363,8 @@ bt_page_items(PG_FUNCTION_ARGS)
 		j = 0;
 		values[j++] = psprintf("%d", uargs->offset);
 		values[j++] = psprintf("(%u,%u)",
-							   BlockIdGetBlockNumber(&(itup->t_tid.ip_blkid)),
-							   itup->t_tid.ip_posid);
+							   ItemPointerGetBlockNumberNoCheck(&itup->t_tid),
+							   ItemPointerGetOffsetNumberNoCheck(&itup->t_tid));
 		values[j++] = psprintf("%d", (int) IndexTupleSize(itup));
 		values[j++] = psprintf("%c", IndexTupleHasNulls(itup) ? 't' : 'f');
 		values[j++] = psprintf("%c", IndexTupleHasVarwidths(itup) ? 't' : 'f');
diff --git a/contrib/pgstattuple/pgstattuple.c b/contrib/pgstattuple/pgstattuple.c
index 06a1992..e65040d 100644
--- a/contrib/pgstattuple/pgstattuple.c
+++ b/contrib/pgstattuple/pgstattuple.c
@@ -353,7 +353,7 @@ pgstat_heap(Relation rel, FunctionCallInfo fcinfo)
 		 * heap_getnext may find no tuples on a given page, so we cannot
 		 * simply examine the pages returned by the heap scan.
 		 */
-		tupblock = BlockIdGetBlockNumber(&tuple->t_self.ip_blkid);
+		tupblock = ItemPointerGetBlockNumber(&tuple->t_self);
 
 		while (block <= tupblock)
 		{
diff --git a/src/backend/access/gin/ginget.c b/src/backend/access/gin/ginget.c
index 60f005c..b4e9fec 100644
--- a/src/backend/access/gin/ginget.c
+++ b/src/backend/access/gin/ginget.c
@@ -626,8 +626,9 @@ entryLoadMoreItems(GinState *ginstate, GinScanEntry entry,
 		}
 		else
 		{
-			entry->btree.itemptr = advancePast;
-			entry->btree.itemptr.ip_posid++;
+			ItemPointerSet(&entry->btree.itemptr,
+					GinItemPointerGetBlockNumber(&advancePast),
+					OffsetNumberNext(GinItemPointerGetOffsetNumber(&advancePast)));
 		}
 		entry->btree.fullScan = false;
 		stack = ginFindLeafPage(&entry->btree, true, snapshot);
@@ -979,15 +980,17 @@ keyGetItem(GinState *ginstate, MemoryContext tempCtx, GinScanKey key,
 		if (GinItemPointerGetBlockNumber(&advancePast) <
 			GinItemPointerGetBlockNumber(&minItem))
 		{
-			advancePast.ip_blkid = minItem.ip_blkid;
-			advancePast.ip_posid = 0;
+			ItemPointerSet(&advancePast,
+					GinItemPointerGetBlockNumber(&minItem),
+					InvalidOffsetNumber);
 		}
 	}
 	else
 	{
-		Assert(minItem.ip_posid > 0);
-		advancePast = minItem;
-		advancePast.ip_posid--;
+		Assert(GinItemPointerGetOffsetNumber(&minItem) > 0);
+		ItemPointerSet(&advancePast,
+				GinItemPointerGetBlockNumber(&minItem),
+				OffsetNumberPrev(GinItemPointerGetOffsetNumber(&minItem)));
 	}
 
 	/*
@@ -1245,15 +1248,17 @@ scanGetItem(IndexScanDesc scan, ItemPointerData advancePast,
 				if (GinItemPointerGetBlockNumber(&advancePast) <
 					GinItemPointerGetBlockNumber(&key->curItem))
 				{
-					advancePast.ip_blkid = key->curItem.ip_blkid;
-					advancePast.ip_posid = 0;
+					ItemPointerSet(&advancePast,
+						GinItemPointerGetBlockNumber(&key->curItem),
+						InvalidOffsetNumber);
 				}
 			}
 			else
 			{
-				Assert(key->curItem.ip_posid > 0);
-				advancePast = key->curItem;
-				advancePast.ip_posid--;
+				Assert(GinItemPointerGetOffsetNumber(&key->curItem) > 0);
+				ItemPointerSet(&advancePast,
+						GinItemPointerGetBlockNumber(&key->curItem),
+						OffsetNumberPrev(GinItemPointerGetOffsetNumber(&key->curItem)));
 			}
 
 			/*
diff --git a/src/backend/access/gin/ginpostinglist.c b/src/backend/access/gin/ginpostinglist.c
index 598069d..8d2d31a 100644
--- a/src/backend/access/gin/ginpostinglist.c
+++ b/src/backend/access/gin/ginpostinglist.c
@@ -79,13 +79,11 @@ itemptr_to_uint64(const ItemPointer iptr)
 	uint64		val;
 
 	Assert(ItemPointerIsValid(iptr));
-	Assert(iptr->ip_posid < (1 << MaxHeapTuplesPerPageBits));
+	Assert(GinItemPointerGetOffsetNumber(iptr) < (1 << MaxHeapTuplesPerPageBits));
 
-	val = iptr->ip_blkid.bi_hi;
-	val <<= 16;
-	val |= iptr->ip_blkid.bi_lo;
+	val = GinItemPointerGetBlockNumber(iptr);
 	val <<= MaxHeapTuplesPerPageBits;
-	val |= iptr->ip_posid;
+	val |= GinItemPointerGetOffsetNumber(iptr);
 
 	return val;
 }
@@ -93,11 +91,9 @@ itemptr_to_uint64(const ItemPointer iptr)
 static inline void
 uint64_to_itemptr(uint64 val, ItemPointer iptr)
 {
-	iptr->ip_posid = val & ((1 << MaxHeapTuplesPerPageBits) - 1);
+	GinItemPointerSetOffsetNumber(iptr, val & ((1 << MaxHeapTuplesPerPageBits) - 1));
 	val = val >> MaxHeapTuplesPerPageBits;
-	iptr->ip_blkid.bi_lo = val & 0xFFFF;
-	val = val >> 16;
-	iptr->ip_blkid.bi_hi = val & 0xFFFF;
+	GinItemPointerSetBlockNumber(iptr, val);
 
 	Assert(ItemPointerIsValid(iptr));
 }
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 8aac670..b6f8f5a 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -3006,8 +3006,8 @@ DisplayMapping(HTAB *tuplecid_data)
 			 ent->key.relnode.dbNode,
 			 ent->key.relnode.spcNode,
 			 ent->key.relnode.relNode,
-			 BlockIdGetBlockNumber(&ent->key.tid.ip_blkid),
-			 ent->key.tid.ip_posid,
+			 ItemPointerGetBlockNumber(&ent->key.tid),
+			 ItemPointerGetOffsetNumber(&ent->key.tid),
 			 ent->cmin,
 			 ent->cmax
 			);
diff --git a/src/backend/storage/page/itemptr.c b/src/backend/storage/page/itemptr.c
index 703cbb9..28ac885 100644
--- a/src/backend/storage/page/itemptr.c
+++ b/src/backend/storage/page/itemptr.c
@@ -54,18 +54,21 @@ ItemPointerCompare(ItemPointer arg1, ItemPointer arg2)
 	/*
 	 * Don't use ItemPointerGetBlockNumber or ItemPointerGetOffsetNumber here,
 	 * because they assert ip_posid != 0 which might not be true for a
-	 * user-supplied TID.
+	 * user-supplied TID. Instead we use ItemPointerGetBlockNumberNoCheck and
+	 * ItemPointerGetOffsetNumberNoCheck which do not do any validation.
 	 */
-	BlockNumber b1 = BlockIdGetBlockNumber(&(arg1->ip_blkid));
-	BlockNumber b2 = BlockIdGetBlockNumber(&(arg2->ip_blkid));
+	BlockNumber b1 = ItemPointerGetBlockNumberNoCheck(arg1);
+	BlockNumber b2 = ItemPointerGetBlockNumberNoCheck(arg2);
 
 	if (b1 < b2)
 		return -1;
 	else if (b1 > b2)
 		return 1;
-	else if (arg1->ip_posid < arg2->ip_posid)
+	else if (ItemPointerGetOffsetNumberNoCheck(arg1) <
+			ItemPointerGetOffsetNumberNoCheck(arg2))
 		return -1;
-	else if (arg1->ip_posid > arg2->ip_posid)
+	else if (ItemPointerGetOffsetNumberNoCheck(arg1) >
+			ItemPointerGetOffsetNumberNoCheck(arg2))
 		return 1;
 	else
 		return 0;
diff --git a/src/backend/utils/adt/tid.c b/src/backend/utils/adt/tid.c
index a3b372f..735c006 100644
--- a/src/backend/utils/adt/tid.c
+++ b/src/backend/utils/adt/tid.c
@@ -109,8 +109,8 @@ tidout(PG_FUNCTION_ARGS)
 	OffsetNumber offsetNumber;
 	char		buf[32];
 
-	blockNumber = BlockIdGetBlockNumber(&(itemPtr->ip_blkid));
-	offsetNumber = itemPtr->ip_posid;
+	blockNumber = ItemPointerGetBlockNumberNoCheck(itemPtr);
+	offsetNumber = ItemPointerGetOffsetNumberNoCheck(itemPtr);
 
 	/* Perhaps someday we should output this as a record. */
 	snprintf(buf, sizeof(buf), "(%u,%u)", blockNumber, offsetNumber);
@@ -146,14 +146,12 @@ Datum
 tidsend(PG_FUNCTION_ARGS)
 {
 	ItemPointer itemPtr = PG_GETARG_ITEMPOINTER(0);
-	BlockId		blockId;
 	BlockNumber blockNumber;
 	OffsetNumber offsetNumber;
 	StringInfoData buf;
 
-	blockId = &(itemPtr->ip_blkid);
-	blockNumber = BlockIdGetBlockNumber(blockId);
-	offsetNumber = itemPtr->ip_posid;
+	blockNumber = ItemPointerGetBlockNumberNoCheck(itemPtr);
+	offsetNumber = ItemPointerGetOffsetNumberNoCheck(itemPtr);
 
 	pq_begintypsend(&buf);
 	pq_sendint(&buf, blockNumber, sizeof(blockNumber));
diff --git a/src/include/access/gin_private.h b/src/include/access/gin_private.h
index 34e7339..2fd4479 100644
--- a/src/include/access/gin_private.h
+++ b/src/include/access/gin_private.h
@@ -460,8 +460,8 @@ extern ItemPointer ginMergeItemPointers(ItemPointerData *a, uint32 na,
 static inline int
 ginCompareItemPointers(ItemPointer a, ItemPointer b)
 {
-	uint64		ia = (uint64) a->ip_blkid.bi_hi << 32 | (uint64) a->ip_blkid.bi_lo << 16 | a->ip_posid;
-	uint64		ib = (uint64) b->ip_blkid.bi_hi << 32 | (uint64) b->ip_blkid.bi_lo << 16 | b->ip_posid;
+	uint64		ia = (uint64) GinItemPointerGetBlockNumber(a) << 32 | GinItemPointerGetOffsetNumber(a);
+	uint64		ib = (uint64) GinItemPointerGetBlockNumber(b) << 32 | GinItemPointerGetOffsetNumber(b);
 
 	if (ia == ib)
 		return 0;
diff --git a/src/include/access/ginblock.h b/src/include/access/ginblock.h
index a3fb056..438912c 100644
--- a/src/include/access/ginblock.h
+++ b/src/include/access/ginblock.h
@@ -132,10 +132,17 @@ typedef struct GinMetaPageData
  * to avoid Asserts, since sometimes the ip_posid isn't "valid"
  */
 #define GinItemPointerGetBlockNumber(pointer) \
-	BlockIdGetBlockNumber(&(pointer)->ip_blkid)
+	(ItemPointerGetBlockNumberNoCheck(pointer))
 
 #define GinItemPointerGetOffsetNumber(pointer) \
-	((pointer)->ip_posid)
+	(ItemPointerGetOffsetNumberNoCheck(pointer))
+
+#define GinItemPointerSetBlockNumber(pointer, blkno) \
+	(ItemPointerSetBlockNumber((pointer), (blkno)))
+
+#define GinItemPointerSetOffsetNumber(pointer, offnum) \
+	(ItemPointerSetOffsetNumber((pointer), (offnum)))
+
 
 /*
  * Special-case item pointer values needed by the GIN search logic.
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 7552186..24433c7 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -428,7 +428,7 @@ do { \
 
 #define HeapTupleHeaderIsSpeculative(tup) \
 ( \
-	(tup)->t_ctid.ip_posid == SpecTokenOffsetNumber \
+	(ItemPointerGetOffsetNumberNoCheck(&(tup)->t_ctid) == SpecTokenOffsetNumber) \
 )
 
 #define HeapTupleHeaderGetSpeculativeToken(tup) \
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index 6289ffa..f9304db 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -151,9 +151,8 @@ typedef struct BTMetaPageData
  *	within a level). - vadim 04/09/97
  */
 #define BTTidSame(i1, i2)	\
-	( (i1).ip_blkid.bi_hi == (i2).ip_blkid.bi_hi && \
-	  (i1).ip_blkid.bi_lo == (i2).ip_blkid.bi_lo && \
-	  (i1).ip_posid == (i2).ip_posid )
+	((ItemPointerGetBlockNumber(&(i1)) == ItemPointerGetBlockNumber(&(i2))) && \
+	 (ItemPointerGetOffsetNumber(&(i1)) == ItemPointerGetOffsetNumber(&(i2))))
 #define BTEntrySame(i1, i2) \
 	BTTidSame((i1)->t_tid, (i2)->t_tid)
 
diff --git a/src/include/storage/itemptr.h b/src/include/storage/itemptr.h
index 576aaa8..60d0070 100644
--- a/src/include/storage/itemptr.h
+++ b/src/include/storage/itemptr.h
@@ -69,6 +69,12 @@ typedef ItemPointerData *ItemPointer;
 	BlockIdGetBlockNumber(&(pointer)->ip_blkid) \
 )
 
+/* Same as ItemPointerGetBlockNumber but without any assert-checks */
+#define ItemPointerGetBlockNumberNoCheck(pointer) \
+( \
+	BlockIdGetBlockNumber(&(pointer)->ip_blkid) \
+)
+
 /*
  * ItemPointerGetOffsetNumber
  *		Returns the offset number of a disk item pointer.
@@ -79,6 +85,12 @@ typedef ItemPointerData *ItemPointer;
 	(pointer)->ip_posid \
 )
 
+/* Same as ItemPointerGetOffsetNumber but without any assert-checks */
+#define ItemPointerGetOffsetNumberNoCheck(pointer) \
+( \
+	(pointer)->ip_posid \
+)
+
 /*
  * ItemPointerSet
  *		Sets a disk item pointer to the specified block and offset.
Attachment: 0001_track_root_lp_v15.patch (application/octet-stream)
commit 98408805eac6736d7a0e7850d34c75fc866dfaff
Author: Pavan Deolasee <pavan.deolasee@gmail.com>
Date:   Sun Jan 1 16:29:53 2017 +0530

    Track root line pointer in t_ctid->ip_posid field.
    
    This patch is same as v12 submitted to hackers

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 74fb09c..064909a 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -94,7 +94,8 @@ static HeapTuple heap_prepare_insert(Relation relation, HeapTuple tup,
 					TransactionId xid, CommandId cid, int options);
 static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
-				HeapTuple newtup, HeapTuple old_key_tup,
+				HeapTuple newtup, OffsetNumber root_offnum,
+				HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
 static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
 							 Bitmapset *interesting_cols,
@@ -2248,13 +2249,13 @@ heap_get_latest_tid(Relation relation,
 		 */
 		if ((tp.t_data->t_infomask & HEAP_XMAX_INVALID) ||
 			HeapTupleHeaderIsOnlyLocked(tp.t_data) ||
-			ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid))
+			HeapTupleHeaderIsHeapLatest(tp.t_data, &ctid))
 		{
 			UnlockReleaseBuffer(buffer);
 			break;
 		}
 
-		ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tp.t_data, &ctid);
 		priorXmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		UnlockReleaseBuffer(buffer);
 	}							/* end of loop */
@@ -2385,6 +2386,7 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	bool		all_visible_cleared = false;
+	OffsetNumber	root_offnum;
 
 	/*
 	 * Fill in tuple header fields, assign an OID, and toast the tuple if
@@ -2423,8 +2425,13 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
-	RelationPutHeapTuple(relation, buffer, heaptup,
-						 (options & HEAP_INSERT_SPECULATIVE) != 0);
+	root_offnum = RelationPutHeapTuple(relation, buffer, heaptup,
+						 (options & HEAP_INSERT_SPECULATIVE) != 0,
+						 InvalidOffsetNumber);
+
+	/* We must not overwrite the speculative insertion token. */
+	if ((options & HEAP_INSERT_SPECULATIVE) == 0)
+		HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 	if (PageIsAllVisible(BufferGetPage(buffer)))
 	{
@@ -2652,6 +2659,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 	Size		saveFreeSpace;
 	bool		need_tuple_data = RelationIsLogicallyLogged(relation);
 	bool		need_cids = RelationIsAccessibleInLogicalDecoding(relation);
+	OffsetNumber	root_offnum;
 
 	needwal = !(options & HEAP_INSERT_SKIP_WAL) && RelationNeedsWAL(relation);
 	saveFreeSpace = RelationGetTargetPageFreeSpace(relation,
@@ -2722,7 +2730,12 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		 * RelationGetBufferForTuple has ensured that the first tuple fits.
 		 * Put that on the page, and then as many other tuples as fit.
 		 */
-		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false);
+		root_offnum = RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false,
+				InvalidOffsetNumber);
+
+		/* Mark this tuple as the latest and also set root offset. */
+		HeapTupleHeaderSetHeapLatest(heaptuples[ndone]->t_data, root_offnum);
+
 		for (nthispage = 1; ndone + nthispage < ntuples; nthispage++)
 		{
 			HeapTuple	heaptup = heaptuples[ndone + nthispage];
@@ -2730,7 +2743,10 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			if (PageGetHeapFreeSpace(page) < MAXALIGN(heaptup->t_len) + saveFreeSpace)
 				break;
 
-			RelationPutHeapTuple(relation, buffer, heaptup, false);
+			root_offnum = RelationPutHeapTuple(relation, buffer, heaptup, false,
+					InvalidOffsetNumber);
+			/* Mark each tuple as the latest and also set root offset. */
+			HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 			/*
 			 * We don't use heap_multi_insert for catalog tuples yet, but
@@ -3002,6 +3018,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	HeapTupleData tp;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	TransactionId new_xmax;
@@ -3012,6 +3029,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	bool		all_visible_cleared = false;
 	HeapTuple	old_key_tuple = NULL;	/* replica identity of the tuple */
 	bool		old_key_copied = false;
+	OffsetNumber	root_offnum;
 
 	Assert(ItemPointerIsValid(tid));
 
@@ -3053,7 +3071,8 @@ heap_delete(Relation relation, ItemPointer tid,
 		LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 	}
 
-	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid));
+	offnum = ItemPointerGetOffsetNumber(tid);
+	lp = PageGetItemId(page, offnum);
 	Assert(ItemIdIsNormal(lp));
 
 	tp.t_tableOid = RelationGetRelid(relation);
@@ -3183,7 +3202,17 @@ l1:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tp.t_data->t_ctid;
+
+		/*
+		 * If we're at the end of the chain, then just return the same TID back
+		 * to the caller. The caller uses that as a hint to know if we have hit
+		 * the end of the chain.
+		 */
+		if (!HeapTupleHeaderIsHeapLatest(tp.t_data, &tp.t_self))
+			HeapTupleHeaderGetNextTid(tp.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&tp.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tp.t_data);
@@ -3232,6 +3261,22 @@ l1:
 							  xid, LockTupleExclusive, true,
 							  &new_xmax, &new_infomask, &new_infomask2);
 
+	/*
+	 * heap_get_root_tuple() may call palloc, which is disallowed once we
+	 * enter the critical section. So check if the root offset is cached in the
+	 * tuple and if not, fetch that information the hard way before entering the
+	 * critical section.
+	 *
+	 * Unless we are dealing with a pg-upgraded cluster, the root offset
+	 * information should already be cached, so there should not be too
+	 * much overhead of fetching this information. Also, once a tuple is
+	 * updated, the information will be copied to the new version. So it's not
+	 * as if we're going to pay this price forever.
+	 */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&tp.t_self));
+
 	START_CRIT_SECTION();
 
 	/*
@@ -3259,8 +3304,10 @@ l1:
 	HeapTupleHeaderClearHotUpdated(tp.t_data);
 	HeapTupleHeaderSetXmax(tp.t_data, new_xmax);
 	HeapTupleHeaderSetCmax(tp.t_data, cid, iscombo);
-	/* Make sure there is no forward chain link in t_ctid */
-	tp.t_data->t_ctid = tp.t_self;
+
+	/* Mark this tuple as the latest tuple in the update chain. */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		HeapTupleHeaderSetHeapLatest(tp.t_data, root_offnum);
 
 	MarkBufferDirty(buffer);
 
@@ -3461,6 +3508,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		old_key_copied = false;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
+	OffsetNumber	root_offnum;
 	MultiXactStatus mxact_status;
 	Buffer		buffer,
 				newbuf,
@@ -3523,6 +3572,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 
 
 	block = ItemPointerGetBlockNumber(otid);
+	offnum = ItemPointerGetOffsetNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
@@ -3807,7 +3857,12 @@ l2:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(oldtup.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = oldtup.t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(oldtup.t_data, &oldtup.t_self))
+			HeapTupleHeaderGetNextTid(oldtup.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&oldtup.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(oldtup.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(oldtup.t_data);
@@ -3947,6 +4002,7 @@ l2:
 		uint16		infomask_lock_old_tuple,
 					infomask2_lock_old_tuple;
 		bool		cleared_all_frozen = false;
+		OffsetNumber	root_offnum;
 
 		/*
 		 * To prevent concurrent sessions from updating the tuple, we have to
@@ -3974,6 +4030,14 @@ l2:
 
 		Assert(HEAP_XMAX_IS_LOCKED_ONLY(infomask_lock_old_tuple));
 
+		/*
+		 * Fetch root offset before entering the critical section. We do this
+		 * only if the information is not already available.
+		 */
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&oldtup.t_self));
+
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
@@ -3988,7 +4052,8 @@ l2:
 		HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 		/* temporarily make it look not-updated, but locked */
-		oldtup.t_data->t_ctid = oldtup.t_self;
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			HeapTupleHeaderSetHeapLatest(oldtup.t_data, root_offnum);
 
 		/*
 		 * Clear all-frozen bit on visibility map if needed. We could
@@ -4146,6 +4211,10 @@ l2:
 										   bms_overlap(modified_attrs, id_attrs),
 										   &old_key_copied);
 
+	if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
@@ -4171,6 +4240,17 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+		/*
+		 * For HOT (or WARM) updated tuples, we store the offset of the root
+		 * line pointer of this chain in the ip_posid field of the new tuple.
+		 * Usually this information will be available in the corresponding
+		 * field of the old tuple. But for aborted updates or pg_upgraded
+		 * databases, we might be seeing the old-style CTID chains and hence
+		 * the information must be obtained the hard way (we should have done
+		 * that before entering the critical section above).
+		 */
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
 	else
 	{
@@ -4178,10 +4258,22 @@ l2:
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		root_offnum = InvalidOffsetNumber;
 	}
 
-	RelationPutHeapTuple(relation, newbuf, heaptup, false);		/* insert new tuple */
-
+	/* insert new tuple */
+	root_offnum = RelationPutHeapTuple(relation, newbuf, heaptup, false,
+									   root_offnum);
+	/*
+	 * Also mark both copies as latest and set the root offset information. If
+	 * we're doing a HOT/WARM update, then we just copy the information from
+	 * old tuple, if available or computed above. For regular updates,
+	 * RelationPutHeapTuple must have returned us the actual offset number
+	 * where the new version was inserted and we store the same value since the
+	 * update resulted in a new HOT-chain.
+	 */
+	HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
+	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
 	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
@@ -4194,7 +4286,7 @@ l2:
 	HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 	/* record address of new tuple in t_ctid of old one */
-	oldtup.t_data->t_ctid = heaptup->t_self;
+	HeapTupleHeaderSetNextTid(oldtup.t_data, &(heaptup->t_self));
 
 	/* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */
 	if (PageIsAllVisible(BufferGetPage(buffer)))
@@ -4233,6 +4325,7 @@ l2:
 
 		recptr = log_heap_update(relation, buffer,
 								 newbuf, &oldtup, heaptup,
+								 root_offnum,
 								 old_key_tuple,
 								 all_visible_cleared,
 								 all_visible_cleared_new);
@@ -4513,7 +4606,8 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	ItemId		lp;
 	Page		page;
 	Buffer		vmbuffer = InvalidBuffer;
-	BlockNumber block;
+	BlockNumber	block;
+	OffsetNumber	offnum;
 	TransactionId xid,
 				xmax;
 	uint16		old_infomask,
@@ -4522,9 +4616,11 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	bool		first_time = true;
 	bool		have_tuple_lock = false;
 	bool		cleared_all_frozen = false;
+	OffsetNumber	root_offnum;
 
 	*buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	block = ItemPointerGetBlockNumber(tid);
+	offnum = ItemPointerGetOffsetNumber(tid);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -4544,6 +4640,7 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	tuple->t_data = (HeapTupleHeader) PageGetItem(page, lp);
 	tuple->t_len = ItemIdGetLength(lp);
 	tuple->t_tableOid = RelationGetRelid(relation);
+	tuple->t_self = *tid;
 
 l3:
 	result = HeapTupleSatisfiesUpdate(tuple, cid, *buffer);
@@ -4571,7 +4668,11 @@ l3:
 		xwait = HeapTupleHeaderGetRawXmax(tuple->t_data);
 		infomask = tuple->t_data->t_infomask;
 		infomask2 = tuple->t_data->t_infomask2;
-		ItemPointerCopy(&tuple->t_data->t_ctid, &t_ctid);
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &t_ctid);
+		else
+			ItemPointerCopy(tid, &t_ctid);
 
 		LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
 
@@ -5009,7 +5110,12 @@ failed:
 		Assert(result == HeapTupleSelfUpdated || result == HeapTupleUpdated ||
 			   result == HeapTupleWouldBlock);
 		Assert(!(tuple->t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tuple->t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(tid, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tuple->t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tuple->t_data);
@@ -5057,6 +5163,10 @@ failed:
 							  GetCurrentTransactionId(), mode, false,
 							  &xid, &new_infomask, &new_infomask2);
 
+	if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&tuple->t_self));
+
 	START_CRIT_SECTION();
 
 	/*
@@ -5085,7 +5195,10 @@ failed:
 	 * the tuple as well.
 	 */
 	if (HEAP_XMAX_IS_LOCKED_ONLY(new_infomask))
-		tuple->t_data->t_ctid = *tid;
+	{
+		if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+			HeapTupleHeaderSetHeapLatest(tuple->t_data, root_offnum);
+	}
 
 	/* Clear only the all-frozen bit on visibility map if needed */
 	if (PageIsAllVisible(page) &&
@@ -5599,6 +5712,7 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 	bool		cleared_all_frozen = false;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber block;
+	OffsetNumber offnum;
 
 	ItemPointerCopy(tid, &tupid);
 
@@ -5607,6 +5721,8 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 		new_infomask = 0;
 		new_xmax = InvalidTransactionId;
 		block = ItemPointerGetBlockNumber(&tupid);
+		offnum = ItemPointerGetOffsetNumber(&tupid);
+
 		ItemPointerCopy(&tupid, &(mytup.t_self));
 
 		if (!heap_fetch(rel, SnapshotAny, &mytup, &buf, false, NULL))
@@ -5836,7 +5952,7 @@ l4:
 
 		/* if we find the end of update chain, we're done. */
 		if (mytup.t_data->t_infomask & HEAP_XMAX_INVALID ||
-			ItemPointerEquals(&mytup.t_self, &mytup.t_data->t_ctid) ||
+			HeapTupleHeaderIsHeapLatest(mytup.t_data, &mytup.t_self) ||
 			HeapTupleHeaderIsOnlyLocked(mytup.t_data))
 		{
 			result = HeapTupleMayBeUpdated;
@@ -5845,7 +5961,7 @@ l4:
 
 		/* tail recursion */
 		priorXmax = HeapTupleHeaderGetUpdateXid(mytup.t_data);
-		ItemPointerCopy(&(mytup.t_data->t_ctid), &tupid);
+		HeapTupleHeaderGetNextTid(mytup.t_data, &tupid);
 		UnlockReleaseBuffer(buf);
 		if (vmbuffer != InvalidBuffer)
 			ReleaseBuffer(vmbuffer);
@@ -5962,7 +6078,7 @@ heap_finish_speculative(Relation relation, HeapTuple tuple)
 	 * Replace the speculative insertion token with a real t_ctid, pointing to
 	 * itself like it does on regular tuples.
 	 */
-	htup->t_ctid = tuple->t_self;
+	HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 	/* XLOG stuff */
 	if (RelationNeedsWAL(relation))
@@ -6088,8 +6204,7 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	HeapTupleHeaderSetXmin(tp.t_data, InvalidTransactionId);
 
 	/* Clear the speculative insertion token too */
-	tp.t_data->t_ctid = tp.t_self;
-
+	HeapTupleHeaderSetHeapLatest(tp.t_data, ItemPointerGetOffsetNumber(tid));
 	MarkBufferDirty(buffer);
 
 	/*
@@ -7437,6 +7552,7 @@ log_heap_visible(RelFileNode rnode, Buffer heap_buffer, Buffer vm_buffer,
 static XLogRecPtr
 log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup, HeapTuple newtup,
+				OffsetNumber root_offnum,
 				HeapTuple old_key_tuple,
 				bool all_visible_cleared, bool new_all_visible_cleared)
 {
@@ -7557,6 +7673,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	xlrec.new_offnum = ItemPointerGetOffsetNumber(&newtup->t_self);
 	xlrec.new_xmax = HeapTupleHeaderGetRawXmax(newtup->t_data);
 
+	Assert(OffsetNumberIsValid(root_offnum));
+	xlrec.root_offnum = root_offnum;
+
 	bufflags = REGBUF_STANDARD;
 	if (init)
 		bufflags |= REGBUF_WILL_INIT;
@@ -8211,7 +8330,13 @@ heap_xlog_delete(XLogReaderState *record)
 			PageClearAllVisible(page);
 
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = target_tid;
+		if (!HeapTupleHeaderHasRootOffset(htup))
+		{
+			OffsetNumber	root_offnum;
+			root_offnum = heap_get_root_tuple(page, xlrec->offnum); 
+			HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+		}
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8301,7 +8426,8 @@ heap_xlog_insert(XLogReaderState *record)
 		htup->t_hoff = xlhdr.t_hoff;
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
-		htup->t_ctid = target_tid;
+
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->offnum);
 
 		if (PageAddItem(page, (Item) htup, newlen, xlrec->offnum,
 						true, true) == InvalidOffsetNumber)
@@ -8436,8 +8562,8 @@ heap_xlog_multi_insert(XLogReaderState *record)
 			htup->t_hoff = xlhdr->t_hoff;
 			HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 			HeapTupleHeaderSetCmin(htup, FirstCommandId);
-			ItemPointerSetBlockNumber(&htup->t_ctid, blkno);
-			ItemPointerSetOffsetNumber(&htup->t_ctid, offnum);
+
+			HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 			offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 			if (offnum == InvalidOffsetNumber)
@@ -8573,7 +8699,7 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
 		/* Set forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetNextTid(htup, &newtid);
 
 		/* Mark the page as a candidate for pruning */
 		PageSetPrunable(page, XLogRecGetXid(record));
@@ -8706,13 +8832,17 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
-		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = newtid;
 
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
 
+		/*
+		 * Make sure the tuple is marked as the latest and root offset
+		 * information is restored.
+		 */
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->root_offnum);
+
 		if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED)
 			PageClearAllVisible(page);
 
@@ -8775,6 +8905,9 @@ heap_xlog_confirm(XLogReaderState *record)
 		 */
 		ItemPointerSet(&htup->t_ctid, BufferGetBlockNumber(buffer), offnum);
 
+		/* For newly inserted tuple, set root offset to itself. */
+		HeapTupleHeaderSetHeapLatest(htup, offnum);
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8838,11 +8971,17 @@ heap_xlog_lock(XLogReaderState *record)
 		 */
 		if (HEAP_XMAX_IS_LOCKED_ONLY(htup->t_infomask))
 		{
+			ItemPointerData	target_tid;
+
+			ItemPointerSet(&target_tid, BufferGetBlockNumber(buffer), offnum);
 			HeapTupleHeaderClearHotUpdated(htup);
 			/* Make sure there is no forward chain link in t_ctid */
-			ItemPointerSet(&htup->t_ctid,
-						   BufferGetBlockNumber(buffer),
-						   offnum);
+			if (!HeapTupleHeaderHasRootOffset(htup))
+			{
+				OffsetNumber	root_offnum;
+				root_offnum = heap_get_root_tuple(page, offnum);
+				HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+			}
 		}
 		HeapTupleHeaderSetXmax(htup, xlrec->locking_xid);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index 6529fe3..8052519 100644
--- a/src/backend/access/heap/hio.c
+++ b/src/backend/access/heap/hio.c
@@ -31,12 +31,20 @@
  * !!! EREPORT(ERROR) IS DISALLOWED HERE !!!  Must PANIC on failure!!!
  *
  * Note - caller must hold BUFFER_LOCK_EXCLUSIVE on the buffer.
+ *
+ * The caller can optionally tell us to set the root offset to the given value.
+ * Otherwise, the root offset is set to the offset of the new location once
+ * it's known. The former is used while updating an existing tuple, where the
+ * caller tells us the root line pointer of the chain.  The latter is used
+ * when inserting a new row, in which case the root line pointer is set to the
+ * offset where this tuple is inserted.
  */
-void
+OffsetNumber
 RelationPutHeapTuple(Relation relation,
 					 Buffer buffer,
 					 HeapTuple tuple,
-					 bool token)
+					 bool token,
+					 OffsetNumber root_offnum)
 {
 	Page		pageHeader;
 	OffsetNumber offnum;
@@ -60,17 +68,24 @@ RelationPutHeapTuple(Relation relation,
 	ItemPointerSet(&(tuple->t_self), BufferGetBlockNumber(buffer), offnum);
 
 	/*
-	 * Insert the correct position into CTID of the stored tuple, too (unless
-	 * this is a speculative insertion, in which case the token is held in
-	 * CTID field instead)
+	 * Set block number and the root offset into CTID of the stored tuple, too
+	 * (unless this is a speculative insertion, in which case the token is held
+	 * in CTID field instead).
 	 */
 	if (!token)
 	{
 		ItemId		itemId = PageGetItemId(pageHeader, offnum);
 		Item		item = PageGetItem(pageHeader, itemId);
 
+		/* Copy t_ctid to set the correct block number. */
 		((HeapTupleHeader) item)->t_ctid = tuple->t_self;
+
+		if (!OffsetNumberIsValid(root_offnum))
+			root_offnum = offnum;
+		HeapTupleHeaderSetHeapLatest((HeapTupleHeader) item, root_offnum);
 	}
+
+	return root_offnum;
 }
 
 /*
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index d69a266..f54337c 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -55,6 +55,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
 
+static void heap_get_root_tuples_internal(Page page,
+				OffsetNumber target_offnum, OffsetNumber *root_offsets);
 
 /*
  * Optionally prune and repair fragmentation in the specified page.
@@ -553,6 +555,17 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
 		if (!HeapTupleHeaderIsHotUpdated(htup))
 			break;
 
+
+		/*
+		 * If the tuple was HOT-updated and the update was later
+		 * aborted, someone could mark this tuple as the last tuple
+		 * in the chain, without clearing the HOT-updated flag. So we must
+		 * check if this is the last tuple in the chain and stop following the
+		 * CTID, else we risk getting into an infinite recursion (though
+		 * prstate->marked[] currently protects against that).
+		 */
+		if (HeapTupleHeaderHasRootOffset(htup))
+			break;
 		/*
 		 * Advance to next chain member.
 		 */
@@ -726,27 +739,47 @@ heap_page_prune_execute(Buffer buffer,
 
 
 /*
- * For all items in this page, find their respective root line pointers.
- * If item k is part of a HOT-chain with root at item j, then we set
- * root_offsets[k - 1] = j.
+ * Either for all items in this page or for the given item, find their
+ * respective root line pointers.
+ *
+ * When target_offnum is a valid offset number, the caller is interested in
+ * just one item. In that case, the root line pointer is returned in
+ * root_offsets.
  *
- * The passed-in root_offsets array must have MaxHeapTuplesPerPage entries.
- * We zero out all unused entries.
+ * When target_offnum is InvalidOffsetNumber, the caller wants to know
+ * the root line pointers of all the items in this page. The root_offsets array
+ * must have MaxHeapTuplesPerPage entries in that case. If item k is part of a
+ * HOT-chain with root at item j, then we set root_offsets[k - 1] = j. We zero
+ * out all unused entries.
  *
  * The function must be called with at least share lock on the buffer, to
  * prevent concurrent prune operations.
  *
+ * This is not a cheap function since it must scan through all line pointers
+ * and tuples on the page in order to find the root line pointers. To minimize
+ * the cost, we break out early if target_offnum is specified and its root
+ * line pointer has been found.
+ *
  * Note: The information collected here is valid only as long as the caller
  * holds a pin on the buffer. Once pin is released, a tuple might be pruned
  * and reused by a completely unrelated tuple.
+ *
+ * Note: This function must not be called inside a critical section because it
+ * internally calls HeapTupleHeaderGetUpdateXid which somewhere down the stack
+ * may try to allocate heap memory. Memory allocation is disallowed in a
+ * critical section.
  */
-void
-heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+static void
+heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offsets)
 {
 	OffsetNumber offnum,
 				maxoff;
 
-	MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
+	if (OffsetNumberIsValid(target_offnum))
+		*root_offsets = InvalidOffsetNumber;
+	else
+		MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
 
 	maxoff = PageGetMaxOffsetNumber(page);
 	for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum = OffsetNumberNext(offnum))
@@ -774,9 +807,28 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 
 			/*
 			 * This is either a plain tuple or the root of a HOT-chain.
-			 * Remember it in the mapping.
+			 *
+			 * If the target_offnum is specified and if we found its mapping,
+			 * return.
 			 */
-			root_offsets[offnum - 1] = offnum;
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (target_offnum == offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember mapping for any other item. The
+				 * root_offsets array may not even have room for them, so be
+				 * careful not to write past the end of the array.
+				 */
+			}
+			else
+			{
+				/* Remember it in the mapping. */
+				root_offsets[offnum - 1] = offnum;
+			}
 
 			/* If it's not the start of a HOT-chain, we're done with it */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
@@ -817,15 +869,65 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 				!TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(htup)))
 				break;
 
-			/* Remember the root line pointer for this item */
-			root_offsets[nextoffnum - 1] = offnum;
+			/*
+			 * If target_offnum is specified and we found its mapping, return.
+			 */
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (nextoffnum == target_offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember mapping for any other item. The
+				 * root_offsets array may not even have room for them, so be
+				 * careful not to write past the end of the array.
+				 */
+			}
+			else
+			{
+				/* Remember the root line pointer for this item. */
+				root_offsets[nextoffnum - 1] = offnum;
+			}
 
 			/* Advance to next chain member, if any */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				break;
 
+			/*
+			 * If the tuple was HOT-updated and the update was later aborted,
+			 * someone could mark this tuple as the last tuple in the chain
+			 * and store the root offset in CTID, without clearing the HOT-updated
+			 * flag. So we must check if CTID is actually root offset and break
+			 * to avoid infinite recursion.
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
 		}
 	}
 }
+
+/*
+ * Get root line pointer for the given tuple.
+ */
+OffsetNumber
+heap_get_root_tuple(Page page, OffsetNumber target_offnum)
+{
+	OffsetNumber offnum = InvalidOffsetNumber;
+	heap_get_root_tuples_internal(page, target_offnum, &offnum);
+	return offnum;
+}
+
+/*
+ * Get root line pointers for all tuples in the page
+ */
+void
+heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+{
+	heap_get_root_tuples_internal(page, InvalidOffsetNumber,
+								  root_offsets);
+}
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index c7b283c..6ced1e7 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -419,14 +419,18 @@ rewrite_heap_tuple(RewriteState state,
 	 */
 	if (!((old_tuple->t_data->t_infomask & HEAP_XMAX_INVALID) ||
 		  HeapTupleHeaderIsOnlyLocked(old_tuple->t_data)) &&
-		!(ItemPointerEquals(&(old_tuple->t_self),
-							&(old_tuple->t_data->t_ctid))))
+		!(HeapTupleHeaderIsHeapLatest(old_tuple->t_data, &old_tuple->t_self)))
 	{
 		OldToNewMapping mapping;
 
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
-		hashkey.tid = old_tuple->t_data->t_ctid;
+
+		/* 
+		 * We've already checked that this is not the last tuple in the chain,
+		 * so fetch the next TID in the chain.
+		 */
+		HeapTupleHeaderGetNextTid(old_tuple->t_data, &hashkey.tid);
 
 		mapping = (OldToNewMapping)
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -439,7 +443,7 @@ rewrite_heap_tuple(RewriteState state,
 			 * set the ctid of this tuple to point to the new location, and
 			 * insert it right away.
 			 */
-			new_tuple->t_data->t_ctid = mapping->new_tid;
+			HeapTupleHeaderSetNextTid(new_tuple->t_data, &mapping->new_tid);
 
 			/* We don't need the mapping entry anymore */
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -525,7 +529,7 @@ rewrite_heap_tuple(RewriteState state,
 				new_tuple = unresolved->tuple;
 				free_new = true;
 				old_tid = unresolved->old_tid;
-				new_tuple->t_data->t_ctid = new_tid;
+				HeapTupleHeaderSetNextTid(new_tuple->t_data, &new_tid);
 
 				/*
 				 * We don't need the hash entry anymore, but don't free its
@@ -731,7 +735,12 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		onpage_tup->t_ctid = tup->t_self;
+		/* 
+		 * Set t_ctid just to ensure that block number is copied correctly, but
+		 * then immediately mark the tuple as the latest.
+		 */
+		HeapTupleHeaderSetNextTid(onpage_tup, &tup->t_self);
+		HeapTupleHeaderSetHeapLatest(onpage_tup, newoff);
 	}
 
 	/* If heaptup is a private copy, release it. */
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 5242dee..2142273 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -789,7 +789,8 @@ retry:
 			  DirtySnapshot.speculativeToken &&
 			  TransactionIdPrecedes(GetCurrentTransactionId(), xwait))))
 		{
-			ctid_wait = tup->t_data->t_ctid;
+			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
+				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 3f76a40..1705799 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -2587,7 +2587,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		 * As above, it should be safe to examine xmax and t_ctid without the
 		 * buffer content lock, because they can't be changing.
 		 */
-		if (ItemPointerEquals(&tuple.t_self, &tuple.t_data->t_ctid))
+		if (HeapTupleHeaderIsHeapLatest(tuple.t_data, &tuple.t_self))
 		{
 			/* deleted, so forget about it */
 			ReleaseBuffer(buffer);
@@ -2595,7 +2595,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		}
 
 		/* updated, so look at the updated row */
-		tuple.t_self = tuple.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tuple.t_data, &tuple.t_self);
 		/* updated row should have xmin matching this xmax */
 		priorXmax = HeapTupleHeaderGetUpdateXid(tuple.t_data);
 		ReleaseBuffer(buffer);
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index a864f78..95aa976 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -189,6 +189,7 @@ extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
+extern OffsetNumber heap_get_root_tuple(Page page, OffsetNumber target_offnum);
 extern void heap_get_root_tuples(Page page, OffsetNumber *root_offsets);
 
 /* in heap/syncscan.c */
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index b285f17..e6019d5 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -193,6 +193,8 @@ typedef struct xl_heap_update
 	uint8		flags;
 	TransactionId new_xmax;		/* xmax of the new tuple */
 	OffsetNumber new_offnum;	/* new tuple's offset */
+	OffsetNumber root_offnum;	/* offset of the root line pointer in case of
+								   HOT or WARM update */
 
 	/*
 	 * If XLOG_HEAP_CONTAINS_OLD_TUPLE or XLOG_HEAP_CONTAINS_OLD_KEY flags are
@@ -200,7 +202,7 @@ typedef struct xl_heap_update
 	 */
 } xl_heap_update;
 
-#define SizeOfHeapUpdate	(offsetof(xl_heap_update, new_offnum) + sizeof(OffsetNumber))
+#define SizeOfHeapUpdate	(offsetof(xl_heap_update, root_offnum) + sizeof(OffsetNumber))
 
 /*
  * This is what we need to know about vacuum page cleanup/redirect
diff --git a/src/include/access/hio.h b/src/include/access/hio.h
index 2824f23..921cb37 100644
--- a/src/include/access/hio.h
+++ b/src/include/access/hio.h
@@ -35,8 +35,8 @@ typedef struct BulkInsertStateData
 }	BulkInsertStateData;
 
 
-extern void RelationPutHeapTuple(Relation relation, Buffer buffer,
-					 HeapTuple tuple, bool token);
+extern OffsetNumber RelationPutHeapTuple(Relation relation, Buffer buffer,
+					 HeapTuple tuple, bool token, OffsetNumber root_offnum);
 extern Buffer RelationGetBufferForTuple(Relation relation, Size len,
 						  Buffer otherBuffer, int options,
 						  BulkInsertState bistate,
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index a6c7e31..7552186 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,13 +260,19 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x1800 are available */
+/* bit 0x0800 is available */
+#define HEAP_LATEST_TUPLE		0x1000	/*
+										 * This is the last tuple in chain and
+										 * ip_posid points to the root line
+										 * pointer
+										 */
 #define HEAP_KEYS_UPDATED		0x2000	/* tuple was updated and key cols
 										 * modified, or tuple deleted */
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xE000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+
 
 /*
  * HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins.  It is
@@ -504,6 +510,43 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+/*
+ * Mark this as the last tuple in the HOT chain. Before PG v10 we used to store
+ * the TID of the tuple itself in the t_ctid field to mark the end of the
+ * chain. But starting with PG v10, we use a special flag, HEAP_LATEST_TUPLE,
+ * to identify the last tuple, and store the root line pointer of the HOT
+ * chain in the t_ctid field instead.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderSetHeapLatest(tup, offnum) \
+do { \
+	AssertMacro(OffsetNumberIsValid(offnum)); \
+	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE; \
+	ItemPointerSetOffsetNumber(&(tup)->t_ctid, (offnum)); \
+} while (0)
+
+#define HeapTupleHeaderClearHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 &= ~HEAP_LATEST_TUPLE \
+)
+
+/*
+ * Starting from PostgreSQL 10, the latest tuple in an update chain has
+ * HEAP_LATEST_TUPLE set; but tuples upgraded from earlier versions do not.
+ * For those, we determine whether a tuple is latest by testing that its t_ctid
+ * points to itself.
+ *
+ * Note: beware of multiple evaluations of "tup" and "tid" arguments.
+ */
+#define HeapTupleHeaderIsHeapLatest(tup, tid) \
+( \
+  (((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0) || \
+  ((ItemPointerGetBlockNumber(&(tup)->t_ctid) == ItemPointerGetBlockNumber(tid)) && \
+   (ItemPointerGetOffsetNumber(&(tup)->t_ctid) == ItemPointerGetOffsetNumber(tid))) \
+)
+
+
 #define HeapTupleHeaderSetHeapOnly(tup) \
 ( \
   (tup)->t_infomask2 |= HEAP_ONLY_TUPLE \
@@ -542,6 +585,56 @@ do { \
 
 
 /*
+ * Set the t_ctid chain and also clear the HEAP_LATEST_TUPLE flag since we
+ * now have a new tuple in the chain and this is no longer the last tuple of
+ * the chain.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderSetNextTid(tup, tid) \
+do { \
+		ItemPointerCopy((tid), &((tup)->t_ctid)); \
+		HeapTupleHeaderClearHeapLatest((tup)); \
+} while (0)
+
+/*
+ * Get TID of next tuple in the update chain. Caller must have checked that
+ * we are not already at the end of the chain because in that case t_ctid may
+ * actually store the root line pointer of the HOT chain.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderGetNextTid(tup, next_ctid) \
+do { \
+	AssertMacro(!((tup)->t_infomask2 & HEAP_LATEST_TUPLE)); \
+	ItemPointerCopy(&(tup)->t_ctid, (next_ctid)); \
+} while (0)
+
+/*
+ * Get the root line pointer of the HOT chain. The caller should have confirmed
+ * that the root offset is cached before calling this macro.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderGetRootOffset(tup) \
+( \
+	AssertMacro(((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0), \
+	ItemPointerGetOffsetNumber(&(tup)->t_ctid) \
+)
+
+/*
+ * Return whether the tuple has a cached root offset.  We don't use
+ * HeapTupleHeaderIsHeapLatest because that one also considers the case of
+ * t_ctid pointing to itself, for tuples migrated from pre v10 clusters. Here
+ * we are only interested in the tuples which are marked with HEAP_LATEST_TUPLE
+ * flag.
+ */
+#define HeapTupleHeaderHasRootOffset(tup) \
+( \
+	((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0 \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
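For readers skimming the htup_details.h hunks above, here is a minimal standalone sketch of the t_ctid convention the new macros implement: the last tuple in an update chain sets HEAP_LATEST_TUPLE and caches the root line pointer's offset in t_ctid, while intermediate tuples keep the next TID there. The simplified types and names (TidSketch, TupleHeaderSketch, the helper functions) are illustrative, not the PostgreSQL definitions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define HEAP_LATEST_TUPLE 0x1000

typedef struct
{
	uint32_t block;		/* simplified ItemPointer: block number */
	uint16_t offset;	/* and offset number */
} TidSketch;

typedef struct
{
	uint16_t  infomask2;
	TidSketch ctid;		/* next TID in chain, or root offset if latest */
} TupleHeaderSketch;

/* Mark the tuple as last in its chain, caching the root offset in ctid. */
static void
set_heap_latest(TupleHeaderSketch *tup, uint16_t root_offnum)
{
	tup->infomask2 |= HEAP_LATEST_TUPLE;
	tup->ctid.offset = root_offnum;
}

/* Link the tuple to its successor, clearing the "latest" flag. */
static void
set_next_tid(TupleHeaderSketch *tup, TidSketch next)
{
	tup->ctid = next;
	tup->infomask2 &= ~HEAP_LATEST_TUPLE;
}

/* Does t_ctid currently hold a cached root offset? */
static bool
has_root_offset(const TupleHeaderSketch *tup)
{
	return (tup->infomask2 & HEAP_LATEST_TUPLE) != 0;
}

/* Only valid when the flag is set, mirroring the AssertMacro in the patch. */
static uint16_t
get_root_offset(const TupleHeaderSketch *tup)
{
	assert(has_root_offset(tup));
	return tup->ctid.offset;
}
```

This also shows why heap_get_root_tuple is needed for pre-upgrade tuples: only tuples written with the flag set carry the cached root offset.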
#89 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Pavan Deolasee (#88)
Re: Patch: Write Amplification Reduction Method (WARM)

Here's a rebased set of patches. This is the same Pavan posted; I only
fixed some whitespace and a trivial conflict in indexam.c, per 9b88f27cb42f.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#90 Robert Haas
robertmhaas@gmail.com
In reply to: Alvaro Herrera (#89)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Mar 8, 2017 at 12:00 PM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:

Here's a rebased set of patches. This is the same Pavan posted; I only
fixed some whitespace and a trivial conflict in indexam.c, per 9b88f27cb42f.

No attachments.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#91 Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Alvaro Herrera (#89)
6 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

Alvaro Herrera wrote:

Here's a rebased set of patches. This is the same Pavan posted; I only
fixed some whitespace and a trivial conflict in indexam.c, per 9b88f27cb42f.

Jaime noted that I forgot the attachments. Here they are

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachments:

0001-interesting-attrs-v16.patch (text/plain; charset=us-ascii)
From ba96dd9053eaf326fb6fa28cf80dcc28daa5551d Mon Sep 17 00:00:00 2001
From: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Wed, 8 Mar 2017 13:48:13 -0300
Subject: [PATCH 1/6] interesting attrs v16

---
 src/backend/access/heap/heapam.c | 178 ++++++++++++---------------------------
 1 file changed, 53 insertions(+), 125 deletions(-)

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index af25836..74fb09c 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -96,11 +96,8 @@ static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
 				HeapTuple newtup, HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
-static void HeapSatisfiesHOTandKeyUpdate(Relation relation,
-							 Bitmapset *hot_attrs,
-							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
-							 bool *satisfies_id,
+static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
+							 Bitmapset *interesting_cols,
 							 HeapTuple oldtup, HeapTuple newtup);
 static bool heap_acquire_tuplock(Relation relation, ItemPointer tid,
 					 LockTupleMode mode, LockWaitPolicy wait_policy,
@@ -3455,6 +3452,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *interesting_attrs;
+	Bitmapset  *modified_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3472,9 +3471,6 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				pagefree;
 	bool		have_tuple_lock = false;
 	bool		iscombo;
-	bool		satisfies_hot;
-	bool		satisfies_key;
-	bool		satisfies_id;
 	bool		use_hot_update = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
@@ -3501,21 +3497,30 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				 errmsg("cannot update tuples during a parallel operation")));
 
 	/*
-	 * Fetch the list of attributes to be checked for HOT update.  This is
-	 * wasted effort if we fail to update or have to put the new tuple on a
-	 * different page.  But we must compute the list before obtaining buffer
-	 * lock --- in the worst case, if we are doing an update on one of the
-	 * relevant system catalogs, we could deadlock if we try to fetch the list
-	 * later.  In any case, the relcache caches the data so this is usually
-	 * pretty cheap.
+	 * Fetch the list of attributes to be checked for various operations.
 	 *
-	 * Note that we get a copy here, so we need not worry about relcache flush
-	 * happening midway through.
+	 * For HOT considerations, this is wasted effort if we fail to update or
+	 * have to put the new tuple on a different page.  But we must compute the
+	 * list before obtaining buffer lock --- in the worst case, if we are doing
+	 * an update on one of the relevant system catalogs, we could deadlock if
+	 * we try to fetch the list later.  In any case, the relcache caches the
+	 * data so this is usually pretty cheap.
+	 *
+	 * We also need columns used by the replica identity, the columns that
+	 * are considered the "key" of rows in the table, and columns that are
+	 * part of indirect indexes.
+	 *
+	 * Note that we get copies of each bitmap, so we need not worry about
+	 * relcache flush happening midway through.
 	 */
 	hot_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_ALL);
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	interesting_attrs = bms_add_members(NULL, hot_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
+
 
 	block = ItemPointerGetBlockNumber(otid);
 	buffer = ReadBuffer(relation, block);
@@ -3536,7 +3541,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Assert(ItemIdIsNormal(lp));
 
 	/*
-	 * Fill in enough data in oldtup for HeapSatisfiesHOTandKeyUpdate to work
+	 * Fill in enough data in oldtup for HeapDetermineModifiedColumns to work
 	 * properly.
 	 */
 	oldtup.t_tableOid = RelationGetRelid(relation);
@@ -3562,6 +3567,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 		Assert(!(newtup->t_data->t_infomask & HEAP_HASOID));
 	}
 
+	/* Determine columns modified by the update. */
+	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
+												  &oldtup, newtup);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3573,10 +3582,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	 * is updates that don't manipulate key columns, not those that
 	 * serendipitiously arrive at the same key values.
 	 */
-	HeapSatisfiesHOTandKeyUpdate(relation, hot_attrs, key_attrs, id_attrs,
-								 &satisfies_hot, &satisfies_key,
-								 &satisfies_id, &oldtup, newtup);
-	if (satisfies_key)
+	if (!bms_overlap(modified_attrs, key_attrs))
 	{
 		*lockmode = LockTupleNoKeyExclusive;
 		mxact_status = MultiXactStatusNoKeyUpdate;
@@ -3815,6 +3821,8 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(modified_attrs);
+		bms_free(interesting_attrs);
 		return result;
 	}
 
@@ -4119,7 +4127,7 @@ l2:
 		 * to do a HOT update.  Check if any of the index columns have been
 		 * changed.  If not, then HOT update is possible.
 		 */
-		if (satisfies_hot)
+		if (!bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
 	}
 	else
@@ -4134,7 +4142,9 @@ l2:
 	 * ExtractReplicaIdentity() will return NULL if nothing needs to be
 	 * logged.
 	 */
-	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup, !satisfies_id, &old_key_copied);
+	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup,
+										   bms_overlap(modified_attrs, id_attrs),
+										   &old_key_copied);
 
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
@@ -4282,13 +4292,15 @@ l2:
 	bms_free(hot_attrs);
 	bms_free(key_attrs);
 	bms_free(id_attrs);
+	bms_free(modified_attrs);
+	bms_free(interesting_attrs);
 
 	return HeapTupleMayBeUpdated;
 }
 
 /*
  * Check if the specified attribute's value is same in both given tuples.
- * Subroutine for HeapSatisfiesHOTandKeyUpdate.
+ * Subroutine for HeapDetermineModifiedColumns.
  */
 static bool
 heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
@@ -4322,7 +4334,7 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 
 	/*
 	 * Extract the corresponding values.  XXX this is pretty inefficient if
-	 * there are many indexed columns.  Should HeapSatisfiesHOTandKeyUpdate do
+	 * there are many indexed columns.  Should HeapDetermineModifiedColumns do
 	 * a single heap_deform_tuple call on each tuple, instead?	But that
 	 * doesn't work for system columns ...
 	 */
@@ -4367,114 +4379,30 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 /*
  * Check which columns are being updated.
  *
- * This simultaneously checks conditions for HOT updates, for FOR KEY
- * SHARE updates, and REPLICA IDENTITY concerns.  Since much of the time they
- * will be checking very similar sets of columns, and doing the same tests on
- * them, it makes sense to optimize and do them together.
+ * Given an updated tuple, determine which of the interesting columns were
+ * changed by the update, and return them as an output bitmapset.
  *
- * We receive three bitmapsets comprising the three sets of columns we're
- * interested in.  Note these are destructively modified; that is OK since
- * this is invoked at most once in heap_update.
- *
- * hot_result is set to TRUE if it's okay to do a HOT update (i.e. it does not
- * modified indexed columns); key_result is set to TRUE if the update does not
- * modify columns used in the key; id_result is set to TRUE if the update does
- * not modify columns in any index marked as the REPLICA IDENTITY.
+ * The input bitmapset is destructively modified; that is OK since this is
+ * invoked at most once in heap_update.
  */
-static void
-HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
-							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
-							 bool *satisfies_id,
+static Bitmapset *
+HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
 							 HeapTuple oldtup, HeapTuple newtup)
 {
-	int			next_hot_attnum;
-	int			next_key_attnum;
-	int			next_id_attnum;
-	bool		hot_result = true;
-	bool		key_result = true;
-	bool		id_result = true;
+	int		attnum;
+	Bitmapset *modified = NULL;
 
-	/* If REPLICA IDENTITY is set to FULL, id_attrs will be empty. */
-	Assert(bms_is_subset(id_attrs, key_attrs));
-	Assert(bms_is_subset(key_attrs, hot_attrs));
-
-	/*
-	 * If one of these sets contains no remaining bits, bms_first_member will
-	 * return -1, and after adding FirstLowInvalidHeapAttributeNumber (which
-	 * is negative!)  we'll get an attribute number that can't possibly be
-	 * real, and thus won't match any actual attribute number.
-	 */
-	next_hot_attnum = bms_first_member(hot_attrs);
-	next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
-	next_key_attnum = bms_first_member(key_attrs);
-	next_key_attnum += FirstLowInvalidHeapAttributeNumber;
-	next_id_attnum = bms_first_member(id_attrs);
-	next_id_attnum += FirstLowInvalidHeapAttributeNumber;
-
-	for (;;)
+	while ((attnum = bms_first_member(interesting_cols)) >= 0)
 	{
-		bool		changed;
-		int			check_now;
+		attnum += FirstLowInvalidHeapAttributeNumber;
 
-		/*
-		 * Since the HOT attributes are a superset of the key attributes and
-		 * the key attributes are a superset of the id attributes, this logic
-		 * is guaranteed to identify the next column that needs to be checked.
-		 */
-		if (hot_result && next_hot_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_hot_attnum;
-		else if (key_result && next_key_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_key_attnum;
-		else if (id_result && next_id_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_id_attnum;
-		else
-			break;
-
-		/* See whether it changed. */
-		changed = !heap_tuple_attr_equals(RelationGetDescr(relation),
-										  check_now, oldtup, newtup);
-		if (changed)
-		{
-			if (check_now == next_hot_attnum)
-				hot_result = false;
-			if (check_now == next_key_attnum)
-				key_result = false;
-			if (check_now == next_id_attnum)
-				id_result = false;
-
-			/* if all are false now, we can stop checking */
-			if (!hot_result && !key_result && !id_result)
-				break;
-		}
-
-		/*
-		 * Advance the next attribute numbers for the sets that contain the
-		 * attribute we just checked.  As we work our way through the columns,
-		 * the next_attnum values will rise; but when each set becomes empty,
-		 * bms_first_member() will return -1 and the attribute number will end
-		 * up with a value less than FirstLowInvalidHeapAttributeNumber.
-		 */
-		if (hot_result && check_now == next_hot_attnum)
-		{
-			next_hot_attnum = bms_first_member(hot_attrs);
-			next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
-		if (key_result && check_now == next_key_attnum)
-		{
-			next_key_attnum = bms_first_member(key_attrs);
-			next_key_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
-		if (id_result && check_now == next_id_attnum)
-		{
-			next_id_attnum = bms_first_member(id_attrs);
-			next_id_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
+		if (!heap_tuple_attr_equals(RelationGetDescr(relation),
+								   attnum, oldtup, newtup))
+			modified = bms_add_member(modified,
+									  attnum - FirstLowInvalidHeapAttributeNumber);
 	}
 
-	*satisfies_hot = hot_result;
-	*satisfies_key = key_result;
-	*satisfies_id = id_result;
+	return modified;
 }
 
 /*
-- 
2.1.4
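The first patch above replaces HeapDetermineModifiedColumns's three lockstep scans of the hot/key/id attribute sets with a single pass that returns the set of modified columns, which callers can then probe with bms_overlap. A minimal standalone sketch of that shape, using a plain bitmask in place of Bitmapset and a hypothetical attr_equals() in place of heap_tuple_attr_equals() (the names, the int-array tuples, and the 32-column limit are illustrative assumptions, not the patch's actual code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Illustrative stand-in for PostgreSQL's Bitmapset: a 32-bit mask, one bit
 * per attribute number. (The real Bitmapset has no such column limit.)
 */
typedef uint32_t AttrBitmap;

/* Hypothetical attribute comparison; stands in for heap_tuple_attr_equals(). */
static bool
attr_equals(const int *oldtup, const int *newtup, int attnum)
{
	return oldtup[attnum] == newtup[attnum];
}

/*
 * One pass over the interesting columns, returning the set of columns whose
 * value changed between the two tuple versions. Callers then just test
 * overlap against hot_attrs/key_attrs/id_attrs instead of advancing three
 * attribute sets in lockstep.
 */
static AttrBitmap
determine_modified(const int *oldtup, const int *newtup, AttrBitmap interesting)
{
	AttrBitmap	modified = 0;
	int			attnum;

	for (attnum = 0; attnum < 32; attnum++)
	{
		if ((interesting & ((AttrBitmap) 1 << attnum)) == 0)
			continue;			/* caller doesn't care about this column */
		if (!attr_equals(oldtup, newtup, attnum))
			modified |= (AttrBitmap) 1 << attnum;
	}
	return modified;
}
```

With the modified set in hand, the three boolean outputs of the old coding reduce to overlap tests, e.g. `satisfies_hot = (modified & hot_attrs) == 0;`.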

0002-track-root-lp-v16.patch (text/plain; charset=us-ascii)
From 6c4c004a2a7f5f269dc33942f7c397fe962c8685 Mon Sep 17 00:00:00 2001
From: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Wed, 8 Mar 2017 13:48:33 -0300
Subject: [PATCH 2/6] track root lp v16

---
 src/backend/access/heap/heapam.c      | 209 ++++++++++++++++++++++++++++------
 src/backend/access/heap/hio.c         |  25 +++-
 src/backend/access/heap/pruneheap.c   | 126 ++++++++++++++++++--
 src/backend/access/heap/rewriteheap.c |  21 +++-
 src/backend/executor/execIndexing.c   |   3 +-
 src/backend/executor/execMain.c       |   4 +-
 src/include/access/heapam.h           |   1 +
 src/include/access/heapam_xlog.h      |   4 +-
 src/include/access/hio.h              |   4 +-
 src/include/access/htup_details.h     |  97 +++++++++++++++-
 10 files changed, 428 insertions(+), 66 deletions(-)

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 74fb09c..93cde9a 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -94,7 +94,8 @@ static HeapTuple heap_prepare_insert(Relation relation, HeapTuple tup,
 					TransactionId xid, CommandId cid, int options);
 static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
-				HeapTuple newtup, HeapTuple old_key_tup,
+				HeapTuple newtup, OffsetNumber root_offnum,
+				HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
 static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
 							 Bitmapset *interesting_cols,
@@ -2248,13 +2249,13 @@ heap_get_latest_tid(Relation relation,
 		 */
 		if ((tp.t_data->t_infomask & HEAP_XMAX_INVALID) ||
 			HeapTupleHeaderIsOnlyLocked(tp.t_data) ||
-			ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid))
+			HeapTupleHeaderIsHeapLatest(tp.t_data, &ctid))
 		{
 			UnlockReleaseBuffer(buffer);
 			break;
 		}
 
-		ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tp.t_data, &ctid);
 		priorXmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		UnlockReleaseBuffer(buffer);
 	}							/* end of loop */
@@ -2385,6 +2386,7 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	bool		all_visible_cleared = false;
+	OffsetNumber	root_offnum;
 
 	/*
 	 * Fill in tuple header fields, assign an OID, and toast the tuple if
@@ -2423,8 +2425,13 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
-	RelationPutHeapTuple(relation, buffer, heaptup,
-						 (options & HEAP_INSERT_SPECULATIVE) != 0);
+	root_offnum = RelationPutHeapTuple(relation, buffer, heaptup,
+						 (options & HEAP_INSERT_SPECULATIVE) != 0,
+						 InvalidOffsetNumber);
+
+	/* We must not overwrite the speculative insertion token. */
+	if ((options & HEAP_INSERT_SPECULATIVE) == 0)
+		HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 	if (PageIsAllVisible(BufferGetPage(buffer)))
 	{
@@ -2652,6 +2659,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 	Size		saveFreeSpace;
 	bool		need_tuple_data = RelationIsLogicallyLogged(relation);
 	bool		need_cids = RelationIsAccessibleInLogicalDecoding(relation);
+	OffsetNumber	root_offnum;
 
 	needwal = !(options & HEAP_INSERT_SKIP_WAL) && RelationNeedsWAL(relation);
 	saveFreeSpace = RelationGetTargetPageFreeSpace(relation,
@@ -2722,7 +2730,12 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		 * RelationGetBufferForTuple has ensured that the first tuple fits.
 		 * Put that on the page, and then as many other tuples as fit.
 		 */
-		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false);
+		root_offnum = RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false,
+				InvalidOffsetNumber);
+
+		/* Mark this tuple as the latest and also set root offset. */
+		HeapTupleHeaderSetHeapLatest(heaptuples[ndone]->t_data, root_offnum);
+
 		for (nthispage = 1; ndone + nthispage < ntuples; nthispage++)
 		{
 			HeapTuple	heaptup = heaptuples[ndone + nthispage];
@@ -2730,7 +2743,10 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			if (PageGetHeapFreeSpace(page) < MAXALIGN(heaptup->t_len) + saveFreeSpace)
 				break;
 
-			RelationPutHeapTuple(relation, buffer, heaptup, false);
+			root_offnum = RelationPutHeapTuple(relation, buffer, heaptup, false,
+					InvalidOffsetNumber);
+			/* Mark each tuple as the latest and also set root offset. */
+			HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 			/*
 			 * We don't use heap_multi_insert for catalog tuples yet, but
@@ -3002,6 +3018,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	HeapTupleData tp;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	TransactionId new_xmax;
@@ -3012,6 +3029,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	bool		all_visible_cleared = false;
 	HeapTuple	old_key_tuple = NULL;	/* replica identity of the tuple */
 	bool		old_key_copied = false;
+	OffsetNumber	root_offnum;
 
 	Assert(ItemPointerIsValid(tid));
 
@@ -3053,7 +3071,8 @@ heap_delete(Relation relation, ItemPointer tid,
 		LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 	}
 
-	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid));
+	offnum = ItemPointerGetOffsetNumber(tid);
+	lp = PageGetItemId(page, offnum);
 	Assert(ItemIdIsNormal(lp));
 
 	tp.t_tableOid = RelationGetRelid(relation);
@@ -3183,7 +3202,17 @@ l1:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tp.t_data->t_ctid;
+
+		/*
+		 * If we're at the end of the chain, then just return the same TID back
+		 * to the caller. The caller uses that as a hint to know if we have hit
+		 * the end of the chain.
+		 */
+		if (!HeapTupleHeaderIsHeapLatest(tp.t_data, &tp.t_self))
+			HeapTupleHeaderGetNextTid(tp.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&tp.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tp.t_data);
@@ -3232,6 +3261,22 @@ l1:
 							  xid, LockTupleExclusive, true,
 							  &new_xmax, &new_infomask, &new_infomask2);
 
+	/*
+	 * heap_get_root_tuple() may call palloc, which is disallowed once we
+	 * enter the critical section. So check if the root offset is cached in the
+	 * tuple and if not, fetch that information the hard way before entering
+	 * the critical section.
+	 *
+	 * Most often, and unless we are dealing with a pg_upgraded cluster, the
+	 * root offset information should be cached, so there should not be much
+	 * overhead in fetching this information. Also, once a tuple is
+	 * updated, the information will be copied to the new version, so it's not
+	 * as if we're going to pay this price forever.
+	 */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&tp.t_self));
+
 	START_CRIT_SECTION();
 
 	/*
@@ -3259,8 +3304,10 @@ l1:
 	HeapTupleHeaderClearHotUpdated(tp.t_data);
 	HeapTupleHeaderSetXmax(tp.t_data, new_xmax);
 	HeapTupleHeaderSetCmax(tp.t_data, cid, iscombo);
-	/* Make sure there is no forward chain link in t_ctid */
-	tp.t_data->t_ctid = tp.t_self;
+
+	/* Mark this tuple as the latest tuple in the update chain. */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		HeapTupleHeaderSetHeapLatest(tp.t_data, root_offnum);
 
 	MarkBufferDirty(buffer);
 
@@ -3461,6 +3508,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		old_key_copied = false;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
+	OffsetNumber	root_offnum;
 	MultiXactStatus mxact_status;
 	Buffer		buffer,
 				newbuf,
@@ -3523,6 +3572,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 
 
 	block = ItemPointerGetBlockNumber(otid);
+	offnum = ItemPointerGetOffsetNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
@@ -3807,7 +3857,12 @@ l2:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(oldtup.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = oldtup.t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(oldtup.t_data, &oldtup.t_self))
+			HeapTupleHeaderGetNextTid(oldtup.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&oldtup.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(oldtup.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(oldtup.t_data);
@@ -3947,6 +4002,7 @@ l2:
 		uint16		infomask_lock_old_tuple,
 					infomask2_lock_old_tuple;
 		bool		cleared_all_frozen = false;
+		OffsetNumber	root_offnum;
 
 		/*
 		 * To prevent concurrent sessions from updating the tuple, we have to
@@ -3974,6 +4030,14 @@ l2:
 
 		Assert(HEAP_XMAX_IS_LOCKED_ONLY(infomask_lock_old_tuple));
 
+		/*
+		 * Fetch root offset before entering the critical section. We do this
+		 * only if the information is not already available.
+		 */
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&oldtup.t_self));
+
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
@@ -3988,7 +4052,8 @@ l2:
 		HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 		/* temporarily make it look not-updated, but locked */
-		oldtup.t_data->t_ctid = oldtup.t_self;
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			HeapTupleHeaderSetHeapLatest(oldtup.t_data, root_offnum);
 
 		/*
 		 * Clear all-frozen bit on visibility map if needed. We could
@@ -4146,6 +4211,10 @@ l2:
 										   bms_overlap(modified_attrs, id_attrs),
 										   &old_key_copied);
 
+	if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
@@ -4171,6 +4240,17 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+		/*
+		 * For HOT (or WARM) updated tuples, we store the offset of the root
+		 * line pointer of this chain in the ip_posid field of the new tuple.
+		 * Usually this information will be available in the corresponding
+		 * field of the old tuple. But for aborted updates or pg_upgraded
+		 * databases, we might be seeing old-style CTID chains, and hence
+		 * the information must be obtained the hard way (we should have done
+		 * that before entering the critical section above).
+		 */
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
 	else
 	{
@@ -4178,10 +4258,22 @@ l2:
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		root_offnum = InvalidOffsetNumber;
 	}
 
-	RelationPutHeapTuple(relation, newbuf, heaptup, false);		/* insert new tuple */
-
+	/* insert new tuple */
+	root_offnum = RelationPutHeapTuple(relation, newbuf, heaptup, false,
+									   root_offnum);
+	/*
+	 * Also mark both copies as latest and set the root offset information. If
+	 * we're doing a HOT/WARM update, then we just copy the information from
+	 * the old tuple, if available, or use the value computed above. For
+	 * regular updates, RelationPutHeapTuple returns the actual offset number
+	 * where the new version was inserted, and we store that value since the
+	 * update starts a new HOT chain.
+	 */
+	HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
+	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
 	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
@@ -4194,7 +4286,7 @@ l2:
 	HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 	/* record address of new tuple in t_ctid of old one */
-	oldtup.t_data->t_ctid = heaptup->t_self;
+	HeapTupleHeaderSetNextTid(oldtup.t_data, &(heaptup->t_self));
 
 	/* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */
 	if (PageIsAllVisible(BufferGetPage(buffer)))
@@ -4233,6 +4325,7 @@ l2:
 
 		recptr = log_heap_update(relation, buffer,
 								 newbuf, &oldtup, heaptup,
+								 root_offnum,
 								 old_key_tuple,
 								 all_visible_cleared,
 								 all_visible_cleared_new);
@@ -4513,7 +4606,8 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	ItemId		lp;
 	Page		page;
 	Buffer		vmbuffer = InvalidBuffer;
-	BlockNumber block;
+	BlockNumber	block;
+	OffsetNumber	offnum;
 	TransactionId xid,
 				xmax;
 	uint16		old_infomask,
@@ -4522,9 +4616,11 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	bool		first_time = true;
 	bool		have_tuple_lock = false;
 	bool		cleared_all_frozen = false;
+	OffsetNumber	root_offnum;
 
 	*buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	block = ItemPointerGetBlockNumber(tid);
+	offnum = ItemPointerGetOffsetNumber(tid);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -4544,6 +4640,7 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	tuple->t_data = (HeapTupleHeader) PageGetItem(page, lp);
 	tuple->t_len = ItemIdGetLength(lp);
 	tuple->t_tableOid = RelationGetRelid(relation);
+	tuple->t_self = *tid;
 
 l3:
 	result = HeapTupleSatisfiesUpdate(tuple, cid, *buffer);
@@ -4571,7 +4668,11 @@ l3:
 		xwait = HeapTupleHeaderGetRawXmax(tuple->t_data);
 		infomask = tuple->t_data->t_infomask;
 		infomask2 = tuple->t_data->t_infomask2;
-		ItemPointerCopy(&tuple->t_data->t_ctid, &t_ctid);
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &t_ctid);
+		else
+			ItemPointerCopy(tid, &t_ctid);
 
 		LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
 
@@ -5009,7 +5110,12 @@ failed:
 		Assert(result == HeapTupleSelfUpdated || result == HeapTupleUpdated ||
 			   result == HeapTupleWouldBlock);
 		Assert(!(tuple->t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tuple->t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(tid, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tuple->t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tuple->t_data);
@@ -5057,6 +5163,10 @@ failed:
 							  GetCurrentTransactionId(), mode, false,
 							  &xid, &new_infomask, &new_infomask2);
 
+	if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&tuple->t_self));
+
 	START_CRIT_SECTION();
 
 	/*
@@ -5085,7 +5195,10 @@ failed:
 	 * the tuple as well.
 	 */
 	if (HEAP_XMAX_IS_LOCKED_ONLY(new_infomask))
-		tuple->t_data->t_ctid = *tid;
+	{
+		if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+			HeapTupleHeaderSetHeapLatest(tuple->t_data, root_offnum);
+	}
 
 	/* Clear only the all-frozen bit on visibility map if needed */
 	if (PageIsAllVisible(page) &&
@@ -5599,6 +5712,7 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 	bool		cleared_all_frozen = false;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber block;
+	OffsetNumber offnum;
 
 	ItemPointerCopy(tid, &tupid);
 
@@ -5607,6 +5721,8 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 		new_infomask = 0;
 		new_xmax = InvalidTransactionId;
 		block = ItemPointerGetBlockNumber(&tupid);
+		offnum = ItemPointerGetOffsetNumber(&tupid);
+
 		ItemPointerCopy(&tupid, &(mytup.t_self));
 
 		if (!heap_fetch(rel, SnapshotAny, &mytup, &buf, false, NULL))
@@ -5836,7 +5952,7 @@ l4:
 
 		/* if we find the end of update chain, we're done. */
 		if (mytup.t_data->t_infomask & HEAP_XMAX_INVALID ||
-			ItemPointerEquals(&mytup.t_self, &mytup.t_data->t_ctid) ||
+			HeapTupleHeaderIsHeapLatest(mytup.t_data, &mytup.t_self) ||
 			HeapTupleHeaderIsOnlyLocked(mytup.t_data))
 		{
 			result = HeapTupleMayBeUpdated;
@@ -5845,7 +5961,7 @@ l4:
 
 		/* tail recursion */
 		priorXmax = HeapTupleHeaderGetUpdateXid(mytup.t_data);
-		ItemPointerCopy(&(mytup.t_data->t_ctid), &tupid);
+		HeapTupleHeaderGetNextTid(mytup.t_data, &tupid);
 		UnlockReleaseBuffer(buf);
 		if (vmbuffer != InvalidBuffer)
 			ReleaseBuffer(vmbuffer);
@@ -5962,7 +6078,7 @@ heap_finish_speculative(Relation relation, HeapTuple tuple)
 	 * Replace the speculative insertion token with a real t_ctid, pointing to
 	 * itself like it does on regular tuples.
 	 */
-	htup->t_ctid = tuple->t_self;
+	HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 	/* XLOG stuff */
 	if (RelationNeedsWAL(relation))
@@ -6088,8 +6204,7 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	HeapTupleHeaderSetXmin(tp.t_data, InvalidTransactionId);
 
 	/* Clear the speculative insertion token too */
-	tp.t_data->t_ctid = tp.t_self;
-
+	HeapTupleHeaderSetHeapLatest(tp.t_data, ItemPointerGetOffsetNumber(tid));
 	MarkBufferDirty(buffer);
 
 	/*
@@ -7437,6 +7552,7 @@ log_heap_visible(RelFileNode rnode, Buffer heap_buffer, Buffer vm_buffer,
 static XLogRecPtr
 log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup, HeapTuple newtup,
+				OffsetNumber root_offnum,
 				HeapTuple old_key_tuple,
 				bool all_visible_cleared, bool new_all_visible_cleared)
 {
@@ -7557,6 +7673,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	xlrec.new_offnum = ItemPointerGetOffsetNumber(&newtup->t_self);
 	xlrec.new_xmax = HeapTupleHeaderGetRawXmax(newtup->t_data);
 
+	Assert(OffsetNumberIsValid(root_offnum));
+	xlrec.root_offnum = root_offnum;
+
 	bufflags = REGBUF_STANDARD;
 	if (init)
 		bufflags |= REGBUF_WILL_INIT;
@@ -8211,7 +8330,13 @@ heap_xlog_delete(XLogReaderState *record)
 			PageClearAllVisible(page);
 
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = target_tid;
+		if (!HeapTupleHeaderHasRootOffset(htup))
+		{
+			OffsetNumber	root_offnum;
+			root_offnum = heap_get_root_tuple(page, xlrec->offnum);
+			HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+		}
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8301,7 +8426,8 @@ heap_xlog_insert(XLogReaderState *record)
 		htup->t_hoff = xlhdr.t_hoff;
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
-		htup->t_ctid = target_tid;
+
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->offnum);
 
 		if (PageAddItem(page, (Item) htup, newlen, xlrec->offnum,
 						true, true) == InvalidOffsetNumber)
@@ -8436,8 +8562,8 @@ heap_xlog_multi_insert(XLogReaderState *record)
 			htup->t_hoff = xlhdr->t_hoff;
 			HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 			HeapTupleHeaderSetCmin(htup, FirstCommandId);
-			ItemPointerSetBlockNumber(&htup->t_ctid, blkno);
-			ItemPointerSetOffsetNumber(&htup->t_ctid, offnum);
+
+			HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 			offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 			if (offnum == InvalidOffsetNumber)
@@ -8573,7 +8699,7 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
 		/* Set forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetNextTid(htup, &newtid);
 
 		/* Mark the page as a candidate for pruning */
 		PageSetPrunable(page, XLogRecGetXid(record));
@@ -8706,13 +8832,17 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
-		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = newtid;
 
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
 
+		/*
+		 * Make sure the tuple is marked as the latest and root offset
+		 * information is restored.
+		 */
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->root_offnum);
+
 		if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED)
 			PageClearAllVisible(page);
 
@@ -8775,6 +8905,9 @@ heap_xlog_confirm(XLogReaderState *record)
 		 */
 		ItemPointerSet(&htup->t_ctid, BufferGetBlockNumber(buffer), offnum);
 
+		/* For a newly inserted tuple, set the root offset to itself. */
+		HeapTupleHeaderSetHeapLatest(htup, offnum);
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8838,11 +8971,17 @@ heap_xlog_lock(XLogReaderState *record)
 		 */
 		if (HEAP_XMAX_IS_LOCKED_ONLY(htup->t_infomask))
 		{
+			ItemPointerData	target_tid;
+
+			ItemPointerSet(&target_tid, BufferGetBlockNumber(buffer), offnum);
 			HeapTupleHeaderClearHotUpdated(htup);
 			/* Make sure there is no forward chain link in t_ctid */
-			ItemPointerSet(&htup->t_ctid,
-						   BufferGetBlockNumber(buffer),
-						   offnum);
+			if (!HeapTupleHeaderHasRootOffset(htup))
+			{
+				OffsetNumber	root_offnum;
+				root_offnum = heap_get_root_tuple(page, offnum);
+				HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+			}
 		}
 		HeapTupleHeaderSetXmax(htup, xlrec->locking_xid);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index 6529fe3..8052519 100644
--- a/src/backend/access/heap/hio.c
+++ b/src/backend/access/heap/hio.c
@@ -31,12 +31,20 @@
  * !!! EREPORT(ERROR) IS DISALLOWED HERE !!!  Must PANIC on failure!!!
  *
  * Note - caller must hold BUFFER_LOCK_EXCLUSIVE on the buffer.
+ *
+ * The caller can optionally tell us to set the root offset to the given value.
+ * Otherwise, the root offset is set to the offset of the new location once
+ * it's known. The former is used while updating an existing tuple, where the
+ * caller tells us about the root line pointer of the chain.  The latter is
+ * used during insertion of a new row, hence the root line pointer is set to
+ * the offset where this tuple is inserted.
  */
-void
+OffsetNumber
 RelationPutHeapTuple(Relation relation,
 					 Buffer buffer,
 					 HeapTuple tuple,
-					 bool token)
+					 bool token,
+					 OffsetNumber root_offnum)
 {
 	Page		pageHeader;
 	OffsetNumber offnum;
@@ -60,17 +68,24 @@ RelationPutHeapTuple(Relation relation,
 	ItemPointerSet(&(tuple->t_self), BufferGetBlockNumber(buffer), offnum);
 
 	/*
-	 * Insert the correct position into CTID of the stored tuple, too (unless
-	 * this is a speculative insertion, in which case the token is held in
-	 * CTID field instead)
+	 * Set block number and the root offset into CTID of the stored tuple, too
+	 * (unless this is a speculative insertion, in which case the token is held
+	 * in CTID field instead).
 	 */
 	if (!token)
 	{
 		ItemId		itemId = PageGetItemId(pageHeader, offnum);
 		Item		item = PageGetItem(pageHeader, itemId);
 
+		/* Copy t_ctid to set the correct block number. */
 		((HeapTupleHeader) item)->t_ctid = tuple->t_self;
+
+		if (!OffsetNumberIsValid(root_offnum))
+			root_offnum = offnum;
+		HeapTupleHeaderSetHeapLatest((HeapTupleHeader) item, root_offnum);
 	}
+
+	return root_offnum;
 }
 
 /*
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index d69a266..f54337c 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -55,6 +55,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
 
+static void heap_get_root_tuples_internal(Page page,
+				OffsetNumber target_offnum, OffsetNumber *root_offsets);
 
 /*
  * Optionally prune and repair fragmentation in the specified page.
@@ -553,6 +555,17 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
 		if (!HeapTupleHeaderIsHotUpdated(htup))
 			break;
 
+
+		/*
+		 * If the tuple was HOT-updated and the update was later
+		 * aborted, someone could mark this tuple as the last tuple
+		 * in the chain without clearing the HOT-updated flag. So we must
+		 * check if this is the last tuple in the chain and stop following the
+		 * CTID, else we risk getting into an infinite recursion (though
+		 * prstate->marked[] currently protects against that).
+		 */
+		if (HeapTupleHeaderHasRootOffset(htup))
+			break;
 		/*
 		 * Advance to next chain member.
 		 */
@@ -726,27 +739,47 @@ heap_page_prune_execute(Buffer buffer,
 
 
 /*
- * For all items in this page, find their respective root line pointers.
- * If item k is part of a HOT-chain with root at item j, then we set
- * root_offsets[k - 1] = j.
+ * Either for all items in this page or for the given item, find their
+ * respective root line pointers.
  *
- * The passed-in root_offsets array must have MaxHeapTuplesPerPage entries.
- * We zero out all unused entries.
+ * When target_offnum is a valid offset number, the caller is interested in
+ * just one item. In that case, the root line pointer is returned in
+ * root_offsets.
+ *
+ * When target_offnum is InvalidOffsetNumber, the caller wants to know
+ * the root line pointers of all the items in this page. The root_offsets array
+ * must have MaxHeapTuplesPerPage entries in that case. If item k is part of a
+ * HOT-chain with root at item j, then we set root_offsets[k - 1] = j. We zero
+ * out all unused entries.
  *
  * The function must be called with at least share lock on the buffer, to
  * prevent concurrent prune operations.
  *
+ * This is not a cheap function since it must scan through all line pointers
+ * and tuples on the page in order to find the root line pointers. To minimize
+ * the cost, we break early if target_offnum is specified and the root line
+ * pointer for target_offnum is found.
+ *
  * Note: The information collected here is valid only as long as the caller
  * holds a pin on the buffer. Once pin is released, a tuple might be pruned
  * and reused by a completely unrelated tuple.
+ *
+ * Note: This function must not be called inside a critical section because it
+ * internally calls HeapTupleHeaderGetUpdateXid, which somewhere down the
+ * stack may try to allocate heap memory. Memory allocation is disallowed in a
+ * critical section.
  */
-void
-heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+static void
+heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offsets)
 {
 	OffsetNumber offnum,
 				maxoff;
 
-	MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
+	if (OffsetNumberIsValid(target_offnum))
+		*root_offsets = InvalidOffsetNumber;
+	else
+		MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
 
 	maxoff = PageGetMaxOffsetNumber(page);
 	for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum = OffsetNumberNext(offnum))
@@ -774,9 +807,28 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 
 			/*
 			 * This is either a plain tuple or the root of a HOT-chain.
-			 * Remember it in the mapping.
+			 *
+			 * If target_offnum is specified and we found its mapping,
+			 * return.
 			 */
-			root_offsets[offnum - 1] = offnum;
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (target_offnum == offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember the mapping for any other item. The
+				 * root_offsets array may not even have space for them, so be
+				 * careful not to write past the array.
+				 */
+			}
+			else
+			{
+				/* Remember it in the mapping. */
+				root_offsets[offnum - 1] = offnum;
+			}
 
 			/* If it's not the start of a HOT-chain, we're done with it */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
@@ -817,15 +869,65 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 				!TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(htup)))
 				break;
 
-			/* Remember the root line pointer for this item */
-			root_offsets[nextoffnum - 1] = offnum;
+			/*
+			 * If target_offnum is specified and we found its mapping, return.
+			 */
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (nextoffnum == target_offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember the mapping for any other item. The
+				 * root_offsets array may not even have space for them, so be
+				 * careful not to write past the array.
+				 */
+			}
+			else
+			{
+				/* Remember the root line pointer for this item. */
+				root_offsets[nextoffnum - 1] = offnum;
+			}
 
 			/* Advance to next chain member, if any */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				break;
 
+			/*
+			 * If the tuple was HOT-updated and the update was later aborted,
+			 * someone could mark this tuple as the last tuple in the chain
+			 * and store the root offset in CTID, without clearing the
+			 * HOT-updated flag. So we must check whether CTID actually holds
+			 * the root offset, and break to avoid infinite recursion.
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
 		}
 	}
 }
+
+/*
+ * Get root line pointer for the given tuple.
+ */
+OffsetNumber
+heap_get_root_tuple(Page page, OffsetNumber target_offnum)
+{
+	OffsetNumber offnum = InvalidOffsetNumber;
+	heap_get_root_tuples_internal(page, target_offnum, &offnum);
+	return offnum;
+}
+
+/*
+ * Get root line pointers for all tuples in the page
+ */
+void
+heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+{
+	heap_get_root_tuples_internal(page, InvalidOffsetNumber,
+			root_offsets);
+}
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index c7b283c..0792971 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -419,14 +419,18 @@ rewrite_heap_tuple(RewriteState state,
 	 */
 	if (!((old_tuple->t_data->t_infomask & HEAP_XMAX_INVALID) ||
 		  HeapTupleHeaderIsOnlyLocked(old_tuple->t_data)) &&
-		!(ItemPointerEquals(&(old_tuple->t_self),
-							&(old_tuple->t_data->t_ctid))))
+		!(HeapTupleHeaderIsHeapLatest(old_tuple->t_data, &old_tuple->t_self)))
 	{
 		OldToNewMapping mapping;
 
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
-		hashkey.tid = old_tuple->t_data->t_ctid;
+
+		/*
+		 * We've already checked that this is not the last tuple in the chain,
+		 * so fetch the next TID in the chain.
+		 */
+		HeapTupleHeaderGetNextTid(old_tuple->t_data, &hashkey.tid);
 
 		mapping = (OldToNewMapping)
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -439,7 +443,7 @@ rewrite_heap_tuple(RewriteState state,
 			 * set the ctid of this tuple to point to the new location, and
 			 * insert it right away.
 			 */
-			new_tuple->t_data->t_ctid = mapping->new_tid;
+			HeapTupleHeaderSetNextTid(new_tuple->t_data, &mapping->new_tid);
 
 			/* We don't need the mapping entry anymore */
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -525,7 +529,7 @@ rewrite_heap_tuple(RewriteState state,
 				new_tuple = unresolved->tuple;
 				free_new = true;
 				old_tid = unresolved->old_tid;
-				new_tuple->t_data->t_ctid = new_tid;
+				HeapTupleHeaderSetNextTid(new_tuple->t_data, &new_tid);
 
 				/*
 				 * We don't need the hash entry anymore, but don't free its
@@ -731,7 +735,12 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		onpage_tup->t_ctid = tup->t_self;
+		/*
+		 * Set t_ctid first so that the block number is copied correctly, then
+		 * immediately mark the tuple as the latest version in its chain.
+		 */
+		HeapTupleHeaderSetNextTid(onpage_tup, &tup->t_self);
+		HeapTupleHeaderSetHeapLatest(onpage_tup, newoff);
 	}
 
 	/* If heaptup is a private copy, release it. */
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 5242dee..2142273 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -789,7 +789,8 @@ retry:
 			  DirtySnapshot.speculativeToken &&
 			  TransactionIdPrecedes(GetCurrentTransactionId(), xwait))))
 		{
-			ctid_wait = tup->t_data->t_ctid;
+			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
+				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index f5cd65d..44a501f 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -2592,7 +2592,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		 * As above, it should be safe to examine xmax and t_ctid without the
 		 * buffer content lock, because they can't be changing.
 		 */
-		if (ItemPointerEquals(&tuple.t_self, &tuple.t_data->t_ctid))
+		if (HeapTupleHeaderIsHeapLatest(tuple.t_data, &tuple.t_self))
 		{
 			/* deleted, so forget about it */
 			ReleaseBuffer(buffer);
@@ -2600,7 +2600,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		}
 
 		/* updated, so look at the updated row */
-		tuple.t_self = tuple.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tuple.t_data, &tuple.t_self);
 		/* updated row should have xmin matching this xmax */
 		priorXmax = HeapTupleHeaderGetUpdateXid(tuple.t_data);
 		ReleaseBuffer(buffer);
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index a864f78..95aa976 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -189,6 +189,7 @@ extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
+extern OffsetNumber heap_get_root_tuple(Page page, OffsetNumber target_offnum);
 extern void heap_get_root_tuples(Page page, OffsetNumber *root_offsets);
 
 /* in heap/syncscan.c */
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index b285f17..e6019d5 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -193,6 +193,8 @@ typedef struct xl_heap_update
 	uint8		flags;
 	TransactionId new_xmax;		/* xmax of the new tuple */
 	OffsetNumber new_offnum;	/* new tuple's offset */
+	OffsetNumber root_offnum;	/* offset of the root line pointer in case of
+								   HOT or WARM update */
 
 	/*
 	 * If XLOG_HEAP_CONTAINS_OLD_TUPLE or XLOG_HEAP_CONTAINS_OLD_KEY flags are
@@ -200,7 +202,7 @@ typedef struct xl_heap_update
 	 */
 } xl_heap_update;
 
-#define SizeOfHeapUpdate	(offsetof(xl_heap_update, new_offnum) + sizeof(OffsetNumber))
+#define SizeOfHeapUpdate	(offsetof(xl_heap_update, root_offnum) + sizeof(OffsetNumber))
 
 /*
  * This is what we need to know about vacuum page cleanup/redirect
diff --git a/src/include/access/hio.h b/src/include/access/hio.h
index 2824f23..921cb37 100644
--- a/src/include/access/hio.h
+++ b/src/include/access/hio.h
@@ -35,8 +35,8 @@ typedef struct BulkInsertStateData
 }	BulkInsertStateData;
 
 
-extern void RelationPutHeapTuple(Relation relation, Buffer buffer,
-					 HeapTuple tuple, bool token);
+extern OffsetNumber RelationPutHeapTuple(Relation relation, Buffer buffer,
+					 HeapTuple tuple, bool token, OffsetNumber root_offnum);
 extern Buffer RelationGetBufferForTuple(Relation relation, Size len,
 						  Buffer otherBuffer, int options,
 						  BulkInsertState bistate,
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index a6c7e31..7552186 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,13 +260,19 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x1800 are available */
+/* bit 0x0800 is available */
+#define HEAP_LATEST_TUPLE		0x1000	/*
+										 * This is the last tuple in the chain
+										 * and ip_posid points to the root
+										 * line pointer
+										 */
 #define HEAP_KEYS_UPDATED		0x2000	/* tuple was updated and key cols
 										 * modified, or tuple deleted */
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xE000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+
 
 /*
  * HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins.  It is
@@ -504,6 +510,43 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+/*
+ * Mark this as the last tuple in the HOT chain. Before PG v10 we used to store
+ * the TID of the tuple itself in the t_ctid field to mark the end of the
+ * chain. But starting with PG v10, we use the special flag HEAP_LATEST_TUPLE
+ * to identify the last tuple and instead store the root line pointer of the
+ * HOT chain in the t_ctid field.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderSetHeapLatest(tup, offnum) \
+do { \
+	AssertMacro(OffsetNumberIsValid(offnum)); \
+	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE; \
+	ItemPointerSetOffsetNumber(&(tup)->t_ctid, (offnum)); \
+} while (0)
+
+#define HeapTupleHeaderClearHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 &= ~HEAP_LATEST_TUPLE \
+)
+
+/*
+ * Starting from PostgreSQL 10, the latest tuple in an update chain has
+ * HEAP_LATEST_TUPLE set; but tuples upgraded from earlier versions do not.
+ * For those, we determine whether a tuple is the latest by testing whether
+ * its t_ctid points to itself.
+ *
+ * Note: beware of multiple evaluations of "tup" and "tid" arguments.
+ */
+#define HeapTupleHeaderIsHeapLatest(tup, tid) \
+( \
+  (((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0) || \
+  ((ItemPointerGetBlockNumber(&(tup)->t_ctid) == ItemPointerGetBlockNumber(tid)) && \
+   (ItemPointerGetOffsetNumber(&(tup)->t_ctid) == ItemPointerGetOffsetNumber(tid))) \
+)
+
+
 #define HeapTupleHeaderSetHeapOnly(tup) \
 ( \
   (tup)->t_infomask2 |= HEAP_ONLY_TUPLE \
@@ -542,6 +585,56 @@ do { \
 
 
 /*
+ * Set the t_ctid chain and also clear the HEAP_LATEST_TUPLE flag since we
+ * now have a new tuple in the chain and this is no longer the last tuple of
+ * the chain.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderSetNextTid(tup, tid) \
+do { \
+		ItemPointerCopy((tid), &((tup)->t_ctid)); \
+		HeapTupleHeaderClearHeapLatest((tup)); \
+} while (0)
+
+/*
+ * Get the TID of the next tuple in the update chain. The caller must have
+ * checked that we are not already at the end of the chain, because in that
+ * case t_ctid may actually store the root line pointer of the HOT chain.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderGetNextTid(tup, next_ctid) \
+do { \
+	AssertMacro(!((tup)->t_infomask2 & HEAP_LATEST_TUPLE)); \
+	ItemPointerCopy(&(tup)->t_ctid, (next_ctid)); \
+} while (0)
+
+/*
+ * Get the root line pointer of the HOT chain. The caller should have confirmed
+ * that the root offset is cached before calling this macro.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderGetRootOffset(tup) \
+( \
+	AssertMacro(((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0), \
+	ItemPointerGetOffsetNumber(&(tup)->t_ctid) \
+)
+
+/*
+ * Return whether the tuple has a cached root offset.  We don't use
+ * HeapTupleHeaderIsHeapLatest because that one also considers the case of
+ * t_ctid pointing to itself, for tuples migrated from pre v10 clusters. Here
+ * we are only interested in the tuples which are marked with HEAP_LATEST_TUPLE
+ * flag.
+ */
+#define HeapTupleHeaderHasRootOffset(tup) \
+( \
+	((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0 \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
-- 
2.1.4

Attachment: 0003-clear-ip_posid-blkid-refs-v16.patch (text/plain; charset=us-ascii)
From 2621b6c0eea72452341994f1f7b5dce9ca17652a Mon Sep 17 00:00:00 2001
From: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Wed, 8 Mar 2017 13:48:58 -0300
Subject: [PATCH 3/6] clear ip_posid/blkid refs v16

---
 contrib/pageinspect/btreefuncs.c                |  4 ++--
 contrib/pgstattuple/pgstattuple.c               |  2 +-
 src/backend/access/gin/ginget.c                 | 29 +++++++++++++++----------
 src/backend/access/gin/ginpostinglist.c         | 14 +++++-------
 src/backend/replication/logical/reorderbuffer.c |  4 ++--
 src/backend/storage/page/itemptr.c              | 13 ++++++-----
 src/backend/utils/adt/tid.c                     | 10 ++++-----
 src/include/access/gin_private.h                |  4 ++--
 src/include/access/ginblock.h                   | 11 ++++++++--
 src/include/access/htup_details.h               |  2 +-
 src/include/access/nbtree.h                     |  5 ++---
 src/include/storage/itemptr.h                   | 12 ++++++++++
 12 files changed, 65 insertions(+), 45 deletions(-)

diff --git a/contrib/pageinspect/btreefuncs.c b/contrib/pageinspect/btreefuncs.c
index d50ec3a..2ec265e 100644
--- a/contrib/pageinspect/btreefuncs.c
+++ b/contrib/pageinspect/btreefuncs.c
@@ -363,8 +363,8 @@ bt_page_items(PG_FUNCTION_ARGS)
 		j = 0;
 		values[j++] = psprintf("%d", uargs->offset);
 		values[j++] = psprintf("(%u,%u)",
-							   BlockIdGetBlockNumber(&(itup->t_tid.ip_blkid)),
-							   itup->t_tid.ip_posid);
+							   ItemPointerGetBlockNumberNoCheck(&itup->t_tid),
+							   ItemPointerGetOffsetNumberNoCheck(&itup->t_tid));
 		values[j++] = psprintf("%d", (int) IndexTupleSize(itup));
 		values[j++] = psprintf("%c", IndexTupleHasNulls(itup) ? 't' : 'f');
 		values[j++] = psprintf("%c", IndexTupleHasVarwidths(itup) ? 't' : 'f');
diff --git a/contrib/pgstattuple/pgstattuple.c b/contrib/pgstattuple/pgstattuple.c
index 06a1992..e65040d 100644
--- a/contrib/pgstattuple/pgstattuple.c
+++ b/contrib/pgstattuple/pgstattuple.c
@@ -353,7 +353,7 @@ pgstat_heap(Relation rel, FunctionCallInfo fcinfo)
 		 * heap_getnext may find no tuples on a given page, so we cannot
 		 * simply examine the pages returned by the heap scan.
 		 */
-		tupblock = BlockIdGetBlockNumber(&tuple->t_self.ip_blkid);
+		tupblock = ItemPointerGetBlockNumber(&tuple->t_self);
 
 		while (block <= tupblock)
 		{
diff --git a/src/backend/access/gin/ginget.c b/src/backend/access/gin/ginget.c
index 87cd9ea..aa0b02f 100644
--- a/src/backend/access/gin/ginget.c
+++ b/src/backend/access/gin/ginget.c
@@ -626,8 +626,9 @@ entryLoadMoreItems(GinState *ginstate, GinScanEntry entry,
 		}
 		else
 		{
-			entry->btree.itemptr = advancePast;
-			entry->btree.itemptr.ip_posid++;
+			ItemPointerSet(&entry->btree.itemptr,
+					GinItemPointerGetBlockNumber(&advancePast),
+					OffsetNumberNext(GinItemPointerGetOffsetNumber(&advancePast)));
 		}
 		entry->btree.fullScan = false;
 		stack = ginFindLeafPage(&entry->btree, true, snapshot);
@@ -979,15 +980,17 @@ keyGetItem(GinState *ginstate, MemoryContext tempCtx, GinScanKey key,
 		if (GinItemPointerGetBlockNumber(&advancePast) <
 			GinItemPointerGetBlockNumber(&minItem))
 		{
-			advancePast.ip_blkid = minItem.ip_blkid;
-			advancePast.ip_posid = 0;
+			ItemPointerSet(&advancePast,
+					GinItemPointerGetBlockNumber(&minItem),
+					InvalidOffsetNumber);
 		}
 	}
 	else
 	{
-		Assert(minItem.ip_posid > 0);
-		advancePast = minItem;
-		advancePast.ip_posid--;
+		Assert(GinItemPointerGetOffsetNumber(&minItem) > 0);
+		ItemPointerSet(&advancePast,
+				GinItemPointerGetBlockNumber(&minItem),
+				OffsetNumberPrev(GinItemPointerGetOffsetNumber(&minItem)));
 	}
 
 	/*
@@ -1245,15 +1248,17 @@ scanGetItem(IndexScanDesc scan, ItemPointerData advancePast,
 				if (GinItemPointerGetBlockNumber(&advancePast) <
 					GinItemPointerGetBlockNumber(&key->curItem))
 				{
-					advancePast.ip_blkid = key->curItem.ip_blkid;
-					advancePast.ip_posid = 0;
+					ItemPointerSet(&advancePast,
+						GinItemPointerGetBlockNumber(&key->curItem),
+						InvalidOffsetNumber);
 				}
 			}
 			else
 			{
-				Assert(key->curItem.ip_posid > 0);
-				advancePast = key->curItem;
-				advancePast.ip_posid--;
+				Assert(GinItemPointerGetOffsetNumber(&key->curItem) > 0);
+				ItemPointerSet(&advancePast,
+						GinItemPointerGetBlockNumber(&key->curItem),
+						OffsetNumberPrev(GinItemPointerGetOffsetNumber(&key->curItem)));
 			}
 
 			/*
diff --git a/src/backend/access/gin/ginpostinglist.c b/src/backend/access/gin/ginpostinglist.c
index 598069d..8d2d31a 100644
--- a/src/backend/access/gin/ginpostinglist.c
+++ b/src/backend/access/gin/ginpostinglist.c
@@ -79,13 +79,11 @@ itemptr_to_uint64(const ItemPointer iptr)
 	uint64		val;
 
 	Assert(ItemPointerIsValid(iptr));
-	Assert(iptr->ip_posid < (1 << MaxHeapTuplesPerPageBits));
+	Assert(GinItemPointerGetOffsetNumber(iptr) < (1 << MaxHeapTuplesPerPageBits));
 
-	val = iptr->ip_blkid.bi_hi;
-	val <<= 16;
-	val |= iptr->ip_blkid.bi_lo;
+	val = GinItemPointerGetBlockNumber(iptr);
 	val <<= MaxHeapTuplesPerPageBits;
-	val |= iptr->ip_posid;
+	val |= GinItemPointerGetOffsetNumber(iptr);
 
 	return val;
 }
@@ -93,11 +91,9 @@ itemptr_to_uint64(const ItemPointer iptr)
 static inline void
 uint64_to_itemptr(uint64 val, ItemPointer iptr)
 {
-	iptr->ip_posid = val & ((1 << MaxHeapTuplesPerPageBits) - 1);
+	GinItemPointerSetOffsetNumber(iptr, val & ((1 << MaxHeapTuplesPerPageBits) - 1));
 	val = val >> MaxHeapTuplesPerPageBits;
-	iptr->ip_blkid.bi_lo = val & 0xFFFF;
-	val = val >> 16;
-	iptr->ip_blkid.bi_hi = val & 0xFFFF;
+	GinItemPointerSetBlockNumber(iptr, val);
 
 	Assert(ItemPointerIsValid(iptr));
 }
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 8aac670..b6f8f5a 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -3006,8 +3006,8 @@ DisplayMapping(HTAB *tuplecid_data)
 			 ent->key.relnode.dbNode,
 			 ent->key.relnode.spcNode,
 			 ent->key.relnode.relNode,
-			 BlockIdGetBlockNumber(&ent->key.tid.ip_blkid),
-			 ent->key.tid.ip_posid,
+			 ItemPointerGetBlockNumber(&ent->key.tid),
+			 ItemPointerGetOffsetNumber(&ent->key.tid),
 			 ent->cmin,
 			 ent->cmax
 			);
diff --git a/src/backend/storage/page/itemptr.c b/src/backend/storage/page/itemptr.c
index 703cbb9..28ac885 100644
--- a/src/backend/storage/page/itemptr.c
+++ b/src/backend/storage/page/itemptr.c
@@ -54,18 +54,21 @@ ItemPointerCompare(ItemPointer arg1, ItemPointer arg2)
 	/*
 	 * Don't use ItemPointerGetBlockNumber or ItemPointerGetOffsetNumber here,
 	 * because they assert ip_posid != 0 which might not be true for a
-	 * user-supplied TID.
+	 * user-supplied TID. Instead we use ItemPointerGetBlockNumberNoCheck and
+	 * ItemPointerGetOffsetNumberNoCheck, which perform no validation.
 	 */
-	BlockNumber b1 = BlockIdGetBlockNumber(&(arg1->ip_blkid));
-	BlockNumber b2 = BlockIdGetBlockNumber(&(arg2->ip_blkid));
+	BlockNumber b1 = ItemPointerGetBlockNumberNoCheck(arg1);
+	BlockNumber b2 = ItemPointerGetBlockNumberNoCheck(arg2);
 
 	if (b1 < b2)
 		return -1;
 	else if (b1 > b2)
 		return 1;
-	else if (arg1->ip_posid < arg2->ip_posid)
+	else if (ItemPointerGetOffsetNumberNoCheck(arg1) <
+			ItemPointerGetOffsetNumberNoCheck(arg2))
 		return -1;
-	else if (arg1->ip_posid > arg2->ip_posid)
+	else if (ItemPointerGetOffsetNumberNoCheck(arg1) >
+			ItemPointerGetOffsetNumberNoCheck(arg2))
 		return 1;
 	else
 		return 0;
diff --git a/src/backend/utils/adt/tid.c b/src/backend/utils/adt/tid.c
index a3b372f..735c006 100644
--- a/src/backend/utils/adt/tid.c
+++ b/src/backend/utils/adt/tid.c
@@ -109,8 +109,8 @@ tidout(PG_FUNCTION_ARGS)
 	OffsetNumber offsetNumber;
 	char		buf[32];
 
-	blockNumber = BlockIdGetBlockNumber(&(itemPtr->ip_blkid));
-	offsetNumber = itemPtr->ip_posid;
+	blockNumber = ItemPointerGetBlockNumberNoCheck(itemPtr);
+	offsetNumber = ItemPointerGetOffsetNumberNoCheck(itemPtr);
 
 	/* Perhaps someday we should output this as a record. */
 	snprintf(buf, sizeof(buf), "(%u,%u)", blockNumber, offsetNumber);
@@ -146,14 +146,12 @@ Datum
 tidsend(PG_FUNCTION_ARGS)
 {
 	ItemPointer itemPtr = PG_GETARG_ITEMPOINTER(0);
-	BlockId		blockId;
 	BlockNumber blockNumber;
 	OffsetNumber offsetNumber;
 	StringInfoData buf;
 
-	blockId = &(itemPtr->ip_blkid);
-	blockNumber = BlockIdGetBlockNumber(blockId);
-	offsetNumber = itemPtr->ip_posid;
+	blockNumber = ItemPointerGetBlockNumberNoCheck(itemPtr);
+	offsetNumber = ItemPointerGetOffsetNumberNoCheck(itemPtr);
 
 	pq_begintypsend(&buf);
 	pq_sendint(&buf, blockNumber, sizeof(blockNumber));
diff --git a/src/include/access/gin_private.h b/src/include/access/gin_private.h
index 34e7339..2fd4479 100644
--- a/src/include/access/gin_private.h
+++ b/src/include/access/gin_private.h
@@ -460,8 +460,8 @@ extern ItemPointer ginMergeItemPointers(ItemPointerData *a, uint32 na,
 static inline int
 ginCompareItemPointers(ItemPointer a, ItemPointer b)
 {
-	uint64		ia = (uint64) a->ip_blkid.bi_hi << 32 | (uint64) a->ip_blkid.bi_lo << 16 | a->ip_posid;
-	uint64		ib = (uint64) b->ip_blkid.bi_hi << 32 | (uint64) b->ip_blkid.bi_lo << 16 | b->ip_posid;
+	uint64		ia = (uint64) GinItemPointerGetBlockNumber(a) << 32 | GinItemPointerGetOffsetNumber(a);
+	uint64		ib = (uint64) GinItemPointerGetBlockNumber(b) << 32 | GinItemPointerGetOffsetNumber(b);
 
 	if (ia == ib)
 		return 0;
diff --git a/src/include/access/ginblock.h b/src/include/access/ginblock.h
index a3fb056..438912c 100644
--- a/src/include/access/ginblock.h
+++ b/src/include/access/ginblock.h
@@ -132,10 +132,17 @@ typedef struct GinMetaPageData
  * to avoid Asserts, since sometimes the ip_posid isn't "valid"
  */
 #define GinItemPointerGetBlockNumber(pointer) \
-	BlockIdGetBlockNumber(&(pointer)->ip_blkid)
+	(ItemPointerGetBlockNumberNoCheck(pointer))
 
 #define GinItemPointerGetOffsetNumber(pointer) \
-	((pointer)->ip_posid)
+	(ItemPointerGetOffsetNumberNoCheck(pointer))
+
+#define GinItemPointerSetBlockNumber(pointer, blkno) \
+	(ItemPointerSetBlockNumber((pointer), (blkno)))
+
+#define GinItemPointerSetOffsetNumber(pointer, offnum) \
+	(ItemPointerSetOffsetNumber((pointer), (offnum)))
+
 
 /*
  * Special-case item pointer values needed by the GIN search logic.
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 7552186..24433c7 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -428,7 +428,7 @@ do { \
 
 #define HeapTupleHeaderIsSpeculative(tup) \
 ( \
-	(tup)->t_ctid.ip_posid == SpecTokenOffsetNumber \
+	(ItemPointerGetOffsetNumberNoCheck(&(tup)->t_ctid) == SpecTokenOffsetNumber) \
 )
 
 #define HeapTupleHeaderGetSpeculativeToken(tup) \
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index 6289ffa..f9304db 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -151,9 +151,8 @@ typedef struct BTMetaPageData
  *	within a level). - vadim 04/09/97
  */
 #define BTTidSame(i1, i2)	\
-	( (i1).ip_blkid.bi_hi == (i2).ip_blkid.bi_hi && \
-	  (i1).ip_blkid.bi_lo == (i2).ip_blkid.bi_lo && \
-	  (i1).ip_posid == (i2).ip_posid )
+	((ItemPointerGetBlockNumber(&(i1)) == ItemPointerGetBlockNumber(&(i2))) && \
+	 (ItemPointerGetOffsetNumber(&(i1)) == ItemPointerGetOffsetNumber(&(i2))))
 #define BTEntrySame(i1, i2) \
 	BTTidSame((i1)->t_tid, (i2)->t_tid)
 
diff --git a/src/include/storage/itemptr.h b/src/include/storage/itemptr.h
index 576aaa8..60d0070 100644
--- a/src/include/storage/itemptr.h
+++ b/src/include/storage/itemptr.h
@@ -69,6 +69,12 @@ typedef ItemPointerData *ItemPointer;
 	BlockIdGetBlockNumber(&(pointer)->ip_blkid) \
 )
 
+/* Same as ItemPointerGetBlockNumber but without any assert-checks */
+#define ItemPointerGetBlockNumberNoCheck(pointer) \
+( \
+	BlockIdGetBlockNumber(&(pointer)->ip_blkid) \
+)
+
 /*
  * ItemPointerGetOffsetNumber
  *		Returns the offset number of a disk item pointer.
@@ -79,6 +85,12 @@ typedef ItemPointerData *ItemPointer;
 	(pointer)->ip_posid \
 )
 
+/* Same as ItemPointerGetOffsetNumber but without any assert-checks */
+#define ItemPointerGetOffsetNumberNoCheck(pointer) \
+( \
+	(pointer)->ip_posid \
+)
+
 /*
  * ItemPointerSet
  *		Sets a disk item pointer to the specified block and offset.
-- 
2.1.4

Attachment: 0004-freeup-3bits-ip_posid-v16.patch (text/plain; charset=us-ascii)
From b9bd5c82336decc3124b10a6d34d8222d0dd487f Mon Sep 17 00:00:00 2001
From: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Wed, 8 Mar 2017 13:49:22 -0300
Subject: [PATCH 4/6] freeup 3bits ip_posid v16

---
 src/backend/access/gin/ginget.c         |  2 +-
 src/backend/access/gin/ginpostinglist.c |  2 +-
 src/include/access/ginblock.h           | 10 +++++-----
 src/include/access/gist_private.h       |  4 ++--
 src/include/access/htup_details.h       |  2 +-
 src/include/storage/itemptr.h           | 32 ++++++++++++++++++++++++++++----
 src/include/storage/off.h               |  9 ++++++++-
 7 files changed, 46 insertions(+), 15 deletions(-)

diff --git a/src/backend/access/gin/ginget.c b/src/backend/access/gin/ginget.c
index aa0b02f..1e1c978 100644
--- a/src/backend/access/gin/ginget.c
+++ b/src/backend/access/gin/ginget.c
@@ -928,7 +928,7 @@ keyGetItem(GinState *ginstate, MemoryContext tempCtx, GinScanKey key,
 	 * Find the minimum item > advancePast among the active entry streams.
 	 *
 	 * Note: a lossy-page entry is encoded by a ItemPointer with max value for
-	 * offset (0xffff), so that it will sort after any exact entries for the
+	 * offset (0x1fff), so that it will sort after any exact entries for the
 	 * same page.  So we'll prefer to return exact pointers not lossy
 	 * pointers, which is good.
 	 */
diff --git a/src/backend/access/gin/ginpostinglist.c b/src/backend/access/gin/ginpostinglist.c
index 8d2d31a..b22b9f5 100644
--- a/src/backend/access/gin/ginpostinglist.c
+++ b/src/backend/access/gin/ginpostinglist.c
@@ -253,7 +253,7 @@ ginCompressPostingList(const ItemPointer ipd, int nipd, int maxsize,
 
 		Assert(ndecoded == totalpacked);
 		for (i = 0; i < ndecoded; i++)
-			Assert(memcmp(&tmp[i], &ipd[i], sizeof(ItemPointerData)) == 0);
+			Assert(ItemPointerEquals(&tmp[i], &ipd[i]));
 		pfree(tmp);
 	}
 #endif
diff --git a/src/include/access/ginblock.h b/src/include/access/ginblock.h
index 438912c..3f7a3f0 100644
--- a/src/include/access/ginblock.h
+++ b/src/include/access/ginblock.h
@@ -160,14 +160,14 @@ typedef struct GinMetaPageData
 	(GinItemPointerGetOffsetNumber(p) == (OffsetNumber)0 && \
 	 GinItemPointerGetBlockNumber(p) == (BlockNumber)0)
 #define ItemPointerSetMax(p)  \
-	ItemPointerSet((p), InvalidBlockNumber, (OffsetNumber)0xffff)
+	ItemPointerSet((p), InvalidBlockNumber, (OffsetNumber)OffsetNumberMask)
 #define ItemPointerIsMax(p)  \
-	(GinItemPointerGetOffsetNumber(p) == (OffsetNumber)0xffff && \
+	(GinItemPointerGetOffsetNumber(p) == (OffsetNumber)OffsetNumberMask && \
 	 GinItemPointerGetBlockNumber(p) == InvalidBlockNumber)
 #define ItemPointerSetLossyPage(p, b)  \
-	ItemPointerSet((p), (b), (OffsetNumber)0xffff)
+	ItemPointerSet((p), (b), (OffsetNumber)OffsetNumberMask)
 #define ItemPointerIsLossyPage(p)  \
-	(GinItemPointerGetOffsetNumber(p) == (OffsetNumber)0xffff && \
+	(GinItemPointerGetOffsetNumber(p) == (OffsetNumber)OffsetNumberMask && \
 	 GinItemPointerGetBlockNumber(p) != InvalidBlockNumber)
 
 /*
@@ -218,7 +218,7 @@ typedef signed char GinNullCategory;
  */
 #define GinGetNPosting(itup)	GinItemPointerGetOffsetNumber(&(itup)->t_tid)
 #define GinSetNPosting(itup,n)	ItemPointerSetOffsetNumber(&(itup)->t_tid,n)
-#define GIN_TREE_POSTING		((OffsetNumber)0xffff)
+#define GIN_TREE_POSTING		((OffsetNumber)OffsetNumberMask)
 #define GinIsPostingTree(itup)	(GinGetNPosting(itup) == GIN_TREE_POSTING)
 #define GinSetPostingTree(itup, blkno)	( GinSetNPosting((itup),GIN_TREE_POSTING), ItemPointerSetBlockNumber(&(itup)->t_tid, blkno) )
 #define GinGetPostingTree(itup) GinItemPointerGetBlockNumber(&(itup)->t_tid)
diff --git a/src/include/access/gist_private.h b/src/include/access/gist_private.h
index 1ad4ed6..0ad11f1 100644
--- a/src/include/access/gist_private.h
+++ b/src/include/access/gist_private.h
@@ -269,8 +269,8 @@ typedef struct
  * invalid tuples in an index, so throwing an error is as far as we go with
  * supporting that.
  */
-#define TUPLE_IS_VALID		0xffff
-#define TUPLE_IS_INVALID	0xfffe
+#define TUPLE_IS_VALID		OffsetNumberMask
+#define TUPLE_IS_INVALID	OffsetNumberPrev(OffsetNumberMask)
 
 #define  GistTupleIsInvalid(itup)	( ItemPointerGetOffsetNumber( &((itup)->t_tid) ) == TUPLE_IS_INVALID )
 #define  GistTupleSetValid(itup)	ItemPointerSetOffsetNumber( &((itup)->t_tid), TUPLE_IS_VALID )
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 24433c7..4d614b7 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -288,7 +288,7 @@ struct HeapTupleHeaderData
  * than MaxOffsetNumber, so that it can be distinguished from a valid
  * offset number in a regular item pointer.
  */
-#define SpecTokenOffsetNumber		0xfffe
+#define SpecTokenOffsetNumber		OffsetNumberPrev(OffsetNumberMask)
 
 /*
  * HeapTupleHeader accessor macros
diff --git a/src/include/storage/itemptr.h b/src/include/storage/itemptr.h
index 60d0070..3144bdd 100644
--- a/src/include/storage/itemptr.h
+++ b/src/include/storage/itemptr.h
@@ -57,7 +57,7 @@ typedef ItemPointerData *ItemPointer;
  *		True iff the disk item pointer is not NULL.
  */
 #define ItemPointerIsValid(pointer) \
-	((bool) (PointerIsValid(pointer) && ((pointer)->ip_posid != 0)))
+	((bool) (PointerIsValid(pointer) && (((pointer)->ip_posid & OffsetNumberMask) != 0)))
 
 /*
  * ItemPointerGetBlockNumber
@@ -82,13 +82,37 @@ typedef ItemPointerData *ItemPointer;
 #define ItemPointerGetOffsetNumber(pointer) \
 ( \
 	AssertMacro(ItemPointerIsValid(pointer)), \
-	(pointer)->ip_posid \
+	((pointer)->ip_posid & OffsetNumberMask) \
 )
 
 /* Same as ItemPointerGetOffsetNumber but without any assert-checks */
 #define ItemPointerGetOffsetNumberNoCheck(pointer) \
 ( \
-	(pointer)->ip_posid \
+	((pointer)->ip_posid & OffsetNumberMask) \
+)
+
+/*
+ * Get the flags stored in high order bits in the OffsetNumber.
+ */
+#define ItemPointerGetFlags(pointer) \
+( \
+	((pointer)->ip_posid & ~OffsetNumberMask) >> OffsetNumberBits \
+)
+
+/*
+ * Set the flag bits. The flags are left-shifted first since flag values are
+ * defined starting at 0x01.
+ */
+#define ItemPointerSetFlags(pointer, flags) \
+( \
+	((pointer)->ip_posid |= ((flags) << OffsetNumberBits)) \
+)
+
+/*
+ * Clear all flags.
+ */
+#define ItemPointerClearFlags(pointer) \
+( \
+	((pointer)->ip_posid &= OffsetNumberMask) \
 )
 
 /*
@@ -99,7 +123,7 @@ typedef ItemPointerData *ItemPointer;
 ( \
 	AssertMacro(PointerIsValid(pointer)), \
 	BlockIdSet(&((pointer)->ip_blkid), blockNumber), \
-	(pointer)->ip_posid = offNum \
+	(pointer)->ip_posid = (offNum) \
 )
 
 /*
diff --git a/src/include/storage/off.h b/src/include/storage/off.h
index fe8638f..fe1834c 100644
--- a/src/include/storage/off.h
+++ b/src/include/storage/off.h
@@ -26,8 +26,15 @@ typedef uint16 OffsetNumber;
 #define InvalidOffsetNumber		((OffsetNumber) 0)
 #define FirstOffsetNumber		((OffsetNumber) 1)
 #define MaxOffsetNumber			((OffsetNumber) (BLCKSZ / sizeof(ItemIdData)))
-#define OffsetNumberMask		(0xffff)		/* valid uint16 bits */
 
+/*
+ * Currently we support blocks of at most 32kB, and each ItemId takes 6 bytes.
+ * That limits the number of line pointers to 32kB/6 = 5461, so 13 bits are
+ * enough to represent any line pointer. Hence we can reuse the high-order
+ * bits of OffsetNumber for other purposes.
+ */
+#define OffsetNumberMask		(0x1fff)		/* valid offset number bits */
+#define OffsetNumberBits		13	/* number of valid bits in OffsetNumber */
 /* ----------------
  *		support macros
  * ----------------
-- 
2.1.4

Attachment: 0005-warm-updates-v16.patch (text/plain; charset=us-ascii)
From ec58bc56548045d34bd92d2042432f7d5eaee5d4 Mon Sep 17 00:00:00 2001
From: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Wed, 8 Mar 2017 13:49:43 -0300
Subject: [PATCH 5/6] warm updates v16

---
 contrib/bloom/blutils.c                   |   1 +
 src/backend/access/brin/brin.c            |   1 +
 src/backend/access/gist/gist.c            |   1 +
 src/backend/access/hash/hash.c            |   5 +-
 src/backend/access/hash/hashsearch.c      |   5 +
 src/backend/access/hash/hashutil.c        | 110 +++++++++
 src/backend/access/heap/README.WARM       | 306 +++++++++++++++++++++++++
 src/backend/access/heap/heapam.c          | 256 +++++++++++++++++++--
 src/backend/access/heap/pruneheap.c       |   7 +
 src/backend/access/index/indexam.c        |  89 ++++++--
 src/backend/access/nbtree/nbtinsert.c     | 229 +++++++++++--------
 src/backend/access/nbtree/nbtree.c        |   5 +-
 src/backend/access/nbtree/nbtutils.c      | 104 +++++++++
 src/backend/access/spgist/spgutils.c      |   1 +
 src/backend/catalog/index.c               |  15 ++
 src/backend/catalog/indexing.c            |  57 ++++-
 src/backend/catalog/system_views.sql      |   4 +-
 src/backend/commands/constraint.c         |   4 +-
 src/backend/commands/copy.c               |   3 +
 src/backend/commands/indexcmds.c          |  17 +-
 src/backend/commands/vacuumlazy.c         |  25 ++
 src/backend/executor/execIndexing.c       |  18 +-
 src/backend/executor/execReplication.c    |  25 +-
 src/backend/executor/nodeBitmapHeapscan.c |  21 +-
 src/backend/executor/nodeIndexscan.c      |   6 +-
 src/backend/executor/nodeModifyTable.c    |  27 ++-
 src/backend/postmaster/pgstat.c           |   7 +-
 src/backend/utils/adt/pgstatfuncs.c       |  31 +++
 src/backend/utils/cache/relcache.c        |  61 ++++-
 src/include/access/amapi.h                |   8 +
 src/include/access/hash.h                 |   4 +
 src/include/access/heapam.h               |  12 +-
 src/include/access/heapam_xlog.h          |   1 +
 src/include/access/htup_details.h         |  29 ++-
 src/include/access/nbtree.h               |   2 +
 src/include/access/relscan.h              |   3 +-
 src/include/catalog/pg_proc.h             |   4 +
 src/include/executor/executor.h           |   1 +
 src/include/executor/nodeIndexscan.h      |   1 -
 src/include/nodes/execnodes.h             |   1 +
 src/include/pgstat.h                      |   4 +-
 src/include/utils/rel.h                   |   5 +
 src/include/utils/relcache.h              |   4 +-
 src/test/regress/expected/rules.out       |  12 +-
 src/test/regress/expected/warm.out        | 367 ++++++++++++++++++++++++++++++
 src/test/regress/parallel_schedule        |   2 +
 src/test/regress/sql/warm.sql             | 171 ++++++++++++++
 47 files changed, 1905 insertions(+), 167 deletions(-)
 create mode 100644 src/backend/access/heap/README.WARM
 create mode 100644 src/test/regress/expected/warm.out
 create mode 100644 src/test/regress/sql/warm.sql

diff --git a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c
index f2eda67..b356e2b 100644
--- a/contrib/bloom/blutils.c
+++ b/contrib/bloom/blutils.c
@@ -142,6 +142,7 @@ blhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index b22563b..b4a1465 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -116,6 +116,7 @@ brinhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c
index 6593771..843389b 100644
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -94,6 +94,7 @@ gisthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c
index 1f8a7f6..9b20ae6 100644
--- a/src/backend/access/hash/hash.c
+++ b/src/backend/access/hash/hash.c
@@ -90,6 +90,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = hashrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -271,6 +272,8 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 	OffsetNumber offnum;
 	ItemPointer current;
 	bool		res;
+	IndexTuple	itup;
+
 
 	/* Hash indexes are always lossy since we store only the hash code */
 	scan->xs_recheck = true;
@@ -308,8 +311,6 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 			 offnum <= maxoffnum;
 			 offnum = OffsetNumberNext(offnum))
 		{
-			IndexTuple	itup;
-
 			itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 			if (ItemPointerEquals(&(so->hashso_heappos), &(itup->t_tid)))
 				break;
diff --git a/src/backend/access/hash/hashsearch.c b/src/backend/access/hash/hashsearch.c
index 9e5d7e4..60e941d 100644
--- a/src/backend/access/hash/hashsearch.c
+++ b/src/backend/access/hash/hashsearch.c
@@ -59,6 +59,8 @@ _hash_next(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
 	return true;
 }
 
@@ -363,6 +365,9 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
+
 	return true;
 }
 
diff --git a/src/backend/access/hash/hashutil.c b/src/backend/access/hash/hashutil.c
index c705531..dcba734 100644
--- a/src/backend/access/hash/hashutil.c
+++ b/src/backend/access/hash/hashutil.c
@@ -17,8 +17,12 @@
 #include "access/hash.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
+#include "nodes/execnodes.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 #define CALC_NEW_BUCKET(old_bucket, lowmask) \
 			old_bucket | (lowmask + 1)
@@ -446,3 +450,109 @@ _hash_get_newbucket_from_oldbucket(Relation rel, Bucket old_bucket,
 
 	return new_bucket;
 }
+
+/*
+ * Recheck if the heap tuple satisfies the key stored in the index tuple
+ */
+bool
+hashrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	IndexInfo  *indexInfo;
+	EState	   *estate;
+	ExprContext *econtext;
+	TupleTableSlot *slot;
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	Datum		values2[INDEX_MAX_KEYS];
+	bool		isnull2[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	indexInfo = BuildIndexInfo(indexRel);
+
+	/*
+	 * The heap tuple must be put into a slot for FormIndexDatum.
+	 */
+	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
+
+	ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
+
+	/*
+	 * Typically the index won't have expressions, but if it does we need an
+	 * EState to evaluate them.  We need it for exclusion constraints too,
+	 * even if they are just on simple columns.
+	 */
+	if (indexInfo->ii_Expressions != NIL ||
+			indexInfo->ii_ExclusionOps != NULL)
+	{
+		estate = CreateExecutorState();
+		econtext = GetPerTupleExprContext(estate);
+		econtext->ecxt_scantuple = slot;
+	}
+	else
+		estate = NULL;
+
+	/*
+	 * Form the index values and isnull flags for the index entry that we need
+	 * to check.
+	 *
+	 * Note: if the index uses functions that are not as immutable as they are
+	 * supposed to be, this could produce an index tuple different from the
+	 * original.  The index AM can catch such errors by verifying that it
+	 * finds a matching index entry with the tuple's TID.  For exclusion
+	 * constraints we check this in check_exclusion_constraint().
+	 */
+	FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
+	/*
+	 * HASH indexes compute a hash value of the key and store that in the
+	 * index. So we must first compute the hash of the value fetched from the
+	 * heap and then do a comparison
+	 */
+	_hash_convert_tuple(indexRel, values, isnull, values2, isnull2);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL then they are equal
+		 */
+		if (isnull2[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If either is NULL then they are not equal
+		 */
+		if (isnull2[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now do a raw memory comparison
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values2[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	if (estate != NULL)
+		FreeExecutorState(estate);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	return equal;
+}
diff --git a/src/backend/access/heap/README.WARM b/src/backend/access/heap/README.WARM
new file mode 100644
index 0000000..7b9a712
--- /dev/null
+++ b/src/backend/access/heap/README.WARM
@@ -0,0 +1,306 @@
+src/backend/access/heap/README.WARM
+
+Write Amplification Reduction Method (WARM)
+===========================================
+
+The Heap Only Tuple (HOT) feature eliminates redundant index
+entries and allows re-use of the dead space occupied by previously
+updated or deleted tuples (see src/backend/access/heap/README.HOT).
+
+One of the necessary conditions for satisfying HOT update is that the
+update must not change a column used in any of the indexes on the table.
+The condition is sometimes hard to meet, especially for complex
+workloads with several indexes on large yet frequently updated tables.
+Worse, sometimes only one or two index columns may be updated, but the
+regular non-HOT update will still insert a new index entry in every
+index on the table, irrespective of whether the key pertaining to the
+index changed or not.
+
+WARM is a technique devised to address these problems.
+
+
+Update Chains With Multiple Index Entries Pointing to the Root
+--------------------------------------------------------------
+
+When a non-HOT update is caused by an index key change, a new index
+entry must be inserted for the changed index. But if the index key
+hasn't changed for the other indexes, we don't really need to insert a
+new entry. Even though the existing index entry points to the old
+tuple, the new tuple is reachable via the t_ctid chain. To keep things
+simple, a WARM update requires that the heap block have enough space
+to store the new version of the tuple, the same requirement as for HOT
+updates.
+
+In WARM, we ensure that every index entry always points to the root of
+the WARM chain. In fact, a WARM chain looks exactly like a HOT chain
+except for the fact that there could be multiple index entries pointing
+to the root of the chain. So when a WARM update inserts a new entry
+into an index for the updated tuple, the new entry is made to point to
+the root of the WARM chain.
+
+For example, take a table with two columns and an index on each of
+them. When a tuple is first inserted into the table, exactly one entry
+in each index points to the tuple.
+
+	lp [1]
+	[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's entry (aaaa) also points to 1
+
+Now if the tuple's second column is updated and there is room on the
+page, we perform a WARM update. In a WARM update, Index1 does not get
+any new entry and Index2's new entry still points to the root of the
+chain.
+
+	lp [1]  [2]
+	[1111, aaaa]->[1111, bbbb]
+
+	Index1's entry (1111) points to 1
+	Index2's old entry (aaaa) points to 1
+	Index2's new entry (bbbb) also points to 1
+
+"An update chain which has more than one index entry pointing to its
+root line pointer is called a WARM chain, and the action that creates
+a WARM chain is called a WARM update."
+
+Since all indexes always point to the root of the WARM chain, even when
+there is more than one index entry, WARM chains can be pruned and
+dead tuples can be removed without any need for corresponding index
+cleanup.
+
+While this solves the problem of pruning dead tuples from a HOT/WARM
+chain, it also opens up a new technical challenge because now we have a
+situation where a heap tuple is reachable from multiple index entries,
+each having a different index key. While MVCC still ensures that only
+valid tuples are returned, a tuple with a non-matching index key may be
+returned via a stale index entry. In the above example, tuple
+[1111, bbbb] is reachable from both key (aaaa) and key (bbbb). For
+this reason, tuples returned from a WARM chain must always be rechecked
+for an index key match.
+
+Recheck Index Key Against Heap Tuple
+------------------------------------
+
+Since every Index AM has its own notion of index tuples, each Index AM
+must implement its own method to recheck heap tuples. For example, a
+hash index stores the hash value of the column, so the recheck routine
+for the hash AM must first compute the hash of the heap attribute and
+then compare it against the value stored in the index tuple.
+
+The patch currently implements recheck routines for hash and btree
+indexes. If a table has an index whose AM doesn't provide a recheck
+routine, WARM updates are disabled on that table.
+
+Problem With Duplicate (key, ctid) Index Entries
+------------------------------------------------
+
+The index-key recheck logic works only as long as there are no
+duplicate index keys pointing to the same WARM chain. Otherwise, the
+same valid tuple would be reachable via multiple index keys, each
+satisfying the index key check. In the above example, if the tuple
+[1111, bbbb] is again updated to [1111, aaaa] and we insert a new index
+entry (aaaa) pointing to the root line pointer, we end up with the
+following structure:
+
+	lp [1]  [2]  [3]
+	[1111, aaaa]->[1111, bbbb]->[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's oldest entry (aaaa) points to 1
+	Index2's old entry (bbbb) also points to 1
+	Index2's new entry (aaaa) also points to 1
+
+We must solve this problem to ensure that the same tuple is not
+reachable via multiple index pointers. There are a couple of ways to
+address this issue:
+
+1. Do not allow WARM update to a tuple from a WARM chain. This
+guarantees that there can never be duplicate index entries to the same
+root line pointer because we must have checked for old and new index
+keys while doing the first WARM update.
+
+2. Do not allow duplicate (key, ctid) index pointers. In the above
+example, since (aaaa, 1) already exists in the index, we must not insert
+a duplicate index entry.
+
+The patch currently implements option 1, i.e. it does not WARM-update a
+tuple from a WARM chain. HOT updates are fine because they do not add a
+new index entry.
+
+Even with this restriction, WARM is a significant improvement because
+the number of regular (non-HOT, non-WARM) updates is cut in half.
+
+Expression and Partial Indexes
+------------------------------
+
+Expressions may evaluate to the same value even if the underlying column
+values have changed. A simple example is an index on "lower(col)", which
+returns the same value if the new heap value differs only in case. So
+for expression indexes we cannot rely solely on the heap column check
+to decide whether or not to insert a new index entry. Similarly, for
+partial indexes, the predicate expression must be evaluated to decide
+whether or not a new index entry is needed when columns referred to in
+the predicate change.
+
+(Neither is currently implemented; we simply disallow a WARM update if
+a column used in an index expression or in an index predicate has
+changed.)
+
+
+Efficiently Finding the Root Line Pointer
+-----------------------------------------
+
+During a WARM update, we must be able to find the root line pointer of
+the tuple being updated. The t_ctid field in the heap tuple header is
+normally used to find the next tuple in the update chain. But the tuple
+we are updating must be the last tuple in the update chain, and in that
+case t_ctid simply points to the tuple itself. So we can use t_ctid to
+store additional information in the last tuple of the update chain, as
+long as the fact that the tuple is the last one in the chain is
+recorded elsewhere.
+
+We now utilize another bit from t_infomask2 to explicitly identify that
+this is the last tuple in the update chain.
+
+HEAP_LATEST_TUPLE - When this bit is set, the tuple is the last tuple in
+the update chain. The OffsetNumber part of t_ctid points to the root
+line pointer of the chain when HEAP_LATEST_TUPLE flag is set.
+
+If the UPDATE operation aborts, the last tuple in the update chain
+becomes dead, and the root line pointer information stored in the tuple
+which remains the last valid tuple in the chain is also lost. In such
+rare cases, the root line pointer must be found the hard way, by
+scanning the entire heap page.
+
+Tracking WARM Chains
+--------------------
+
+The old tuple and every subsequent tuple in the chain are marked with a
+special HEAP_WARM_TUPLE flag. We use the last remaining bit in
+t_infomask2 to store this information.
+
+When a tuple is returned from a WARM chain, the caller must do
+additional checks to ensure that the tuple matches the index key. Even
+if the tuple precedes the WARM update in the chain, it must still be
+rechecked for an index key match (the case where the old tuple is
+returned via the new index key). So we must follow the update chain to
+the end every time to check whether this is a WARM chain.
+
+When the old updated tuple is retired and the root line pointer is
+converted into a redirected line pointer, we can copy the information
+about the WARM chain to the redirected line pointer by storing a
+special value in the lp_len field of the line pointer. This handles the
+most common case, where a WARM chain is replaced by a redirect line
+pointer and a single remaining tuple.
+
+Converting WARM chains back to HOT chains (VACUUM ?)
+----------------------------------------------------
+
+The current implementation of WARM allows only one WARM update per
+chain. This simplifies the design and addresses certain issues around
+duplicate scans. But it also implies that the benefit of WARM will be
+no more than 50%, which is still significant. If we could convert WARM
+chains back to normal status, we could do far more WARM updates.
+
+A distinguishing property of a WARM chain is that at least one index
+has more than one live index entry pointing to the root of the chain.
+In other words, if we can remove the duplicate entry from every index,
+or conclusively prove that there are no duplicate index entries for the
+root line pointer, the chain can again be marked as HOT.
+
+Here is one idea:
+
+A WARM chain has two parts, separated by the tuple that caused the WARM
+update. All tuples in each part have matching index keys, but certain
+index keys may not match between the two parts. Let's say we mark heap
+tuples in each part with a special Red-Blue flag. The same flag is
+replicated in the index tuples. For example, when new rows are inserted
+into a table, they are marked with the Blue flag and the index entries
+associated with those rows are also marked Blue. When a row is WARM
+updated, the new version is marked with the Red flag and the new index
+entry created by the update is also marked Red.
+
+
+Heap chain: [1] [2] [3] [4]
+			[aaaa, 1111]B -> [aaaa, 1111]B -> [bbbb, 1111]R -> [bbbb, 1111]R
+
+Index1: 	(aaaa)B points to 1 (satisfies only tuples marked with B)
+			(bbbb)R points to 1 (satisfies only tuples marked with R)
+
+Index2:		(1111)B points to 1 (satisfies both B and R tuples)
+
+
+For indexes with both Red and Blue pointers, a Blue heap tuple will be
+reachable from a Blue pointer and a Red tuple from a Red pointer. But
+for indexes which did not create a new entry, both Blue and Red tuples
+will be reachable from the Blue pointer (there is no Red pointer in
+such indexes). So, as a side note, matching Red and Blue flags alone is
+not enough from an index scan perspective.
+
+During the first heap scan of VACUUM, we look for tuples with
+HEAP_WARM_TUPLE set.  If all live tuples in the chain are marked either
+Blue or Red (but not a mix of Red and Blue), then the chain is a
+candidate for HOT conversion.  We remember the root line pointer and
+the Red-Blue flag of the WARM chain in a separate array.
+
+If we have a Red WARM chain, then our goal is to remove Blue pointers,
+and vice versa. But there is a catch. For Index2 above, there is only a
+Blue pointer and it must not be removed. IOW, we should remove a Blue
+pointer iff a Red pointer exists. Since index vacuum may visit Red and
+Blue pointers in any order, I think we will need another index pass to
+remove dead index pointers. So in the first index pass we check which
+WARM candidates have two index pointers. In the second pass, we remove
+the dead pointer and reset the Red flag if the surviving pointer is Red.
+
+During the second heap scan, we fix the WARM chain by clearing the
+HEAP_WARM_TUPLE flag and also resetting Red flags to Blue.
+
+There are some more problems around aborted vacuums. For example, if
+vacuum aborts after changing a Red index flag to Blue but before
+removing the other Blue pointer, we will end up with two Blue pointers
+to a Red WARM chain. But since the HEAP_WARM_TUPLE flag on the heap
+tuple is still set, further WARM updates to the chain will be blocked.
+I guess we will need some special handling for the case of multiple
+Blue pointers. We can either leave these WARM chains alone and let them
+die with a subsequent non-WARM update, or apply the heap-recheck logic
+during index vacuum to find the dead pointer. Given that vacuum aborts
+are not common, I am inclined to leave this case unhandled. We must
+still check for the presence of multiple Blue pointers and ensure that
+we neither accidentally remove either Blue pointer nor convert such
+WARM chains.
+
+CREATE INDEX CONCURRENTLY
+-------------------------
+
+Currently CREATE INDEX CONCURRENTLY (CIC) is implemented as a 3-phase
+process.  In the first phase, we create a catalog entry for the new
+index so that the index is visible to all other backends, but still
+don't use it for either reads or writes.  We do ensure, however, that
+no new broken HOT chains are created by new transactions. In the second
+phase, we build the new index using an MVCC snapshot and then make the
+index available for inserts. We then do another pass over the table and
+insert any missing tuples, each time indexing only the root line
+pointer. See README.HOT for details about how HOT impacts CIC and how
+various challenges are tackled.
+
+WARM poses another challenge because it allows creation of HOT chains
+even when an index key is changed. But since the index is not ready for
+insertion until the second phase is over, we might end up with a
+situation where the HOT chain has tuples with different index column
+values, yet only one of those values is indexed by the new index. Note
+that during the third phase, we only index tuples whose root line
+pointer is missing from the index. But we can't easily check whether
+the existing index tuple actually indexes the heap tuple visible to the
+new MVCC snapshot. Finding that out would require us to query the index
+again for every tuple in the chain, especially if it's a WARM tuple,
+which means repeated access to the index. Another option would be to
+return index keys along with the heap TIDs when the index is scanned to
+collect all indexed TIDs during the third phase. We could then compare
+the heap tuple against the already-indexed key and decide whether or
+not to index the new tuple.
+
+We solve this problem more simply by disallowing WARM updates until the
+index is ready for insertion. We need not disallow WARM wholesale:
+only updates that change the columns used by the new index are
+prevented from being WARM updates.
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 93cde9a..b9ff94d 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -1958,6 +1958,78 @@ heap_fetch(Relation relation,
 }
 
 /*
+ * Check if the HOT chain containing this tid is actually a WARM chain.
+ * Note that even if the WARM update ultimately aborted, we still must do a
+ * recheck because the failed UPDATE may have inserted index entries which
+ * are now stale, but still reference this chain.
+ */
+static bool
+hot_check_warm_chain(Page dp, ItemPointer tid)
+{
+	TransactionId prev_xmax = InvalidTransactionId;
+	OffsetNumber offnum;
+	HeapTupleData heapTuple;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+			break;
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		/*
+		 * Presence of either a WARM or a WARM-updated tuple signals possible
+		 * breakage, and the caller must recheck any tuple returned from this
+		 * chain for index satisfaction
+		 */
+		if (HeapTupleHeaderIsHeapWarmTuple(heapTuple.t_data))
+			return true;
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
+		/*
+		 * It can't be a HOT chain if the tuple contains the root line pointer
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	/* All OK. No need to recheck */
+	return false;
+}
+
+/*
  *	heap_hot_search_buffer	- search HOT chain for tuple satisfying snapshot
  *
  * On entry, *tid is the TID of a tuple (either a simple tuple, or the root
@@ -1977,11 +2049,14 @@ heap_fetch(Relation relation,
  * Unlike heap_fetch, the caller must already have pin and (at least) share
  * lock on the buffer; it is still pinned/locked at exit.  Also unlike
  * heap_fetch, we do not report any pgstats count; caller may do so if wanted.
+ *
+ * *recheck should be set to false by the caller on entry; it will be set to
+ * true on exit if a WARM tuple is encountered.
  */
 bool
 heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 					   Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call)
+					   bool *all_dead, bool first_call, bool *recheck)
 {
 	Page		dp = (Page) BufferGetPage(buffer);
 	TransactionId prev_xmax = InvalidTransactionId;
@@ -2035,9 +2110,12 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		ItemPointerSetOffsetNumber(&heapTuple->t_self, offnum);
 
 		/*
-		 * Shouldn't see a HEAP_ONLY tuple at chain start.
+		 * Shouldn't see a HEAP_ONLY tuple at chain start, unless we are
+		 * dealing with a WARM-updated tuple, in which case deferred triggers
+		 * may request a fetch of a WARM tuple from the middle of a chain.
 		 */
-		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple))
+		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple) &&
+				!HeapTupleIsHeapWarmTuple(heapTuple))
 			break;
 
 		/*
@@ -2050,6 +2128,16 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 			break;
 
 		/*
+		 * Check if there exists a WARM tuple somewhere down the chain and set
+		 * recheck to TRUE.
+		 *
+		 * XXX This is not very efficient right now, and we should look for
+		 * possible improvements here
+		 */
+		if (recheck && *recheck == false)
+			*recheck = hot_check_warm_chain(dp, &heapTuple->t_self);
+
+		/*
 		 * When first_call is true (and thus, skip is initially false) we'll
 		 * return the first tuple we find.  But on later passes, heapTuple
 		 * will initially be pointing to the tuple we returned last time.
@@ -2098,7 +2186,8 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		 * Check to see if HOT chain continues past this tuple; if so fetch
 		 * the next offnum and loop around.
 		 */
-		if (HeapTupleIsHotUpdated(heapTuple))
+		if (HeapTupleIsHotUpdated(heapTuple) &&
+			!HeapTupleHeaderHasRootOffset(heapTuple->t_data))
 		{
 			Assert(ItemPointerGetBlockNumber(&heapTuple->t_data->t_ctid) ==
 				   ItemPointerGetBlockNumber(tid));
@@ -2122,18 +2211,41 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
  */
 bool
 heap_hot_search(ItemPointer tid, Relation relation, Snapshot snapshot,
-				bool *all_dead)
+				bool *all_dead, bool *recheck, Buffer *cbuffer,
+				HeapTuple heapTuple)
 {
 	bool		result;
 	Buffer		buffer;
-	HeapTupleData heapTuple;
+	ItemPointerData ret_tid = *tid;
 
 	buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	LockBuffer(buffer, BUFFER_LOCK_SHARE);
-	result = heap_hot_search_buffer(tid, relation, buffer, snapshot,
-									&heapTuple, all_dead, true);
-	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-	ReleaseBuffer(buffer);
+	result = heap_hot_search_buffer(&ret_tid, relation, buffer, snapshot,
+									heapTuple, all_dead, true, recheck);
+
+	/*
+	 * If we are returning a potential candidate tuple from this chain and the
+	 * caller has requested the "recheck" hint, keep the buffer locked and
+	 * pinned. The caller must release the lock and pin on the buffer in all
+	 * such cases.
+	 */
+	if (!result || !recheck || !(*recheck))
+	{
+		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+		ReleaseBuffer(buffer);
+	}
+
+	/*
+	 * Set the caller-supplied tid to the actual location of the tuple being
+	 * returned.
+	 */
+	if (result)
+	{
+		*tid = ret_tid;
+		if (cbuffer)
+			*cbuffer = buffer;
+	}
+
 	return result;
 }
 
@@ -3492,15 +3604,18 @@ simple_heap_delete(Relation relation, ItemPointer tid)
 HTSU_Result
 heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode)
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update)
 {
 	HTSU_Result result;
 	TransactionId xid = GetCurrentTransactionId();
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *exprindx_attrs;
 	Bitmapset  *interesting_attrs;
 	Bitmapset  *modified_attrs;
+	Bitmapset  *notready_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3521,6 +3636,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		have_tuple_lock = false;
 	bool		iscombo;
 	bool		use_hot_update = false;
+	bool		use_warm_update = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
 	bool		all_visible_cleared_new = false;
@@ -3545,6 +3661,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot update tuples during a parallel operation")));
 
+	/* Assume no-warm update */
+	if (warm_update)
+		*warm_update = false;
+
 	/*
 	 * Fetch the list of attributes to be checked for various operations.
 	 *
@@ -3566,10 +3686,17 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	exprindx_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_EXPR_PREDICATE);
+	notready_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_NOTREADY);
+
+
 	interesting_attrs = bms_add_members(NULL, hot_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
-
+	interesting_attrs = bms_add_members(interesting_attrs, exprindx_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, notready_attrs);
 
 	block = ItemPointerGetBlockNumber(otid);
 	offnum = ItemPointerGetOffsetNumber(otid);
@@ -3621,6 +3748,9 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
 												  &oldtup, newtup);
 
+	if (modified_attrsp)
+		*modified_attrsp = bms_copy(modified_attrs);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3876,6 +4006,7 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(exprindx_attrs);
 		bms_free(modified_attrs);
 		bms_free(interesting_attrs);
 		return result;
@@ -4194,6 +4325,37 @@ l2:
 		 */
 		if (!bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
+		else
+		{
+			/*
+			 * If no WARM updates yet on this chain, let this update be a WARM
+			 * update.
+			 *
+			 * We check for both WARM and WARM-updated tuples since, if the
+			 * previous WARM update aborted, we may still have added
+			 * another index entry for this HOT chain. In such situations, we
+			 * must not attempt a WARM update until the duplicate (key, CTID)
+			 * index entry issue is sorted out
+			 *
+			 * XXX Later we'll add more checks to ensure WARM chains can be
+			 * further WARM updated. This is probably good enough for a first
+			 * round of tests of the remaining functionality
+			 *
+			 * XXX Disable WARM updates on system tables. There is nothing in
+			 * principle that stops us from supporting this. But it would
+			 * require an API change to propagate the changed columns back to
+			 * the caller so that CatalogUpdateIndexes() can avoid adding new
+			 * entries to indexes that are not changed by the update. This will
+			 * be fixed once the basic patch is tested. !!FIXME
+			 */
+			if (relation->rd_supportswarm &&
+				!bms_overlap(modified_attrs, exprindx_attrs) &&
+				!bms_is_subset(hot_attrs, modified_attrs) &&
+				!IsSystemRelation(relation) &&
+				!bms_overlap(notready_attrs, modified_attrs) &&
+				!HeapTupleIsHeapWarmTuple(&oldtup))
+				use_warm_update = true;
+		}
 	}
 	else
 	{
@@ -4240,6 +4402,22 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+
+		/*
+		 * Even if we are doing a HOT update, we must carry forward the WARM
+		 * flag, because we may have already inserted another index entry
+		 * pointing to our root, and a third entry could create duplicates.
+		 *
+		 * Note: If we ever have a mechanism to avoid duplicate <key, TID>
+		 * entries in indexes, we could look at relaxing this restriction and
+		 * allow even more WARM updates.
+		 */
+		if (HeapTupleIsHeapWarmTuple(&oldtup))
+		{
+			HeapTupleSetHeapWarmTuple(heaptup);
+			HeapTupleSetHeapWarmTuple(newtup);
+		}
+
 		/*
 		 * For HOT (or WARM) updated tuples, we store the offset of the root
 		 * line pointer of this chain in the ip_posid field of the new tuple.
@@ -4252,12 +4430,35 @@ l2:
 		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
 			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
+	else if (use_warm_update)
+	{
+		/* Mark the old tuple as HOT-updated */
+		HeapTupleSetHotUpdated(&oldtup);
+		HeapTupleSetHeapWarmTuple(&oldtup);
+		/* And mark the new tuple as heap-only */
+		HeapTupleSetHeapOnly(heaptup);
+		HeapTupleSetHeapWarmTuple(heaptup);
+		/* Mark the caller's copy too, in case different from heaptup */
+		HeapTupleSetHeapOnly(newtup);
+		HeapTupleSetHeapWarmTuple(newtup);
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
+		/* Let the caller know we did a WARM update */
+		if (warm_update)
+			*warm_update = true;
+	}
 	else
 	{
 		/* Make sure tuples are correctly marked as not-HOT */
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		HeapTupleClearHeapWarmTuple(heaptup);
+		HeapTupleClearHeapWarmTuple(newtup);
 		root_offnum = InvalidOffsetNumber;
 	}
 
@@ -4367,7 +4568,10 @@ l2:
 	if (have_tuple_lock)
 		UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode);
 
-	pgstat_count_heap_update(relation, use_hot_update);
+	/*
+	 * Count HOT and WARM updates separately
+	 */
+	pgstat_count_heap_update(relation, use_hot_update, use_warm_update);
 
 	/*
 	 * If heaptup is a private copy, release it.  Don't forget to copy t_self
@@ -4507,7 +4711,8 @@ HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
  * via ereport().
  */
 void
-simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
+simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup,
+		Bitmapset **modified_attrs, bool *warm_update)
 {
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
@@ -4516,7 +4721,7 @@ simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
 	result = heap_update(relation, otid, tup,
 						 GetCurrentCommandId(true), InvalidSnapshot,
 						 true /* wait for commit */ ,
-						 &hufd, &lockmode);
+						 &hufd, &lockmode, modified_attrs, warm_update);
 	switch (result)
 	{
 		case HeapTupleSelfUpdated:
@@ -7568,6 +7773,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	bool		need_tuple_data = RelationIsLogicallyLogged(reln);
 	bool		init;
 	int			bufflags;
+	bool		warm_update = false;
 
 	/* Caller should not call me on a non-WAL-logged relation */
 	Assert(RelationNeedsWAL(reln));
@@ -7579,6 +7785,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	else
 		info = XLOG_HEAP_UPDATE;
 
+	if (HeapTupleIsHeapWarmTuple(newtup))
+		warm_update = true;
+
 	/*
 	 * If the old and new tuple are on the same page, we only need to log the
 	 * parts of the new tuple that were changed.  That saves on the amount of
@@ -7652,6 +7861,8 @@ log_heap_update(Relation reln, Buffer oldbuf,
 				xlrec.flags |= XLH_UPDATE_CONTAINS_OLD_KEY;
 		}
 	}
+	if (warm_update)
+		xlrec.flags |= XLH_UPDATE_WARM_UPDATE;
 
 	/* If new tuple is the single and first tuple on page... */
 	if (ItemPointerGetOffsetNumber(&(newtup->t_self)) == FirstOffsetNumber &&
@@ -8629,16 +8840,22 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 	Size		freespace = 0;
 	XLogRedoAction oldaction;
 	XLogRedoAction newaction;
+	bool		warm_update = false;
 
 	/* initialize to keep the compiler quiet */
 	oldtup.t_data = NULL;
 	oldtup.t_len = 0;
 
+	if (xlrec->flags & XLH_UPDATE_WARM_UPDATE)
+		warm_update = true;
+
 	XLogRecGetBlockTag(record, 0, &rnode, NULL, &newblk);
 	if (XLogRecGetBlockTag(record, 1, NULL, NULL, &oldblk))
 	{
 		/* HOT updates are never done across pages */
 		Assert(!hot_update);
+		/* WARM updates are never done across pages */
+		Assert(!warm_update);
 	}
 	else
 		oldblk = newblk;
@@ -8698,6 +8915,11 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 								   &htup->t_infomask2);
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
+
+		/* Mark the old tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetHeapWarmTuple(htup);
+
 		/* Set forward chain link in t_ctid */
 		HeapTupleHeaderSetNextTid(htup, &newtid);
 
@@ -8833,6 +9055,10 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
 
+		/* Mark the new tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetHeapWarmTuple(htup);
+
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index f54337c..4e8ed79 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -834,6 +834,13 @@ heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				continue;
 
+			/*
+			 * If the tuple has a root line pointer, it must be the end of the
+			 * chain.
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			/* Set up to scan the HOT-chain */
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
diff --git a/src/backend/access/index/indexam.c b/src/backend/access/index/indexam.c
index cc5ac8b..da6c252 100644
--- a/src/backend/access/index/indexam.c
+++ b/src/backend/access/index/indexam.c
@@ -75,10 +75,12 @@
 #include "access/xlog.h"
 #include "catalog/catalog.h"
 #include "catalog/index.h"
+#include "executor/executor.h"
 #include "pgstat.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
+#include "utils/datum.h"
 #include "utils/snapmgr.h"
 #include "utils/tqual.h"
 
@@ -234,6 +236,21 @@ index_beginscan(Relation heapRelation,
 	scan->heapRelation = heapRelation;
 	scan->xs_snapshot = snapshot;
 
+	/*
+	 * If the index supports recheck, make sure that the index tuple is saved
+	 * during index scans.
+	 *
+	 * XXX Ideally, we should look at all indexes on the table and check
+	 * whether WARM is supported on the base table at all. If WARM is not
+	 * supported, then we don't need to do any recheck.
+	 * RelationGetIndexAttrBitmap() does do that and sets rd_supportswarm
+	 * after looking at all indexes. But we don't know if that function was
+	 * called earlier in the session, and we can't call it here because of
+	 * the risk of deadlock.
+	 */
+	if (indexRelation->rd_amroutine->amrecheck)
+		scan->xs_want_itup = true;
+
 	return scan;
 }
 
@@ -535,8 +552,8 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
 	/*
 	 * The AM's amgettuple proc finds the next index entry matching the scan
 	 * keys, and puts the TID into scan->xs_ctup.t_self.  It should also set
-	 * scan->xs_recheck and possibly scan->xs_itup/scan->xs_hitup, though we
-	 * pay no attention to those fields here.
+	 * scan->xs_tuple_recheck and possibly scan->xs_itup/scan->xs_hitup,
+	 * though we pay no attention to those fields here.
 	 */
 	found = scan->indexRelation->rd_amroutine->amgettuple(scan, direction);
 
@@ -574,7 +591,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
  * dropped in a future index_getnext_tid, index_fetch_heap or index_endscan
  * call).
  *
- * Note: caller must check scan->xs_recheck, and perform rechecking of the
+ * Note: caller must check scan->xs_tuple_recheck, and perform rechecking of the
  * scan keys if required.  We do not do that here because we don't have
  * enough information to do it efficiently in the general case.
  * ----------------
@@ -601,6 +618,12 @@ index_fetch_heap(IndexScanDesc scan)
 		 */
 		if (prev_buf != scan->xs_cbuf)
 			heap_page_prune_opt(scan->heapRelation, scan->xs_cbuf);
+
+		/*
+		 * If we're not always re-checking, reset recheck for this tuple.
+		 * Otherwise we must recheck every tuple.
+		 */
+		scan->xs_tuple_recheck = scan->xs_recheck;
 	}
 
 	/* Obtain share-lock on the buffer so we can examine visibility */
@@ -610,32 +633,64 @@ index_fetch_heap(IndexScanDesc scan)
 											scan->xs_snapshot,
 											&scan->xs_ctup,
 											&all_dead,
-											!scan->xs_continue_hot);
+											!scan->xs_continue_hot,
+											&scan->xs_tuple_recheck);
 	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
 
 	if (got_heap_tuple)
 	{
+		bool res = true;
+
+		/*
+		 * OK, we got a tuple which satisfies the snapshot, but if it's part
+		 * of a WARM chain, we must do additional checks to ensure that we
+		 * are indeed returning a correct tuple. Note that if the index AM
+		 * does not implement the amrecheck method, we skip the additional
+		 * checks, since WARM must have been disabled on such tables.
+		 *
+		 * XXX What happens when a new index which does not support amrecheck
+		 * is added to the table? Do we need to handle this case, or are
+		 * CREATE INDEX and CREATE INDEX CONCURRENTLY smart enough to handle
+		 * this issue?
+		 */
+		if (scan->xs_tuple_recheck &&
+				scan->xs_itup &&
+				scan->indexRelation->rd_amroutine->amrecheck)
+		{
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_SHARE);
+			res = scan->indexRelation->rd_amroutine->amrecheck(
+						scan->indexRelation,
+						scan->xs_itup,
+						scan->heapRelation,
+						&scan->xs_ctup);
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+		}
+
 		/*
 		 * Only in a non-MVCC snapshot can more than one member of the HOT
 		 * chain be visible.
 		 */
 		scan->xs_continue_hot = !IsMVCCSnapshot(scan->xs_snapshot);
 		pgstat_count_heap_fetch(scan->indexRelation);
-		return &scan->xs_ctup;
+
+		if (res)
+			return &scan->xs_ctup;
 	}
+	else
+	{
+		/* We've reached the end of the HOT chain. */
+		scan->xs_continue_hot = false;
 
-	/* We've reached the end of the HOT chain. */
-	scan->xs_continue_hot = false;
-
-	/*
-	 * If we scanned a whole HOT chain and found only dead tuples, tell index
-	 * AM to kill its entry for that TID (this will take effect in the next
-	 * amgettuple call, in index_getnext_tid).  We do not do this when in
-	 * recovery because it may violate MVCC to do so.  See comments in
-	 * RelationGetIndexScan().
-	 */
-	if (!scan->xactStartedInRecovery)
-		scan->kill_prior_tuple = all_dead;
+		/*
+		 * If we scanned a whole HOT chain and found only dead tuples, tell index
+		 * AM to kill its entry for that TID (this will take effect in the next
+		 * amgettuple call, in index_getnext_tid).  We do not do this when in
+		 * recovery because it may violate MVCC to do so.  See comments in
+		 * RelationGetIndexScan().
+		 */
+		if (!scan->xactStartedInRecovery)
+			scan->kill_prior_tuple = all_dead;
+	}
 
 	return NULL;
 }
diff --git a/src/backend/access/nbtree/nbtinsert.c b/src/backend/access/nbtree/nbtinsert.c
index 6dca810..b5cb619 100644
--- a/src/backend/access/nbtree/nbtinsert.c
+++ b/src/backend/access/nbtree/nbtinsert.c
@@ -20,11 +20,14 @@
 #include "access/nbtxlog.h"
 #include "access/transam.h"
 #include "access/xloginsert.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
 #include "miscadmin.h"
+#include "nodes/execnodes.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
 #include "utils/tqual.h"
-
+#include "utils/datum.h"
 
 typedef struct
 {
@@ -250,6 +253,9 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 	BTPageOpaque opaque;
 	Buffer		nbuf = InvalidBuffer;
 	bool		found = false;
+	Buffer		buffer;
+	HeapTupleData	heapTuple;
+	bool		recheck = false;
 
 	/* Assume unique until we find a duplicate */
 	*is_unique = true;
@@ -309,6 +315,8 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				curitup = (IndexTuple) PageGetItem(page, curitemid);
 				htid = curitup->t_tid;
 
+				recheck = false;
+
 				/*
 				 * If we are doing a recheck, we expect to find the tuple we
 				 * are rechecking.  It's not a duplicate, but we have to keep
@@ -326,112 +334,153 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				 * have just a single index entry for the entire chain.
 				 */
 				else if (heap_hot_search(&htid, heapRel, &SnapshotDirty,
-										 &all_dead))
+							&all_dead, &recheck, &buffer,
+							&heapTuple))
 				{
 					TransactionId xwait;
+					bool result = true;
 
 					/*
-					 * It is a duplicate. If we are only doing a partial
-					 * check, then don't bother checking if the tuple is being
-					 * updated in another transaction. Just return the fact
-					 * that it is a potential conflict and leave the full
-					 * check till later.
+					 * If the tuple was WARM updated, we may again see our own
+					 * tuple. Since WARM updates don't create new index
+					 * entries, our own tuple is only reachable via the old
+					 * index pointer.
 					 */
-					if (checkUnique == UNIQUE_CHECK_PARTIAL)
+					if (checkUnique == UNIQUE_CHECK_EXISTING &&
+							ItemPointerCompare(&htid, &itup->t_tid) == 0)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						*is_unique = false;
-						return InvalidTransactionId;
+						found = true;
+						result = false;
+						if (recheck)
+							UnlockReleaseBuffer(buffer);
+					}
+					else if (recheck)
+					{
+						result = btrecheck(rel, curitup, heapRel, &heapTuple);
+						UnlockReleaseBuffer(buffer);
 					}
 
-					/*
-					 * If this tuple is being updated by other transaction
-					 * then we have to wait for its commit/abort.
-					 */
-					xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
-						SnapshotDirty.xmin : SnapshotDirty.xmax;
-
-					if (TransactionIdIsValid(xwait))
-					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						/* Tell _bt_doinsert to wait... */
-						*speculativeToken = SnapshotDirty.speculativeToken;
-						return xwait;
-					}
-
-					/*
-					 * Otherwise we have a definite conflict.  But before
-					 * complaining, look to see if the tuple we want to insert
-					 * is itself now committed dead --- if so, don't complain.
-					 * This is a waste of time in normal scenarios but we must
-					 * do it to support CREATE INDEX CONCURRENTLY.
-					 *
-					 * We must follow HOT-chains here because during
-					 * concurrent index build, we insert the root TID though
-					 * the actual tuple may be somewhere in the HOT-chain.
-					 * While following the chain we might not stop at the
-					 * exact tuple which triggered the insert, but that's OK
-					 * because if we find a live tuple anywhere in this chain,
-					 * we have a unique key conflict.  The other live tuple is
-					 * not part of this chain because it had a different index
-					 * entry.
-					 */
-					htid = itup->t_tid;
-					if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL))
-					{
-						/* Normal case --- it's still live */
-					}
-					else
+					if (result)
 					{
 						/*
-						 * It's been deleted, so no error, and no need to
-						 * continue searching
+						 * It is a duplicate. If we are only doing a partial
+						 * check, then don't bother checking if the tuple is being
+						 * updated in another transaction. Just return the fact
+						 * that it is a potential conflict and leave the full
+						 * check till later.
 						 */
-						break;
-					}
+						if (checkUnique == UNIQUE_CHECK_PARTIAL)
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							*is_unique = false;
+							return InvalidTransactionId;
+						}
 
-					/*
-					 * Check for a conflict-in as we would if we were going to
-					 * write to this page.  We aren't actually going to write,
-					 * but we want a chance to report SSI conflicts that would
-					 * otherwise be masked by this unique constraint
-					 * violation.
-					 */
-					CheckForSerializableConflictIn(rel, NULL, buf);
+						/*
+						 * If this tuple is being updated by other transaction
+						 * then we have to wait for its commit/abort.
+						 */
+						xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
+							SnapshotDirty.xmin : SnapshotDirty.xmax;
 
-					/*
-					 * This is a definite conflict.  Break the tuple down into
-					 * datums and report the error.  But first, make sure we
-					 * release the buffer locks we're holding ---
-					 * BuildIndexValueDescription could make catalog accesses,
-					 * which in the worst case might touch this same index and
-					 * cause deadlocks.
-					 */
-					if (nbuf != InvalidBuffer)
-						_bt_relbuf(rel, nbuf);
-					_bt_relbuf(rel, buf);
+						if (TransactionIdIsValid(xwait))
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							/* Tell _bt_doinsert to wait... */
+							*speculativeToken = SnapshotDirty.speculativeToken;
+							return xwait;
+						}
 
-					{
-						Datum		values[INDEX_MAX_KEYS];
-						bool		isnull[INDEX_MAX_KEYS];
-						char	   *key_desc;
+						/*
+						 * Otherwise we have a definite conflict.  But before
+						 * complaining, look to see if the tuple we want to insert
+						 * is itself now committed dead --- if so, don't complain.
+						 * This is a waste of time in normal scenarios but we must
+						 * do it to support CREATE INDEX CONCURRENTLY.
+						 *
+						 * We must follow HOT-chains here because during
+						 * concurrent index build, we insert the root TID though
+						 * the actual tuple may be somewhere in the HOT-chain.
+						 * While following the chain we might not stop at the
+						 * exact tuple which triggered the insert, but that's OK
+						 * because if we find a live tuple anywhere in this chain,
+						 * we have a unique key conflict.  The other live tuple is
+						 * not part of this chain because it had a different index
+						 * entry.
+						 */
+						recheck = false;
+						ItemPointerCopy(&itup->t_tid, &htid);
+						if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL,
+									&recheck, &buffer, &heapTuple))
+						{
+							bool result = true;
+							if (recheck)
+							{
+								/*
+								 * Recheck if the tuple actually satisfies the
+								 * index key. Otherwise, we might be following
+								 * a wrong index pointer and mustn't entertain
+								 * this tuple.
+								 */
+								result = btrecheck(rel, itup, heapRel, &heapTuple);
+								UnlockReleaseBuffer(buffer);
+							}
+							if (!result)
+								break;
+							/* Normal case --- it's still live */
+						}
+						else
+						{
+							/*
+							 * It's been deleted, so no error, and no need to
+							 * continue searching
+							 */
+							break;
+						}
 
-						index_deform_tuple(itup, RelationGetDescr(rel),
-										   values, isnull);
+						/*
+						 * Check for a conflict-in as we would if we were going to
+						 * write to this page.  We aren't actually going to write,
+						 * but we want a chance to report SSI conflicts that would
+						 * otherwise be masked by this unique constraint
+						 * violation.
+						 */
+						CheckForSerializableConflictIn(rel, NULL, buf);
 
-						key_desc = BuildIndexValueDescription(rel, values,
-															  isnull);
+						/*
+						 * This is a definite conflict.  Break the tuple down into
+						 * datums and report the error.  But first, make sure we
+						 * release the buffer locks we're holding ---
+						 * BuildIndexValueDescription could make catalog accesses,
+						 * which in the worst case might touch this same index and
+						 * cause deadlocks.
+						 */
+						if (nbuf != InvalidBuffer)
+							_bt_relbuf(rel, nbuf);
+						_bt_relbuf(rel, buf);
 
-						ereport(ERROR,
-								(errcode(ERRCODE_UNIQUE_VIOLATION),
-								 errmsg("duplicate key value violates unique constraint \"%s\"",
-										RelationGetRelationName(rel)),
-							   key_desc ? errdetail("Key %s already exists.",
-													key_desc) : 0,
-								 errtableconstraint(heapRel,
-											 RelationGetRelationName(rel))));
+						{
+							Datum		values[INDEX_MAX_KEYS];
+							bool		isnull[INDEX_MAX_KEYS];
+							char	   *key_desc;
+
+							index_deform_tuple(itup, RelationGetDescr(rel),
+									values, isnull);
+
+							key_desc = BuildIndexValueDescription(rel, values,
+									isnull);
+
+							ereport(ERROR,
+									(errcode(ERRCODE_UNIQUE_VIOLATION),
+									 errmsg("duplicate key value violates unique constraint \"%s\"",
+										 RelationGetRelationName(rel)),
+									 key_desc ? errdetail("Key %s already exists.",
+										 key_desc) : 0,
+									 errtableconstraint(heapRel,
+										 RelationGetRelationName(rel))));
+						}
 					}
 				}
 				else if (all_dead)
diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c
index 775f2ff..952ed8f 100644
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -23,6 +23,7 @@
 #include "access/xlog.h"
 #include "catalog/index.h"
 #include "commands/vacuum.h"
+#include "executor/nodeIndexscan.h"
 #include "pgstat.h"
 #include "storage/condition_variable.h"
 #include "storage/indexfsm.h"
@@ -163,6 +164,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = btestimateparallelscan;
 	amroutine->aminitparallelscan = btinitparallelscan;
 	amroutine->amparallelrescan = btparallelrescan;
+	amroutine->amrecheck = btrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -344,8 +346,9 @@ btgettuple(IndexScanDesc scan, ScanDirection dir)
 	BTScanOpaque so = (BTScanOpaque) scan->opaque;
 	bool		res;
 
-	/* btree indexes are never lossy */
+	/* btree indexes are never lossy, except for WARM tuples */
 	scan->xs_recheck = false;
+	scan->xs_tuple_recheck = false;
 
 	/*
 	 * If we have any array keys, initialize them during first call for a
diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c
index 5b259a3..c376c1b 100644
--- a/src/backend/access/nbtree/nbtutils.c
+++ b/src/backend/access/nbtree/nbtutils.c
@@ -20,11 +20,15 @@
 #include "access/nbtree.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
+#include "executor/executor.h"
 #include "miscadmin.h"
+#include "nodes/execnodes.h"
 #include "utils/array.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 
 typedef struct BTSortArrayContext
@@ -2069,3 +2073,103 @@ btproperty(Oid index_oid, int attno,
 			return false;		/* punt to generic code */
 	}
 }
+
+/*
+ * Check if the index tuple's key matches the one computed from the given heap
+ * tuple's attribute
+ */
+bool
+btrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	IndexInfo  *indexInfo;
+	EState	   *estate;
+	ExprContext *econtext;
+	TupleTableSlot *slot;
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	/* Get IndexInfo for this index */
+	indexInfo = BuildIndexInfo(indexRel);
+
+	/*
+	 * The heap tuple must be put into a slot for FormIndexDatum.
+	 */
+	slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
+
+	ExecStoreTuple(heapTuple, slot, InvalidBuffer, false);
+
+	/*
+	 * Typically the index won't have expressions, but if it does we need an
+	 * EState to evaluate them.  We need it for exclusion constraints too,
+	 * even if they are just on simple columns.
+	 */
+	if (indexInfo->ii_Expressions != NIL ||
+			indexInfo->ii_ExclusionOps != NULL)
+	{
+		estate = CreateExecutorState();
+		econtext = GetPerTupleExprContext(estate);
+		econtext->ecxt_scantuple = slot;
+	}
+	else
+		estate = NULL;
+
+	/*
+	 * Form the index values and isnull flags for the index entry that we need
+	 * to check.
+	 *
+	 * Note: if the index uses functions that are not as immutable as they are
+	 * supposed to be, this could produce an index tuple different from the
+	 * original.  The index AM can catch such errors by verifying that it
+	 * finds a matching index entry with the tuple's TID.  For exclusion
+	 * constraints we check this in check_exclusion_constraint().
+	 */
+	FormIndexDatum(indexInfo, slot, estate, values, isnull);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL, then they are equal
+		 */
+		if (isnull[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If just one is NULL, then they are not equal
+		 */
+		if (isnull[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now just do a raw memory comparison. If the index tuple was formed
+		 * using this heap tuple, the computed index values must match
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	if (estate != NULL)
+		FreeExecutorState(estate);
+
+	ExecDropSingleTupleTableSlot(slot);
+
+	return equal;
+}
diff --git a/src/backend/access/spgist/spgutils.c b/src/backend/access/spgist/spgutils.c
index e57ac49..59ef7f3 100644
--- a/src/backend/access/spgist/spgutils.c
+++ b/src/backend/access/spgist/spgutils.c
@@ -72,6 +72,7 @@ spghandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 8d42a34..049eb28 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -54,6 +54,7 @@
 #include "nodes/makefuncs.h"
 #include "nodes/nodeFuncs.h"
 #include "optimizer/clauses.h"
+#include "optimizer/var.h"
 #include "parser/parser.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
@@ -1691,6 +1692,20 @@ BuildIndexInfo(Relation index)
 	ii->ii_AmCache = NULL;
 	ii->ii_Context = CurrentMemoryContext;
 
+	/* build a bitmap of all table attributes referred by this index */
+	for (i = 0; i < ii->ii_NumIndexAttrs; i++)
+	{
+		AttrNumber attr = ii->ii_KeyAttrNumbers[i];
+		ii->ii_indxattrs = bms_add_member(ii->ii_indxattrs, attr -
+				FirstLowInvalidHeapAttributeNumber);
+	}
+
+	/* Collect all attributes used in expressions, too */
+	pull_varattnos((Node *) ii->ii_Expressions, 1, &ii->ii_indxattrs);
+
+	/* Collect all attributes in the index predicate, too */
+	pull_varattnos((Node *) ii->ii_Predicate, 1, &ii->ii_indxattrs);
+
 	return ii;
 }
 
diff --git a/src/backend/catalog/indexing.c b/src/backend/catalog/indexing.c
index abc344a..970254f 100644
--- a/src/backend/catalog/indexing.c
+++ b/src/backend/catalog/indexing.c
@@ -66,10 +66,15 @@ CatalogCloseIndexes(CatalogIndexState indstate)
  *
  * This should be called for each inserted or updated catalog tuple.
  *
+ * If the tuple was WARM updated, modified_attrs contains the set of columns
+ * changed by the update. We must not insert new index entries for indexes
+ * which do not refer to any of the modified columns.
+ *
  * This is effectively a cut-down version of ExecInsertIndexTuples.
  */
 static void
-CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
+CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple,
+		Bitmapset *modified_attrs, bool warm_update)
 {
 	int			i;
 	int			numIndexes;
@@ -79,12 +84,28 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 	IndexInfo **indexInfoArray;
 	Datum		values[INDEX_MAX_KEYS];
 	bool		isnull[INDEX_MAX_KEYS];
+	ItemPointerData root_tid;
 
-	/* HOT update does not require index inserts */
-	if (HeapTupleIsHeapOnly(heapTuple))
+	/*
+	 * A HOT update does not require index inserts, but a WARM update may
+	 * still require them for some indexes.
+	 */
+	if (HeapTupleIsHeapOnly(heapTuple) && !warm_update)
 		return;
 
 	/*
+	 * If we've done a WARM update, then we must index the TID of the root line
+	 * pointer and not the actual TID of the new tuple.
+	 */
+	if (warm_update)
+		ItemPointerSet(&root_tid,
+				ItemPointerGetBlockNumber(&(heapTuple->t_self)),
+				HeapTupleHeaderGetRootOffset(heapTuple->t_data));
+	else
+		ItemPointerCopy(&heapTuple->t_self, &root_tid);
+
+
+	/*
 	 * Get information from the state structure.  Fall out if nothing to do.
 	 */
 	numIndexes = indstate->ri_NumIndices;
@@ -112,6 +133,17 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 			continue;
 
 		/*
+		 * If we've done a WARM update, then we must not insert a new index
+		 * tuple if none of the index keys have changed. This is not just an
+		 * optimization, but a requirement for WARM to work correctly.
+		 */
+		if (warm_update)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
+		/*
 		 * Expressional and partial indexes on system catalogs are not
 		 * supported, nor exclusion constraints, nor deferred uniqueness
 		 */
@@ -136,7 +168,7 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 		index_insert(relationDescs[i],	/* index relation */
 					 values,	/* array of index Datums */
 					 isnull,	/* is-null flags */
-					 &(heapTuple->t_self),		/* tid of heap tuple */
+					 &root_tid,
 					 heapRelation,
 					 relationDescs[i]->rd_index->indisunique ?
 					 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
@@ -168,7 +200,7 @@ CatalogTupleInsert(Relation heapRel, HeapTuple tup)
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 	CatalogCloseIndexes(indstate);
 
 	return oid;
@@ -190,7 +222,7 @@ CatalogTupleInsertWithInfo(Relation heapRel, HeapTuple tup,
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 
 	return oid;
 }
@@ -210,12 +242,14 @@ void
 CatalogTupleUpdate(Relation heapRel, ItemPointer otid, HeapTuple tup)
 {
 	CatalogIndexState indstate;
+	bool	warm_update;
+	Bitmapset	*modified_attrs;
 
 	indstate = CatalogOpenIndexes(heapRel);
 
-	simple_heap_update(heapRel, otid, tup);
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, modified_attrs, warm_update);
 	CatalogCloseIndexes(indstate);
 }
 
@@ -231,9 +265,12 @@ void
 CatalogTupleUpdateWithInfo(Relation heapRel, ItemPointer otid, HeapTuple tup,
 						   CatalogIndexState indstate)
 {
-	simple_heap_update(heapRel, otid, tup);
+	Bitmapset  *modified_attrs;
+	bool		warm_update;
 
-	CatalogIndexInsert(indstate, tup);
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
+
+	CatalogIndexInsert(indstate, tup, modified_attrs, warm_update);
 }
 
 /*
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index ba980de..410ccd3 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -498,6 +498,7 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_tuples_deleted(C.oid) AS n_tup_del,
             pg_stat_get_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_tuples_warm_updated(C.oid) AS n_tup_warm_upd,
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
@@ -528,7 +529,8 @@ CREATE VIEW pg_stat_xact_all_tables AS
             pg_stat_get_xact_tuples_inserted(C.oid) AS n_tup_ins,
             pg_stat_get_xact_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_xact_tuples_deleted(C.oid) AS n_tup_del,
-            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd
+            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_xact_tuples_warm_updated(C.oid) AS n_tup_warm_upd
     FROM pg_class C LEFT JOIN
          pg_index I ON C.oid = I.indrelid
          LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
diff --git a/src/backend/commands/constraint.c b/src/backend/commands/constraint.c
index e2544e5..d9c0fe7 100644
--- a/src/backend/commands/constraint.c
+++ b/src/backend/commands/constraint.c
@@ -40,6 +40,7 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	TriggerData *trigdata = castNode(TriggerData, fcinfo->context);
 	const char *funcname = "unique_key_recheck";
 	HeapTuple	new_row;
+	HeapTupleData heapTuple;
 	ItemPointerData tmptid;
 	Relation	indexRel;
 	IndexInfo  *indexInfo;
@@ -102,7 +103,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	 * removed.
 	 */
 	tmptid = new_row->t_self;
-	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL))
+	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL,
+				NULL, NULL, &heapTuple))
 	{
 		/*
 		 * All rows in the HOT chain are dead, so skip the check.
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index 3102ab1..428fc65 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -2681,6 +2681,8 @@ CopyFrom(CopyState cstate)
 					if (resultRelInfo->ri_NumIndices > 0)
 						recheckIndexes = ExecInsertIndexTuples(slot,
 															&(tuple->t_self),
+															&(tuple->t_self),
+															NULL,
 															   estate,
 															   false,
 															   NULL,
@@ -2835,6 +2837,7 @@ CopyFromInsertBatch(CopyState cstate, EState *estate, CommandId mycid,
 			ExecStoreTuple(bufferedTuples[i], myslot, InvalidBuffer, false);
 			recheckIndexes =
 				ExecInsertIndexTuples(myslot, &(bufferedTuples[i]->t_self),
+									  &(bufferedTuples[i]->t_self), NULL,
 									  estate, false, NULL, NIL);
 			ExecARInsertTriggers(estate, resultRelInfo,
 								 bufferedTuples[i],
diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c
index 72bb06c..d8f033d 100644
--- a/src/backend/commands/indexcmds.c
+++ b/src/backend/commands/indexcmds.c
@@ -699,7 +699,14 @@ DefineIndex(Oid relationId,
 	 * visible to other transactions before we start to build the index. That
 	 * will prevent them from making incompatible HOT updates.  The new index
 	 * will be marked not indisready and not indisvalid, so that no one else
-	 * tries to either insert into it or use it for queries.
+	 * tries to either insert into it or use it for queries. In addition,
+	 * WARM updates will be disallowed if an update modifies one of the
+	 * columns used by this new index. This is necessary to ensure that we
+	 * don't create WARM tuples which lack a corresponding entry in this
+	 * index. Note that during the second phase we will index only those
+	 * heap tuples whose root line pointer is not already in the index, so
+	 * it's important that all tuples in a given chain have the same value
+	 * for every indexed column (including the columns of this new index).
 	 *
 	 * We must commit our current transaction so that the index becomes
 	 * visible; then start another.  Note that all the data structures we just
@@ -747,7 +754,10 @@ DefineIndex(Oid relationId,
 	 * marked as "not-ready-for-inserts".  The index is consulted while
 	 * deciding HOT-safety though.  This arrangement ensures that no new HOT
 	 * chains can be created where the new tuple and the old tuple in the
-	 * chain have different index keys.
+	 * chain have different index keys. Also, the new index is consulted
+	 * when deciding whether a WARM update is possible, and a WARM update is
+	 * not done if a column used by this index is being updated. This
+	 * ensures that we don't create WARM tuples not indexed by this index.
 	 *
 	 * We now take a new snapshot, and build the index using all tuples that
 	 * are visible in this snapshot.  We can be sure that any HOT updates to
@@ -782,7 +792,8 @@ DefineIndex(Oid relationId,
 	/*
 	 * Update the pg_index row to mark the index as ready for inserts. Once we
 	 * commit this transaction, any new transactions that open the table must
-	 * insert new entries into the index for insertions and non-HOT updates.
+	 * insert new entries into the index for insertions, and for non-HOT or
+	 * WARM updates where this index needs a new entry.
 	 */
 	index_set_state_flags(indexRelationId, INDEX_CREATE_SET_READY);
 
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 5d47f16..7376099 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -1033,6 +1033,19 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 							break;
 						}
 
+						/*
+						 * If this tuple was ever WARM updated or is a WARM
+						 * tuple, there could be multiple index entries
+						 * pointing to the root of this chain. We can't do
+						 * index-only scans for such tuples without
+						 * rechecking the index keys, so mark the page as
+						 * !all_visible.
+						 */
+						if (HeapTupleHeaderIsHeapWarmTuple(tuple.t_data))
+						{
+							all_visible = false;
+							break;
+						}
+
 						/* Track newest xmin on page. */
 						if (TransactionIdFollows(xmin, visibility_cutoff_xid))
 							visibility_cutoff_xid = xmin;
@@ -2159,6 +2172,18 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
 						break;
 					}
 
+					/*
+					 * If this tuple was ever WARM updated or is a WARM tuple,
+					 * there could be multiple index entries pointing to the
+					 * root of this chain. We can't do index-only scans for
+					 * such tuples without rechecking the index keys, so mark
+					 * the page as !all_visible.
+					 */
+					if (HeapTupleHeaderIsHeapWarmTuple(tuple.t_data))
+					{
+						all_visible = false;
+					}
+
 					/* Track newest xmin on page. */
 					if (TransactionIdFollows(xmin, *visibility_cutoff_xid))
 						*visibility_cutoff_xid = xmin;
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 2142273..d62d2de 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -270,6 +270,8 @@ ExecCloseIndices(ResultRelInfo *resultRelInfo)
 List *
 ExecInsertIndexTuples(TupleTableSlot *slot,
 					  ItemPointer tupleid,
+					  ItemPointer root_tid,
+					  Bitmapset *modified_attrs,
 					  EState *estate,
 					  bool noDupErr,
 					  bool *specConflict,
@@ -324,6 +326,17 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 		if (!indexInfo->ii_ReadyForInserts)
 			continue;
 
+		/*
+		 * If modified_attrs is set, insert index entries only for those
+		 * indexes whose key columns have changed. All other indexes can use
+		 * their existing index pointers to look up the new tuple.
+		 */
+		if (modified_attrs)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
 		/* Check for partial index */
 		if (indexInfo->ii_Predicate != NIL)
 		{
@@ -389,7 +402,7 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 			index_insert(indexRelation, /* index relation */
 						 values,	/* array of index Datums */
 						 isnull,	/* null flags */
-						 tupleid,		/* tid of heap tuple */
+						 root_tid,		/* tid of heap or root tuple */
 						 heapRelation,	/* heap relation */
 						 checkUnique,	/* type of uniqueness check to do */
 						 indexInfo);	/* index AM may need this */
@@ -791,6 +804,9 @@ retry:
 		{
 			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
 				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
+			else
+				ItemPointerCopy(&tup->t_self, &ctid_wait);
+
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index f20d728..943a30c 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -399,6 +399,8 @@ ExecSimpleRelationInsert(EState *estate, TupleTableSlot *slot)
 
 		if (resultRelInfo->ri_NumIndices > 0)
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &(tuple->t_self),
+												   NULL,
 												   estate, false, NULL,
 												   NIL);
 
@@ -445,6 +447,8 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 	if (!skip_tuple)
 	{
 		List	   *recheckIndexes = NIL;
+		bool		warm_update;
+		Bitmapset  *modified_attrs;
 
 		/* Check the constraints of the tuple */
 		if (rel->rd_att->constr)
@@ -455,13 +459,30 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 
 		/* OK, update the tuple and index entries for it */
 		simple_heap_update(rel, &searchslot->tts_tuple->t_self,
-						   slot->tts_tuple);
+						   slot->tts_tuple, &modified_attrs, &warm_update);
 
 		if (resultRelInfo->ri_NumIndices > 0 &&
-			!HeapTupleIsHeapOnly(slot->tts_tuple))
+			(!HeapTupleIsHeapOnly(slot->tts_tuple) || warm_update))
+		{
+			ItemPointerData root_tid;
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self,
+						&root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
+
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL,
 												   NIL);
+		}
 
 		/* AFTER ROW UPDATE Triggers */
 		ExecARUpdateTriggers(estate, resultRelInfo,
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index c1aa9f1..35b0b83 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -39,6 +39,7 @@
 
 #include "access/relscan.h"
 #include "access/transam.h"
+#include "access/valid.h"
 #include "executor/execdebug.h"
 #include "executor/nodeBitmapHeapscan.h"
 #include "pgstat.h"
@@ -314,11 +315,27 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 			OffsetNumber offnum = tbmres->offsets[curslot];
 			ItemPointerData tid;
 			HeapTupleData heapTuple;
+			bool recheck = false;
 
 			ItemPointerSet(&tid, page, offnum);
 			if (heap_hot_search_buffer(&tid, scan->rs_rd, buffer, snapshot,
-									   &heapTuple, NULL, true))
-				scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+									   &heapTuple, NULL, true, &recheck))
+			{
+				bool valid = true;
+
+				if (scan->rs_key)
+					HeapKeyTest(&heapTuple, RelationGetDescr(scan->rs_rd),
+							scan->rs_nkeys, scan->rs_key, valid);
+				if (valid)
+					scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+
+				/*
+				 * If the heap tuple needs a recheck because of a WARM update,
+				 * treat this as the lossy case so the quals are re-evaluated.
+				 */
+				if (recheck)
+					tbmres->recheck = true;
+			}
 		}
 	}
 	else
diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c
index cb6aff9..355a2d8 100644
--- a/src/backend/executor/nodeIndexscan.c
+++ b/src/backend/executor/nodeIndexscan.c
@@ -142,10 +142,10 @@ IndexNext(IndexScanState *node)
 					   false);	/* don't pfree */
 
 		/*
-		 * If the index was lossy, we have to recheck the index quals using
-		 * the fetched tuple.
+		 * If the index was lossy or the tuple was WARM, we have to recheck
+		 * the index quals using the fetched tuple.
 		 */
-		if (scandesc->xs_recheck)
+		if (scandesc->xs_recheck || scandesc->xs_tuple_recheck)
 		{
 			econtext->ecxt_scantuple = slot;
 			ResetExprContext(econtext);
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index 95e1589..a1f3440 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -512,6 +512,7 @@ ExecInsert(ModifyTableState *mtstate,
 
 			/* insert index entries for tuple */
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												 &(tuple->t_self), NULL,
 												 estate, true, &specConflict,
 												   arbiterIndexes);
 
@@ -558,6 +559,7 @@ ExecInsert(ModifyTableState *mtstate,
 			/* insert index entries for tuple */
 			if (resultRelInfo->ri_NumIndices > 0)
 				recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+													   &(tuple->t_self), NULL,
 													   estate, false, NULL,
 													   arbiterIndexes);
 		}
@@ -891,6 +893,9 @@ ExecUpdate(ItemPointer tupleid,
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
 	List	   *recheckIndexes = NIL;
+	Bitmapset  *modified_attrs = NULL;
+	ItemPointerData	root_tid;
+	bool		warm_update;
 
 	/*
 	 * abort the operation if not running transactions
@@ -1007,7 +1012,7 @@ lreplace:;
 							 estate->es_output_cid,
 							 estate->es_crosscheck_snapshot,
 							 true /* wait for commit */ ,
-							 &hufd, &lockmode);
+							 &hufd, &lockmode, &modified_attrs, &warm_update);
 		switch (result)
 		{
 			case HeapTupleSelfUpdated:
@@ -1094,10 +1099,28 @@ lreplace:;
 		 * the t_self field.
 		 *
 		 * If it's a HOT update, we mustn't insert new index entries.
+		 *
+		 * If it's a WARM update, then we must insert new entries with TID
+		 * pointing to the root of the WARM chain.
 		 */
-		if (resultRelInfo->ri_NumIndices > 0 && !HeapTupleIsHeapOnly(tuple))
+		if (resultRelInfo->ri_NumIndices > 0 &&
+			(!HeapTupleIsHeapOnly(tuple) || warm_update))
+		{
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self, &root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL, NIL);
+		}
 	}
 
 	if (canSetTag)
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index 2fb9a8b..35cc6c5 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -1823,7 +1823,7 @@ pgstat_count_heap_insert(Relation rel, int n)
  * pgstat_count_heap_update - count a tuple update
  */
 void
-pgstat_count_heap_update(Relation rel, bool hot)
+pgstat_count_heap_update(Relation rel, bool hot, bool warm)
 {
 	PgStat_TableStatus *pgstat_info = rel->pgstat_info;
 
@@ -1841,6 +1841,8 @@ pgstat_count_heap_update(Relation rel, bool hot)
 		/* t_tuples_hot_updated is nontransactional, so just advance it */
 		if (hot)
 			pgstat_info->t_counts.t_tuples_hot_updated++;
+		else if (warm)
+			pgstat_info->t_counts.t_tuples_warm_updated++;
 	}
 }
 
@@ -4088,6 +4090,7 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->tuples_updated = 0;
 		result->tuples_deleted = 0;
 		result->tuples_hot_updated = 0;
+		result->tuples_warm_updated = 0;
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
 		result->changes_since_analyze = 0;
@@ -5197,6 +5200,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated = tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted = tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated = tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated = tabmsg->t_counts.t_tuples_warm_updated;
 			tabentry->n_live_tuples = tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples = tabmsg->t_counts.t_delta_dead_tuples;
 			tabentry->changes_since_analyze = tabmsg->t_counts.t_changed_tuples;
@@ -5224,6 +5228,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated += tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted += tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated += tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated += tabmsg->t_counts.t_tuples_warm_updated;
 			/* If table was truncated, first reset the live/dead counters */
 			if (tabmsg->t_counts.t_truncated)
 			{
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index a987d0d..b8677f3 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -145,6 +145,22 @@ pg_stat_get_tuples_hot_updated(PG_FUNCTION_ARGS)
 
 
 Datum
+pg_stat_get_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_StatTabEntry *tabentry;
+
+	if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+
+Datum
 pg_stat_get_live_tuples(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
@@ -1644,6 +1660,21 @@ pg_stat_get_xact_tuples_hot_updated(PG_FUNCTION_ARGS)
 }
 
 Datum
+pg_stat_get_xact_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_TableStatus *tabentry;
+
+	if ((tabentry = find_tabstat_entry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->t_counts.t_tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+Datum
 pg_stat_get_xact_blocks_fetched(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 9001e20..c85898c 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -2338,6 +2338,7 @@ RelationDestroyRelation(Relation relation, bool remember_tupdesc)
 	list_free_deep(relation->rd_fkeylist);
 	list_free(relation->rd_indexlist);
 	bms_free(relation->rd_indexattr);
+	bms_free(relation->rd_exprindexattr);
 	bms_free(relation->rd_keyattr);
 	bms_free(relation->rd_pkattr);
 	bms_free(relation->rd_idattr);
@@ -4352,6 +4353,13 @@ RelationGetIndexList(Relation relation)
 		return list_copy(relation->rd_indexlist);
 
 	/*
+	 * If the index list was invalidated, we must also invalidate the index
+	 * attribute list (which in turn invalidates derived attribute sets such
+	 * as the primary key and replica identity bitmaps)
+	 */
+	relation->rd_indexattr = NULL;
+
+	/*
 	 * We build the list we intend to return (in the caller's context) while
 	 * doing the scan.  After successfully completing the scan, we copy that
 	 * list into the relcache entry.  This avoids cache-context memory leakage
@@ -4759,15 +4767,19 @@ Bitmapset *
 RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 {
 	Bitmapset  *indexattrs;		/* indexed columns */
+	Bitmapset  *exprindexattrs;	/* indexed columns in expression/predicate
+									 indexes */
 	Bitmapset  *uindexattrs;	/* columns in unique indexes */
 	Bitmapset  *pkindexattrs;	/* columns in the primary index */
 	Bitmapset  *idindexattrs;	/* columns in the replica identity */
+	Bitmapset  *indxnotreadyattrs;	/* columns in not ready indexes */
 	List	   *indexoidlist;
 	List	   *newindexoidlist;
 	Oid			relpkindex;
 	Oid			relreplindex;
 	ListCell   *l;
 	MemoryContext oldcxt;
+	bool		supportswarm = true;	/* true if the table can be WARM updated */
 
 	/* Quick exit if we already computed the result. */
 	if (relation->rd_indexattr != NULL)
@@ -4782,6 +4794,10 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				return bms_copy(relation->rd_pkattr);
 			case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 				return bms_copy(relation->rd_idattr);
+			case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+				return bms_copy(relation->rd_exprindexattr);
+			case INDEX_ATTR_BITMAP_NOTREADY:
+				return bms_copy(relation->rd_indxnotreadyattr);
 			default:
 				elog(ERROR, "unknown attrKind %u", attrKind);
 		}
@@ -4822,9 +4838,11 @@ restart:
 	 * won't be returned at all by RelationGetIndexList.
 	 */
 	indexattrs = NULL;
+	exprindexattrs = NULL;
 	uindexattrs = NULL;
 	pkindexattrs = NULL;
 	idindexattrs = NULL;
+	indxnotreadyattrs = NULL;
 	foreach(l, indexoidlist)
 	{
 		Oid			indexOid = lfirst_oid(l);
@@ -4861,6 +4879,10 @@ restart:
 				indexattrs = bms_add_member(indexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
 
+				if (!indexInfo->ii_ReadyForInserts)
+					indxnotreadyattrs = bms_add_member(indxnotreadyattrs,
+							   attrnum - FirstLowInvalidHeapAttributeNumber);
+
 				if (isKey)
 					uindexattrs = bms_add_member(uindexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
@@ -4876,10 +4898,29 @@ restart:
 		}
 
 		/* Collect all attributes used in expressions, too */
-		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &exprindexattrs);
 
 		/* Collect all attributes in the index predicate, too */
-		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &exprindexattrs);
+
+		/*
+		 * indexattrs should include attributes referenced in index expressions
+		 * and predicates too
+		 */
+		indexattrs = bms_add_members(indexattrs, exprindexattrs);
+
+		if (!indexInfo->ii_ReadyForInserts)
+			indxnotreadyattrs = bms_add_members(indxnotreadyattrs,
+					exprindexattrs);
+
+		/*
+		 * Check whether the index AM defines an amrecheck method. If it does
+		 * not, the index cannot participate in WARM chains, so disable WARM
+		 * updates on the table entirely.
+		 */
+		if (!indexDesc->rd_amroutine->amrecheck)
+			supportswarm = false;
+
 
 		index_close(indexDesc, AccessShareLock);
 	}
@@ -4912,15 +4953,22 @@ restart:
 		goto restart;
 	}
 
+	/* Remember if the table can do WARM updates */
+	relation->rd_supportswarm = supportswarm;
+
 	/* Don't leak the old values of these bitmaps, if any */
 	bms_free(relation->rd_indexattr);
 	relation->rd_indexattr = NULL;
+	bms_free(relation->rd_exprindexattr);
+	relation->rd_exprindexattr = NULL;
 	bms_free(relation->rd_keyattr);
 	relation->rd_keyattr = NULL;
 	bms_free(relation->rd_pkattr);
 	relation->rd_pkattr = NULL;
 	bms_free(relation->rd_idattr);
 	relation->rd_idattr = NULL;
+	bms_free(relation->rd_indxnotreadyattr);
+	relation->rd_indxnotreadyattr = NULL;
 
 	/*
 	 * Now save copies of the bitmaps in the relcache entry.  We intentionally
@@ -4933,7 +4981,9 @@ restart:
 	relation->rd_keyattr = bms_copy(uindexattrs);
 	relation->rd_pkattr = bms_copy(pkindexattrs);
 	relation->rd_idattr = bms_copy(idindexattrs);
-	relation->rd_indexattr = bms_copy(indexattrs);
+	relation->rd_exprindexattr = bms_copy(exprindexattrs);
+	relation->rd_indexattr = bms_copy(bms_union(indexattrs, exprindexattrs));
+	relation->rd_indxnotreadyattr = bms_copy(indxnotreadyattrs);
 	MemoryContextSwitchTo(oldcxt);
 
 	/* We return our original working copy for caller to play with */
@@ -4947,6 +4997,10 @@ restart:
 			return bms_copy(relation->rd_pkattr);
 		case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 			return idindexattrs;
+		case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+			return exprindexattrs;
+		case INDEX_ATTR_BITMAP_NOTREADY:
+			return indxnotreadyattrs;
 		default:
 			elog(ERROR, "unknown attrKind %u", attrKind);
 			return NULL;
@@ -5559,6 +5613,7 @@ load_relcache_init_file(bool shared)
 		rel->rd_keyattr = NULL;
 		rel->rd_pkattr = NULL;
 		rel->rd_idattr = NULL;
+		rel->rd_indxnotreadyattr = NULL;
 		rel->rd_pubactions = NULL;
 		rel->rd_createSubid = InvalidSubTransactionId;
 		rel->rd_newRelfilenodeSubid = InvalidSubTransactionId;
diff --git a/src/include/access/amapi.h b/src/include/access/amapi.h
index f919cf8..d7702e5 100644
--- a/src/include/access/amapi.h
+++ b/src/include/access/amapi.h
@@ -13,6 +13,7 @@
 #define AMAPI_H
 
 #include "access/genam.h"
+#include "access/itup.h"
 
 /*
  * We don't wish to include planner header files here, since most of an index
@@ -152,6 +153,10 @@ typedef void (*aminitparallelscan_function) (void *target);
 /* (re)start parallel index scan */
 typedef void (*amparallelrescan_function) (IndexScanDesc scan);
 
+/* recheck index tuple and heap tuple match */
+typedef bool (*amrecheck_function) (Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+
 /*
  * API struct for an index AM.  Note this must be stored in a single palloc'd
  * chunk of memory.
@@ -217,6 +222,9 @@ typedef struct IndexAmRoutine
 	amestimateparallelscan_function amestimateparallelscan;		/* can be NULL */
 	aminitparallelscan_function aminitparallelscan;		/* can be NULL */
 	amparallelrescan_function amparallelrescan; /* can be NULL */
+
+	/* interface function to support WARM */
+	amrecheck_function amrecheck;		/* can be NULL */
 } IndexAmRoutine;
 
 
diff --git a/src/include/access/hash.h b/src/include/access/hash.h
index bfdfed8..0af6b4e 100644
--- a/src/include/access/hash.h
+++ b/src/include/access/hash.h
@@ -391,4 +391,8 @@ extern void hashbucketcleanup(Relation rel, Bucket cur_bucket,
 				  bool bucket_has_garbage,
 				  IndexBulkDeleteCallback callback, void *callback_state);
 
+/* hash.c */
+extern bool hashrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+
 #endif   /* HASH_H */
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 95aa976..9412c3a 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -137,9 +137,10 @@ extern bool heap_fetch(Relation relation, Snapshot snapshot,
 		   Relation stats_relation);
 extern bool heap_hot_search_buffer(ItemPointer tid, Relation relation,
 					   Buffer buffer, Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call);
+					   bool *all_dead, bool first_call, bool *recheck);
 extern bool heap_hot_search(ItemPointer tid, Relation relation,
-				Snapshot snapshot, bool *all_dead);
+				Snapshot snapshot, bool *all_dead,
+				bool *recheck, Buffer *buffer, HeapTuple heapTuple);
 
 extern void heap_get_latest_tid(Relation relation, Snapshot snapshot,
 					ItemPointer tid);
@@ -161,7 +162,8 @@ extern void heap_abort_speculative(Relation relation, HeapTuple tuple);
 extern HTSU_Result heap_update(Relation relation, ItemPointer otid,
 			HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode);
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update);
 extern HTSU_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
 				CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
 				bool follow_update,
@@ -176,7 +178,9 @@ extern bool heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple);
 extern Oid	simple_heap_insert(Relation relation, HeapTuple tup);
 extern void simple_heap_delete(Relation relation, ItemPointer tid);
 extern void simple_heap_update(Relation relation, ItemPointer otid,
-				   HeapTuple tup);
+				   HeapTuple tup,
+				   Bitmapset **modified_attrs,
+				   bool *warm_update);
 
 extern void heap_sync(Relation relation);
 
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index e6019d5..9b081bf 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -80,6 +80,7 @@
 #define XLH_UPDATE_CONTAINS_NEW_TUPLE			(1<<4)
 #define XLH_UPDATE_PREFIX_FROM_OLD				(1<<5)
 #define XLH_UPDATE_SUFFIX_FROM_OLD				(1<<6)
+#define XLH_UPDATE_WARM_UPDATE					(1<<7)
 
 /* convenience macro for checking whether any form of old tuple was logged */
 #define XLH_UPDATE_CONTAINS_OLD						\
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 4d614b7..b5891ca 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,7 +260,8 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x0800 are available */
+#define HEAP_WARM_TUPLE			0x0800	/* tuple is a member of a WARM
+										 * chain */
 #define HEAP_LATEST_TUPLE		0x1000	/*
 										 * This is the last tuple in chain and
 										 * ip_posid points to the root line
@@ -271,7 +272,7 @@ struct HeapTupleHeaderData
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF800	/* visibility-related bits */
 
 
 /*
@@ -510,6 +511,21 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+#define HeapTupleHeaderSetHeapWarmTuple(tup) \
+do { \
+	(tup)->t_infomask2 |= HEAP_WARM_TUPLE; \
+} while (0)
+
+#define HeapTupleHeaderClearHeapWarmTuple(tup) \
+do { \
+	(tup)->t_infomask2 &= ~HEAP_WARM_TUPLE; \
+} while (0)
+
+#define HeapTupleHeaderIsHeapWarmTuple(tup) \
+( \
+  ((tup)->t_infomask2 & HEAP_WARM_TUPLE) != 0 \
+)
+
 /*
  * Mark this as the last tuple in the HOT chain. Before PG v10 we used to store
  * the TID of the tuple itself in t_ctid field to mark the end of the chain.
@@ -785,6 +801,15 @@ struct MinimalTupleData
 #define HeapTupleClearHeapOnly(tuple) \
 		HeapTupleHeaderClearHeapOnly((tuple)->t_data)
 
+#define HeapTupleIsHeapWarmTuple(tuple) \
+		HeapTupleHeaderIsHeapWarmTuple((tuple)->t_data)
+
+#define HeapTupleSetHeapWarmTuple(tuple) \
+		HeapTupleHeaderSetHeapWarmTuple((tuple)->t_data)
+
+#define HeapTupleClearHeapWarmTuple(tuple) \
+		HeapTupleHeaderClearHeapWarmTuple((tuple)->t_data)
+
 #define HeapTupleGetOid(tuple) \
 		HeapTupleHeaderGetOid((tuple)->t_data)
 
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index f9304db..d4b35ca 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -537,6 +537,8 @@ extern bytea *btoptions(Datum reloptions, bool validate);
 extern bool btproperty(Oid index_oid, int attno,
 		   IndexAMProperty prop, const char *propname,
 		   bool *res, bool *isnull);
+extern bool btrecheck(Relation indexRel, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * prototypes for functions in nbtvalidate.c
diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h
index 3fc726d..f971b43 100644
--- a/src/include/access/relscan.h
+++ b/src/include/access/relscan.h
@@ -119,7 +119,8 @@ typedef struct IndexScanDescData
 	HeapTupleData xs_ctup;		/* current heap tuple, if any */
 	Buffer		xs_cbuf;		/* current heap buffer in scan, if any */
 	/* NB: if xs_cbuf is not InvalidBuffer, we hold a pin on that buffer */
-	bool		xs_recheck;		/* T means scan keys must be rechecked */
+	bool		xs_recheck;		/* T means scan keys must be rechecked for each tuple */
+	bool		xs_tuple_recheck;	/* T means scan keys must be rechecked for current tuple */
 
 	/*
 	 * When fetching with an ordering operator, the values of the ORDER BY
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index ec4aedb..ec42c30 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -2740,6 +2740,8 @@ DATA(insert OID = 1933 (  pg_stat_get_tuples_deleted	PGNSP PGUID 12 1 0 0 0 f f
 DESCR("statistics: number of tuples deleted");
 DATA(insert OID = 1972 (  pg_stat_get_tuples_hot_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated");
+DATA(insert OID = 3353 (  pg_stat_get_tuples_warm_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated");
 DATA(insert OID = 2878 (  pg_stat_get_live_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_live_tuples _null_ _null_ _null_ ));
 DESCR("statistics: number of live tuples");
 DATA(insert OID = 2879 (  pg_stat_get_dead_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_dead_tuples _null_ _null_ _null_ ));
@@ -2892,6 +2894,8 @@ DATA(insert OID = 3042 (  pg_stat_get_xact_tuples_deleted		PGNSP PGUID 12 1 0 0
 DESCR("statistics: number of tuples deleted in current transaction");
 DATA(insert OID = 3043 (  pg_stat_get_xact_tuples_hot_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated in current transaction");
+DATA(insert OID = 3354 (  pg_stat_get_xact_tuples_warm_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated in current transaction");
 DATA(insert OID = 3044 (  pg_stat_get_xact_blocks_fetched		PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_fetched _null_ _null_ _null_ ));
 DESCR("statistics: number of blocks fetched in current transaction");
 DATA(insert OID = 3045 (  pg_stat_get_xact_blocks_hit			PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_hit _null_ _null_ _null_ ));
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 02dbe7b..c4495a3 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -382,6 +382,7 @@ extern void UnregisterExprContextCallback(ExprContext *econtext,
 extern void ExecOpenIndices(ResultRelInfo *resultRelInfo, bool speculative);
 extern void ExecCloseIndices(ResultRelInfo *resultRelInfo);
 extern List *ExecInsertIndexTuples(TupleTableSlot *slot, ItemPointer tupleid,
+					  ItemPointer root_tid, Bitmapset *modified_attrs,
 					  EState *estate, bool noDupErr, bool *specConflict,
 					  List *arbiterIndexes);
 extern bool ExecCheckIndexConstraints(TupleTableSlot *slot, EState *estate,
diff --git a/src/include/executor/nodeIndexscan.h b/src/include/executor/nodeIndexscan.h
index ea3f3a5..ebeec74 100644
--- a/src/include/executor/nodeIndexscan.h
+++ b/src/include/executor/nodeIndexscan.h
@@ -41,5 +41,4 @@ extern void ExecIndexEvalRuntimeKeys(ExprContext *econtext,
 extern bool ExecIndexEvalArrayKeys(ExprContext *econtext,
 					   IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 extern bool ExecIndexAdvanceArrayKeys(IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
-
 #endif   /* NODEINDEXSCAN_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 2fde67a..0b16157 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -64,6 +64,7 @@ typedef struct IndexInfo
 	NodeTag		type;
 	int			ii_NumIndexAttrs;
 	AttrNumber	ii_KeyAttrNumbers[INDEX_MAX_KEYS];
+	Bitmapset  *ii_indxattrs;	/* bitmap of all columns used in this index */
 	List	   *ii_Expressions; /* list of Expr */
 	List	   *ii_ExpressionsState;	/* list of ExprState */
 	List	   *ii_Predicate;	/* list of Expr */
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 0062fb8..70a7c8d 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -105,6 +105,7 @@ typedef struct PgStat_TableCounts
 	PgStat_Counter t_tuples_updated;
 	PgStat_Counter t_tuples_deleted;
 	PgStat_Counter t_tuples_hot_updated;
+	PgStat_Counter t_tuples_warm_updated;
 	bool		t_truncated;
 
 	PgStat_Counter t_delta_live_tuples;
@@ -625,6 +626,7 @@ typedef struct PgStat_StatTabEntry
 	PgStat_Counter tuples_updated;
 	PgStat_Counter tuples_deleted;
 	PgStat_Counter tuples_hot_updated;
+	PgStat_Counter tuples_warm_updated;
 
 	PgStat_Counter n_live_tuples;
 	PgStat_Counter n_dead_tuples;
@@ -1178,7 +1180,7 @@ pgstat_report_wait_end(void)
 	(pgStatBlockWriteTime += (n))
 
 extern void pgstat_count_heap_insert(Relation rel, int n);
-extern void pgstat_count_heap_update(Relation rel, bool hot);
+extern void pgstat_count_heap_update(Relation rel, bool hot, bool warm);
 extern void pgstat_count_heap_delete(Relation rel);
 extern void pgstat_count_truncate(Relation rel);
 extern void pgstat_update_heap_dead_tuples(Relation rel, int delta);
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index a617a7c..fbac7c0 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -138,9 +138,14 @@ typedef struct RelationData
 
 	/* data managed by RelationGetIndexAttrBitmap: */
 	Bitmapset  *rd_indexattr;	/* identifies columns used in indexes */
+	Bitmapset  *rd_exprindexattr; /* identifies columns used in expression
+									 or predicate indexes */
+	Bitmapset  *rd_indxnotreadyattr;	/* columns used by indexes not yet
+										   ready */
 	Bitmapset  *rd_keyattr;		/* cols that can be ref'd by foreign keys */
 	Bitmapset  *rd_pkattr;		/* cols included in primary key */
 	Bitmapset  *rd_idattr;		/* included in replica identity index */
+	bool		rd_supportswarm;	/* true if the table can be WARM updated */
 
 	PublicationActions  *rd_pubactions;	/* publication actions */
 
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index da36b67..d18bd09 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -50,7 +50,9 @@ typedef enum IndexAttrBitmapKind
 	INDEX_ATTR_BITMAP_ALL,
 	INDEX_ATTR_BITMAP_KEY,
 	INDEX_ATTR_BITMAP_PRIMARY_KEY,
-	INDEX_ATTR_BITMAP_IDENTITY_KEY
+	INDEX_ATTR_BITMAP_IDENTITY_KEY,
+	INDEX_ATTR_BITMAP_EXPR_PREDICATE,
+	INDEX_ATTR_BITMAP_NOTREADY
 } IndexAttrBitmapKind;
 
 extern Bitmapset *RelationGetIndexAttrBitmap(Relation relation,
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index c661f1d..561d9579 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1732,6 +1732,7 @@ pg_stat_all_tables| SELECT c.oid AS relid,
     pg_stat_get_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_tuples_deleted(c.oid) AS n_tup_del,
     pg_stat_get_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_tuples_warm_updated(c.oid) AS n_tup_warm_upd,
     pg_stat_get_live_tuples(c.oid) AS n_live_tup,
     pg_stat_get_dead_tuples(c.oid) AS n_dead_tup,
     pg_stat_get_mod_since_analyze(c.oid) AS n_mod_since_analyze,
@@ -1875,6 +1876,7 @@ pg_stat_sys_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1918,6 +1920,7 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1955,7 +1958,8 @@ pg_stat_xact_all_tables| SELECT c.oid AS relid,
     pg_stat_get_xact_tuples_inserted(c.oid) AS n_tup_ins,
     pg_stat_get_xact_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_xact_tuples_deleted(c.oid) AS n_tup_del,
-    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd
+    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_xact_tuples_warm_updated(c.oid) AS n_tup_warm_upd
    FROM ((pg_class c
      LEFT JOIN pg_index i ON ((c.oid = i.indrelid)))
      LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
@@ -1971,7 +1975,8 @@ pg_stat_xact_sys_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname = ANY (ARRAY['pg_catalog'::name, 'information_schema'::name])) OR (pg_stat_xact_all_tables.schemaname ~ '^pg_toast'::text));
 pg_stat_xact_user_functions| SELECT p.oid AS funcid,
@@ -1993,7 +1998,8 @@ pg_stat_xact_user_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname <> ALL (ARRAY['pg_catalog'::name, 'information_schema'::name])) AND (pg_stat_xact_all_tables.schemaname !~ '^pg_toast'::text));
 pg_statio_all_indexes| SELECT c.oid AS relid,
diff --git a/src/test/regress/expected/warm.out b/src/test/regress/expected/warm.out
new file mode 100644
index 0000000..6391891
--- /dev/null
+++ b/src/test/regress/expected/warm.out
@@ -0,0 +1,367 @@
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+-- Updating a non-index column would normally be a HOT update, but the
+-- page has no free space left, so this likely becomes a non-HOT update
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab1
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Check if index only scan works correctly
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab1
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx1 on updtst_tab1
+   Index Cond: (b = 140001)
+(2 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab1;
+------------------
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab2
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE a = 1;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab2
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE b = 701;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+VACUUM updtst_tab2;
+EXPLAIN (costs off) SELECT b FROM updtst_tab2 WHERE b = 701;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx2 on updtst_tab2
+   Index Cond: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab2 WHERE b = 701;
+  b  
+-----
+ 701
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab2;
+------------------
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 1;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 701;
+       QUERY PLAN        
+-------------------------
+ Seq Scan on updtst_tab3
+   Filter: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 701;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+  b   
+------
+ 1421
+(1 row)
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+SET enable_seqscan = false;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    98
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 2;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 702;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx3 on updtst_tab3
+   Index Cond: (b = 702)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 702;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+  b   
+------
+ 1422
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab3;
+------------------
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+explain select * from test_warm where lower(a) = 'test';
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on test_warm  (cost=4.18..12.65 rows=4 width=64)
+   Recheck Cond: (lower(a) = 'test'::text)
+   ->  Bitmap Index Scan on test_warmindx  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (lower(a) = 'test'::text)
+(4 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+select *, ctid from test_warm where a = 'test';
+ a | b | ctid 
+---+---+------
+(0 rows)
+
+select *, ctid from test_warm where a = 'TEST';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+                                   QUERY PLAN                                    
+---------------------------------------------------------------------------------
+ Index Scan using test_warmindx on test_warm  (cost=0.15..20.22 rows=4 width=64)
+   Index Cond: (lower(a) = 'test'::text)
+(2 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+DROP TABLE test_warm;
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index 13bf494..0b6193b 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -42,6 +42,8 @@ test: create_type
 test: create_table
 test: create_function_2
 
+test: warm
+
 # ----------
 # Load huge amounts of data
 # We should split the data files into single files and then
diff --git a/src/test/regress/sql/warm.sql b/src/test/regress/sql/warm.sql
new file mode 100644
index 0000000..c025087
--- /dev/null
+++ b/src/test/regress/sql/warm.sql
@@ -0,0 +1,171 @@
+-- WARM update tests
+
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+
+-- Updating a non-index column would normally be a HOT update, but the
+-- page has no free space left, so this likely becomes a non-HOT update
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Check if index only scan works correctly
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab1;
+
+------------------
+
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE a = 1;
+
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE b = 701;
+
+VACUUM updtst_tab2;
+EXPLAIN (costs off) SELECT b FROM updtst_tab2 WHERE b = 701;
+SELECT b FROM updtst_tab2 WHERE b = 701;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab2;
+------------------
+
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+SELECT * FROM updtst_tab3 WHERE a = 1;
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+SET enable_seqscan = false;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+SELECT * FROM updtst_tab3 WHERE a = 2;
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab3;
+------------------
+
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where a = 'test';
+select *, ctid from test_warm where a = 'TEST';
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+DROP TABLE test_warm;
-- 
2.1.4

0006-warm-chain-conversion-v16.patch (text/plain; charset=us-ascii)
From 2c901fe7c1829d21e3630070750c12d4415fb40c Mon Sep 17 00:00:00 2001
From: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Wed, 8 Mar 2017 13:51:12 -0300
Subject: [PATCH 6/6] warm chain conversion v16

---
 contrib/bloom/blvacuum.c                 |   2 +-
 src/backend/access/gin/ginvacuum.c       |   3 +-
 src/backend/access/gist/gistvacuum.c     |   3 +-
 src/backend/access/hash/hash.c           |  82 ++++-
 src/backend/access/hash/hashpage.c       |  14 +
 src/backend/access/heap/heapam.c         | 323 +++++++++++++++--
 src/backend/access/heap/tuptoaster.c     |   3 +-
 src/backend/access/index/indexam.c       |   9 +-
 src/backend/access/nbtree/nbtpage.c      |  51 ++-
 src/backend/access/nbtree/nbtree.c       |  75 +++-
 src/backend/access/nbtree/nbtxlog.c      |  99 +----
 src/backend/access/rmgrdesc/heapdesc.c   |  26 +-
 src/backend/access/rmgrdesc/nbtdesc.c    |   4 +-
 src/backend/access/spgist/spgvacuum.c    |  12 +-
 src/backend/catalog/index.c              |  11 +-
 src/backend/catalog/indexing.c           |   5 +-
 src/backend/commands/constraint.c        |   3 +-
 src/backend/commands/vacuumlazy.c        | 602 +++++++++++++++++++++++++++++--
 src/backend/executor/execIndexing.c      |   3 +-
 src/backend/replication/logical/decode.c |  13 +-
 src/backend/utils/time/combocid.c        |   4 +-
 src/backend/utils/time/tqual.c           |  24 +-
 src/include/access/amapi.h               |   9 +
 src/include/access/genam.h               |  22 +-
 src/include/access/hash.h                |  11 +
 src/include/access/heapam.h              |  18 +
 src/include/access/heapam_xlog.h         |  23 +-
 src/include/access/htup_details.h        |  84 ++++-
 src/include/access/nbtree.h              |  18 +-
 src/include/access/nbtxlog.h             |  26 +-
 src/include/commands/progress.h          |   1 +
 31 files changed, 1321 insertions(+), 262 deletions(-)

diff --git a/contrib/bloom/blvacuum.c b/contrib/bloom/blvacuum.c
index 04abd0f..ff50361 100644
--- a/contrib/bloom/blvacuum.c
+++ b/contrib/bloom/blvacuum.c
@@ -88,7 +88,7 @@ blbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 		while (itup < itupEnd)
 		{
 			/* Do we have to delete this tuple? */
-			if (callback(&itup->heapPtr, callback_state))
+			if (callback(&itup->heapPtr, false, callback_state) == IBDCR_DELETE)
 			{
 				/* Yes; adjust count of tuples that will be left on page */
 				BloomPageGetOpaque(page)->maxoff--;
diff --git a/src/backend/access/gin/ginvacuum.c b/src/backend/access/gin/ginvacuum.c
index c9ccfee..8ed71c5 100644
--- a/src/backend/access/gin/ginvacuum.c
+++ b/src/backend/access/gin/ginvacuum.c
@@ -56,7 +56,8 @@ ginVacuumItemPointers(GinVacuumState *gvs, ItemPointerData *items,
 	 */
 	for (i = 0; i < nitem; i++)
 	{
-		if (gvs->callback(items + i, gvs->callback_state))
+		if (gvs->callback(items + i, false, gvs->callback_state) ==
+				IBDCR_DELETE)
 		{
 			gvs->result->tuples_removed += 1;
 			if (!tmpitems)
diff --git a/src/backend/access/gist/gistvacuum.c b/src/backend/access/gist/gistvacuum.c
index 77d9d12..0955db6 100644
--- a/src/backend/access/gist/gistvacuum.c
+++ b/src/backend/access/gist/gistvacuum.c
@@ -202,7 +202,8 @@ gistbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 				iid = PageGetItemId(page, i);
 				idxtuple = (IndexTuple) PageGetItem(page, iid);
 
-				if (callback(&(idxtuple->t_tid), callback_state))
+				if (callback(&(idxtuple->t_tid), false, callback_state) ==
+						IBDCR_DELETE)
 					todelete[ntodelete++] = i;
 				else
 					stats->num_index_tuples += 1;
diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c
index 9b20ae6..5310c67 100644
--- a/src/backend/access/hash/hash.c
+++ b/src/backend/access/hash/hash.c
@@ -73,6 +73,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->ambuild = hashbuild;
 	amroutine->ambuildempty = hashbuildempty;
 	amroutine->aminsert = hashinsert;
+	amroutine->amwarminsert = hashwarminsert;
 	amroutine->ambulkdelete = hashbulkdelete;
 	amroutine->amvacuumcleanup = hashvacuumcleanup;
 	amroutine->amcanreturn = NULL;
@@ -231,11 +232,11 @@ hashbuildCallback(Relation index,
  *	Hash on the heap tuple's key, form an index tuple with hash code.
  *	Find the appropriate location for the new tuple, and put it there.
  */
-bool
-hashinsert(Relation rel, Datum *values, bool *isnull,
+static bool
+hashinsert_internal(Relation rel, Datum *values, bool *isnull,
 		   ItemPointer ht_ctid, Relation heapRel,
 		   IndexUniqueCheck checkUnique,
-		   IndexInfo *indexInfo)
+		   IndexInfo *indexInfo, bool warm_update)
 {
 	Datum		index_values[1];
 	bool		index_isnull[1];
@@ -251,6 +252,11 @@ hashinsert(Relation rel, Datum *values, bool *isnull,
 	itup = index_form_tuple(RelationGetDescr(rel), index_values, index_isnull);
 	itup->t_tid = *ht_ctid;
 
+	if (warm_update)
+		ItemPointerSetFlags(&itup->t_tid, HASH_INDEX_RED_POINTER);
+	else
+		ItemPointerClearFlags(&itup->t_tid);
+
 	_hash_doinsert(rel, itup);
 
 	pfree(itup);
@@ -258,6 +264,26 @@ hashinsert(Relation rel, Datum *values, bool *isnull,
 	return false;
 }
 
+bool
+hashinsert(Relation rel, Datum *values, bool *isnull,
+		   ItemPointer ht_ctid, Relation heapRel,
+		   IndexUniqueCheck checkUnique,
+		   IndexInfo *indexInfo)
+{
+	return hashinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, false);
+}
+
+bool
+hashwarminsert(Relation rel, Datum *values, bool *isnull,
+		   ItemPointer ht_ctid, Relation heapRel,
+		   IndexUniqueCheck checkUnique,
+		   IndexInfo *indexInfo)
+{
+	return hashinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, true);
+
+}
 
 /*
  *	hashgettuple() -- Get the next tuple in the scan.
@@ -738,6 +764,8 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 		Page		page;
 		OffsetNumber deletable[MaxOffsetNumber];
 		int			ndeletable = 0;
+		OffsetNumber colorblue[MaxOffsetNumber];
+		int			ncolorblue = 0;
 		bool		retain_pin = false;
 
 		vacuum_delay_point();
@@ -755,20 +783,35 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 			IndexTuple	itup;
 			Bucket		bucket;
 			bool		kill_tuple = false;
+			bool		color_tuple = false;
+			int			flags;
+			bool		is_red;
+			IndexBulkDeleteCallbackResult	result;
 
 			itup = (IndexTuple) PageGetItem(page,
 											PageGetItemId(page, offno));
 			htup = &(itup->t_tid);
 
+			flags = ItemPointerGetFlags(&itup->t_tid);
+			is_red = ((flags & HASH_INDEX_RED_POINTER) != 0);
+
 			/*
 			 * To remove the dead tuples, we strictly want to rely on results
 			 * of callback function.  refer btvacuumpage for detailed reason.
 			 */
-			if (callback && callback(htup, callback_state))
+			if (callback)
 			{
-				kill_tuple = true;
-				if (tuples_removed)
-					*tuples_removed += 1;
+				result = callback(htup, is_red, callback_state);
+				if (result == IBDCR_DELETE)
+				{
+					kill_tuple = true;
+					if (tuples_removed)
+						*tuples_removed += 1;
+				}
+				else if (result == IBDCR_COLOR_BLUE)
+				{
+					color_tuple = true;
+				}
 			}
 			else if (split_cleanup)
 			{
@@ -791,6 +834,12 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 				}
 			}
 
+			if (color_tuple)
+			{
+				/* color the pointer blue */
+				colorblue[ncolorblue++] = offno;
+			}
+
 			if (kill_tuple)
 			{
 				/* mark the item for deletion */
@@ -815,9 +864,24 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 		/*
 		 * Apply deletions, advance to next page and write page if needed.
 		 */
-		if (ndeletable > 0)
+		if (ndeletable > 0 || ncolorblue > 0)
 		{
-			PageIndexMultiDelete(page, deletable, ndeletable);
+			/*
+			 * Color the Red pointers Blue.
+			 *
+			 * We must do this before dealing with the dead items because
+			 * PageIndexMultiDelete may move items around to compact the
+			 * array, and hence offnums recorded earlier won't make sense
+			 * after PageIndexMultiDelete is called.
+			 */
+			if (ncolorblue > 0)
+				_hash_color_items(page, colorblue, ncolorblue);
+
+			/*
+			 * And delete the deletable items
+			 */
+			if (ndeletable > 0)
+				PageIndexMultiDelete(page, deletable, ndeletable);
 			bucket_dirty = true;
 			MarkBufferDirty(buf);
 		}
diff --git a/src/backend/access/hash/hashpage.c b/src/backend/access/hash/hashpage.c
index c73929c..7df3e12 100644
--- a/src/backend/access/hash/hashpage.c
+++ b/src/backend/access/hash/hashpage.c
@@ -1376,3 +1376,17 @@ _hash_getbucketbuf_from_hashkey(Relation rel, uint32 hashkey, int access,
 
 	return buf;
 }
+
+void
+_hash_color_items(Page page, OffsetNumber *coloritemnos, uint16 ncoloritems)
+{
+	int			i;
+	IndexTuple	itup;
+
+	for (i = 0; i < ncoloritems; i++)
+	{
+		itup = (IndexTuple) PageGetItem(page,
+				PageGetItemId(page, coloritemnos[i]));
+		ItemPointerClearFlags(&itup->t_tid);
+	}
+}
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index b9ff94d..0ffb9a9 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -1958,17 +1958,32 @@ heap_fetch(Relation relation,
 }
 
 /*
- * Check if the HOT chain containing this tid is actually a WARM chain.
- * Note that even if the WARM update ultimately aborted, we still must do a
- * recheck because the failing UPDATE when have inserted created index entries
- * which are now stale, but still referencing this chain.
+ * Check status of a (possibly) WARM chain.
+ *
+ * This function looks at a HOT/WARM chain starting at tid and returns a
+ * bitmask of information. We only follow the chain as long as it's known to
+ * be a valid HOT chain. Information returned by the function consists of:
+ *
+ *  HCWC_WARM_TUPLE - a WARM tuple is found somewhere in the chain. Note that
+ *  				  when a tuple is WARM updated, both old and new versions
+ *  				  of the tuple are treated as WARM tuples
+ *
+ *  HCWC_RED_TUPLE  - a WARM tuple belonging to the Red chain is found
+ *					  somewhere in the chain.
+ *
+ *  HCWC_BLUE_TUPLE - a WARM tuple belonging to the Blue chain is found
+ *					  somewhere in the chain.
+ *
+ *	If stop_at_warm is true, we stop at the first WARM tuple found and return
+ *	the information collected so far.
  */
-static bool
-hot_check_warm_chain(Page dp, ItemPointer tid)
+HeapCheckWarmChainStatus
+heap_check_warm_chain(Page dp, ItemPointer tid, bool stop_at_warm)
 {
-	TransactionId prev_xmax = InvalidTransactionId;
-	OffsetNumber offnum;
-	HeapTupleData heapTuple;
+	TransactionId				prev_xmax = InvalidTransactionId;
+	OffsetNumber				offnum;
+	HeapTupleData				heapTuple;
+	HeapCheckWarmChainStatus	status = 0;
 
 	offnum = ItemPointerGetOffsetNumber(tid);
 	heapTuple.t_self = *tid;
@@ -1985,7 +2000,16 @@ hot_check_warm_chain(Page dp, ItemPointer tid)
 
 		/* check for unused, dead, or redirected items */
 		if (!ItemIdIsNormal(lp))
+		{
+			if (ItemIdIsRedirected(lp))
+			{
+				/* Follow the redirect */
+				offnum = ItemIdGetRedirect(lp);
+				continue;
+			}
+			/* else must be end of chain */
 			break;
+		}
 
 		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
 		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
@@ -2000,13 +2024,30 @@ hot_check_warm_chain(Page dp, ItemPointer tid)
 			break;
 
 
-		/*
-		 * Presence of either WARM or WARM updated tuple signals possible
-		 * breakage and the caller must recheck tuple returned from this chain
-		 * for index satisfaction
-		 */
 		if (HeapTupleHeaderIsHeapWarmTuple(heapTuple.t_data))
-			return true;
+		{
+			/* We found a WARM tuple */
+			status |= HCWC_WARM_TUPLE;
+
+			/*
+			 * If we've been told to stop at the first WARM tuple, just return
+			 * whatever information we have collected so far.
+			 */
+			if (stop_at_warm)
+				return status;
+
+			/*
+			 * If it's not a Red tuple, then it's definitely a Blue tuple. Set
+			 * the appropriate bit.
+			 */
+			if (HeapTupleHeaderIsWarmRed(heapTuple.t_data))
+				status |= HCWC_RED_TUPLE;
+			else
+				status |= HCWC_BLUE_TUPLE;
+		}
+		else
+			/* Must be a tuple belonging to the Blue chain */
+			status |= HCWC_BLUE_TUPLE;
 
 		/*
 		 * Check to see if HOT chain continues past this tuple; if so fetch
@@ -2026,7 +2067,94 @@ hot_check_warm_chain(Page dp, ItemPointer tid)
 	}
 
 	/* All OK. No need to recheck */
-	return false;
+	return status;
+}
+
+/*
+ * Scan through the WARM chain starting at tid and reset all WARM-related
+ * flags. At the end, the chain will have all the characteristics of a regular
+ * HOT chain.
+ *
+ * Return the number of cleared offnums. Cleared offnums are returned in the
+ * passed-in cleared_offnums array. The caller must ensure that the array is
+ * large enough to hold the maximum number of offnums that can be cleared by
+ * this invocation of heap_clear_warm_chain().
+ */
+int
+heap_clear_warm_chain(Page dp, ItemPointer tid, OffsetNumber *cleared_offnums)
+{
+	TransactionId				prev_xmax = InvalidTransactionId;
+	OffsetNumber				offnum;
+	HeapTupleData				heapTuple;
+	int							num_cleared = 0;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+		{
+			if (ItemIdIsRedirected(lp))
+			{
+				/* Follow the redirect */
+				offnum = ItemIdGetRedirect(lp);
+				continue;
+			}
+			/* else must be end of chain */
+			break;
+		}
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		/*
+		 * Clear WARM and Red flags
+		 */
+		if (HeapTupleHeaderIsHeapWarmTuple(heapTuple.t_data))
+		{
+			HeapTupleHeaderClearHeapWarmTuple(heapTuple.t_data);
+			HeapTupleHeaderClearWarmRed(heapTuple.t_data);
+			cleared_offnums[num_cleared++] = offnum;
+		}
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
+		/*
+		 * It can't be a HOT chain if the tuple contains the root line pointer
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	return num_cleared;
 }
 
 /*
@@ -2135,7 +2263,11 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		 * possible improvements here
 		 */
 		if (recheck && *recheck == false)
-			*recheck = hot_check_warm_chain(dp, &heapTuple->t_self);
+		{
+			HeapCheckWarmChainStatus status;
+			status = heap_check_warm_chain(dp, &heapTuple->t_self, true);
+			*recheck = HCWC_IS_WARM(status);
+		}
 
 		/*
 		 * When first_call is true (and thus, skip is initially false) we'll
@@ -2888,7 +3020,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		{
 			XLogRecPtr	recptr;
 			xl_heap_multi_insert *xlrec;
-			uint8		info = XLOG_HEAP2_MULTI_INSERT;
+			uint8		info = XLOG_HEAP_MULTI_INSERT;
 			char	   *tupledata;
 			int			totaldatalen;
 			char	   *scratchptr = scratch;
@@ -2985,7 +3117,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			/* filtering by origin on a row level is much more efficient */
 			XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN);
 
-			recptr = XLogInsert(RM_HEAP2_ID, info);
+			recptr = XLogInsert(RM_HEAP_ID, info);
 
 			PageSetLSN(page, recptr);
 		}
@@ -3409,7 +3541,9 @@ l1:
 	}
 
 	/* store transaction information of xact deleting the tuple */
-	tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	tp.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(tp.t_data))
+		tp.t_data->t_infomask &= ~HEAP_MOVED;
 	tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 	tp.t_data->t_infomask |= new_infomask;
 	tp.t_data->t_infomask2 |= new_infomask2;
@@ -4172,7 +4306,9 @@ l2:
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
-		oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		oldtup.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(oldtup.t_data))
+			oldtup.t_data->t_infomask &= ~HEAP_MOVED;
 		oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		HeapTupleClearHotUpdated(&oldtup);
 		/* ... and store info about transaction updating this tuple */
@@ -4419,6 +4555,16 @@ l2:
 		}
 
 		/*
+		 * If the old tuple is already a member of the Red chain, mark the new
+		 * tuple with the same flag
+		 */
+		if (HeapTupleIsHeapWarmTupleRed(&oldtup))
+		{
+			HeapTupleSetHeapWarmTupleRed(heaptup);
+			HeapTupleSetHeapWarmTupleRed(newtup);
+		}
+
+		/*
 		 * For HOT (or WARM) updated tuples, we store the offset of the root
 		 * line pointer of this chain in the ip_posid field of the new tuple.
 		 * Usually this information will be available in the corresponding
@@ -4435,12 +4581,20 @@ l2:
 		/* Mark the old tuple as HOT-updated */
 		HeapTupleSetHotUpdated(&oldtup);
 		HeapTupleSetHeapWarmTuple(&oldtup);
+
 		/* And mark the new tuple as heap-only */
 		HeapTupleSetHeapOnly(heaptup);
+		/* Mark the new tuple as WARM tuple */
 		HeapTupleSetHeapWarmTuple(heaptup);
+		/* This update also starts a Red chain */
+		HeapTupleSetHeapWarmTupleRed(heaptup);
+		Assert(!HeapTupleIsHeapWarmTupleRed(&oldtup));
+
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
 		HeapTupleSetHeapWarmTuple(newtup);
+		HeapTupleSetHeapWarmTupleRed(newtup);
+
 		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
 			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 		else
@@ -4459,6 +4613,8 @@ l2:
 		HeapTupleClearHeapOnly(newtup);
 		HeapTupleClearHeapWarmTuple(heaptup);
 		HeapTupleClearHeapWarmTuple(newtup);
+		HeapTupleClearHeapWarmTupleRed(heaptup);
+		HeapTupleClearHeapWarmTupleRed(newtup);
 		root_offnum = InvalidOffsetNumber;
 	}
 
@@ -4477,7 +4633,9 @@ l2:
 	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
-	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	oldtup.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(oldtup.t_data))
+		oldtup.t_data->t_infomask &= ~HEAP_MOVED;
 	oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 	/* ... and store info about transaction updating this tuple */
 	Assert(TransactionIdIsValid(xmax_old_tuple));
@@ -6398,7 +6556,9 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	PageSetPrunable(page, RecentGlobalXmin);
 
 	/* store transaction information of xact deleting the tuple */
-	tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	tp.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(tp.t_data))
+		tp.t_data->t_infomask &= ~HEAP_MOVED;
 	tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 
 	/*
@@ -6972,7 +7132,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid,
 	 * Old-style VACUUM FULL is gone, but we have to keep this code as long as
 	 * we support having MOVED_OFF/MOVED_IN tuples in the database.
 	 */
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 
@@ -6991,7 +7151,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid,
 			 * have failed; whereas a non-dead MOVED_IN tuple must mean the
 			 * xvac transaction succeeded.
 			 */
-			if (tuple->t_infomask & HEAP_MOVED_OFF)
+			if (HeapTupleHeaderIsMovedOff(tuple))
 				frz->frzflags |= XLH_INVALID_XVAC;
 			else
 				frz->frzflags |= XLH_FREEZE_XVAC;
@@ -7461,7 +7621,7 @@ heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple)
 			return true;
 	}
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 		if (TransactionIdIsNormal(xid))
@@ -7544,7 +7704,7 @@ heap_tuple_needs_freeze(HeapTupleHeader tuple, TransactionId cutoff_xid,
 			return true;
 	}
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 		if (TransactionIdIsNormal(xid) &&
@@ -7570,7 +7730,7 @@ HeapTupleHeaderAdvanceLatestRemovedXid(HeapTupleHeader tuple,
 	TransactionId xmax = HeapTupleHeaderGetUpdateXid(tuple);
 	TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		if (TransactionIdPrecedes(*latestRemovedXid, xvac))
 			*latestRemovedXid = xvac;
@@ -7619,6 +7779,36 @@ log_heap_cleanup_info(RelFileNode rnode, TransactionId latestRemovedXid)
 }
 
 /*
+ * Perform XLogInsert for a heap-warm-clear operation.  Caller must already
+ * have modified the buffer and marked it dirty.
+ */
+XLogRecPtr
+log_heap_warmclear(Relation reln, Buffer buffer,
+			   OffsetNumber *cleared, int ncleared)
+{
+	xl_heap_warmclear	xlrec;
+	XLogRecPtr			recptr;
+
+	/* Caller should not call me on a non-WAL-logged relation */
+	Assert(RelationNeedsWAL(reln));
+
+	xlrec.ncleared = ncleared;
+
+	XLogBeginInsert();
+	XLogRegisterData((char *) &xlrec, SizeOfHeapWarmClear);
+
+	XLogRegisterBuffer(0, buffer, REGBUF_STANDARD);
+
+	if (ncleared > 0)
+		XLogRegisterBufData(0, (char *) cleared,
+							ncleared * sizeof(OffsetNumber));
+
+	recptr = XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_WARMCLEAR);
+
+	return recptr;
+}
+
+/*
  * Perform XLogInsert for a heap-clean operation.  Caller must already
  * have modified the buffer and marked it dirty.
  *
@@ -8277,6 +8467,60 @@ heap_xlog_clean(XLogReaderState *record)
 		XLogRecordPageWithFreeSpace(rnode, blkno, freespace);
 }
 
+
+/*
+ * Handles HEAP2_WARMCLEAR record type
+ */
+static void
+heap_xlog_warmclear(XLogReaderState *record)
+{
+	XLogRecPtr	lsn = record->EndRecPtr;
+	xl_heap_warmclear	*xlrec = (xl_heap_warmclear *) XLogRecGetData(record);
+	Buffer		buffer;
+	RelFileNode rnode;
+	BlockNumber blkno;
+	XLogRedoAction action;
+
+	XLogRecGetBlockTag(record, 0, &rnode, NULL, &blkno);
+
+	/*
+	 * If we have a full-page image, restore it (using a cleanup lock) and
+	 * we're done.
+	 */
+	action = XLogReadBufferForRedoExtended(record, 0, RBM_NORMAL, true,
+										   &buffer);
+	if (action == BLK_NEEDS_REDO)
+	{
+		Page		page = (Page) BufferGetPage(buffer);
+		OffsetNumber *cleared;
+		int			ncleared;
+		Size		datalen;
+		int			i;
+
+		cleared = (OffsetNumber *) XLogRecGetBlockData(record, 0, &datalen);
+
+		ncleared = xlrec->ncleared;
+
+		for (i = 0; i < ncleared; i++)
+		{
+			ItemId			lp;
+			OffsetNumber	offnum = cleared[i];
+			HeapTupleData	heapTuple;
+
+			lp = PageGetItemId(page, offnum);
+			heapTuple.t_data = (HeapTupleHeader) PageGetItem(page, lp);
+
+			HeapTupleHeaderClearHeapWarmTuple(heapTuple.t_data);
+			HeapTupleHeaderClearWarmRed(heapTuple.t_data);
+		}
+
+		PageSetLSN(page, lsn);
+		MarkBufferDirty(buffer);
+	}
+	if (BufferIsValid(buffer))
+		UnlockReleaseBuffer(buffer);
+}
+
 /*
  * Replay XLOG_HEAP2_VISIBLE record.
  *
@@ -8523,7 +8767,9 @@ heap_xlog_delete(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		HeapTupleHeaderClearHotUpdated(htup);
 		fix_infomask_from_infobits(xlrec->infobits_set,
@@ -9186,7 +9432,9 @@ heap_xlog_lock(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		fix_infomask_from_infobits(xlrec->infobits_set, &htup->t_infomask,
 								   &htup->t_infomask2);
@@ -9265,7 +9513,9 @@ heap_xlog_lock_updated(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		fix_infomask_from_infobits(xlrec->infobits_set, &htup->t_infomask,
 								   &htup->t_infomask2);
@@ -9334,6 +9584,9 @@ heap_redo(XLogReaderState *record)
 		case XLOG_HEAP_INSERT:
 			heap_xlog_insert(record);
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			heap_xlog_multi_insert(record);
+			break;
 		case XLOG_HEAP_DELETE:
 			heap_xlog_delete(record);
 			break;
@@ -9362,7 +9615,7 @@ heap2_redo(XLogReaderState *record)
 {
 	uint8		info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
 
-	switch (info & XLOG_HEAP_OPMASK)
+	switch (info & XLOG_HEAP2_OPMASK)
 	{
 		case XLOG_HEAP2_CLEAN:
 			heap_xlog_clean(record);
@@ -9376,9 +9629,6 @@ heap2_redo(XLogReaderState *record)
 		case XLOG_HEAP2_VISIBLE:
 			heap_xlog_visible(record);
 			break;
-		case XLOG_HEAP2_MULTI_INSERT:
-			heap_xlog_multi_insert(record);
-			break;
 		case XLOG_HEAP2_LOCK_UPDATED:
 			heap_xlog_lock_updated(record);
 			break;
@@ -9392,6 +9642,9 @@ heap2_redo(XLogReaderState *record)
 		case XLOG_HEAP2_REWRITE:
 			heap_xlog_logical_rewrite(record);
 			break;
+		case XLOG_HEAP2_WARMCLEAR:
+			heap_xlog_warmclear(record);
+			break;
 		default:
 			elog(PANIC, "heap2_redo: unknown op code %u", info);
 	}
diff --git a/src/backend/access/heap/tuptoaster.c b/src/backend/access/heap/tuptoaster.c
index 19e7048..47b01eb 100644
--- a/src/backend/access/heap/tuptoaster.c
+++ b/src/backend/access/heap/tuptoaster.c
@@ -1620,7 +1620,8 @@ toast_save_datum(Relation rel, Datum value,
 							 toastrel,
 							 toastidxs[i]->rd_index->indisunique ?
 							 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-							 NULL);
+							 NULL,
+							 false);
 		}
 
 		/*
diff --git a/src/backend/access/index/indexam.c b/src/backend/access/index/indexam.c
index da6c252..e0553d0 100644
--- a/src/backend/access/index/indexam.c
+++ b/src/backend/access/index/indexam.c
@@ -199,7 +199,8 @@ index_insert(Relation indexRelation,
 			 ItemPointer heap_t_ctid,
 			 Relation heapRelation,
 			 IndexUniqueCheck checkUnique,
-			 IndexInfo *indexInfo)
+			 IndexInfo *indexInfo,
+			 bool warm_update)
 {
 	RELATION_CHECKS;
 	CHECK_REL_PROCEDURE(aminsert);
@@ -209,6 +210,12 @@ index_insert(Relation indexRelation,
 									   (HeapTuple) NULL,
 									   InvalidBuffer);
 
+	if (warm_update)
+	{
+		Assert(indexRelation->rd_amroutine->amwarminsert != NULL);
+		return indexRelation->rd_amroutine->amwarminsert(indexRelation, values,
+				isnull, heap_t_ctid, heapRelation, checkUnique, indexInfo);
+	}
 	return indexRelation->rd_amroutine->aminsert(indexRelation, values, isnull,
 												 heap_t_ctid, heapRelation,
 												 checkUnique, indexInfo);
diff --git a/src/backend/access/nbtree/nbtpage.c b/src/backend/access/nbtree/nbtpage.c
index f815fd4..7959155 100644
--- a/src/backend/access/nbtree/nbtpage.c
+++ b/src/backend/access/nbtree/nbtpage.c
@@ -766,11 +766,12 @@ _bt_page_recyclable(Page page)
 }
 
 /*
- * Delete item(s) from a btree page during VACUUM.
+ * Delete item(s) and color item(s) blue on a btree page during VACUUM.
  *
  * This must only be used for deleting leaf items.  Deleting an item on a
  * non-leaf page has to be done as part of an atomic action that includes
- * deleting the page it points to.
+ * deleting the page it points to. We don't ever color pointers on a non-leaf
+ * page.
  *
  * This routine assumes that the caller has pinned and locked the buffer.
  * Also, the given itemnos *must* appear in increasing order in the array.
@@ -786,9 +787,9 @@ _bt_page_recyclable(Page page)
  * ensure correct locking.
  */
 void
-_bt_delitems_vacuum(Relation rel, Buffer buf,
-					OffsetNumber *itemnos, int nitems,
-					BlockNumber lastBlockVacuumed)
+_bt_handleitems_vacuum(Relation rel, Buffer buf,
+					OffsetNumber *delitemnos, int ndelitems,
+					OffsetNumber *coloritemnos, int ncoloritems)
 {
 	Page		page = BufferGetPage(buf);
 	BTPageOpaque opaque;
@@ -796,9 +797,20 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 	/* No ereport(ERROR) until changes are logged */
 	START_CRIT_SECTION();
 
+	/*
+	 * Color the Red pointers Blue.
+	 *
+	 * We must do this before dealing with the dead items because
+	 * PageIndexMultiDelete may move items around to compact the array, and
+	 * hence offnums recorded earlier won't make sense after
+	 * PageIndexMultiDelete is called.
+	 */
+	if (ncoloritems > 0)
+		_bt_color_items(page, coloritemnos, ncoloritems);
+
 	/* Fix the page */
-	if (nitems > 0)
-		PageIndexMultiDelete(page, itemnos, nitems);
+	if (ndelitems > 0)
+		PageIndexMultiDelete(page, delitemnos, ndelitems);
 
 	/*
 	 * We can clear the vacuum cycle ID since this page has certainly been
@@ -824,7 +836,8 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 		XLogRecPtr	recptr;
 		xl_btree_vacuum xlrec_vacuum;
 
-		xlrec_vacuum.lastBlockVacuumed = lastBlockVacuumed;
+		xlrec_vacuum.ndelitems = ndelitems;
+		xlrec_vacuum.ncoloritems = ncoloritems;
 
 		XLogBeginInsert();
 		XLogRegisterBuffer(0, buf, REGBUF_STANDARD);
@@ -835,8 +848,11 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 		 * is.  When XLogInsert stores the whole buffer, the offsets array
 		 * need not be stored too.
 		 */
-		if (nitems > 0)
-			XLogRegisterBufData(0, (char *) itemnos, nitems * sizeof(OffsetNumber));
+		if (ndelitems > 0)
+			XLogRegisterBufData(0, (char *) delitemnos, ndelitems * sizeof(OffsetNumber));
+
+		if (ncoloritems > 0)
+			XLogRegisterBufData(0, (char *) coloritemnos, ncoloritems * sizeof(OffsetNumber));
 
 		recptr = XLogInsert(RM_BTREE_ID, XLOG_BTREE_VACUUM);
 
@@ -1882,3 +1898,18 @@ _bt_unlink_halfdead_page(Relation rel, Buffer leafbuf, bool *rightsib_empty)
 
 	return true;
 }
+
+void
+_bt_color_items(Page page, OffsetNumber *coloritemnos, uint16 ncoloritems)
+{
+	int			i;
+	ItemId		itemid;
+	IndexTuple	itup;
+
+	for (i = 0; i < ncoloritems; i++)
+	{
+		itemid = PageGetItemId(page, coloritemnos[i]);
+		itup = (IndexTuple) PageGetItem(page, itemid);
+		ItemPointerClearFlags(&itup->t_tid);
+	}
+}
diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c
index 952ed8f..92f490e 100644
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -147,6 +147,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->ambuild = btbuild;
 	amroutine->ambuildempty = btbuildempty;
 	amroutine->aminsert = btinsert;
+	amroutine->amwarminsert = btwarminsert;
 	amroutine->ambulkdelete = btbulkdelete;
 	amroutine->amvacuumcleanup = btvacuumcleanup;
 	amroutine->amcanreturn = btcanreturn;
@@ -317,11 +318,12 @@ btbuildempty(Relation index)
  *		Descend the tree recursively, find the appropriate location for our
  *		new tuple, and put it there.
  */
-bool
-btinsert(Relation rel, Datum *values, bool *isnull,
+static bool
+btinsert_internal(Relation rel, Datum *values, bool *isnull,
 		 ItemPointer ht_ctid, Relation heapRel,
 		 IndexUniqueCheck checkUnique,
-		 IndexInfo *indexInfo)
+		 IndexInfo *indexInfo,
+		 bool warm_update)
 {
 	bool		result;
 	IndexTuple	itup;
@@ -330,6 +332,11 @@ btinsert(Relation rel, Datum *values, bool *isnull,
 	itup = index_form_tuple(RelationGetDescr(rel), values, isnull);
 	itup->t_tid = *ht_ctid;
 
+	if (warm_update)
+		ItemPointerSetFlags(&itup->t_tid, BTREE_INDEX_RED_POINTER);
+	else
+		ItemPointerClearFlags(&itup->t_tid);
+
 	result = _bt_doinsert(rel, itup, checkUnique, heapRel);
 
 	pfree(itup);
@@ -337,6 +344,26 @@ btinsert(Relation rel, Datum *values, bool *isnull,
 	return result;
 }
 
+bool
+btinsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 IndexInfo *indexInfo)
+{
+	return btinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, false);
+}
+
+bool
+btwarminsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 IndexInfo *indexInfo)
+{
+	return btinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, true);
+}
+
 /*
  *	btgettuple() -- Get the next tuple in the scan.
  */
@@ -1106,7 +1133,7 @@ btvacuumscan(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 								 RBM_NORMAL, info->strategy);
 		LockBufferForCleanup(buf);
 		_bt_checkpage(rel, buf);
-		_bt_delitems_vacuum(rel, buf, NULL, 0, vstate.lastBlockVacuumed);
+		_bt_handleitems_vacuum(rel, buf, NULL, 0, NULL, 0);
 		_bt_relbuf(rel, buf);
 	}
 
@@ -1204,6 +1231,8 @@ restart:
 	{
 		OffsetNumber deletable[MaxOffsetNumber];
 		int			ndeletable;
+		OffsetNumber colorblue[MaxOffsetNumber];
+		int			ncolorblue;
 		OffsetNumber offnum,
 					minoff,
 					maxoff;
@@ -1242,7 +1271,7 @@ restart:
 		 * Scan over all items to see which ones need deleted according to the
 		 * callback function.
 		 */
-		ndeletable = 0;
+		ndeletable = ncolorblue = 0;
 		minoff = P_FIRSTDATAKEY(opaque);
 		maxoff = PageGetMaxOffsetNumber(page);
 		if (callback)
@@ -1253,6 +1282,9 @@ restart:
 			{
 				IndexTuple	itup;
 				ItemPointer htup;
+				int			flags;
+				bool		is_red = false;
+				IndexBulkDeleteCallbackResult	result;
 
 				itup = (IndexTuple) PageGetItem(page,
 												PageGetItemId(page, offnum));
@@ -1279,16 +1311,36 @@ restart:
 				 * applies to *any* type of index that marks index tuples as
 				 * killed.
 				 */
-				if (callback(htup, callback_state))
+				flags = ItemPointerGetFlags(&itup->t_tid);
+				is_red = ((flags & BTREE_INDEX_RED_POINTER) != 0);
+
+				if (is_red)
+					stats->num_red_pointers++;
+				else
+					stats->num_blue_pointers++;
+
+				result = callback(htup, is_red, callback_state);
+				if (result == IBDCR_DELETE)
+				{
+					if (is_red)
+						stats->red_pointers_removed++;
+					else
+						stats->blue_pointers_removed++;
 					deletable[ndeletable++] = offnum;
+				}
+				else if (result == IBDCR_COLOR_BLUE)
+				{
+					colorblue[ncolorblue++] = offnum;
+				}
 			}
 		}
 
 		/*
-		 * Apply any needed deletes.  We issue just one _bt_delitems_vacuum()
-		 * call per page, so as to minimize WAL traffic.
+		 * Apply any needed deletes and coloring.  We issue just one
+		 * _bt_handleitems_vacuum() call per page, so as to minimize WAL
+		 * traffic.
 		 */
-		if (ndeletable > 0)
+		if (ndeletable > 0 || ncolorblue > 0)
 		{
 			/*
 			 * Notice that the issued XLOG_BTREE_VACUUM WAL record includes
@@ -1304,8 +1356,8 @@ restart:
 			 * doesn't seem worth the amount of bookkeeping it'd take to avoid
 			 * that.
 			 */
-			_bt_delitems_vacuum(rel, buf, deletable, ndeletable,
-								vstate->lastBlockVacuumed);
+			_bt_handleitems_vacuum(rel, buf, deletable, ndeletable,
+								colorblue, ncolorblue);
 
 			/*
 			 * Remember highest leaf page number we've issued a
@@ -1315,6 +1367,7 @@ restart:
 				vstate->lastBlockVacuumed = blkno;
 
 			stats->tuples_removed += ndeletable;
+			stats->pointers_colored += ncolorblue;
 			/* must recompute maxoff */
 			maxoff = PageGetMaxOffsetNumber(page);
 		}
diff --git a/src/backend/access/nbtree/nbtxlog.c b/src/backend/access/nbtree/nbtxlog.c
index ac60db0..916c76e 100644
--- a/src/backend/access/nbtree/nbtxlog.c
+++ b/src/backend/access/nbtree/nbtxlog.c
@@ -390,83 +390,9 @@ btree_xlog_vacuum(XLogReaderState *record)
 	Buffer		buffer;
 	Page		page;
 	BTPageOpaque opaque;
-#ifdef UNUSED
 	xl_btree_vacuum *xlrec = (xl_btree_vacuum *) XLogRecGetData(record);
 
 	/*
-	 * This section of code is thought to be no longer needed, after analysis
-	 * of the calling paths. It is retained to allow the code to be reinstated
-	 * if a flaw is revealed in that thinking.
-	 *
-	 * If we are running non-MVCC scans using this index we need to do some
-	 * additional work to ensure correctness, which is known as a "pin scan"
-	 * described in more detail in next paragraphs. We used to do the extra
-	 * work in all cases, whereas we now avoid that work in most cases. If
-	 * lastBlockVacuumed is set to InvalidBlockNumber then we skip the
-	 * additional work required for the pin scan.
-	 *
-	 * Avoiding this extra work is important since it requires us to touch
-	 * every page in the index, so is an O(N) operation. Worse, it is an
-	 * operation performed in the foreground during redo, so it delays
-	 * replication directly.
-	 *
-	 * If queries might be active then we need to ensure every leaf page is
-	 * unpinned between the lastBlockVacuumed and the current block, if there
-	 * are any.  This prevents replay of the VACUUM from reaching the stage of
-	 * removing heap tuples while there could still be indexscans "in flight"
-	 * to those particular tuples for those scans which could be confused by
-	 * finding new tuples at the old TID locations (see nbtree/README).
-	 *
-	 * It might be worth checking if there are actually any backends running;
-	 * if not, we could just skip this.
-	 *
-	 * Since VACUUM can visit leaf pages out-of-order, it might issue records
-	 * with lastBlockVacuumed >= block; that's not an error, it just means
-	 * nothing to do now.
-	 *
-	 * Note: since we touch all pages in the range, we will lock non-leaf
-	 * pages, and also any empty (all-zero) pages that may be in the index. It
-	 * doesn't seem worth the complexity to avoid that.  But it's important
-	 * that HotStandbyActiveInReplay() will not return true if the database
-	 * isn't yet consistent; so we need not fear reading still-corrupt blocks
-	 * here during crash recovery.
-	 */
-	if (HotStandbyActiveInReplay() && BlockNumberIsValid(xlrec->lastBlockVacuumed))
-	{
-		RelFileNode thisrnode;
-		BlockNumber thisblkno;
-		BlockNumber blkno;
-
-		XLogRecGetBlockTag(record, 0, &thisrnode, NULL, &thisblkno);
-
-		for (blkno = xlrec->lastBlockVacuumed + 1; blkno < thisblkno; blkno++)
-		{
-			/*
-			 * We use RBM_NORMAL_NO_LOG mode because it's not an error
-			 * condition to see all-zero pages.  The original btvacuumpage
-			 * scan would have skipped over all-zero pages, noting them in FSM
-			 * but not bothering to initialize them just yet; so we mustn't
-			 * throw an error here.  (We could skip acquiring the cleanup lock
-			 * if PageIsNew, but it's probably not worth the cycles to test.)
-			 *
-			 * XXX we don't actually need to read the block, we just need to
-			 * confirm it is unpinned. If we had a special call into the
-			 * buffer manager we could optimise this so that if the block is
-			 * not in shared_buffers we confirm it as unpinned. Optimizing
-			 * this is now moot, since in most cases we avoid the scan.
-			 */
-			buffer = XLogReadBufferExtended(thisrnode, MAIN_FORKNUM, blkno,
-											RBM_NORMAL_NO_LOG);
-			if (BufferIsValid(buffer))
-			{
-				LockBufferForCleanup(buffer);
-				UnlockReleaseBuffer(buffer);
-			}
-		}
-	}
-#endif
-
-	/*
 	 * Like in btvacuumpage(), we need to take a cleanup lock on every leaf
 	 * page. See nbtree/README for details.
 	 */
@@ -482,19 +408,30 @@ btree_xlog_vacuum(XLogReaderState *record)
 
 		if (len > 0)
 		{
-			OffsetNumber *unused;
-			OffsetNumber *unend;
+			OffsetNumber *offnums = (OffsetNumber *) ptr;
 
-			unused = (OffsetNumber *) ptr;
-			unend = (OffsetNumber *) ((char *) ptr + len);
+			/*
+			 * Color the Red pointers Blue.
+			 *
+			 * We must do this before dealing with the dead items because
+			 * PageIndexMultiDelete may move items around to compactify the
+			 * array and hence offnums recorded earlier won't make any sense
+			 * after PageIndexMultiDelete is called.
+			 */
+			if (xlrec->ncoloritems > 0)
+				_bt_color_items(page, offnums + xlrec->ndelitems,
+						xlrec->ncoloritems);
 
-			if ((unend - unused) > 0)
-				PageIndexMultiDelete(page, unused, unend - unused);
+			/*
+			 * And handle the deleted items too
+			 */
+			if (xlrec->ndelitems > 0)
+				PageIndexMultiDelete(page, offnums, xlrec->ndelitems);
 		}
 
 		/*
 		 * Mark the page as not containing any LP_DEAD items --- see comments
-		 * in _bt_delitems_vacuum().
+		 * in _bt_handleitems_vacuum().
 		 */
 		opaque = (BTPageOpaque) PageGetSpecialPointer(page);
 		opaque->btpo_flags &= ~BTP_HAS_GARBAGE;
diff --git a/src/backend/access/rmgrdesc/heapdesc.c b/src/backend/access/rmgrdesc/heapdesc.c
index 44d2d63..d373e61 100644
--- a/src/backend/access/rmgrdesc/heapdesc.c
+++ b/src/backend/access/rmgrdesc/heapdesc.c
@@ -44,6 +44,12 @@ heap_desc(StringInfo buf, XLogReaderState *record)
 
 		appendStringInfo(buf, "off %u", xlrec->offnum);
 	}
+	else if (info == XLOG_HEAP_MULTI_INSERT)
+	{
+		xl_heap_multi_insert *xlrec = (xl_heap_multi_insert *) rec;
+
+		appendStringInfo(buf, "%d tuples", xlrec->ntuples);
+	}
 	else if (info == XLOG_HEAP_DELETE)
 	{
 		xl_heap_delete *xlrec = (xl_heap_delete *) rec;
@@ -102,7 +108,7 @@ heap2_desc(StringInfo buf, XLogReaderState *record)
 	char	   *rec = XLogRecGetData(record);
 	uint8		info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
 
-	info &= XLOG_HEAP_OPMASK;
+	info &= XLOG_HEAP2_OPMASK;
 	if (info == XLOG_HEAP2_CLEAN)
 	{
 		xl_heap_clean *xlrec = (xl_heap_clean *) rec;
@@ -129,12 +135,6 @@ heap2_desc(StringInfo buf, XLogReaderState *record)
 		appendStringInfo(buf, "cutoff xid %u flags %d",
 						 xlrec->cutoff_xid, xlrec->flags);
 	}
-	else if (info == XLOG_HEAP2_MULTI_INSERT)
-	{
-		xl_heap_multi_insert *xlrec = (xl_heap_multi_insert *) rec;
-
-		appendStringInfo(buf, "%d tuples", xlrec->ntuples);
-	}
 	else if (info == XLOG_HEAP2_LOCK_UPDATED)
 	{
 		xl_heap_lock_updated *xlrec = (xl_heap_lock_updated *) rec;
@@ -171,6 +171,12 @@ heap_identify(uint8 info)
 		case XLOG_HEAP_INSERT | XLOG_HEAP_INIT_PAGE:
 			id = "INSERT+INIT";
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			id = "MULTI_INSERT";
+			break;
+		case XLOG_HEAP_MULTI_INSERT | XLOG_HEAP_INIT_PAGE:
+			id = "MULTI_INSERT+INIT";
+			break;
 		case XLOG_HEAP_DELETE:
 			id = "DELETE";
 			break;
@@ -219,12 +225,6 @@ heap2_identify(uint8 info)
 		case XLOG_HEAP2_VISIBLE:
 			id = "VISIBLE";
 			break;
-		case XLOG_HEAP2_MULTI_INSERT:
-			id = "MULTI_INSERT";
-			break;
-		case XLOG_HEAP2_MULTI_INSERT | XLOG_HEAP_INIT_PAGE:
-			id = "MULTI_INSERT+INIT";
-			break;
 		case XLOG_HEAP2_LOCK_UPDATED:
 			id = "LOCK_UPDATED";
 			break;
diff --git a/src/backend/access/rmgrdesc/nbtdesc.c b/src/backend/access/rmgrdesc/nbtdesc.c
index fbde9d6..0e9a2eb 100644
--- a/src/backend/access/rmgrdesc/nbtdesc.c
+++ b/src/backend/access/rmgrdesc/nbtdesc.c
@@ -48,8 +48,8 @@ btree_desc(StringInfo buf, XLogReaderState *record)
 			{
 				xl_btree_vacuum *xlrec = (xl_btree_vacuum *) rec;
 
-				appendStringInfo(buf, "lastBlockVacuumed %u",
-								 xlrec->lastBlockVacuumed);
+				appendStringInfo(buf, "ndelitems %u, ncoloritems %u",
+								 xlrec->ndelitems, xlrec->ncoloritems);
 				break;
 			}
 		case XLOG_BTREE_DELETE:
diff --git a/src/backend/access/spgist/spgvacuum.c b/src/backend/access/spgist/spgvacuum.c
index cce9b3f..5343b10 100644
--- a/src/backend/access/spgist/spgvacuum.c
+++ b/src/backend/access/spgist/spgvacuum.c
@@ -155,7 +155,8 @@ vacuumLeafPage(spgBulkDeleteState *bds, Relation index, Buffer buffer,
 		{
 			Assert(ItemPointerIsValid(&lt->heapPtr));
 
-			if (bds->callback(&lt->heapPtr, bds->callback_state))
+			if (bds->callback(&lt->heapPtr, false, bds->callback_state) ==
+					IBDCR_DELETE)
 			{
 				bds->stats->tuples_removed += 1;
 				deletable[i] = true;
@@ -425,7 +426,8 @@ vacuumLeafRoot(spgBulkDeleteState *bds, Relation index, Buffer buffer)
 		{
 			Assert(ItemPointerIsValid(&lt->heapPtr));
 
-			if (bds->callback(&lt->heapPtr, bds->callback_state))
+			if (bds->callback(&lt->heapPtr, false, bds->callback_state) ==
+					IBDCR_DELETE)
 			{
 				bds->stats->tuples_removed += 1;
 				toDelete[xlrec.nDelete] = i;
@@ -902,10 +904,10 @@ spgbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 }
 
 /* Dummy callback to delete no tuples during spgvacuumcleanup */
-static bool
-dummy_callback(ItemPointer itemptr, void *state)
+static IndexBulkDeleteCallbackResult
+dummy_callback(ItemPointer itemptr, bool is_red, void *state)
 {
-	return false;
+	return IBDCR_KEEP;
 }
 
 /*
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 049eb28..166efd8 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -115,7 +115,7 @@ static void IndexCheckExclusion(Relation heapRelation,
 					IndexInfo *indexInfo);
 static inline int64 itemptr_encode(ItemPointer itemptr);
 static inline void itemptr_decode(ItemPointer itemptr, int64 encoded);
-static bool validate_index_callback(ItemPointer itemptr, void *opaque);
+static IndexBulkDeleteCallbackResult validate_index_callback(ItemPointer itemptr, bool is_red, void *opaque);
 static void validate_index_heapscan(Relation heapRelation,
 						Relation indexRelation,
 						IndexInfo *indexInfo,
@@ -2949,15 +2949,15 @@ itemptr_decode(ItemPointer itemptr, int64 encoded)
 /*
  * validate_index_callback - bulkdelete callback to collect the index TIDs
  */
-static bool
-validate_index_callback(ItemPointer itemptr, void *opaque)
+static IndexBulkDeleteCallbackResult
+validate_index_callback(ItemPointer itemptr, bool is_red, void *opaque)
 {
 	v_i_state  *state = (v_i_state *) opaque;
 	int64		encoded = itemptr_encode(itemptr);
 
 	tuplesort_putdatum(state->tuplesort, Int64GetDatum(encoded), false);
 	state->itups += 1;
-	return false;				/* never actually delete anything */
+	return IBDCR_KEEP;				/* never actually delete anything */
 }
 
 /*
@@ -3178,7 +3178,8 @@ validate_index_heapscan(Relation heapRelation,
 						 heapRelation,
 						 indexInfo->ii_Unique ?
 						 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-						 indexInfo);
+						 indexInfo,
+						 false);
 
 			state->tups_inserted += 1;
 		}
diff --git a/src/backend/catalog/indexing.c b/src/backend/catalog/indexing.c
index 970254f..6392f33 100644
--- a/src/backend/catalog/indexing.c
+++ b/src/backend/catalog/indexing.c
@@ -172,7 +172,8 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple,
 					 heapRelation,
 					 relationDescs[i]->rd_index->indisunique ?
 					 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-					 indexInfo);
+					 indexInfo,
+					 warm_update);
 	}
 
 	ExecDropSingleTupleTableSlot(slot);
@@ -222,7 +223,7 @@ CatalogTupleInsertWithInfo(Relation heapRel, HeapTuple tup,
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup, false, NULL);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 
 	return oid;
 }
diff --git a/src/backend/commands/constraint.c b/src/backend/commands/constraint.c
index d9c0fe7..330b661 100644
--- a/src/backend/commands/constraint.c
+++ b/src/backend/commands/constraint.c
@@ -168,7 +168,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 		 */
 		index_insert(indexRel, values, isnull, &(new_row->t_self),
 					 trigdata->tg_relation, UNIQUE_CHECK_EXISTING,
-					 indexInfo);
+					 indexInfo,
+					 false);
 	}
 	else
 	{
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 7376099..deb76cb 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -104,6 +104,25 @@
  */
 #define PREFETCH_SIZE			((BlockNumber) 32)
 
+/*
+ * Structure to track WARM chains that can be converted into HOT chains during
+ * this run.
+ *
+ * To reduce the space requirement, we're using bitfields. But the way things
+ * are laid out, we're still wasting one byte per candidate chain.
+ */
+typedef struct LVRedBlueChain
+{
+	ItemPointerData	chain_tid;			/* root of the chain */
+	uint8			is_red_chain:2;		/* is the WARM chain completely Red? */
+	uint8			keep_warm_chain:2;	/* this chain can't be cleared of WARM
+										 * tuples */
+	uint8			num_blue_pointers:2;/* number of blue pointers found so
+										 * far */
+	uint8			num_red_pointers:2; /* number of red pointers found so far
+										 * in the current index */
+} LVRedBlueChain;
+
 typedef struct LVRelStats
 {
 	/* hasindex = true means two-pass strategy; false means one-pass */
@@ -121,6 +140,16 @@ typedef struct LVRelStats
 	BlockNumber pages_removed;
 	double		tuples_deleted;
 	BlockNumber nonempty_pages; /* actually, last nonempty page + 1 */
+
+	double			num_warm_chains; /* number of warm chains seen so far */
+
+	/* List of WARM chains that can be converted into HOT chains */
+	/* NB: this list is ordered by TID of the root pointers */
+	int				num_redblue_chains;	/* current # of entries */
+	int				max_redblue_chains;	/* # slots allocated in array */
+	LVRedBlueChain *redblue_chains;	/* array of LVRedBlueChain */
+	double			num_non_convertible_warm_chains;
+
 	/* List of TIDs of tuples we intend to delete */
 	/* NB: this list is ordered by TID address */
 	int			num_dead_tuples;	/* current # of entries */
@@ -149,6 +178,7 @@ static void lazy_scan_heap(Relation onerel, int options,
 static void lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats);
 static bool lazy_check_needs_freeze(Buffer buf, bool *hastup);
 static void lazy_vacuum_index(Relation indrel,
+				  bool clear_warm,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats);
 static void lazy_cleanup_index(Relation indrel,
@@ -156,6 +186,10 @@ static void lazy_cleanup_index(Relation indrel,
 				   LVRelStats *vacrelstats);
 static int lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 				 int tupindex, LVRelStats *vacrelstats, Buffer *vmbuffer);
+static int lazy_warmclear_page(Relation onerel, BlockNumber blkno,
+				 Buffer buffer, int chainindex, LVRelStats *vacrelstats,
+				 Buffer *vmbuffer, bool check_all_visible);
+static void lazy_reset_redblue_pointer_count(LVRelStats *vacrelstats);
 static bool should_attempt_truncation(LVRelStats *vacrelstats);
 static void lazy_truncate_heap(Relation onerel, LVRelStats *vacrelstats);
 static BlockNumber count_nondeletable_pages(Relation onerel,
@@ -163,8 +197,15 @@ static BlockNumber count_nondeletable_pages(Relation onerel,
 static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks);
 static void lazy_record_dead_tuple(LVRelStats *vacrelstats,
 					   ItemPointer itemptr);
-static bool lazy_tid_reaped(ItemPointer itemptr, void *state);
+static void lazy_record_red_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr);
+static void lazy_record_blue_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr);
+static IndexBulkDeleteCallbackResult lazy_tid_reaped(ItemPointer itemptr, bool is_red, void *state);
+static IndexBulkDeleteCallbackResult lazy_indexvac_phase1(ItemPointer itemptr, bool is_red, void *state);
+static IndexBulkDeleteCallbackResult lazy_indexvac_phase2(ItemPointer itemptr, bool is_red, void *state);
 static int	vac_cmp_itemptr(const void *left, const void *right);
+static int vac_cmp_redblue_chain(const void *left, const void *right);
 static bool heap_page_is_all_visible(Relation rel, Buffer buf,
 					 TransactionId *visibility_cutoff_xid, bool *all_frozen);
 
@@ -684,8 +725,10 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		 * If we are close to overrunning the available space for dead-tuple
 		 * TIDs, pause and do a cycle of vacuuming before we tackle this page.
 		 */
-		if ((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
-			vacrelstats->num_dead_tuples > 0)
+		if (((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
+			vacrelstats->num_dead_tuples > 0) ||
+			((vacrelstats->max_redblue_chains - vacrelstats->num_redblue_chains) < MaxHeapTuplesPerPage &&
+			 vacrelstats->num_redblue_chains > 0))
 		{
 			const int	hvp_index[] = {
 				PROGRESS_VACUUM_PHASE,
@@ -715,6 +758,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			/* Remove index entries */
 			for (i = 0; i < nindexes; i++)
 				lazy_vacuum_index(Irel[i],
+								  (vacrelstats->num_redblue_chains > 0),
 								  &indstats[i],
 								  vacrelstats);
 
@@ -737,6 +781,9 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			 * valid.
 			 */
 			vacrelstats->num_dead_tuples = 0;
+			vacrelstats->num_redblue_chains = 0;
+			memset(vacrelstats->redblue_chains, 0,
+					vacrelstats->max_redblue_chains * sizeof (LVRedBlueChain));
 			vacrelstats->num_index_scans++;
 
 			/* Report that we are once again scanning the heap */
@@ -940,15 +987,33 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 				continue;
 			}
 
+			ItemPointerSet(&(tuple.t_self), blkno, offnum);
+
 			/* Redirect items mustn't be touched */
 			if (ItemIdIsRedirected(itemid))
 			{
+				HeapCheckWarmChainStatus status = heap_check_warm_chain(page,
+						&tuple.t_self, false);
+				if (HCWC_IS_WARM(status))
+				{
+					vacrelstats->num_warm_chains++;
+
+					/*
+					 * A chain which is either completely Red or Blue is a
+					 * candidate for chain conversion. Remember the chain and
+					 * its color.
+					 */
+					if (HCWC_IS_ALL_RED(status))
+						lazy_record_red_chain(vacrelstats, &tuple.t_self);
+					else if (HCWC_IS_ALL_BLUE(status))
+						lazy_record_blue_chain(vacrelstats, &tuple.t_self);
+					else
+						vacrelstats->num_non_convertible_warm_chains++;
+				}
 				hastup = true;	/* this page won't be truncatable */
 				continue;
 			}
 
-			ItemPointerSet(&(tuple.t_self), blkno, offnum);
-
 			/*
 			 * DEAD item pointers are to be vacuumed normally; but we don't
 			 * count them in tups_vacuumed, else we'd be double-counting (at
@@ -968,6 +1033,28 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			tuple.t_len = ItemIdGetLength(itemid);
 			tuple.t_tableOid = RelationGetRelid(onerel);
 
+			if (!HeapTupleIsHeapOnly(&tuple))
+			{
+				HeapCheckWarmChainStatus status = heap_check_warm_chain(page,
+						&tuple.t_self, false);
+				if (HCWC_IS_WARM(status))
+				{
+					vacrelstats->num_warm_chains++;
+
+					/*
+					 * A chain which is either completely Red or Blue is a
+					 * candidate for chain conversion. Remember the chain and
+					 * its color.
+					 */
+					if (HCWC_IS_ALL_RED(status))
+						lazy_record_red_chain(vacrelstats, &tuple.t_self);
+					else if (HCWC_IS_ALL_BLUE(status))
+						lazy_record_blue_chain(vacrelstats, &tuple.t_self);
+					else
+						vacrelstats->num_non_convertible_warm_chains++;
+				}
+			}
+
 			tupgone = false;
 
 			switch (HeapTupleSatisfiesVacuum(&tuple, OldestXmin, buf))
@@ -1288,7 +1375,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 
 	/* If any tuples need to be deleted, perform final vacuum cycle */
 	/* XXX put a threshold on min number of tuples here? */
-	if (vacrelstats->num_dead_tuples > 0)
+	if (vacrelstats->num_dead_tuples > 0 || vacrelstats->num_redblue_chains > 0)
 	{
 		const int	hvp_index[] = {
 			PROGRESS_VACUUM_PHASE,
@@ -1306,6 +1393,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		/* Remove index entries */
 		for (i = 0; i < nindexes; i++)
 			lazy_vacuum_index(Irel[i],
+							  (vacrelstats->num_redblue_chains > 0),
 							  &indstats[i],
 							  vacrelstats);
 
@@ -1373,7 +1461,10 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
  *
  *		This routine marks dead tuples as unused and compacts out free
  *		space on their pages.  Pages not having dead tuples recorded from
- *		lazy_scan_heap are not visited at all.
+ *		lazy_scan_heap are not visited at all. This routine also converts
+ *		candidate WARM chains to HOT chains by clearing WARM related flags. The
+ *		candidate chains are determined by the preceding index scans after
+ *		looking at the data collected by the first heap scan.
  *
  * Note: the reason for doing this as a second pass is we cannot remove
  * the tuples until we've removed their index entries, and we want to
@@ -1382,7 +1473,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 static void
 lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 {
-	int			tupindex;
+	int			tupindex, chainindex;
 	int			npages;
 	PGRUsage	ru0;
 	Buffer		vmbuffer = InvalidBuffer;
@@ -1391,33 +1482,69 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 	npages = 0;
 
 	tupindex = 0;
-	while (tupindex < vacrelstats->num_dead_tuples)
+	chainindex = 0;
+	while (tupindex < vacrelstats->num_dead_tuples ||
+		   chainindex < vacrelstats->num_redblue_chains)
 	{
-		BlockNumber tblk;
+		BlockNumber tblk, chainblk, vacblk;
 		Buffer		buf;
 		Page		page;
 		Size		freespace;
 
 		vacuum_delay_point();
 
-		tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
-		buf = ReadBufferExtended(onerel, MAIN_FORKNUM, tblk, RBM_NORMAL,
+		tblk = chainblk = InvalidBlockNumber;
+		if (chainindex < vacrelstats->num_redblue_chains)
+			chainblk =
+				ItemPointerGetBlockNumber(&(vacrelstats->redblue_chains[chainindex].chain_tid));
+
+		if (tupindex < vacrelstats->num_dead_tuples)
+			tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
+
+		if (tblk == InvalidBlockNumber)
+			vacblk = chainblk;
+		else if (chainblk == InvalidBlockNumber)
+			vacblk = tblk;
+		else
+			vacblk = Min(chainblk, tblk);
+
+		Assert(vacblk != InvalidBlockNumber);
+
+		buf = ReadBufferExtended(onerel, MAIN_FORKNUM, vacblk, RBM_NORMAL,
 								 vac_strategy);
-		if (!ConditionalLockBufferForCleanup(buf))
+
+		if (vacblk == chainblk)
+			LockBufferForCleanup(buf);
+		else if (!ConditionalLockBufferForCleanup(buf))
 		{
 			ReleaseBuffer(buf);
 			++tupindex;
 			continue;
 		}
-		tupindex = lazy_vacuum_page(onerel, tblk, buf, tupindex, vacrelstats,
-									&vmbuffer);
+
+		/*
+		 * Convert WARM chains on this page. This should be done before
+		 * vacuuming the page to ensure that we can correctly set visibility
+		 * bits after clearing WARM chains.
+		 *
+		 * If we are going to vacuum this page then don't check for
+		 * all-visibility just yet.
+		 */
+		if (vacblk == chainblk)
+			chainindex = lazy_warmclear_page(onerel, chainblk, buf, chainindex,
+					vacrelstats, &vmbuffer, chainblk != tblk);
+
+		if (vacblk == tblk)
+			tupindex = lazy_vacuum_page(onerel, tblk, buf, tupindex, vacrelstats,
+					&vmbuffer);
 
 		/* Now that we've compacted the page, record its available space */
 		page = BufferGetPage(buf);
 		freespace = PageGetHeapFreeSpace(page);
 
 		UnlockReleaseBuffer(buf);
-		RecordPageWithFreeSpace(onerel, tblk, freespace);
+		RecordPageWithFreeSpace(onerel, vacblk, freespace);
 		npages++;
 	}
 
@@ -1436,6 +1563,107 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 }
 
 /*
+ *	lazy_warmclear_page() -- clear WARM flag and mark chains blue when possible
+ *
+ * Caller must hold pin and buffer cleanup lock on the buffer.
+ *
+ * chainindex is the index in vacrelstats->redblue_chains of the first
+ * candidate chain for this page.  We assume the rest follow sequentially.
+ * The return value is the first chainindex after the chains of this page.
+ *
+ * If check_all_visible is set then we also check if the page has now become
+ * all visible and update visibility map.
+ */
+static int
+lazy_warmclear_page(Relation onerel, BlockNumber blkno, Buffer buffer,
+				 int chainindex, LVRelStats *vacrelstats, Buffer *vmbuffer,
+				 bool check_all_visible)
+{
+	Page			page = BufferGetPage(buffer);
+	OffsetNumber	cleared_offnums[MaxHeapTuplesPerPage];
+	int				num_cleared = 0;
+	TransactionId	visibility_cutoff_xid;
+	bool			all_frozen;
+
+	pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_WARMCLEARED, blkno);
+
+	START_CRIT_SECTION();
+
+	for (; chainindex < vacrelstats->num_redblue_chains ; chainindex++)
+	{
+		BlockNumber tblk;
+		LVRedBlueChain	*chain;
+
+		chain = &vacrelstats->redblue_chains[chainindex];
+
+		tblk = ItemPointerGetBlockNumber(&chain->chain_tid);
+		if (tblk != blkno)
+			break;				/* past end of tuples for this block */
+
+		/*
+		 * Since a heap page can have no more than MaxHeapTuplesPerPage
+		 * offnums and we process each offnum only once, MaxHeapTuplesPerPage
+		 * size array should be enough to hold all cleared tuples in this page.
+		 */
+		if (!chain->keep_warm_chain)
+			num_cleared += heap_clear_warm_chain(page, &chain->chain_tid,
+					cleared_offnums + num_cleared);
+	}
+
+	/*
+	 * Mark buffer dirty before we write WAL.
+	 */
+	MarkBufferDirty(buffer);
+
+	/* XLOG stuff */
+	if (RelationNeedsWAL(onerel))
+	{
+		XLogRecPtr	recptr;
+
+		recptr = log_heap_warmclear(onerel, buffer,
+								cleared_offnums, num_cleared);
+		PageSetLSN(page, recptr);
+	}
+
+	END_CRIT_SECTION();
+
+	/* If not checking for all-visibility then we're done */
+	if (!check_all_visible)
+		return chainindex;
+
+	/*
+	 * The following code should match the corresponding code in
+	 * lazy_vacuum_page().
+	 */
+	if (heap_page_is_all_visible(onerel, buffer, &visibility_cutoff_xid,
+								 &all_frozen))
+		PageSetAllVisible(page);
+
+	/*
+	 * All the changes to the heap page have been done. If the all-visible
+	 * flag is now set, also set the VM all-visible bit (and, if possible, the
+	 * all-frozen bit) unless this has already been done previously.
+	 */
+	if (PageIsAllVisible(page))
+	{
+		uint8		vm_status = visibilitymap_get_status(onerel, blkno, vmbuffer);
+		uint8		flags = 0;
+
+		/* Set the VM all-frozen bit to flag, if needed */
+		if ((vm_status & VISIBILITYMAP_ALL_VISIBLE) == 0)
+			flags |= VISIBILITYMAP_ALL_VISIBLE;
+		if ((vm_status & VISIBILITYMAP_ALL_FROZEN) == 0 && all_frozen)
+			flags |= VISIBILITYMAP_ALL_FROZEN;
+
+		Assert(BufferIsValid(*vmbuffer));
+		if (flags != 0)
+			visibilitymap_set(onerel, blkno, buffer, InvalidXLogRecPtr,
+							  *vmbuffer, visibility_cutoff_xid, flags);
+	}
+	return chainindex;
+}
+
+/*
  *	lazy_vacuum_page() -- free dead tuples on a page
  *					 and repair its fragmentation.
  *
@@ -1588,6 +1816,16 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
 	return false;
 }
 
+static void
+lazy_reset_redblue_pointer_count(LVRelStats *vacrelstats)
+{
+	int i;
+	for (i = 0; i < vacrelstats->num_redblue_chains; i++)
+	{
+		LVRedBlueChain *chain = &vacrelstats->redblue_chains[i];
+		chain->num_blue_pointers = chain->num_red_pointers = 0;
+	}
+}
 
 /*
  *	lazy_vacuum_index() -- vacuum one index relation.
@@ -1597,6 +1835,7 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
  */
 static void
 lazy_vacuum_index(Relation indrel,
+				  bool clear_warm,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats)
 {
@@ -1612,15 +1851,81 @@ lazy_vacuum_index(Relation indrel,
 	ivinfo.num_heap_tuples = vacrelstats->old_rel_tuples;
 	ivinfo.strategy = vac_strategy;
 
-	/* Do bulk deletion */
-	*stats = index_bulk_delete(&ivinfo, *stats,
-							   lazy_tid_reaped, (void *) vacrelstats);
+	/*
+	 * If told, convert WARM chains into HOT chains.
+	 *
+	 * We must have already collected candidate WARM chains, i.e. chains which
+	 * have either only Red or only Blue tuples, but not a mix of both.
+	 *
+	 * This works in two phases. In the first phase, we do a complete index
+	 * scan and collect information about index pointers to the candidate
+	 * chains, but we don't do conversion. To be precise, we count the number
+	 * of Blue and Red index pointers to each candidate chain and use that
+	 * knowledge to arrive at a decision and do the actual conversion during
+	 * the second phase (we kill known dead pointers though in this phase).
+	 *
+	 * In the second phase, for each Red chain we check if we have seen a Red
+	 * index pointer. For such chains, we kill the Blue pointer and color the
+	 * Red pointer Blue. The heap tuples are marked Blue in the second heap
+	 * scan. If we did not find any Red pointer to a Red chain, that means the
+	 * chain is reachable from the Blue pointer (because, say, the WARM update
+	 * did not add a new entry for this index). In that case, we do nothing.
+	 * There is a third case where we find more than one Blue pointer to a Red
+	 * chain. This can happen because of aborted vacuums. We don't handle that
+	 * case yet, but it should be possible to apply the same recheck logic to
+	 * find which of the Blue pointers is redundant and should be removed.
+	 *
+	 * For Blue chains, we just kill the Red pointer, if it exists, and keep
+	 * the Blue pointer.
+	 */
+	if (clear_warm)
+	{
+		lazy_reset_redblue_pointer_count(vacrelstats);
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_indexvac_phase1, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to remove %d row versions, found "
+						"%0.f red pointers, %0.f blue pointers, removed "
+						"%0.f red pointers, removed %0.f blue pointers",
+						RelationGetRelationName(indrel),
+						vacrelstats->num_dead_tuples,
+						(*stats)->num_red_pointers,
+						(*stats)->num_blue_pointers,
+						(*stats)->red_pointers_removed,
+						(*stats)->blue_pointers_removed)));
+
+		(*stats)->num_red_pointers = 0;
+		(*stats)->num_blue_pointers = 0;
+		(*stats)->red_pointers_removed = 0;
+		(*stats)->blue_pointers_removed = 0;
+		(*stats)->pointers_colored = 0;
+
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_indexvac_phase2, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to convert red pointers, found "
+						"%0.f red pointers, %0.f blue pointers, removed "
+						"%0.f red pointers, removed %0.f blue pointers, "
+						"colored %0.f red pointers blue",
+						RelationGetRelationName(indrel),
+						(*stats)->num_red_pointers,
+						(*stats)->num_blue_pointers,
+						(*stats)->red_pointers_removed,
+						(*stats)->blue_pointers_removed,
+						(*stats)->pointers_colored)));
+	}
+	else
+	{
+		/* Do bulk deletion */
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_tid_reaped, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to remove %d row versions",
+						RelationGetRelationName(indrel),
+						vacrelstats->num_dead_tuples),
+				 errdetail("%s.", pg_rusage_show(&ru0))));
+	}
 
-	ereport(elevel,
-			(errmsg("scanned index \"%s\" to remove %d row versions",
-					RelationGetRelationName(indrel),
-					vacrelstats->num_dead_tuples),
-			 errdetail("%s.", pg_rusage_show(&ru0))));
 }
 
 /*
@@ -1994,9 +2299,11 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 
 	if (vacrelstats->hasindex)
 	{
-		maxtuples = (vac_work_mem * 1024L) / sizeof(ItemPointerData);
+		maxtuples = (vac_work_mem * 1024L) / (sizeof(ItemPointerData) +
+				sizeof(LVRedBlueChain));
 		maxtuples = Min(maxtuples, INT_MAX);
-		maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));
+		maxtuples = Min(maxtuples, MaxAllocSize / (sizeof(ItemPointerData) +
+					sizeof(LVRedBlueChain)));
 
 		/* curious coding here to ensure the multiplication can't overflow */
 		if ((BlockNumber) (maxtuples / LAZY_ALLOC_TUPLES) > relblocks)
@@ -2014,6 +2321,57 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 	vacrelstats->max_dead_tuples = (int) maxtuples;
 	vacrelstats->dead_tuples = (ItemPointer)
 		palloc(maxtuples * sizeof(ItemPointerData));
+
+	/*
+	 * XXX Cheat for now and allocate the same size array for tracking blue and
+	 * red chains. maxtuples must have been already adjusted above to ensure we
+	 * don't cross vac_work_mem.
+	 */
+	vacrelstats->num_redblue_chains = 0;
+	vacrelstats->max_redblue_chains = (int) maxtuples;
+	vacrelstats->redblue_chains = (LVRedBlueChain *)
+		palloc0(maxtuples * sizeof(LVRedBlueChain));
+
+}
+
+/*
+ * lazy_record_blue_chain - remember one blue chain
+ */
+static void
+lazy_record_blue_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr)
+{
+	/*
+	 * The array shouldn't overflow under normal behavior, but perhaps it
+	 * could if we are given a really small maintenance_work_mem. In that
+	 * case, just forget the last few tuples (we'll get 'em next time).
+	 */
+	if (vacrelstats->num_redblue_chains < vacrelstats->max_redblue_chains)
+	{
+		vacrelstats->redblue_chains[vacrelstats->num_redblue_chains].chain_tid = *itemptr;
+		vacrelstats->redblue_chains[vacrelstats->num_redblue_chains].is_red_chain = 0;
+		vacrelstats->num_redblue_chains++;
+	}
+}
+
+/*
+ * lazy_record_red_chain - remember one red chain
+ */
+static void
+lazy_record_red_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr)
+{
+	/*
+	 * The array shouldn't overflow under normal behavior, but perhaps it
+	 * could if we are given a really small maintenance_work_mem. In that
+	 * case, just forget the last few tuples (we'll get 'em next time).
+	 */
+	if (vacrelstats->num_redblue_chains < vacrelstats->max_redblue_chains)
+	{
+		vacrelstats->redblue_chains[vacrelstats->num_redblue_chains].chain_tid = *itemptr;
+		vacrelstats->redblue_chains[vacrelstats->num_redblue_chains].is_red_chain = 1;
+		vacrelstats->num_redblue_chains++;
+	}
 }
 
 /*
@@ -2044,8 +2402,8 @@ lazy_record_dead_tuple(LVRelStats *vacrelstats,
  *
  *		Assumes dead_tuples array is in sorted order.
  */
-static bool
-lazy_tid_reaped(ItemPointer itemptr, void *state)
+static IndexBulkDeleteCallbackResult
+lazy_tid_reaped(ItemPointer itemptr, bool is_red, void *state)
 {
 	LVRelStats *vacrelstats = (LVRelStats *) state;
 	ItemPointer res;
@@ -2056,7 +2414,193 @@ lazy_tid_reaped(ItemPointer itemptr, void *state)
 								sizeof(ItemPointerData),
 								vac_cmp_itemptr);
 
-	return (res != NULL);
+	return (res != NULL) ? IBDCR_DELETE : IBDCR_KEEP;
+}
+
+/*
+ *	lazy_indexvac_phase1() -- run first pass of index vacuum
+ *
+ *		This has the right signature to be an IndexBulkDeleteCallback.
+ */
+static IndexBulkDeleteCallbackResult
+lazy_indexvac_phase1(ItemPointer itemptr, bool is_red, void *state)
+{
+	LVRelStats		*vacrelstats = (LVRelStats *) state;
+	ItemPointer		res;
+	LVRedBlueChain	*chain;
+
+	res = (ItemPointer) bsearch((void *) itemptr,
+								(void *) vacrelstats->dead_tuples,
+								vacrelstats->num_dead_tuples,
+								sizeof(ItemPointerData),
+								vac_cmp_itemptr);
+
+	if (res != NULL)
+		return IBDCR_DELETE;
+
+	chain = (LVRedBlueChain *) bsearch((void *) itemptr,
+								(void *) vacrelstats->redblue_chains,
+								vacrelstats->num_redblue_chains,
+								sizeof(LVRedBlueChain),
+								vac_cmp_redblue_chain);
+	if (chain != NULL)
+	{
+		if (is_red)
+			chain->num_red_pointers++;
+		else
+			chain->num_blue_pointers++;
+	}
+	return IBDCR_KEEP;
+}
+
+/*
+ *	lazy_indexvac_phase2() -- run second pass of index vacuum
+ *
+ *		This has the right signature to be an IndexBulkDeleteCallback.
+ */
+static IndexBulkDeleteCallbackResult
+lazy_indexvac_phase2(ItemPointer itemptr, bool is_red, void *state)
+{
+	LVRelStats		*vacrelstats = (LVRelStats *) state;
+	LVRedBlueChain	*chain;
+
+	chain = (LVRedBlueChain *) bsearch((void *) itemptr,
+								(void *) vacrelstats->redblue_chains,
+								vacrelstats->num_redblue_chains,
+								sizeof(LVRedBlueChain),
+								vac_cmp_redblue_chain);
+
+	if (chain != NULL && (chain->keep_warm_chain != 1))
+	{
+		/*
+		 * At no point can we have more than one Red pointer to any chain,
+		 * nor more than two Blue pointers.
+		 */
+		Assert(chain->num_red_pointers <= 1);
+		Assert(chain->num_blue_pointers <= 2);
+
+		if (chain->is_red_chain == 1)
+		{
+			if (is_red)
+			{
+				/*
+				 * A Red pointer, pointing to a Red chain.
+				 *
+				 * Color the Red pointer Blue (and delete the Blue pointer). We
+				 * may have already seen the Blue pointer in the scan and
+				 * deleted that or we may see it later in the scan. It doesn't
+				 * matter if we fail at any point because we won't clear up
+				 * WARM bits on the heap tuples until we have dealt with the
+				 * index pointers cleanly.
+				 */
+				return IBDCR_COLOR_BLUE;
+			}
+			else
+			{
+				/*
+				 * Blue pointer to a Red chain.
+				 */
+				if (chain->num_red_pointers > 0)
+				{
+					/*
+					 * If there exists a Red pointer to the chain, we can
+					 * delete the Blue pointer and clear the WARM bits on the
+					 * heap tuples.
+					 */
+					return IBDCR_DELETE;
+				}
+				else if (chain->num_blue_pointers == 1)
+				{
+					/*
+					 * If this is the only pointer to a Red chain, we must keep the
+					 * Blue pointer.
+					 *
+				 * The presence of a Red chain indicates that the WARM
+				 * update must have committed. But this index was probably
+				 * not updated during that update and hence contains just
+				 * the one original Blue pointer to the chain.
+					 * We should be able to clear the WARM bits on heap tuples
+					 * unless we later find another index which prevents the
+					 * cleanup.
+					 */
+					return IBDCR_KEEP;
+				}
+			}
+		}
+		else
+		{
+			/*
+			 * This is a Blue chain.
+			 */
+			if (is_red)
+			{
+				/*
+				 * A Red pointer to a Blue chain.
+				 *
+				 * This can happen when a WARM update is aborted. Later the HOT
+				 * chain is pruned leaving behind only Blue tuples in the
+				 * chain. But the Red index pointer inserted in the index
+				 * remains and it must now be deleted before we clear WARM bits
+				 * from the heap tuple.
+				 */
+				return IBDCR_DELETE;
+			}
+
+			/*
+			 * Blue pointer to a Blue chain.
+			 *
+			 * If this is the only surviving Blue pointer, keep it and clear
+			 * the WARM bits from the heap tuples.
+			 */
+			if (chain->num_blue_pointers == 1)
+				return IBDCR_KEEP;
+
+			/*
+			 * If there is more than one Blue pointer to this chain, we
+			 * could apply the recheck logic, kill the redundant Blue
+			 * pointer and convert the chain. But that's not yet done.
+			 */
+		}
+
+		/*
+		 * For everything else, we must keep the WARM bits and also keep the
+		 * index pointers.
+		 */
+		chain->keep_warm_chain = 1;
+		return IBDCR_KEEP;
+	}
+	return IBDCR_KEEP;
+}
+
+/*
+ * Comparator routines for use with qsort() and bsearch(). Similar to
+ * vac_cmp_itemptr, but right hand argument is LVRedBlueChain struct pointer.
+ */
+static int
+vac_cmp_redblue_chain(const void *left, const void *right)
+{
+	BlockNumber lblk,
+				rblk;
+	OffsetNumber loff,
+				roff;
+
+	lblk = ItemPointerGetBlockNumber((ItemPointer) left);
+	rblk = ItemPointerGetBlockNumber(&((LVRedBlueChain *) right)->chain_tid);
+
+	if (lblk < rblk)
+		return -1;
+	if (lblk > rblk)
+		return 1;
+
+	loff = ItemPointerGetOffsetNumber((ItemPointer) left);
+	roff = ItemPointerGetOffsetNumber(&((LVRedBlueChain *) right)->chain_tid);
+
+	if (loff < roff)
+		return -1;
+	if (loff > roff)
+		return 1;
+
+	return 0;
 }
 
 /*
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index d62d2de..3e49a8f 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -405,7 +405,8 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 						 root_tid,		/* tid of heap or root tuple */
 						 heapRelation,	/* heap relation */
 						 checkUnique,	/* type of uniqueness check to do */
-						 indexInfo);	/* index AM may need this */
+						 indexInfo,	/* index AM may need this */
+						 (modified_attrs != NULL));	/* is this a WARM update? */
 
 		/*
 		 * If the index has an associated exclusion constraint, check that.
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index 5c13d26..7a9b48a 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -347,7 +347,7 @@ DecodeStandbyOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 static void
 DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 {
-	uint8		info = XLogRecGetInfo(buf->record) & XLOG_HEAP_OPMASK;
+	uint8		info = XLogRecGetInfo(buf->record) & XLOG_HEAP2_OPMASK;
 	TransactionId xid = XLogRecGetXid(buf->record);
 	SnapBuild  *builder = ctx->snapshot_builder;
 
@@ -359,10 +359,6 @@ DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 
 	switch (info)
 	{
-		case XLOG_HEAP2_MULTI_INSERT:
-			if (SnapBuildProcessChange(builder, xid, buf->origptr))
-				DecodeMultiInsert(ctx, buf);
-			break;
 		case XLOG_HEAP2_NEW_CID:
 			{
 				xl_heap_new_cid *xlrec;
@@ -390,6 +386,7 @@ DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 		case XLOG_HEAP2_CLEANUP_INFO:
 		case XLOG_HEAP2_VISIBLE:
 		case XLOG_HEAP2_LOCK_UPDATED:
+		case XLOG_HEAP2_WARMCLEAR:
 			break;
 		default:
 			elog(ERROR, "unexpected RM_HEAP2_ID record type: %u", info);
@@ -418,6 +415,10 @@ DecodeHeapOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 			if (SnapBuildProcessChange(builder, xid, buf->origptr))
 				DecodeInsert(ctx, buf);
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			if (SnapBuildProcessChange(builder, xid, buf->origptr))
+				DecodeMultiInsert(ctx, buf);
+			break;
 
 			/*
 			 * Treat HOT update as normal updates. There is no useful
@@ -809,7 +810,7 @@ DecodeDelete(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 }
 
 /*
- * Decode XLOG_HEAP2_MULTI_INSERT_insert record into multiple tuplebufs.
+ * Decode XLOG_HEAP_MULTI_INSERT record into multiple tuplebufs.
  *
  * Currently MULTI_INSERT will always contain the full tuples.
  */
diff --git a/src/backend/utils/time/combocid.c b/src/backend/utils/time/combocid.c
index baff998..6a2e2f2 100644
--- a/src/backend/utils/time/combocid.c
+++ b/src/backend/utils/time/combocid.c
@@ -106,7 +106,7 @@ HeapTupleHeaderGetCmin(HeapTupleHeader tup)
 {
 	CommandId	cid = HeapTupleHeaderGetRawCommandId(tup);
 
-	Assert(!(tup->t_infomask & HEAP_MOVED));
+	Assert(!(HeapTupleHeaderIsMoved(tup)));
 	Assert(TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXmin(tup)));
 
 	if (tup->t_infomask & HEAP_COMBOCID)
@@ -120,7 +120,7 @@ HeapTupleHeaderGetCmax(HeapTupleHeader tup)
 {
 	CommandId	cid = HeapTupleHeaderGetRawCommandId(tup);
 
-	Assert(!(tup->t_infomask & HEAP_MOVED));
+	Assert(!(HeapTupleHeaderIsMoved(tup)));
 
 	/*
 	 * Because GetUpdateXid() performs memory allocations if xmax is a
diff --git a/src/backend/utils/time/tqual.c b/src/backend/utils/time/tqual.c
index 703bdce..0df5a44 100644
--- a/src/backend/utils/time/tqual.c
+++ b/src/backend/utils/time/tqual.c
@@ -186,7 +186,7 @@ HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer)
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -205,7 +205,7 @@ HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer)
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -377,7 +377,7 @@ HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -396,7 +396,7 @@ HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -471,7 +471,7 @@ HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid,
 			return HeapTupleInvisible;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -490,7 +490,7 @@ HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -753,7 +753,7 @@ HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -772,7 +772,7 @@ HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -974,7 +974,7 @@ HeapTupleSatisfiesMVCC(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -993,7 +993,7 @@ HeapTupleSatisfiesMVCC(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -1180,7 +1180,7 @@ HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin,
 		if (HeapTupleHeaderXminInvalid(tuple))
 			return HEAPTUPLE_DEAD;
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_OFF)
+		else if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -1198,7 +1198,7 @@ HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin,
 						InvalidTransactionId);
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
diff --git a/src/include/access/amapi.h b/src/include/access/amapi.h
index d7702e5..68859f2 100644
--- a/src/include/access/amapi.h
+++ b/src/include/access/amapi.h
@@ -75,6 +75,14 @@ typedef bool (*aminsert_function) (Relation indexRelation,
 											   Relation heapRelation,
 											   IndexUniqueCheck checkUnique,
 											   struct IndexInfo *indexInfo);
+/* insert this WARM tuple */
+typedef bool (*amwarminsert_function) (Relation indexRelation,
+											   Datum *values,
+											   bool *isnull,
+											   ItemPointer heap_tid,
+											   Relation heapRelation,
+											   IndexUniqueCheck checkUnique,
+											   struct IndexInfo *indexInfo);
 
 /* bulk delete */
 typedef IndexBulkDeleteResult *(*ambulkdelete_function) (IndexVacuumInfo *info,
@@ -203,6 +211,7 @@ typedef struct IndexAmRoutine
 	ambuild_function ambuild;
 	ambuildempty_function ambuildempty;
 	aminsert_function aminsert;
+	amwarminsert_function amwarminsert;
 	ambulkdelete_function ambulkdelete;
 	amvacuumcleanup_function amvacuumcleanup;
 	amcanreturn_function amcanreturn;	/* can be NULL */
diff --git a/src/include/access/genam.h b/src/include/access/genam.h
index f467b18..bf1e6bd 100644
--- a/src/include/access/genam.h
+++ b/src/include/access/genam.h
@@ -75,12 +75,29 @@ typedef struct IndexBulkDeleteResult
 	bool		estimated_count;	/* num_index_tuples is an estimate */
 	double		num_index_tuples;		/* tuples remaining */
 	double		tuples_removed; /* # removed during vacuum operation */
+	double		num_red_pointers;	/* # red pointers found */
+	double		num_blue_pointers;	/* # blue pointers found */
+	double		pointers_colored;	/* # red pointers colored blue */
+	double		red_pointers_removed;	/* # red pointers removed */
+	double		blue_pointers_removed;	/* # blue pointers removed */
 	BlockNumber pages_deleted;	/* # unused pages in index */
 	BlockNumber pages_free;		/* # pages available for reuse */
 } IndexBulkDeleteResult;
 
+/*
+ * IndexBulkDeleteCallback should return one of the following
+ */
+typedef enum IndexBulkDeleteCallbackResult
+{
+	IBDCR_KEEP,			/* index tuple should be preserved */
+	IBDCR_DELETE,		/* index tuple should be deleted */
+	IBDCR_COLOR_BLUE	/* index tuple should be colored blue */
+} IndexBulkDeleteCallbackResult;
+
 /* Typedef for callback function to determine if a tuple is bulk-deletable */
-typedef bool (*IndexBulkDeleteCallback) (ItemPointer itemptr, void *state);
+typedef IndexBulkDeleteCallbackResult (*IndexBulkDeleteCallback) (
+										 ItemPointer itemptr,
+										 bool is_red, void *state);
 
 /* struct definitions appear in relscan.h */
 typedef struct IndexScanDescData *IndexScanDesc;
@@ -135,7 +152,8 @@ extern bool index_insert(Relation indexRelation,
 			 ItemPointer heap_t_ctid,
 			 Relation heapRelation,
 			 IndexUniqueCheck checkUnique,
-			 struct IndexInfo *indexInfo);
+			 struct IndexInfo *indexInfo,
+			 bool warm_update);
 
 extern IndexScanDesc index_beginscan(Relation heapRelation,
 				Relation indexRelation,
diff --git a/src/include/access/hash.h b/src/include/access/hash.h
index 0af6b4e..97d9cfb 100644
--- a/src/include/access/hash.h
+++ b/src/include/access/hash.h
@@ -269,6 +269,11 @@ typedef HashMetaPageData *HashMetaPage;
 #define HASHPROC		1
 #define HASHNProcs		1
 
+/*
+ * Flags overloaded on t_tid.ip_posid field. They are managed by
+ * ItemPointerSetFlags and corresponding routines.
+ */
+#define HASH_INDEX_RED_POINTER	0x01
 
 /* public routines */
 
@@ -279,6 +284,10 @@ extern bool hashinsert(Relation rel, Datum *values, bool *isnull,
 		   ItemPointer ht_ctid, Relation heapRel,
 		   IndexUniqueCheck checkUnique,
 		   struct IndexInfo *indexInfo);
+extern bool hashwarminsert(Relation rel, Datum *values, bool *isnull,
+		   ItemPointer ht_ctid, Relation heapRel,
+		   IndexUniqueCheck checkUnique,
+		   struct IndexInfo *indexInfo);
 extern bool hashgettuple(IndexScanDesc scan, ScanDirection dir);
 extern int64 hashgetbitmap(IndexScanDesc scan, TIDBitmap *tbm);
 extern IndexScanDesc hashbeginscan(Relation rel, int nkeys, int norderbys);
@@ -348,6 +357,8 @@ extern void _hash_expandtable(Relation rel, Buffer metabuf);
 extern void _hash_finish_split(Relation rel, Buffer metabuf, Buffer obuf,
 				   Bucket obucket, uint32 maxbucket, uint32 highmask,
 				   uint32 lowmask);
+extern void _hash_color_items(Page page, OffsetNumber *coloritemsno,
+				   uint16 ncoloritems);
 
 /* hashsearch.c */
 extern bool _hash_next(IndexScanDesc scan, ScanDirection dir);
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 9412c3a..719a725 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -72,6 +72,20 @@ typedef struct HeapUpdateFailureData
 	CommandId	cmax;
 } HeapUpdateFailureData;
 
+typedef int HeapCheckWarmChainStatus;
+
+#define HCWC_BLUE_TUPLE	0x0001
+#define	HCWC_RED_TUPLE	0x0002
+#define HCWC_WARM_TUPLE	0x0004
+
+#define HCWC_IS_MIXED(status) \
+	(((status) & (HCWC_BLUE_TUPLE | HCWC_RED_TUPLE)) != 0)
+#define HCWC_IS_ALL_RED(status) \
+	(((status) & HCWC_BLUE_TUPLE) == 0)
+#define HCWC_IS_ALL_BLUE(status) \
+	(((status) & HCWC_RED_TUPLE) == 0)
+#define HCWC_IS_WARM(status) \
+	(((status) & HCWC_WARM_TUPLE) != 0)
 
 /* ----------------
  *		function prototypes for heap access method
@@ -183,6 +197,10 @@ extern void simple_heap_update(Relation relation, ItemPointer otid,
 				   bool *warm_update);
 
 extern void heap_sync(Relation relation);
+extern HeapCheckWarmChainStatus heap_check_warm_chain(Page dp,
+				   ItemPointer tid, bool stop_at_warm);
+extern int heap_clear_warm_chain(Page dp, ItemPointer tid,
+				   OffsetNumber *cleared_offnums);
 
 /* in heap/pruneheap.c */
 extern void heap_page_prune_opt(Relation relation, Buffer buffer);
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index 9b081bf..66fd0ea 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -32,7 +32,7 @@
 #define XLOG_HEAP_INSERT		0x00
 #define XLOG_HEAP_DELETE		0x10
 #define XLOG_HEAP_UPDATE		0x20
-/* 0x030 is free, was XLOG_HEAP_MOVE */
+#define XLOG_HEAP_MULTI_INSERT	0x30
 #define XLOG_HEAP_HOT_UPDATE	0x40
 #define XLOG_HEAP_CONFIRM		0x50
 #define XLOG_HEAP_LOCK			0x60
@@ -47,18 +47,23 @@
 /*
  * We ran out of opcodes, so heapam.c now has a second RmgrId.  These opcodes
  * are associated with RM_HEAP2_ID, but are not logically different from
- * the ones above associated with RM_HEAP_ID.  XLOG_HEAP_OPMASK applies to
- * these, too.
+ * the ones above associated with RM_HEAP_ID.
+ *
+ * In PG 10, we moved XLOG_HEAP2_MULTI_INSERT to RM_HEAP_ID. That allows us to
+ * use 0x80 bit in RM_HEAP2_ID, thus potentially creating another 8 possible
+ * opcodes in RM_HEAP2_ID.
  */
 #define XLOG_HEAP2_REWRITE		0x00
 #define XLOG_HEAP2_CLEAN		0x10
 #define XLOG_HEAP2_FREEZE_PAGE	0x20
 #define XLOG_HEAP2_CLEANUP_INFO 0x30
 #define XLOG_HEAP2_VISIBLE		0x40
-#define XLOG_HEAP2_MULTI_INSERT 0x50
+#define XLOG_HEAP2_WARMCLEAR	0x50
 #define XLOG_HEAP2_LOCK_UPDATED 0x60
 #define XLOG_HEAP2_NEW_CID		0x70
 
+#define XLOG_HEAP2_OPMASK		0x70
+
 /*
  * xl_heap_insert/xl_heap_multi_insert flag values, 8 bits are available.
  */
@@ -226,6 +231,14 @@ typedef struct xl_heap_clean
 
 #define SizeOfHeapClean (offsetof(xl_heap_clean, ndead) + sizeof(uint16))
 
+typedef struct xl_heap_warmclear
+{
+	uint16		ncleared;
+	/* OFFSET NUMBERS are in the block reference 0 */
+} xl_heap_warmclear;
+
+#define SizeOfHeapWarmClear (offsetof(xl_heap_warmclear, ncleared) + sizeof(uint16))
+
 /*
  * Cleanup_info is required in some cases during a lazy VACUUM.
  * Used for reporting the results of HeapTupleHeaderAdvanceLatestRemovedXid()
@@ -389,6 +402,8 @@ extern XLogRecPtr log_heap_clean(Relation reln, Buffer buffer,
 			   OffsetNumber *nowdead, int ndead,
 			   OffsetNumber *nowunused, int nunused,
 			   TransactionId latestRemovedXid);
+extern XLogRecPtr log_heap_warmclear(Relation reln, Buffer buffer,
+			   OffsetNumber *cleared, int ncleared);
 extern XLogRecPtr log_heap_freeze(Relation reln, Buffer buffer,
 				TransactionId cutoff_xid, xl_heap_freeze_tuple *tuples,
 				int ntuples);
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index b5891ca..ba5e94d 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -201,6 +201,21 @@ struct HeapTupleHeaderData
 										 * upgrade support */
 #define HEAP_MOVED (HEAP_MOVED_OFF | HEAP_MOVED_IN)
 
+/*
+ * A WARM chain usually consists of two parts, each of which is a HOT chain
+ * in itself, i.e. all indexed columns have the same value within each part,
+ * but a WARM update separates the two. We call these parts the Blue chain
+ * and the Red chain. We need a mechanism to identify which part a tuple
+ * belongs to. We can't just check HeapTupleHeaderIsHeapWarmTuple() because
+ * during a WARM update, both the old and new tuples are marked as WARM
+ * tuples.
+ *
+ * We need another infomask bit for this, so we reuse the infomask bit that
+ * was earlier used by old-style VACUUM FULL. This is safe because the
+ * HEAP_WARM_TUPLE flag will always be set along with HEAP_WARM_RED. So if
+ * both HEAP_WARM_TUPLE and HEAP_WARM_RED are set, we know that the tuple
+ * belongs to the Red part of the WARM chain.
+ */
+#define HEAP_WARM_RED			0x4000
 #define HEAP_XACT_MASK			0xFFF0	/* visibility-related bits */
 
 /*
@@ -397,7 +412,7 @@ struct HeapTupleHeaderData
 /* SetCmin is reasonably simple since we never need a combo CID */
 #define HeapTupleHeaderSetCmin(tup, cid) \
 do { \
-	Assert(!((tup)->t_infomask & HEAP_MOVED)); \
+	Assert(!HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_cid = (cid); \
 	(tup)->t_infomask &= ~HEAP_COMBOCID; \
 } while (0)
@@ -405,7 +420,7 @@ do { \
 /* SetCmax must be used after HeapTupleHeaderAdjustCmax; see combocid.c */
 #define HeapTupleHeaderSetCmax(tup, cid, iscombo) \
 do { \
-	Assert(!((tup)->t_infomask & HEAP_MOVED)); \
+	Assert(!HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_cid = (cid); \
 	if (iscombo) \
 		(tup)->t_infomask |= HEAP_COMBOCID; \
@@ -415,7 +430,7 @@ do { \
 
 #define HeapTupleHeaderGetXvac(tup) \
 ( \
-	((tup)->t_infomask & HEAP_MOVED) ? \
+	HeapTupleHeaderIsMoved(tup) ? \
 		(tup)->t_choice.t_heap.t_field3.t_xvac \
 	: \
 		InvalidTransactionId \
@@ -423,7 +438,7 @@ do { \
 
 #define HeapTupleHeaderSetXvac(tup, xid) \
 do { \
-	Assert((tup)->t_infomask & HEAP_MOVED); \
+	Assert(HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_xvac = (xid); \
 } while (0)
 
@@ -651,6 +666,58 @@ do { \
 )
 
 /*
+ * Macros to check if a tuple was moved off/in by old-style VACUUM FULL from
+ * the pre-9.0 era. Such a tuple must not have the HEAP_WARM_TUPLE flag set.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderIsMovedOff(tuple) \
+( \
+	!HeapTupleHeaderIsHeapWarmTuple((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED_OFF) \
+)
+
+#define HeapTupleHeaderIsMovedIn(tuple) \
+( \
+	!HeapTupleHeaderIsHeapWarmTuple((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED_IN) \
+)
+
+#define HeapTupleHeaderIsMoved(tuple) \
+( \
+	!HeapTupleHeaderIsHeapWarmTuple((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED) \
+)
+
+/*
+ * Check if tuple belongs to the Red part of the WARM chain.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderIsWarmRed(tuple) \
+( \
+	HeapTupleHeaderIsHeapWarmTuple(tuple) && \
+	(((tuple)->t_infomask & HEAP_WARM_RED) != 0) \
+)
+
+/*
+ * Mark tuple as a member of the Red chain. Must only be done on a tuple which
+ * is already marked as a WARM tuple.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderSetWarmRed(tuple) \
+( \
+	AssertMacro(HeapTupleHeaderIsHeapWarmTuple(tuple)), \
+	(tuple)->t_infomask |= HEAP_WARM_RED \
+)
+
+#define HeapTupleHeaderClearWarmRed(tuple) \
+( \
+	(tuple)->t_infomask &= ~HEAP_WARM_RED \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
@@ -810,6 +877,15 @@ struct MinimalTupleData
 #define HeapTupleClearHeapWarmTuple(tuple) \
 		HeapTupleHeaderClearHeapWarmTuple((tuple)->t_data)
 
+#define HeapTupleIsHeapWarmTupleRed(tuple) \
+		HeapTupleHeaderIsWarmRed((tuple)->t_data)
+
+#define HeapTupleSetHeapWarmTupleRed(tuple) \
+		HeapTupleHeaderSetWarmRed((tuple)->t_data)
+
+#define HeapTupleClearHeapWarmTupleRed(tuple) \
+		HeapTupleHeaderClearWarmRed((tuple)->t_data)
+
 #define HeapTupleGetOid(tuple) \
 		HeapTupleHeaderGetOid((tuple)->t_data)
 
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index d4b35ca..1f4f0bd 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -427,6 +427,12 @@ typedef BTScanOpaqueData *BTScanOpaque;
 #define SK_BT_NULLS_FIRST	(INDOPTION_NULLS_FIRST << SK_BT_INDOPTION_SHIFT)
 
 /*
+ * Flags overloaded on t_tid.ip_posid field. They are managed by
+ * ItemPointerSetFlags and corresponding routines.
+ */
+#define BTREE_INDEX_RED_POINTER	0x01
+
+/*
  * external entry points for btree, in nbtree.c
  */
 extern IndexBuildResult *btbuild(Relation heap, Relation index,
@@ -436,6 +442,10 @@ extern bool btinsert(Relation rel, Datum *values, bool *isnull,
 		 ItemPointer ht_ctid, Relation heapRel,
 		 IndexUniqueCheck checkUnique,
 		 struct IndexInfo *indexInfo);
+extern bool btwarminsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 struct IndexInfo *indexInfo);
 extern IndexScanDesc btbeginscan(Relation rel, int nkeys, int norderbys);
 extern Size btestimateparallelscan(void);
 extern void btinitparallelscan(void *target);
@@ -487,10 +497,12 @@ extern void _bt_pageinit(Page page, Size size);
 extern bool _bt_page_recyclable(Page page);
 extern void _bt_delitems_delete(Relation rel, Buffer buf,
 					OffsetNumber *itemnos, int nitems, Relation heapRel);
-extern void _bt_delitems_vacuum(Relation rel, Buffer buf,
-					OffsetNumber *itemnos, int nitems,
-					BlockNumber lastBlockVacuumed);
+extern void _bt_handleitems_vacuum(Relation rel, Buffer buf,
+					OffsetNumber *delitemnos, int ndelitems,
+					OffsetNumber *coloritemnos, int ncoloritems);
 extern int	_bt_pagedel(Relation rel, Buffer buf);
+extern void	_bt_color_items(Page page, OffsetNumber *coloritemnos,
+					uint16 ncoloritems);
 
 /*
  * prototypes for functions in nbtsearch.c
diff --git a/src/include/access/nbtxlog.h b/src/include/access/nbtxlog.h
index d6a3085..5555742 100644
--- a/src/include/access/nbtxlog.h
+++ b/src/include/access/nbtxlog.h
@@ -142,34 +142,20 @@ typedef struct xl_btree_reuse_page
 /*
  * This is what we need to know about vacuum of individual leaf index tuples.
  * The WAL record can represent deletion of any number of index tuples on a
- * single index page when executed by VACUUM.
- *
- * For MVCC scans, lastBlockVacuumed will be set to InvalidBlockNumber.
- * For a non-MVCC index scans there is an additional correctness requirement
- * for applying these changes during recovery, which is that we must do one
- * of these two things for every block in the index:
- *		* lock the block for cleanup and apply any required changes
- *		* EnsureBlockUnpinned()
- * The purpose of this is to ensure that no index scans started before we
- * finish scanning the index are still running by the time we begin to remove
- * heap tuples.
- *
- * Any changes to any one block are registered on just one WAL record. All
- * blocks that we need to run EnsureBlockUnpinned() are listed as a block range
- * starting from the last block vacuumed through until this one. Individual
- * block numbers aren't given.
+ * single index page when executed by VACUUM. It also includes tuples whose
+ * color is changed from red to blue by VACUUM.
  *
  * Note that the *last* WAL record in any vacuum of an index is allowed to
  * have a zero length array of offsets. Earlier records must have at least one.
  */
 typedef struct xl_btree_vacuum
 {
-	BlockNumber lastBlockVacuumed;
-
-	/* TARGET OFFSET NUMBERS FOLLOW */
+	uint16		ndelitems;
+	uint16		ncoloritems;
+	/* ndelitems + ncoloritems TARGET OFFSET NUMBERS FOLLOW */
 } xl_btree_vacuum;
 
-#define SizeOfBtreeVacuum	(offsetof(xl_btree_vacuum, lastBlockVacuumed) + sizeof(BlockNumber))
+#define SizeOfBtreeVacuum	(offsetof(xl_btree_vacuum, ncoloritems) + sizeof(uint16))
 
 /*
  * This is what we need to know about marking an empty branch for deletion.
diff --git a/src/include/commands/progress.h b/src/include/commands/progress.h
index 9472ecc..b355b61 100644
--- a/src/include/commands/progress.h
+++ b/src/include/commands/progress.h
@@ -25,6 +25,7 @@
 #define PROGRESS_VACUUM_NUM_INDEX_VACUUMS		4
 #define PROGRESS_VACUUM_MAX_DEAD_TUPLES			5
 #define PROGRESS_VACUUM_NUM_DEAD_TUPLES			6
+#define PROGRESS_VACUUM_HEAP_BLKS_WARMCLEARED	7
 
 /* Phases of vacuum (as advertised via PROGRESS_VACUUM_PHASE) */
 #define PROGRESS_VACUUM_PHASE_SCAN_HEAP			1
-- 
2.1.4

#92Robert Haas
robertmhaas@gmail.com
In reply to: Alvaro Herrera (#91)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Mar 8, 2017 at 12:14 PM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:

Alvaro Herrera wrote:

Here's a rebased set of patches. This is the same Pavan posted; I only
fixed some whitespace and a trivial conflict in indexam.c, per 9b88f27cb42f.

Jaime noted that I forgot the attachments. Here they are

If I recall correctly, the main concern about 0001 was whether it
might negatively affect performance, and testing showed that, if
anything, it was a little better. Does that sound right?

Regarding 0002, I think this could use some documentation someplace
explaining the overall theory of operation. README.HOT, maybe?

+     * Most often and unless we are dealing with a pg-upgraded cluster, the
+     * root offset information should be cached. So there should not be too
+     * much overhead of fetching this information. Also, once a tuple is
+     * updated, the information will be copied to the new version. So it's not
+     * as if we're going to pay this price forever.

What if a tuple is updated -- presumably clearing the
HEAP_LATEST_TUPLE on the tuple at the end of the chain -- and then the
update aborts? Then we must be back to not having this information.

One overall question about this patch series is how we feel about
using up this many bits. 0002 uses a bit from infomask, and 0005 uses
a bit from infomask2. I'm not sure if that's everything, and then I
think we're stealing some bits from the item pointers, too. While the
performance benefits of the patch sound pretty good based on the test
results so far, this is definitely the very last time we'll be able to
implement a feature that requires this many bits.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#93Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Robert Haas (#92)
Re: Patch: Write Amplification Reduction Method (WARM)

Robert Haas wrote:

On Wed, Mar 8, 2017 at 12:14 PM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:

Alvaro Herrera wrote:

Here's a rebased set of patches. This is the same Pavan posted; I only
fixed some whitespace and a trivial conflict in indexam.c, per 9b88f27cb42f.

Jaime noted that I forgot the attachments. Here they are

If I recall correctly, the main concern about 0001 was whether it
might negatively affect performance, and testing showed that, if
anything, it was a little better. Does that sound right?

Not really -- it's a bit slower actually in a synthetic case measuring
exactly the slowed-down case. See
/messages/by-id/CAD__OugK12ZqMWWjZiM-YyuD1y8JmMy6x9YEctNiF3rPp6hy0g@mail.gmail.com
I bet in normal cases it's unnoticeable. If WARM flies, then it's going
to provide a larger improvement than is lost to this.

Regarding 0002, I think this could use some documentation someplace
explaining the overall theory of operation. README.HOT, maybe?

Hmm. Yeah, we should have something to that effect. 0005 includes
README.WARM, but I think there should be some place unified that
explains the whole thing.

+     * Most often and unless we are dealing with a pg-upgraded cluster, the
+     * root offset information should be cached. So there should not be too
+     * much overhead of fetching this information. Also, once a tuple is
+     * updated, the information will be copied to the new version. So it's not
+     * as if we're going to pay this price forever.

What if a tuple is updated -- presumably clearing the
HEAP_LATEST_TUPLE on the tuple at the end of the chain -- and then the
update aborts? Then we must be back to not having this information.

I will leave this question until I have grokked how this actually works.

One overall question about this patch series is how we feel about
using up this many bits. 0002 uses a bit from infomask, and 0005 uses
a bit from infomask2. I'm not sure if that's everything, and then I
think we're stealing some bits from the item pointers, too. While the
performance benefits of the patch sound pretty good based on the test
results so far, this is definitely the very last time we'll be able to
implement a feature that requires this many bits.

Yeah, this patch series uses a lot of bits. At some point we should
really add the "last full-scanned by version X" we discussed a long time
ago, and free the MOVED_IN / MOVED_OFF bits that have been unused for so
long. Sadly, once we add that, we need to wait one more release before
we can use the bits anyway.
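[Editorial aside: the bit budget under discussion can be sketched in miniature. `t_infomask` is a 16-bit field, and each on-disk feature permanently claims a bit; the MOVED_OFF/MOVED_IN values below follow src/include/access/htup_details.h as I recall them, so treat them as illustrative rather than authoritative, and the helper is a toy, not patch code.]

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative values for the long-unused old-VACUUM-FULL bits;
 * the authoritative definitions live in htup_details.h. */
#define HEAP_MOVED_OFF 0x4000
#define HEAP_MOVED_IN  0x8000

/* Count the flag bits still unclaimed in a 16-bit infomask,
 * given a mask of all bits currently assigned. */
static int
free_bits(uint16_t assigned)
{
    int n = 0;

    for (int i = 0; i < 16; i++)
        if ((assigned & (1u << i)) == 0)
            n++;
    return n;
}
```

Reclaiming the two MOVED bits would, in this toy accounting, turn a fully assigned mask into one with two free bits, which is why freeing them (after the one-release wait) matters.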

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#94Robert Haas
robertmhaas@gmail.com
In reply to: Alvaro Herrera (#93)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Mar 8, 2017 at 2:30 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:

Not really -- it's a bit slower actually in a synthetic case measuring
exactly the slowed-down case. See
/messages/by-id/CAD__OugK12ZqMWWjZiM-YyuD1y8JmMy6x9YEctNiF3rPp6hy0g@mail.gmail.com
I bet in normal cases it's unnoticeable. If WARM flies, then it's going
to provide a larger improvement than is lost to this.

Hmm, that test case isn't all that synthetic. It's just a single
column bulk update, which isn't anything all that crazy, and 5-10%
isn't nothing.

I'm kinda surprised it made that much difference, though.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#95Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Robert Haas (#94)
Re: Patch: Write Amplification Reduction Method (WARM)

Robert Haas wrote:

On Wed, Mar 8, 2017 at 2:30 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:

Not really -- it's a bit slower actually in a synthetic case measuring
exactly the slowed-down case. See
/messages/by-id/CAD__OugK12ZqMWWjZiM-YyuD1y8JmMy6x9YEctNiF3rPp6hy0g@mail.gmail.com
I bet in normal cases it's unnoticeable. If WARM flies, then it's going
to provide a larger improvement than is lost to this.

Hmm, that test case isn't all that synthetic. It's just a single
column bulk update, which isn't anything all that crazy,

The problem is that the update touches the second indexed column. With
the original code we would have stopped checking at that point, but with
the patched code we continue to verify all the other indexed columns for
changes.

Maybe we need more than one bitmapset to be given -- multiple ones for
"any of these" checks (such as HOT, KEY and Identity) which can be
stopped as soon as one is found, and one for "all of these" (for WARM,
indirect indexes) which needs to be checked to completion.
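[Editorial aside: the two checking modes Álvaro describes can be sketched in standalone C. The names and the per-attribute comparison are illustrative stand-ins (attrs_equal() plays the role of heap_tuple_attr_equals()); the point is that "any of these" checks can stop at the first modified column, while "all of these" checks must pay for every interesting column.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static int ncompares;           /* counts attribute comparisons, the costly step */

/* Stand-in for the expensive per-attribute datum comparison. */
static bool
attrs_equal(const int *oldv, const int *newv, int attno)
{
    ncompares++;
    return oldv[attno] == newv[attno];
}

/* "Any of these" (HOT, KEY, Identity): stop at the first change found. */
static bool
any_changed(uint32_t interesting, const int *oldv, const int *newv, int natts)
{
    for (int i = 0; i < natts; i++)
        if ((interesting & (1u << i)) && !attrs_equal(oldv, newv, i))
            return true;        /* short-circuit */
    return false;
}

/* "All of these" (WARM, indirect indexes): must check to completion,
 * returning one bit per modified attribute. */
static uint32_t
which_changed(uint32_t interesting, const int *oldv, const int *newv, int natts)
{
    uint32_t changed = 0;

    for (int i = 0; i < natts; i++)
        if ((interesting & (1u << i)) && !attrs_equal(oldv, newv, i))
            changed |= 1u << i;
    return changed;
}
```

In the bulk-update case discussed above, the old code took the short-circuit path as soon as the second indexed column was seen to change; the patched code effectively runs the second loop, which is where the extra 5-10% goes.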

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#96Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Alvaro Herrera (#91)
Re: Patch: Write Amplification Reduction Method (WARM)

@@ -234,6 +236,21 @@ index_beginscan(Relation heapRelation,
scan->heapRelation = heapRelation;
scan->xs_snapshot = snapshot;

+	/*
+	 * If the index supports recheck, make sure that index tuple is saved
+	 * during index scans.
+	 *
+	 * XXX Ideally, we should look at all indexes on the table and check if
+	 * WARM is at all supported on the base table. If WARM is not supported
+	 * then we don't need to do any recheck. RelationGetIndexAttrBitmap() does
+	 * do that and sets rd_supportswarm after looking at all indexes. But we
+	 * don't know if the function was called earlier in the session when we're
+	 * here. We can't call it now because there exists a risk of causing
+	 * deadlock.
+	 */
+	if (indexRelation->rd_amroutine->amrecheck)
+		scan->xs_want_itup = true;
+
return scan;
}

I didn't like this comment very much. But it's not necessary: you have
already given relcache responsibility for setting rd_supportswarm. The
only problem seems to be that you set it in RelationGetIndexAttrBitmap
instead of RelationGetIndexList, but it's not clear to me why. I think
if the latter function is in charge, then we can trust the flag more
than the current situation. Let's set the value to false on relcache
entry build, for safety's sake.

I noticed that nbtinsert.c and nbtree.c have a bunch of new includes
that they don't actually need. Let's remove those. nbtutils.c does
need them because of btrecheck(). Speaking of which:

I have already commented about the executor involvement in btrecheck();
that doesn't seem good. I previously suggested to pass the EState down
from caller, but that's not a great idea either since you still need to
do the actual FormIndexDatum. I now think that a workable option would
be to compute the values/isnulls arrays so that btrecheck gets them
already computed. With that, this function would be no more of a
modularity violation than HeapSatisfiesHOTAndKey() itself.
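[Editorial aside: a minimal sketch of the shape being proposed, assuming the caller runs FormIndexDatum and hands the recheck function precomputed values/isnull arrays, so the AM only compares them against what the index tuple stores. All names here are hypothetical stand-ins, not the patch's actual code; `Datum` is modeled as a plain integer.]

```c
#include <assert.h>
#include <stdbool.h>

typedef long Datum;             /* toy stand-in for PostgreSQL's Datum */

/* Compare precomputed heap-derived index values against the values
 * stored in the index tuple. Any mismatch means the index entry does
 * not describe this heap tuple version, so the recheck fails. */
static bool
recheck_values(int natts,
               const Datum *heap_values, const bool *heap_isnull,
               const Datum *itup_values, const bool *itup_isnull)
{
    for (int i = 0; i < natts; i++)
    {
        if (heap_isnull[i] != itup_isnull[i])
            return false;
        if (!heap_isnull[i] && heap_values[i] != itup_values[i])
            return false;
    }
    return true;
}
```

With this division of labor, the executor-side work (forming the datums) stays with the caller, and the AM-side function needs no EState at all.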

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#97Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Alvaro Herrera (#91)
Re: Patch: Write Amplification Reduction Method (WARM)

After looking at how index_fetch_heap and heap_hot_search_buffer
interact, I can't say I'm in love with the idea. I started thinking
that we should not have index_fetch_heap release the buffer lock only to
re-acquire it five lines later, so it should keep the buffer lock, do
the recheck and only release it afterwards (I realize that this means
there'd be need for two additional "else release buffer lock" branches);
but then this got me thinking that perhaps it would be better to have
another routine that does both call heap_hot_search_buffer and then call
recheck -- it occurs to me that what we're doing here is essentially
heap_warm_search_buffer.

Does that make sense?

Another thing is BuildIndexInfo being called over and over for each
recheck(). Surely we need to cache the indexinfo for each indexscan.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#98Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Alvaro Herrera (#96)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 14, 2017 at 7:17 AM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

@@ -234,6 +236,21 @@ index_beginscan(Relation heapRelation,
scan->heapRelation = heapRelation;
scan->xs_snapshot = snapshot;

+     /*
+      * If the index supports recheck, make sure that index tuple is saved
+      * during index scans.
+      *
+      * XXX Ideally, we should look at all indexes on the table and check if
+      * WARM is at all supported on the base table. If WARM is not supported
+      * then we don't need to do any recheck. RelationGetIndexAttrBitmap() does
+      * do that and sets rd_supportswarm after looking at all indexes. But we
+      * don't know if the function was called earlier in the session when we're
+      * here. We can't call it now because there exists a risk of causing
+      * deadlock.
+      */
+     if (indexRelation->rd_amroutine->amrecheck)
+             scan->xs_want_itup = true;
+
return scan;
}

I didn't like this comment very much. But it's not necessary: you have
already given relcache responsibility for setting rd_supportswarm. The
only problem seems to be that you set it in RelationGetIndexAttrBitmap
instead of RelationGetIndexList, but it's not clear to me why.

Hmm. I think you're right. Will fix that way and test.

I noticed that nbtinsert.c and nbtree.c have a bunch of new includes
that they don't actually need. Let's remove those. nbtutils.c does
need them because of btrecheck().

Right. It's probably a left over from the way I wrote the first version.
Will fix.

Speaking of which:

I have already commented about the executor involvement in btrecheck();
that doesn't seem good. I previously suggested to pass the EState down
from caller, but that's not a great idea either since you still need to
do the actual FormIndexDatum. I now think that a workable option would
be to compute the values/isnulls arrays so that btrecheck gets them
already computed.

I agree with your complaint about modularity violation. What I am unclear
is how passing values/isnulls array will fix that. The way code is
structured currently, recheck routines are called by index_fetch_heap(). So
if we try to compute values/isnulls in that function, we'll still need
access EState, which AFAIU will lead to similar violation. Or am I
mis-reading your idea?

I wonder if we should instead invent something similar to IndexRecheck(),
but instead of running ExecQual(), this new routine will compare the index
values by the given HeapTuple against given IndexTuple. ISTM that for this
to work we'll need to modify all callers of index_getnext() and teach them
to invoke the AM specific recheck method if xs_tuple_recheck flag is set to
true by index_getnext().

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#99Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Pavan Deolasee (#98)
1 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 14, 2017 at 5:17 PM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

After looking at how index_fetch_heap and heap_hot_search_buffer
interact, I can't say I'm in love with the idea. I started thinking
that we should not have index_fetch_heap release the buffer lock only to
re-acquire it five lines later, so it should keep the buffer lock, do
the recheck and only release it afterwards (I realize that this means
there'd be need for two additional "else release buffer lock" branches);

Yes, it makes sense.

but then this got me thinking that perhaps it would be better to have
another routine that does both call heap_hot_search_buffer and then call
recheck -- it occurs to me that what we're doing here is essentially
heap_warm_search_buffer.

Does that make sense?

We can do that, but it's not clear to me if that would be a huge
improvement. Also, I think we need to first decide on how to model the
recheck logic since that might affect this function significantly. For
example, if we decide to do recheck at a higher level then we will most
likely end up releasing and reacquiring the lock anyways.

Another thing is BuildIndexInfo being called over and over for each
recheck(). Surely we need to cache the indexinfo for each indexscan.

Good point. What should that place be though? Can we just cache them in the
relcache and maintain them along with the list of indexes? Looking at the
current callers, ExecOpenIndices() usually cache them in the ResultRelInfo,
which is sufficient because INSERT/UPDATE/DELETE code paths are the most
relevant paths where caching definitely helps. The only other place where
it may get called once per tuple is unique_key_recheck(), which is used for
deferred unique key tests and hence probably not very common.

BTW I wanted to share some more numbers from a recent performance test. I
thought it's important because the latest patch has fully functional chain
conversion code as well as all WAL-logging related pieces are in place
too. I ran these tests on a box borrowed from Tomas (thanks!). This has
64GB RAM and 350GB SSD with 1GB on-board RAM. I used the same test setup
that I used for the first test results reported on this thread i.e. a
modified pgbench_accounts table with additional columns and additional
indexes (one index on abalance so that every UPDATE is a potential WARM
update).

In a test where table + indexes exceeds RAM, running for 8hrs and
auto-vacuum parameters set such that we get 2-3 autovacuums on the table
during the test, we see WARM delivering more than 100% TPS as compared to
master. In this graph, I've plotted a moving average of TPS and the spikes
that we see coincides with the checkpoints (checkpoint_timeout is set to
20mins and max_wal_size large enough to avoid any xlog-based checkpoints).
The spikes are more prominent on WARM but I guess that's purely because it
delivers much higher TPS. I haven't shown here but I see WARM updates close
to 65-70% of the total updates. Also there is a significant reduction in WAL
generated per txn.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

Moderate_AV_4Indexes_100FF_SF1200_Duration28800s_Run2.pdf (application/pdf)
[binary PDF attachment data omitted]
��[8�SVA�#��cY�(>��b��>q�c"4]�$���dH��"�-�� ��g��S,����B����rQ��^"9���C_J:q�0�
�$�QY&�V���c�L���w�m����WH5�1���8�h���������?����y���%QH�#BT-���Gk�
cH;�NT*�D-�����|]}��U�����
�5��h��K
	%�mx��%1���fRw����-�� %a��ira�W�=0�M�9�@��� �����N\����qt�c[�F�_�U.�Y��))��g�������p��jI�Fb~��\�=U;$�S)�K}��������UQ71�R4�M���'������WQ���N�!	r�P��qR�(tn NNO*�sc��m%�Z*�n�������A�}	��N3����D$T=�����d
����L��GE�J'}#��b�����	D��F8�c���t�:r)����\b�(�vW�u�%|�J�lGqCZ�����P�k
�����kG>������:�%�[8G.������L�&/\���3��8�����m�'��f+����P���?��f���:2��	��fJb�������It�������?���z�hT�:��+��#F���?� ���Y�EmZ ��9���,j��w��>��}��9
�9K��k`�y�2uc��q�j���Y��o��Z��
�q>a��|��u�]Gh�p������Oy��I���W]�8������ ��g��3�@����p��4w0���z��\���L����$����0��T�������>fS�<�;���sZ�!y��![2Q��*-�[nf������k8:O����=^#�G�5���X�7�T����*����
�L�=^c�R���$n�^d��N�4��)�=��R��L�5*(��f�p���y�>�4q�����4r/��l,Hc�z5��RK�|�������`K�������#}���R�����������9=dc���,MI;K-gb)J��v�Z������oa�:?�T�EKAV����)������ue)�����N��U���l,ya)U���Shk���R�D��,E2K���t��R��~����V�v��
}�r5��$��-,u|�Y�D7�4]������!l,EP+K[�gT3�z��>�r]��R���
�O��a�2W�5XjIK������A�Z�6���,e�}a)#��R�u����s�9��le)���4�f���\�r�����:m,�o�X�y8��a���2�"����yK�K�ake)����,,�KA6�����*�,��7���/,��2&\t��i.�q�\J.W���l,����
��l,ya)u�Y��2���Ia�K����jei��������R�j�K�`egi!+K{����������lc)��R���Y�M%KA6�z�>���,XY:���,M9KA^Xj�V��l,�u/,EKAV��a.��^YJ�h�x�Dggi!+K{���(�j�R�+K�A����n,�Gw�"�u.U�+KA^X���+K��q�K2�t`�� KA^XJ6�R���6og)B�Y
���u��R���=�?�~��[[pt��4���a�+y���h�ME]������G��*>=���.i@=�B8#a�	#T��	���UN���}K�A8~��G���������	XC�b����	�~�P��aN,1}�cs�e�p�p9|c�8%��� ����\��R��SF6���f�C�Z��r.���,rZ��n��m~,�`PB�	�����Mfp_��h����b����8��,K�g�Hv/~���+��T�:�	�<w��O.��B��E�B��4q��a�Au��y��
��MjX�|�������uO��CP!U�}�(��XX�b�����G������m����m�A��5�=�X,���c|7%q������z��y=��z?���K�9f� ���|8nJIL@��@��b�v�dr�dj�
Y>F�-��%B~U��^q����*`���������X���l�b��X�x|��V
 v��^������MW�����a�9�����X�/�'��u� K2M5�w
��~�9c�>�dC�
b���zYS��p����D1�pS��c�r�����)z��l� 8���P
:F�������^�<_(n��|��,�`�L�|0�A����� ��(s���L���pKgG��;�>�00�&'��O��i�����H�-���A���������:��jG���������%W�I��1(���u��������{�
�fl��/��^���z�j#�$�����E���	L��O4Se�"6�TEy�W�,b�������C�:�@���1��E�5�.:�g��
g7>���*7}�S>��*�O����.U�:���%B�U:��q.��:*	����qI5`���k~��������Rz��lQ����v>�[)7��0�� SodT�M �4��{�T��-�o��zy(���]�U�@8R������K��/Qr�>�U�G��"*!��v����E�hR%�P��������h\�U��e`[0���
��:��^�J�\9���{�F�*�����uE��p���l�t�Yf<���c2�_"h�)"��57�����Kc���YZMkX��=X)�5����!�XS��-���pt�^4��w�X��P�� l
'�I���4�������S��)�*Gr��� ����0�����}�g�I9w��9���y(Ypk�l��a;C�0/�R<nP�1!8rj a�X���h��TqB���ec����rs���,��P��
���i�"��|���p��*���a�!����XG!����T�Z�Y����A��J��"a'��#�Qm`������|H���������\�N�� .�[��L|e��z�o&GG�j�4���SH�!��v�Ov��o�:�V�JN���o`����)>b��(x��M~2�P/6��j��#���[�tb�����S2����O�����X}����C���\,p�M���e.��u�sXe1��������
��S��q��g6J0fA��j`d�d7[e����b�T�H��F��������a=���1/ABa���L�f"�]:�C4�C~������x��P��xl���"�v~���~z�kl��h�q��<D��T8aV?��JH����U�l���@3L�F��5��9�7~�n�2�t�%;�1Cdtc�sZc�3'P��{��A������&6�|��
�Io�W�8
#O�� n�ie��{s1GpS�h�����
��X�(ED�t�v�|R�|������+/�	��-W�G,��|�Vu.����f����tu����:�m_$���� n�oL��fr�[G�$�I@W�L,�#�ic\{,��n�V�Iu�]E�p�6&��sKl�QO0F�6�j
H6f
q��
]0`�Y�����L]]g,�*@WR��F�=�A��)Si1����0<�*���I�P�]]*&�b��	�v��*�c2Fs�i|���@�8U|o���z���?�zF��i^��4��`�3�{�Y�����ZY���J���8R�3���%���#���$�u���&~�":����>����>��?y�����_'��L!�:�������?�B�b��c�7s=�',q/L�y�?=�!�l\TD�_{n��>�$E�2�`c��(-�O����8�X�I�#m�|�N?CKE�CS��2�~�`��!�{=G�����_�M���?S)���w�H��x^�����9����������?��D�~�
����'�e��_)��])y��������~�z�����q<7'����7�D�����V���/���M��|/
�C2�@�M#�W��=��L}�w��y�_�Ma���;0QZ�G*57�H34��,�%.��
����\�(�;�xK����Q�f�����S�����Y"��<)�����j����@���~_}+L��ss2
�~d���&������J2�����,��D+�Z�M#�f���N���������R���1[�F@��L#LbaT#��0>���$m#�������#�F��/������ �4')��N����k����$SL�6W����'P���`O�	S�����Hq����T���Z��M-������])E�F*�?>��G���cS
��d���E�.*�?��?R�>)|e��S�1>T!C��������"{��=[*?R���z/{_��19��y�)*�o� ?��8�g���'���������n6��RTw�Z��_�KM����J�S��O�-S[N���]T�Ok�O�g��R�#���O���	#����������,�����lp��R[�/=�r�Pu@O��-�Qky����#���pu�����p<��P�8���X/3&��������Wo��������UL�{"��<,k�.�����[!-��w��<����0=7'�y�U��R2��:������H��LD���Fp1}�pWe[7?|l;1��-��4@]�k��7�:��F*��d��>����t��Et�Dv�����\�F��������&~?�M���D@����g�����$�(��V}�����r��L�T_��N~(�w��Q�9E�-����dc}!���A2�z�w�D�����d<����Wc$�����=���s�W�/��x;���3�Sn�`��+��"�)��sT�H��#��Eh�r���$�&���"������}d<~����'�7����9q$��9��|"k#_�N�M�z2�MHOOk����Q���>�=�r-�=T?3�����I&S{�H�����T[2@s�S��g�J�Y�x��0R�QfY�_�T�/o^i9��C�S�����Ba�R?��\5���$�/�����Q��+j��������������V8c�/��LM�*���6���t��i|������y�%y;�}O~T���6�Hd��-�64��B|KU�������?J��m���dq����
��]��&�F$�pP�x��2)�X���;��b�L�kD^���!�)E�������1+���)E�d,j�Y	���D%p1R���]Xnx��|�u��*�?�+�G���*$��<�����QX.�X>+�8����Gd���
�TzN��T^_��qN��b�K����y�r�����6�h��Q�
�]1�����.[��kH��T�Z��{*��dz;�E]Y-fh��z�6�S��?i������J��o;QK��2��O)�H���>����T?c<� ��t�*�	��+(*~~��S�V��H=����u���f��i�!Wv�M��/���Wr���Ez�j�����s��{-��\im�R{��#;���"JR����C�}����(�R���1�EC�c4�0����O5/E���!I�:��S]��WcF����XC�����e������L��)�����-J���`�i�A+�~�\J��(����<G��c�f-�T���QvZ�,�K��'��O,�wo������dd'Og�����i�j<D�BE�	y�_>=��B�M�����&X�(��j��X�M���G��h'���i/���M)���{SD{7�6^�Zo���R{�D>�����1��q��������o�R��K�������L��������?9�%�����I+���0#��xjJQ��\���}��	��X�uO�%&L�,���4C��	�V��L��!�)����/�wQ�,�R����]��e�<H�<��E#���	�&�F����?��#���_�<�;�p�K������l��R'W�v���M�������4w<7'�|
�������(����s2M�Wv�����obdKx��6F�M��I�iBd�����cS*
�[�?�b����x�)�H��}o��VD�mY���Sq�M~�zV0hc���#��V�[y�u����RyGb����/ORDF{<�X�3�W�����(�=��2R�;c��"����!��C�J�������M-Rs�r��P����c��cT�	<���H��/����PW�������w�Wq��q�!3m'��V�C{����s_K������R|�oc�c���c�L�����q�d�����(1d��'�Q;�=5����[����N�o��G4�l�����A�3����5<M#h����#w{@��'-D��~�/���L��e-����k�?
�\	�����M+�������n�#��q�oo�H���/6�7%!�CI�����-6�,�����\~2� �J�#��vV���T���{��
'�Yo�)��7:5���dsd��K���|#�&{���)#��O��������i���8����3��1�>�|�<���=�)=�7�������V�I��H�����0c����i���>e���W��ugH:�W�M�q#���#���_�<_��C�K��
?	��ArBu�]��IB�������l(�xlJ�j|���G��]d#X�O�����:����q|e[\T���B��l��-�n�������i���g���+��(j:=�L2mh���3��u�==�n��do��>�Y�L��`���t���L��F����!�)�����?�e,P��F�S��UE2:n�|��z6��Bc�_=�~�qi��'�M�I�;G�t�G�������8���X?u9�\5/�M)���������k�c�E$��2x|�{��V���8m\#_�/n%P�5E�?����4�|/l�@��#,o"5V��9��f��1>�"UD���.�0�`�9V?�I�>�J���m1/��S Y�"��3������u�	������(w�����]�����K��a��ko�u�&�	)\Y4K�}Oo���X���$>v^v[��mSqF�*�s���	�7��T��T-F����iF���~6���~9��l��H��S�]�e����a��V�Z���Om�!��1E�LKz�ymzlJ������k6{CT��L�IYy�����V�?��M3�
��H!�-���_�L��ss�d��k)������L�&�6�������8('������k�q{����������J���������7��}K�'�S�� �
�uu#LQ�+����_Y����:N���cT�G����xO������_\nf�=�R�'�J�����b�k	+>%����D/�@�w$Roa�|���
J���j�$�r�x#��A�X��lBK�[��'��x��S���A��v���	�U�:����Q���\���x�u�9�vd����	I_mcbK��|s�r>������j}�C�s	"U�����u���0X��be�&���@������
���	&�<���$���PY%R�#��r��#l���+ ���+!U�dp|'MH�����T<:o�-��RT�����Ao��o�����XJ��V�?MO�#����NjOR���d�~Q]��dK�G*��R���(��{�����-�V�>����oU��*Kw&?I~���16�xs���"���`����Hm�R����"�$i4��/e$����~��SI��T~����T�i��u�H����f��S��x�-
z�
��J~�6��'���)��_��62�=<�4V/m	�����c%�li�6��.���7����g.^{$�s���w������=�j�c�}�9�,�K*B��B#�I^�����$/�p�~}SJFf=i��H�q&�uKRJK���\���z��V���?�Z,���s2�H��ry=o#�� W���i��_zB��bRF������Go�Qz6Z>�x5#��M�����L�8��W�R{S,��qR�KZwVk�2�_���p�t����T����)�c��"�����S���!��'	�H���"s�*��L.���d�&=��R������|����U�>��*�������y��x��#Y86�!�Qp���|��J�Y7��>W��-��/N�*S
*?�P��W��O�h=�o��f��f�6^W1i����R���.
-���$��r&��\��k�I����
$o���+P^��P�FL��:3��M���a�/E4hJ!�JiS}$����B��.�9����6i���!m���u?R��,-[hd��VB�I����Y�h��f��c�iJ�����Fh����B������{��PS,�H|J��V(�����Z2/��K�3��b����m�V%+����m�d���� C��n�It��X�����@
>U��>�p�!�p��9w~z�6]����T����z��>�t�E�e�L)�����w�O�����<�Q_����d�����$0`��t?}j�����a����=r����
�2��(%�i�5�7����r�CGB���X@T����P��?�f�t�{��z�y����I|���U��W]�o^��f�-
C���"n������3�E1���	�V��������?7%!�s�j�~*f����m��B2�0������p ���!L�Wf�8��
�����j���A��������i�������iR��������e��[q�
��N�5��2����3��t��=�w���_{cKU�i��^{�����`+!
�$��������_Bb[	��QU?�)���]��O������<1,.o~;�W�Et�� 7�1?f�O�Cg9��1�=���*o~�X���n�v��w�����������G�&$�Y�$���$o*�h��������c�HK����������+��V�;!�������H.�Kb����o��A��c�9Z�Sa������m����W��-�/%�heZ�~QL������H,�5�Z+5S�S�������h�Yi{]c�V��9�q��l	�r�f1��Xy���(��5�������zI��
��\���SUy~��GL��AE�����1�4����?�Hcv�)��kF�TF	����5��j j��F/%!Y��@����>��#nz��dK:���������[���������|��Tn�+w�9��[��D�i[BB���~��M��G(������}_C]G8�?��Pb!���1�?�c��Q��[;$6<����4
����3��dP�>�2�uH�R��v2r�����p%%��"|���u������i�n��Z9�d���Fja9�\j���p�{:7�l;���5���c��+�{��!d���E��O� �;����^3f���A)��Sy�4��l��L�U$pG7����������9��.y�������%��n�U�Q��|�K*wZo���2����B�v���������."��������l<���E��Q�+���^����Z�s6Ly���q�z&����7�x"���'�O�
h�����C��=D.�U|��;)6��='U�\���97J���,����r4�+IGU$��~�$�J�A���-W���X]���|��;�2����!�l�
�8EHw�k���@������O���@N��Hz��3j�DJI�\�����+*�D������0�s����� ���T��R������F�@}����{��:�Eo�/��lq��t��k��d2��,������!��o����A�Z��&����'T��e�}��t���l7��W��j;�8l��^�������7X%��/�^7*��[�������A�r�BKW�(�R~��?0�?yL�&��WLa�*�����@��%��$-�d�c�WR������M�+���7���1�lP=��f�U��y�GI���ak��	��|�b��`�D�m�����6�����GT�-p�����)b��f��/vD�����6�cXY^��S�-8�����s�WFV�8��i�o��kl/+������u�EI�p���9V�����E2���e��)d_ J�A]�Uj�8.:��;0#��f�+�4Y+�\�0v��t~,��{�,�����;V�������J�D����A6���n,f��}cUk����`����������\H��1����G��n���[��]f���v�o��6����	��@�n[.�
�ph�=�������(0	Q�����SNL��~U�!�.�'F��
��`����$����XH��C�2�o3���n��Hx���#��#�x�'��[��Z�W�p!�h��_0���M�`;�x��[����F�k���xY��y��D
\zt����\8�C��bs�B	�e�����k�`�SW5|*@��o�I*�$fW PH��A���uw%�(� �<l�����^L6Ux`� ���P��ySo��a�`�C��3EV{�1,�cEh����c�0M��L������D�e��u�}f�8�k�B���/�Ji��NH��LEt�w.9z�'62{�u��#|:��6�]��Uuq�jaB����)^|(wlG����u�[(����<"L,������c���E��o�2H��)���E2��������w`���������hv9xAx��?�V��/����E��.s���������A�jl����� �,�1��;|lI�P��s�J���F)�����7�$z�;����3�y��m��
(� Y09Y�2^������`���*�]���$�$6E�XF�T����������#H,���+��=d����E�����L�V�������d�S�.������c��;�u��9�� �N����� [����	k��0I�+N7T>���Pj| �\�?d�qT��,j�b���(V�P��E.�_X$2����L?�	S����7�F	D{�bs�67d����/J��	�z�3E�2qHk����<(����F�a�	��|���<E �1����3#$��-Y'�y���G����n��s�eBgIl2���[z{�-�\6�M��D��,�:.H]�_��\�����o�T
�>���3jY��2�7����#���58�0�y-<�P��$;��zN���	Yl����%�&~?��UB����T�rZ/�Gw��u����t�4���=o�fn�"���G���z��P���LP&����U����>���
r��;���7�>"���];b�����z�%g��c���?/yM�?�L�5�c{��v��m��H�oH��#��P#����$��&�=:&�g������SbQ������m���"��P������CB�� ���8��u"������wF�iF� ��*�� ��)s]	�g^�����s|Pe�/]th2����wO���< �bZ��P-���O�S�~����f��&���s;����Q)��d� ^w\V�����V�TX����~qn�Q�y��o]�r�(>)QK!��c�a"gGC���-$&
$� <���T��X?�0��\��r���UX�����F��9���i��,�7a�?w�7`�y�b���i���?��o�L���E�����20��s��C]P��;�r�3`�'x&��&#tW]^p(��A�a����b�U��`�9�7�$�r�bG�XF�.Cy0q�Z��(z�,m!a>0�S"�op�=�����OX=[s,s��x�d��-����X3����FW<��Or�x%����������������0�?-Kx�s���/Z*���}A[g�������"3�R.�����}dQz������Ix��c���=���#���!?j������������K���� Q�:*=��P6�X��'���-�_t��H�opi}?�n���Z����U�	��,s�I�>��N���
�]7��u�i�U�t*B��/��n-��%(L�W�^SPO�l������5�������*��.+�ve�������}FX��tx3u'V�
��ls+��BJ��0�w��@�)q�����������Wf�a����m%����t�9��������
��|j{�|LM0�@V�.{�S���5b��W�WF��-�G�t��cu)F�������x�PXN��pP��������M��+�T�S�>�	�DA�,��pI�2�@7#li���Z$E���A��&!����@[�6�&D��0�� �����(��(G&8�����m��9���*�O�]:�\ln-�
�C�J�c5���jC}��`���1��-��s�mP%��w=r�2���_������*�e�r���%Q����#��.��ZG���x�\�X�j<�6E��0?V_���^��8y��q��{����i��G.�&�%t	���DJXs������C����`���N;�aJ�#��$2�B��
�
�_����J��Y.�T%����o\<QtD������#�ua�~29|(�C�-'m��1�Gk��i�����Q�AU��=+C��z�2s��Bd��3i�g,%k����RP����Q��2
V���EF{)���Q�m��^-C�����}��������� �r�,��c�ao�5�wS���
�����X5��d����:�-7���z����{��������d��a(�)=��V��:Y��W��dA.\��	N�p��:�]������xK��h~��an��~U���`���;>ww�49)Q������� �	�v��quIO�D�������hN�jrY|�`E��Ts@�I�"��	?N[u��~7�<�3�M>��w1�=��\��_���������2{'��&d�	)e�9��yq,K?]�A������d^$���V��ql����q� 1��6�u'�Eso��nC
�{���<'���wY�gY�D�A��H)*\t���2����ql���F���pV!���sA��2t�8��u�k�3�3��9����;�G�.�B���4�UB���C45e�0Q�����n��}�J$GP2����6��sIZ�P%4#t��N����%4��q����d�����:<����������]WzP���d�r���I �)�h`������D>��U�]@��I�
/A��9�]��o���s�@D��@f�N��+)���2}�`�r_��
F6�.4��;�������D�Gk��N�����sO�k���^�$��A\w��G���c������Z8��4��EX�9����A0%& ������0lP{$J����94Q�
�SJw�2���.�d�{a�~\%Qg��-@�T��;��kM���4
V
OX���=p��@E�2� >T>I����	}#|�������{�b���]����U�_a�������#t j�sh������}g3t���sB*��$�NRL.I#�LLp� ��UV�}��7�
����K����F��r�U�������l����
�������}m@D�U�����.���%����N3+;��9���'���]���J�W�������+<��\\3(��
�E����O1`�����R���^bQ��'��TA�y�U�D@4��q����k����p�������1�J�"����Tp���=��m��2��
*y!���=� ��v��� �q�A�9��&]���O��{�R��9z���l���$�0@��$��)�+�����V���c"���"��^�v��y,U�� �i�D��������p��U|�	����*��wf�����i�������^��c���#(�9���XAp�w'�m�����Y#��%��T�������;i����\<$P�^��(P��p:0/O���������:d����
V�b�2���dk�2-C	B���]���C���������w[�\,�i$�X-"�	Y�8�F�%$C�W2���;y����Fd��
7�Fb���2�w���o����8Ic�:mJX|5���c����+�B`�����d�{,&�t\��<�X�9W�4����2x���66h�#4\����3�v@'s�l���XV������E��Lq��B6������q�������=t�T����T���,%�#��@h<��l03!�
w����0>Q-�����V����:�"�At�Lx,I^�+�e�?Y�\8)��3���%� ��l���C7�K��1���D`�p,�G�ge�����|I��rap
�l�ib�N�P����5bMC`:�d�jlP.�I���38��kY��T�������;�N�Tb��r�9����V	e�c�Qz*��������S����_���6Kl���F��1�qf�+�����9�������]@�R7������xN�
:/~��)�'Mc��0Y"����,�5b����o�i�~
)z�snP���d��5��\�p4?a��n���e�Ad��clZ4�5R�|%M	�x�z<������R����������_��m*M�v�:(��%��w']��K������W-�iL�Z�q,�;,c|���kW�Q�]���\0����ER������"�^������c`�{G�W1	r�_%���Q��������!�J�r�E����o��m��F��'{b��@_��gRx,�`R��I2��W��3/"v����_��N�����Mh�MT��>O|��S��q����\"z�@:�l�tQ�?Y����+#���N6��m�a������'B[g��tG�ge<�{��L�7��;�2_�]�*���2������U�	���^��d]�����s�����
Ad���9���9]kO��mvl i��{�G��8�9maw�_���#]���'�E���eUN���a��z�w�gJvQ�bX��}N��T�E|�dD��*��"�����
�]w/��H�=�+���M����N�Q�'�����>�y]�d2�o���5���y��D�����)^9:Q�&�(	b�����n���O:��N��<�U�\�������}�z�B��1y�o&f"�3��7s�`�"��X������w�����um�f$��.L������!W��uBw��ZS��"�D(��K�F���66Tp�VT�N4Jf '�������\����sE���`���T���
�U}�s��g(�"w
���s�
<Y����^��1��'%s��`2�|Vi2*�2����T��L!�H�$���������~�!e<�7���$*���m���lG�[�����U�����;eo
]��$~G���[G��l�(�do,l������`E������S������u��:kD6�����Ve�}�p�8(Gx�1��"�5MxaA�#k�����	D�������3��!�����os�M�����8;�Q1��������$�}/)�RP�,��J���/�
��zQ��d�y�������[*����ic
��X����I6S�8�19��=����D��\Zg����`�5�.N����>�	��Z����s4���qa�������{W�S�����pU�(Lu������NFl^�������Vc�
�(��q�}�U	c��^\�j�6�1^d��pr���ort�>j|���N��R�X&������f��#����eT�����ud�����G�KMLV���hux�(�}����d�~��$���N1ta0�[}�%V�fZ$�"�k���<�T�������S�S�����g���(�/�c}��e�A>�0l�p����e>�_m�j��.�)���I1�\�0
�K��y�%�������Dr.��9��g�� ��|����M����y�zB8�T�c���QlrI�@�1�q��������
i)�|P	vLO��s�`���&a�mlUypM"��9���2�4��99Y/�\8��~he|����L�^���r���1g�4�Xez���U"�J7tGq|���
R:sI����P��C��C��b��T��"�y,��Z���u)@����+a��'�BN���{���eJR�R|Q�������g����\8��X���3��oL�����Q��;��/D��L��,� ����Kr��;�9xr�J��x<�����A��e�Te���v\]|s������w8�d�������Nf�s��c������sn�����&�����>�G�1����n�G���'�c�'R�$�+��^�lcP�V�=v�do�,"���@f�~�����sZ��%$�,X6x��_`�}�JQ�?���B�d�=�BF�&&�Dx=���E�);��r���3/5F���Q��!�����Z��^��E	�U��-���![OcT�X{-*CG�q��c4�S��XF�����X�p�atxc�������0�.������{�R�hU>��$��:L�r�.�(����d��u/6;�^���b����S��l�i�w^�FBvr]�!��U@����[�f"y���)b���C
���J�<o�k���/�=U�x;y�\N�i���_:�h3cE��^�s��1s84����k<��(Mb��T��A�B�"�M"�V��bu�ol�c�I=^;�v�"�'/.V4\��9�`���}z�\
�+��q��s=lL�������;s�}.;��m�1��{��M�q����F����
}�����T_����
U�yox�g���Q�b�Ez��}5QS�,�;Q����m0���s3b>d�/~��E�1���%�ja� ���@��"��)
i����V x�t��l�z���#*��e��P.Y~84�->���(��y�h�y��?N��d�V��(w�����n����w<2��~o?��$S�l1��%�����E��b%+�D�^��:&t����1a&��������$�������xm��p����$)6<vu/���$f��$�`-���ei.y[j4?�\r����K�1����C�\����+�������$[(�`s3���m�m�	Z���g�D��O)�c��6�qk�>���������c% l�1��hQ?��1S�XF���Q��Lj�y�R$�2���>�fQ���A�2���M2~��wDHw���2����7��f"�K����� �4���A��
f�;(Q�������@������o����`������8��'���Ro����Aw~ga	� 6����������
3��a�6��Nq�������F�c�o�(�������yG����D���E�T(��#C��v%$`���i���(��^��\�����j���}k�A�}�KB�P����+��7��y[`�S��AH~��5�p#�V����[X�n�&���{y�6��;��a!���������+���6_��Av}�2�O"����9b�#��d����3������K���������
�#`��|�T��CK�[2��@|�*�?��oV��|�J,X%��w.����Ug�
V�����uU"Ie�>�"9������k-����?�����i	��O2����
�����g�N<��tL<�����M��1M� -��P���m�"3��[�\����~�>J����%EN�8�*x��K���A�����vca[
����2���r?��(	l�� ������3���7X���pH0���LB��r��1�9�A��*@��e���| ��Rp��b�+���.�E�D��x���X��z�}�K���N��a�}��4~�1o�Y�O�����s��,����i������8�/�tQ0_m@��{�0��l:!!�n,h{��S��2��/N�O Jz����';L���Y�I���[�U*+��v��������%�:?�u�8r�W7�ZSN��p%e>�'����
��~��2(uV�S������?��}�eQSe���a�A:��9(�^�<Z)Q8�����$Td��R�/XF��o������|��Q����V�I|p�H�P�9{3E���OV���K�����k���%�_X-��~v�%*P�g�K��-�)h
:)0Cx���@���*7������o%�o~��%��P\�vpHx\��k���E������w��_�����'��!w_�� 3
��Y'�?B������Io�;V����eP��T�W��L����mJ��{nd�T~��Ef@�G���V�
^��K�%��4r��K.�|]�eUJ��.P�\�Y���ER�����p���4����M"z������f��^����CD{�Q����:�c�c"�����Iv+�Oy���x��>��D5��0�Dij��O) ���W����8,��^��\�:��	�����u^|����/��I��@��K���T�4�>-�}��3�3f�e�9
��������g4���h�~^J��@H��xP�7�
�n����;l0��v�h(��3��J��3��$Ayp������@l�U�����>NN5��w�I�������K�&[����k�7����m���I�P��U���R}L0�����?�_�#����n7�LC�����Fd��d��_��6����d)��y ��A���"�e}����C�p�2�R����|D�{�h������X��s���H��f��f�d������W��/B�x�1��b����H\]�O0�����>��_Y����#���[`~`�@��;�����T ��=&��O�d�g��������A��;����� ���c��xn-�6X5:�^�L�D�-�A
/����$����D���
��<�o��dT�u��3������`�;�$�&d3ny�T�-:4A��Zd���2�Af��jD�����B<��Z���Kx�T"���[7L�.�tA� ^��7.6�s�d'��O��]vDW�`Zo��������I?�.7�����U��������~9anC�0���u����~���k]������UK�\Z4V�K!��[����}���/R������&d��>����\p�������.�/����W���dZ��W����#\��?��&M
��3�"��4f����C]x�i�
�3��s2&`f���I�.�����c��|�����FM��1ux��@����m����hy�����OX�l��k����\��0.�����B
�!�j
�����h/�������%��f�y��V��/j��UQ�W;�CU��yJ��q
z��i���Vb2n�-X�L���AELa,������fx���2+�����-|2�7;�����������;�/S>�)2���q�Ry)�W���/�f��R�����|��n7�B�n�*q�l�E�n&�%3�xt6�|E6n���d�����J�2�U~�a�E�&2���?r��w���9=z���Q�^yw�Z����jx����)�V6:ig���Mp�������/F�=N��At4��0J���{�{Z6A*
h�P��^	S��1�3�
m�x��+�����G�x�U��&f��N�%���[q��{������c�Uk�e����tx�N-�/��(��h�v,�P����.G�
�Z�7������o\y�Jl�D9�^xPEo�b��j6����ZG�>9�^+� �!�`��gO~�e�,J�*��<`�^=�\�v8SG��W����m�����_���CW���D�S�D��P%
���K|x�o����x�����Ek����XE�o�:"FI���;�.Z�S5%�~��>��@^��W�OU�E�lI^�[����LT�9���L�8&8y�S��q0�XAo�d��D*0�X��h51��6�������{a����Y'%��bt%2���:��q�?��uq�o<��>Q�%���GH�1�N7����L��o�^m�����T����]�cU)&���UI����>�RC��~��[�>(is���(5�z%���rfcaK#{e�U�����F�H��L|DkRp5S��V	�w`���-dt�YBAU@�k^7���]d��5+L$��3#�q>g��%��A��m~B;��!a��KS�8C���A�9�E���~l&7�n\�s� @�i{�w�OX5�����%��D�sM*v��ix=�-�.�����4���� ��N������V��_5�hh���D����2U��i�bh�=���_p5q�!u��T�#FSB����^$�����A�Db�+�v�$I�������q?bdBO��z�l��I�
��!u� [f3�X��R|�~u!Pz��j�!mL2�jy�ic
i^�M����H��9G�������/:4m����{F����/\�6�(k��qc/��3IH���=lG+���������?s<YU�e�Z$8��0k�j����Y;��?Y���4F ���\ev|�h������`,,������u��{���g������1�������TCG�^��G��{4?a��.P�/j���M#�(65�?.z`+<�!�J~���/�=�fam>v��i`j�n
���f��F��<oD���l�V����
g���-�;���{�5b���9�9�W����H��;6�~�44`=UZ_�n��5����
j-"L,����T�pk�Y���Jz]8X���F��*�ro�?0�����Z#�>�c���U�I�R��Uz�,mx6�>0N� �\<�Rc��h~����$�w�6�y)�|����0��U���9�������c�k����h|9p�_y��z,c�%W���X@�4��J��������n���0�;�����������������w4�x~�m����w�h]�t�s���x��:�P`��m�{B��=Q�r����z�G|�Qs<��T��w����gW��oe������g��0P�GC:]��(y/\��j��lG�+}{���:������X���X�\�%.
��
�)
��A�
@����r��h�(a�}�^��dYXT��0�'��R��R�c�a.t]8�q����r�UD���F��_�]��w�.����(�R���W�0��b�V�\��&�n���Ie��(K��<B��`��B������Jep,�F�z#�`UYw����"�?�X@�H�����T�vec���b�6�/S�'d�j�
^�(���g��+�(����_e��M��Zj�=�^��R���r�*��Fw�"����`��%��r�%wmt�{,,b�\p���A���u���`��e���Of��&Qf���^�K�:B|����n���e��-���N�n��!ud��M���S��1iE�P���L<��B.Q�	(D�5}��C�
�s��Qt
�60���+��1"����p����S9�{��������;L����w/\��d��A��JlP-��A���w��\�b���|7za�yC9��f;���`Eh\����J\Y3|���"�8��p�'�e��hV�������d]�4����4X���	G&eSE���4o��uP�$ac������&P2�?�T�<��5/����+�R��G�">o�#���3HUY�Q���d@;O��zDI��������Co�:�����J@Hz��%"�?Xr��4Y`��d
��np���N��P���Y��AW$^�x��v��K�����_�jLQ2��C�v�e���BL�Z��5ra�~����wQ�XU�2�<�m2*3YL&�.�'����2t�oE7edV����!�f�����{%S����|�.a����80�=?����]���*��K/>�>%�����wm����h#�������[��1����n�>�?������Vn�L����w�\,H��F��`����O�r������,D�Ut8f�}��E2��QF����g�v���p�������\�����������k�V���;�
��-�(jD_qU��!_���{y�o�����q<Y���S��'7h���\��6������`.�B,��9�����b���l�����Bh�R��U}�{W����-$���-��[���P���t��ph�V��~#� ��n��{�2��/�0����@*�W�hm
��34$�~ -����
;+��N��WFY���dz�'�	��Q����tO����1���>����J*��U�xt�Td�Y3}���6���gnP�����A�z5�C�1SY�_-�p �K����r�~g\0�����&�M(�a����D<�_%�<#���&'6?%����3Tu��}�*z�;k�u���T�@��k���L=������s�A�s>$��[��~dv �9-�w7e�n�,��J2���H��-�/B=�V�`P�����Q�����7��c�;�*^�pnS.�rXdw]��O|��a���K������.5G���X�����H��-��Q9f���cTN�����)\�!�0�VtC��^����tS�7��*����`�yfcC�����1y��s���72D���?������
����u�,a�����\.{y��hdO���e��"C��*������`�ro�<PD�i������H
��3�Y�s���>�A�84k�i���t^�B�O�\�)�=��3s�\U�On�%Y��2e��'^p����2��e�y0��i���4��N�������UFL��u;��9��G��(�����w"=9����2�������z���&E��b�"b������e��X��6`f~���E�]�n���eVFj�s�WC��"n�=&���v/��^xQ����B4�0�����b{<�=�h>��EY���k.	.I��S
�p���)������Te��u�S:��xD�XF��Sr��	"��8�1?Jo@�g/z��LGqp�D�W���3@��<�nV��Q�c��Y���'t�p064:X/��K�{����A�LxgH���|k�
V�"�����?���c�~��D�A�r��EF����6������yVP�������X�O[6����}��qj�sxR�h/�L���
l.��#7�����}�q
�qQCZq�,k���p��Oe�O)�6���};�fP�o(kV�@�
/�\R��6e�T-���43�<���3����d����vF&�����9A�2bR���fe��&ca�j�	���;�1Q�����Ts��$j���%��_��,J�2����Je�X����o�xN�7h��M�w_z�d
�D�����xZ���\���8Xs�ER��/�>n9��y�z<XF��}V����ox��e4�K*��c�2f���"|u_{�E�y��$��LE�t����j��z��1�e�x��K�7��4�,�\���}Rs*u.�A��{�E�������n�N\��R��8�g��`P5��y5������xA�n*�C��8�1m �(}�E@����\z�Zy<B���A)r�r�*:����{�.����S��bXp��}���&�%BP!��iC��v.����1�zP�W����n�X���.�C�1������q��x �X�st�AE�\8���Q6�D�E�U��WF�cnE����������>���S�T&Q�]Hg(���_��:��b7/%{�6I���5m^c��� �D�e|R���8���������$�c=��Qa�=���K%?��H�lP-�q��pI�{uN��
��v!B.����3�����u�� �M���w%1f���+}��Qeq������J����;�
n[z�dl��nn�U������]2L�I�)�Z���p�C�^�e`������O��7R�i����Q�SP��5�������8�q!����/RZ��8	�	$�\lo��h~��A�?/)���42�����!�F>(��a�5i�M��l���{����]
�7�n����M��1�4���<�3C"8A�:A��U�[w�t;����uP�9Dv>���/*�[�fz=�� KJm���E�r�����u�E���`E�p�3�#}
�W���*�����j��R�[{X�$�kdm�S4�D������z�XkV�
*��L�wk��wqBI�=3���f��'����g��M��:f������l��=�}�i��vy����p����s#�a��"��oM'��x�4����.d�����c��`�A�.�N������\t(�`���)`+�k��j�,�3	%�x�n
���v�y�Zt����^� �}N�
��+����pI��u�u�
����������jK�)M^e\8�"��qp'�`�Xw\�?�5�*��}#r!	����9Z�W�X�9"�WRWd���aA�3~�|��rqe9m�-h7p���8�������|�Y������`"��T�/��A�Z��xn&fl~��!|I��c�B�����K�*u?�'1_L�*<���>;q;��Ay����H���p�����0u��p��V�?�;V��*881�9�D�y��W��q}�*gFQ�8��'�%&�~Q���.���>Myg��^*��TT��
��TV�]��G��DG[u/H���n��%����J���+�a:�����~�����D^$��p&D����u����
k�����&�����;����KQ��������_�����?�;�2�������X��X���|��C�9��_fi�X0�.�Cw��Z��6XE��.u/�����E4�/{�A���I	`��_&f��G�G��#O2�Z����������-���Wr��q�����/�Wsv��5�BSc<:�8���������`%�*�'^^;?H�A0{��t���8����?��C��7?�2�r��0"��j0Lm�g*��*���b.�7�0&����	}�J���*���/�f>"R���5N#W�V�2��}�����7cv�?�i�9��aq�Zc�����!�/~h��
Ub@]���dx�V\g��n����r�Jj�)���q�u������J���`��H��7.D�T;)�F�j���������y�i	�/���E|���.I"����Xdd�9I1Y$���:��1��n-�S�}�/����$n��ezs`%���y/��s�o+|�A��|����
l��s�{�2{�7O~z�V%<�UB���}0�'"X�� '&��6 �����DU�}ea[1��
�"����f�y,o�x��^`���_m�j�I������;N���4D���[�6[����3�<�v",���p"u��#�_	x>f�?����|{g�����&��%�o�>=����E�D�]qy��R@D�����3%�?������L�>����" {�Q�h��7mdD2��O���r�[&�pP�Q9����m����dxg�>���)��#��~
$n1~��i<`��K_w	�x.2��j<��E2�����p�W��"�<�K< ��9������p�����d����w��.D�>l�����u+���L�^����f��cX��U���Q�X"�WF�=��v�����,D���K�:n���#�d�i�Z//�\��f
���
���"T�t�vhJ�Y']�c�Q�P�`���U"���L&�*c��/N���+ir
�A)wl�?TNF��GJ�5��6w,�bs�Ht�����Ov��W)>�R���!{P���,j��Y�U�hE��1��w%%Cp:]����h�n��s5����o����0;�����w�e����#��������I�#�n��"$��a��\����kd�>28.%:��1fI��r��L��A�"����s���&#<�*�
g�VWU�$��`��.	���|R?6��7*�4��� ~�j������E��N,��&��_�k��5��������v�u����|`�6?a�`|���r!���,�D-�>����Y0?�T�A�XA[�~�ve-��H��o�'i�9q��y�4��g
�
IP�u�
�1��0�	\���c3�~s����1��-,�W1���}=WM�������&��?���=6�A
Y-R� }�C)t7�L8T��=8�(Y�2U�]��
d���eF
B��F�$��&R��lx".\g$��u�A�AU`�3���v�C�d0G��
��*�8�k�ZAcV
��j������\Rt\���{����G~P5rW�>��1JE
����C�\��^<!���U"/V�.\������k��@��1FL+����5���p��6t�t��J�����b|{�Z#t��]�p����<��x���\��b.�YA_����`����
��L���	��������8��\�T���*�$�{���5�I;�`2�/�����������ot��<�����dg�f2O�����)��:Lo�#�z�V	��v��K�Cl�����U(��7Yx.id~:*�B�������Df�A�V�
~�RK�97kV���Qd�B,�2.8���8;L��r.�m�2�R*s���rRB��WF�j�m�X���������",�w�#�>����`�S$��!3z."�x�Gs�����&jo���=�{�*c��]�C��r/�4'��>^�t���}��{�b{<�:2]j����s�_�$^��Z���I�>�@�+N:�X�Jl��!:.l�3��L�0���:�!�{�	��lv!��:/8�s�jF������b����CT��Q��������x��Q��'(�;6qp��0����A��nP�$��n8,��1��)}?��P.�{Kt���V,���f�����~��e�{ ����i.������#
U����>�~�Z;9k�0Y�p6������'�����f�����s�K�=�:��?RPD���z���b\��r���r��I	;~�Q�����_]Y�3�z��������8Yg1/���5�F����Te������(���c����c�������h��t���]f���(%�`~}�<<�e��-^��0G{8�U��� E�U�#�Y��"^��F�;�UM!K��mv:���y��+����
��e��y9��)8�����g��J�C���L�7x�|�������}0:y�	G?����+����������w����.� �S���I��5o��p�{	�DF����d�%����dz��Z!k�����$/���T�����FY�*z����n$M7b�&<���9��=�����+<���o��s�����K�J��m�~�J1"���lF"�F���B������/��"��w���O�-�gr/��+��8'����{e.\I���x�*�K!h���`�H���#���P�3����(��2f�������Q���T��o�1���$��Jy���������3O��	��Of�|��)o]�
z�^s�F������u2*<��FM���:RF;�bV-���0J����F��o{qa�~R�;��.:6_��w|��F4���"Vr��c���i�|�z������o�����9x ��{>�;\L����S��������M
����������b�'J���L%6xi��Q-����1�=-�SI�����D��IC��iyMm��?���r��b���\H*����@���E���M�I���;)!c*���
<���6�I�!�@�� sW)���4����/"���p��-R����
�n��dk��t�
�%�nx��Z��A�������e�| �{&Vx����(���$��
�GZ���a�K��E�^.N�O/\���E�
a���y��&Q�D����y�5���Z���\��=��4>td���G�h�A�56x6G��e���#�B���DB�x���%($���&xz�����F`u�>�hh;g�Uj�������q����lU��[��qGI�{(����1-2D8����`%Rh������Z�y\����L��a,�m�!\Q��W@��F���l�:pW%V.���=@tS���`^%���g��Q�|N�
V�u��e4�J��:��U���_~<�"\����_���|?-�N��Rh�����w�����nblP�W���Fu����0{|����8�"�|}��>�e����=u��ST�-���A������/_ J*��.���p'%�QY��0�o������/PbL�%�(K�T 5��Uk<����W/�'������P���!��+����4I��N��.��[%��]G�2O�y����k_&=���K��I���������U����������(���.��*a����r~^����o�;�#���u�N �~!�p��h�8{��Y�8��R�,0���t����z>gxe�J[��Sc���)� Wo�Hm1�����(���;)Qi>�Km�A�
����SR����j 0=s�J�t&�C������C���`7J��'�v|�Jd��.��)���$��������)�X�.eRi�W���l��o�QD���������/eR���>k�e��sd�u�1�9���r�nT��6�
V�uM�C���_����(�lQ�;��s��/j$I�)�_��������E#�;�|�B�� �K�����i���8��H��������D�l�K�A{�-����:��i/:4ue����hq�cul2�fX������S��\��Ov����D�xo�Tb��=��5.�u,*T	������Pb������f����x�K������������/,0�`�-�B:t)����-�P^i�iu�^8�_#��z�'�<���?��h������������kRX��i.����S�a���g����rR��������%�Z��;���B�A��Il��A���Y��Z����N��L8;�h�M>����.^Y���������p��.�����?����9�P�WP��������t@���i��}��9���C	����6�����;:<l>�\t��Dk������@;��P5&.�q���!������G�[�+�yn�����{����R�;�g5X%]���,L�yi��`o�q��q�E
�4Y���`�0�u�F^=!18����
�'k�K��O��b�|PbFJ���cV^��:n\:I�\����u�6G�	�Sd)J�?SN��h�t%B�����@�����>�D��Q������-�}�D	'��L9,{�����~<	Qa�*���s��;�����M�
(�@�zqi�3E����`���W+0������2����A4��aj��!9���-T�Al�I�]��s��~d�ldR��C�+2md��O"����EB	?H�`!R� �3��?KD u7�N_������;�l����
��(3���.��	F��Z�I��-8H�9���=b��[������n�Udq��hAN=����k��8uk��,tz���	��J���!8��y�m�����w�����B�������~�PY�#��lu;��9C���Pr��7�
�����pZd�\����L�K���V �5X�u�&,a_J���	=���g��d��[\h������@�W	T������y���|�7V��|H%�)��yb��q�a��}���C��F��qu���|���<9N���������*5f��g�)�<��s�}'����lks������I��Z�p�Q������Ew�=���W��)S������h8||gA[�4R����(	�[�c���<>�(/\���d#��*Xo~��)��x�$�o?!a@y��������"���1�v�V��#�>����wF��t����c�^1~��C��KH�$��LL�9�%�>t�"zt���?�=��9�R���
Y���=vQ�	��aIZ�r��?cn>�!�a��:��g�*8_��yc�����#��JD�����ZW�lq�g�����C���1�]g���������@;�����%S���A�=����
 � ��P|��nq��LK<��$��pW)���h����U_;���\���Q����OK�S���j���c�H�3��JV��/���BJ�K���������.L��D��l_(�����(�4��!#�|1�'�~2��� [O����9m���
��tr�i���U&6�J>|���{�e���?����[�]p�a�o���'�3�45����RP:�1����q��w���l�BK�Sua�E���d�_Xt�w���3e���X���L���nP�G,��u���N�xT��;L��.gr���J�I�0=�����\8���g����������O���h��G��y����/��A����5��NN��p#�kT�v�Nf���##*�����V��u�T�5����-*�����5
�`7T�*���:���TFg
�OX5��L�.U�@���(��l_��7B���q�=�vX�n,(�$l��n�����p��Qzc0#1�y�b��m��WG^t)�.��l�'<
�HI����z�F��A�'{�JP
��1)iKk���;�e��2�q���b�?�G����|�O`��~��_�a�wN\
�Q"D>�	�yzK1��v'�D��A�Q���
W~��z�'��]��N�
��������_d�
lS������O0�H�%5g�U|_219q����+�V��=���<v��A
�c%h��2B
\�&+�#��2�}uyq%�,,����L�q/�iW���ON��W��+���e��V�W6����a�~NorJR `u��������L��p��Xl���Gyp-�l?)1�>��*��yrl��Y�t�}#N���������1�]{�����TY�l?HE�*^X$���w�%��r�N�b�>g�$��+d�~������a���������Am��������;�[@��6H��<u��t�����"�B�u�	)H>z���^���+�O���}�=ti~�U"��h/J�:J4Z���DdcCx�8����0b�kg$l��A��\-B��'T���+���}�)|4�R���#��w��5���n"�Hl�+'WBcV~�,�?�*���&����1?fR� �d6.�cK2cv-���%�NP��%{����!�,f��9fD��\MU|��e-�Yw|�����
V�\{�
~��9���*� ��%G�
����C�"��H`��0	�Jc-����A�i�W}�z<����/Y��$����K�3�� Q��qZ^<f-�q����+)2���j��~��+G���a���a�H1��z��g}�n�����V���#/cn�j��bY���7���\r��<G�H�i��B\������29�������TE��M�iU���*�vq���A�em�$����S�E���Ky}��������2yk������8��U��5��eB���$J%����;�&����)��s�e�U����K�bsXs�i�:�B��|;l��G��D����(��]?(0�}<����
z�V��X��m9=v��$� R��?�OK�<L��D�����m3Gzxa����U�y����X�e�0���\�W��A'��)H�=w���pk�����������m2:�yi�2*F�_e�N���	�t1��J�?�}���4\�N���z����os�0d!�����j�Q�A��e�Qn.?���_	���A�sl��	
�ow�D���k����4�*�o�7~X���6x)W	 �YLQ�u&��)�C�7���Fo�H[�xE
��'����$�����
�G�LQ�
UC�f�^���������*��#�F�v��M��N�U��}j���Id�d"�����������d:�QP�y���qr#���Tb�h}�B<4���
�\S���Ol���I<�K��a6�*��Q[�d�Y�nP� ������BL��,����Kh�l�o8j�`�a2�J��1
NU���=XX`� ������"nm]��`������/P�}��*��N�XG���/i�����F�G�O�t|i�X����[���@x�e�s��_��-���5Zp����W��II7�	���W�C;]pL�}0��s��@(�V��]�jA������u���Xab�����&a~����1�S�|�����$����_v����������e�C,[XA����A�����f���K(�B���
/�Jn� .��,}�)8W}�oY�6����~������7X�B+�~�a#v��������Qs�p=������'�4[�_
p�E/�qQ�Pt"�P��b�\1yD�7��k%���'�Vxp�,�!4.�

~��s��:�}S�t���!����O"}���m<S�#��=�b~��\���V{��C`���2����B��m��x_�U9{$a�n$0���G�_F(�|�����Y��q�F�0�Y�yU3S������9^���O�e�N�]�/���iP��0��!2�g����=��'�,�nd�L��c �����2T�A�����0��>DFd�@�o��j%Ov|_��-8
lH�B6I��>v��2p�����X�a��t'���
�C�Q���i5k�����`SO��B�E}�/�Wv�5�����=�nPJP�*B����q0����rA8����T����5�CH<B�OX9�a�S?���fC��to�Q�$�_�.�\
t�GI�g���#�t� ���MG.���/eR��[��Z��/�5��b��P��@g������V\Q2�����I��4<+E)�Gi�L��U�I�+�n<+*�_J���K�g^����G�8��2L�7�aA|5��w��*��PM���������]M^P�����!�#d0���6��rY%xSq�������)Bb�3	���V��+G.��I�Q
,p���s"��(Xf��Yq��zz��;6�S"m~B����,�Z���c![W�J�G�sD>�%�D��W>ILX�absa��$��8���S������	��E6�����$���#,k�Xkd�~<?v�^�%��������s��N������X�@5�Y�f��F���07�0+�i%!�
��b����J���<?Y��X�^�BA���v��p�w��6�a� �����{�4��OX9���H�)K���Y��tp4p���'$D�����@p}�'������V(�>����Jz�O��kN��*|G�f��TA��'B�jW�"�p����~���X4�Rv�*�)����|��P�{��n�0������#L:T��,@8��������;=q��1�y�/�F�+�������7�uv��-,��>���"I����"!��<����{���fs
�U�{���h~��=X��G*���	uW�V
6���<}#	p�h`�R0������0:��e�n3�n�58��W!. @��6�k�	���wC�'Zp����r���*���D��y����+N�s��}b���fF�z����1;�����[�W�*�d>��Y��+�k�X���G ���2�7r0��+��Q]�= ;��'�7:tlwG��
p�?�X,�=�u����X����J���vD�b����j��
�:������S?G���'.8�i8��w(;VL�D����8��a��JF�Q4���+���q��s#P��=��\e��vO����t�W!.X!x��CX&_W�����a�6L
9*�X����8B���]o@u�@�'b��#��Mg��b����F�����,�px���A�����o������q=�P��n����?��R0��1S~s�����+@2�S�`�u����.hlf�3��`S�#�zkk��=����j�MSE�r����������;���a�����o$�A{����WZ���7����N�0�kP2�*�W����5!��
�_����M���1���
����B����!�$s���}�Q� ����(F/���>��RK���+��X{�k�`�DP�x��;.�4���H%�y/"���b�������h>�I�S*�vx
|d�Ds���Y�8R�a�9���5�����c�7<�A�>m�������Z�O�u~�������m�A����E���B,P[$���%mc��#D��H��W�e�B����A�R<
t�����r���z}f<����$N�:,O"
��w�#�������O)�.d
�d��v��X`E�����T<W����f�:V ���1����/	}4�Y��XEAg5�)����H"���2tp�L�����D��2�,+�4 �Y�:-�����4�+����b��X��x�uI�i�R��8�F�����W�&�~)���C��S3}j���g/o����<�84
p+�,y��@(�_pMz{)��KNU� FM��b]V^��!��W�JB����Xg?	1��fV0Rb4��Z��$�+!\Q�r��k�����O�}Jy�-g�9F,�\�i�I��z�����,����?%���'����CHf�O�������M���"�b�������E�Z�C�#3')�
�������2�-H��8����~%��Z��F���P��j�1�lG�%X�~��#^����[�0.�����d���	B����3y���I*�o��<�����9i�
����|u0��5Y���E�j��K:[7�Lt�-��NJ�0��a��������'����I�����
~��`����ll,�����$hY��MT�]��ks��m����c���z!�g��:jf�a�����
!T��gA1�&�V\Q�����9c���&"H�C1�I��j����I�A��!�s8C^0Y��8�E�3�~v�
+6����4+�hR�'w�)F�JI�^?���c�t�"FD��3tw��~A�������
��Sp��]g�x�(���|z����8�	G�*Oa."T���5�|5�����U�6����6�:�Py*OK��rvi�,������#�t��`���c������y�1#kF�Smv��j?��Dy���0���I���pK:�M�C����g=/\���
B�*����"T�l��C����#�td��bO.y�'<�J1^�O2�@��&��n�10d^1.X1���{S�=��T����X�D���^j��0_����h���NT���}�"��w�N���r�X�kF��:�?.�������m{��=��`]x�����l����}M�AT��|�f���)a�#j/��sv��_�Ot�-�k�a�c�Q!�^�l�-���)�!%�Y����=�Q/.�@-�d��������H�*�x���yd#f��sb������0
6Z�-�������s`�%�3�e�a:C�sHc������MH�F��!��g�e�#����5��
�Z�8�Q4?a��8�;R!�Pg��(���Q��:���kr�T���C�@s6I���y��R��1Tl8�b��I���s����y�f��$N������'���'�	f�d�Kv����L�>erFu7�Ta,�e,85Y]wM��*�����:7M�X��
��K�W	H�DX,�xy17c����jL6F��%Y���,8���(�������&\�Aa����G*�q�b� �$��}������M�$���|�Q�t
S�����OX1�"?���T��Z��Q2������M��"��z�����7�x���qa�!�j�;B���n�ox�*�:Q����IuN��*c�#u_����������QE���5��J����Po�dvC�^����/wia��0����~�7-���Ae��e|���N��rC� �d����
��(.��SUX�q�����������3��P)x��:��*�X� ��Nr�[0����)JlE��2����/�CMA���y�f���b3�=�An������HaF��(>U���=�_`l���H��X�����Bx#g����&�hO0]_!���UraG����z��lT�|i�z����c�8r�L.y*D�N
�Z�Q229�'H|��q��5����q8{$A[�p{b�<P��[��=�|1�g+����[��������X���=���u�l��nT(�@�����G��$�\0+��\i�!��/h��
<���W��w���#�t�B�����T�M�����fx�V�s����!��d��N����G�o&�A���e�QU��LF(W�qO�h&G����xQu�����U�/���9r��<l�#���d����#���=�7���a�<$	���b�7L���S�\3�*#h-P*��O//##c��QHR��g��W$C��v����	xt���u�1n��J9oiP)��qq���{�������u�W������L������+J:Z���/�y��tdtF�F�v���������V��{�<������m��C<����� ���������9^*&�e-��B������)q"T�

�r�P��kt�1@�����e��
����Qq��A��tC��$�a����%d���n@���
��!Fw�
[C\���)���,�
b\O�T��dY%�h	� .���do:�d�EH��m�Z�w��}!C��i�-�V�'���>�aO%f���$�(��c��!x1S7}��b6����/P!��G*�c�U��������%8��+%URn���+�`�������M���+���E���r����M� �T������,{#gg�����P��Q?���m���O�y�v�S� i��:B��	����GK�E����D%�B�\^
�n��������x�/fQ$�/��	>��y/��a�H��J�����PY�T���#x����{tFv
u�A��)��������&;;�,��x��#3g�������������7��T$F�����R��.��T�mbV\ �S�������2tas)��������2G9C��]����5p��"\�,pg�(�Me�0��1`�r��
/��H��o�&��#;c<�$����#N��d��#�y�fs&n���������"L��
C��|T=6$N����EG�m*���s�49��N*��`���G�MR/6�y�L��b�m��{�s�=%Y���k��1l��@�����H�.J^��v��{�
Ck�,(�m�������*��	#�x����X�b\�bxW&�=R!��������;�k�����x�|�b\��X����~�T����J!����dC�@�<�@f.����h������!7��/��T������I���������-!F���I��j�������e!����!�>�����m��b	�����'��"H=���0�cE?�uW����|v$A[�ift�<P�'���1��[0y�z�-�S�Z������}#�63���������HAe_�t�A=�`���i�!�n����]���|��N�k���	\E��tcTY�R�k<}��H����CQ1��������8�^5W,
z��M�Qy���������N'aP>g���C��� USp�C��GMH�������l��1Y�?�Ra|���#!��f)!?K���$t��q�x���(��y��A���2l�R	$��������1{A &aR��o��a�����?+}f�m�C��-T?�����5N��H�8�3�����[�����1��l@��[q��	���p=������"���j ��������"o�`r�����"�������y���n�L!�;����	�3�
!�����4�:,�3�J���z��M�/%����y"�c%�;z#����|�GSQd�%Q:FvPO��FNL9wZWXQ����n���O:�p�j�@N���U�`�@�x9��!�6#�.���BP-#�D���<;�mE�:��2�	����M�a\��Y#$`���P�k�>�P��+��tH�:��.Y.���!���6�j�l�P0����
�� �"w�6U���qMb.,Xb����&!�9�]���9��#��#Lzh)%�O=��hF~_T
+v�M�������v��V\a����a����� �P�����k���������e��rje{PIY�T���j���%��t��,�����g�q?�w�Q�>?Y <V���eQ�'*�"���!�h�X��qJlk�J���:�8����]�H��1��\��%]�d�Z���D����iUC
��*y��E{���
C���N��$�0�����[��� ��+����%q��|~��hs���l=1�$^o>���	�~��D�~���QnA+x���g�>Z�
W5��@Q^�v=��$lw�1���L�%6���<�pe�Y��e�������`y�j�Fu���$Y�� (
�A�u�E���B��fs������T����.s�������HsP��V���w\R1l�sK�|�%V����l���?@���A�6��U3��~�5�0�?;.��&�_���I�����N��|��a���
����r���k]�F���_��r���F��S��"�������j���������{�aM����t���vVyi��9D���!�{��lO��=e�e��|5iqW�!D\�H*�s����y��G���KY�����s��3��Y�,�����
�&Q}�T��+��	Z�B����RB�RrO��)�3����5����*���H(���1�,x���L�8s��./�;�c	3����q�w�o�U��l#DM��1��N�n$�"��U(y���sh����G���#���<V)��C���]�-T��Z�C�Q����O
a:����8�x�vUG��V��h�#<�\��~����:;�B��L��0�06*������b6�S��$����dFi�WJ����	�$��ZE��E_��/�}����l��>-���`*���$���z� Q�OG���T2��n����3�I_���F�z+��/������#1��z����.�"�g���@E��t�el�(+�{����I"��\V���&�o�qT" ����@Ep��T\�&���{�����Z!O��~�\+�?���CD��	�W�Q���h�L����/	%��Sn�.�`��20���~A�T�Y(��I)xFg<�(7�o"��_����`"D6���W�r�}����&��#^�K��8#��<�;�&���Yh�K�3)B�D�IT�X>���z`���Z�`m��;�[1Y�f��oT���	�=�M��9!�d� �����������\���~�S�i���K��Uq��@��O7T���c�p��<�By+�x�@%w[H:��)tr�%��M���r�`������L��'0|��*���st��#C�{"D�M�2����K�� O��e�+��+���4M�bv� ��P��z���������Bfb�R���`��+r����P���j���.pFa��Dn �C%�rp�'~0.�b2����68����1�Z��5�q#g���$l�tS{��HA�s��_�Z��p��@Y*���+��WMD�q�X��	-F�S'�t��"��x��(C�B&Q�-�!K���*2���zLG�u�G*6/O���V
o�`�����Y-H��a����s:��Y�����z��������+�P1����U-B� +?)��`� �� �=�=��HQ����HB5P��0�.����	��OF��m�y�xg���`S�VPV�Rw\	��:k��tC�����14rc�Yr�q��I��������^��};{�qW��{�������|�������l}@���<[�3(�k���D��7��
�	�w��|r��tCe���p^��*��'�B��q���}�`Kx�/wc�����l~��O)=Si�?g�{����{�F���>J��r�KT�u��a�����J�o�����r
@+��mx����'��:2Y��!N2�[���i��(�T�Q�����khU���-T�A�};����h::CzO���)X_��-8
lt�B6I��P`yJ����Q;rD\�0h�pJn0�z�C���+vy�P�7�r���'���V\a�Vv������r�JB$��#�7Z�I��R�F���q"�~���;biRn_�>z���^nd�b���>�gIcgh��������g�����<�����4Tv�H�S�'x
/�M<���/�n8���o���]�#WL���]�gC�n�=�
�eH����i7�	sz���tL��������g�<^T�vf�)���t��_aZ����.��#GLs�0�P>��,g#f:VSo�
��M���_�����S���5H�*�N���mve�{�,e��-8�o:}��}B=��}��\
��Okp��b���9~��bd[���n(����[zS#�
SLTn���kC�����~�d�H��H$=[�s�����>i�����rs�4�kP���H�H%y2��T^���2��2�>����Cb�R������D�����
r??N;2WP���W�,'S��Z��"�5Lxx�0)��d�[��W���\���|C�7�=,d�Q}�����|��=�H�!�o��X��;���7��](n"�3	���u�
@Da"xv,��~0i�+)��R���p��^�>'�����������g�i� ��\�>/Ky/�����g�.�����7^�[4�<J����(rPL������x���L���b)���X������WI��Q�������4���f�j���g�|��x����+����!�~�q�N���2c���QI�6]�����P�Y�t��Ff�9�-8��������4��;L�H�m������:��Q���V��(uS9�C�lsg����qJ����������0+�8K�G�E%O�F����<<��Z��[*=j<%�v<j&='���Dt��`�7b��!��
Vd��_qE�9�@y�`7�=��&��-8U����u����!����������7C�da���>���<?B����vh�Z�*~�7��7�!�D�F/-�A[(c�51�M$� ����9^s�����(�5����g�U��Q k���A&����OX)�?�gO��I�V ���0����`0�r.�T\��'��5Y[\�:����y�/�/���&�H�4���[�i�C[.c,5��RE�����H����d!��!�$��>�	��|��FR�S��.]a��������qQ>u/�G�^�6Yp��9���e����17U��5����`�L"��2�.����22���
pH�HEn\�H��?���1���Wd[p�� rW��m��x�?��c;
G`�fM4�x�+���e��3R�������S��j�8b,�7���h>'i��4����O?��`
�����U��R�*�9@�t���
�{�*T�T��?���aBZG��+0d������`������FG(gJ1���7�8��`��"3��*~r�?�K�E2iQ�c�\����#f��4���1F(�7���y��{�eBmb
�_Q!���16]@z=l_���l�L�"����i��U
�]�d�F/�Y���??Y�w������/���
�%B���|����<�%��=��@�o���G*�&�t6��zd������s��9F�,g��������h�\qav>(ou��I^d�v�0��4��)�0F1��G\'���5��y�
�6J&Ho |�E�����:X	`�/�:X��8�m�`�c��������������#���Bu*�
�����A��<�8�ma?.�3��	��mh�I�GCc4Y �+��
�?SI�?���+dRu���c���
��?x�����\��h�����jP���]�������Au��G�9P[aRp�C���=�\b����I���g��Z��z�y��1����p0
<����at�YY.XQ&<��?@����������S��%T���~dG<
���������_�IN��>�������P��v���dM��
/��D�����~�8a�<U��8��9J,���#p�D%����X+���D��I�����8*����DS�g=���t���
��9�u��*L��
�����i��n1�nX"�[���p��aN"h���{����M�B\�B$(8M�(�B�L^t�:,zF�������-t�Y�.X1x��%v�%����@��I�T�^�;�<m��'b$����TL�X�A�E�|z�b{l�b4?a���I�<=Racs�F�|�k�X�aSqF�������;9�)��tC����*���
�m�B�(���7�y/[���b
����p\7�`�j
������I�,�Q+{����h����������6�-uv�M����v[X��XI���|p2U�0�V.'�p�)�9O�Nd��U������G��=��������#{##���l)��|`���@`=3�n��5	�����u�V������JU�Vg6nPG���O������d��)�
�,j�x���V�;:�,�Bd�i�Q��
d(d��AT�o*V�����-����CuC�:�96]�
av��:�������h.�'0�wB�y�\f���}"Ep��g�u�����-$� v��<;\�`��&2|�I
�u6��
�>�S���
����0�?���R	D��\aR�������-bd�<2�| �:$O�?��~{���H����T�+��t�:G��/��t``/{��,z9*\q��c	���E�8�~E�qeI�z�-�X���Nw��&�KY|��^x�KK�({�O�Tl*f�(��B�>L��82Rp��A���i~�����F��*����7:L7i�1�
�JA�_����9xD��X����-bi�7�i����"\O7jh�dz�!�U.��8��G�3h�@n�|`<R�izZ �{��>���#5`���P{Dv'=�ei;Y��#8�4fd�tO(�g[��q��1��|���%\�n�A��O�Tr��av�+E>-���
�F��}�aU����M�Z$��K�k���Z$	�~�v8�����'����d9K��y4-K����z��}W���"��|����������S	���A�%T0�K�p������g2K�F������{�?�s^���VX)��Hz�v�s0�F��S�3����Y���`��
���$y*h���'v�KH�\<?����L�Ll�����9c5�m�5W�Ur3k<����c�D�1V��%���\G��7��IKNM��,�m�+G-��)�+�����;7�j�K�)���:��"�;�K����z9����y��T�6��2�Qt����?�H[T��lE��.�:������dr��$�?��-�H^+�fo�[*�/h�NK��
�i6���C�����+1	��$B;9N��.8�'b��������z/�N��H�MB(WIE&������J�{A>i���g���z��I��c�a�n����/5�5�.�������,�����6��_cI��bn��� �<sweV\a�`OT��eW1��RX1�]���)!�9����
bGt(�4�>�T�QM�Q�qF��4D;m�@O�(]$	�V8�n|���T��D�
�=^8�M����l����k���>y�l�L�MT�EQ�~��H�;-nAF&�+�x�v}����^��`X�A	6��04�T�dZ�R�6�q���r�%��1�{m%�?1�����"/V�YJ��gBWb?T�����H}�^���D_�
HN�2�[��,-8L;0��_�(��4��
����C�+�c�p��Q��A��������!4n��������)t���b�FIn��t�B������G^��#\��`=K�Xy��|����@�o�U���6�q�T�|mO8�r��w��������3�'�j�[�_��M�{��=�O��_\�*�#�}J����I�Oc������6.>�,��}������_`��*�Ap
�����B��8{��D����5�)8:q��}o��	�S��������<��>%����+��\[`�����
Nt�7|5��,�7:B{�;7!q#�/"g���8���0�AM}�!�d2��P�]`�R|z�d�����vc!FVy#�{p�+i��o�{p;����;��41\�B������W�\����+�O�v�l�u��@M��q:��`�#�u����^���W%E���+��H�1���E�D~�q��Jn{���d=���x�>�j��]�FZ���xv��F�R]�F�?��������+(���<P��}s���Cx0<m��"p���i��8�H��b�K�������� @<�����D��������*�VI��L��0��N�S�A�P>���f�����@���O7T:��l���6�	Q$;M��fx�\*T�C��87(�UG�U��H���L�zz���C�*+T<��I�>P�Qc7K';�-U_i��^��.`�l
�Q����Xu�lw��'��kd��A7|3�T�|�H����u!7�N�|v"a[�4c�����Y��8f��^��;lZ���7���Q��gE���o��x\�����+�XQ����P^d��=E�4){��|��0�.����X��1����9R?���J`�&T������9�qh�!c6Ug���Y�p9��4��
��)�I��p2(
� ��t�?���'����}����y!j��I��(�C��=��Qv�"?����Tj�C~�~�N�����cR|��I��H�����k����bw�/W�3W�`�O'��p�c�,PM�����:��.&H_��+��9��U�w!���'*�;��lq������Lu�0�v�x��M��s�p?�K����_ \Y|�-��@�@o� �`�$q"x�AX���j�1���$�����:��e��?�����sLRQ�1��o�605aV�����Y�\�e/��D[��a-���$�D����N�D������I���s��"��'��R��F��|*Y�;���?���x�<��v
C��������-�S^#�Zo��z:,�rtKs��o�����>��QG��������%Y�[��Z0�����7�G$h��C�Y'T��s�<e����WCLya�1b��a����g3�j�Yq�2��f<��:��N��G�X`����d����b�lj�����4E������7:H@[��d	��m�.���$rn����GpT���2=P������#���p�|�3!��������s����q��1�c)��������{�`���:t��{�|��Y=��_c�����P[���������R�Qr�A}/F������o���Q[p�f+�>e�y�����9Cg��k�B|��&���n�L����	L���U[�����6=�V��s�b8n��(~�M�l��\�e�d*���H%'��8N��*�xz8b./��E��)U���Ja��b��f��[1�����'�i2
l��5@������9%?�N���wN����h�G�O���\@K.���f���'=�V��I�z�HT����������O�&���nT���~A�.����H?
bY�0"����a!cG���x����4#�~�����HD�;��LB�q���*��%�Lx�J�S�i�}���������]d��l9P���%z%H�Ga�������f�L��,�����]��+�.]��d0I��;+v50��Y���������Sa�h�yTJ��]S���\U��S��Y��0�da�
��1 Z����@�IV5Rb������Bg������rA9`53�n��,��9��<��,���S`�������C����(�����#���/`��Y��>,�N,7����Ts�=Pit�)u�m�H������&�������?*;�!9����%]-S�
G�)Y[vw�j7�����������zL�f�3�4Gj�HB�R1�'^x/Q2������`S�!^��Vd���$������"����e�6`�>�&��dg�l>�<������z��+d=����N��p��'P�0Cd�\?�8#�9Y�������7��jp?Qmp�]���� ��n�~KQ�������Z��#h5�E�������c�&��=�-���am{�|5o�$��/�$��2�����'{��}��oF����m�F�X����k�����7��k�>���n�K��Y��|O(/���;�R��E�����f	�c}g}?�����������(@n��3�d�����{�x������2��1jSS���������&�<���<����C���l�WG�KM��TA������>DY�9��_y&�|e�:1�^�gwX_2>��|�[����U�8��$�rw#{����������(�%q�Q����3���M��	���L��p=��PBI��-LT��8?���������#�B�w&�����cU�l7+��+
��A%����8�z���^��U����7[��S�O��Q#-��Qs2��s.\$[�!��^~�{L@��=f��b9������$A]K]vB���sU�P�+������7�l2g�C�;�)F���d�9�or�\Y�C���:+��}��KK�j���c���w��^5��dZ��P�����N��.`i�w;�+D���I���/2Z���x�bs����8.$GEX!|�i�v��[q�2���R�=*�+#3J3��D&�<X4����h�\�b����-J� b���#�-�X�f$V���a>�P�z�g�s�d4O���An'7���#2���������,)8W%-e���N�xR�gBM�g�1��������0�����1����NN�|d�_5W\�h�kr�/F��CV�a�I���p73��&M���1���F�FsR�����k����\���!���1�.t0 T�z@���P��,x��p����^13����������I���a"���VO9?7�����Rr���� �.��������I�5�F��*�&i�F{��6I�~Lj�l N����[��WH�������;�'](�����pc9���<�������/��6�qEs�����)*���d�����|�����fr���V���Q���$�,#�C��Z*3T��4#6�%b�`?�d���I��k:��l��gO$l+?��y���?�P�2�����e�ejL<�����:%�~V�}�yT�����E�8C w@���?OGq���k��%�d����('�W=8�[G5Y���	rz�/�
�e����ck����r�4��zX���`��N�b���[p��_J���<H��?p%�#���{���������fYJ�#U���z�rdv,tW�,�g����V�d��g�a���0���e��+9~��,+�D?����5ir����nd��'S�������Oj]�`�2	�A����f���C���yo�H}��_�=���<�j`�[���-��������=�&��D������WM���;g5]�_xg{��&�87���������63��<��N&�l���a������o�e��GB�������.�L~<;;��;9��L������\`�`��HU6b����FM��al2�+o$� ��W�'��6�����%�����[��-�h�-M8A��
���o��?�9fJ,������!�bS�H�[�	A*�`T}��Fk�����t��4�"F�h���b����p#���9�0QA�R�a`����4�bR� ��#��Fh�!��c�bV< �����D\3����!��\1��"rLB���qA1�k�j��E���n/&f�&~���Y\�<
�&+������r==j�u_���x��v��I,�<���������
�s��$H��*��~`�����D�d�t4?a�����uk������������.7q�A]7��n�
��}v6���J��JJ_�E��UI��}���.�L��\1�z��V�X�g*����.��������{%B���
�B�Q�$�86�f���B��y��RG�9�u�7[�hg}e�}��0�aG��a���	�	����o��~����{�=H���[�B���]����$�:^���qo{�b���[e��D�����8[�$�4m�8d�
�H�y� ��<~?�x���l��`/o&�k��b�H���sx�l�����\qEa���[�}9�$E� �t�JH�F���&�_5n��0+�9r�_�O�j�,L�Vz�{Y�����v�gXfr��T��8���7t�� ����������B���9�T��$3O �k��W����������V]*���\����6`���R�bD��j+�07W�Z�������$lu�������K}/G%^�1#K�����7;�R2����/��{7��Jxt!���.��x��� ����J��3��mN��L��*|��}�r\8� ��A(M�z�l�:D������d����W,*a����8&��'���=�A\uV[P�1�m�Ng�-H��JW��$�<T��hO�v�����G���O�'���v�X���L�>�Sm����7�8rj��������������d�t����qB[g�r�]@�=�`2�_��p+�@V-<}��c��w���p��cU�����W<�p?�Ar�����a���b?��(�[�^+lhc�h���m_ �@���@uI��O��G����o���H =~��v*�������,c�����"���UnH��1��h�44{]��&�"� �������Q��pb��Z��9-8VMn�]q�2��e�=wP�C�8�h~��A�2I��/{�`������
c�c/�g�������*��Bmy&�����ne�*�J�7����8��7
���]��#����E�^��1=�o�G{%��2a����xg;��cs��0N��
���-����L����!7|i�a'
f�g-����J�����R!MPW����>&Q�R
�U����4����o�8i�����!��~q�����)��H��[)�7��d�_���4��v�<���8���l!(�!�������U���y���V��a�mg������ot���x��]o���X����_�	����W!�X5���!����W�D����p�n�T6��^
P����C���������Sj�I��{�q�Y�7'9�m�
b83�V��_��YO6HL`^2B$��@6�����U�j�D�#�*��xX�+P[n��{sI��O��\�+�uTP]�Fe�C>���u�����<���i;���8�S���h@���X�4k�So�
'*S{FDh/��������d�h��2�1��s���e��^��~����T�%�����r�
QH��u�O��X0b
9�#��HmA�"�jP���}�Z!w�#��0�Q !�ib��41y.?�.���h>S��c:C�D�zv:F�t&��i��w�O��m�O��D>f�=|	�]�;/|H�;�5'�7*���Iv�����r�>@�-<�H�����g�Nq���k����m�QbR�x��J
	�C(l�P�}��6�W����yY���a��U��Z�M�rk��/X�E��#�&a!kUL��PR�����+��G�����	���<�m�y^�5.XQ��T`�����]^�rx��XsN��J��I|W�( {�-����(9�WES��'td�.u��*��X��=���A�B<��� 
�V
s�"�!$�������g�p�qUS�g*i�j#��P!�:�T����l�s	1X!�7��I��{
�@��E���C���:2��{;�]���u��1{`���!��`#@�����b%�gE(�&���s�4�M���?pM��)�b��O��s~���`���W�e���0�D���]�����T>�F�m��}'�����%���5��9{��������-���m���xz���f~N�j~���S6�������`���HO��N_������[d(�����[;'d�a�#���>�EL��^���0S�����k�����V�	���9����(��~;����Ma���L6��7B� ���U�09��'O�O7T����r,g2}�p8�Bz�q����:c������)��sU���#*^��#���i8�<���*0��<}�Ma?�;F5�����Q�����n��(,X��
�)���g���>uM�$��\0:5U)�7����G_�� ��sc�3�d�����u���@v�y���x��B��8�F��0��w��8A�n�T6�\�
!�����9�_�;'N	�MS/wN���^���b�!�����gn8�[!0�V���b�v���)�����T���[�������,�d�T��G�av���Z�2>�����,�q��C�n���&�1��/�2�'G�%as�\��E�*�}�� �tr�����CS���+���#!L���4�W���}^9 J^q!�y��`#L��)�}1�����M�T��a�g�Iu�
&g��s���#Gl�Z�� ��'9g9l��@5��H�6�l�vM���q��{��s� �����}<��q�BE���g�_k�zm����g��@D�;*��2sd�O��O>A=�i���Y
�����
�U^��q�[`���OT��G}��v�I�KM�Ne
]r�U�.y�P�	n�K��
M�������!��C(;�?� ��HW*v
6�k��0�\���V�G�#�����M����	#E��f9�����!+�Q�-]j������u�/����1�'`n�vV� K�b���X�V�`}�r���CB"wOh�#�{��c�p,��1$.������*�~	�c���~\X)��NDzRz��F��P�����c{�������'BiB��D��L
L�����~ ��'lN4zS3wOZp�}gPg	�M�Q��>sL>����s�Ie��>GwJ�V1��7D���=q��j��������������*K+"�+��=��Y/��dz\ q�'Gk��e������r�D|��������y����m�����}����*��	}��.��`#y�����ht�����
O���	�4-@p�Rq�'+p�f'���k��
|�B���3,�B����Ut����������B(�>��o��^������C)R�a`�1�����B~[�LI�����~��%�����"�s�w�`�����&��.��6A$$�$h+Zk��Q�k��\p�`���d�#�-��>m�����sU�<��C{�w�M<D�!�'��z?F�
d�Eh��V��,���Ht�_�r U�j>��-*��^�HE&BG'fvk.d�v:ql�2eXj��B<j���5��+�P1�n�o8�J�z�P���TpXx:v7F*�8WZ`�qd>}j���u��*�|�=y����cgZ����z�6��t���R���4N$f�~���7�2���=��}�����z�~`z+�����Qo&�_�
a����<W�$L��Bs���8�$�����)Nc|{��G�QzO6�W��M��������"���l�s��)W\Q�<y4�jN������\o	u+�[��M������� ��T�Tz��<��N!���>�J�V2��{��m�]�1s2c�*�00-B��fs�K����7�j�&�����V�,���>��H
Tj�Y�b��S�9�O��j�=�!A��<��W�����4��:�I��w��G_b��S��Z+$W���@m��*�
��2���������<��+�"Tr�������))8�	�0
���\������	]k����u���6�{����R3��-8sY,5W�U2�f�@�L)���-�R��&8y���n��h��8�V��Y�]'��F\wT������K^�b���/��L;l���6�r��.j$,kL">�mz|_S���� q�T��k��*l"��g�
�����sf:!b�R����3���T���$�7�(��`�9:���\ ����}ML���z�!!8w^� ���x�sT����=zKs�8��B��(������}�o��FJ��@)Zp��xC��kOJ�@|����I$a�w�0p�a?���<?"!PW�� �I��R�q$��U
F&�7:��;\�Xk�HAL|lA�0.����$���/���o���4���H\J�(������a�
��l%�A���W���X����b���}-�_����?��`��s��6���?���g�!��P������7>V������0��
�A�3�4�?���R	4���k�������mRxp�}�H���xQ;4G~V����~�1'��G^Y��{�cLYqe����oy��O�t!�����)���a��h�0��h1�����6���<7P�M����Q�^�d���:����K��M�i
R�����1.h�,\��
���/���
����6�/�v",=P�}VI���A���:/���}���Fn>R�U.�(=c�����[��]�fz�/���h��`#���:�]�����~�!��a��&�dab	�������l�;��J���c�&�_�n�~��V�������V�\;�����)���U��xQ���p���v#7�I�c�rB}���������V��$�Px�,_����K5������w�����4W������HA\
��\�>�fO������x���X�m�������g*��R
t��.[|�`_"y�ah!U^!\����|��!�[`��T�����8�:�)W(�>u
<{����K|��:�AZ��{G��q��q���nt�M��-Yd�a�E�A�i]���T����~�YJ�}b��g��+!7�+.������lK]]�%������`K"�cD�0;�+9�T�Qq��z��R=�^�4V�@�A a�6��@�mc/���+�51A�b�����4b�!�/�*�I�d��1����c�"��y���l�E�����=�J�F��"��b��)U��z�+��^�������]���pT�2:8e��NXBe�+��pW��1{�gG�	�T���WO�������@G�b&�-��>C.|Wt ���&�v�;C�������	��E`��rGj�5<����P����L�5���
A(��rm�^�b�[�,>�����R�Z�����a��5��Q���^��a�L���,dY�0}�,7�nv�x1���B�T[����^���'T������1�i�;�I��?������=���O��u$;�x'D�w{��U��*��H"_b��e��3(�a�����k�>f�!���J����?Q��m���6$��bg$sh��J.1�o.���K�����:4h���Zpj��>4^T�+.���c�J�l4%Op�0qm@��R*��h��l��fC�����0j�L�D�v�
���h����n��N���?����]5�o8�Y��gS9qT�x#���r�i<k��<*��}^x'�h2��u�0�������O1*vSyr���,>���I����T%���l�vLk[l�s|n$� VK��^m�P���U!��ilZ�����U��Q����>��Qw���/[��?P�#9��g����Q���*���v��B��������v�q�5zF� W�<O��Cc�k7'�3O�,���a]w����=�QMA��'*i*�4������H��8\sTc�8�A��.������+�w1�jL�S�������V�!(�@�x:���;x/�L���{"F
6z�?�l������q=�Pi���d$!8�(�����g� wV]�t�<�������	�}�j7*���x
�L�� AD�#�(���+����n*������7rW�	�w��z#H�n�����&@�!m�V�����:��|�����tN=,�)86��y��>��!<p`�����B�$�)kkM�����DLzS&�G���Ks��?��(��Q�u7"���v[�����_g�5���U��X	|�����@&��kvDO
r�U]l�7*�0��#�xz������'�S�]�b���Wxz���z���2���
�n+������+<L�l��vj����������}��������*�+>��i�]o����1�d���Kf���mx[�J(dV����.$�*�S���d��7B�M���-�B<Q�m�����CLdx)T�Bf��3���d3�����t����F�2��T+F�26a����
�%�}t�V~%y�@%y��2��	)����X[4�~V���0j�� �!1
A2����M**�"��$X|�J�R��:��\������[`7U��\���?,P
�or����d#C�(��������:��O��d�'�s���_�0y��e=<PzV#+'{{���(	
�sE�NRm��}�=�pW�����7*�<��$O[3���>'#61$��l�7����M�j���C<��~0A�q@��Y�G*�����v*�bJ
�v ��c��@�",��F��&�h��]`���-C�����Z�����k�w�����N
���!��@���
E�e&.e�>��������!����k��j��x���L)K��!d��x-�W����7�*�'S�o����2�
��[}|�~o�����|s;$%��?���������gJ��������e�|�o�����,�gEZ���y�RB�����i������_I-�j����-uH|P����R����+%�oL��W9%%�2�F��#
�)!h���Q}��7��U����x��aC�g�������e����������k�G}�p���R�F[�v#??����Z�{v�i�S��?(a�����iM����>���[|^S���-:U���|~��7��({��CJJ�������`�7z��4����O�XRa���/��[�k�Z�B�������_�.�hY�Z�R��X������|��#Y��p$������)��m�h��������� ���}7����J��|}�	O�5���;��������^��A���?�0�M}��_�Y���-3x���t��>co���t'[��Diz���%�"�{xcw�t��}���M�W?�Z�����,���|M��eC��C���C������}���9���&?O�>��Y��%9������?t���'��Q��M)�r{��~}'~sB�n4����
9(���>B?�H�9V�[G�O��o�ug���Q%��7m�����q?\v��}|��-��CX���*���	N�}0����d����`���g9{���,!-�V�d"h��}|
�~|(#%_������{�~����(�&f_�*_������+=�����>w9���n��x���-�c���> j�$��������D"�%�5�dI��F�VE���Tc�1
��$����J�n�O��e�����S��:�S��{��5��1G�{���4�wr/����?���@��D��g�(8$Y�Z����e�_VQ}�����d,��5K�86����1����+��5�&���D�mo�;(����~}�����A�M� �A�~f�K�L���!�=��`9���+�;J-%��x��W4(�D����1��;��/�+���'|a/
��}��x�a%r��_�kSb5V?����)[?���gm���_�$��gZ���~�a���P9/14T�l�����m�\�>>�//}�#���kk��8�G�t������K�v�����r'%$/~};.0�������0����
Gr\��p��1���&��[�>DU��O�wK��$�kF�r������G,�t}��`��q��-B��w��U1�S%���7��S��3~�
�|l`h���K>]�Db��*�9�e�������j�7G�&&U�o��B	��F��_�����h�;O���1|��	�x��F�	�w�w����a{��������%�����"h���?�c���mo�4'C'�7�,�R-!����GE�������-\�Ug��8+l���W���v�L�hD��`���>�B��Y��K���v�����%����N�!��;��2�I{���"e���e]����e���d)��/:������������9 �)X��w������{������@�m}v�����_��V�3��}����)&?�\�����\�����H1�ghe�LW���X!}��q��1h'2�\���1��,C���B\@�31D[@�7V���+<�~t:0���7�q-9"W
���>[9_��[�/�J@JA���QK��}���N�b����8�I��X�Qg�%�[��~��/�_���hi]����&l�d�%��7�'���3�Y���^"�����Ld|\e1$���cf4����2�����bv����d-3�S���,^�G�-{�2^�ez��2�.5��{�)��smK�`�������k��u���tL��V&��������Y@����Qc���,�g�Vau){�e/�-�~Q�$B�'�RJ�|��>�����a0��k�`:K�O2�*������"Qx�E��-��E�a*	G��G�	i���
q&�s��f��(c�^�������w�rDq�����k,���P{t��`�=�?�(2f�2V?��w�p
f���qE�Ak�����i��$���8�K��)����L>���Y�������35��K��9K��ctW�h�R>�S�~cN-Dn�c6j�b�N)/������W�������,���=r�1�f�/qW����a��6�:���u��V��x��A\����esw��|Y���#�e����(#z��^eX�k�,a�'?'�E����iL���S�B8�>��&m����������E��3f���;!�y�j��7��g��j�&d��5=�7�v������P��Xc���[�b��>���Z��~r�Cg|sCZ����YFMV_���(����Q�m<�����L��(;����(��-��Zz�0H�&�&��N���,g��S���n���S��2fd
��B�L1����uq���T��0A26@�h�:]�%��S�b��(e���da�T�E��)���9�@��f!�7}�xqh��M�����_0�H�$���P4�_�K�0'SL���r}����~7���N7Q]�W�89���u3�pmC�����~&3Y;P��.�0C��#��cu��#����p
������K�?���h�`��F��G&�@_�f�Qt�V�2,�g�g��6�w����|��z??�`-��z�����e'��,gQ�U����>�+1{�
�����W�J����M��x!����CW�u�L�M���@��Cg\YJ~�C�!�~f��C��z[f�!�$��6�5oL!�����X��f�*�
���%d��K������b�n����u����d�a�G�j�/(�-����>3��%s���^����������|sW��chlu��;�2�z���t�)E��6>�o���D&� ������i�=ul����oq���:dj��<���m����d��l�I���nE'�{�q�.^��C�y�@3o�|�K��%7I)�-�;h�1��w6O�W����1M~��u�=���.
���!��3�`�s�a=P�S�7Bf��2r��M�h������������R�;O�(c�����H�I;�J}���%��(�Dk�)���:R����;1j��N}��+nl�Y��5g�������C*�c�:.���cE"���U��'*�A�$����Q}��]��"-eS���-�����S������}���.�� ��/���?_��#��t���p�N�Lf,<�r�[����?���*�!�?8�*�u���Cy������D�=�IS	�T�8�f{�nr� '��g��tW�� ����$6�X�"���&/���O���n���=rL�Z�Z�qG0H�?q�J����EbT$B~����S3MK�?r,�����$�c:	|\(�'%E��Z�|�&�J~ve�?���m>�MV|h��Z���h����od��O�&
�\�����w�I.[_�f�m�zN�	IX��Tgh�b�������h�<�� Y�M�l�x��I��k2~�I��7\De��>��J���I��Z��K��s�(D��(-��Ap2�*�v������������`D�y��`:��
6p*����8�\������B`��S�x�i>�k��G�+����
���5G�?8��'���T�M{�sM���!D�I�B��Q-��TF�P���q�%��F"9��v����c89��	�:�6�o9����
L�����	Nbi�����xH�o���f����HF�1Ka6�(�H(�Mb�Ab�$^�}�i�}���(������m;i�U�v����d���=�6��I���V�{�q�����+2�wxl<����L'������!��-�$D�\H��8�	��������@��]<���{$*�+L�C%���I��^*�����%��,�^��~"����������	�)��&H�/C;@���U?`��
�Q!�
���#=�����a�)���c&��i�y�Hv%��9*!pI���k��y�:�m.L�]�:�=7�aZ�&���_����W*������b�i&�an�7�
�)�jP#�0�I���3��;��}U�+���J*/��1�0�\�8I:5O���M�B�����/��J���HM��&��I�6`��^��&�`0�?����4�|�
��O����?f�7h���V���!��:���7�����rT��$�MD�,q��o 1����t��(�
:0�f�O��	��:��N���x~��w��1�H��@���A�a2���������(=���I6F�F�a&�d1&A�](����M�Z`i����R	��{2w��%�CY�q�pUNr���D�b��l�T�k������?��&Q�G��k���T��^��G}���s�@w�������� Az���}����( T�������M~����H�|����zZ5��,~� ���gK�p�5/��M�����
���|�T��P<>d6_�Y�pV:g6���A��I�/�eB���r���B�H�%��qa�Rh,E�o>c��+9�H��XL|��>����]�+=t���.����Y��������s���/J�=(�
v���������h�L���7��o���1^�s�n���8���{	�/?�����fm��`�����g����A��q��*]��L�	�M�*�bB�
�lu���c�2LBg�|-���0Lx��;������ek��?�}�������*aem|`v�C��{�Q�N���O)������i��2l%}d�)�y�@B�&0�xH��6t��������_H��0���XL���>����~�z����C�n���Q��d������y~`n��Ff���y����"N}�S��m�k�$��9��_J��x��L�	�f��a2q�����������Sb��>��gn�%s��0�Q��X���V��b����B�,�������b��B�G��k!<�Q��	����u�0Li�@�iA� E�,7'�������I�J��9w���">�}B�Y�_L�6,���=����p�"!<3U2=��X@\g�[�t����duO{�<VZ-�aA��&6q�Y�P�;����cu�������4���6T�����7�H5[�����$��(��>u^����[�5:7��dz�����������.*�l����� KL�=���rG�����%���vf�P�7���9�n6��j ����r,�,���J6
����O�]1�F�o?�Y�b��'J��1LX���A�=pAb{�O����=���n���Ve	��L�^=���"����q�!�C�!�H��	y�G��91������}��V���f1�~��Qx�Fdj���t���2���-cf'��(���`���lmv6t�,��e�`cNE��ctH6��
��%R�K��2l����/��i�@S (�|�6�W�->�	�!��r.o<h��_��yk�>v�B+W8��+����cs���B�D{�;uW|��H_�W�M��F�k������kK���q�Q1�#�OkK�5�?g������������9;�I����!�q���=;,��V���,����o^)����?�bra�?#8]1�d�`6qo�2y3��6���N��q���f %������a�������S.�|}�*��)'����|����=r��=dO�$�����|�����O:�./��&����y8m>����'�m\�W�@f�-y�0.5�z������O����������x�E>LF��X��$3�%l#v�P|j����Z*��%Nd�F�����zS	SV�w%"��5���F�a������N�Q���~�&t�qX�a��_LF�o�f�9@r5��!o�6[Ev7V���L����o��?��G�c53����0�D`��h!y��X�L�$$Lr�E^�M5!����?��������#�.��{�n!�i��q�MUM��u�L�2#��2�:���{Q�
v��l����Y��6��>)1�[���I7��%����KGe�vQ�������@.���2�;"�S��3.~������%�
0�za�>udJ�!L�/�<{����U
�
��]����d.������P�/�"��(�4��	,����?E���K��4�{�X����]��>��ye�����u4%�0?@7������#2!s����N.�^�e�;���Sr�u���l���|Q��� ���pEhjo�8�R�Z\�.�s��%^2��������������a���	�����Dn��c��}V�*;J�$J���p|�(l�v���Cgo(�&��6��L�w�}�?�m-A7*��_�t�k'
��=�4�6&�������T��Nd����~ct���o�a�">o"��NI�Li�&�_�W�s��!��C���x��p?�����A`'Q �C���%�-�Q������Q"�N������lz�?�56</%r�����aK;z�aC���	�M�TkQ�� �A�x�;
">}?#~�dK����eav���
�])d���	G��j�iM��J��
��>�<���j�K��|0��������1��x"���
I���/����6����,Y��Q��f6�LQ�
����j����V.�Xw��6���5�}� S��C�iA�"6�6S�}tN��?E��7FrS��a#�NF�YFD����f'e��z�E�����L�v"�����v�rY:<T:��=����Y����x\. no�R�
�WZ�?���	T�\^���D�{r���!�/�#x���fJ����$��Ao9�����Ax�t.�f����vX]����a��lw������Pi���F
pxy�1��I���V��E��Rw���;J�e?���:���]���0�� y��o���y5nT��Y<#>C,&��]_"�D�/�T�d�g����K�-�����r
���O8����V|�?��
���*��"�U���m9�@#��`�%�u*���'��?s����^'<�(q�XK��0�<tM����U7*��3C�b47Pr�!����
�;h��w������qC��	�#���
��=�[��{A:<}��B�Vk2}������������$���D�C	,	�4!��N�(K;z/�b�zFa�m]��	BgB��u�:�9�?TT�+
�X���}A�*��_}@:�s�@# N���y�������_��L�`kP�V��!�^�s����u���d��M������B�fg�����L����[�EI�����>@9]+��='~��;����@�z

������^�B�}���O��bg������?�� ��Cc��7�s�}����_�k���hyl��%�}�>��"��: :���W�x�!g�
.�����[Zi���E�_������r�)#k������xr5��������^�����mT�������(��^k�P�D�Hhg��%�������|5x1�l0:;���vp?���d�w�W);;��*f%��8Cl��u>�w?�.�E��I�����������.�n��Ul/�5	d�r��\i�`���	\;�a��2��K2K������y�*~�3���|��c���nm��1L#�0�M@�-�=d�Q��f�+c����*��5���Q�4{0�I��5�Q7��TK�wj4��A�X�[���{Jpm�����k�0b}���f�0-�tp�C#kw�^���y
|��=����	tx�������y._�E���0�%��r7�G�*�7�1�"?n�Y*b'����o�`*������{1���b�r+��1/�zo��HnS:����CDln��m��3��<�z����q�f�]��j��o��s��G��f
�9�M���sVk���%���������Yl��x���TeY���"���g	Lwp()�	62y �>��`���6V_����F$��t�gK2}�?����%���_�����j��h
��������C������C;7/��Q_<T{�L���=
V����^�t����Z����������<8t��>���5�F���1<�X���yc��1�bL9�Nb��2�+��������6c�j���swL�	H��
r��Z����M�K6�����As=`���P8|i�qMN�O(��l=��P<��EA���\%�	�-/�Bu��[�"��CU~���
���N��
�����\=�G��U���\|�
�f�?�������,UJ�E�V}�{Y_��~��l\Y$���l�����:�@/����(������p7���B��C���E���@����(�uj���P���n����W�Y&�C�o��!�a��0��b�����Bp���	���^�{,���'*��h������, �g�n���^g����.5�B�W�7�(�-�/�i�������*:�w�����r�����G
�M�_��u���y��Z7��}��"�;n���|;�c��lm�Q�:����m2m�/O�@R$l�����|��J�Q���~ ��_1����'�0��
�S���#�i�G�3��}���*>�e�{�@�7Li�!�I���D���.�#���7����eib��oE�����E2��0C�O@�����3�j:3�"�>��$%����@����f���a��l������,��]o������B�������.U���t���"�bw���m����<�R�����j,������5a��&+�!U��Y���E������6(�lL����+����������i�)U� K����yy5��Z�G�;D����]�;_����:I������p�oN�T�i^NA.�e�����b�(����+%�[��DJ��B7��eo�6_M�F)�Q����K���M��j]�{eL���]�_d�g����M�R<Y��6�b�Vn����B�l����0^�6�A��D^����*'��l%U���r�������������(�G�/
%=<m��QuH��>�P��]���i���1?f���UE'��E�7b2=�5�2��XAJ�m�����fa�D�&�� ��?Fn��J�aQ�A�����)�f��E�g���eML���>��'Xk���0���
��u[�B�Oe�0g]6'�zXY�\����(�3��������&�4x�s2�~���!�!�t�?y���Y=T
g����j�0�Y��� jDXS"���J����.�ky���������eyW���e��@,�@��J.�g�������d���s���t<s��CU��S�RL�p�D�Q��(����.q8��!�,��]�?��6Tc���� Bm���?.��5/Hrw�-_!{������0�[�o��C:m�����{���w/�o6���8^��X�J�%q�<��y��OLn���Uxi���R���
$�8&����s#~� ���$|=���oC�/K?��}+o�L��l�[~�>�_���pxV��@E���+�_��[o��zrl��C�����+aK�1�b^���B�F�fN7�C:_�w���������������qr�u������ubQ�}����0��"U �'�|�c��N^�B12s�q0e��3��z"?���c@��%���j���X�Q���O�*�0i��@���Cq�:o��!���H��Zrw��J�Y�)�W�����b
���b�&��,"��-�M�[�+e�P����"��W�
�0)������`�giw�5�W��5S�Zm�U���<qq������6�bS�z�}*��l�n�K���g�D����*A0G�yE�%~�6���(�mP��A�N`�����v�/��8����3���K������UE��,
��P��l'�z�['�CZ�
y���������S�s���G��Cp���3KlD���\N���(��	�/�����g]�#�N�: ���~V�e����/���G)���B�Ui}<���8u����)�<d����7�e���Y|O�g�E�D�uF�(�yFNH%��3�e����g�2y�p�G�y!>����+��H�Alt��0G���]�/��c���9+��S�����I,�k�"K*>g�2��K(�K_���Z������]��K�����Y�{�]<�]��}���Dw����m�i�m����
0�T�{���l��7J���@�d�i��&�hG�`{K.��w����%6��-�f�s�o���)^���`�u �K�:P�/7AW�������'���.[:��8��g9�wP�fc������3q5��m�p�r���gwto��\�Un����M��X�������
����m��8I��C�c#|��,���&u"�)j<�%��<H�K�=_C�{������v�(���$N���~��Li��8L�(����c���M����?���>�K���EV�fi���N������eSP���0�	������#@~`�v�i�o���l�Z��� NkHBt'k�u����p	�b3�p�d��sv�J�ix;��Od�����R� _��9����9���<x����)�z�
n<��}u�0���
���u[.X#<�s��^�^����<���[L%������_��o��SQt��
J���p����K�.i>���8�WU���������@�Yx�����p���Zf_�|������N��S�~)�~���O�
�Y���.�������Azhr���K:Sp��P�����+���s��b*D�o�Dzb�_R�7d���f�L�#U���{�B�3/�P������)�������>�__���k�e�z�L���b?�5mEq�7�as^�gB�<��t�!�!�w���W�z(
���v��)��������g�����D��K��p������n.�M����m�U��I,�����I+>�������� ?�$`����c��Pt�2����L	�!�����H@��X������$��Lg���o���!��~��l�Zw�|�i��y��Sn�wl����p[���t��5O�W�*�R�G���D�Q���s���`q}]r�&y�?Y.��\1���4���3V~[i1r�f)��������r��s��M�A��1UE7�#��*:������qW�
��Q��q_�5��7��K�0�%x'��7����8����E�f|3��Y�4�k��=O3H��?��H$F'�����k�w�|��&D���Cm�� $��8�'WZ������g�y(.ew�����ti�eZ���*<���_��P�����i�[���&�<��!�a��{.��7�a�����R� �_6��
B6��T���B^�<��R�{��
�
���X���"��(��1��l��c���;/�7�!8Q����]o��C�3V|���vU�t��T����=MUb���NK^�����{�J�|�����{���>��CL�`K���+��8��J���8L�<T�����-S���!��
X�W)(1*~�� DV�
��A����)<L�o���0�"�Zm��d�7�������w~2K
R�r_����c��wDHv'�&���5m�S!�%����mx��6�@���J����22+�-*V��?�����~\H#>A��*��M����UV�+�h�U�����B,���!�6x)�'�+�W��tW�)��\(����z)�?��:�`W�{]�)|�o�l��K�6r���0���\�M6f�>�B�_��]M����e�����8�Rx5��� ~p�9��2OW����?�-f2]���M��	?R�v/��[ �-(����{e���D�.��!}`����M;�8��&�CH����Pf�Z�h���zq"9J0����Z�\WR3�����P��F��O~Q��(
a��H�ttb#���o�i���=}��vK�M:&fM����j+J�U��BI5�D[+�:dwy�29����N���C�w��"�[$	����U���h��P�`7Y��l0��P������������Lx�C��n�k��P]�r�K@�>��q�v�t�[:(��o�2�cX�W{:.��R�}�/��{g ��?��=�����<K��0q������^�O�6\@������w::�7.���m�(�K��-�(H:���IR^��$I  8/x��ve�
J�5~�q���&E���m���R��h��+>��y/���F�?�����Z�ku3��rC��:Xl_&�2�Q�M�/R�'�{m0������N�_��>J4�r�%V)-,��
bs`m�MT1i�����QE���P��a�@vt������ �A3G>�P�������\T1x���=r�7�n�l6��k�Fd����+���
v��Y|�����]<��*�0�J�����>�HGn�1+:�E�����3�>H��P[�]�m.
�l�^�!�A����L�K�{��S���rE
(.����|1��	�b�Xv�oF1��jZM����Dt
8�U&��V&A]�,�2������B��(
��
c�p�V�/��,��c�H���������	i1��u����D���A��FQ����2l��Pw{p(9��q0�b��r0ldXj=�$Y*K��4������v*�"5(���@���,��P4O�����t�?+@�fe����\��P���q�%�����b��x�e��1��KT��z���q+������
n[Ux����+K��]��.����ao��-+<
L�����k�X�P}?�FB�d�
Jz����5��l-�HC�n7���i����� ���Mm�y�{1y��������P��vCi%l�-�6�QS�O'����L�X�HM�:�A�@^!���_�
��J}�w�?�w�������;d�C(#:�~1�C6x��wE�ek��s
�>��F7,�����&�[!F�o?����j�M.��q�9V�Ub����Q��;9����X`�/�����pO�C	�<lz�,�AiWSPH���=e$�^����I��@�quXTU�0���o!�D���[���l����5����gXc�L�u(���M�8g���Og��Z�a���ep{Ob���s�y7\�17F?���>�|��L�jAx�
�������f-������j{/������&�;$��Y��x���T��n�,6��=�B�Q6#i����O��J�x�Tt�+���h�6�l�p��o1�Y�E6$���o�cc��&��g�*"�R|MU��,�/*)�s���� 7�i��@��
�����a~���MTt6���:�,iU�7���?�&��
�,.� ��F�#��:7#^,�����o@�L�X�Bx�A�����P�����}�j����I��e�9i�����L!���U����N��x�+eHC�����8�(0��C��~C7�����/1�E�	���lq��!-��<X����$�y�k�M���D��cY*��&�l����?�0�t���E��vC�A�Y������"+�����}Q��{PJn���,�Z5B[�E����7�A�a"�c|����:aI���W�^�4�?H�p��n�>�Z���WS����A��w�n�2t%��������������.u��6����K��r)4�+!�SUrB#� Y)k��]�,\����Q �UWD?�$�����c��}J#D��&=���
Q��kp��`��w~��?+!�
�����dp��*<����]��c�:�>�S���������+}�~�-X�c�v��*iv8�,%�t��FYl*���99��{��<F!}R��{w�[>{(U v+��`����b
��_�+���/�����e?��)<L������n���V1������|���b�>BS]m���KG��u�M r���e��q�w^P��K��|�NK*�#'��[q`��f7������c����?n��z5sx��g!n�W
�@
��CoD9���E8�GV�]T-���jw5��������;y�r�#�<.q�I<���l�~�/����,��b�u��f��c��z�MO�����Wk2�)��y�!�������H�������5������A��������`{^+:��E@����9Li�z����
�LYW��<)��T�3+8(�9�:{�M��|��x��F/J�Ew�������+
��*�����C�����L����o	]�W�^X�vUb��}�?���<+���w$��cx2YwUv�	;6r�.��b��$��������VM�Y|�}�����g��1ly���/���*��*p��;qjA� E�q������s������4sw% �+���k7L����9�W���(ST3@��1S>�s�>������7�Q"�G���Dq�W�lW���(�mP��%�$����{��7��|�:�@��Ei>�?���������Q�B|?���9;���!T1i��1�N��m�%)�5����]f��F���9���Y���p_���S����iH�tn��9�d,�g:���1����-ma7�.�y��0!����Q���I���k���w���/R��[�����L�j�l����nn^fI>��g�S�)Xk�-8D��E:kaR�<�`KRD������
�n�8t����	*s������J���E��$?���������v�����u@�
R������}��n�p���L?k�����A�}c��y��
1L��^f��y���l#��`�A^���D��%T=��{������
�f��d��e
���r6�9�S�9�Oo�3���X����zIC���~��%�6����+`e��a�q��(����dJ�M(�+��wf�Y8��?!UB�F8_�p�~�lg�/r����E�<F����*��(B��HXV��O�����v.���K����������<`c�<���{	���z�+z���7i�`J�'r0C�����m��4LW��E��Y�!AE�|0�?�L���)j�HG�M���� @G�lig�/�AI��������Nq���h��+k~;��xu��;tE������j����f�:J���@��qz���cZ������+=p�jex�*#�/�h�q�@m�
c��
|^�1��Y�x�`W��5��k�E��?���x����q�&l���FYl��^�%���,Y�G��C��2��_�������_������]�x��MWY8����n�}`�X��"����X4 �G������Je�J�8������;�:���������?v�}�L,E�w�-�?6��������2\��K���x�{�0wF�O�=��d�D�="?���]G���/��a���`f�ZU���9������,a���(UH*�GQ���zC��L6������������?�;�q��B��*>�h����.LiV�&-���.��O�[Gz�������2���/�����t��a>�����p%� ���t�J�������;�����j�������E�?Y�����JS��v�Q�a[(��0"��8�����V:�"�����	��Y�2�*4q|L�2����2
�=�_�P\k�����(Z�������	�Gc���]�QI8���P&�=eZ���"a"��'59����U�mg	v�_dL��S�o%����M-�^������
��������Fl�6�d����8|����l��rKQB>(�� �(
�')p�<��F�/+�8��?�%m6E�_��a^�~^��6t<
4������2��2'�����'�����#,����^���d3�����;�a"�� Bm�� :D�D�����Z�5.���">���Q0;Y{��o��_,��?5��N��6#p��S�G��|p���F�n�g���x;I\��H\F$�a�p��$�8� 3W��4eFI]9��Q	����S�����^��X���{P�Ms�z��2�B��LC���v@B�p<�m�
�������e|s�� ��^(e�(����C��8�F����K�Q��l����g�%Y��f+�7��`(����u(\2j��!��?�n���Zo�=/�;�}8��/L���j��I<������I;��3}���7n��}l��_�
���������2lEiL���O\��k
>p2��kG���
s��<t	�2��f�i�Fuy_��<���>$��;�����q�%%�7��4�M���7��o���\j7WVN��F5�Kz�,����s�H|�p>�C�kz�����O��bL;;d��=t�����$M�2B1a��*rR�{�~(��w
��?�����d�������;���(���xIj��������|/8�0U�Q�����MC��f�vc��/4�y��3���B�>X��
8x����V�z��U�SlVIm�Q�J�@/��u�s�����pH�����"���@�������sx_������~�����NL�$���k�$�q�~##�1�=D�M�����>��,����Y���~�5y�HM�N�<���X������Q��f�o��5�-Y{���iY������z��(���\����1����"�Jf�w����(ig��^�b�k������~�D��U"Or����|�T�~��^���cx�<�����2l*��?'6�:P��������J��<��U���p����)g��}���1L��^�����m��0����m���l������� 8L{��2-���M��5|,�v���vu!�&�����N/�XliW+���v�f���(�q"��N������%�<���7Y%$��A9��lp��A6��z����sL��(�>��a�����g"�z�E��%n���H�y3�o(��P�N�
��F��ea&��l�Z��J>TU���*��4'oj �6�7W�g��+��7Z��+�������_�v\��k���*Qh��������������c��"�� ����}��\�W���E��gj��6*�@���xkA�q�L�I����>��b������|��6���e�i�7�bsvY:�h��q���f����!$��sT�	J��X�!���0���XL����s��!�q��D�3��;.�x7A�������6���q��������,m��,�K�5/i�v�+.�M��x��4�G^�>2�Cd#|���9�i}�5�aL}��`H���Dn�?<�lm�EYlh�A�����J�cv��y�}�:
�SF�E��RHr<������� �I���$Gx�;\3=I�8M��x��15�C)>�k���Li�!�)��\R|_�*������=��:�Q�!��w�_���&�o��`�#�b�}���M�{}����un��d���P�pv5�/o�
Q�����u]���Y�V�<�*>w3�����a��6���x��>:4�i�T�7P�����\=v��9�}��V��M�y���o��s�
�SF�7��n7����'�-|��a$0�|p0A���F�K���F���_���h���E��:�4�X��T#6��?����1�aZG�|�����k�$��������B���+���J?����������\�����Iv�f�c�~�j�����h��q�fr��:�������������ji��o��=���
����o������&{��."qn[�:I�!�i���v�����&T�x�V/����X�}��M��R�Y���?������|�C7�p��M�PDz��w|��]M����e���D�DQ#n'Y4j��p����a�R5|aCN��X;��������
J����,��:����������o���c��NBO��xD����w���(}(��FV9)��������i��`Y��2I��U�s�+#�83�M?@�(����i���wQ�����`.�3����j�|�"~&�%jo*^��a��H�����KN/�����re��8F������>�i�u��4�l�!0b�'b�����|�F�=��[���L����W����Yak;���M����VW]���Pfg���2[�"w:yJj���KM�<��vU��2l%}�)~\� tI�?P>8Xq�]����2�����iJ1���s���pd��4�*]�<��";9��\5J�I��7V�Y�w��!j�Bx����Fi�����:�}�P��zP$���s"�bOG����?2��dqy�����Q���H�*�0
D�o��������&�Z��}����(�g/��>�;���@�4�"�@�-�!~
Y�~�����A��?G�>C2�7]	Ma������Ro��y�Il���g�2G��A���2$�.��P�R���7�c9�t�T�������H�� ��E, �E�2�J/�o����G���t���y,c��[�=(��P���#��z����K��Dw���7G����������_����
�4�@��o���
q����g�2�����(>��8]%���S��2�ay3���4�b{��1*t0r	���u����9r�S:�1K�?����C�0��Z��$����D��'�$2(��������}c<v�2J;2����_#e����~���d
��1��Q&������f�Jd_[Vb�&� ����[V�p�����,�AiY:��h\1']���l�����33]J!�&
���2U���xx����d���������Zh�h��c��W�9����!�BWiD���Q�5�O�h1j�7��@�I�U.���Q&7���u8�W����Poe�0~��\��
�6T�p�a\@��
D�K���G6aL]�]��q����CU
k��dS�U��K<�?�E�2�����f�\`g�}�3�&���%�������F\��|�� )���%�ND�	�GM3)���L��H�&��Za&^���Cn�X(�����,.fj�
�{��������#���l���h{�����J��+��>!�,�qEa%2�E��>����r���W���L4�g�oEH:���=y�M?@���XY�Xm�2�6t������&Y�v�]X�l�@�$wk��l[t�i���S���M[�p���.���1+����;M|~��>�[\�o�N����/���t�>�(8��{K::���������t:(%�>�MY���(-_d~(.��8���p�%���E��j�y����Q���,I~��H7c�����P:�!{J���"�dH��u�f��ba�y��J�O��Xi#���SWT�7����Y�9qd�������T�/��;[����JY	 �s��4�-�`rQP�y��g,�,�gjE�F)��-��z&����&�CU~��a��J*D�u$�Y�I���/�����I�a����=��M����,��[��r�h&�N8��(!��qC!.���g�
RN�?�
t������(&c|������[�(6�9/r� 	��]("����Y�JV���3
�}������C��9�,SX)�e��R6��z�	�>LC��L`�c�@����"��f���R�[HF��k�4sj �:(/E�U�4QE�[a��@f����v������W�
�)�n�,6
���gi&����0������g�8���{; o:	orF.��g�*�:�@��
��(�d����"�u8��"o�4��\���2��U2��	o1�!��7���fdN	��4��?��H��8��O�9W�h^6Y	$���_:��aS����(D
D�/b���}��)�
��?�*k�r�U�2���(S ���_"~.�B������{G�\�������CUv������\(�vU����eX4�y}f�*$]`��������cY���j`�;���r�]5��M
��{�c�������M���@&h�Vy({�u���>��ru�)@����� �J�zz�)�	���-��p����!*�|��� �,�����E�7�%q�B�W��s#9�A���g/>Y���P��c�����iaz��%�s���Ym��)��M������B
}��2r+!�������0�|x�>(m��_�B�����q6O����0z�B�_c@��1��8�~���{�|{@Y������/6��zLn�kB���/���w��}����o�'�����~C�������5i�G��$S���Z�P��d�)?)"�^	�_;4��Vz������A�N�X;B�5E-��p�T�w�6���m��#@6�3xW3�t/@�
�;H�v^Z���'sA�$��R�V��~���q
S�E���/6f���EZT�����+�����Z�gF�t�L���������]Ux�����#�TG�G^�!�J�j>����I�	�O�v�u��������l�`��_r��W~�V����_��K_��+3dU��9��,qW�
��Q���1�Z���;=�:o���/|��=!?�)�~yH��g>����oGm��]9��I����y�$����k�������
��F�s�d@*���7J��b��{<4�X*�������;#���6�w���v��0��q0E�0lm�(����>�9
����e�>�F����;y�@'��i�pr���:���6����������l�k�/.P{[�-}�g�0�g��P��������f|S��������q3�U|�����1��&����������.�m� ��b�w�@���'U��������u��W���$v�%5!�����r�m;T���B����������^s/y7��fX n�PU�:-���M_�lV�,��/��#|�+���#���@���F�����nQ��J>��18gvmW^lj�`��.�@K����A*���:&:���E�d���z��orP��=`r��e��y�Y���6tH���P�����M�_`$�L�OC��Mo�vs���E:��
�X�2�"��+k
��/+d%x��T��1��7�4�8�M�n�8�v�$q\ ���4�_�`������Ck����
D��Vb�1����/�d�(s���[�]�>h�6���n���sE�d%��0�o(MT�p�ve�i��e�s������:bl�V���{�	��o����b�ggxrU�������5G���Y]y����J���_��9����F����26d�:
�8�RQ�bwv�Q���1��J���/���M<����JXE����>&U��#�b�E��0�yRq��l�E�o���cK�m�C9��H��_���}��Y��	�eQB>(����5�	�{����$L�d�8�/��{ZG��i��I+l�[��b`��C��(���O�s(_��HU��=g�	����P<��mP<��@��`f�������/}���&�����/���E��y<:�+�-��aS���}����!�,*�W��1������w���~�M�c�\J�=�scS��Tb�q��(�C�|N/�)}���ue4:`���)�R�=�@��
�4s1L�X�������?UN�b���s������������?O��n�eP��^
��ID���K���������:��Z����q��$�i���P��q�$1F8^���� %��S����:�}@�6S�F[��!"�_MV�6|����jG7 �l��b�.�� �����4W_3-�2�g��2����#@�k"�
���Qe����au���t�@�����`�zY0>1�"U%��Z�����
�U+'��K� u��F���!~p��a�����Er�5����o����Q�a
J�P�`�8;R������@�j5pX|8�����)�Fw<��-�9��a}V��~���YL���a�z��tt�>3b��M'�G2.�s��U�3�%�T�!6��>n��S�Jx�})u���D�=JlR#����az"�nW%6�������D����XlF?���O����M�9�"�[)>�!2���������v~�A.����5����$�����
�[�E�7Fx�8���x����'�z^�?8�6�_=�X�EczE0�w�@mh��@�q�����\r��!h�����o��?�T����}��g�a�����~lz�B�*��b�G�A>������|���{��}6��li7(�m�T�z^M!u�����������^dG�o��BrP2fUc��1v����A�(�$�C��0F��^����T��S��#��W���T ��4�?H�\Maw�t'&1=����z�h�y�8�o�W7��������e��T���NB_@<����I�`���\vOd�����vD����x��2>����ed!��������OJ�t��h���Ys��n� l� :�0e"� �IK<�����,�K�n��z��W��rY�����CU|��D����t��6s^<�@,��?�Z����W��Z����,������O��j|@���j�v/��m���p�<J��"�9�c��^B�(;g�o�J�(�^�Yl��-�J�>�a�%����;�
>8��3�
��+��%CW�qQ�?1�U?
��C���2+X�y����9F$�"�Y��l�&f:�������$!.Jl&�=e
R	��S��`g���M�&�	�$�l�O����PL�gmV�&�/���`\�':���$=d^�za��	,��+������L��%:���5���&���@��
����P|J��,��qh�+�j��z7�|��FN��.�5�?��IOc�7
�4&,��qP�~A�?�L�����g�";�D"�E��|���mE(I��+B�8V%:��wV�mg�Y!}m>������
�hw�����L6sV<�!�A������<��I*���k5'+�z����q=l�/s+lV��!���b�qR#� u��~0��S��C
���;��f|�������k@w��X>~���m9��)5��C�J��T��k�����������1?��lX�v��(��P����D�0Z�3��AO����%���	]O�`���f��-mGn�B�U�M�U��5z��������wY�����;��2�
�"������ @��li���BI��nS�������h�i�����������Rc�I}y����(mW5��aS
�k7�N�����?�W����F�x��tS?���������yV��~���Y�$���?A��������{�Ihq�����T�2�w���
UF �d�����X}�p �I#���r�O4i�1��wU�����yP������a>���>.+��[��*2���n��7o�������~	1�1�����A����d6��Zp���*t�T�!t�C��cL�������z�CEjc��P�������,6t%=\��B<����/��a5�����
�������q���7
c����j,�k�'
����'��7�L���
��_�z����lG�.|:��7������!d����Hl^ye�p����a�v�@|�'��zf������e���x\��H�����^f�����������/������6������~�����s w��;�C^m�Mi6
R������[���x��O�w�_�BIw�g������U �A�F��
�Q�.�P��]��K�f����Q��+��i��eX��>>�8����!/��v��fo���?�?����RU���������j6n)��2 �<er/����I���?��:��G�kg���ep���i�}6�����o��, 	B�J�#���<N2�
�[�v��%��8Xk��VV���Qz�~����\��� g:���`�k�����q5�C)x���_=$L>�{=���A����
�6������>0Vk�:�,=�?��&��<�o��s�E���4��2�8<V<��2���Hf��$�����5�d���KJ��4��������wG�%G���lE��Vy�*�J�F��NP�^��1�y��X��Y���p��?��|���������u�K��:��(��;���v����(v#��_��W��|��zL��N�T��a��'-�>F�.������A�%��]���)�����w���mv=����}��kQ��S����-<���^%Vq�?�A�z ��;T�+�H�|"5�U�I�7�W�}g�6�/G^"��O�=P����p����"�*����
�K}F\;����+��������8�g��H�>�����v Q����o�4$B�F�����LRz����\�@�����[��_���a�A���������p1�a!x�
��-3eY]�	�+u���n}V�$��<�.f�[<���5�w�Q�"0\d�U�����������J�EE�|t������!Co�����I��
d�2����/���P[@���u=9�~����f-#��W���&�kz�^"T��i��X�l�v��"-�Y���7��%~�����\	�j��X`a��;h/��/^D�T(
vdyWC66�N]��:��������BU@����y�V�������D����I���/xp\��:�epm���_&��$�q_U������~{��������c�W_�^,��p^PY�������'��!1Hj�������������(|���L��3����kl?�C���P�\5������U|�(���Y�4�w���
���D~������o�R��4�E��N���I���>�o�x��PB���`��a�m
���,4�x�
+1��_J+��
���T35Hj!����i��wx��Q�.����c���=l�]RiO��(�T_m�3HT�����������:��$������?/V��^���G�l��`?x�;^�y�_P�������:��/hH�[aF\{���F�w�|����x�n�dx@�_T����7��6T��J���J�3��Z������� ��X%l��?re���/pQ)�k��$���k�$�9��1�	�����{}�131;����,��gL&r�^����
�7"��h�����|G���sT��h��#k�0hT]B���Mt���'�E�2�lDN*[���l�������}�t����T`����U��Y��=��y����z����I������D� ���y�(�"!�+	l��&�x@|GL��(�u��x4�>��Rv3P��FT
�IB��#�hsk�2�s4��r�������������b��E�7��D��h�\���O�r��2�D)���)�{ND���"���9$�
�[�&����m5��=d���[������A���df`��6lD-��{vr�ve0:�K�����G���#�3mq�Z���#t6�*����jC����n�![��!;<��<y��U���������Cd���C��	6vN��)�p���`���	�I���T
|?����Q�m��hj0�!���������0_}@�� ��^���x�Q
��^D6Z�����+5���q�i�r�������x���z��,7�h

m�"RpQ	��p��y���������x���;V��p2�o��XW>�&T6q�L��Qks�F1�oh��:3a�������0�<

�������'�"ME'�d�|4�I�T��L�.
`E���Jd�i��� HmO�1�N�<�b�sE��L2����@u0�g���7���������e��}���5"j��5����.��PbAe
��r�Q��d4�x�i���i.a0�8�c�7��)����#����eq���3�}mW���'!���h����:g�=U��0'���m�?eP�;�7�����'���`nH	m����u��*9�����g���x��e%u7P�s1���FWV
�	�r��
SDm��.	��^w`j���;b��P�)���Mu��Y���4�C�����{{��B���~t�(���D�����2��%��^����3�&T6�
�]������������:����\�(��?�LX6;�_D���J8i����_��U��m���]�#��B(3 4�!�;vP �%��q��F���{���O>HMT@��x=��3x����\���R��L�)�������H;y��^J�4g���*3�i����MSht\�~�)��p57���K�vn�
�j��Y}���:������cg���/�+	����x�hHH�*���y2�T�T/8h?���
V(���/P�<?�m��Q|�(�&v��U�Y{�IB�<���3k[=�GE��z�sr0[��Y"?�ZUq��8De)�*xgH��w3(u��������X�0�12��Q������s#�����x����,��`�I�
T��\Nx���m���� a���\��}C��a�Q|���o�����TUI���e���TZ���=�8\ X�!�+r�vk
���:,�*M�b@�;�
<H��<�@����?8�^w6��l�T`�l�������d�e{�W6	~Q�{(�w�2��9s3)��9r<��BG�*��N�<��'#�G����oEZ$$�N0��v.�
{���!g����Uxz�N�_!�0�$�����-� m.��Y���a�X����Je���>��
�=���I+��F�Be����B���1����$9(��u/�����@��^�)���
�!�1����(>��|��������2�Pys�59�@bY���;�Jb!i���%�p��t��s�cP������S��)���?��1���a�����������V��]�+��f���43�;����AN��Z^$u|�$�W��"�����n��������&�zC���``iw
���;l�����V\����g+.[��������N��t�<�������$��H��Eb!�D�_I�dRm�T|@�ye����?����������������YDw�V�M�r
�2a�������m�R���c`F��U�� �W����T
n*�q�B�T%Py\`�U������}5��I��g���9$2l����70��#Vl[
��e�3se�?@E'��������������`n���.��5���#6�����'���;gU�6��.\�}�?�E�(m����W�q1�b�Ae�i�������bg;(�[�����l��3�bo���.��:S�i�| w2=6;f��	�M���+< �������q��BS����W��u<�\C�����?}�*��o�d���|%�P�8�x��J����SmF�O��tR�~�3b����i������-��Wrwn��
�,�y�����2�@�vk4e���D�0��@�D����8�H�����?�\nT%�����+��������7uW ������qL1�x#sM� �]�`r	v&��D;3�k�-���M��C	������e����9zR�B�Z?����R�a�"9L�h�PrH�"�e��)[�Tr�mQ����f]]L(O�������wT&�2QSC�|���\Dfg���o]���-r�<�GE����y��q&�;\L�p�llsa����m��=���������S�6�����~�
y�M���R�|n��������xb>V�f����Be
d�;���D�*?��A��/�������.S�?r}c���������EOr���f��x�����oo���i�����i(=V�k�S���v��[��EmW���U�7�jobks�:���o���W���?U�<�p���`���l��	Ph��,�����d%����eE���������*a����F������%:h�R���vL�5�v�~����~(������m���y��2h*�"z	$3$�����q�-�����#����N�&�~����/��o����NUTT'�����E���6V�5�N�@��Z�jo�gTD^��/b�M�5��4��cv�$�*��������_WP���V)\(xhQ��L)��,��a:)W����\�M���
Oz�����}b-�*7�E�x��D�w��
/a#��A�U:�]a��������-�	gf����Cb���"%�7���j5e������d�4��R�
Tt�C`��e�������C�����Wlc����rk2���d�} Eg����O&gIB��(�+	.�!(����j���#y�k�b�P9�[���s
V�$�_�$jB��}�L2�X�3����T��+��t�(H�W�A��Me����P�����
}-=���*��> ��9��A��*!��z�/���Jm(�/�xS)��,����-7g���T���bN��}W�
P�����O���R���0fI����*�@��V�����hk^f�o������������!�)�;n��sRP��������r�J��/�
����e_�,%*���'`6�*Q3x7:�A�e}����|T�B��~Kv{��"�$����<'��N���hcG� |9S�b��(zj���+��z�6�e�Y�q�B���!vc�+i�F'+�>/�����?t_�t\�d�E������L�{��J����|�����[�
�7��8�����S����|��{o������K����}�����L�����m����|��R�*�`���/5�L�xj��-��s9<�������G,��UoE�������G�|Wmf����,�����S�,)���B�y�@\
��4$����jr����

@���J[x��+��7{��evP(�S�7�������,�fc�&��S�����_�[�m�����Y*{���Jx�����hl[��J��27%j}�[�({���I,���f����Ee)q��o���\�b�����`1��-uo�
o���i��Z$J���{�M�HAv2�����/w�;2y�>�N^�X-@�TI��^Q,�I�s�B��]u���|��k��F�\�/��w2"�M"�mm�����Z�����T�>�_��y��(-]��>����M�G$3��� ���������pQ�S���~���i�x����]��������_$�����>4�N���'�9{	����!C���B��r��������R�;���q����������4M�ck�h7��/��|Tt��q��#��8��;��![���D[Y�u
|@a�fI�A��Z��X��w��E�������r��:w��u��Q���F��������
�o��n��pX��p�&5!��]&{&
�Q�Y���������@��H���2�WH8L�%Q�����	.��@��$B��@�o�������{A�����?�I��2l��!�)��}S|n+e��eP�����*�!��
TtN�H�'��6(���+�r��}
\T����$tE�euO<����6��{��.���A�w6���B*���Z�sx�k�vZ�s���v#�1�d��y�U��
���P��������.���P*����<��2m#����w����z@�!|DO���a��4$�R�D��S�
��������/6P�9>��w������~(���h#�}D��Z{%3]P����c������}
���L��E�J��J�`��/ !0h���^C�:���������h`M�<8�A���!t��*0�;I���O��r���"�$��\+��P���Ar��.�"��<_�2����5.�x�z�7-�dk
	�
V�Y^l'Z�HQ�]*��/�* 8/�L���xP�����L�N�
(~�w���mP��60��|
Y2�b�.�$(oMm|{������p\9/��y����O$�������������3t,%1;���]Z�7��P��M����������RpQ��y0e��T%\h�v_���X��
|.[�}�>�}�_j���N�[r�������i�A,'���Y�������d�C�������e�V����i���o?�^�S�,P�l@�>By��
���X~����|S�|�+�T���I��
�-���CX+���0����$��*z7����3ZY{~��o��a{kB�l7X�����Tv�*���Jy\�
%�V�6wuHWU���V����];���%E�����4)B��������N2���9 	��8�h�!�z���7P���o��i�������C�f0�Lu���������T����NG��0�������B9��O�70����?rf[=WT,��/��^]g{���@��?.�)iG���PT{(��C�4L� 3�=�#J�al���7�`�7�{_�:�J%�*7��@���B�q���s��%��D�O��)����=I%�D��Jm(����,��RW*��P��`i	�2�bo6S���PU��r��g��a��BR�M��vG��N=���i�
������2;�$�}���W��::\��;�,&�#�69^��s@��n:�;E<�Q���Z��]�:$��u\�~���������#��1�������c�t8���w/�UBX�
�$2l�����v��� ����#����v9�~<��i�/8\X&W�q��7���=PSr%����������W��%�)4����v>.��V���0���'r�*�[X�6����q���,8�F[PYs?5��$�����yH@�?H:�!x�:uB'KI�_�{J,J�� ��G��g�i��-���P�_�r��Q�	���M�l����&+a��W��6��Me�i[��q��	O�M����qOrE�|T�dq"�f5:�+��c1C� j��fE�a�������M���!��m�0"�.k���aZiCv���6�K�S=+>�)�P*>0z�DT8�M=*��w�+�]M'	��"Qh�(��Z�Ks��h��'��"r�!�H<���1^�u��}SA�7��� ����J��K�
���W�v�Z �����@���P4S&�j��J��^��"��P�,{_�����m���T��}�������g��*U\zSl���<[�<��t�����M��-�W���Ea���2�R>�����;y�/��o��lw�l[g�?PE��^���� e���@����Uq��cCD�(�|��J�{��nx�r�[`�A�f��u�*H����R7*��&�����['{WWT��������
��U�m�yWpM��mj�������u4K&��.'�Z=�����8J����pH�&*�'f���75I�?��F�
��>�����JLGe��?(�����@���	����sC�M�E�@�����o5Hf�<�&�)&�E
��V��j�t��e5�+O��������HZ�T:�*�Zh��E�\���8��di~�qb0�J�-_�J$�r���R
Z����T�cL�������Y�����N����?}�rT��1gV���w���������l�t�Gl��s���-�4
�4z@���x�7=�CYJF�Tb�R|�v'��{�E�~Y{��n�'��{EY�4n�5���H����"������!1/6gU.E5b�H�a9���U�/Q��\-W>���y
dm� ��O`c������3`S{����b��=Y
���7��\�o�H����@Cb���,y(�d��LW���k2��N�����W������Q�- ���h��	�\�g]��>�5��"�X�+0��g����OJ�9�a�����>�����H�N0$oh�||�rr�v��H/0�'���/PI-���i���~�|v�I��{������2���S�T���`����!���� )�"0A=���A�y%Nq�"��g�//��D<��3�>c��A�09o�%hI���
y�_�
���Q�����������^Py%x�t!���m! Qp�����o�{������������
)=��l�^�#���q��Mp1�<���A'������V�%� �V%����H�Q��,R�8$�F�!1/I����[S�w�4
|�i�E=����)1��#4��q��.*y��~$JiL��a(v��5=�-�2e�����a�ZE������}![����8���E��g���UEd�����6k�&1Hh�"x	�NXV�1����B����M��5~8��]}���
� ��3��s�X�;0���*x��Jy��o����H�3���~�@�a.����AC�ly�Q���y�zt����������Ax9=�/h�(�[�"m��,S�<`������y�3&��9��f2�Ng���	���������]m>/ �Tn��Z� ��k�,e�R��d�Z���LV���"-!�"������"=�����g��^�	����@x{-�~z��a�A���(���YT;Y}�`�OK5p�>m���'%�g$XkA@�������=��Z����T��d{���b1�4��k����,o6����|�I_��:�
,�������IAj�'\�">����e=�{i�.T��M�V���P�v��?H:�!x�>�H�������(=�}��LA���6��lm��u�N';����R�0�)�#p� F7t[�l0� ^���]�<P����$"�'g�L�,��/�7�k��d���y�5����9���6X�G��Z���#4x3p������^[���GX24��*<�4�R�tVg��o ����;i>��X(�q�P"�T�� HJ��������_��Vy��:o���7g����&N�%vh�@���7�Hj����eg$��������&��u������>w�����E�,������f�6����m����������E|���|O3����Xx3p���%�c;_t�qMv(z[Mn1���:_*����yQ
���I���t6����^jO� ��X�=H���YL���_6<�Jlr���C�w?��05(�"��OH��g~���\&s:R!����P���
F��~�:���*���J�	h��qU������>�P�O��L�������/+'��*�~��B�Pq\����R���8Qb/M}2���?+�yK���)�F������>���E;�!��2:B��
�����G�+bZ��`��B K����]|F=�AY�l�����{.s%�S��"�uR}`����N"W/H�{g��<�~�m�v��Cb��D�_I�>R|�0�T�P��/��� �j������{��
���/�P�t���
���<l��.Z�>> ./$��~�~��� O������m�=TM%^D/����(�}aW����F��w�qFV�J����"�W�/C���oX��P]%b�
	e�MM�V�����NA�����"h~�I����q�AS��}����]���F�aC5���0�[�<���T�e4��~�B3J�o����EoLp�,��K&�Xw�Nm��z�M�S������}L<8y T:nSYh�M�G5����z���eJ|�h����n)�	x�*���Q���� ���^xx�� ��^��=�_D��*.l���l��t�`!�K����%��3PU��C��P�x=�[/,6��XJL	���D0����T����l�$�VM%Q$-�����@\���V;`;��O�w2y����Hx����$�{W�C����4$��m^"n+D^]�[!��A���k(��gN�a%�D�������/�duH(��*�3!8�:t���H7^x���__�mizJC�W3-} 	;'Z�9#Je�4s�<>�H����d��> ��b�a��mT
��'���0���S�R�E%<�tVpqo*E����P����1�oH?PU�'�u}{���-&/�����uVp���,J�����������k�~�
��` v����oT��)
���/�?�z���uweS�)�W���T4��_�J��3��k��b�zQ	�� <��<l���S2�Q-�`�x����,�I
���slJiu*���t����%�XE�Jp��x�:�:���J��2�J�����3�?������5��?O��{�:��M�3������7;*~#���)��7�"-��Z��L�'���T��������2�U�?��F��w���hH���)���@RJ�8����������L��?Bpvi�X�
�&t[5t&��H[�#��CI�,AD���Q�M�����W�8�E�hj��q��q��������6:�.�����"���C��v�h]K,���T�E�R�B�i�yA6������^���T� �|���HNd��@Cb�lA�C�u"���O+��J�AW�T���y����5�$-��_Ip�<1kU���wn����-u�����	gOv���"r���"��	���![e��o#�x Fa�p�� ~ U �+��P�^��|
�����B*�p��QOa#=A�8���?N���TvK���E	t���!Q�������H�-G%?�����>8�Q���4�]j�;���>/dH~����>URy��������f�*�D9���#�����7�p8hP�qu<T���w�
�|q@]���-g��������������������7�����K�h������~�Hsg������$k�x������e�{)��������A�3��&GC	����B��;�1H(_����3.N�M�|�4��Q@T;���}l`�(|C�)&����Y����bu�O�o����o�����H�g�H1c�<�'x�99�D�@�a~7]�m�7�|�����������</v((��2Q�#F[t���5^�)�w4���j��0=���G����5�Qs������� ,Y?����i?_,�����@c�*��G����5��n;%2��RA�F0~����C�K����k�~�rm��w�C�D�����Z��P���\��� b�0ig#��u

�w8]I�Y�T���� �f4i`����^h���y��Z��^4v�[��������WG�J<�����F�U�`u����E%�Kr�w�����������I������:�������}Qud�^��
W��� Ng�U�9}b��DTI����.$&�{���ym����D��A���p�9��X�Y�H���">��m�~��]���.����_��a�ZJ�FB��K�����l����>�[�>��m��*��n���[*k�5T���{Ly<.Ch{����I������.��z�o�<��3�!�J{?�`�H���Eb��D��c�OWp'X\�}���!��P�e������0���>h��,4u@�����[������= ��y�NT;/�g�i�+P�I�v?��ZfT'��(�&v�|��=�\��=�E�����T�
(6sYWN�������(-�����k6�2��=9:�y<75~���E�)C�	�������X����qU�>�B�/�#2iT����X��ZN|��x/%��2��+6<���e�J���Jx�'~���1W���<Q���
���Q��+{���o�~��
6���i���'��%,��yD����.���P�;�2���"���I�1����O���x��(�_��4�8�Rl�I�D���g�X��&�l�@�I��p_��Q������|���\��[���z���"r���r�]�X�
<��}�-���n*��,��C�U��{c����M]y��/���h:P�`��q5�M?���
-�h��\,oUh�S��Xo>*z��w��P_�����0�w�f#^�����'�%���y��=bCQy����F����X�C��U�V���fY�u�Ya�6%��N����)}d%Q�.K�]�NF��
b����lN	&���z������]�kt)���m!�]X�!%"��������E��X���d��M�����q����q�����.��C�:�=�Q�2�317��I�$���
<Q6��������u��)�����A�k��c���LU�
�o7��������V�7���.�J��.x@[Vs������M�������a�AR���y�"��U�'����_q��u+�q��_�
���,�9���h�P�R)�k������lg
x`^P��<�@U��>= ����(��/�)�Q=�2��.���74�k���������R�j��{(��C�'���Q�����y��-^(���2�ZnY5�[���lc���y��bb=/�"v�w�����*v����7TX������kk>��~��fP��|�HJ�]�����3cg���L)��JP�A���K��n��e�U$v�/�6��?H� ���[x@'��5��_�'7
/�����b��x	���b!q����3��3VX�[�G����O�bnA��>k4����g_�&�4em�A^"����Kd����<�o
Pb���������
D�����3��:��!6(H�(_
��)aB�'��i�x�?�-�?�L~A������h����H��/�g)�*�S����3\X�s����I8�����EP�>�@�a��'����T��,�8�R���&�V����2+;x����9�� {�P"�F)���rv+�����tO}�(�� �bL�sX���tC{|pK�"uX���E��R�|$B��;��0]�*��%�+�m
B���K�?h���g����i�%��#?�9a0����4�e���6�"��*d��6|e,��;�X��0uXE���A>4r���)��,�����sB2�9��������)�<�-�*�V*�q����3p�=�`#�p���j��������[�x��T��	�f_D\T��J���j����@5��N]�����;�����5�j�^n��C��>`�i !xl	h	����@�8$�$om�Q���^D�P	�� u#��?����>iP�����x/�:�>�J��L��t6�j(���w?���?�O�8i��`�A>X9#T�-��h,K \�U��
,~H��������F���n���H
�
F?~��q��F�r�z��m��H��&k��z�m&��}��I|,[���
�B$E��@DM1����;�����������T��k�4���7������m�+�w�
x�V�����&�����;T��u����3��	���1/l���;�I����F���!��)�������%��/���V�4$��eh����$��E�k�������_��QP����Jb,q^;�����OW;�D�����z6\X�v
����#N�OV���Z$e����N�����R�~^�/���,��{��0s��6��,�f�����E����.Y���
�M��>n���/�`+��5>��7�#d"	1����Q��TK�A�����i(���h����E�?����P��48ibE�Z��}4X`�������Y>�!a��k��Bo�>
)�$X��[�X�7_�b	�L�Y�Ls�>�@@_�q����
����C� B���)!��H��!�\������\f�U~&�Z�)���df����������I�����Jf�"�i��b;�.��|����.�zb��%���J8\-���i���Z��E0[,����{Q��C�Z��G|p��#� _4�Z=�L�c�+��C�����(i�)�"{�S�sN�Y@v�N!���� z=�E��2�X�MJ�n�}-9�����0���[�Qf�Z�>!|Pe�`���6X�������G�q�F�Ms��^�py{�`�-os7�|9���.��/��Dw�_ul�s� x3P�0�!��_tH�����Y����>��7������\���]U	b�U�I��3P�&)�82�Zy>x;��F&+d>Wt?���&�"�������"7���'v2�T���*�/�B{����)�D�?��K��V�����
oB�mb��Qv4!�7)�E�R���XLB�l~�PM�]�d:01:oBesMJ�SV-MZ��c ��
S�NWn@�f����*���xt������%��N���#
�����m3�9�7��>C���|H8�!Q$4������������XU��E1��@�����_$�U�Ar�����z��:E�K�l�s!�o)�:��G�~`%�l�x^�T�7�"4����T������,
E�[e�Q+��'�	��*p"�pe;2'i�P*+���<n�@�U	��_�R@���b4N�R�k�U	����4f�"��4����3D���&� �v{�������7D��
l[��j���W��qF6�����P�Ee���P�Z���p"���A��>y���������
�z�\T�<������n;�U���>(QO�"���I��������|��jk�xz��s�mS���������}��3^�NS�_p�d�D���d@tE�@��
��-�2�"-�\�:;X0�����o)�}�lZ3X���-�E�C�,Z�U��z���8;��X�s!�Ek*Z�����"H;���@���^F�q��*��T`CqhI|5�Ie-�����y�����Oy,���D
�K��%�����5��/U�{2?�]C�'}�1_��i���cJ��2�=�!|G>���}�p��beo��������N�y��/0qG[>r�Pq��J���M�l_��T*{��U#6'���`�N,�'��z��t����G��~������z~m���W*���C�x��D�6H.�B�e�G���&��"��^��
_�P)�{d���_W�x�r~]�5���]�*X����4�!dLo:oB���)�>eFdf���#G�L����4���(������ht\m�B����I�xb\�rf�x�V��Qb��)�6��4`+@���F
[Attachment: one-page PDF chart (846x594 pt, created 2017-03-14 with Mac OS X 10.12.1 Quartz PDFContext); binary stream omitted]
#100Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Pavan Deolasee (#98)
Re: Patch: Write Amplification Reduction Method (WARM)

Pavan Deolasee wrote:

On Tue, Mar 14, 2017 at 7:17 AM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

I have already commented about the executor involvement in btrecheck();
that doesn't seem good. I previously suggested to pass the EState down
from caller, but that's not a great idea either since you still need to
do the actual FormIndexDatum. I now think that a workable option would
be to compute the values/isnulls arrays so that btrecheck gets them
already computed.

I agree with your complaint about the modularity violation. What I am
unclear about is how passing the values/isnulls arrays will fix that. The
way the code is structured currently, recheck routines are called by
index_fetch_heap(). So if we try to compute values/isnulls in that
function, we'll still need access to the EState, which AFAIU will lead to a
similar violation. Or am I mis-reading your idea?

You're right, it's still a problem. (Honestly, I think the whole idea
of trying to compute a fake index tuple starting from a just-read heap
tuple is a problem in itself; I just wonder if there's a way to do the
recheck that doesn't involve such a thing.)

I wonder if we should instead invent something similar to IndexRecheck(),
but instead of running ExecQual(), this new routine will compare the index
values computed from the given HeapTuple against the given IndexTuple. ISTM
that for this to work we'll need to modify all callers of index_getnext()
and teach them to invoke the AM-specific recheck method if the
xs_tuple_recheck flag is set to true by index_getnext().

Yeah, grumble, that idea does sound intrusive, but perhaps it's
workable. What about bitmap indexscans? AFAICS we already have a
recheck there natively, so we only need to mark the page as lossy, which
we're already doing anyway.
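The recheck flow being discussed can be modeled roughly as below. This is an illustrative sketch only, not PostgreSQL code: `form_index_datums` and `recheck` are hypothetical stand-ins for FormIndexDatum and the AM-specific recheck routine, and heap tuples are modeled as plain dicts.

```python
# Illustrative model of WARM's recheck: when a fetched heap tuple may
# belong to a WARM chain, recompute its indexed column values and compare
# them against the index entry that led us to it; skip the tuple on a
# mismatch so the chain cannot yield false index hits.

def form_index_datums(heap_tuple, indexed_columns):
    """Stand-in for FormIndexDatum: project the indexed columns."""
    return tuple(heap_tuple[col] for col in indexed_columns)

def recheck(index_tuple_values, heap_tuple, indexed_columns):
    """Return True iff the heap tuple still matches the index entry."""
    return form_index_datums(heap_tuple, indexed_columns) == index_tuple_values

# A WARM update changed 'abalance', so the old index entry no longer matches:
old_entry = (100,)                      # value stored in the abalance index
new_heap = {"aid": 1, "abalance": 250}  # tuple reached via the WARM chain
print(recheck(old_entry, new_heap, ["abalance"]))  # False -> skip this entry
```

The point of the proposal is where this comparison runs: in the callers of index_getnext(), guarded by the xs_tuple_recheck flag, rather than deep inside index_fetch_heap() where no EState is available.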

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#101Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Pavan Deolasee (#99)
Re: Patch: Write Amplification Reduction Method (WARM)

Pavan Deolasee wrote:

BTW I wanted to share some more numbers from a recent performance test. I
thought it's important because the latest patch has fully functional chain
conversion code as well as all WAL-logging related pieces are in place
too. I ran these tests on a box borrowed from Tomas (thanks!). This has
64GB RAM and 350GB SSD with 1GB on-board RAM. I used the same test setup
that I used for the first test results reported on this thread i.e. a
modified pgbench_accounts table with additional columns and additional
indexes (one index on abalance so that every UPDATE is a potential WARM
update).

In a test where table + indexes exceeds RAM, running for 8hrs and
auto-vacuum parameters set such that we get 2-3 autovacuums on the table
during the test, we see WARM delivering more than 100% TPS as compared to
master. In this graph, I've plotted a moving average of TPS and the spikes
that we see coincide with the checkpoints (checkpoint_timeout is set to
20mins and max_wal_size large enough to avoid any xlog-based checkpoints).
The spikes are more prominent on WARM but I guess that's purely because it
delivers much higher TPS. I haven't shown here but I see WARM updates close
to 65-70% of the total updates. Also there is significant reduction in WAL
generated per txn.

Impressive results. Labels on axes would improve readability of the chart :-)

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#102Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Alvaro Herrera (#101)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 14, 2017 at 7:19 PM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

Pavan Deolasee wrote:

BTW I wanted to share some more numbers from a recent performance test. I
thought it's important because the latest patch has fully functional chain
conversion code as well as all WAL-logging related pieces are in place
too. I ran these tests on a box borrowed from Tomas (thanks!). This has
64GB RAM and 350GB SSD with 1GB on-board RAM. I used the same test setup
that I used for the first test results reported on this thread i.e. a
modified pgbench_accounts table with additional columns and additional
indexes (one index on abalance so that every UPDATE is a potential WARM
update).

In a test where table + indexes exceeds RAM, running for 8hrs and
auto-vacuum parameters set such that we get 2-3 autovacuums on the table
during the test, we see WARM delivering more than 100% TPS as compared to
master. In this graph, I've plotted a moving average of TPS and the spikes
that we see coincide with the checkpoints (checkpoint_timeout is set to
20mins and max_wal_size large enough to avoid any xlog-based checkpoints).
The spikes are more prominent on WARM but I guess that's purely because it
delivers much higher TPS. I haven't shown here but I see WARM updates close
to 65-70% of the total updates. Also there is significant reduction in WAL
generated per txn.

Impressive results. Labels on axes would improve readability of the chart
:-)

Sorry about that. I was desperately searching for an Undo button after
hitting "send" for the very same reason :-) It's been a few years since I
last used gnuplot.

Just to make it clear, the X-axis is duration of tests in seconds and
Y-axis is 450s moving average of TPS. BTW 450 is no magic figure. I
collected stats every 15s and took a moving average of last 30 samples.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#103Peter Geoghegan
pg@bowt.ie
In reply to: Alvaro Herrera (#101)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 14, 2017 at 12:19 PM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:

Impressive results.

Agreed.

It seems like an important invariant for WARM is that any duplicate
index values ought to have different TIDs (actually, it's a bit
stricter than that, since btrecheck() cares about simple binary
equality). ISTM that it would be fairly easy to modify amcheck such
that the "items in logical order" check, as well as the similar
"cross-page order" check (the one that detects transposed pages) also
check that this new WARM invariant holds. Obviously this would only
make sense on the leaf level of the index.

You wouldn't have to teach amcheck about the heap, because a TID that
points to the heap can only be duplicated within a B-Tree index
because of WARM. So, if we find that two adjacent tuples are equal,
check if the TIDs are equal. If they are also equal, check for strict
binary equality. If strict binary equality is indicated, throw an
error due to the invariant failing.

IIUC, the design of WARM makes this simple enough to implement, and
cheap enough that the additional runtime overhead is well worthwhile.
You could just add this check to the existing checks without changing
the user-visible interface. It seems pretty complementary to what is
already there.

--
Peter Geoghegan


#104Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Alvaro Herrera (#100)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 14, 2017 at 7:16 PM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

Pavan Deolasee wrote:

On Tue, Mar 14, 2017 at 7:17 AM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

I have already commented about the executor involvement in btrecheck();
that doesn't seem good. I previously suggested to pass the EState down
from caller, but that's not a great idea either since you still need to
do the actual FormIndexDatum. I now think that a workable option would
be to compute the values/isnulls arrays so that btrecheck gets them
already computed.

I agree with your complaint about the modularity violation. What I am
unclear about is how passing the values/isnulls arrays will fix that. The
way the code is structured currently, recheck routines are called by
index_fetch_heap(). So if we try to compute values/isnulls in that
function, we'll still need access to EState, which AFAIU will lead to a
similar violation. Or am I mis-reading your idea?

You're right, it's still a problem. (Honestly, I think the whole idea
of trying to compute a fake index tuple starting from a just-read heap
tuple is a problem in itself;

Why do you think so?

I just wonder if there's a way to do the
recheck that doesn't involve such a thing.)

I couldn't find a better way without a lot of complex infrastructure. Even
though we now have the ability to mark index pointers and we know that a
given pointer either points to the pre-WARM chain or the post-WARM chain,
this does not solve the case when an index does not receive a new entry. In
that case, both pre-WARM and post-WARM tuples are reachable via the same
old index pointer. The only way we could deal with this is to mark index
pointers as "common", "pre-warm" and "post-warm". But that would require us
to update the old pointer's state from "common" to "pre-warm" for the index
whose keys are being updated. Maybe it's doable, but it might be more
complex than the current approach.

I wonder if we should instead invent something similar to IndexRecheck(),
but instead of running ExecQual(), this new routine will compare the index
values of the given HeapTuple against the given IndexTuple. ISTM that for
this to work we'll need to modify all callers of index_getnext() and teach
them to invoke the AM-specific recheck method if the xs_tuple_recheck flag
is set to true by index_getnext().

Yeah, grumble, that idea does sound intrusive, but perhaps it's
workable. What about bitmap indexscans? AFAICS we already have a
recheck there natively, so we only need to mark the page as lossy, which
we're already doing anyway.

Yeah, bitmap indexscans should be OK. We need recheck logic only to avoid
duplicate scans, and since a TID can only occur once in the bitmap, there
is no risk of duplicate results.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#105Robert Haas
robertmhaas@gmail.com
In reply to: Pavan Deolasee (#104)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Mar 15, 2017 at 3:44 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

I couldn't find a better way without a lot of complex infrastructure. Even
though we now have ability to mark index pointers and we know that a given
pointer either points to the pre-WARM chain or post-WARM chain, this does
not solve the case when an index does not receive a new entry. In that case,
both pre-WARM and post-WARM tuples are reachable via the same old index
pointer. The only way we could deal with this is to mark index pointers as
"common", "pre-warm" and "post-warm". But that would require us to update
the old pointer's state from "common" to "pre-warm" for the index whose keys
are being updated. May be it's doable, but might be more complex than the
current approach.

/me scratches head.

Aren't pre-warm and post-warm just (better) names for blue and red?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#106Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Robert Haas (#105)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Mar 16, 2017 at 12:53 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Wed, Mar 15, 2017 at 3:44 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

I couldn't find a better way without a lot of complex infrastructure. Even
though we now have the ability to mark index pointers and we know that a
given pointer either points to the pre-WARM chain or the post-WARM chain,
this does not solve the case when an index does not receive a new entry. In
that case, both pre-WARM and post-WARM tuples are reachable via the same
old index pointer. The only way we could deal with this is to mark index
pointers as "common", "pre-warm" and "post-warm". But that would require us
to update the old pointer's state from "common" to "pre-warm" for the index
whose keys are being updated. Maybe it's doable, but it might be more
complex than the current approach.

/me scratches head.

Aren't pre-warm and post-warm just (better) names for blue and red?

Yeah, sounds better. Just to make it clear, the current design sets the
following information:

HEAP_WARM_TUPLE - When a row gets WARM updated, both old and new versions
of the row are marked with the HEAP_WARM_TUPLE flag. This allows us to
remember that a certain row was WARM-updated, even if the update later
aborts and we clean up the new version and truncate the chain. All
subsequent tuple versions will carry this flag until a non-HOT update
happens, which breaks the HOT chain.

HEAP_WARM_RED - After the first WARM update, the new version of the tuple
is marked with this flag. This flag is also carried forward to all future
HOT-updated tuples. So the only tuple that has HEAP_WARM_TUPLE but not
HEAP_WARM_RED is the old version before the WARM update. Also, all tuples
marked with the HEAP_WARM_RED flag satisfy the HOT property (i.e. all index
key columns share the same value). Similarly, all tuples NOT marked with
HEAP_WARM_RED also satisfy the HOT property. I've so far called them the
Red and Blue chains respectively.

In addition, in the current patch, the new index pointers resulted from
WARM updates are marked BTREE_INDEX_RED_POINTER/HASH_INDEX_RED_POINTER

I think per your suggestion we can change HEAP_WARM_RED to HEAP_WARM_TUPLE
and similarly rename the index pointers to BTREE/HASH_INDEX_WARM_POINTER
and replace HEAP_WARM_TUPLE with something like HEAP_WARM_UPDATED_TUPLE to
signify that this or some previous version of this chain was once
WARM-updated.

Does that sound ok? I can change the patch accordingly.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#107Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Peter Geoghegan (#103)
1 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 14, 2017 at 8:14 PM, Peter Geoghegan <pg@bowt.ie> wrote:

On Tue, Mar 14, 2017 at 12:19 PM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:

Impressive results.

Agreed.

Thanks. I repeated the same tests with slightly lower scale factor so that
most (but not all) data fits in memory. The results are kinda similar
(attached here). The spikes are still there and they correspond to the
checkpoint_timeout set for these tests.

It seems like an important invariant for WARM is that any duplicate
index values ought to have different TIDs (actually, it's a bit
stricter than that, since btrecheck() cares about simple binary
equality).

Yes. I think in the current code, indexes can never duplicate TIDs (at
least for btrees and hash). With WARM, indexes can have duplicate TIDs, but
only if the index values differ. In addition, there can be at most one
duplicate, and one of them must be a Blue pointer (or a non-WARM pointer,
if we accept the new nomenclature proposed a few minutes back).

You wouldn't have to teach amcheck about the heap, because a TID that
points to the heap can only be duplicated within a B-Tree index
because of WARM. So, if we find that two adjacent tuples are equal,
check if the TIDs are equal. If they are also equal, check for strict
binary equality. If strict binary equality is indicated, throw an
error due to invariant failing.

Wouldn't this be much more expensive for non-unique indexes?

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

Moderate_AV_4Indexes_100FF_SF800_Duration28800s.pdf (application/pdf)
{����D��?;@D��b�.4=�&�K`��t-`2���X��P���$�2mSCx~�R��!�7�w����F7)�u�f�6�-��{����.K7�~��6��h	�2�\/�:�5��d[1�9����O6&"L��x�����t�)��
���y�aC��.�SIIRb���x�y���������������R���6�o|X�7���*�WckC"_"��V�����B�������y	��o���dB1U���bbS@	0���#�	F����0�1���Pf��+�$��q�x�_�����c�O��������?����[Cx��2/�^��Qj�F=�M�$����y�E��E{[;z�D��x��a��Irc�2�G!Bv
�=��OD�v���F�JRc�0m�t���X��{���������>��)��q�M��q�|�OT����/`,$)�V�;a������ 9%���1.r��"��B���q9�sl,!�z��K
3���z�"5����?v&�0��EX&�3�eK���k�9��&e����Y��f�B�n�	��2��~��T0s\)�u
A��[��)H��^H{_/��A�c��RK�D����<��H��������J�h,��F��i�����0z�0�"�K��&B�����
ji�8[���@����1s��K�MP���Vh��\eB�6<��A`S���M�����1�,2�o~��������L����Q�"� �*���-��s���V�����(�pXRe�)������N�r���|���k:t;��8��0O��+���f���r�t&�3�9B]X�
*��B�:����U���dmCx�8I�HL�=�Mfh�u&�=g�*$#P�5�4HMB����n��,��_�$X�?tc���!-)Xn���W�U���^��s���U�^�;!�B^Dx����r��=�/����Wc���!�����_������C��-7Y���p
=����dE���0��
����n)[��������!-�^Q�;��uM��n4���eS�Y:Hl��WH�GY7������P��>n)���=��kV;�����|�6tn��}�N#�X�Y��?{�.���a�=������7�����w<��+����^{� �\=&�1�0P�����>����Y����L�B�0��<��+��@�,����R�!.��?�������J�(*���W��j�]=��J6+�������PY.�_�����A=�S�B�s��l>��S�:��0��QT�y�f��
8h�����]���j�s=(D�#b8���"dN�*Jq.N-H)���Y\��H�����7
�9���a4��[a9�'�<X|c�0N!?�hl�H����f/z�$/�%�L��h3rh����PQ*M8�6�����9a�2���{����H���%�C��|���0�)�����A�`|^ MC���m[
X�(me����Q����qh�qmO:o9�
�Sy3��%��:N��	�g��*lg��ijC�+���X��K)4kHA��"�)�5J�x@.����&W����G}� '�EeDX��1,9��L��8��5.O�;�<��
��K��9{h��S4a�d�I�Z6�jua'����R����S���*�\��J��
�������������Xz�����-��8���a�|�mM�P���d���~��C��
����D��LM�����UJ7�lK���<�N8���������!<&=�A�WM���15PS�������g�K�6�>��Y
&19�H�.�Z��M���M�J]4���3���")�A`� OT�Z?��1�p`X��5�j�N�����2)y%v�R������I^%������0b��?3��c�D|��4BG|z�l��K���Bx2X���n	�	t$h���5\ �l;����X�c",f\C��Uh4a������\k��������J}��^�t0i(�2�SB3�$sU��r�I�*�l�b
5��8`�h�r�,^j4T�lF!��E��g��C�K����(R��hy�p�T��<���Q����w���-a������u���%�2��"�@G��"��HO���OV;��3������8�LzM��3�:q)�&�7}�g��u������J����Q�����ZG"��N+?zN#��%Uau@qCR����X:|aq�"V.7�:e]nG;���3�_jrqQ�k���#V���;���n)�mY;#�I�����/�_F�"���M�<�\���4����N"1}���s"����%BA����u��X��]\�?�,���d9K)�����������d�(�Gk<"�������p�pB"'���KP+��Hr����''��D�u&�Q8���,hyXh+ �#o�[B�Lf�J���Hhj�Y#���"�B���ZT&m*#�����[
�����G��Y�D25�F�:(�R�., Y��!��KT�u�q��e��Xk��I]��W@��uZ���Va���H��5�g45���E�]$(}��*�>4�E�h��Qa����!�����R��F��C��%�6�{���;1�J�!�'�U�_3�E ,���QUA0�
����W�lt�96�_�L�i��\>��2��N��!������%DqX��;`l\�{AG��2w������w��c��j��{�.�,*����?�g �?A\�h��{EH��=��\!�C�.w=F�UV���)�&�Eb�b%��$���h���!�_g��q�E���x��J'�+7h����I!"U
�/qz�cC�����`��K5h�>�q~�o\S.�a��(*�!+�����:A��,43	���6���7\ �R��J�Wp$1S�y"Dm	""��'U��0�M����#�p0(n.}�>��)�������R���0��?U�(+�bk�D������5�Hv����Ker�@�����s���k�L
�f��7��\;O���*�6$�4K%N_�=���E�_9[�s�j�`]�Nl����;I&
c�������U,���8�)m��e3��(�fyu�V�����T��(#B���n��@������1�B�+�a�<j���R�^*	I���R�jg��7M����R0�kP��1�<f[�,���^_�R�������BNY�Be�2I�/Ie�����p�2@���\���X$}vE3W` �M15\=J��L�*1����ki��E �����,����5������~�����?����a0����4E���0z��#"~����2 �n7`��s)���=_
�1#��/���<�8X}��(
������T3aF8s�4c,�)�Q�A����5�c�u�B#�L�
(��fr]f:M�^�,�[�n:2|���y��f&��;;4�L�`�����"�"����2.in�Q��$*6��e��W�n!� ��hK�����U��*����9�m�5dEM�a���hM�*K���_��
E�FN�}��tm�&N�W6ac	����T6_v��4�i���y����+�f�-,��^S�am<�9�	�U!��c���)$��J6+0�]������o�����;�%��euE��VFeD~��Z�UC(��WB�_�������h�jU��@@��Z�6�sd���,� ���`a����xb�[t��F"_��^��Zc��h���m5�LJ�����7g����`<���Qrh����rF����1)�ABL�������R9���FB�?Zh*����0dd78"�	���e���y`�C`����tH
a
�jx�CK2\1+.n2�II�r�@=FBZ'�57���R0afw!����zK2LF����23�j��"�R���K��:���4��J��X#�l\ELb���J�����	pQ4�8}Y�G���s�Q��Y[�Tup)4+dJ��rl���LNv��)��]�F=|A�B{�������S��	��a���R�C�������\���=���bTi=4��V
�	y�^H8�z����d��wyq<.4c�`���2e�k�P��V��ib�H����g�Dc���k��{q����D`F2Y�t��wF:���$�z�,Y���$P��o19� �>�5(B�1%�1�`j�Q��h)��h!3=�h���/��`��HD�����Rl;��K.�%�qa��A�7�X8V-��I*��lM�SZ�����\3;#*����\�p)���h��*X���ZB�n(�~�m���8+	|�:��L��a�x��\�yP�K=5t^�vyj�BI���)�l�U���D��2N�"�`���)��)�a,�$z,���y��+��y�)b�5��y�h��<�`<��<�`\a[�<8�`A�dL��j�Q�<q��h���<�`XB�0ybP��bX���e'1fy�>���C�S�5�c�,�m0���/�"9	�����`!I��0e�`���:�Pp�{�p�����u����
��y���586n���v8��y����l\�<�!�)�<1e�z%yb�=���:��p�x���=�#8�l3-9y��F����y��4�2�o�8b�i��2����
�'�|�<��
�+���0<�P3����4����Bl�b���p&������y�57�Sx�4q���dy�HZ���D���x,���C���a�e6���-A���%7����&&�1�8n�6]8Q�x�
��T�\2!v�A�h��fTC7$'A�I��B�J4�?��"�f��K=�K�b�s��=���^�����^���A�G�/�iQ�/�K�Pi�Q��\S:��5DETY?L�K7>�^K���=U�T����Yy�:B���e"+���F��?�(�P�&��rG�a��T���41��_�"X�V����@gM ,�t��#�S
���d#��;�6!���dR*���G\�a�3*�9�K��Y�%6�D�Ub�ac+�-�C,���F�:�C,���s���YcT����	���#�7FC��p�!������k&���Ho0��l��
*�7��Mc'�}���[
5Df�
R��'zX����B�>=�d�o���C�`�g	!y^�c����F����}C�~�����zm�C\o6����I$���@����O���*��L�C�5<���������i����'���2��\�	
S��CF%VK����o�`����W�2o`k�}�����OY~��\���N�G�!�bX��������?Go���������k�a�#{�A��vI��H���P`��K�!-Y�G,w�Lw�oU��n����/l�kd9�)
���9k���0X��"�����OwY���`8lI����%q��0�8F������zZ���?n��R��z$����m������JqR�0�q�hwn����
L����\d#�q�s�|G�J�l���)���������q��+�d�T�$gB��H���E!�+�q�����������"��'3��	C|"#�������=b�k����*TO.L��Su
��)b\�p���(�4�^�y4eU�=,���|�\�H��[��!y^�i�U
��E�|25������:�2�XR�i��h8,�.�\�������P���sE@8\a+����aZ'�����`&\����W��2^B)��i	
Lf��MNW���J!A�t8d#{��^@W����Un&�Y�h�M�th�U��j\���9&*\E� �b
��C�Ts�������u��o�[�f��AT.��vg� *n���]AT08��\]AT�6�qQ�����+�:�rk��W�+�
�u���(*���F�h����'x� *��@�`0�.�Q�_i{b��k���"e�e�U�6!T�>zSR&��X������!TZc���p�2RB7x�P�~�v��A��
�����k"����*T�T)�FPAl&���D�@Vw�5���M4MQ>��ut�<���1��8��jc"�N
7HZ�t�u��u1�@�z���3������Qj�b�`j
�
#����&Tq�vPE$Y�
�.L�Mu�h�T$�E�M����&�
�~cmM����.�D��&��Pp��2T0x!�,@���ve%�
�9ap
��!,�sp	�.�!z���`�I�`Xa��	��AI&����T)��67�p�(�Bv��F7��;�(��@�n0ce���P�x�R��*��HR�`�%B��j�����*�b@�6��p,�	�����*iU�|��P�*����^���&V�����[�=eU�Br���S0D�F������y���
��L�(�S0h�`������i��a4x
��R��)�<�����i6N��O�h	�B^����'��� 
c�X����]�/�1x%�CJ&|���u���)(��JrsfXr@�w�)�*8����*j��'�
���k�k{6�n�!���������u�k��c�9�����
�6�

������������S�vEP�RW��P`��A��������=����������4L3��8}�`�u��������TY���A����C�)L�&�#��fJ������DL����`���8�p`&q�����L� ��,4p�dP0Zh� /��h��wT��n���ApX���$����N�y�P���`<=j��
�"����P����\0
m��u��I�7��[�yD�R���k��c��.i�)�71�f� �����H �"����V�qISw��
Z���,�ILK����6h)�@RR�6(��0�3p����6����R
F�-��"��g���-m��b�����1m���z;�+oPL��aQ$q��0
6�d^87�����b��T���q�aK5�/Ov����Lp���]b>0k?�R�(h�WS)��m[?�v{���������&N�O4��S�?[�*����<�d�4�(�d�!�5��d���B���@��COS�B���E�-u��+f�%��������Of
���B��QSbL���M� 8,Q�	
f
�%<�O������00�5h����h� n|��0:��4�{Q�5H[�T5*v���s���7���u�0�5a4\�������P#^�o��*��������a)qR_�Y��%�)��A19���H��B��H���)�Y���e�L�`z����bC\���F�d@����1����R�~��3f�TR�`=VK2��X��f�"�����H[��N�2�LN:��2�p����-i���m���,W�R�#�;)���<�4e������������(�?lw�*�J$�yVk����H�"�?8����|���ypEU���-�<q���RQ�I�a(�����&��$�`F8.Lfci�����i�����"<���b�����@H��1'gP^U��9�b�z��ei��9��7g�>��#Q�c�������H�d��)����>��Ri�`���I��K�0���>�fA�yqt?	�Y�Q���2���q��I����Yp��9SW� R��{c�]����JY������!b���/��Cw#Su+�-�cjh\�aTh�7,�,!T����aK�P�����>A��A_M�+ 	<.ke��1������D�'_Px���dIGZP�I�XL�^x��5g�����U��P��:�8Jc��^�\
q��M�� &�����j�������$]�Vd+h�����!�=G�)\p�O�	#�rwB�#�!s�����C.�;�3)�����<<�)`6�9GC`$z����%���9"�gcC�>���Am�k���,�d�7���#�Rs�iX���[�c0XL��:G�����s�(`����D	�M�p�'����h��$�bH����D�J:�u�(�?�B��$\b�(������;�,
���j
�%m����,h����#��z��A"�E�+k�_p��������X{�����A"-q�����z��t�����1�zpHd�QOCbe^�\b�����7��){��X�Y�s�(;s���b�������E�sDY�� .��#�`s���sDe~H�:GD���X\4�#>a�9G���ig��r�Yh9G�=*���z��|�0oa[�]��B�nU�$2��M��������������=��%3v��!���\<�A����<��+!r�����`�������kd�L,�/;�TY����{�{Yb$F1c��X\����Wn8��|��|?+���h�}�v��jZa�z���Q��+�������7�d��d�K\Qu
�n-/A0FFD���A���u<�
�"���a'�`�M���J����S��IU<�����Ht�rP$��Z<��c<'�b���q������R���!�R��Ni(� ��fH���Q�x� ��}4�3��s"t9l3��4�(Lg�������K�UX�����p�����=�3v��%M�Az����P�R�.Dx���2�y�)�0���'��Q��m�����P��	�=
�n�a��0W)T�~.��V'��2�0���)�DrJ��,^V@��������{�&�"L�s�O�o����b��
�*��8.i&�!6-�]e�5��P&8u�=wl!�e8V?L��������kO�=�4�b
��!���Z�u��}M����lE!41o��)����hH��1l/������hf��uP���4<�)����$
AQ���S�&}#j��I�_����� RF������g��1�P�A�E��k�Va�XN��>��<|�������Ct#F�^*.\����:#�t��2����$���+��Z��I��*��,�Pv�`������
p����22D��l��7[�I��b���8��0���i�_C���U�����!��6';��O8��o�7�t�#��2<��4+�"/�X��g��R�����T��=[O�?$��� ���N�O
�4%����pm�s(�����hopDp�d��X����/c�H�	;3L��-�M��A������ZKs�������>X���}��G�0P6��t���q6Dr�S��|`�����S��gap
s�p0����&1�1���N����1�=�B�SfNnx����O�4�(�-����6D��E9�2����i��.�q�+�����D[�Rj)bZT�A�Z�@�\6��R�����L!��1_��tz��g2<t7?sQb�����p�e����0�y
�������|!:�V)�5���~�������/$��V�z���h>�������C$�Q�������n��B���i�/�h�,r��������xN�#��*����}A�p�$g�����	�cF-"&�<\���9(N�u����w5[E�M�^�~��lh�(Qh��_���:R�@	�\9��U�������3y^�a��b|�>�:����TM@�	�����G��m5�����qy�����y
�C�������j�������S(�N`���u����[!W5�}�O\&�.��O���Y������(@�
�����~���m��A�<Q�N�[./��Td��9�aJA��������k�!�Z��]��LMsX(};<8T����:j�o�#����j�Ad���sx����g!�\F�K1Z���Q4��E���x�Q.�
��)-&�f�&7��fV#�(_�c�6mj�B��x�2m#M�O~�SZ��~�L=	c�8aN��/k&����1ql;w��#������H����g~CKa��&9����&�}B��������n��c��W�.��F}���A4�%vH@���$f�a���Q��+8�LL���b;�X@�A�.��U��%����m����g���S�8������lHl�h�a�$�q5@��4��z����hsp�����8��YK��e�Ly..2��(>�0>�>��R�hR��&�����#�]��Bd�(�Fs-�#�I��r���������~0	�J�bH!����O��:Z�e��E�a�����K&�����/��,w)�a[K!k�����Z�*;�s���!jl�����(��1bp.n�l";��C=%�	b���������a�����N����%�QnTQr�Uv<��0��cxW1�}p�0��-��Ei5���������:�_,�����rS����H��IW�0L����Nq�r���>/��W�T����X�m������
#k�A!��1N����bz��2�B�<f���B1�5�*�,��\8�����qX����o��+>��w�-����2c�z�-trX�q��V��X/�����}�����A��:�L���9���_d��&|����[�8�r)�#;��F���G���N��7�c�El��*Dz���<2�����=S����0���%M�s�C������*�K��B8��]���op��Y5�`TFf����g�Y�b0���T��
=w�qBl���[�A	��U���D#x ���>����7���7�G���������z��Zo48���"�#�p%���6���pv
�WK�q�L����'R�������b����,<�#j��-�Rl���f�30�zXY8q��`�
��[�P�r�X�s����,���a�����u����2��������+&h���Q/���%6�%x���1��7+KM"Y���D]b+\(6��k����V��h,N\����M���q�V8�l���RG�h����#`4�Qw)��u��q�����@&��'8K���C���_^��}���*�g(�'\x�
���������a���d+����k���t�2e�$5i��,����"i��o��Qq�������c:�8�V%�}����X�X��AZ�����w��dWtEbF#�`�F��[���g	8�A{)���~���M��,�=�k;����!��dJid�i�C
m)��a���;.�E���.����uRS��������C=GZ1�����@�T��G��������
}0f��)<z����
�����`�d1UI�T��y������~/[+���q�^[�;gZ0}�o�F[�y���Q��~��&�g�c�7�L�x�3���c6O���|4F5�������U��\��b=�]/�#H���a|�x��R�	{��_�p�j������������������>��(�\�q�Q�_�F�������>��i-I��H�P�<E���\?���3T,0����`�����J��J�����t��m��.9��%)C�:��A�0B!�<F���b�q`Aq�8�����,&����d��3�a�2*��&�]8�-7���sT:�0�q��xv�H�����E�t
F6��`�����Gc�z���b������9�'�;DL�0W����N�h	C��U,&/�~&Y=o�`So�8M�|,�������!���P���
C����O��Od)V(-(/��d'�����-z-�>"�e(����n��\v��#��}����a��~�������tkH�O0K� #��q����l���c�c�r��`��m�Q�pb�^m��!�0�*������++�-K�eg�$2���a3LC�C`������+���Y	�n��(F�������~S��mAdv�n
�b��n��)���u����
��p(I����g�������dd�/s��2���L1�G�'���z���ebX�}�8�
[O�3��bV�C��Qy����*��TD`�*1t�]���H����V"F��=����h�`���A�.�;Y��#��#�{t(���K��H�& �}�R
��P2��0�	�4���jIk�0oJ���
�5h����R]8$<J�bE�z���$�IA�����ud��KS,�d@�^�2������=�>XZ���A1�U�aVZ��1���!n��:��ymw��8r!2|=��R�-��Yhiia(��������i	^Q���e�40�R�2���`�#"������ �S�!��(t����������G9���Z+F-�b�/�������<��������Hf+���*d�T�1��W"�G"����?K��Ws��0��9}�q.1���Y���cp�a!"X�k����^*D��8w�4������$2���b�'YJ�#��0J4k�/�	�����N?��c� sa:��?B9�!���Q�A����S��a�uJ����bqJ3���5�QY	b0P3�K������LwCDrJR���V,&�(U/ejq�/�[$�Wv&vO��0:GEA���I��Z}��}/���~@-��������w[*��X�����g��.2|��X��.\��*��E���K-0<���X�L�w�X�c�	�)��B��;!�`���:r�&4����_���=._�1��9b��CC�Mw��X|��/n)N��N��p���0��-���R�=O���?;(���Se�4Cb�CC�S�)�Y���R4���d1�����sc�bP�������opW�N*�M��bo)�����Y�]�c)1����ikJ9{I��%>������PkK~���b	e�#>|��g������f��p�8�:k9�1lt�K��Fs�,���n���U���E���h�p�f�Rk��^���6N
nY����u]��ArYhM��t18N�i���=�)iL�~������9e�Q���+�(���|���8jm�>=j���!��l���Usfi)�A�%�Ip��q����+��35����1Tc������&pp<F�*��n&�-q�C9&�b���t�'��"Y%����u���O1}� �&�gy�[��.c1b1�-�w�p�t�3)H�����&C�I���]*�=�ipJ=1	�1���������" 4��b�8�e0�)l!��D���,���E���&�AYO4���`��VC3hU8�IE�ooV63e��R�E�|K<c�q�>��7I��5RaMiiQ��%8N��|R�
0����H����S�7�� &HI��W�D�0A;��|C�������>�jy�
j�$�h=����R�~�qb�E�Ag��I���a��y��+�]g)"�����
�;��2;��5'���&4}:�TH�{�7���W��u�����+I��
�����Y�$7'���������r�E�(;3��q�a%�=�]�v3%�'E,S�2�Z�8���CW9�/�&�VL��bb�K%�O]�yf����AD���2�f1���6�{5��XgP���L��!*��G=l��IJz��_�?NZ'K5y(!��-�X}�W���CT����r\1M�=sN�F�GW����+8x ��LS��F*#c<m�Z^0�mqe:����U�cS�>�$Y��b��e2�����Ab����bH_<��=��&G�L��#:*;.��c*o�W�X,�J ��I��@���K=.eL<QN��Z(���c������C���Rdp������\5���v&BjwR�!�01����!�&�E�[�P�cx���@[&�o2u?o�R����te�20�l���
�g=p]�S0����q��@M�r����Gj�qC@�QD��L2�_����"�EE�����
�NCb�c=��Y$��1J�a+��l��<&���B��%�
1��QKsV�|�@�;#Zn~{��h<W/"�������V���Z@��*�����#k=&�M�-�)����x�)�b�`���tE�J� �wS
IO��-	pN::�vhfe��Y��'Ht@�e�8Q��x�%��P� P���p�RfV[�� AB0��,0�pZ&�#���\�-�	������������Lj����E�L"��[.�6���e!Ns�C`y��	�����(nK�--���QC�r7�6��'%��7<CA
X����#�e
���S������C#	8uI������ZM�	f[S8���2��V�x|�����,����c,�V��Q	,k,|��L����*
�q(B?����`����Ea2��A� \D����
��#���T��A6��)������p��Fz��c��z�������~��-"�F1�&��k��^G����P�Ft����vQ�S�J��Y/�*�;�����<�5(0d�9(b�����y7.Z?�oK�����<m���K�>�~~u2�1��Gv�4f�
����V'_����IS��bL�A���w#���Q�����������U��h&kVLb����T�XChn�v��:]t����%R��F���'�"r���87`�o/�,�`�=��
_`4��[
���F�Q�E���pxW�5Th m|J-�6G#�k�C���n���"N�0��60hR��Z�(%n���;.	6C�L+�lI�T�^[�m/W;�i5L��.�e�f)�Y1���0��[��'�z���&���dBH�o�u���.n�]"S_(���\��g�����k
2M-5����
���r2��U]����r��v�'oj��c.�����Q���M���}��b����S��������#:??X|�@A�������+k(0=+0h~�k���R�0�T�]88��|�r
�CXj8p��=D���<L�$�pl"�M��/�r�A��V�q�("���*�,K���bIx�Vs������J���K�D�����O�<N�B@�
���]]EC���[������\���Ac�p-��4	�r�C1Sa��LR����'�)b#s��I
�d��%A,�-c�E����0����#�'������SD���-����eR�����F�"`<�(75�y�]�1B}`p�(���a61H���,�7���"�la�D�X�r����%�'��H��1�bh"`�$��>J��EK1R[h�eA��U#9-�b2��\v�2�?�Q�b�y�D�Xq
�C}�1bG��y,`|���N0�Qp8�(t�����uzA;}B��/f!C�B���"aWE"?,N�LS��!2ULS�R��	����9��29G�@��,L6��������_����li������.���
��9�F[�(e�����1(���n��mp��� ��I'L��%��pri*�i~�0���8�����v�� &����
��T��.Mq�Nh�9�0�O�\���+?c��K�V�u���iL�{�\2d���}���P����G&�� o�ff$0�,�PF��L1�C&P2��KqH��0v�gZb,q��/���gF��)�N����b��0��M,��eu��:aYH�*�x���y����;^���\3D������������>�3l����j���Rk�:���q�@L.�a-U�g\�wq>,�>�4��2c�b���#6����8��y�����b��� 	 --���l[��C7H�p��%0�E����N�C������$(�&������*�(+U}����`*Y��F"G�"�I/ba���"����xV7;$(��$�
��B��P�Z�8x��-3'vV�J� Nz���,H�N���id���O\i@�����3�K!"���2��:N�l���vX��h�%��^
��awLt��1�x?�*�\t��2Y��p�Of�\���m2XpD������� ����Q�Iz�&���q�����/g��%����!9[d]�=E1��F@���R1����_�L�5i�zx`(D]a�'m��e=�����1|&z8{b�2NRI�#l��l��b��T���ss8��c����Af7�=F�-#W��F�h���f0<sHXTP�G�+b{�������L	F���m`�G7��P���yv���=��E���J���9w���=c�5
q��Y�����a��9|��)���2�~.�9��$������`OW���B#?�?��LJK�]	��)��4ZfEf���<�/�	�!��h�������R8^0����@5.���"��#'i	�
�����a��)���"^ ����=@�3b@�ov����B0<��@@���%�`�x�M�4��xr�����G�,���1��W`��M�`��tQyY�X�d*��t�����6GV���\1}��j�4�/s/�?����x��rb ��m���E�,��p��b,�r�A��l���vX
���!09_�`�aXd�)D����T�,@����6!�D��*�L��E��AQ�uj�lp�g�b�j��#�N�d���jC[����~���&��`jz������
�C�\J�����8\W���6`P����0	a:���U���aB�!�BC1��Z4��M�=�\��m�b�7� 	X�����S���|�rh��)j�,��1���
������yIi��HU��(�	�g��v�A��%\���cT4��B���p�u����7����|F�7�������&����������dW /�}��Cn���pO�i9��]X�-t���)Au"&�*	fBm�i5r8�U�R	0k�C�-�pv��b�����S'f��EL���e�����f�.1o^�F�����P�p(PF��,S.N�(\� >���q��	yC3]���u'��E5��D��s���f�;��P�t��
��k`v��2��!$�blrmw�i-*��?i����+JK��7�e�:���p�7��e��|LI��/��#�b�����eR_�g�S���%��r2)$�"K��L����%��Z���2�V�i/J"�����XF�1������%����&���v/���`5F�!����Q1�d1@J~�ha�R�V��*�vE^���T���
T��I9tR�p�7���#��|��R���!��L���!< �S)�����L��%�1�h�S!�F	�N�k�����q]��dA�
��`�����U��hP�7��Rl�	�B�*��S�MCZ���jb�y�����u�C���k�F� I��EK1!��_qO������f�:Tq����t��1�Ke�hwvf�,8�o��dXpa��9��b�6.����m�MW��3�4�N0��JK3���U��M�����%	[�g��C,,mc���W<Z�;x��&���y�����o�!\yN���=>R(���Qu�#)>��;�m��(1\�b�k�/:�m[���l'
�]��`�f�pY�_��0����i����V�ckL�5HK����@��������9����w��c~Pg�$v�4�����`Vk����1����'!��y���L����9���W�Q�p���^eP����Y� �'^��x$]2����uz<,�2����Ju�;cG!�)>Q"���T����_�j��p����D
E|��U���%&��B�lhY
���� .�p����
s��!/d0�xMn���M�q���>�&�`$���y�q�����A
pw+iCW)�C��I �r�e0�)�A������mF�7�QJv:F
c*T��[n�L'A���kC�G��>�&�����!��]����Y8�%b:s�[kD�������`�������O����
���]e`q��+��m�1\N��zO��]�Xd�U���F��gC�F�6���n���"�P��_�VU�U��I�����qX%�C��a�k�(0Jm/��2F��G���������$,I�������O*W��`2��a�4�m>a.���O�qX��{�d��fLd��;�;+�"f��UV0��Z��p����d���| �:�2/����8�������dz��W����J,���������FT�
a��u%�;[���bb����*��Y/�����'�Y�ua�OP�H�vL�xU�	���\������f�a�U�1����Up��cn��*i���/�<p�!���4�Yt��a�4y)3�*�Y��Mn���z�5��n�A��J��������_�un$j��_�%F���_��f�m��<�����
���q����W�t1j��_�R��_������
���n��#�3���Qy�N���r���rXA�Ug�Z�Px�n@�qXa�3�9���a�����
���?i�>�����d�:J��EQ�v8�*���SFu�Q.%q�W]��c�����y���W)��b�U�26��c�[��X����C5��x�Cx?�"�M
��tDA\I�[���M(�fH��_��;�Onv�����������~X���w9��<]d�;�R�?C��Vf�Nl�m"m�T��J���5�m����`�1��S�z=�H�p�:���r���1����C�*�IG���	.5dC�rQ�b�C(�-���k|�������2���0����f��`AE���!O9��s��m�3����;�9l�	�
��M9���C%�O�/>�
y\8������n������gy���,''Y��P���}��w����������oD#���U�L��(�Pq2���������H�����_������tZ���u�� ���}�#H��X����%�;�bp��cO�3����#�5n-�����M��"�����}�0-���p���M=nx�R�k��M�F�dz�2�@����`/��z��?��H�9U����u�����z%��1�V��48���� ���<Ks���#
e�2��-�����,����Z��+&�i�\D��&B7\&�q�hI��pmmM"{����:���HS�=���\s�H�"����f��Y��z��/\��p�i��a�L)�+D��#`��ed��+�-+|�V�S��2�������a���3�����-�*���vtU�|���}��C�?`�\H	%���eL�2H!e�p����[��-�B��������P��1��������s��D�	D^~�)�Of�r0N/,�qK���!����-_��$�8���-��FGj��p�)��
�Lh��GA!���*Ap&���e6��c�Nx�u��r����2��]I"����Z����������=��v���jp�������@�n/2.NO^�U��<���C������m�8x�9U
����B����$���n��R��s���9S��/+�s��2�fF����`fvw������n�Q?l>��^��J�~��p��fhM������s��+!���@�y`E=�6�����:�!�@TD���0�``T:�7����V�8��r�'�%������������i�W���e���4�v��f�IxZ���e/���f-]��/
�1��RM��
�R$A���4�J4���"�C-*-��2�1�d��p���Vv��}SO	F��)��YM`�H�S;R�EU���YY�XUY�]mILZ�'��U���9Z1����*0������x������6(&Q����jKD��x n!qX�]�,`0��p�k������R$8(T*�s�,�����I�%qX�c5p�i&b8��p���.s�+Nq`u�u2n�����jT����5��O��wH���N'M��d{��H�����+��'�oQ}#��+�k%�G��-���U9>�3J���8�)���7�f04�``�/�1:��r��<�/�5[���Ei�VR�2%i�&���6��/O-�{j56G��)-E�UJ�C������D��RV���=���D���������q�U��Y�*g��L��YkN��Nw��P��������^�rBd�7w1e��G�f�7cq*xIq���DK10�����j�	�<���������18�a;�]�2P� �B(un�p����vT��$�q����n���Z�{�����S�a)D��E)x��c��F%����T1v��q��"����1�3���J�`094���c��;���&4��f]y��S7D&�,�}�^�������u�T�hB�p"�q�?��������z�W�@�@����#�n[J��
b[�V��3��
;N�@��4N[k�`v�\P��������i�� ��E�����������9�u�!���
�e�@Q���������1p���p�]�_G�C&��B�
f�Q��FpdQL�P���0���e�,v������M}�d�����uz�XS7&��,�7�1�j�`j�a�;�
�1d��[A�<Z�2��#������A|�z���z0�w�Zz\� Pw/���aku�E����1�����f��e���=��	��X�Y$����`��8��m ���)
Q�J�z"�
����qS-�F6�����I�N��-����F�d��d��q�c�w�%EQ�h>��c(�D�������p�L�����=`�S�<_B[�o��a'��.�����1�5xF�6�!��Dx�61��T.����F|�m��pq��;��7�,��N
���Uq�W��~�a��Z���)��j1x�Uv�wF��������Q���1�*I��������K��je�����("���
�����6�����7(���X�������8��J�T[=�������:�f���t� x�<��hx[�?LkB���8��P���D��������O0`�<���U�Z�!�o����"���9�|��C�|<����)n3�>�nA�*��7��]!�&�mg����S��f���#������g� *���!�T���}�����u��2R7���S�a�Vn���T�A)����
#z���2_�������]1Rm������a0
����g�&�A�5C'�����Fw����W�����?�V:�.!���0����Y�x;0qw�>���O�]�j���oG)�/���X=/�&����o3
4�� ��k�k���0���?�%Z���
�G�p����&"K�����
��@����<����������<���jmP};i5���,�����|�?B��<\�Z5_�L�x{�5���5�>�'�����T�~��B��5�a������Z���C���&��������}������O��~0�d�8�����9�8}Q��j������jmP�}}�������0���1�Bz�O��Z�i<���c^��3���������S}>����7�qs������3gX���=##$�;���^�U�m\G��oML��Bst�� 
e�@�p�vj�D���G%;������\��!��~���7|�����P�'3y��L1I���2����e0�W1jmp���=4���LA���p.3z!�@�Y�q\�C�0e����"V���2|d��	�4�3|��_�l��Q��:��e+8����%��{m���/(���/����
"9�Z��M8xO��i����&M�{<�OQ��[�)q;H��s���LX�����KB�P�� 
���];����t�
>T����J����������C���2�?Y�G�g�u��a	��:������e������m*]e��;��c=�>����6���C�`l��?�;������ �s�.��Z�{��zG���B_ca����F�!���>���)F5h����A��/���hb,x{^M���q���F.[U�?_����2���;�,�t/���U���CC�_�0h���9��A�H�8u����|�����ib����XP,���x��>�z�&~vB2i��
�])@�q��
8#Y��q7(����G_��y ����7O���������.L#q�_����!����[�6��������u,�(���_
��������mK#�����l8�lR/p��<C��J\���$a_6[��4#�v��1��H�����[�n ���=���R��vZ�Iw����1J�!�(���	�������m]�7����K�.M��yA�"��D������$A�8��p����`���QoC�����vf1��{��x2���0��c�#f�(;���>.a��,�U�9�w\��o�l�t.(�">�@���:��:c��j��)�S)�T���6_�G���R�
J���|�X�|�g�	�Et�r6`O��j&���� ����O��&b��m��z��X����
7IaaZ�'�����*v�X��d�(
�[����>�j�3��e�A��`��m��X<C^���A�-#��?�<�A+���e������XD#ka����w 2����D�!9�����J`�D�SE��W9���+��o��N`&���.��4
(��a2Kx�����q�������P{��� �)�-�D�����'q�&�z��Q�s�
x�F�����
t��
<�G�H `�����S���e�5�L.��8>a���5WW-� ������xj#>�dw����4�������g�f��gax���T�V������R�)���f<��rW��EN�~���Qj2���������n��g������8����1���n���Z�i�^��7$����.�������A�|Xa�md��0����\�qe�BXJ���4?�r�1t��������6����?���/���������D"��'�*�x�(i�p<����	����n��_��o�	���=!�`@zd/M����`������i��gO@|�
i���[��J�M�u���/{��3����,�����H������X%S���A��arn��h��{�>>�%�E�[���0�N%�f&�:��}Rg���"#���������7��(��<�V9�B�q��-�GH�����xL8���gt��z������`}|hc��/8i:�Z����msQ���
2��"�?���[�0��b�����H��$	x|���,���JB������_H�+���6�O(�Pif���N
��?����q��:�Ls���@�>z��� ���u��c��v��$������N0�h��Ps�,.�,f-����x��s�����Y<�>�h�}0���7���3�6�!�*��~���q����!�/�
�e�������Yli����8���)�������B0)/�9�2� z��0l�x�P��c+�G+&�X*��l�i�?E^T���	?t_F���6�D�q�>�RV�����!����,����Vt������Nb���������
,�_k(q?w.FbC;���e���h$2(�"O�E�����B1k�f�
�I�V��a::���j	X[����,����|n��'d��C������K����R��bw�N$����P (�g;:~$v��#DIC�y�����w�v��[C���.�q�e�����og,�D��������k��8'�{<�\�`V[F����,���$S�V6P:\/�c!~HGzI�L�2�Pq�+���S������C+�ch���Q�N�5$B�3�E,p��Z16�v^�A�����b"�+�3<��	E�C���x�U��%����G.�2"m`k+J$x�x�_gKx���
,���V:�����
 �d8�q����J�M*�T�N�� �U�N�#���������/`��b�����n���J2Py��+BU�d~����k��LN5����0��[���h,���^V�V���2���(/����l��� ������6a��4k>�����ut�9#�����2|���9�{��r���~{h����8Y�W/������	5:�^C
��!������ZN�m`	�H����7cl����0 ,>;x��G[]O��l��v%m\�J���
<����%��#��X2<�I��X�R�0BY�DX0d�8���G��s�Vb�X__��Dz\�_`�0�;�����O��3aE��SY5�q���t.�P��o�#z��_��
�I�d$xi%��^pp]@D+��q<��
e�m�J����*�n��#������������"��E<�����������!����J���`��W:�P�R�	�����]T4�'/�#D���������O���
,����J���	����8�h�s �r9����R*rjY�;q8���d7�i��CH��|h~�"����b�~��%���G�R�4Z.�����2���N�#m����  c}{&�b��^K��V�o��yQ��(��:	&+���C�k��S�zL�v���7�;����z51��5D��i��"#:2�8D���X�&�_;��P"�ReN�g\�,��p�q���|eU����s�	G;�4��M�8q4W5���@%���@_��
��8'�R2.��,�M��y�/Q/hO��=;��g�9
�T"����v�K����V�w�"z�HL�;f?oK���
�p��J���%`������M2�,�e�|�m���"��p��46���^��|1T�?�3K��k3�j7T2���&:#��YdV^���M"��l3P,�(�8���w4��`������*
(��XC�>O�,�����Y�����.w��X�/�<�Y����@��.�K+k"��"MC�K�_�P�gp�rP�m��z��DTdEe)�'2Z�t�)������
>'��?Z1=d��%��G��3J��d�a-����EL��-H�k���@os�>�s*�e�E��@a���JY�5Q���T�u�1Z�������`�m�_h%.���.��JB@�$���o J�)"����l����EC[�
�\\_ZY\�=%nZ��qc�,��xzi%�ic�2����.�q@S1����Ng�~}+�6�)�/�d���8���&��#o; ��A��^qA��;�|����~vR:�vI��`���`i�wv'��<;��9e������sEz��|U��]�������o��
u���]y6���2���eA�MQp��L�G������G��|?��t�Y"���(<�6��&f�co�q�-�a��	m���w&��GJ��e9����)��<3����r~J�Z�^i�����8���z��p�lk�)uR��z|,w��@g�_�I���T:�2)X\
�� ���|�^Wt���n�$�o�&���uU���n~��^���qS�=��l�V�~�T|2~�W9�;���C����1|c~���r� w���4�0��yTw"�T�O���&���5��^�a0��]�o��7����������X�����Y�v�+J�Og.���9,����V`�����w66�&�
��q%���]��U�N��S�D7�So��rP��J��7�����X>�9xD+�����W�����s���^}�U�����ZY<���a��L�t��������������29����%|KKI
�(���L��sYn�����~���8����:�pT��?P��xk��U��|��<<r�C��x���\6;���hc��JD:�	�D���Y�R.�����a����"�=��x�EAn`�P����J��$gfB���h���
��*�e�������)�����K�>�XN/�W�4�Ee+�j~��5�BFv�q�7O���� �U�W���s]l��B���/`��@Q4�,$��c`/d��h�f�*>)�x������zq�v�=�;yn������������0g�f�I�e�Vx�.|����Y�6��_�N;%}c�3�17������AtwJI\��T��O�G{+��6N���#���c����fb��>	D���bB��%iRA(E�Ww����207�VC�8l��G���lC	��4{0E)w4yH�%��)/����sUZ��,�?���X�Q��w���c�9���i�~�P�_��+QJz]�n(���'�)M�����@f�=�����P�JBhl����B�vrj�Nx�a_�����!���l�9f��~'���|i((��r�-?��b]����?��$�$�c+vG�ejn`��+�ZW�xt��V�;��:��Z��kX�K�����C�l� L��]�me~������D�����������]o;pL�~��gn�]����P���j\����\�t�����]�s�_������yU�w��~2�bs�|�7����������F����$�P<3B0P
|��kyW� w�<��M�c^���$�Q<>����s�bM�j���*��p�\��g�jp}}heqC����t��FKC@BfoD�v�Jv�OhF�3��bl����
t4�������!�K��m�W(f�<#�8�-d+IE<�s��m7�d\_Z�\��?�Z�r}@�(#���N+&�z����\���rv���m���e���h�j�
,���\�v~'���e�~� �_[���y1��(!�����4�Q{�\Z".�������RC��hK���q�J�t9o0Z��i��
�Q7��~���W���K+�����ZD$BFS�i�A�����*Z!.��-�3kD2f}l`��+����f���a��
�����w������k
�K��d�!;<�����/M�8��K�A4������p)�����d�;^+x�8�
�k��h(���e��������Y���jl2�e���vj����&V�A>G������]�.��'>0�����KV~���z�Q]��}��r�	���!�wZ�L�9�	0�+��z:���q1g����x5v�l/�/f������c��<�E2q�*Z�h�~Q��M����3s���S+�G�p���H�`i�k�����l@f�Cz������:��4]`����D�����w�V_s���-_*t�����`2�*e*~��M������v��������x"^������$BO^dZ��'�Ie����y
c�X��*	���	~�2(;����������}�p�GS�@�r?��YdW=���4��qP6�'�6*
�c;G&RM'��<�U��Q���J[�^Do����^[�;Z�IL���|��8�I�v�<�����}��YX[�g����?����|���.j2�(�/�x��	]N~�j8h,U�	?�[O�������g[y������p�������5�	�\N:9��%#B�)�����(�w���\������P<�tIC�0J��pi������D����h��'��xq��:�*�`�"�:������r=�~����^]O�������0�2!���cig�����6��
��n>��Zt�s Hb�`��1�:vm�����pW�i�:�.���C��R���sCv�V�:��0�t3K�w�Uj�f�!'��DRt(������-'77��\_3��}�}�L����J�B&_��:��d�&��!n)E<T����l����G~����4����^0Nu0��2�d��&_�c!@�[��1��-�X5�[�.!�h:��do��]C�b`i��=�:�NM��C�x�LI�R:�����n�d�oMt:���|��$��+T�����D�r��rcA�t�@�J���������T>3��[N���AN��:|
$'��3}���%6� 7�S���zl%.Mn�_��M�2�_�#_8�����[��������]����&��F�-�Q'�<7QH�"��e�!`#�w�gG�{�E<�^1���*��;2������3����
n�J��#:�X�����0���P#{K�q���;���lA��9rh�K+}~��6�B�gl����;Iws���$�\)~���'���P9	�X�-g:�O��KS)3� (�,(�as�`K��������;�9
2�
!T�����n�'@�R@����5g.8�!WZ������	�a��F���#�`n�����	�aF��p�X���c��@9�]��:t|����,��O�`�C��	<d�\���f(�X��A@#RA���a�e-�`�����%i�['
���0?������rC��p��\>��S��N�j;���z�Q�������-�?����5�jGk��Z�C��q13�N�gbd�������	�nS��.%.��7~�����R��r��Kg������L�
�;���������Wl�
��~3�������	��N`�u�����>��z�Hh�s����g'�Q��9F��}}5\w�r#���X�i���@���X"6�Y��;Z1nZ	�@4��D4h9D|����R0��C|P����x������S���q�%csA����!�F�J@��>���T28m�����P�sE��TK�7d�lx0�M�C�<��q�Q:��G:��7F��|]��������;\"^����<,&
��Pahy3@p�4�9�����V�?��I���b�A1=����&��b�U.	*:�&G���(A�4������M�U-�f ������&��t�8W��8K(P�O4����������~BgWp@U�;��g���s��Om������!Ig`m%`��<jH����S��0!"�k��D��2����R+|��a��r�O/�8���a ����_�^���'�_Y�e@2n�8V�����T�@�7k�_�8�:���K��0j*��3iN=	Uu	#D��^7vOb�z����
<�	vG�m��8[ib��|@�A�����N��}q��z�E�����\�@���Y_�3�*��T6Pf�;�/������@�R��b��Q�+
�sH���D�����
\����y�����r�
D�}�{��d����b����v����z^	tO9f���~�r���GR.��^h.������r?.f���
�lM�x�T}�$,�:��!��j����J���B�n��O��L�`�Zh������6a�E)����/��~����',������A�^�bU�j���������e�V��\�\��_Z�	(�5I����
�+������o���S�y�k+v��	�/�D�t{Nj1B�Q�U��zUFE<����s�U���q}}h�sA����i��P�B�2���f'�����%|��8�-g�(������1NE�D\�s����^�"g��)4sz<��\�����k�����O��g��q�D�K��b����a3����5x�Z�(^A��A�@����vj���U���l�o�*!��/�G,)��O���H�>?�����d�SM�s �f�]��d������q,�@��	����	P:��$	p�s�3�cU�����������\���`��g��rHd[�L�48^
�����f�l�8n4�DwE*\`�c����t�����v�V�2�uT$�r���#��$���7��V�oR������T��P������d[$�e}R�V��S�m'��K!�:�(�cGa���l�rlw�\�=�Kvw�J\��tE����S�3�"�gg�# -�R�I�Vz�oO�0kh:^�m�+��aE���rqBW�)p2o?T�,��\�P��T��
:Ni�+_����������3�LDm7��%�V��������U.���� C{k��me�W;h+��Y�yp��B�E��sI�RV�iTq��7�xm��J���Z��t�{���'/d�#���!#N�kO���+�:7�q�[Aa����g�<D,�H���b��~����h�*��_`ga��;�/@��=��91����������tVp��������+���^��J~#���c+vG+���/0�P���y&���(�M\U]�$~�~����U����$��a�Bu7� �r��X��������*T�*�6!�rv^�s�I�v�{+Y��������!c��mr��{�U�,{K,��,��x
J�;���B&Y�v	�]�!,����f|Q���w�v#P�����':w��u������p�uC6�z/Ex
�*���8��wE$�i������n��SC��+`���K���u���8�\8��F�����������!����c�]P
���(�l�hV� :(?T\3�����~����o����+��z�����K
yx�0�����&" ��n������!yO��q�6A_4���
��B��.t���}�V����@y��r���%����������6#q%���&����u.��H\��mI��Y|���)S��7��`��WZ��%�9�/�t|�p�����7��-�P�3��|W�!g� �������<�/]F�!4�}���b"P�B�b� ��f3�L����$����R`w���A�������-�9�`-&"��A���;})������*���55�3����[/[+>�C<��	�/��0����)��������������7l`��W�W��x
���K!CD����|fl���~x�%d��v�78�^��	��~G/����!Ev��tHq�����*�m��%���B+Q� ���v�t���N��|PM���������>�������,d����4������)�nzV��2���U����ED��x�?��j�?|�e������tI�uq�BRf�%\OER�+��,�G��Y������^���^�����"�Qqi��G�#k�%���SQ�)��W6�o G��n.:����?-�$ ���Sn;_�����y<���D1R�,b�:~=�A<�o�B_5��.;��"�<�`�+h�	.)~���NES�}�EL�"�Y���Gt�����SC��������w����cC�#��8;3
#Y"�6.�i��:P���,)�W��q���(#h+��Yd|�J�8������� 8CK����;�$@�br����R�WNf�KK���0�.vbl0�i(*����!eD��n021/p��B�p.����,n��f]3�en����J*��@��x���v�U1�i�������r��gRL�{
h+|���K+>G!�1���s���sqa���
�{z����Ow���]"�����o�4��8���Ro�PT����$W��q�%d��7�j�X;lC�Ac�����O+7���H�-e*��
f}x/�����w;���X:pZ�E���J�V&,�Z���_+���w�
t����f����S�F[m`��k�,<�R��_��J�;X�W�������)&[�3������O<�����At��
��NRL[�������Y�<��E����H?Q�����^�U���:��1��X���J�3?0�~b|�8��|��a�tH�HY6}�eO��:j3��(�zF���������+�$�}p#e2������LX���5��?j�.�0"m��a�Z#�~����LIL��5����94�(c�<	�e����sbj}��bT?�\RJ4�HC���)�515�/�������6���Vl���#���}���a�J.29�IHi�,��$	i���q[o�t;����j�T�e��%��:���b0�{�U���Jt K�9����ya�
]zu��f���&g<�2C���C&���U�$ K*�6��k��������XX5�9��>��S�:�����$��>b�rI���o��(Q��y���OV��]�x�SN`4dV���N�h��l�*9��/q��1��L��p�{���s9�����,>��jJ�j��n��^�M�0��U#��}0H?e�����p�
��y}���d��.�R�OX5,M{U���5X�;_;c��+af�������~�%������7�B7���kR1�7��'�~�B|Pb��<\d�RKP�0�,:O�`�9w��B��U~�����=r�X��&���g�T|�u�f�,u?���=�����7�g�MT�E��n�	�cw]�������k��������*����NV	#�mg�������{�r���������">K@
����{`������C�l�
V��s3�*F����T�|/�J�����=���9F�K��6�T���
�T���N�W��:d�m(&�����4�f)��v0��;#����;fq`���/�d��;(�TN�?�����d��+�r�X���A����5��v#V���)������j�<��4^���,������;��
s���If>�T�A�����,V�^��p��������&��H�����i}p��,�����x�o��l�*ss���xV^lkE���[K���Z��2�ie>�M?��R\�S����Mj��x~>#"�Z�T�����s�b�8VQegPL��	i.�2mW"��eW��?`���Avp�]�A�;�8F�aE�J�������x�%�Z�?Lrv�����'��F��M������	O>9���}�������{�-qn�^�$hX?����<Q��+H���O=���
��Z,��C�mCSP+�d�
����@]�e��w��l�;gjJ������i���)t��}p'%	���tlb���/:��1J����'����Z�L�,13�j�v(��'>T_+$	}�6�rY��.��+���$Y�.�ib���@�>��k����-^������Y��-�-x�&"��R��>M
�1�&"�<G������U<0=��cU������}A`����v�M������V
����?p�M���24�>����)r?/jq�.�*����JT����q��]v���_%M%Z�iQ�Ax�����I�"D�%�=��j����R�
����/%Ae���;<H��W��IQ��F��}��t����(*��'LW�\M%T�b�O�j�X><����s���Y%z$��*����2�����X]�Q��s��t,i��F����L������v�$�ir:	������H!��t�q"2R��Ob�FMXs����Xs����7-���b�hs2�HW�h��3�������	kKss.�#x�M\3��������A�N�9�:^�=��B��I�L�I\��6E@�`uQ$��Z��$,pf%$q
d��H'5J����1a�����/��`��O���4u�`�Vv�_S�[��C��#�h�K��A�9m��$Ln:�Tb���Xd%��r*P�����7���\g��NQ �j�Y�t�/V��3[q�c��u��zQ�?m�;v+���S}a���\�IXa��`��������-���^Z����;�e3a�c��+.�������T~l��`AQ�
�j�T�d4���i�;�JCg����b�"��R0 :[�������@�]�jV�U:��l�����gU&P.�|"��.8m��T�&�Hn�)�e���)�h�c��iK�d_=�7�id��������=W�����m��#U2��Y���D^�'y��eX3M�{c�ZP���W�����N�����Z�jQ������vK5���+�x�!�������x�.�Y�Y}�>�3KV
 ������^Wd{6[/�������b�
V	����(�F���=U�4m�R���tG���+�p�����x�14/��&�%�����f3��y�!/��8>�����~�
�a��H,(�_���!{������[>�4NW�AG���A�(x�-t��k�/
�����`p��|���M��&��qm��R�I���^Lb��xN9%3��e�!U�3/�L{�4���rM��j�-	�
��M�����]�]��5k�����sb��Y��A�;K�Wug��y]4�C�P@T�AnA���"�.��t��@O��;i���(��:T����������0�A�(=W��IOM�kS�9d����������dil2����Y�3s�3k����0����/&=}Dbm1�Ue>'�u�$�����}�R��>�����n�����{6rqhOz�p�}�`D|w�&�y�v�`��>��V�������\2Jy5�z�����H��A���q��-LA`5��!�K��0^�����x|�;�.v7���$(a�������e����� �� �P:;�I*�7���p�|�W�q���r(h����<�)�qLD�
�����lt�i�UzQ?��r����a*�����W��KSjg��|z�7��E��|�	��� 'T�	��(2��������fj�2����_�NNV�Q}pp�5�I�.���>�B��>����V�4�[}����j�{��?|/�*�K^���ypi�N�b��
�-r��;[�p�U"{�V���������oq�.|1	+U��������S�}��$�����{'�:��l�.X-��j�c�Va�nMX�]����_�hW7!�dMO'�*����X�Es�����)��W�@}�0Q���tr�"�����/��������Lo�M�9���!�^�Z���������+�/��-�M��������3��p�J����6��W����hq�H�bB���%���%4���(�k5�DU�;��,�M
�h&�E���e����AeY!r�7���60-�UK� ���`-;�� FKImYU|���,�+bl:t�;
���;i�����o8�]���0��8��vl�
�E��D$iqA{a%cetH�s������"�:��
I����A�A�9]��J�%�P���'We����f�����I��X���Qz���&s`T?a� �eB������r���|x�x0H?-	+wb�E�cE�������
��*��P�ab��|�����������*�.�gB^\	i��dk��~�*���\��a���bk������7�4Uu�l�����(�M�U�Yvh�i	��4��&��Z���X�t��1��� F��+��R��ONU�pmp�2�J^d�xD8��L/���%��29e�����W����4��U&�\0�<���f,���=�]Fy�0$:��������q�[U�HB�n��,��&q}mEQ.����k��x��nS��s����+���EU���5���(+���v��7�<i��G��OX���z�2���MMvP�Xe�R�0�yD�����w��yAR5B>e��?�k���E�x���?�B+�)c�������>��mH�b��s9"b�z�����9�X:�����XMI�����8V��O�/D�v��X���"K�Q&g��SB5#6�:{a�E)w�Er�@����������)�������vs,?�B{??����9��A��DUd�����cYWf��P��A���r��E�Xy1+��=�<8)0�QW_����.2 ��a�^RW�C���6��6��cU��F�X	;P@�M�>v�$&=5C��!��(�4U6�H���������#��#������X�����9��H������]�Tc���F�\���7�IS�1�9 �e�&���.��~�Wg���
h��t����`����;�KJ�;���Q�}e�����Vv_����L�c���g�r�K���/n�~L.Tb��h���F^L*k�<9u�5^�1vG}�w����F�\��>����#���������1��Z	�jt�M���T�X�p��%=����{�Nje���F0K%���~H�Y+��>�6a�{�����+�T��^���(���h����}/�,�F7w��a�����t����"��aWF)��W�W�`��1����@���VB��Xm�����������/�����',�+:LkLT�^���U
=��0)R+�yH�-���~���x��j���K6��������e�c	�����]���%��&�����F��������j���E��6{��T�O������P���������� +g���O�x���#K����?]�>�0�'��'n���-�{z��F��f��<J��!;K5�m�Z���0��S������~�y�I�|h�6X���e��'���3����K�����iq��������p�DIT��H����87�t������=�*����s�D]p��� Y�DG��
X#�
p���5���o����,X6xi�,��M��L"�#I`u�l8�]���"��J�0�n����n�T������(��39�k�`�H���T��1��l3���7�WG
>[��G_&�����D��/[�%�
�;�?}��>��s��$��P�qV���z���g�T��5��\������O��"���,i������HbJb���JM<�b�x8U��:���:��W���p��D'���������������s�!F 4~��:7c�U�"a:hu[�n>�zn\g�����
��c�.���X��[/�����w~��G\R��*�t���o����p]
�W���<���i�t�|��@_�h�u�:A��wbmP$K��.�/m���-Y/��������CH:a����	�/�.��r��2V
 �i
�rIB����HB��:��vL�����
����y��?���~4��_�j��rO�eE��)	�_Y��P��s�7P.
��
X����RM�4���*�:��B������G�C��:��������������������k#��P%W�-��8?C�]T�.����}p��-��r2�+�H�P`�g��>n��|M�5��j��-��,m�t4b���r����y�b�l���N����Kl�o2@hzTWR���L���|q&��T��}rJ����	*�_�7�2Y�(s���c���0��Z���*���7���)���j�>4g�+i�$�U�j<��[�K�������_�������	��s����� z����b�����zBo�
|��R%�ie|��"�����M"���u���NE�8��	k���������#w�^��{Zrf�W��m�F�m�A���*�����-X���Ms$&b���z&�z�b}p!o�mP.�]l^�d������j�0yiH����\Ev���4eZ.cK����\]2�f�:H4mBzJ�%�*���}p%m
�P�''e�l���+�����1�n8}1N�6�2��~�������-�
��Y{�=�j���1m�o�p��6$�v?p_��5�T������i���1i�"[(��U�������*1�5T��^t�{.Q�S�1����)�Pw��o�����
Z{~�����`uq��O��`
�	�,�\}���R���$-��;i��W5�;(�c=^7�m�pA)����A��;F�e���/#}�(�����,_R�${����e5u�.#-����Og���/����.a�[+x�n>8)��3J�2�o8�����������k�;��TP~m�1m����3pB���q���n��X�������&�O(����QzP��O����izZ��tP�3�e�2d���
���J"��7K�t���A��� ;+5=�����7����i�X��/.��K�T���V<G�V-:OlP%
�b���=�������DU2n��������9��8�)z�5ae�XT�A,r���N�u�G��������7Gw�]dIK���E���Lwa��#��*�e�9H�>��K�+���IP��@:`�(e:A���)pY��
�2�2�
�XU��d�D�*Q��[�)	*K�����6��������0��d�&T�����[�n�G��LT�����:c�v�I��.�i��;����,p������&�KR���V�f�-�.�[�h���6X���� {��.b�Wl0��]������>�Jp\d��<����z.�
�D�r=ppf�����])�J9���^����M �)x�s����k.�[<�s��A�&�L���{	G���r:*��M���T�D�M�S����\����t�
��7.��'9��$@il��mV	���O�L.�&j>(&a�R�%��MM;��j��lz�;�TY;6}��5w�D�261?��]I2�dp��>������s%D����2���H����
�EH@����\SFJ������H�fj�|�a��������x�o"�Yzw
��K�iU
��V
JY(d���������i�C���W�HQ�/�Z;�"��:�*�����)�����n����*7TY���,87T���H�;��E&�:"�E���YQ�?��z[����Pe�,ju���<��1��W�����7�,���!�L�f����OhC���V���gt�baS�g�Cs�}2��
IFJld*���!�K�u3k��Y��0�f�|�e*a���r�r�0���J����c��-�-x�����c����br����A���P*sT�N
th3�%�o�o�^/������}�6k,�F�a�S�e���2�*��M�e�fK�9^����P��I�_$�H2E�Q�������:&�i�kP����Y��)f��L
���_6��~�c��y���[RG)&�T�K�
�����%4}�d�1W����kgI8n�5����%<�8�d�#8��[r��D���H,wXV����~<�&R;�SW&*�l��`��U�4�|pw�$���U$0�����}�/A����+���k|<�G�hr*e�b�����l0}�T5f��E}�)7$T������U��IN��]t�����xp%u�/Z������Q�|?���E	�����_�5t���Y�b���z��>�P������?s�4Z���Z!]�������������u)��PN���T�I��tt�i�Ue�^����� �����z���^0���� K
����<�
���o�;�"��>^ C�����X���%YC�����6�I��kX��$���U&dwFt����>�C_��U�nI{��m&Gs?�<�����W������l!l������`���z�
�poG����"�bbm�����im��{�E)w��(weT[���&i�)�O�W$p>{z��c8�(�*�Q-���z���
r�1t�5�]���UO�y	r�>`
?y;�����U���E�,=�L������L{l��&Q��k�B�P5������t���oBA��U����4di��R������ ~J����*�e.��(f���rw����}p�|�]��za?�2	U�)?0J�H��9g�F������1���EF�|������.��3�����^u�0=��)?�b}��:����Y<�7 ��*��
x���>-�901Au�a�TW�F��cY��=V;�V
!��b
�9���Y��`"�P}m�u�1ea���"�����:�\����
�6�9�M���*M��pvV�����$"Y��V�*2;����T��U�� RrY�����*qv����|#Lb��9��-,���w�%��.N��X�'�� ���|]$9��X�*���">~9���3'�<�
n\K�W	TA��\F��t�{=��vP������*����sdg�(�z����X����8k�JX�����C
\����tGVS��J���5j�PU�egh�p�g�@�K�����a�#{�D�u�d����F�������{v����:��2�C���s�����y�13����f��7��Q&]X�&���:3'�&L���}�.v'�����8������tFV����,o�O�>0���,�r���$���7���B����d�M�ZW��]�I>�����m��&O�A��*a�������K����*mz��1��bu	>H0g'���E��d�94�J�\�.�Uc�1�����Nea��vC��6�W������!{�M���`�9%�%���</��E����/!F��6��3-����s��%�E����6D9_Y%z�&gY8�h�8���=�-�I�k�1--��"��C�A�2�?�j�t�:�n���k�W��\MK5��9��i��T���_H
a��f��CN�WsP���������i(���K�M�0�.+�lZ(����{��A�����:�"7}���FaN  �q�����Ti�FiL������I?7�W�WF��B�u����I_�0��Ip�bv���j���� �;��,{���f�����Wx���{8��&���,��
&�����9[����"��9�^�qV-<`�Q=d4����e�l�di��n:���,�o>�����RF�
�
�87��d�F���(]``�A"U�<^�����df:\����m����O#�������3�����8���7��A���j�z?��8��N��b������h���A������x�|I������h\|0��3O:6�2,��K�Y]��	����S���g��n��`���Xz�'���.4	�����c�,����UR#�^�;�G�{��m�&�G�u�9F�&�b����C�"�#�5F������������}�����MW���b�
����@^�����a����Remb�
c���-����/��/"t\��P��q)i����q=^<�C~s||�g��J��{=�p��N���2����)G����MmK<�p�D�$��3`��&��=
�A%`������>�9[X=d�r	ds������#�Y��9�u�!R�G�u�
V	W�_�X�m6�CvX%RJp����r��
�"G���@:',�v,Sh���+��V���M/@��4�!�.����%�83�F����rfXN�7q�<��"<;4&5U�	��=�4Jo\��i�*�A�b�l:�p1��%�e@�]�>���Cf<�MWV�^����c��*Q%K�mI����lO�.�`\:�d�� ���aR�1��'MY���2��M����2����L�_'C�-�X�f���o<�����W,��.���]�/�Q7E��R8���S3�ew5G�cV �O�t�S2��Ch�:��Ka-=p�tF}�p����U�L��kq��g1�g��X������A�h�
��g)`�_e��6b������"�B\���%��	����W`��d�=��,�_�GH6����oH-�t�K;��b���?��}p%5]�K�4J�\���x��~�j�5Vc�+��R1�v�V����/��>t��^��qi��:���J�B(�B�3��?y/��9B�~ARi������}���r�<����	�7l�
Xj_��r�c_`��?m����j���p�Q�)=p�vP�y\��V
�^{D�[��g�1�����MW�^����dXc����Q�t�
���d�w[��%�%��6l�h�����m���������Z��P�yf���J��r'�et���Pq��\��!�Z}U���?�*(�+�6o�)��_��������1�cB�V��@�Q��WD����6^��tc����S���p�6�q�Q��M#P~����<���4~�I�W�&��xrR�P��|e�oS�w��G(C+_�?��0a�a�(>��{�J��.�p�U�$��p��]�����'�(-e�^j|v0��7Sc���k+>���F�]�
��*=s��zAB��R;L�U"Cy�E�pi����]�Q8��p����3lP%����K����c�*�>��<[��^���3�Ii�C���7-��{P�w����um�O�[�1���>[q!����l����sl���'��MK���~��^�d�B�i�oPz��'Y���y��J$Ke`8���A���W%�������MA����%g�BX%2�x^���L�MO���<�a`��+)�o������d1���K�c}p�6\vhcP�	�e������#�f�����E��A6����'��G1�u���E1C��"B�{��\���!�b>s����P��B?���J<����9��������5��c�4��0i���`��4	����2l1Q�x���cN�4�g�5$��y�����2�7?�.ii�H����E��X��b���X�%�����d�W.,lfX���|�m���� =M�3D�q��!���3��"�*�J��oXF��2����������NU&�i�I�723�I���X��G�[x&��R��x��4��G��&
$=�(n�p����V�=�6d�y��e���'���X	
LT�8G|�������7nl�L}p�}NS��DS�I�O��Q��Q��������F����w�`i�/�fs��5�dCz�n��<�����h$� �#d�!����9��	��H���z����l���x�?`��y�;�4�Wt��4���F�DIL9��>��������2���Jn�<�l�
^:���ln��Ea4(4:���'sXE[��O�<��6���Rn86H�U�wFI��Q�9��r�'��*�:�Uv���� _��k_���F�A[�an�F���.3	�K��u�r	d��<��?���	��m��|�?�]"[c���/�eu���^g�S���M�Yj��<��>�3.Ug��+@�~;���0����Jt�K�a���r��Lt���<�)"
�d�P����2s�"r~Y����l��$������
�l��%�����,��+6�Oz���w�	����z�������KN5D�p�`�P��{��Jt<��x���<���Ii9U�������OY%"d��"L\]R��:�3��E[���m��r�a��2�vR���������\���3k��	��R�F��,?��f��-�D�<��rW�K�
�U����,��^� ����|����d��s�V��b��+a��E���"���$	�gC�
���T�*������X���-��@�V��/�k�Uz�2[�{�����Q�UC��
�����H�[]<H���HJ��`��J���|Q�D��`T5�$���yYN����rR���T>��&����GN�|g��A�f�JT��abui9����${����N�c���g�	���i���3��;1� ��K����[��f��q��"j$M���sn�-�h��;�PO!G����	������D��k3R�z,�q ��h��H�:�����5��t�n.:�~#���6Ku����;���V�ht��x/i)VN$	NZ|0��	�'�DV����X���@����T��C6U/5�_*Y,����4���\Ge�I�����Q}��u9@Q?�P���Xy��r��r� �O�oZ��xQ���k����Z%j`�����I0���`
���� ��|�	:��II]��&��+�V��Q��Q�m��"<&�t/��_,��:�$F�l��R��YM��I�"8r����>Z$����K�7dK�L����k��D�3������["r����Sm����Jj���;�-R`��~&Y�^�Uye|p&a�s/�oXe�����YS%Iq�#���������V9��i?ua	1��������J:'�����>:V�\�"$T�5G�����Y"��$�/��4���P�����9�^���:-�Z1�@�1�6Y*V���-����|�|����A����r�fP�X���+#Z"
��9�8��|zk��������$AY���������}��|�����90���t?�B}�7���P=�x����������	�Q#F|��I>�
*3�,=�c-K}�����~��a���4�SRS��S��0����|.���W��~��]�v�
��0����������X@�`)Q[��A�����6�
������A}��$���t�=�&-J��pIj:?���L�w���y���
�*D{eA]�<���v�|q���f����=\��R+��Z�)�C��*����QG��������;����+�����Q�����XT����Fd0YD�j�������Z ^;QE�	��=!�}�=�!xO��u�I?��"��i�,�
X���	�a�������N>IQ,\�V��~�K���#�z��������
V��}���~O�4���:�}���ON8��#q7��v�u�����4,S�D��[s���u�H]�k�k��9`�����7RYOhr����4j]I�~�=� �)*���
G��y,���e��)4�)G��y�D��g�/�|�e�)�*=p�[6�dJ0�)�
���/\z#����=\�DM7�lX@�I��w�d	^�moRPX��A9~v�]�sLA	������m��>d������O]~s!-�����2d���8���#s�0���#yp�7Y�F���o���}�HV����,�c�P��\����,Dw��C0.������VHj������d���e�%���A�U;���B�������U"�N��c�:��J��~�%[jUcA�"�M������c�j����oKR���������~k����A�L2���9���s�����'��D*V�@'��,�(\LC��L-}J\I�{Q�6l��"O������&���D��r{��0J0����)�f�Z:��y�\���U���Pw�j]������dQD��&>��0)��`����C���i�E����n�\���
�`wXp��=��=��'q�:������I0J��	�C�����a�Mr�Gk�X]�����C�C�/>���>(MB���2�&~�e����&�]���F�:��OX=��H�U
�P��[�RNU�$���i��[)SG���GP���������q"��+&�8�<�c�z�(s#E\=���E;����*����F�j��d�XF��EwC�HN��*c�A�+nD�kf��qw������D�����z�b�J�s�D|�����S�D~������t,����I;i�C�m8z���8�����h�l�.�O��9������%?�~X����j�R��c�VY��q�sF*�P�����I-�
*���`�����x��kU?a����h����(\�|���k���9�)����\N�U����x��N�������$�:/��n�_���r�m��+��wZ]x4��ri)�1���6d(J�mH{�4j�e���|�����{���������r�a�������U������n��Qe]�.�+_	���B�����������RR��x������\Q|�A�i	��']��N2��J\��V��.B��a*����
�d�\������������a���Y��������I�+C�t&��*!t����8a`��m�T��iv>�mXK�y����,;�hRV j�:M��qW����gWc,�Sbpl�����~[�
��T��a�W.VG+���%7XR���U.�E��7-�����&�sz�N�d��I�"O��QG�����_3d�"�Ypu�M�E����6 �sd�^�Hc����J���<�ttBVH���x/Y��lP=��z�����
�����oW��P��4��cP�XE��#����q#��!5EH��B��e��M�vX��
���.~�k����I5:!�'�*�3.���_����Q��2��v�|�����2���J�n����*�,;�@m@]4��<F��K)���,���#���r��g\IM27��G0q��+�K�)���'���'o\��������?�@�&dOi�g��\v��.i�T�F�����7 g=��>��8�_
KN��l~��}`cJ�[*��*�A�NoM9d�%�����z���K"���
�
>�_��'�=,3g����*;��&`�c2�t���/��oe���#��uVF���y������r�P�7�����V�;AE���&���FIK��3l�������>��"d$'�d83�����s�q�j��r6�����KZ�E���j����Cs�x/��3?�G�.�by�i(�K[���z��I�:��:�2��.����+k�%�0l.'�0k�_v���kN�u��eT���i�c���IS�(��������J��tLNZ?'�g���V�\������g�V!Z������xtn���7:�}g��h',G�U��Sv����OY����d�}���"������"�:/��M@��C��
���<-�����~�jGau��c�{� �����U��s�wBP��{�m��95������'M;,X%�,w�B���=aX���$>vP�F�C���0�,:��\���[��H���O�f��)�%=E|�I���#��8A���h��Lw����5Rc>�?E@�`�;E'�|a�[�	����H��6�1�����Hv��"�yt�^�'��y����2j9�)�2���$:o��X>����0gK��E��t��,;��cQ�e��
.\�@��/����-�)$V��%�i�z��-M?���MG�I	X�X�/j��	h�)��:2Z���+#:�P�AAN������c��*3(w\e6F7�J���n�x#��h��|��X�hq,��V�u��d�:V������=�j]���!�5�Y3D��D���|a�;�D�u,�1|pR���ts� l������������L
:��
V����B�r�V+��Q����B���'���T".}����:�JMc&*���0�ss� ���d����Yu�!^��r]��)6!�R�i<tM�fo|���M���m:�����6Ojzk��� 6L�	C�D}C��2���{t���){om.������e�t�9��5G�'F��t���UCw������AMP�V��ej2�� j�$%����W+$��{cB����o>����!X���9H?��\5(��(���1��J4�j�"O���J����I�A���S��\��?I�R��@	f�V���>0��7�@RR;�N;��a���Sv|12���t�G���1���F�2�e�
���C;T���}�f���B���F1����=�����y+������R�+��Cz��jR���5){�6a�e��H]~���"�,�#-z�0mP_Kjj�wk
��#�%�Qy��X�
�D��������	*]7%Kj:/n��Z���	�$�SO���+w�V����x�nDV�Z&�.�����5��v�`
N�/:Q���T�]G�;dpR�Uz��.����0�&�N�{�CjR���9�D�&2�HK��c*�z��)[
���=k�HX�#���K�)sT�����;g]g�����B�*xJ���J�������7X%,���k�����	�8	i�A�i�A��
:��+[+����N5d�/��$�}�K�lV�$�yp�B�������T�3%����9���ab'��s=��C�v�e�Re���vrr�{=�&�YFY�N�a���q��U"�	�ab[%;���Q�sd���A:�L|��c���_��X���A�.�.��e��7H�K��$�/������wf&�� e���<����]B���5k���c]�,\
�PE��U~�����}�p0OB
���y�'�����"������S~`DK��I�=����;�Q��)��?pJ�<8-ez�4�2PA�C6A�;�����cMK��bq��]����aZ���P��>
|������p�
�2K�P�R���T?a������8E���lc]�)�WJ��S�������R�a�A�"��_zw(�������c�b�HQ~h��S88Sr���93�&	�v�����A��'���7fj��5����D�H��^sN��'h����a�������.���x����Bq�"��_���QV��Yqp��o5�=��)�@�Fk"��6��7~�q�����j��Y_a4H8���k$����?�$��N��q��������AT�W${e�e<WO�OT�A�pf����}���5MAX1��yp�BR��p�Uh��K�@}�J����,P-(f���K��?������51�����^����7����)�����]��?�>�����#�����M8]��@U��AK\f�{��������f�����q��,���-��p�x��D	�jh��
���+��������"{���X��������UR�����Em�1�'�X%�U�#Z��*a�YK��qUn�67cJ�f���8��WW0�D�I�@gY�W.V���`�H�����G�<�-�����@8����I�>HQUx�e^�.���z���S����[l��\M�0��y�U���<j��C"�Q�&�5��H���y`��$�W/�l�����8�����?����S�
j
.��Q�[~�{Sr�2�����������=����p���$M(|mH�|�p�������WY_�����4�;��K�N��^o��p��wI��`�o�X��d&R�����'C�}8�~a��v���1�J�N�[V�qO�_m��[�RB��n�
����X�>���jOO7P�0�zn�&\z�4\-z��w+�H�����]��8T]/����X8*��
X���d�A�PG�����F��T/m��P�%��w���2�U���>��~�s�;�O�`"/-������N�o��6�#2�h� ���2��
$\�����d�0VH�r��LQ�

x��WZvV�uE��]�<P�����\�N�9�����.8�������F�
YB&���/t���
�I��1�r�3�D�G)}�OgR�@
�ze��'B�����"�;Vx��:X��7��3GO���/�LCc�i
�����2�7~f���pb�\t��b/�z#�\�O(K��|�I�7n�E
�,M�h!��o�`,����0��,�PzCg��bP���}�B���(:�o"*��c�����
���*|���d�����`��l[M��{��	)����g����k�����6��s�2)�;#������;���	��gK�vl��������>��k'�=[s�U��4l�MB}p�-O�lP.��������2k5�r�S-.�H�����Y��>��W.*�mGV������J�}L�&�����M� �vi�������#z�p�'GZ!�/
9��[�!V6	�{X��%E�����p��e���)�{pF�{�����/.��e.�Y��r)d<<:y���������0��tU��6eQ�M���c<�{���=�_2����������05y����D�\C����U�s;�`�yQ�$HW	�P���Mk�%������
���/��{�cbf�1����<`J���p���*=P�p^73����M�y��$wP�
������m�K?��������F-��&j��&����o����������]�0���@8g�8��K�[6Ub��T	�CN��m���*1�'�^��X/������%F=J|'�/Y��_������VI�w��UrW��r�g�&�u�e'�q�^�f�ih�p��N2'�����4�[�����9��� v��U������0Z��57�����2�F��*����{�e>�v�d��^��������^dk����:�����2s�!�~�U��n�?�b��+�wg���z99�u�+��E�����5�����R��5�5r������z�V�I����n��C���b���b���C���89�sV�`�^�7.���Fn��Mm� �)=u�/����b!jMt�4�.l*�iK�H��j��\KQ��j!a=�3D��3�q����'��*�!�@|n��$�� C�������K�v��G�G���uO�y))�_�����r���
����7\�W������$��;�"���&��XaR���9%U}���LE%C��_��2dm���:�a"������������B��R����Y�'S
>�Tf!<z���@��G���u� �� �2�|a�u���t�Q��@�y�i��l��X�I��W�&>�la����X;�*)]N��=��9%U}�r�i�w������S~h����e�lSe���1�"q�6������P���i*���_��Z��+�e��������9F�
�%���k��<�2�r��C�<t��Q���c�7z
��D>�<*���R,�c
+���^���GkJ�"zF4m�z������zq��`E�
��8�u������v,���,Ht�@���x��U&�)�	����x��I�����/6��t<
1��K�z� ��C�d�6��".t0��O�����#yO�H���LN���7�?4��='�x�y��
rn�*C����e��~#��8��/�_��E�c�3���V��_�������eT�gB�e[	��4�*�N�|h��s�"��[�n%��l��"G��3���pfx��MG��I�'�����<�����xO�
F�Uz���1���7X�|a�Rd9���1Z����A�5���}�z���Kl�FX��J�@��X]~��EwY�SM�0�\|����.�������	��5������K�����c^��|���k	����%{P��e�lp��=L30s�Vt��j����*4��R�A|%���=[m��hZ.���Hp4����X�$�gLH#�{�Y[0��������=m�$���F���W�{1/��`�����<-��5#��Q������+�(�A���r,���k��1
mP-0
aG!���3A���>84b}��F��*��u��49������3M���D�k�p-�~I|%l���q(��z���'M�jo��7X�l���ra5��8�,:a�c��=���5��-\����SS��{������2X\F����LI��"j]?�s���g���
������K"R���x�\�'���e\��]\��wm���4�1��
��t�t����T�������a����=Nb�LB����8�&�}T����Xx�& ������C�u@�������=e���>�����s����s�eg ����G�Y��Y�]�g.��������`�U�x���,��������4�G��+�L4xi��x�3e���������T�a�Z'N�xmv�7����D4�����C�.���(�4$5hg}l*��\Z�}������Dt�����#�~�A��h������}�up{�?V����2�\���a�s��k�Dn}wQ~�U�)�Ud�Ue�2*��h�c���M@�w�S3�["�aV��d�;��l����M��m�5�
�)��i�.MDG����?�#u��58��|��W�^�`����D���=���R��O�G�x�#Fz:z��@|��k�Ybz�@��B����S���}R�������93�>O�Y���Z64�<WQW�\���o�%��6��x2&�&�����1�?��������@��N1<�8O�gy2���Q�� ���{��Q��(w\���(p��2:�g�0��F��z���d`������������b���}����Z�.�5Y�K�HQE����)[C/������5������]1)g^��Y9�F��_XP5���A��g3 \�Hy�{�����������6����s.�B�T`���n�gb���.]�T�������R6��fk� Z���F��if(����/�]��#���7�O&6�k���K�pt$m��l�:�b��Ak�#x�=Ho`�xX|�H�J]�F�=kH-{�;�k����`k0�V���P������S�5d��&�s�z����[�:�<�G���0(�����5%4BH����2��U�)?0�=p�L�����nQ\]�&9�rf��8L����a`NK�����Hj�����C�<��MQ�N����������
?K;%�/���t�s��@���(��U�3&����@_�f�a����,
\��e������;N���+iS��Xe���Pe~�(�\�,j�
g�+v�x1.U���$W����@�e���^#�ha��J�reX��K�tG�j���N>d�~�������Or������g_�r�*����+�:X2J'��X���$��r��aY������f���-[G���{`��m�W.Vt���Z�d���d�&>�
3R���2k��1[�����������S^��p%o�/NII��c�`Dp����K��n0
��e{�W�����[����+����c��
��20f�gl��Q�n:����<�=��6K �����2��>I)b$z�����!�f=�6����`q���Ki�mH)#Y9���y-I)]�N�q�d�}��@��Y^���6�wU�
^���cb*���s5<������P�4�D�z����o]����W�+�p��t>��m0*4k%9���qV@�&�^VhR�+�4����@�������@����u����!���>������OJ��K�����v��:��xR���I�
n����.X��nC�hY��[wb�2�d�Q�����?����h�� y��/X6x��;a�A���Q���!��Q�a� �VD��Ks;(���n�$P�
�I2R���=J�Q5�)��[K�4����������(�;r���p�
��xF�A�����hd��%�O,�D�������mV��[��������{`"�^�{�bu1FvHS��U����:n6WV��{�KM���l�%�����9� ��1s������,���\X%�'v�+vm�
]�����l��63�����e���_����8	��<�lP5���z���K[
�u��$�}0���/�d��+�HV��=jV��*4\Uypu���ai:1��<]�=�]�����/�l�r;����2��d��<�1��2����D����I�n���
��(�����,�w�������d���Si7rS����A��c��)�F%�-L>���PS���7��{b�����\�����U7-��RA���]
�/�_h V|�5D^�u�m���R�Z]���\
��e��=�<L�76:[�9D�m@�3�,��?���G:����L��*��m������U�� B������N�@
�p�Q`���H}p�Kf��`����Y���P�����<8�1����4����"O����o�0��)�������]����O2�r�vi�e�����
>������5{ f�����F������'x�����}"������k*�A�I�
��
�>��b�A&�����2=���t#��6&H��7���4��(w��G���;D�/~�y�	;�6�_|��������
slH����*sB!b��$K"�L���b6$Y)��!��!UP�;we�E�!~?I����Q�<���*0�w��}��J_��$S���~B���_>{wX��K�<���t����
C��(�|�Tb���`uX��U"���-^��C�s4M�&���0��b�O�<�)��������K.��AHQ5���U���GQz��e��M������ =M�u������h����bz<`T?�\R�g�(}Q#������M��F����e0M�<a�sT�)�1J������W�I9�.��{a�j[(����}�l�(Q&�NUf���2U�-���p
��nzC�������t7p2��C+��'�d(�����<��oe���LT.D��?f
����!g� h�{`E���f�������
��E�a���	=H�W��Q��-x�0�#�"���^��eX_('OJ�/3��"2��t�,9t8]�Fi����%;������Q�p����"B8��B
>�B+_e�d(B8=�T�A��.�9\{��Q�6��>�}�be4��$m��
JM�����G��4Ds�A����Ln�\���)w6��5�W��D�|U&��P���2�V������#N<Ud����|i7B��P�0���$jW'[��m�8�d�}p%%5}�&H'w|���)"7:�a5P,b��I��,�r��1�i����~���q�V!����.@�rc��!6X5,e Q�3@�k�c��A�*�v�W?�t�������Y%�&��A6���O�*��|�/<�����s9���v�
����[8����{�=���rii��5>�D��r��&�_~����������N"Ux���C�K��d��Sg���G"��+3�*�K���(�.�����xqJ�6�J�~��;��<r�����^���H�{/��px�z��O|��@�&P�A��B�'HUO��pM08���<p��>d2��g
�V1���������/�0y��*M���f���=h� �D��>�. �[��(
[����)Q-PG��q�.�����P��r�4�OP5x����!�Q����e�"���	o�/���K�(���z��R_�g4�p��P"������r�;��,|dlaz���9���N�H$�BZ���B���:n�5"�?�D���I��A��~�>+��Z���\2aq���^o�E��U��G*�m:��t(X�*�R�.�+�")��������Q������3,*s*���2
(o���v����{��T�A��M�`/��Q�%,+� �x
�8	98A��b�i����$6\�V�'T����� 1g�R�@���>Y�2*��,@��H�o��Z?����fU �`�} ;��iD+��+XX���=��3����A��H6����ae�����Jd��������)P%
�y��`�����CXdIm��1}�{��X5�`���T;R�h��H��6�����E|�^}�K����5���Q	�}�>���������
w�g�������SlR����r+]�Qe���/<41�C��,`<���9�("\�c��J|�r{�#�A���Bf*��T��K�����C�����2\[�L�'T��O�T �r�20)����i�����s"��s��@������R���S�j�r4�~\��<]�������
H9��G.�O��R�������WE�0#b��� ���5	=p��Y���cT���0�/4����!�g�
dG]�����$�g�^<�O�I���a-G�-����f�y�!�����$>9�� ��a���_P�g�
f����1x#xGB?�2����s�i�<���0�ug�F�,
L�^)8_KA���r�$[��9=>���H��
�$sf	${"!/HxT&T��\y��J�?9=���
�c�����B>"}!���;�Ta^�_�P�t��U)@��)D���DD�0���W����rz*���~��
X?8���T��C�#X%�f�%rz
�:�}$]x��$���L����AG�������%��@x���@��Kw�����!7������<1AM��3KC��� ����$
e�{��f�)��3��	o��)<^�x�4KF�}�-T7Tj�q+q&��j�Cdqg��J/�Bf��y%{D�k7��n>G�#���!�iBGS�*���8/p�$xsO��������o�����~� ���-��~y�g�D���(��G<,+��4����������H��D���o�I�L%��y�4T�J�\�U�����d�p=�E������P�'�m��o���zBv���Z���DvRL���r���<VA���E�d>��
L��_�)����<77�J����AA������o����9
|?�������T�J�������f}��Q���S.�?����(�y�d|��d�o"�1�Ga���U�z�)�ek�A���@���EB}��l��4���5�p��/>��gU���=�g:z����= �?�^����u���*$|\������F�,�K�e��J����P�������L��'�����>HO/�`�0�����~$�(!����D�����I��P�g7J�8���p^���P5�vd�c����|��~�U�����I����vH������Bx����������
�LS�y����|8��h��t��N1� �kJ#
����{������\���������D��^D��N�s>������z+^`9Uo�#7�a���VX-����^�U����$������\��#`�`O��t#K|�\��{}���i8���l?]��s��P
�Y�G*�C��5��<%>���%�q
���
�
������_L��_�+��C����*�a��%������#A�e��RJ%�{�lX+��:��/P	�B����`�����8!3i��)������1VO�)H��y��]�8}�X�
7_|�TyAb��Hy�����u'����6�
nd�#��w��3��A+�����Y�����
Yz�	��Ra��8�\�R��m��
c���;NA���a�kF^{���2eX���T���a���]J2��0mO]�t�1/��NF�r��f?����~;��,��_��t��J��jO��_
�����C��9w�����[�����!�?lb��&���RK�c���r4�|���%��]���y�q��X�`�����sA��%�?��'+p���&
x�>"bvO7������:y������8�P���8�X�;��e�VHQx�{�O.�/�V������tC�c�� oV�x�B-/�� �3�x(���K����)���o.�~�_@������S�����pR(�
R�3�}�y��h)J��Q`L++K�����4��������&��)/l���OQJZ���I:�|D�x�i��z9�������\����8X����$���{/�Qi��Ha�
���������b������)d��,{&w��
�0�������8�e�����f=JUk:J���);�O��W�@t��hO�y��@}���U���V�jp��?�a�P5�0���Z-_��P���?���^E}�����_ .M)Gom���/�V��r-����XM/L��m�*�����ABv�0`��6���������_����6�����wt�$�"����x�&���|�m�TX=���%�;�����bB/gf�Cz����{C���F�-Tn��g�[EI�D$8�xg�:b/E��K�Y1�FM�%</���~��Vh�`�gO��U�p`u�����,{Ob����������~��GwJt@Zl��h7���?�hn��r�~pI��s�����B{���#��?����y�[��P����3	x!���J��?���y���H7Qm#u�X�����
�s�U"w�����	��#�)������O���Du'	�k��k]��0�>"&!�x7hVDM�L%��������^���x��1�rz�5X�P�7JL������
7t�I���:]=���0r���x������\�4��4��`�9;�����z��D�@�H�������r�x�Pd�\��'q�dC�a���� ����
T9_~/e�����D�,��P���%���������_K(wuO��X�P�^h(6��qC���7�tgU/���U(����t�%]^����p,��}BdB�W���,�*�Y��>=Q	;�x���c
_�N�U������3� �,7�����Bv�oL�J�RY��/~�%"d;l�0a	)����H�aFG�U$���,RG�	�R�F����������`<�j�X8S����E*��=7� �)AS���|�:�O��x���f$��]�?�N��l��� �[�:���F�8�k?�VP�f�{�vp���>@�$;�?^>�_�%wuP�i��>������
��9��x�!Q7�E�����U~!�t ��CI����r����30.�-T�g_����Y�\�n�����m����9����c���?�:�Zw�LE����(�K���^0����Sd{����?����y�/Dg�L'�v�-S�V�~�w����7T�y�x�������kw$���n`w�{������Q�n���g{�/H����b�@��8����!�8?�>�`�M�#����*x�1�w��dy����J� �����=i�~)39&��a����O)Y�L������~�
�f���|G97}>��w������	�hmE���F/�O��pZ�2��7�j?���r��#?���	J%O9��������de�/�f�
�L��6x����@���6��#����P3� /�V	� /f?$J+}�������_���!����������mg�i6_��K��R�Ra���k@�\�]��1O���o��7���x�	`������;V����`<��*9��k�LN7ok����F��|��5o�n�L+z��'2.��?m�W�\����(��}���~r���3j��b=��A��\��P^���P���c������x���e�*daYJ9�l�139���*����|�/����/��
����?&i|��'�_L5r���
��!^�1�^0<?9[X��@C	e���1d���"���Q"6%��s�.����1��~z���"�&O�%�q�����}�j���)�~�����RP�=M�F��/W���*pk�7��(;���Cz��|�@y��3P���^�h�#�Gu�O��
��'/0tY�_^��*!��^`;�P��Yq�&�F�9g�^��&~Hh�����6�\��� �>�G/hkGC�=�J��(��(0s=��t2�,*�[�3�B��:yZ.�
Uj���@�����A"tN��B���e�v����t�Uz�h*��{��~�*�Q�7�*�	���R������TL}��c'
O�7:��\��T��#���P�jC��8K��M���P:�:nz�R����L���D\AUqGq<��i���C��~���O���Rt�F"����q/:�|����k��KR���
o-�`�p�F �`�~�F�>{��=��gF�Lo���b�*/<|���b�Sb�{*��y�M-����~A������������������>�!Y���<;�q��/���3���v.CI��rK5����4z��Q����@|>�^��c�2�,$�<��>o~����K���5<
f�6l���)1oLR�W'�'�a��
&,�GW5+?(>~\���OMWRtT���MAlV�E��������
�X����B��s��L���n�j������>=@�^�H?�������DE~*���	�O.�|��2�-k"�R�m�j���d�Cn~��c����)��q��S!��6�oz��	-�[�8����G���Q�-p|�oi�
����M�1V��3"!��q����|~QJ���S����������4>��'$e����"(����7l#��I�
zb�����i�U}��2�N��t����Y�]�a����##>���8E��Ue����/�0�Y=�w�=<�������A���meV�@�d-��n[|�/�'g,��ve��<���#K�^�1eM������
(����P��9��]����>����#���m������a������#!Y"�5��pu�#����Qe(���u�G��s�2h����3����0�n�\�+gZ��&�JA��?�7E������^�P��e��3�������/��������4C�������`�b7�(�����c��h��~�B
�q������9y�c5�\��b���sY�"e���r;�L2�|&$�H~���Z�y��e�Pr���H5�}bK�y���jx�[��u��Z?;��<��"�������@,��$�'����)3/��L�%y$����Kv��'��@y��aD�5i>�[6��-�g�
���P������'*�zV1�lN�;��gt�l}��0�P��y,
����G�>H��nF�
�B�`{&����`M��P}�J����;�wJ�\ W�?��~���*��*�
L/�Z4�����?{�����n�`���d$|�~��S�T����LEvX���*�����/���X����
G�����
���*~��3�`�:�k-vmyz�b�����1�S�f�����	�t�&T��1w�$�+�+$	�?*@��d���g�J*�6l��$��xb�t"���������$�K�S�	W��n6<���I�Zg�w:��h�20ZxN����2�����a�"U�~�+��b�1����������RJa��s�����.XUp;�+��h�s�J%O�;�=�;��_
S�p�lv�7$����P�9;x�2��G�ZM������p� �d_:'�����i@�`cR�	��������^����1X�n��4�����y�(�-L}Gky6����p��L�W��$��Jp��)C�����/P>�.��0g��a����w���3�3+\�N�^$����dt���[Y���1�V��[����w���6�p��J&p"K��KO��P*�����4���}*;�XN��
����A��\�J=���e���=���M���S�~c��`�w��?Y��`^����L��)/�=���BC��*630HE�����n1�
iC?��g#�����j-�d3����|�����$%����DQ�������	����+|�D�G��
G�>�D{���,^�8��^g>��\.��(fE�������'���>��G�9
��8^.��&t��������]P�����#	x�UI5����g����tHyz�
|Mo
ox�ty���j���hZ�#;fi�'�Lu��Ud�T��br�_�Q,����)G����,J��vC2g�@�G�Bbo$
����o�t����{R���"�
B^<�+0%��vC>;�^h���A!I�*����� �B��M���EiG�/����'�����,�|��T�	�F���y$���7��qYOL���p�]�W��]�����$�+��q@�����0�w�o���h�>e`+0b?.o�1�����
�G��X���R| ����c1��A��{�W�u&p��7�(=���/&����PK�l��'A����1��"�9�"��IS/
4�{�%%������w��Fn������0�n���d���@�#�e7V��1!t:<�!doD� ���vP4��9�	�r[�����s4)��	,���@�p�G�����=�K��&�����I��J6-�F"8���n���0�<�P�xS���_)�>��.�d�)P�d�c;W4:�
�yUZ���>^�&�_h��	�	�/��D?�����������mB��+�]/�m8��I�U���z�`Tp�<A�iW�e������J�d7����x�i5g�|����
�"
�q���~E�E�yr��0�c8���PXf�sR5�K(�����H�c�F�<��RG�p������@D�T�v�C�D�F?���Z��a���:�.T�|
������%�7�;�<��:q]`�X�}�
���B*���������F��m�{��b���G	�#"�7r���V	�����T��1E�P*
H�e�LA)f��]S�$���A�/�����j7�n�L+�����R�V
��O���s]Wf�����4��a��G��@f,�����wN��T
@��k*Gc�i��x=/]&�/��/��9��"�/-�����=�7��Pj��aL�3���W�R��.?Gd��N*��>�I���8f��r.��G�G�+G-`}�/����T��F�)�`�����1�|��[>��g����BS��vhv�����`:�OP5���������U^��3�
��[�^x7�%a_�{�";n�x{�J\`�������4�(��d��-���m��Q+z�bA\@�[oG*S�~Ut������__�����h���R��ed�{�����F����#���d
���������8��*��R��RR��	)���E�oe��
�|�R�V��R.ieq�(����A�����Lv3����31�Q��<mG���y-�o�����iT�p'��)�a���,��LJ)a�pZ�#<�{#.`}r=�I6KufFY^��\@����V��[���rA��2\�b�t�IKT�`�$�g�r����Uz_#��.�|��i��Q�bK���;�U,R���\��c�<=/�
8����c�>�����!EM2R�����������.��x�/��D���P�M0��O���	�O��?���5
g����
G�,�K�8����)�1���s*���m
|�X�h�H�#�
o��-Ro��2��i/��C���2�0a���*���K���
��"�
ef�g�}#R����m�	�G�'0*2"{��_���([r	�W�T���a��D��?1W����72���tC��Im��"�V�7��\6A��%���������W�g��V����t�~��K���{����y��b�����9+
�!�r�F\@��ks}���G��������uF�fk��}tp)^K��l8F�S��3��}!�z-X5��G;�^�06D��]����.��	�kN�FL��4��`E�a��@���>�%��v�U�������O�Y��E]�)Z/���u���f�B����6��������t�_j�y�wx\���\�9�A
U���Q������>=Q����)p5Osj�q���u���
og�����[��~�B��tC%Y.��4D�8�
��^�5`���z�x�����	����1
\���9����������p�U��p
������`�q��|u%U�W��	U���&w�S�(��B�H��^���W��f,���l��Q�����
������ �r����Xa�UaL��I)`��K��G
J��t���w�*s��3��R����|�(3����wt)$�����3X��U��
o"������������D�6�����
CEf|��6O�H��5'��g����-LY3\3TV����K�����u<*���l`�9)�a���z�m(u3���
��jqza2s�1���~!� �.���p�V)�Q�������J��f���&�R�N$�V��3��	�!����e.(
2)��U�I�9������f<�#�}|�|��IX��U��
N]���/�q���Nhi�`�H��'K�k^P�[Q�.�AF���&���b��UA������oC�1��tW[��������,(�Y����Sf��z�a���[�{{�0P���;����[s���z?u
�<�%f�aH��1,J'2��C���K-yI���z�Ilrf��f6S����
�������Y��C�{I����g�K�E������T����=����}p0#���D0���Fj����)����i���]�F��f�:�S8�Y3�ft�t�*���T%��&���.��
k�K;�Ty�%�MN9�
���U����D��A:
����4e!�%/���5fg�i6�+����S����v�>��A&��4�
&g�#^���/6����������E`E����A�;pYx0e%����g
��[��N���2
$����"��P#b�_�<�$����(��_77UY�A����� �	�8�A�W��/�J����YnbeA�r#^e�3���++`[�R0/�g3�_����lm�HYl����>��p�$�s���q��<���)��|U�fHf<(��U0�v��a��M�ed�H�"��4f���A���A��1j ���!���Bp��o����p	:�A#�jL���
V$$���|�z����67b~zg��(�\��/��X��e�fd�}A�s���� ��M����"��B��}�Ji�G�b�
����b{�j���g�>����Q2a��N_�C��Z5�k�j����vt�I��~���������|,HI�j�/�t�	��>�����������(�
���=���sX��@��:�M'�!��83����0��^A�=R���^��;���(�A�Y�\�O�
��-$	��F8
������b;{�e��
���2x�x@�TL8�N�P��k����8�����XD�K&���=R������#�0������`��	�C���������������dp�D�@S2���q�m���2�puz�,J���J�M�0b�/������U�ii��������8�YoI�Yn�C~��dF�3=mt�����C��U�	��Z
}��$��
B^��P��>��l��
o�n"B=""�6�P���h'�����u���T0_]^PR2��u��M����m���������!������2K����:6M����bF�����j+����WM��A���v�*e�����A�����R�������V��O��Yil$(x4��Z_c+lmgD7Rg �4IW3�v�&����b����g,A%I�f���*��|��_
4�ak����i���AJ���4��z�;����Xq\�,9����������;��
��Dy��%�Rp�W�R������,.�7�2�P�a���F�;��0���H8��f���4""��P�:��X$���?�R�9ML�A�~�q E�_S��
TJ��R�M+���������)�lx$
�(5�������Dk�01^�8���m���v���	��GJ�^R	p�Tb��%�m���L,�taS��j�����������9@t����&4+���aU��?������'ue�r�5�?S�Qp���L�HYlZ�	�D�-LS	�g%�s)�iP���W����`�+(���?�2��+h�v*�7F���fg�m�>��3�?nK�� ��9���{��p<�^)���|dR�Uq�,��1#>\	�D����"3����ql0liW3@��M36���bK3��P<D���������;�a��%��y�������N~�7Af�}�bcs���0��(;�LN3a�SC���_kd��j�.�l�cz��Y���[��L^A����������-�3~�K�0�o�����0G������!~^���V�;�+�bH�f0��ld����H�����]����)w�0?R����8��x�s��#����P:��d���M6m��?��j�����c��qM��xJ����>[��A�3�AF�	�~,��Ee�Y�6�iDD�Y2�i9����d��t����.�l�t�N��D����1�wT�������1f.(P�S��1N��,V0D�!�H�}9K�`�"�ldz���T#�X���NF��YT�(E�����%��\��$_?�Z)igX7Rg ��-;(�y��';�33H�����I��h������(��Y��z���
Es~��Gs������A���Yg=��n��oG����1�
�x�i�C���J�?�LCLY�����(���+>��+���N�fC=""��E�@��)��_�tp��)��@ c�/|��
�D�#(�D�Y�G�0��%�V�P7��c���^a�_���a���NkD~p�����f������������>���:�ld�,I����Q#��]�,��@���:I���C�q"%7`�>��?:�����(2��G��E���;$T�K-���)Z���%�,�F��Aj�7dy��T����9�P%n�^�Q�4L,v��1V�����f���������A����M�	G'������+&H����8�Y�>B/1�:�%�q��V���;��*Lt[��x�� ���&��"�d@E���F����6���s`�:�
2�v���)E�YZ#���X��� <�y5hu�G�b\���K,H�l%���%���P��>���)Dd��1F@Y��B��s?��HXGK�M��v������FaK���5�h
"����l�,���
*i�����y]�0���e������'��-�t�����������%Fh�%�}\�w�����2Y/��q]����+n��#�,�������K	�J��H���*9f�^b���*}�>��A.�hH��)���^��d��""��Xos���k�� g����1����NID���l��u�}��`a�p����>��J�6ez��e���d���zx��,W:E�c����{��PG{#�w��czf�P{�I�Aq����D:���"7k|~
g:��_����r���q���zmT��6p����a��J��:k��R;����7���.9SD�'�f[Kz?����z�����F<�Q|O�bJr�)e3n��Kj�����=
��	���y�����`g.�+���������3�B ow�K.~���I�������4A�t��R�����]\S�w������X��=	�zq��n�I��7����KN[g-U���M��Q��V�H���l�
�JG���6�XS;�QA�s
F������j��)�#�aJ��G/�@E�U}�y�p��(������}���v����oE�i�HYlZ��}�g������e���7�<0��`<����L�@�t�6��������U�6|-�%h	>
���O�S�B�����K��5��FY����2�S;t�gL�: t���P��b7(1���;�q�r�9�hM���f��a���P7 ��D�R�>���$\������"�7n�,F�-T �E��8�
������&�9'���F��\Sj��z���
(���+�A��JC�2������
���{@(���t@h������X������M����������_�T,�*�K;9��*��RJ�v�����Q�������U6���m�+%}��q���.#��C#����n�C�{�(��
A�f��F�>nR0�v�����qcjb��)^�HW&�U�_����� R[�d������sEA:��>t�X�������?���,���~z������+e����q�%�M1����4c����\y��F��W�g�,?2=i����6|
c���
B��Q+�*��h4n�c�,�Nt��������1-��Xr@hek�G�b)����������Q���H��m�k�n��l!kD�x\�R������/�FlBO+(\�QH�:��.�lD+|p-�����/XL0^�����GP����Ko��.9�]�p.[�Yq���
eA��p�N6Q�������*�R"��/s]��eUfwj��~��O������zm#�z�6�o}Q l��N7��BR����l��F�JU�����(m�6���B����u4���lg��2lK�k�ST
�����!�d �c������K�w��}Q5��������f5�������+�QOH�a������F�}�#�=�x#k��v�V���mW#66�x�8�Rgr��4b�,Q���Qp+��X�I����,���z>F���N#6rIA���E@YR�{\�a�2��>�H.�����K�s������:&�Y
x�����ZT=Q���������;k '����{�0C���HNh���@���#e�E
V�-!Y�%��qkz0z��[�z�3g�p�#(������r����?�H�&�������a[RF��8s��X`}o<3!��<�{�U���(����,���������Yg�1��q�(gq1#/	*�mLS
�L�����	��+����d����?���=���F�^��nb���k��r6��<g���6s���)q�.�Y#���H#eU��\R�d���!��b{�����jw�����E2�Y��%KW���9���:�0�����Gv���p$�>��9Pjj��i2��5&��WY�CC&&�{�x����T�)m�!�C�BH��U�_�	r�Z�!�<���BO���jI���4��\��
j�W�b��]�)&)�\�]�l�P��NC_�P��o�����?B�%��cU)i���m���z2��Qd�R���Y����W���_���R�@�G�e���F"Li�/��������;V*�H`�����\��4���Z�����^����_�u?�3�jB�����&^��IA��Q<�������ra���j��uc��>��*���B�	Fm��{�����C� ����S��N���
�xx1��$�)�v6���(#�����iAE4E]�����i���kcaO�����C	�����)�P1�)!_)��&�M�g�1}0�YT{<+J2��WH:��Y���a������MCA���A��A���_� �&$�x��5����)=)@��? p\E4�2�YMX��G�_c�������Xv1�1���
���%��&
�F�K�}E�Y���������T��}J+�T���1��7�����:/�t��9� Rp���
]l#�}t���O��F����	OC_�#�����R5��~���h3b�Q#"."�/;�+<U��gO����(Cg��������rC+
���S`��f�>�T
[�I%�� +��Zq/�L"��3���v�f�(��V���?���a���-$��?�Tdc�/}����d�u�T��"��'uLh�*�2i=�4�ZT���_��U9���e\&S�'�Rm#�������&����~\:�Yl7$=<��:�!}����?���5�3��y����Wek���/f�B4��f�`�#�s�M��0��h�md���w���<�x����|���y���
����>�n����1��]>m$
���G��E)���Rki�JrI���V,7�:�9`E���W���7���C�{0��};�H��)�MW,���`k2
gYK�
���c>�HH��xN;�Z��v�[�~�-����HYl��q�'�*��"{�Q�9K��j�)��3K��2�����g	�F�����lI&�`,}1�������@�u-������}\���Q��f���� ]���fc2��DtT��q0���	)�+R��9������&������>�����}���dJ3����&��Z�XJ�9�}�7�����i���K�gv��]:�Z����Z^ ��N�^\�=nL�������vv_)�)�Bos��N���(�2�6
���L���!�zZ����4�FJ�)��ub��C�GlJ&>�g� �6�fq�����DbA��)m�#e�E
����QJqNC�
tCo[+�S�AB*���R�/?%�L�������@�����.���R�{�����]�B���L9�q��ku���Fb�
h���X�4�Q��)M�>�N5��;�W�xP�����'�)m�HYl#�}�Hf�uNE��%5#��}p`]��E�S��"Sh}�L3�������b��4czI-�e������NU\����e&��6����U\[R5e�_A������>��|R+)s0�=h����VY
��������G5dc�!����K��s:�����9����!w�����U�n��{d0��^�
=����6d�h���s�E�������P���+�l|lt�7�rZ�[�<�!�YVP�A���U90���R���2���jkS��J�o%��+t���5/�JI������������A���h�����1�,����1{��N&���H����-��lH�L��eW�4�����D����.� w���Ie����\R��>�����4����Q�DZ16r"{h���>�6[���R6�	����������:��V���PH���dQ�OiO�C�"��&"������m�$�\����rO����H�?�t�����YBz�q�a���_����R��S���^����Qo�|�0�c\N|�,�rl��~��$�i�xHE_�,�FP:�
GE�z`K`)U)#�\�=)M��2(��J���^�X��j�	AU*�������>K���%�Qv/����+k�'���V���L$���,�D���Fll:��}^V�i`)��(��&��<3~���g����_hb���#��?�
zV�#hx� ��n��{���Y�@�b�q��)�I�����1�W���MD�����2�cDD�Uh\�-m���R<��<&�Y	���$K�q�c�p��������b�c�j8������Rlz����5@�H$5 /~��{��q� �����#k���
����C����f����i>3�L�����Uo��5�ak����i�������d�7��+��Z�h��x��Y�3�t���&��zM��M��J�eR�%�*�m�Lv��O�D	����@�����j���$A��t����>�r�\��d�������C�D����;�-T;�DN"���`qe3��h�Lz�xq��H9�@�B��an�V�����U�i�Qyw���h1����W�1o�����c\��8*/<��2��3�6�����
���LRV�z]� �mI�g�Sj9YEi�t���p]"tq��_#�����IS���eC�]J��B)#4F�XP:��g6��=\�s$p}�%�;����A��\�)�����J(#"��Z��������I0?���=��������L'ed<��qLy;��jn�g6A4���(���k:8��A����5���n�s��6�.T�b4���Ls����������]b~�clyI��_)���_.i�[�i�F��K7)��BR�=d��j���a�qh�c�/]&�g��>��WA�w����1,q��I�9~�[����� ������������~���F#��&�Og#�?f����MD�I}p/�E�\�h@6R72
0D�Kb��O;����6R(�/D�1;l�X��la�`'�h>��������!���='a���
k���-oG�Yt��I���^�
��dF�
=��a�H���Y�#�w�������y����!hfRQ.��P�`yb�|)(����j����	��H6��AE��/�-*��Q�M�l�S�
J}��?�oD%v��!��w�[����M+8Nmf��8g��L��a�y8
��F���� B�w.����k@�1
7A��JGgcH`�Ub��A]�nfR:��Q^]�d�b�C�HN���{Ii��]l��	��M-����!��[
�;}/D�n�'�\RY3 9��@�X���R�F^k�&�����zf:@U:�u�������}�LVH����;���:R��/�w��K,�b,b�u-H5��<X�y�
�o�y$���}��+���v��F�����h�����-���>�sg^�R�!�����rY�&,m��1��|��h`�2w���V�i�����r�{�k6���E������]��*����
�K��Ky����f�x�����������sM����c�<���FlB/R�m��#F�F/��� KFL�R�@�G3*�����=WD�"'(����E��^���A ������o�) ��Jybp�>�R��Lw��.;��d��Kf��P\:#"��rQ����/�J��!�oG��J�	dX�R�&��x���I(g��?	e��&TV�VB��!��$�e+���7G�����vZ��m#��~�=��%�����������v����
n�����2�P.NjD��J�S�Bk����'����+m����eQ�G��Q�l�D������W��Rm���!���'���_������D��9���@Y3"^_����.p��*^��5��U�|���x#�aA��*�{����1���1�:p��,�w�B��9u��oA�P�D��XA$G:�	B���hk���<���Pp#]����,i\(�	��HwT)E�7)C.)���������D�
z�7n8�7��e�"S������)e������o��IA�yotl��FO]�����(j�F�A��$��x���i@�,���$s��}g�x8Z�l}g��?�;���
z_TM�mn}�h�P�eR�G���Q��;��r��+���q�@�1�8oQ�;�����|]�]�L��
��v���L����+�9��|�u�"����h+���_u� Rk�BJ�%�BX#����I*�zq�\�B��CS`#����~P!��[R6�!���h��G��I0�=d��[;oa���B`��%R��!4��d���f���<z���+�����}��khb�,������;1�/�o#����R�=�����{�,6\��q��N`�2caU"��U��>sJ�N��e����Hi��������X��X��1����q<i$H��:��T�Ei�=���cd������"L�lOg��%�h*����,'\c��Fg^�Z8V|�?,�i�Z�,�"�%]5b�r�k�<5Yj,R���gTR2�w���W�����j�lmW366�x�8��`�FB�*������"D��+�����\�4��SD��S��C���f�o�5	�9��dlr�!�|$|T����/�������X��B�D���}gX8�W�/��91��V~�m��������u����;��Dsx�����G%2��M�JE��g�,��d�|�J���T��2��
@��lgc$L����M%�hS#�iS�G�44�\��?F�ttM&V�c��x0����%H�p��s�5&WZ�����<����*��#Hc*�����[w�.52��g��Z��D>X�n���X�6���f���C���\�Xc�:�%�^L�g�����z�!���Q���,X����\R��k9��4���3�z�cHRv�i�F�}N\����cf�=R�Hi�(�d0e���d��^\E�N�-*/U�����L6�
,��/��������:���q������T.B�q��� 9'��(�:���2lZ�	�Hak2���B����t��q�'$S^7:�5�����,��PSv��5O����DA/� ��S�:�

}?������h]=b�Um�D,��X��@]�'15vU��S�:������U��������T�oL(�#tz�qm��pf�&��au��5��AW����P2�p�)�K���I��MJ0�U
S("�c���YZ�1�|��������&�_~���m�J�H���q�����y�fs��-��:����8sH�A�O^��A���S����Jy�^Fu�n�R+D�:��R��h9��!S�R��)5��vuE�c�4"'��qpE���}������x�I����n^���l$ �:>0������#e�i�&�	���\v��1���������nCz/B����@��/�������TD�"�"��%L���\�����<8�(d$x��������C�:�!A��G/�l��Q�q	Z�/�`S2��c%aL?��`L���^�@�#p,]&����H�MP�#w�5FAK������p����n��.��`�����d�HMq���+�1P������FJ��<�����c&*:�n����"�J�Ap�~���=���k8����q�R���J�']�t�Di��2����J�������#�Y�H��0���T��v��R��R*��(��[�q��
���8]��a�nj#k��~.���!���g1�����
0c3���B���x��d��^O�R��.D����L7.'8���k46�H���q��V���
�R�}`L�e����Z��ar���w�eA����{�DYn�r���%��R*sO������1���K�H����#H#�aY�H>�����"�w���lR�2������{VU��=��lif�/������u������!Ky����\��{;�~I����p�o)mW#���6R��]
������D!�q��U�8���v���?�vA�_9���B��?,��W������=�f ;
q����8�������Z�����a���h���s��{�����x��z����"�X�=�}$��V�,i����
%�o��i���JTp��H��2rh/��������<$����9l������,J�O��%9,��>8�1��`�Z+/Y3��(~F!_8�v��R���^�3c�QO<����i ����&�6�����"~����3V�iF�[����*�"��r;D1���q?������*��#�{8v?��9����N�#T������!
w`(�e�����_Bp2��od�g}��B]�")��+rIi����NBG#�P&��!l$H��n_G�K�9�����&]�!"�iD��^W��P�e�oR�|.�2���NhYR�X!~Y��lg*�R!�6��}�b���zKCBX+?�������������D��8e[x�W���J����A#��1���x��E:����J����WlJe����?��2��h�� "��}a!��+�V�D����D!a~���C�������w����6)e��M�Q�Jv�>��B�/���X�S��3�9�F��`��}{i��E�Q�!��
��
�pry�L�����R��$����FVA�s����l�������qq��I`'?Y�5|��_�!��h���t�v@�+���p���f<d�����aF`��9�6������NN��?�Y;vZb�?����2�����+��,vE�=b����}pfM��(�P6��\d5p�G��4���v5�q5l:dz�l�f���bH��s���$����E�+E&4TM�����:u5��zD���7��(�x�EC_b����J�����&���M�`�M�*��%%���Y�x�/V�O� ��{|�2nd������i�uII�Z��i�����S8�YO��cF�i<�0���v��@������+ �L�K�+������~4]<
}e��gu�������%
g���VQ�eA�%��vJ�#E�0�9����`�!����tU���l��fU~���r����N*�>Ux�]k���5w���T_a�������b���iDD���������A�!w�!�K�=C^:�����l{_u�9��Kp�24��k`)����#PT����Z���(jL��YC�)�����e������RB�R�)���;(�2�#����<�:2���gg��El\��1A$�0�[
�7()���HYB��::Y�$�����������uc�����bT�p��Z���Qa�_AS�o�����2�P%D��%�����n�`Tk���LG	Q��;(C��B�+c�+;���E�@S����W��)
���\Ib%����A��s+"��1LA�@�][��E
?z@'����F��w#k�����c�]N��]M���!�}t�60c�_g|O���wV�'
��lt"��z7t����0��&�����������VI��h�$�],t�
��G'���4&LX*d�%��[���!���>f4?8\�s��)4�X�1��P�fv�w�U�% �"s�C��v5��,���>�a�
I�� �-&4�q���P��H��^��vY���-����|�������8`lb�?hC�4%<8��06�'�P34>d���|z����8���t�&��R�4u��u=4QN,\�����=H����1��	yrt���D�Y���aR�G�et�nOI���=%�>x�1(��u�s#5 ��|�Rd�Rwx4�RB�R��M
%9��;:y��L�d`#�����d�	3�)~���]���"��$}|���Sp^G���_��.�~L�>��|}(P���FS��R��X�����M	dwZ�$��[��8;�'%3�e�?��yJ��Q��~�K	Xf��}�t�����pV�������=q$�_�/��C����o�����Y"��a"������=,����p�L���](����/�4��S�s��c�&�hl"B��R�m���9)%��F�I�����bG1�F�w�J�}�c���vF�R�M7(%}\c�Fer"���;��r��Q;h!���I)�\O���B�v5�!�����*��#����>��A�i�O,�>����q#U��������f2l�FC�FJ��Ja�2��������w�_qe����(���-d���k~AJ�����;��
����#|�F�r��8"W$,�� @�x�v�wE���8s��FllFbz���i���@!B<��Z|�B@��6t��1?S��1c��JI;��H��l��U
�-��-e��������(d��n�A�l�!�')����j�����K,�����j,���:>�+P����'5����~W�A��$�
_A�HX+�[��nLb-<GHza�x���
�����@Y���*X���,�O�����%���4?�����X��D�j��K�� G�%��$�@�h�X���������M��FT
�-FdRb��E���q�����1�w�jH_�����Pv���!.�����}�d�q������\Fh�T{���5�a�Y=�Lz�x�K���VS�A��/h�|_�/��K�{�����_���E�"����Qk
q��}�hX�}���g���4t�����'�:�MP�6�:�gS���GP���F\iff����2t�5��%C�H����c-]��2(��bX"./�4�D}�{
��������U��Q{���o�p=x��o�
�� s��{I��T���^C<�[k4)�S�v:������APk�0��&��CJ��� �
�r�E�S~5����5x����/�,�`K��cgd����2l���r��P��F�D��z�xY��z��������*���w�Xs��N3*t����q��������h���F�i\Y����4�$UH���t��:��4\�����H}.��H�V��G����~V���5�:�uZ���/�d��i��f*[����Mw������+J���8�}�T��X7���)H*�	d)\��&"�4 ���`1}�gA!/8�.0B��X�F��4�SE�C��F��Yr=��<������������qpec�~�)�������r�E!;S
�C�IVi��|���`��i�&��^B��S����"�4�&�';[t�A�%5�����5��nw�Rpi�F�3~�(e=���M(h��Db�(p��c�����5"dg/F
{�k�1B���W��g)p�W6l�b��vM�"�F�2;����!,����R�$���\R*����sRf���A�I���NT���-��%��Gn�f���igJF�b���KX�9�=�P�H�O��`�#���F��%��1���v5B:l!i�X�O#�o2����Q�	��\��4���}��;NP��J�X���
�%%}t�P��D`�Q9�b&�}p2c) c6i��Cr������A�m�J�H��!T�J�R�����f��4/�
����y�Bt������X���]�x�WJ��0���n �����W��Fn�4_2k�����cb6�eK;{�H�P
�k����B�W�
�1����5�>������1��>�Q6����7j��t�G����@��3hc���t}����Q��C�3�U����z+,i�����_oh8���B>�y�����z�"W��>s�X2t\�i�?&��X���A��%|��)q���b0)a��G���7^�Z�_T� �
=���(S�%/�����~�}������(��yp5 p���L`2����'�==W�F4lD~��
�)���L6'��H�)�^�4y�a����h��5$���9� �K�T����ae�2*�{�LLA������
g���v�@���A(�{� ��0*�
����H/�?&X������)���
}��|
j�#����@b���4R�
C���d{��6e�?o}[��D��g�o0������Y�<4}����\���44;�	������2� �$R�<������)�u�K�^�i����
_A����1]5��������L2I�O-	�K�JJ�8�hD`��r�-HX#"�ZT�~b����RB��p.X���,c�;������G',dK;�W)����;�`�2F�U<t����_S�������!5"_���������5�4�R�mIIW)�%���q�ff����.����Dz����$~L�aJ���aR�%�6CP}FIb��������'����B��5�����R�:R�6�!;���4JZ�;����1�����u�'�<u�	��AkL��:K�o�p	Z82���� �	8ec����b=�a����U�!��
��:R������{�,6�����&�=�32C��>AVx����S���	|�����-Sa������.�g-�bm��I2	v����4�6R�3���(��v���|���[)����@�B����nt�L��j�R=z?�o������a�����
0u�����(�Y}o��[��oU�W����1a�\md��<a&���^��&�r)�2) �)�ypW:8��A	K�/����h���A�2-����x�o�/�A9�!���2��J+P|�P���@H�j�}\��=�0k�S��KYldp�G5D�S��$&�U��<8��pz��2n�f0G�5�����5-�D������6Rp�u����J+�2Y�I���������w�����N��O�WKh�Y8����.�q���5��F]� �����+\���5�!���'Bo^
j�W���M�ed�2�����c+��)�=7%Z�_J(�����f��������#���c�O0�m2
r`);v@�k
;� 5-��j`R��~���{����HvBBt�f|U#�Hvo�`J�eA���[B8���a�����������3�K����f1���J�$��3�"�����������u���9^
�0#��s�Y��8D�-$����ype-(��M��t�4rv�5��.�0*� Q^C"(]G�O�����
�������S��MB���������
��N��R�G���&�|�71e�2���� D�;�Y��������X�O�q�����YgdG�m,p�#�X�t�eZZ�r��b���^�#��X�������+6��
������@�|p�b�%f���hQ#hIT�!���Z�64;mX+�]�
Z2�8S��8ib��u`�Zzk�F���G��M&4����|����������)��W���,�9�\���h�v�g\���jV��,�����	Wu�w�����
0	Tv��W��qsm��.���C�����N6r�R��1�������4b�(i�jP"jO�dFe�<3:��KI����������z$����pm��>Q/���	f������u��!^�Q����=�\�Y��>���4vmG$B_4�Z�"���GI���xB���FLI�<�!bt>��S&.��v�jGa;��Cu#.��nb�Q�O-����s�mg�!q���|a��������0fAuJ���2�"]G�K�A|�JD-~��(2M3
��ap��B���9`��~�f�c��W����-\�od
�S�O�X����	���M�>n�0�nb{c�f��V�+A.I�,�&@�����SV���GD��oX}Yu��1�F}s��X�%keB�R�M!�3s�D�L�+*��"�j?�50��LH
y[Js���\A���v��F��Ha:\��a�]M��tCH�8��1`2��I�|��i�?q����mPDj�����_�2���J�\n�/)_0�3�C�����������R���$�;����6R���9�r����)>H��Tn��H�4�<d�"d���m���R>��
����)��nE�@N���X2ipXSy3�g�I����>o���ms��G�,j��M�}|�����1!h3��1p�W�b�C����y���w��C��R���W-�lD��PS@�����
�B�U�?d{�������L�&e��M�9��3Q���+	��f�.
���������D������i�
������q8G�M[�$�#�����atUxHcbf�Sr�%����i��<l�2}���`�!f��i��H��s#���`^K0��6�7B�1A�������U�/����BN�r��W@�s�<��QX�/SV��������������k
��\X�/w=�OL2Q�}'&��cM��P��8��!�����!AH�zdc�!��K�s 3(�I�-c�%��H�{��7����<���� �*����|�N����L1Bh
��x_71��u<��T��L�������f;������m������n�>@���(��:8��B�b�v�_}�?.]Qy5��zE\������E�5�w+Z�
/��Iu�����__A�m��
��K�%��z$}�Y7���*���jya!/?����q�@�2}e�������]M��F�/b~����������5����(�-|�H�h������/H�v5cc�b�>��pq�4Rk�Wpj���a�9��yv���q#5��3���@�!�flR�\R����&6��A�CQ�u��;�F���5�1b��Gu��jNF��Hi��bc�n������&d�E�'kE�h�_^L=;^�~l�x�K�����,�$�j�.h�G����A�`)V���-���W�BOV�a{���RO8|��SD���"�\��r���f����~��}����C����Nj|c����	��
�8j,�8
k��8���c	���R��}J�*[�����O�Yo_2��;���R�)i��h���%���>nRP:'���55yl��F���Ek�@xH��|������v��a[R��A���{ *��m0x`0����H^��}5x���4����X�6|
#�x^���n��=P�o�Lby����0w?rjT�\8����`�m"�<�-�4c#_)#������>��>���wR~8�����T�*�H��������)�Y=�U)������mj�#�na�{j��!kd�A����V���i5c��36���`���(�?�r|`N��A��@P�����i
aM���~���$X�6���~���F�r'A.H�1ln�Z}pc
|�R�c��!3�����%��iWC�vp:�>n�0c�t����A�G�`X��Q��J�%��S6�&?�"1���L����}��*F�%,4�}��W�#�S
�Kp��+�����VQ�eY���:�����|�}D��������w<��d�����L��!���L�l��1Lj�
<���(��ML�x4N��3Pkgk��C�}
s���5~ak;#��:)���%��K����=g�^��]�C��B��7A�r����l��\�����F$lW#66���^b�2���r-����c*�H�L)�M%��R5����������Y
PD���x��S����#�=KG��8��A�Q\����S���DT}�r�xQ�%�3�lS&L��H__0R��y���VLk|�*/�D��|m;���KJ��W"�A1����H��c k��v
�
c�v��~m��-��RP����	��.y�;`\V��<I��>[#!��u����sj�?�uZOD���{�1�o{�T��QV�3� )��(n�u���NWP�����b<A���
k���7����
�l]:'����<�1yp0��(�����<���	,9���"(_��1#X������x� �l05D�V��{��ph�j��K�P*S���!�vlB3>~^���� *�VrN��(V��g/QP��������2�)��.��U6��_������T�<���0�4����-`���������	*�
jG�a�J�Z����@Q&1D��d���!���@KX�t|�FEK;�)R�1 ���K�p�����E�-2e��d�R��a��q��j��Wr�z��������$,c��%Bgz�����jSf�}jl�2�T�����R�N36��2B��d�9o�6�����kb5"��	�L�@��c$�}%V��]�y.6��	�������hM��ye���ja�$rv���z�S�k4E=��C���J>��"�����A���z�c�H
@W�(���Bg�
r3�����%�,��p��.GMlbf9�?4������H'K&"O��o)m�HYl���^'`�����5������[lB��1�7�Sa����s=P�������Py���2�%�J�~S�����X���������/J����J��F��Q�L�H�f�V����~|�j�B,HR�K�mgE�b3 ���h���6���aWj�(�6�
l${l|R)�_�!lm�HYl����m��0�Zd
�(j��?OR9���lv�x�H��k���UphM����a�u�����l~�	j%��1?�&F���T�B��5��)2WAmXcf38���%��`,,���+�{�S
)�8Hq[k���O��1�fo�����|�S����������<H(��6(�K� ��;�:�75<�����@b1����F��tt�������wk.��R��Z�f~������s�!��b%e��(7�I��q\��t�$�W�(���<88���D�g�a�T�
Hf��,g��h����MglB/c��fb����!��WIs`��������������}>��
k�K[�+�]5b0��0���Q�7�����>Y��F%Y��_W���(c��N���Q�9���E�Q[n�*���c� )pU %#�E����>�\�l�tx��
�
')�����P���HQ]�G�������Hz]L���p�J��O��&�2��������wI��r���P��A�Nb'�~�6�T���6f;{��a[R��5
�`�9^}�-a������>��������R5�7@������f5������j��!Q?;ZJ!�2��������=�x�:XB����������VYl����/$�/?p,���Pu�9���Z�I$�F�tP��}���b��.(�&�#oq<$���I.�%k�l��(���3�����
�������w]&5n�T�Q80���=d�����i�K�2E���~���xe��+�LK�������L���=����7��hQ8|�m����5�O};��}3�liW<Y6mx�8Q),���p]��er����H���
�K�H��S8O���v�������������b��e�U��*H���p�7R3�S�:>��6��.��.���>nR����r*��y��@r�Rx�8HU���1�m����T�0�c	���F� �d����T�sJN9z�F�(���������K|�$�^VK����-�L| G*C��_�e6�~��R����2l����>nRh ��5�?4?����4�0������5D{���CCQAm�
Z���>�d�$n �2&'��v��6}�+�v�5&��t��cZ+���#M$�q	����H;Xy "~�����E��*<dM�t��^�+%�;�����u��8N1���dc"�2Hypp��;���K������U!�{���8����������Q��1RB��H�A���@p�LA��w����f �f}x�a��4����FKt�i*+�F�
Mguv�X�~�`�g��5��5EK�5,�����]
y�GJ�]~,C�����]r�y�+||��QH8Y�*^~l��`�C�����w|�	�=������_��H(��z�629��I��BJT�#�����#W6��-;��=Me"��6�W�����@����u��j�~���$�#U��2����"X
��&�ps�S��}�)8�@��oU�1���SH��9��1m�86/��h���	�����@��q�P	�w+Q:�4w?v\_��G}��
w#�?�.� ,}�\2���e����ks�#���-@������}����/iWll�	������^��si���=iE�����	�������Yr&��e[s�Hy�j��5W��rM�(�F��5�|�M3�`��*\(I��v�x��	-����
�0��&hA���������/��f%E>�q�$_V�����j��6���X���������KD��Q?�(���q�����r�|,
���e(#y�GD`���~������4����QQxP���w��o���bih'2�H>���A����&
#���D�P�u	��C{�����<���my��9
Y������i�&���43�W��f��BLH�IG����������5))I��H��h�����Q#:h+��6<�/��x*�Y#�#��Yu\��g�Y�ll!9B��k%�s�h��N����\�t����#��F�$������~�2lKJ�8�B<�xNB�w�)�?2.n�ik���N0��c��L&�$�^j�RJ������v��*�N�A�>0v@3��c�2d���{�NR�+�����+66]�	�qpW
�1��P@��{���|kqq#50�r�0U	]��R��)x�>��������T�����H�U����f�E��}D�x@����Oq���60�������[>0y��!�4K������r�I~-f�����N;�*�}��R~v��c�C��v��������wTu�i#�w���=�al�W�b\��Q�9L����Q~��x ���}p���5A����,j K�>%?�K�O���gi)�M;�>�4������u.f���=�Xp������>%�������C�I��8m�i�)�'S��B�;e�+E���F���S���%d����a��HIg]p�*�w��
���T7C��Z�����N:"�E��;V� ����-R�a���������~����#:C���9/e$(�1��L�`P,�D�zDD�A��l�9�XQ<5��+�$\3�7:%6{��_c��mh$v��VP;�C5����p�I@�.!�
WA4R�*8$gZ
$s�<
�bqh��R�mI�?.=�L)�������P
}�?p��Oa��^�|`��V��A1L6���J@+B�W-��LJe5��{�<�E���07���K������\GPn��Qo����!��^���

Uo��a,�:47��2?k.�����ll�!�>>bR��Hj-��_{�L3@Q�JV��	�Y����eb���5"F�5/��LJ��]�	`,o@�03ee�j��lt+w�

�MP�WI~���}=��du-�-���Z���:��PlJ������)_�v��2li�J�J!����x�R��]`cE�A��k#0J%�U�jo���jK��[�����{�,6����Xj+|`b�����w�F�cDP�dc����HA�]J�%%����bv
Tu�.�A���k,���/��;�
~����������aLC�����tt�hE�Jd��������&���A�#"ZT]���)�yq����+e�"%C�>�RpDk��bF�+�X3�?ne�:�hJ�j��5$t��:�u��p4�����j0�%� 	�;B�BSR���';�1�(����+?n'#�J
B��	����qy$���K�����m.Y�+�3e��|�wG��sU li7�,����)�9ft��0aR^����lV�[L��{���,y��cm��R+��h�1�B����L�B%��_J��Bi��
�FF�|�3�=go�-�L��|���1���	H��8`=9N��������&{�QKV�H?7
�����?+������.Mc
���G���Xq�PXjeN����
>���?E�������CDSD$�@����!&��U���)"}��1(�D�����JE���Y�6��j���
��>�D
��'���1��I��7�BQ2�+��P5�/Hb�D�Y
���CU�%���TRG/@�,U��~c�$X0�R�OIG�$��	+m��1L#"�Z��l_&j��q��:��
���'�f�T$�R����k'����N��S�Ahv=�-��K/�F&:�	aW�<D���P
|����
����$���4�<a�Hi�Fdc��M����dZ���I��ud~:�1[�HC�>E���"�f*��9L��"������m�U���T�����h�i�b�)A��sF)�������U�P:��8b�!p�:yp��'��L�.�J��y�y�%;&�a�PS6:�k}��A��S����A�i��g���r��F��%���).9/8��v5dc�!��c����jx�C���sDXf{b��u�a#k�����EJ�/�M���i���Y��v��g���5B��=��������wR#�y�r_��aK;��B��R��5+P:'��;���������OB8�j#k���c���������z*}t�����Xp%0�����C�B�hB�6:���zqG���q~����x�%�}�c���k��X�����R�I�z`F��~�V��������df(c�����8�5"�����������:Dy���[�,2��Y�7En�qU�}����<d�9k����1&8�1�/����\��:�`!��x�� �=!>?����6�)/�
jGA�$�~�M]k3f�`U�iu�7y��n0�������a��	]�b�Pc�y�eA�
#k���C�������j��,�&���TM�3�jV4~k�����L*�����ri$�����>�����w������E����%_d}�-��4)y�0l#%}���K��x|�����BW0����3���v�G�^����9�F��e�#�j>C����d�%�U���2Ebo�����Z��B���2����""o�~��R���Va��k����o�(\�_J�=�Q��^C�&T���WD�����6*1~B!��Vs�a_}Oze������=��/,6���H)�6,���$�lZ&�a��d���L�����}�:���4������K���H�e�Z�
�*��V
jD����R}�x��yeZ����"n[4����zC/?og������<�:����@`���Z?3��
_A�Qo�y;���-Kw�h��grJ_�)���L-��t���X([�i�F����}\u��?c!hL9�dTX�F��f���U8I�=/�Z`��R�f �}tQ��7�����zb�a#��1��!�d��Qag�5��`}���J�
I���`}#�����wPh������8����A������
�%�P\��t@)�h$��4������&A�R5�0�a?,��f5`Q#�(�R����f`b�\�%V��nrL���%Q���������a3���:]#F�p��#.��:�P���a#5BT�r���a[��\Rp}�GWI�Lz�G+��L&��!��L*#F�Y@�<�����`f��R�Mg(%}\��u@(f�{�� �F���w���j���)|k/�Pm~E��Gp�����<�o��= ���b��������A)�,mTrw���s�9�bZmC�2��������7��[B�L-���T����W�"SM��^�f�"]���In��&�L ,U�%�� ��('(��;'�`�a�F�
x2&��U����s#uR�6���&�KB��� ���Qp��R�����v�l\}�+v)!_)x�C
X*#�

mB��=�A���I�j������Io1��Q&��</��WJ�/�
�APi\����as����J?�7s�n�-��M��-)����"�`[�N)F%
N��W�i��d�di���v/8���v��f�MJ�*%��~��F��t+,��I��;� ����7h����z �f���z	�,7��1���E�x�����$l/���$9N��n���_��0��~�fdj�vT:�0xN�F�
�Eq%(x�w�������O��C,�����w�"�����x(�{�:�������t��_3���T~5��ig��2lKJ�8�e�����~xoA�5y(Z������lGi��5"�#�rY�Xa<M��f����a��|�j:�tv!�J�M�0��]�%���������A������ ���q�4Zc�5c�}����rNc6�E�4����������1�C_C��$�\���E�w)WHot�U��^e��W�E}6)���~�(�r���a�D^�q�y'�}����O�0<�^)$�l� LM?T
#����|�SQ� %�ua��]��d�U!���[C7H���RA���WP;:hTc�����y��*@�1u	f����9��L���
����G�T�Hi�����j��0�����$��M�c�]q��e�fuAc���O]�������]��i����=
�u�/-�]||e-���lk����iY�
j�Pv�[���F��QZ2X�������V+��Fx�*x�����%|NH�R���2lK��1��h����X�=��XVF������g(M��'�>�UD���y��F��Q�$&��r[���F!H��Q`!E�q����w�F�9G�|N N{��
_A�Qo��"J� ��,
�HCr��<��;<c�1T��B��u!"(D6�	/��/�`3�:�����;����i�h}	��*��C6�1�����;O��ak;��H�.����3�Rla&^��d�o�i����Q�.��-Qnd����e�_G+�D�n�����iD�|�mX�H'4w�9��n�*V�CPd���1(W�~I���I�����D��u�J[r\z��~���E�%5�Bcv���
H�����
�%%}��P�PI8}-���_�CQ��<��d� �j��70l��Z6�X}Pl/C������zjD�*F��u9	�������r,�q/J�������>R�����U��H���X����z_TM���}�I�f��K��$L��x��>}w_�����Z�f��l�!9��{N)Xi������M�%����	t����5t��A����Y%�
c�9����0��&��L1#(]5"�f�������S0��U����y�Q��
�w\/u�i����W�����������g,���]��U��5"��������<������^��W�iKo�J_�Ta(i�`l����T���y	��v�����8��rd*����[��y��t(�"��~�v.Ve-�{-uV�0T����8�����ZbH+��Y�|D!�����K���yu�C����1�l������q�)�`�3��Rz�x�Z61Ez/w��~�Ba��.��-JM)�tZR,g]p>IS1$��}�����we�$�/E��wET� "��AQl�""���������+(�o���xqm����2���;�? �k�AJP��>BE��%�K�6&��^'<dL�������-Q
ftq6c,X�8Ud�\ZAm�
�,A���*eJ��$��JC�x�E�s�3R�f�//K����&,��:���F]@)1��B�����W�:���2�A�J="�������f��R�����>DP���YD-e�'*����,Q��;]
���_Q����5��_A���u�`FM,�P�6����q^����J��^S����A�X�K)Y;
q?�!Z����!����{k�|Z�5�PtUw����&�f��a��/Z\�on��������	&��X2��2t�DN����*(���5D,:�T��Y����p�N���Cc��2��:wG~[�Fz�����4R���0G��9#{n�D���s�_��Q�owL�bJ�N���:��+�h������q2���8=+]S�L�4�"]����5
CO���g���V���!���������r���=l�r�)}ua�!'��H�'������0��j�Xl�+%}��1}Q�g����T�3R��M]�M����E�s-�\�m	GD��tJ��a���K@D��{�<0���d��]d��.��#�o~:5������Q������a�UP;:��	��"�xN�F0(NQ��b�0���1;mDr��e>��3�8�
5f�7A�(�����w��������r=�<;4����G��x,Y
$su��M4����I6�	�$�41q�I��A,3��D�a0��1t����w���u���a��i�u�OG7����.3����4��*Q��[��r#�H����q?�{9lm�HYl#�zA�s8*����������d��%�0����
��	!����:P�6�C%��##�*�����(/&�	���=��=����Ru
?�&�C�k�]�T]��"~����:��J��l�AE{��<��j�����
g��;;�c!��3
k��D����U#T���?y�J>� p���o��-�����d�%Q�<�"��)���lB�����������mK�}���!���s�\�(M��#�s4D��R�<��flXN���f
e<�{L(�����L���]���D�V�M�lc�&�I�Tb0;^!������j�����}Q5
������.��D8L��xIg�~D}�+I�E��&�������U�)�|�����Fl$�P
5���z����l^gY-��^��[��E�vOu+��%�p�/�����+e�4bz�����C����P�yxH��[�2��U�2��'��������e�����j��0������y�X�	����s>k����E9���K�U[�I�u��7yp[t�2H��������@���st8O������)�`yZt�{c�n�1��Q+!~h;
��a�5�oK���A������&�L�������d����==�j?��WV����U�E&3<���U5�����{��v�\��Ja�21af�2�	���qEj]�F����S�AJm����j���OW)X=��� B!25�>�����H�����T�%���a��Fl���.�q-sa!)�)J<*u��Q(jg3���1����#���fU�e�K�A�De�����|�|�����D�h����l.�����o:�u���5����+���{`�o�/�j�����nA&���4,�����5Y=�4�R\����>�����e����T�c��bD��I��w;���m��,Uq�A�+���Q1�[y���*��b�o/o�����Fk������jq����	t����4��|�����@8�Ia �pU��;��8�3/���A|�����ZD�Gl^�$�����q����a#��4�� eE������a��Y��I��BP��kV�BP�+���,���N)M ����,��L��x!oQ`�r#�g����N������o{)�����G
���	�������M7lB�9������%�^�J�������|M����S�����~|�R�����"%�F�q�3M@�=���@���R���9���|8��8q�I�#�P��R�u���L\�m��}��Z�<d{hy\��RPz�rI�W�������L�K�d��!x3�Dz�X�����V�����[�i����<w�*���BPOGg0(���{��D���Y�!����g��e�f`��������.GH�-�0+f�`��3����/� �nU?�����q�x/c���Q��4j@Yo��K`P�Pu��+k�&a�hJS>BkL�����`&
MCU��,}+h	�$~s��V]�a��A������aUA6v
����w�{N�B��{�,���������@Q��Tz���"+
}�?����W�0?��ld��j��������`-���T�K���4J��Y_��I��6��!>��eh	���f�A��j//�X�RZ7���i;�����t��,��P5CLi�C�����_/��#b^�����\)<H,�����EF��?E�B*�������Of���*���q��vt����I���wt�V�����H]�E�6:{wa��Q)3�����hP�+c��E�����]��Hz`=�+CZ�0C�I�G�h�������%��X\1��a<��!\���D�g�I��!������+(
�X;�
jG�}bB�J�B��������1
g�d&�NW���#,T����+h�S�$���I�70V�SD�L����q5�8*-2��i�U��@��)!� m�88v��J�^�
�c#���~���Rv�|�	,tN"�7L�t���T�n���:�*o�����"�!�W���^���	�t��ky	�<��x�R�U�����)��� t���X����\������z�h��F6B�w�y*��mWC66�x�8�#������n�@�2��W>����'�������Y����a����q����J@�I���yP?����:/����N�����������G:Hc��?�xq��ym:l�3�I�:�D�\���?�����_��O�����!�\�����<w"������:/����z����uS�M��FF}?E5>MN� lm�HYl8����Flj�a��X�4�>0��Y���T�t�!�X�g���]�`�_l�	=/?�
s���H� V����YN
�,G�1���\�0������m�);�
jGht���+��Q"�g�����Nu���U������0)"T�p��j���'����o��R�3�	�����X@��9���@$U�#�K��&��M$������k���5,����V�3Z�$ZCxkI�GA�b��i�)�O;���	���,�@<�������#�n���V��J���g&�N�I��UOp�7��Z���
�
��N(��+Tf:���h������2B����K~���`P<&"�����b�735^A�%U>�*_vy���1��RB�R���������G�����?����w��8g�\T;���S�x�4����(�^x^b@2yP7{��(���Y%WhT��i���Fj��0����~�V��6���\R��>��d1#������(�C"5A��$���v�_%��4���]J�WJ�8��`R�L$�����f��>QF�f_�l ��������`�@\�)��P>/�;3�t~:IE���m�������kWL%�	�X(���(�n�
S���#e����_��r��R��(�oc�,w&��+:�}���F��T6�EA���F�H�M#�}���)��+Z�WI��h�����
���	~�r��^M�F�eR�%�*�	5��2��-R�{�A����J�R�Ob����}�)�T��FD�sF�m$�ei� &�y0q;�pGB����\�Y�H��m0�6��6�!;���_�@�����[fT�#WG��s����5�A���%M#W����<B���73_�'/������ypP��(|����dz2�B
A
F�;
b�.6��%��yFZ��A�Q�:V_\�u-�8N���p� z��cYW~Z%tU���w3al�~K��O���W�J��.)6R��n�n�(������a[R��A�:�k#�An��~jWP&����58k�zp#�E�}���_��v��a�l��������q����Q��aT(�����_�UA����tP
�L���3.A�� ���:6$� �4Xc:�.I�3k�q�!��H!;d���9cn���elC��f�1�K�|�T��O���IcY3���1@K� A��C�A���}-V�WJ�]����.G����02,jF�g�h^H|�N�G�{��&�o�:��7A�����������Xu`��i^tb����EF��� �	�K����
#�\���M#�u���F�A����Q�h
%56��,�}�%��(m;�)�M;���c����u��
���hPO�b�i��9�|��`'5����u�3ll
�N#*t����oH��T�g��Q��X��{b�50��YMlt������gp�GT���vb� w�Q�T1�#�un|9�
�"H��]���f��'��j�I�������X�2���Pb�?���9�W��O���~��(�q>�~K!H�����jE��Ja�I���+�VOI�`XF�"�]W�R�om����u�5q�6�����%������qb"�S��3$�(�A�t�D��wR�$���G>DD����)!]�?b���W���J���<f����Q��T������A��_�1�8y����t�c�9�V�I����e�C}&&�o��F�q���n�_�7Rf�|I)�n��p��JY%�\bH�WRH��X:h�u�K�yF~c�OoC�}���M����*B�8��=�5���9��������^9����i�D,&,�F��Y�8����p�R<��JE\YI���_�w'��U�_=������4������/A��QB��"� �2d
@��Rcv����������\6��|������\��W�*��5^Os�_�
1�:u����E`(�k�Y
�~������H����c�>�R��LL(���y����d�R@,I�S
����H������|9�r�sB5I��P Cm$�m�	���y����U�>�����8��^���z�������bI#�6�f�	fc��I=x����:��b���ko}t"����i�U$�wb��ik�]�\�:	�_��OR[nf����<W�^�m'N�~����|X�Z	�-���8���^H�F��1���U:u�e��\����i��1�{]���=��.�����_��.c�>��	���.Y�,���8�p9'&�O���m'��q�[[
2��w��a,�T)�W�����k����?`�������m�\yLY[,�I,C0�O*g�+�6|G���|+i5���$z^��0"7�bGw|�<���P��7V|�<��X��$��������Q�����-��~Q_���A�BO�]����`�����c<V���	��=[�`W�xk^����_���%��+���Y�)�d0�����!&=��6�����>���O�xX���)J��e��QqN{�y[�?Kk����|_����Z�4�55����2;�q
�v:m<�'����;����VO�����w�|%}��������"�;��v��m�3^������������A.���@��/���.O������
<����m��2�����}�"����9$����~������o������]����,��	eRSm��Xe����@������y�~\C���]O�S�Z�RN��3���,N3��W�����o�����p�0�m#�����,L��7*�9 ���/kk��\���_29<r�Mg( ��&!F0��z:�\��O���b���j�\����e�uZ�M�U�2�>��|a��xs���o����� 0������R9�������Z5�����`,.~���As�+��	$�������?���,��'z����]�L�v������NO;{]���Z���x�L��x��eg��L�RD��Pd�$lw���������<V�����P�+���ZV\��g	i��>��>�K�$��������:�Q�q��PIt�F��lW`��U�
���~x,���N�����W!��d��?�������y<�d��)lt�hZ���_��?�7����� �����xSPLWl��M��I����s���n%u�]�>��@� h5�l1h��R�#k0
��I�C�z�^}��c>dl����|��5�;��v��n<�5�����"���R��p�\�:��R���� G��~���t��|3ut.l4��u9��%A���s�\
��������t������~Q���m�>�x'����_
�������
�7';��E| �;Tp:����}~���uMt���H0���g����<��~n���'��~+I/���~'���ZC*�q��u�~bp�����q�Ws�2�(���y�-o��)���������cPOrF;t;TQqs������R�S���NM)��h���r���������+����y�>#��F��#X�x��v$+!�"4\!l7�� �9'�J+��Gt~��<=��0�	�W�i����};;tl�#�����'��/WR�h���x�����\��'�����������
|N���&2Eb���.G���`�A�zs^M�~�z^����Q���s���t�c�<��A#nx88������|{�|j��g�|#�6�>��j���V�����X�VB$�*�&Ul�7W_$*l�����u��PQ�������#�;|�n��n�z�����5��?��K��c��c`\�B%h��+v�v��goM����4Wi4Ww�e��q�*�3C��Gn��G�{�M��������"���-�K6��;����N3��������~e��������k�n�H�(�E9���]l���:��Z��w�O���)!n3���OC_~��	��OW�n���[��t�;�O�l��&��p�X��>c��~��YB��iu>�;�Z�^Y��N��K�=W��'���/���uK�GM�$b�G<��l����=���Z�b�@�W�|I��Z��>A{43M3+�J��� g�����K���~����xb����J&����I��*	!~��p:`u�C��wI���KI��-V!��3	NA�����5�����G�/����a�}���Z������,�Z��;��9��&�H�_��:D��5�Z�q��DU���<1�=
z�_|.�%��-�����O%$��! �v�s������fO�������>'�h����-���W1s�Rq�v6�F�Ga��gV|��C��\�x�?�sz_����Z~�"���G-�cY������[���-�D$�"���u��@5��l����������<����S�o`�I���H�y�}+���#&�g�c��8�5h�YTN-���������uO���m�b��.|cw�N�Id��������`H�n��2k�5��:~��R��7_a�$,w*��&�_/v[q��ry�v���"O�>s�9m56�M��x�������c��O���_�!�j�k���}��,�)��
��R��w�"�#�|�E���X�Z��Yj�I,��pK��F�e�'h�=8�m��U��L���7��S���g{4�uk����[����^s��)�O�<s�|D���}oJD��O5�F���)�\�s�dW�QY�5���
�Q�$�j������|�����o���;m��E}<l����>^~|�z��W%�A�-��h���V����
h'�@*b����g�P7�u���'�	e�5gO��G����4�}����
�(���TU$��u��������=#E�5
�p��/����N��h��������e���������
p��o�a���q�`���}�s�z������d�,Q���c�A�>������5�
���s���_����K.�2����d+9�����IPy����i�~����)��/���q&�����?�w8��e�>e��G�=��+�}�x��_��M�������N�N������j���_I����U�C����o�9U-~%|�i��Z
�Lw9���V���OZ��k����K,V'
W���'&���O��ze��������������Uk���	�"Y_�C��8����s+�N��X�5���Q^}�k�W�z/�m���������������P��^}�k�W�z/�m���������������P��^j�>���i��$,o������H�����V�m|*���E������A���������_����?�m|*���E������A���������_�����i���=��<a� %���[�%z;��j��+���m|*���E������������_�����+���m|*���E�������_>�D���4�
$q�/���G�l��d��J���������_����?�m|*���E������A���������_����?�m|*���E������A���~2|'{�l��4��F�7����7��Gl�Z��k�W�z/�m���z
��������������Q�k�W�z/�m���z
y��_�%^��.�����r�_�Km,�6�#E��jV��2S~O��/����hzc�x8kX�������eWR�`���Z(�i�leLv�)\�Q�`�q�h���gx{U�ukI'����$G
#v�XlV'8���*J�N�QEW�����)�uk��^mj�����JW���]
�?up��r7����O�����-2[�oD�A�&h�	�{2V���,G��(��
���Oe����@n���I�22)!9b1��
]��<R��_�-�n-���\����0Q*���>7c'�����&���M����k9�@��'� ����(��+��W��In5��(��2�oH�f�
���e�c�z�p47���cC��]H}by7���p�8��7`d��2G!�x��Z�������>	
;K�Pn�s�gv�"nr�����Q"��]J��E>\���e�����s�c�m>'�K/�4��� wF��")��V�`0M�O�v�;�+������;�k��{�Dk���Uv?x�[��T���utW�k_<g�j�v:g�n�Kh�;��x��p>���	<	C�62G��@��|A�����������+n�7��2�yUW��p2�_EV?�5�#E���m
���e��w�W���Id�q���[P���O��B��d�	bT�'�5� �1�M�n.7�j�����{�Z�][�����<Ah����	Rpj�QE�x��^$��������@���� ���b��R	��T�8/�~j���o��S�i:��n4�7y��4��f@����V��"�7U�
(������(��(��(��(��(��(��(���Y�:Iss4a�P�C6	du��4PX��P
>�_�'p8�$�5!�t������q����n�NMhQ@b�4�$���?���������5z�(��(��(��(��(��(��(��
endstream
endobj
9 0 obj
15491
endobj
11 0 obj
<< /Length 12 0 R /N 3 /Alternate /DeviceRGB /Filter /FlateDecode >>
stream
x�U[�U��9�
�����-�C�t)�K�����[��k���A���d��$�L�}*�����IA��-��z���R�PVw�"(>�xA(�E��;�d&Yj�e�|����o�����B����%�6s�����c��:��!�Q,�V=���~B+���[?�O0W'�l�Wo�,rK%���V��%�D��j�����O����M$����6�����5G����9,��Bxx|��/��vP�O���TE�"k�J��C{���Gy7��7P����u����u��R,��^Q�9�G��5��L�����cD����|x7p�d���Yi����S��������X���]S�zI;������o�HR4;����Y�	=r�JEO��^�9��������g�T%&����
������r=)��%�[���X��3".b�8��z����J>q�n���^�\��;�O*fJ�b�����(r��FN��X����H�g ��y�O����+�-bU��MR(GI��Z'�i����r0w]�����*x������u���]�Be�]w�*�BQ�*����S������������aa����,����)�)�4;��`g�>�w{��|n J������j��m*`��Y����,�6�<��M����=�����*&�:z�^=��X���p}(���[Go�Zj���eqRN����z]U����%tAC�����^�N��m��{�����%cy�cE���[:3�����W���?�.�-}*}%��>�.�"]�.J_K�JK_�����{�$2s%��������X9*o�����Qy�U)��<%��]�lw���o��r��(�u�s�X�Y�\O8������7��X���i��b�:	m�������Ko��i1�]��D0����	N	�}���`�����
��*�*�6?!�'��O�Z�b+{��'�>}\I���R�u�1Y��-n6yq��wS�#��s���mW<�~�h�_x�}�q�D+���7�w���{Bm���?���#�J{�8���(�_?�Z7�x�h��V���[���������|U
endstream
endobj
12 0 obj
1079
endobj
7 0 obj
[ /ICCBased 11 0 R ]
endobj
13 0 obj
<< /Length 14 0 R /N 3 /Alternate /DeviceRGB /Filter /FlateDecode >>
stream
x��wTS����7��" %�z	 �;HQ�I�P��&vDF)VdT�G�"cE��b�	�P��QDE���k	��5�����Y������g�}��P���tX�4�X���\���X��ffG�D���=���H����.�d��,�P&s���"7C$
E�6<~&��S��2����)2�12�	��"���l���+����&��Y��4���P��%����\�%�g�|e�TI���(����L0�_��&�l�2E�����9�r��9h�x�g���Ib���i���f���S�b1+��M��xL����0��o�E%Ym�h�����Y��h����~S�=�z�U�&���A��Y�l��/��$Z����U�m@���O� ������l^���'���ls�k.+�7���o���9�����V;�?�#I3eE����KD����d�����9i���,������UQ��	��h��<�X�.d
���6'~�khu_}�9P�I�o=C#$n?z}�[1
���h���s�2z���\�n�LA"S���dr%�,���l��t�
4�.0,`
�3p� ��H�.Hi@�A>�
A1�v�jp��z�N�6p\W�
p�G@
��K0��i���A����B�ZyCAP8�C���@��&�*���CP=�#t�]���� 4�}���a
�����;G���Dx����J�>����,�_��@��FX�DB�X$!k�"��E�����H�q���a����Y��bVa�bJ0��c�VL�6f3����b���X'�?v	6��-�V`�`[����a�;���p~�\2n5��������
�&�x�*����s�b|!�
����'�	Zk�!� $l$T����4Q��Ot"�y�\b)���A�I&N�I�$R$)���TIj"]&=&�!��:dGrY@^O�$� _%�?P�(&OJEB�N9J�@y@yC�R
�n�X����ZO�D}J}/G�3���������k���{%O���w�_.�'_!J����Q�@�S���V�F���=�IE���b�b�b�b��5�Q%�����O�@���%�!B��y���M�:�e�0G7����������	e%e[�(�����R�0`�3R��������4������6�i^��)��*n*|�"�f����LUo����m�O�0j&jaj�j��.�����w���_4��������z��j���=����U�4�5�n������4��hZ�Z�Z��^0����Tf%��9�����-�>���=�c��Xg�N��]�.[7A�\�SwBOK/X/_�Q��>Q�����G�[��� �`�A�������a�a��c#����*�Z�;�8c�q��>�[&���I�I��MS���T`����k�h&4�5�����YY�F��9�<�|�y��+=�X���_,�,S-�,Y)YXm��������k]c}��j�c��������-�v��};�]���N����"�&�1=�x����tv(��}���������'{'��I���Y�)�
����-r�q��r�.d.�_xp��U���Z���M���v�m���=����+K�G�������^���W�W����b�j��>:>�>�>�v��}/�a��v���������O8�	�
�FV>2	u�����/�_$\�B�Cv�<	5]�s.,4�&�y�Ux~xw-bEDC��H����G��KwF�G�E�GME{E�EK�X,Y��F�Z� �={$vr����K����
��.3\����r�������_�Yq*������L��_�w���������+���]�e�������D��]�cI�II�OA��u�_��������)3����i�����B%a��+]3='�/�4�0C��i��U�@��L(sYf����L�H�$�%�Y�j��gGe��Q������n�����~5f5wug�v����5�k����\��Nw]�������m mH���F��e�n���Q�Q��`h����B�BQ��-�[l�ll��f��j��"^��b����O%����Y}W�����������w�vw�����X�bY^����]��������W��Va[q`i�d��2���J�jG�����������{���������m���>���Pk�Am�a����������g_D�H���G�G����u�;��7�7�6������q�o���C{��P3���8!9������<�y�}��'�����Z�Z�������6i{L{������-?��|�������gK�����9�w~�B������:Wt>�������������^��r�����U��g�9];}�}���������_�~i���m��p�������}��]�/���}�������.�{�^�=�}����^?�z8�h�c���'
O*��?�����f������`���g���C/����O����+F�F�G�G�����z�����������)�������~w��gb���k���?J���9���m�d���wi�������?�����c�����O�O���?w|	��x&mf������
endstream
endobj
14 0 obj
2612
endobj
10 0 obj
[ /ICCBased 13 0 R ]
endobj
3 0 obj
<< /Type /Pages /MediaBox [0 0 846 594] /Count 1 /Kids [ 2 0 R ] >>
endobj
15 0 obj
<< /Type /Catalog /Pages 3 0 R >>
endobj
16 0 obj
(Mac OS X 10.12.1 Quartz PDFContext)
endobj
17 0 obj
(D:20170319071534Z00'00')
endobj
1 0 obj
<< /Producer 16 0 R /CreationDate 17 0 R /ModDate 17 0 R >>
endobj
xref
0 18
0000000000 65535 f 
0000254106 00000 n 
0000234004 00000 n 
0000253878 00000 n 
0000000022 00000 n 
0000233982 00000 n 
0000234121 00000 n 
0000251069 00000 n 
0000234189 00000 n 
0000249845 00000 n 
0000253841 00000 n 
0000249866 00000 n 
0000251048 00000 n 
0000251105 00000 n 
0000253820 00000 n 
0000253961 00000 n 
0000254011 00000 n 
0000254064 00000 n 
trailer
<< /Size 18 /Root 15 0 R /Info 1 0 R /ID [ <ebddec571cd3e0bece7ab6c2a5e7a4d6>
<ebddec571cd3e0bece7ab6c2a5e7a4d6> ] >>
startxref
254181
%%EOF
#108Robert Haas
robertmhaas@gmail.com
In reply to: Pavan Deolasee (#106)
Re: Patch: Write Amplification Reduction Method (WARM)

On Sun, Mar 19, 2017 at 3:05 AM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Thu, Mar 16, 2017 at 12:53 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Wed, Mar 15, 2017 at 3:44 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

I couldn't find a better way without a lot of complex infrastructure. Even
though we now have the ability to mark index pointers, and we know that a
given pointer either points to the pre-WARM chain or the post-WARM chain,
this does not solve the case when an index does not receive a new entry. In
that case, both pre-WARM and post-WARM tuples are reachable via the same old
index pointer. The only way we could deal with this is to mark index
pointers as "common", "pre-warm" and "post-warm". But that would require us
to update the old pointer's state from "common" to "pre-warm" for the index
whose keys are being updated. Maybe it's doable, but it might be more
complex than the current approach.

/me scratches head.

Aren't pre-warm and post-warm just (better) names for blue and red?

Yeah, sounds better.

My point here wasn't really about renaming, although I do think
renaming is something that should get done. My point was that you
were saying we need to mark index pointers as common, pre-warm, and
post-warm. But you're pretty much already doing that, I think. I
guess you don't have "common", but you do have "pre-warm" and
"post-warm".

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#109Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Robert Haas (#108)
Re: Patch: Write Amplification Reduction Method (WARM)

On Mon, Mar 20, 2017 at 8:11 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Sun, Mar 19, 2017 at 3:05 AM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Thu, Mar 16, 2017 at 12:53 PM, Robert Haas <robertmhaas@gmail.com>

wrote:

/me scratches head.

Aren't pre-warm and post-warm just (better) names for blue and red?

Yeah, sounds better.

My point here wasn't really about renaming, although I do think
renaming is something that should get done. My point was that you
were saying we need to mark index pointers as common, pre-warm, and
post-warm. But you're pretty much already doing that, I think. I
guess you don't have "common", but you do have "pre-warm" and
"post-warm".

Ah, I misread that. Strictly speaking, we already have common (blue) and
post-warm (red), and I just finished renaming them to CLEAR (of the WARM bit)
and WARM. Maybe it's still not the best name, but I think it looks better
than before.

But the larger point is that we don't have an easy way to know if an index
pointer which was inserted with the original heap tuple (i.e. pre-WARM
update) should only return pre-WARM tuples or should also return post-WARM
tuples. Right now we make that decision by looking at the index keys and
discard the pointer whose index key does not match the one created from the
heap keys. If we need to change that, then at every WARM update we will have
to go back to the original pointer and change its state to pre-warm. That
looks more invasive and requires additional index management.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#110Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Alvaro Herrera (#100)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Mar 15, 2017 at 12:46 AM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

Pavan Deolasee wrote:

On Tue, Mar 14, 2017 at 7:17 AM, Alvaro Herrera <

alvherre@2ndquadrant.com>

wrote:

I have already commented about the executor involvement in btrecheck();
that doesn't seem good. I previously suggested to pass the EState down
from caller, but that's not a great idea either since you still need to
do the actual FormIndexDatum. I now think that a workable option would
be to compute the values/isnulls arrays so that btrecheck gets them
already computed.

I agree with your complaint about the modularity violation. What I am unclear
about is how passing the values/isnulls arrays will fix that. The way the
code is structured currently, recheck routines are called by
index_fetch_heap(). So if we try to compute values/isnulls in that function,
we'll still need to access the EState, which AFAIU will lead to a similar
violation. Or am I mis-reading your idea?

You're right, it's still a problem.

BTW I realised that we don't really need those executor bits in recheck
routines. We don't support WARM when attributes in index expressions are
modified. So we really don't need to do any comparison for those
attributes. I've written a separate form of FormIndexDatum() which will
only return basic index attributes and comparing them should be enough.
Will share rebased and updated patch soon.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#111Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Alvaro Herrera (#96)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 14, 2017 at 7:17 AM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

@@ -234,6 +236,21 @@ index_beginscan(Relation heapRelation,
scan->heapRelation = heapRelation;
scan->xs_snapshot = snapshot;

+     /*
+      * If the index supports recheck, make sure that index tuple is saved
+      * during index scans.
+      *
+      * XXX Ideally, we should look at all indexes on the table and check if
+      * WARM is at all supported on the base table. If WARM is not supported
+      * then we don't need to do any recheck. RelationGetIndexAttrBitmap()
+      * does do that and sets rd_supportswarm after looking at all indexes.
+      * But we don't know if the function was called earlier in the session
+      * when we're here. We can't call it now because there exists a risk of
+      * causing deadlock.
+      */
+     if (indexRelation->rd_amroutine->amrecheck)
+             scan->xs_want_itup = true;
+
return scan;
}

I didn't like this comment very much. But it's not necessary: you have
already given relcache responsibility for setting rd_supportswarm. The
only problem seems to be that you set it in RelationGetIndexAttrBitmap
instead of RelationGetIndexList, but it's not clear to me why. I think
if the latter function is in charge, then we can trust the flag more
than the current situation.

I looked at this today. AFAICS we don't have access to rd_amroutine in
RelationGetIndexList since we don't actually call index_open() in that
function. Would it be safe to do that? I'll give it a shot, but thought of
asking here first.

Thanks,
Pavan

#112Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Pavan Deolasee (#111)
Re: Patch: Write Amplification Reduction Method (WARM)

Pavan Deolasee wrote:

On Tue, Mar 14, 2017 at 7:17 AM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

I didn't like this comment very much. But it's not necessary: you have
already given relcache responsibility for setting rd_supportswarm. The
only problem seems to be that you set it in RelationGetIndexAttrBitmap
instead of RelationGetIndexList, but it's not clear to me why. I think
if the latter function is in charge, then we can trust the flag more
than the current situation.

I looked at this today. AFAICS we don't have access to rd_amroutine in
RelationGetIndexList since we don't actually call index_open() in that
function. Would it be safe to do that? I'll give it a shot, but thought of
asking here first.

Ah, you're right, we only have the pg_index tuple for the index, not the
pg_am one. I think one pg_am cache lookup isn't really all that
terrible (though we should ensure that there's no circularity problem in
doing that), but I doubt that going to the trouble of invoking the
amhandler just to figure out if it supports WARM is acceptable.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#113Peter Geoghegan
In reply to: Pavan Deolasee (#107)
Re: Patch: Write Amplification Reduction Method (WARM)

On Sun, Mar 19, 2017 at 12:15 AM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

It seems like an important invariant for WARM is that any duplicate
index values ought to have different TIDs (actually, it's a bit
stricter than that, since btrecheck() cares about simple binary
equality).

Yes. I think in the current code, indexes can never duplicate TIDs (at least
for btrees and hash). With WARM, indexes can have duplicate TIDs, but only if
the index values differ. In addition, there can be only one more duplicate,
and one of them must be a Blue pointer (or a non-WARM pointer, if we accept
the new nomenclature proposed a few minutes back).

It looks like those additional Red/Blue details are available right
from the IndexTuple, which makes the check a good fit for amcheck (no
need to bring the heap into it).

You wouldn't have to teach amcheck about the heap, because a TID that
points to the heap can only be duplicated within a B-Tree index
because of WARM. So, if we find that two adjacent tuples are equal,
check if the TIDs are equal. If they are also equal, check for strict
binary equality. If strict binary equality is indicated, throw an
error due to invariant failing.

Wouldn't this be much more expensive for non-unique indexes?

Only in the worst case, where there are many many duplicates, and only
if you insisted on being completely comprehensive, rather than merely
very comprehensive. That is, you can store the duplicate TIDs in local
memory up to a quasi-arbitrary budget, since you do have to make sure
that any local buffer cannot grow in an unbounded fashion. Certainly,
if you stored 10,000 TIDs, there is always going to be a theoretical
case where that wasn't enough. But you can always say something like
that. We are defending against Murphy here, not Machiavelli.

You're going to have to qsort() a particular value's duplicate TIDs
once you encounter a distinct value, and therefore need to evaluate
the invariant. That's not a big deal, because sorting less than 1,000
items is generally very fast. It's well worth it. I'd probably choose
a generic budget for storing TIDs in local memory, and throw out half
of the TIDs when that budget is exceeded.

I see no difficulty with race conditions when you have only an
AccessShareLock on target. Concurrent page splits won't hurt, because
you reliably skip over those by always moving right. I'm pretty sure
that VACUUM killing IndexTuples that you've already stored with the
intention of sorting later is also not a complicating factor, since
you know that the heap TIDs that are WARM root pointers are not going
to be recycled in the lifetime of the amcheck query such that you get
a false positive.

A WARM check seems like a neat adjunct to what amcheck does already.
It seems like a really good idea for WARM to buy into this kind of
verification. It is, at worst, cheap insurance.

--
Peter Geoghegan


#114Amit Kapila
amit.kapila16@gmail.com
In reply to: Alvaro Herrera (#95)
Re: Patch: Write Amplification Reduction Method (WARM)

On Fri, Mar 10, 2017 at 11:37 PM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:

Robert Haas wrote:

On Wed, Mar 8, 2017 at 2:30 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:

Not really -- it's a bit slower actually in a synthetic case measuring
exactly the slowed-down case. See
/messages/by-id/CAD__OugK12ZqMWWjZiM-YyuD1y8JmMy6x9YEctNiF3rPp6hy0g@mail.gmail.com
I bet in normal cases it's unnoticeable. If WARM flies, then it's going
to provide a larger improvement than is lost to this.

Hmm, that test case isn't all that synthetic. It's just a single
column bulk update, which isn't anything all that crazy,

The problem is that the update touches the second indexed column. With
the original code we would have stopped checking at that point, but with
the patched code we continue to verify all the other indexed columns for
changes.

Maybe we need more than one bitmapset to be given -- multiple ones for
"any of these" checks (such as HOT, KEY and Identity), which can be
stopped as soon as one is found, and one for "all of these" (for WARM,
indirect indexes), which needs to be checked to completion.

How will that help to mitigate the regression? I think what might
help here is if we fetch the required columns for WARM only when we
know hot_update is false.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#115Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#94)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Mar 9, 2017 at 8:43 AM, Robert Haas <robertmhaas@gmail.com> wrote:

On Wed, Mar 8, 2017 at 2:30 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:

Not really -- it's a bit slower actually in a synthetic case measuring
exactly the slowed-down case. See
/messages/by-id/CAD__OugK12ZqMWWjZiM-YyuD1y8JmMy6x9YEctNiF3rPp6hy0g@mail.gmail.com
I bet in normal cases it's unnoticeable. If WARM flies, then it's going
to provide a larger improvement than is lost to this.

Hmm, that test case isn't all that synthetic. It's just a single
column bulk update, which isn't anything all that crazy, and 5-10%
isn't nothing.

I'm kinda surprised it made that much difference, though.

I think it is because heap_getattr() is not that cheap. We have
noticed a similar problem during development of the scan key push-down
work [1].

[1]: https://commitfest.postgresql.org/12/850/

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#116Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#115)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 21, 2017 at 6:56 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:

Hmm, that test case isn't all that synthetic. It's just a single
column bulk update, which isn't anything all that crazy, and 5-10%
isn't nothing.

I'm kinda surprised it made that much difference, though.

I think it is because heap_getattr() is not that cheap. We have
noticed the similar problem during development of scan key push down
work [1].

Yeah. So what's the deal with this? Is somebody working on figuring
out a different approach that would reduce this overhead? Are we
going to defer WARM to v11? Or is the intent to just ignore the 5-10%
slowdown on a single column update and commit everything anyway? (A
strong -1 on that course of action from me.)

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#117Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Robert Haas (#116)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 21, 2017 at 5:34 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Tue, Mar 21, 2017 at 6:56 AM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

Hmm, that test case isn't all that synthetic. It's just a single
column bulk update, which isn't anything all that crazy, and 5-10%
isn't nothing.

I'm kinda surprised it made that much difference, though.

I think it is because heap_getattr() is not that cheap. We have
noticed the similar problem during development of scan key push down
work [1].

Yeah. So what's the deal with this? Is somebody working on figuring
out a different approach that would reduce this overhead? Are we
going to defer WARM to v11? Or is the intent to just ignore the 5-10%
slowdown on a single column update and commit everything anyway?

I think I should clarify something. The test case does a single-column
update, but it also has columns which are very wide and an index on many
columns (and it updates a column early in the list). In addition, in the
test Mithun updated all 10 million rows of the table in a single
transaction, used an UNLOGGED table, and fsync was turned off.

TBH I see many artificial scenarios here. It will be very useful if he can
rerun the query with some of these restrictions lifted. I'm all for
addressing whatever we can, but I am not sure if this test demonstrates a
real world usage.

Having said that, maybe we can do a few things to reduce the overhead:

- Check if the page has enough free space to perform a HOT/WARM update. If
not, don't look for all index keys.
- Pass bitmaps separately for each index and bail out early if we conclude
that neither HOT nor WARM is possible. In this case, since there is just one
index, as soon as we check the second column we know neither HOT nor WARM is
possible, and we will return early. It might complicate the API a lot, but I
can give it a shot if that's what is needed to make progress.

Any other ideas?

Thanks,
Pavan
--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#118Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Alvaro Herrera (#97)
5 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 14, 2017 at 10:47 PM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

After looking at how index_fetch_heap and heap_hot_search_buffer
interact, I can't say I'm in love with the idea. I started thinking
that we should not have index_fetch_heap release the buffer lock only to
re-acquire it five lines later, so it should keep the buffer lock, do
the recheck and only release it afterwards (I realize that this means
there'd be need for two additional "else release buffer lock" branches);
but then this got me thinking that perhaps it would be better to have
another routine that does both call heap_hot_search_buffer and then call
recheck -- it occurs to me that what we're doing here is essentially
heap_warm_search_buffer.

Does that make sense?

Another thing is BuildIndexInfo being called over and over for each
recheck(). Surely we need to cache the indexinfo for each indexscan.

Please find attached rebased patches. There are a few changes in this
version, so let me mention them here instead of trying to reply in-line to
various points on various emails:

1. The patch now has support for hash redo recovery since that was added to
the master (it might be broken since a bug was reported in the original
code itself)

2. Based on Robert's comments and my discussion with him in person, I
removed the Blue/Red naming and instead now using CLEAR and WARM to
identify the parts of the chain and the index pointers. This also resulted
in changes to the way heap tuple header bits are named. So
HEAP_WARM_UPDATED is now used to mark the old tuple which gets WARM updated
and the same flag is copied to all subsequent versions of the tuple, until
a non-HOT update happens. The new version and all subsequent versions are
marked with the HEAP_WARM_TUPLE flag (in the earlier versions this flag was
used for marking both the old and the new versions). This might cause
confusion, but it looks like more accurate naming to me.

3. IndexInfo is now cached inside IndexScanDescData, which should address
your comment above.

4. I realised that we don't really need to compare expression attributes in
the index, since WARM is never used when one of those columns is updated.
Hence I've created a new version of FormIndexDatum which only returns plain
attributes, so the recheck routine does not need access to any executor
stuff.

5. We don't release the buffer lock if we are going to apply recheck. This
should address part of your comment. I haven't, though, put them inside a
single wrapper function, because there is just one caller of the amrecheck
function and after this change it looked OK. But if you still don't like it,
I'll make that change.

6. Unnecessary header files included at various places have been removed.

7. Some comments have been updated and rewritten. Hopefully they look
better than before now.

8. I merged the main WARM patch and the chain conversion code in a single
patch since I don't think we will apply them separately. But if it helps
with review, let me know and I can split that again.

9. I realised that we don't really need xs_tuple_recheck in the scan
descriptor and hence removed that and used a stack variable to get that
info.

10. Accidentally WARM was disabled on the system relations during one of
the earlier rebases. So restored that back and made a slight change to
regression expected output.

All tests pass with the patch set. I am now writing TAP tests for WARM and
will submit them separately. Per your suggestion, I am first converting the
stress tests I'd used earlier into TAP tests and then adding more tests,
especially around recovery and index addition/deletion.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0004_freeup_3bits_ip_posid_v18.patch (application/octet-stream)
diff --git a/src/backend/access/gin/ginget.c b/src/backend/access/gin/ginget.c
index aa0b02f..1e1c978 100644
--- a/src/backend/access/gin/ginget.c
+++ b/src/backend/access/gin/ginget.c
@@ -928,7 +928,7 @@ keyGetItem(GinState *ginstate, MemoryContext tempCtx, GinScanKey key,
 	 * Find the minimum item > advancePast among the active entry streams.
 	 *
 	 * Note: a lossy-page entry is encoded by a ItemPointer with max value for
-	 * offset (0xffff), so that it will sort after any exact entries for the
+	 * offset (0x1fff), so that it will sort after any exact entries for the
 	 * same page.  So we'll prefer to return exact pointers not lossy
 	 * pointers, which is good.
 	 */
diff --git a/src/backend/access/gin/ginpostinglist.c b/src/backend/access/gin/ginpostinglist.c
index 8d2d31a..b22b9f5 100644
--- a/src/backend/access/gin/ginpostinglist.c
+++ b/src/backend/access/gin/ginpostinglist.c
@@ -253,7 +253,7 @@ ginCompressPostingList(const ItemPointer ipd, int nipd, int maxsize,
 
 		Assert(ndecoded == totalpacked);
 		for (i = 0; i < ndecoded; i++)
-			Assert(memcmp(&tmp[i], &ipd[i], sizeof(ItemPointerData)) == 0);
+			Assert(ItemPointerEquals(&tmp[i], &ipd[i]));
 		pfree(tmp);
 	}
 #endif
diff --git a/src/include/access/ginblock.h b/src/include/access/ginblock.h
index 438912c..3f7a3f0 100644
--- a/src/include/access/ginblock.h
+++ b/src/include/access/ginblock.h
@@ -160,14 +160,14 @@ typedef struct GinMetaPageData
 	(GinItemPointerGetOffsetNumber(p) == (OffsetNumber)0 && \
 	 GinItemPointerGetBlockNumber(p) == (BlockNumber)0)
 #define ItemPointerSetMax(p)  \
-	ItemPointerSet((p), InvalidBlockNumber, (OffsetNumber)0xffff)
+	ItemPointerSet((p), InvalidBlockNumber, (OffsetNumber)OffsetNumberMask)
 #define ItemPointerIsMax(p)  \
-	(GinItemPointerGetOffsetNumber(p) == (OffsetNumber)0xffff && \
+	(GinItemPointerGetOffsetNumber(p) == (OffsetNumber)OffsetNumberMask && \
 	 GinItemPointerGetBlockNumber(p) == InvalidBlockNumber)
 #define ItemPointerSetLossyPage(p, b)  \
-	ItemPointerSet((p), (b), (OffsetNumber)0xffff)
+	ItemPointerSet((p), (b), (OffsetNumber)OffsetNumberMask)
 #define ItemPointerIsLossyPage(p)  \
-	(GinItemPointerGetOffsetNumber(p) == (OffsetNumber)0xffff && \
+	(GinItemPointerGetOffsetNumber(p) == (OffsetNumber)OffsetNumberMask && \
 	 GinItemPointerGetBlockNumber(p) != InvalidBlockNumber)
 
 /*
@@ -218,7 +218,7 @@ typedef signed char GinNullCategory;
  */
 #define GinGetNPosting(itup)	GinItemPointerGetOffsetNumber(&(itup)->t_tid)
 #define GinSetNPosting(itup,n)	ItemPointerSetOffsetNumber(&(itup)->t_tid,n)
-#define GIN_TREE_POSTING		((OffsetNumber)0xffff)
+#define GIN_TREE_POSTING		((OffsetNumber)OffsetNumberMask)
 #define GinIsPostingTree(itup)	(GinGetNPosting(itup) == GIN_TREE_POSTING)
 #define GinSetPostingTree(itup, blkno)	( GinSetNPosting((itup),GIN_TREE_POSTING), ItemPointerSetBlockNumber(&(itup)->t_tid, blkno) )
 #define GinGetPostingTree(itup) GinItemPointerGetBlockNumber(&(itup)->t_tid)
diff --git a/src/include/access/gist_private.h b/src/include/access/gist_private.h
index 1ad4ed6..0ad11f1 100644
--- a/src/include/access/gist_private.h
+++ b/src/include/access/gist_private.h
@@ -269,8 +269,8 @@ typedef struct
  * invalid tuples in an index, so throwing an error is as far as we go with
  * supporting that.
  */
-#define TUPLE_IS_VALID		0xffff
-#define TUPLE_IS_INVALID	0xfffe
+#define TUPLE_IS_VALID		OffsetNumberMask
+#define TUPLE_IS_INVALID	OffsetNumberPrev(OffsetNumberMask)
 
 #define  GistTupleIsInvalid(itup)	( ItemPointerGetOffsetNumber( &((itup)->t_tid) ) == TUPLE_IS_INVALID )
 #define  GistTupleSetValid(itup)	ItemPointerSetOffsetNumber( &((itup)->t_tid), TUPLE_IS_VALID )
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 24433c7..4d614b7 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -288,7 +288,7 @@ struct HeapTupleHeaderData
  * than MaxOffsetNumber, so that it can be distinguished from a valid
  * offset number in a regular item pointer.
  */
-#define SpecTokenOffsetNumber		0xfffe
+#define SpecTokenOffsetNumber		OffsetNumberPrev(OffsetNumberMask)
 
 /*
  * HeapTupleHeader accessor macros
diff --git a/src/include/storage/itemptr.h b/src/include/storage/itemptr.h
index 60d0070..3144bdd 100644
--- a/src/include/storage/itemptr.h
+++ b/src/include/storage/itemptr.h
@@ -57,7 +57,7 @@ typedef ItemPointerData *ItemPointer;
  *		True iff the disk item pointer is not NULL.
  */
 #define ItemPointerIsValid(pointer) \
-	((bool) (PointerIsValid(pointer) && ((pointer)->ip_posid != 0)))
+	((bool) (PointerIsValid(pointer) && (((pointer)->ip_posid & OffsetNumberMask) != 0)))
 
 /*
  * ItemPointerGetBlockNumber
@@ -82,13 +82,37 @@ typedef ItemPointerData *ItemPointer;
 #define ItemPointerGetOffsetNumber(pointer) \
 ( \
 	AssertMacro(ItemPointerIsValid(pointer)), \
-	(pointer)->ip_posid \
+	((pointer)->ip_posid & OffsetNumberMask) \
 )
 
 /* Same as ItemPointerGetOffsetNumber but without any assert-checks */
 #define ItemPointerGetOffsetNumberNoCheck(pointer) \
 ( \
-	(pointer)->ip_posid \
+	((pointer)->ip_posid & OffsetNumberMask) \
+)
+
+/*
+ * Get the flags stored in high order bits in the OffsetNumber.
+ */
+#define ItemPointerGetFlags(pointer) \
+( \
+	((pointer)->ip_posid & ~OffsetNumberMask) >> OffsetNumberBits \
+)
+
+/*
+ * Set the flag bits. We first left-shift since flags are defined starting 0x01
+ */
+#define ItemPointerSetFlags(pointer, flags) \
+( \
+	((pointer)->ip_posid |= ((flags) << OffsetNumberBits)) \
+)
+
+/*
+ * Clear all flags.
+ */
+#define ItemPointerClearFlags(pointer) \
+( \
+	((pointer)->ip_posid &= OffsetNumberMask) \
 )
 
 /*
@@ -99,7 +123,7 @@ typedef ItemPointerData *ItemPointer;
 ( \
 	AssertMacro(PointerIsValid(pointer)), \
 	BlockIdSet(&((pointer)->ip_blkid), blockNumber), \
-	(pointer)->ip_posid = offNum \
+	(pointer)->ip_posid = (offNum) \
 )
 
 /*
diff --git a/src/include/storage/off.h b/src/include/storage/off.h
index fe8638f..fe1834c 100644
--- a/src/include/storage/off.h
+++ b/src/include/storage/off.h
@@ -26,8 +26,15 @@ typedef uint16 OffsetNumber;
 #define InvalidOffsetNumber		((OffsetNumber) 0)
 #define FirstOffsetNumber		((OffsetNumber) 1)
 #define MaxOffsetNumber			((OffsetNumber) (BLCKSZ / sizeof(ItemIdData)))
-#define OffsetNumberMask		(0xffff)		/* valid uint16 bits */
 
+/*
+ * Currently we support maximum 32kB blocks and each ItemId takes 6 bytes. That
+ * limits the number of line pointers to (32kB/6 = 5461). 13 bits are enough to
+ * represent all line pointers. Hence we can reuse the high order bits in
+ * OffsetNumber for other purposes.
+ */
+#define OffsetNumberMask		(0x1fff)		/* valid offset number bits */
+#define OffsetNumberBits		13	/* number of valid bits in OffsetNumber */
 /* ----------------
  *		support macros
  * ----------------
0003_clear_ip_posid_blkid_refs_v18.patch (application/octet-stream)
diff --git a/contrib/pageinspect/btreefuncs.c b/contrib/pageinspect/btreefuncs.c
index 6f35e28..07496db 100644
--- a/contrib/pageinspect/btreefuncs.c
+++ b/contrib/pageinspect/btreefuncs.c
@@ -363,8 +363,8 @@ bt_page_items(PG_FUNCTION_ARGS)
 		j = 0;
 		values[j++] = psprintf("%d", uargs->offset);
 		values[j++] = psprintf("(%u,%u)",
-							   BlockIdGetBlockNumber(&(itup->t_tid.ip_blkid)),
-							   itup->t_tid.ip_posid);
+							   ItemPointerGetBlockNumberNoCheck(&itup->t_tid),
+							   ItemPointerGetOffsetNumberNoCheck(&itup->t_tid));
 		values[j++] = psprintf("%d", (int) IndexTupleSize(itup));
 		values[j++] = psprintf("%c", IndexTupleHasNulls(itup) ? 't' : 'f');
 		values[j++] = psprintf("%c", IndexTupleHasVarwidths(itup) ? 't' : 'f');
diff --git a/contrib/pgstattuple/pgstattuple.c b/contrib/pgstattuple/pgstattuple.c
index 1e0de5d..44f90cd 100644
--- a/contrib/pgstattuple/pgstattuple.c
+++ b/contrib/pgstattuple/pgstattuple.c
@@ -356,7 +356,7 @@ pgstat_heap(Relation rel, FunctionCallInfo fcinfo)
 		 * heap_getnext may find no tuples on a given page, so we cannot
 		 * simply examine the pages returned by the heap scan.
 		 */
-		tupblock = BlockIdGetBlockNumber(&tuple->t_self.ip_blkid);
+		tupblock = ItemPointerGetBlockNumber(&tuple->t_self);
 
 		while (block <= tupblock)
 		{
diff --git a/src/backend/access/gin/ginget.c b/src/backend/access/gin/ginget.c
index 87cd9ea..aa0b02f 100644
--- a/src/backend/access/gin/ginget.c
+++ b/src/backend/access/gin/ginget.c
@@ -626,8 +626,9 @@ entryLoadMoreItems(GinState *ginstate, GinScanEntry entry,
 		}
 		else
 		{
-			entry->btree.itemptr = advancePast;
-			entry->btree.itemptr.ip_posid++;
+			ItemPointerSet(&entry->btree.itemptr,
+					GinItemPointerGetBlockNumber(&advancePast),
+					OffsetNumberNext(GinItemPointerGetOffsetNumber(&advancePast)));
 		}
 		entry->btree.fullScan = false;
 		stack = ginFindLeafPage(&entry->btree, true, snapshot);
@@ -979,15 +980,17 @@ keyGetItem(GinState *ginstate, MemoryContext tempCtx, GinScanKey key,
 		if (GinItemPointerGetBlockNumber(&advancePast) <
 			GinItemPointerGetBlockNumber(&minItem))
 		{
-			advancePast.ip_blkid = minItem.ip_blkid;
-			advancePast.ip_posid = 0;
+			ItemPointerSet(&advancePast,
+					GinItemPointerGetBlockNumber(&minItem),
+					InvalidOffsetNumber);
 		}
 	}
 	else
 	{
-		Assert(minItem.ip_posid > 0);
-		advancePast = minItem;
-		advancePast.ip_posid--;
+		Assert(GinItemPointerGetOffsetNumber(&minItem) > 0);
+		ItemPointerSet(&advancePast,
+				GinItemPointerGetBlockNumber(&minItem),
+				OffsetNumberPrev(GinItemPointerGetOffsetNumber(&minItem)));
 	}
 
 	/*
@@ -1245,15 +1248,17 @@ scanGetItem(IndexScanDesc scan, ItemPointerData advancePast,
 				if (GinItemPointerGetBlockNumber(&advancePast) <
 					GinItemPointerGetBlockNumber(&key->curItem))
 				{
-					advancePast.ip_blkid = key->curItem.ip_blkid;
-					advancePast.ip_posid = 0;
+					ItemPointerSet(&advancePast,
+						GinItemPointerGetBlockNumber(&key->curItem),
+						InvalidOffsetNumber);
 				}
 			}
 			else
 			{
-				Assert(key->curItem.ip_posid > 0);
-				advancePast = key->curItem;
-				advancePast.ip_posid--;
+				Assert(GinItemPointerGetOffsetNumber(&key->curItem) > 0);
+				ItemPointerSet(&advancePast,
+						GinItemPointerGetBlockNumber(&key->curItem),
+						OffsetNumberPrev(GinItemPointerGetOffsetNumber(&key->curItem)));
 			}
 
 			/*
diff --git a/src/backend/access/gin/ginpostinglist.c b/src/backend/access/gin/ginpostinglist.c
index 598069d..8d2d31a 100644
--- a/src/backend/access/gin/ginpostinglist.c
+++ b/src/backend/access/gin/ginpostinglist.c
@@ -79,13 +79,11 @@ itemptr_to_uint64(const ItemPointer iptr)
 	uint64		val;
 
 	Assert(ItemPointerIsValid(iptr));
-	Assert(iptr->ip_posid < (1 << MaxHeapTuplesPerPageBits));
+	Assert(GinItemPointerGetOffsetNumber(iptr) < (1 << MaxHeapTuplesPerPageBits));
 
-	val = iptr->ip_blkid.bi_hi;
-	val <<= 16;
-	val |= iptr->ip_blkid.bi_lo;
+	val = GinItemPointerGetBlockNumber(iptr);
 	val <<= MaxHeapTuplesPerPageBits;
-	val |= iptr->ip_posid;
+	val |= GinItemPointerGetOffsetNumber(iptr);
 
 	return val;
 }
@@ -93,11 +91,9 @@ itemptr_to_uint64(const ItemPointer iptr)
 static inline void
 uint64_to_itemptr(uint64 val, ItemPointer iptr)
 {
-	iptr->ip_posid = val & ((1 << MaxHeapTuplesPerPageBits) - 1);
+	GinItemPointerSetOffsetNumber(iptr, val & ((1 << MaxHeapTuplesPerPageBits) - 1));
 	val = val >> MaxHeapTuplesPerPageBits;
-	iptr->ip_blkid.bi_lo = val & 0xFFFF;
-	val = val >> 16;
-	iptr->ip_blkid.bi_hi = val & 0xFFFF;
+	GinItemPointerSetBlockNumber(iptr, val);
 
 	Assert(ItemPointerIsValid(iptr));
 }
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index b437799..12ebadc 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -3013,8 +3013,8 @@ DisplayMapping(HTAB *tuplecid_data)
 			 ent->key.relnode.dbNode,
 			 ent->key.relnode.spcNode,
 			 ent->key.relnode.relNode,
-			 BlockIdGetBlockNumber(&ent->key.tid.ip_blkid),
-			 ent->key.tid.ip_posid,
+			 ItemPointerGetBlockNumber(&ent->key.tid),
+			 ItemPointerGetOffsetNumber(&ent->key.tid),
 			 ent->cmin,
 			 ent->cmax
 			);
diff --git a/src/backend/storage/page/itemptr.c b/src/backend/storage/page/itemptr.c
index 703cbb9..28ac885 100644
--- a/src/backend/storage/page/itemptr.c
+++ b/src/backend/storage/page/itemptr.c
@@ -54,18 +54,21 @@ ItemPointerCompare(ItemPointer arg1, ItemPointer arg2)
 	/*
 	 * Don't use ItemPointerGetBlockNumber or ItemPointerGetOffsetNumber here,
 	 * because they assert ip_posid != 0 which might not be true for a
-	 * user-supplied TID.
+	 * user-supplied TID. Instead we use ItemPointerGetBlockNumberNoCheck and
+	 * ItemPointerGetOffsetNumberNoCheck which do not do any validation.
 	 */
-	BlockNumber b1 = BlockIdGetBlockNumber(&(arg1->ip_blkid));
-	BlockNumber b2 = BlockIdGetBlockNumber(&(arg2->ip_blkid));
+	BlockNumber b1 = ItemPointerGetBlockNumberNoCheck(arg1);
+	BlockNumber b2 = ItemPointerGetBlockNumberNoCheck(arg2);
 
 	if (b1 < b2)
 		return -1;
 	else if (b1 > b2)
 		return 1;
-	else if (arg1->ip_posid < arg2->ip_posid)
+	else if (ItemPointerGetOffsetNumberNoCheck(arg1) <
+			ItemPointerGetOffsetNumberNoCheck(arg2))
 		return -1;
-	else if (arg1->ip_posid > arg2->ip_posid)
+	else if (ItemPointerGetOffsetNumberNoCheck(arg1) >
+			ItemPointerGetOffsetNumberNoCheck(arg2))
 		return 1;
 	else
 		return 0;
diff --git a/src/backend/utils/adt/tid.c b/src/backend/utils/adt/tid.c
index 49a5a15..7f3a692 100644
--- a/src/backend/utils/adt/tid.c
+++ b/src/backend/utils/adt/tid.c
@@ -109,8 +109,8 @@ tidout(PG_FUNCTION_ARGS)
 	OffsetNumber offsetNumber;
 	char		buf[32];
 
-	blockNumber = BlockIdGetBlockNumber(&(itemPtr->ip_blkid));
-	offsetNumber = itemPtr->ip_posid;
+	blockNumber = ItemPointerGetBlockNumberNoCheck(itemPtr);
+	offsetNumber = ItemPointerGetOffsetNumberNoCheck(itemPtr);
 
 	/* Perhaps someday we should output this as a record. */
 	snprintf(buf, sizeof(buf), "(%u,%u)", blockNumber, offsetNumber);
@@ -146,14 +146,12 @@ Datum
 tidsend(PG_FUNCTION_ARGS)
 {
 	ItemPointer itemPtr = PG_GETARG_ITEMPOINTER(0);
-	BlockId		blockId;
 	BlockNumber blockNumber;
 	OffsetNumber offsetNumber;
 	StringInfoData buf;
 
-	blockId = &(itemPtr->ip_blkid);
-	blockNumber = BlockIdGetBlockNumber(blockId);
-	offsetNumber = itemPtr->ip_posid;
+	blockNumber = ItemPointerGetBlockNumberNoCheck(itemPtr);
+	offsetNumber = ItemPointerGetOffsetNumberNoCheck(itemPtr);
 
 	pq_begintypsend(&buf);
 	pq_sendint(&buf, blockNumber, sizeof(blockNumber));
diff --git a/src/include/access/gin_private.h b/src/include/access/gin_private.h
index 34e7339..2fd4479 100644
--- a/src/include/access/gin_private.h
+++ b/src/include/access/gin_private.h
@@ -460,8 +460,8 @@ extern ItemPointer ginMergeItemPointers(ItemPointerData *a, uint32 na,
 static inline int
 ginCompareItemPointers(ItemPointer a, ItemPointer b)
 {
-	uint64		ia = (uint64) a->ip_blkid.bi_hi << 32 | (uint64) a->ip_blkid.bi_lo << 16 | a->ip_posid;
-	uint64		ib = (uint64) b->ip_blkid.bi_hi << 32 | (uint64) b->ip_blkid.bi_lo << 16 | b->ip_posid;
+	uint64		ia = (uint64) GinItemPointerGetBlockNumber(a) << 32 | GinItemPointerGetOffsetNumber(a);
+	uint64		ib = (uint64) GinItemPointerGetBlockNumber(b) << 32 | GinItemPointerGetOffsetNumber(b);
 
 	if (ia == ib)
 		return 0;
diff --git a/src/include/access/ginblock.h b/src/include/access/ginblock.h
index a3fb056..438912c 100644
--- a/src/include/access/ginblock.h
+++ b/src/include/access/ginblock.h
@@ -132,10 +132,17 @@ typedef struct GinMetaPageData
  * to avoid Asserts, since sometimes the ip_posid isn't "valid"
  */
 #define GinItemPointerGetBlockNumber(pointer) \
-	BlockIdGetBlockNumber(&(pointer)->ip_blkid)
+	(ItemPointerGetBlockNumberNoCheck(pointer))
 
 #define GinItemPointerGetOffsetNumber(pointer) \
-	((pointer)->ip_posid)
+	(ItemPointerGetOffsetNumberNoCheck(pointer))
+
+#define GinItemPointerSetBlockNumber(pointer, blkno) \
+	(ItemPointerSetBlockNumber((pointer), (blkno)))
+
+#define GinItemPointerSetOffsetNumber(pointer, offnum) \
+	(ItemPointerSetOffsetNumber((pointer), (offnum)))
+
 
 /*
  * Special-case item pointer values needed by the GIN search logic.
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 7552186..24433c7 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -428,7 +428,7 @@ do { \
 
 #define HeapTupleHeaderIsSpeculative(tup) \
 ( \
-	(tup)->t_ctid.ip_posid == SpecTokenOffsetNumber \
+	(ItemPointerGetOffsetNumberNoCheck(&(tup)->t_ctid) == SpecTokenOffsetNumber) \
 )
 
 #define HeapTupleHeaderGetSpeculativeToken(tup) \
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index 6289ffa..f9304db 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -151,9 +151,8 @@ typedef struct BTMetaPageData
  *	within a level). - vadim 04/09/97
  */
 #define BTTidSame(i1, i2)	\
-	( (i1).ip_blkid.bi_hi == (i2).ip_blkid.bi_hi && \
-	  (i1).ip_blkid.bi_lo == (i2).ip_blkid.bi_lo && \
-	  (i1).ip_posid == (i2).ip_posid )
+	((ItemPointerGetBlockNumber(&(i1)) == ItemPointerGetBlockNumber(&(i2))) && \
+	 (ItemPointerGetOffsetNumber(&(i1)) == ItemPointerGetOffsetNumber(&(i2))))
 #define BTEntrySame(i1, i2) \
 	BTTidSame((i1)->t_tid, (i2)->t_tid)
 
diff --git a/src/include/storage/itemptr.h b/src/include/storage/itemptr.h
index 576aaa8..60d0070 100644
--- a/src/include/storage/itemptr.h
+++ b/src/include/storage/itemptr.h
@@ -69,6 +69,12 @@ typedef ItemPointerData *ItemPointer;
 	BlockIdGetBlockNumber(&(pointer)->ip_blkid) \
 )
 
+/* Same as ItemPointerGetBlockNumber but without any assert-checks */
+#define ItemPointerGetBlockNumberNoCheck(pointer) \
+( \
+	BlockIdGetBlockNumber(&(pointer)->ip_blkid) \
+)
+
 /*
  * ItemPointerGetOffsetNumber
  *		Returns the offset number of a disk item pointer.
@@ -79,6 +85,12 @@ typedef ItemPointerData *ItemPointer;
 	(pointer)->ip_posid \
 )
 
+/* Same as ItemPointerGetOffsetNumber but without any assert-checks */
+#define ItemPointerGetOffsetNumberNoCheck(pointer) \
+( \
+	(pointer)->ip_posid \
+)
+
 /*
  * ItemPointerSet
  *		Sets a disk item pointer to the specified block and offset.
0002_track_root_lp_v18.patch (application/octet-stream)
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index fd4291b..26a7af4 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -94,7 +94,8 @@ static HeapTuple heap_prepare_insert(Relation relation, HeapTuple tup,
 					TransactionId xid, CommandId cid, int options);
 static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
-				HeapTuple newtup, HeapTuple old_key_tup,
+				HeapTuple newtup, OffsetNumber root_offnum,
+				HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
 static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
 							 Bitmapset *interesting_cols,
@@ -2264,13 +2265,13 @@ heap_get_latest_tid(Relation relation,
 		 */
 		if ((tp.t_data->t_infomask & HEAP_XMAX_INVALID) ||
 			HeapTupleHeaderIsOnlyLocked(tp.t_data) ||
-			ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid))
+			HeapTupleHeaderIsHeapLatest(tp.t_data, &ctid))
 		{
 			UnlockReleaseBuffer(buffer);
 			break;
 		}
 
-		ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tp.t_data, &ctid);
 		priorXmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		UnlockReleaseBuffer(buffer);
 	}							/* end of loop */
@@ -2401,6 +2402,7 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	bool		all_visible_cleared = false;
+	OffsetNumber	root_offnum;
 
 	/*
 	 * Fill in tuple header fields, assign an OID, and toast the tuple if
@@ -2439,8 +2441,13 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
-	RelationPutHeapTuple(relation, buffer, heaptup,
-						 (options & HEAP_INSERT_SPECULATIVE) != 0);
+	root_offnum = RelationPutHeapTuple(relation, buffer, heaptup,
+						 (options & HEAP_INSERT_SPECULATIVE) != 0,
+						 InvalidOffsetNumber);
+
+	/* We must not overwrite the speculative insertion token. */
+	if ((options & HEAP_INSERT_SPECULATIVE) == 0)
+		HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 	if (PageIsAllVisible(BufferGetPage(buffer)))
 	{
@@ -2668,6 +2675,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 	Size		saveFreeSpace;
 	bool		need_tuple_data = RelationIsLogicallyLogged(relation);
 	bool		need_cids = RelationIsAccessibleInLogicalDecoding(relation);
+	OffsetNumber	root_offnum;
 
 	needwal = !(options & HEAP_INSERT_SKIP_WAL) && RelationNeedsWAL(relation);
 	saveFreeSpace = RelationGetTargetPageFreeSpace(relation,
@@ -2738,7 +2746,12 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		 * RelationGetBufferForTuple has ensured that the first tuple fits.
 		 * Put that on the page, and then as many other tuples as fit.
 		 */
-		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false);
+		root_offnum = RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false,
+				InvalidOffsetNumber);
+
+		/* Mark this tuple as the latest and also set root offset. */
+		HeapTupleHeaderSetHeapLatest(heaptuples[ndone]->t_data, root_offnum);
+
 		for (nthispage = 1; ndone + nthispage < ntuples; nthispage++)
 		{
 			HeapTuple	heaptup = heaptuples[ndone + nthispage];
@@ -2746,7 +2759,10 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			if (PageGetHeapFreeSpace(page) < MAXALIGN(heaptup->t_len) + saveFreeSpace)
 				break;
 
-			RelationPutHeapTuple(relation, buffer, heaptup, false);
+			root_offnum = RelationPutHeapTuple(relation, buffer, heaptup, false,
+					InvalidOffsetNumber);
+			/* Mark each tuple as the latest and also set root offset. */
+			HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 			/*
 			 * We don't use heap_multi_insert for catalog tuples yet, but
@@ -3018,6 +3034,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	HeapTupleData tp;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	TransactionId new_xmax;
@@ -3028,6 +3045,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	bool		all_visible_cleared = false;
 	HeapTuple	old_key_tuple = NULL;	/* replica identity of the tuple */
 	bool		old_key_copied = false;
+	OffsetNumber	root_offnum;
 
 	Assert(ItemPointerIsValid(tid));
 
@@ -3069,7 +3087,8 @@ heap_delete(Relation relation, ItemPointer tid,
 		LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 	}
 
-	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid));
+	offnum = ItemPointerGetOffsetNumber(tid);
+	lp = PageGetItemId(page, offnum);
 	Assert(ItemIdIsNormal(lp));
 
 	tp.t_tableOid = RelationGetRelid(relation);
@@ -3199,7 +3218,17 @@ l1:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tp.t_data->t_ctid;
+
+		/*
+		 * If we're at the end of the chain, then just return the same TID back
+		 * to the caller. The caller uses that as a hint to know if we have hit
+		 * the end of the chain.
+		 */
+		if (!HeapTupleHeaderIsHeapLatest(tp.t_data, &tp.t_self))
+			HeapTupleHeaderGetNextTid(tp.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&tp.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tp.t_data);
@@ -3248,6 +3277,22 @@ l1:
 							  xid, LockTupleExclusive, true,
 							  &new_xmax, &new_infomask, &new_infomask2);
 
+	/*
+	 * heap_get_root_tuple() may call palloc, which is disallowed once we
+	 * enter the critical section. So check if the root offset is cached in the
+	 * tuple and if not, fetch that information the hard way before entering the
+	 * critical section.
+	 *
+	 * Most often, and unless we are dealing with a pg-upgraded cluster, the
+	 * root offset information should be cached, so there should not be too
+	 * much overhead in fetching it. Also, once a tuple is updated, the
+	 * information is copied to the new version, so it's not as if we're
+	 * going to pay this price forever.
+	 */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&tp.t_self));
+
 	START_CRIT_SECTION();
 
 	/*
@@ -3275,8 +3320,10 @@ l1:
 	HeapTupleHeaderClearHotUpdated(tp.t_data);
 	HeapTupleHeaderSetXmax(tp.t_data, new_xmax);
 	HeapTupleHeaderSetCmax(tp.t_data, cid, iscombo);
-	/* Make sure there is no forward chain link in t_ctid */
-	tp.t_data->t_ctid = tp.t_self;
+
+	/* Mark this tuple as the latest tuple in the update chain. */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		HeapTupleHeaderSetHeapLatest(tp.t_data, root_offnum);
 
 	MarkBufferDirty(buffer);
 
@@ -3477,6 +3524,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		old_key_copied = false;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
+	OffsetNumber	root_offnum;
 	MultiXactStatus mxact_status;
 	Buffer		buffer,
 				newbuf,
@@ -3539,6 +3588,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 
 
 	block = ItemPointerGetBlockNumber(otid);
+	offnum = ItemPointerGetOffsetNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
@@ -3823,7 +3873,12 @@ l2:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(oldtup.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = oldtup.t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(oldtup.t_data, &oldtup.t_self))
+			HeapTupleHeaderGetNextTid(oldtup.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&oldtup.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(oldtup.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(oldtup.t_data);
@@ -3963,6 +4018,7 @@ l2:
 		uint16		infomask_lock_old_tuple,
 					infomask2_lock_old_tuple;
 		bool		cleared_all_frozen = false;
+		OffsetNumber	root_offnum;
 
 		/*
 		 * To prevent concurrent sessions from updating the tuple, we have to
@@ -3990,6 +4046,14 @@ l2:
 
 		Assert(HEAP_XMAX_IS_LOCKED_ONLY(infomask_lock_old_tuple));
 
+		/*
+		 * Fetch root offset before entering the critical section. We do this
+		 * only if the information is not already available.
+		 */
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&oldtup.t_self));
+
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
@@ -4004,7 +4068,8 @@ l2:
 		HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 		/* temporarily make it look not-updated, but locked */
-		oldtup.t_data->t_ctid = oldtup.t_self;
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			HeapTupleHeaderSetHeapLatest(oldtup.t_data, root_offnum);
 
 		/*
 		 * Clear all-frozen bit on visibility map if needed. We could
@@ -4162,6 +4227,10 @@ l2:
 										   bms_overlap(modified_attrs, id_attrs),
 										   &old_key_copied);
 
+	if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
@@ -4187,6 +4256,17 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+		/*
+		 * For HOT (or WARM) updated tuples, we store the offset of the root
+		 * line pointer of this chain in the ip_posid field of the new tuple.
+		 * Usually this information will be available in the corresponding
+		 * field of the old tuple. But for aborted updates or pg_upgraded
+		 * databases, we might be seeing the old-style CTID chains and hence
+		 * the information must be obtained by hard way (we should have done
+		 * that before entering the critical section above).
+		 */
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
 	else
 	{
@@ -4194,10 +4274,22 @@ l2:
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		root_offnum = InvalidOffsetNumber;
 	}
 
-	RelationPutHeapTuple(relation, newbuf, heaptup, false);		/* insert new tuple */
-
+	/* insert new tuple */
+	root_offnum = RelationPutHeapTuple(relation, newbuf, heaptup, false,
+									   root_offnum);
+	/*
+	 * Also mark both copies as latest and set the root offset information. If
+	 * we're doing a HOT/WARM update, then we just copy the information from
+	 * old tuple, if available or computed above. For regular updates,
+	 * RelationPutHeapTuple must have returned us the actual offset number
+	 * where the new version was inserted and we store the same value since the
+	 * update resulted in a new HOT-chain.
+	 */
+	HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
+	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
 	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
@@ -4210,7 +4302,7 @@ l2:
 	HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 	/* record address of new tuple in t_ctid of old one */
-	oldtup.t_data->t_ctid = heaptup->t_self;
+	HeapTupleHeaderSetNextTid(oldtup.t_data, &(heaptup->t_self));
 
 	/* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */
 	if (PageIsAllVisible(BufferGetPage(buffer)))
@@ -4249,6 +4341,7 @@ l2:
 
 		recptr = log_heap_update(relation, buffer,
 								 newbuf, &oldtup, heaptup,
+								 root_offnum,
 								 old_key_tuple,
 								 all_visible_cleared,
 								 all_visible_cleared_new);
@@ -4529,7 +4622,8 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	ItemId		lp;
 	Page		page;
 	Buffer		vmbuffer = InvalidBuffer;
-	BlockNumber block;
+	BlockNumber	block;
+	OffsetNumber	offnum;
 	TransactionId xid,
 				xmax;
 	uint16		old_infomask,
@@ -4538,9 +4632,11 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	bool		first_time = true;
 	bool		have_tuple_lock = false;
 	bool		cleared_all_frozen = false;
+	OffsetNumber	root_offnum;
 
 	*buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	block = ItemPointerGetBlockNumber(tid);
+	offnum = ItemPointerGetOffsetNumber(tid);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -4560,6 +4656,7 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	tuple->t_data = (HeapTupleHeader) PageGetItem(page, lp);
 	tuple->t_len = ItemIdGetLength(lp);
 	tuple->t_tableOid = RelationGetRelid(relation);
+	tuple->t_self = *tid;
 
 l3:
 	result = HeapTupleSatisfiesUpdate(tuple, cid, *buffer);
@@ -4587,7 +4684,11 @@ l3:
 		xwait = HeapTupleHeaderGetRawXmax(tuple->t_data);
 		infomask = tuple->t_data->t_infomask;
 		infomask2 = tuple->t_data->t_infomask2;
-		ItemPointerCopy(&tuple->t_data->t_ctid, &t_ctid);
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &t_ctid);
+		else
+			ItemPointerCopy(tid, &t_ctid);
 
 		LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
 
@@ -5025,7 +5126,12 @@ failed:
 		Assert(result == HeapTupleSelfUpdated || result == HeapTupleUpdated ||
 			   result == HeapTupleWouldBlock);
 		Assert(!(tuple->t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tuple->t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(tid, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tuple->t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tuple->t_data);
@@ -5073,6 +5179,10 @@ failed:
 							  GetCurrentTransactionId(), mode, false,
 							  &xid, &new_infomask, &new_infomask2);
 
+	if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&tuple->t_self));
+
 	START_CRIT_SECTION();
 
 	/*
@@ -5101,7 +5211,10 @@ failed:
 	 * the tuple as well.
 	 */
 	if (HEAP_XMAX_IS_LOCKED_ONLY(new_infomask))
-		tuple->t_data->t_ctid = *tid;
+	{
+		if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+			HeapTupleHeaderSetHeapLatest(tuple->t_data, root_offnum);
+	}
 
 	/* Clear only the all-frozen bit on visibility map if needed */
 	if (PageIsAllVisible(page) &&
@@ -5615,6 +5728,7 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 	bool		cleared_all_frozen = false;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber block;
+	OffsetNumber offnum;
 
 	ItemPointerCopy(tid, &tupid);
 
@@ -5623,6 +5737,8 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 		new_infomask = 0;
 		new_xmax = InvalidTransactionId;
 		block = ItemPointerGetBlockNumber(&tupid);
+		offnum = ItemPointerGetOffsetNumber(&tupid);
+
 		ItemPointerCopy(&tupid, &(mytup.t_self));
 
 		if (!heap_fetch(rel, SnapshotAny, &mytup, &buf, false, NULL))
@@ -5852,7 +5968,7 @@ l4:
 
 		/* if we find the end of update chain, we're done. */
 		if (mytup.t_data->t_infomask & HEAP_XMAX_INVALID ||
-			ItemPointerEquals(&mytup.t_self, &mytup.t_data->t_ctid) ||
+			HeapTupleHeaderIsHeapLatest(mytup.t_data, &mytup.t_self) ||
 			HeapTupleHeaderIsOnlyLocked(mytup.t_data))
 		{
 			result = HeapTupleMayBeUpdated;
@@ -5861,7 +5977,7 @@ l4:
 
 		/* tail recursion */
 		priorXmax = HeapTupleHeaderGetUpdateXid(mytup.t_data);
-		ItemPointerCopy(&(mytup.t_data->t_ctid), &tupid);
+		HeapTupleHeaderGetNextTid(mytup.t_data, &tupid);
 		UnlockReleaseBuffer(buf);
 		if (vmbuffer != InvalidBuffer)
 			ReleaseBuffer(vmbuffer);
@@ -5978,7 +6094,7 @@ heap_finish_speculative(Relation relation, HeapTuple tuple)
 	 * Replace the speculative insertion token with a real t_ctid, pointing to
 	 * itself like it does on regular tuples.
 	 */
-	htup->t_ctid = tuple->t_self;
+	HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 	/* XLOG stuff */
 	if (RelationNeedsWAL(relation))
@@ -6104,8 +6220,7 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	HeapTupleHeaderSetXmin(tp.t_data, InvalidTransactionId);
 
 	/* Clear the speculative insertion token too */
-	tp.t_data->t_ctid = tp.t_self;
-
+	HeapTupleHeaderSetHeapLatest(tp.t_data, ItemPointerGetOffsetNumber(tid));
 	MarkBufferDirty(buffer);
 
 	/*
@@ -7453,6 +7568,7 @@ log_heap_visible(RelFileNode rnode, Buffer heap_buffer, Buffer vm_buffer,
 static XLogRecPtr
 log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup, HeapTuple newtup,
+				OffsetNumber root_offnum,
 				HeapTuple old_key_tuple,
 				bool all_visible_cleared, bool new_all_visible_cleared)
 {
@@ -7573,6 +7689,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	xlrec.new_offnum = ItemPointerGetOffsetNumber(&newtup->t_self);
 	xlrec.new_xmax = HeapTupleHeaderGetRawXmax(newtup->t_data);
 
+	Assert(OffsetNumberIsValid(root_offnum));
+	xlrec.root_offnum = root_offnum;
+
 	bufflags = REGBUF_STANDARD;
 	if (init)
 		bufflags |= REGBUF_WILL_INIT;
@@ -8227,7 +8346,13 @@ heap_xlog_delete(XLogReaderState *record)
 			PageClearAllVisible(page);
 
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = target_tid;
+		if (!HeapTupleHeaderHasRootOffset(htup))
+		{
+			OffsetNumber	root_offnum;
+			root_offnum = heap_get_root_tuple(page, xlrec->offnum);
+			HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+		}
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8317,7 +8442,8 @@ heap_xlog_insert(XLogReaderState *record)
 		htup->t_hoff = xlhdr.t_hoff;
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
-		htup->t_ctid = target_tid;
+
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->offnum);
 
 		if (PageAddItem(page, (Item) htup, newlen, xlrec->offnum,
 						true, true) == InvalidOffsetNumber)
@@ -8452,8 +8578,8 @@ heap_xlog_multi_insert(XLogReaderState *record)
 			htup->t_hoff = xlhdr->t_hoff;
 			HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 			HeapTupleHeaderSetCmin(htup, FirstCommandId);
-			ItemPointerSetBlockNumber(&htup->t_ctid, blkno);
-			ItemPointerSetOffsetNumber(&htup->t_ctid, offnum);
+
+			HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 			offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 			if (offnum == InvalidOffsetNumber)
@@ -8589,7 +8715,7 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
 		/* Set forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetNextTid(htup, &newtid);
 
 		/* Mark the page as a candidate for pruning */
 		PageSetPrunable(page, XLogRecGetXid(record));
@@ -8722,13 +8848,17 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
-		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = newtid;
 
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
 
+		/*
+		 * Make sure the tuple is marked as the latest and root offset
+		 * information is restored.
+		 */
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->root_offnum);
+
 		if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED)
 			PageClearAllVisible(page);
 
@@ -8791,6 +8921,9 @@ heap_xlog_confirm(XLogReaderState *record)
 		 */
 		ItemPointerSet(&htup->t_ctid, BufferGetBlockNumber(buffer), offnum);
 
+		/* For newly inserted tuple, set root offset to itself. */
+		HeapTupleHeaderSetHeapLatest(htup, offnum);
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8854,11 +8987,17 @@ heap_xlog_lock(XLogReaderState *record)
 		 */
 		if (HEAP_XMAX_IS_LOCKED_ONLY(htup->t_infomask))
 		{
+			ItemPointerData	target_tid;
+
+			ItemPointerSet(&target_tid, BufferGetBlockNumber(buffer), offnum);
 			HeapTupleHeaderClearHotUpdated(htup);
 			/* Make sure there is no forward chain link in t_ctid */
-			ItemPointerSet(&htup->t_ctid,
-						   BufferGetBlockNumber(buffer),
-						   offnum);
+			if (!HeapTupleHeaderHasRootOffset(htup))
+			{
+				OffsetNumber	root_offnum;
+				root_offnum = heap_get_root_tuple(page, offnum);
+				HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+			}
 		}
 		HeapTupleHeaderSetXmax(htup, xlrec->locking_xid);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index 6529fe3..8052519 100644
--- a/src/backend/access/heap/hio.c
+++ b/src/backend/access/heap/hio.c
@@ -31,12 +31,20 @@
  * !!! EREPORT(ERROR) IS DISALLOWED HERE !!!  Must PANIC on failure!!!
  *
  * Note - caller must hold BUFFER_LOCK_EXCLUSIVE on the buffer.
+ *
+ * The caller can optionally tell us to set the root offset to the given
+ * value.  Otherwise, the root offset is set to the offset of the new location
+ * once it's known.  The former is used while updating an existing tuple,
+ * where the caller tells us the root line pointer of the chain.  The latter
+ * is used while inserting a new row, in which case the root line pointer is
+ * set to the offset at which the tuple is placed.
  */
-void
+OffsetNumber
 RelationPutHeapTuple(Relation relation,
 					 Buffer buffer,
 					 HeapTuple tuple,
-					 bool token)
+					 bool token,
+					 OffsetNumber root_offnum)
 {
 	Page		pageHeader;
 	OffsetNumber offnum;
@@ -60,17 +68,24 @@ RelationPutHeapTuple(Relation relation,
 	ItemPointerSet(&(tuple->t_self), BufferGetBlockNumber(buffer), offnum);
 
 	/*
-	 * Insert the correct position into CTID of the stored tuple, too (unless
-	 * this is a speculative insertion, in which case the token is held in
-	 * CTID field instead)
+	 * Set block number and the root offset into CTID of the stored tuple, too
+	 * (unless this is a speculative insertion, in which case the token is held
+	 * in CTID field instead).
 	 */
 	if (!token)
 	{
 		ItemId		itemId = PageGetItemId(pageHeader, offnum);
 		Item		item = PageGetItem(pageHeader, itemId);
 
+		/* Copy t_ctid to set the correct block number. */
 		((HeapTupleHeader) item)->t_ctid = tuple->t_self;
+
+		if (!OffsetNumberIsValid(root_offnum))
+			root_offnum = offnum;
+		HeapTupleHeaderSetHeapLatest((HeapTupleHeader) item, root_offnum);
 	}
+
+	return root_offnum;
 }
 
 /*
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index d69a266..f54337c 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -55,6 +55,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
 
+static void heap_get_root_tuples_internal(Page page,
+				OffsetNumber target_offnum, OffsetNumber *root_offsets);
 
 /*
  * Optionally prune and repair fragmentation in the specified page.
@@ -553,6 +555,17 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
 		if (!HeapTupleHeaderIsHotUpdated(htup))
 			break;
 
+
+		/*
+		 * If the tuple was HOT-updated and the update was later aborted,
+		 * someone could mark this tuple as the last tuple in the chain
+		 * without clearing the HOT-updated flag. So we must check whether
+		 * this is the last tuple in the chain and stop following the CTID,
+		 * else we risk getting into an infinite recursion (though
+		 * prstate->marked[] currently protects against that).
+		 */
+		if (HeapTupleHeaderHasRootOffset(htup))
+			break;
 		/*
 		 * Advance to next chain member.
 		 */
@@ -726,27 +739,47 @@ heap_page_prune_execute(Buffer buffer,
 
 
 /*
- * For all items in this page, find their respective root line pointers.
- * If item k is part of a HOT-chain with root at item j, then we set
- * root_offsets[k - 1] = j.
+ * Either for all items in this page or for the given item, find their
+ * respective root line pointers.
+ *
+ * When target_offnum is a valid offset number, the caller is interested in
+ * just one item. In that case, the root line pointer is returned in
+ * root_offsets.
  *
- * The passed-in root_offsets array must have MaxHeapTuplesPerPage entries.
- * We zero out all unused entries.
+ * When target_offnum is InvalidOffsetNumber, the caller wants to know the
+ * root line pointers of all the items on this page. The root_offsets array
+ * must have MaxHeapTuplesPerPage entries in that case. If item k is part of a
+ * HOT-chain with root at item j, then we set root_offsets[k - 1] = j. We zero
+ * out all unused entries.
  *
  * The function must be called with at least share lock on the buffer, to
  * prevent concurrent prune operations.
  *
+ * This is not a cheap function since it may have to scan through all line
+ * pointers and tuples on the page in order to find the root line pointers.
+ * To minimize the cost, when target_offnum is specified we return as soon as
+ * its root line pointer is found.
+ *
  * Note: The information collected here is valid only as long as the caller
  * holds a pin on the buffer. Once pin is released, a tuple might be pruned
  * and reused by a completely unrelated tuple.
+ *
+ * Note: This function must not be called inside a critical section because it
+ * internally calls HeapTupleHeaderGetUpdateXid which somewhere down the stack
+ * may try to allocate heap memory. Memory allocation is disallowed in a
+ * critical section.
  */
-void
-heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+static void
+heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offsets)
 {
 	OffsetNumber offnum,
 				maxoff;
 
-	MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
+	if (OffsetNumberIsValid(target_offnum))
+		*root_offsets = InvalidOffsetNumber;
+	else
+		MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
 
 	maxoff = PageGetMaxOffsetNumber(page);
 	for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum = OffsetNumberNext(offnum))
@@ -774,9 +807,28 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 
 			/*
 			 * This is either a plain tuple or the root of a HOT-chain.
-			 * Remember it in the mapping.
+			 *
+			 * If the target_offnum is specified and if we found its mapping,
+			 * return.
 			 */
-			root_offsets[offnum - 1] = offnum;
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (target_offnum == offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember the mapping for any other item. The
+				 * root_offsets array may not even have space for it, so be
+				 * careful not to write past the array.
+				 */
+			}
+			else
+			{
+				/* Remember it in the mapping. */
+				root_offsets[offnum - 1] = offnum;
+			}
 
 			/* If it's not the start of a HOT-chain, we're done with it */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
@@ -817,15 +869,65 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 				!TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(htup)))
 				break;
 
-			/* Remember the root line pointer for this item */
-			root_offsets[nextoffnum - 1] = offnum;
+			/*
+			 * If target_offnum is specified and we found its mapping, return.
+			 */
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (nextoffnum == target_offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember the mapping for any other item. The
+				 * root_offsets array may not even have space for it, so be
+				 * careful not to write past the array.
+				 */
+			}
+			else
+			{
+				/* Remember the root line pointer for this item. */
+				root_offsets[nextoffnum - 1] = offnum;
+			}
 
 			/* Advance to next chain member, if any */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				break;
 
+			/*
+			 * If the tuple was HOT-updated and the update was later aborted,
+			 * someone could mark this tuple as the last tuple in the chain
+			 * and store the root offset in CTID without clearing the
+			 * HOT-updated flag. So we must check whether CTID actually holds
+			 * the root offset, and break to avoid infinite recursion.
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
 		}
 	}
 }
+
+/*
+ * Get root line pointer for the given tuple.
+ */
+OffsetNumber
+heap_get_root_tuple(Page page, OffsetNumber target_offnum)
+{
+	OffsetNumber offnum = InvalidOffsetNumber;
+	heap_get_root_tuples_internal(page, target_offnum, &offnum);
+	return offnum;
+}
+
+/*
+ * Get root line pointers for all tuples on the page.
+ */
+void
+heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+{
+	heap_get_root_tuples_internal(page, InvalidOffsetNumber, root_offsets);
+}
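Conceptually, the single-target lookup that heap_get_root_tuple adds is a walk of each HOT chain from its root until the target item is seen, returning early on a hit. A toy model with invented toy_* names (the page is reduced to two arrays, glossing over redirect line pointers and the xmin/xmax matching the real loop does):

```c
#include <assert.h>
#include <stdint.h>

#define TOY_INVALID_OFFSET 0

/*
 * Toy page model: next[i - 1] holds the 1-based offset of the next HOT-chain
 * member of item i, or TOY_INVALID_OFFSET at the end of a chain, and
 * is_root[i - 1] says whether item i starts a chain.
 */
static uint16_t
toy_get_root_offset(const uint16_t *next, const uint8_t *is_root,
					uint16_t maxoff, uint16_t target)
{
	for (uint16_t off = 1; off <= maxoff; off++)
	{
		if (!is_root[off - 1])
			continue;			/* chains are only walked from their roots */

		/* Walk this chain; return as soon as the target is seen. */
		for (uint16_t member = off;
			 member != TOY_INVALID_OFFSET;
			 member = next[member - 1])
		{
			if (member == target)
				return off;		/* off is the root of target's chain */
		}
	}
	return TOY_INVALID_OFFSET;	/* target not reachable from any root */
}
```

The early return is what makes the single-target case cheaper than building the full MaxHeapTuplesPerPage mapping.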
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index d7f65a5..2d3ae9b 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -421,14 +421,18 @@ rewrite_heap_tuple(RewriteState state,
 	 */
 	if (!((old_tuple->t_data->t_infomask & HEAP_XMAX_INVALID) ||
 		  HeapTupleHeaderIsOnlyLocked(old_tuple->t_data)) &&
-		!(ItemPointerEquals(&(old_tuple->t_self),
-							&(old_tuple->t_data->t_ctid))))
+		!(HeapTupleHeaderIsHeapLatest(old_tuple->t_data, &old_tuple->t_self)))
 	{
 		OldToNewMapping mapping;
 
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
-		hashkey.tid = old_tuple->t_data->t_ctid;
+
+		/*
+		 * We've already checked that this is not the last tuple in the
+		 * chain, so fetch the TID of the next tuple.
+		 */
+		HeapTupleHeaderGetNextTid(old_tuple->t_data, &hashkey.tid);
 
 		mapping = (OldToNewMapping)
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -441,7 +445,7 @@ rewrite_heap_tuple(RewriteState state,
 			 * set the ctid of this tuple to point to the new location, and
 			 * insert it right away.
 			 */
-			new_tuple->t_data->t_ctid = mapping->new_tid;
+			HeapTupleHeaderSetNextTid(new_tuple->t_data, &mapping->new_tid);
 
 			/* We don't need the mapping entry anymore */
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -527,7 +531,7 @@ rewrite_heap_tuple(RewriteState state,
 				new_tuple = unresolved->tuple;
 				free_new = true;
 				old_tid = unresolved->old_tid;
-				new_tuple->t_data->t_ctid = new_tid;
+				HeapTupleHeaderSetNextTid(new_tuple->t_data, &new_tid);
 
 				/*
 				 * We don't need the hash entry anymore, but don't free its
@@ -733,7 +737,12 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		onpage_tup->t_ctid = tup->t_self;
+		/*
+		 * Set t_ctid just to ensure that the block number is copied
+		 * correctly, then immediately mark the tuple as the latest.
+		 */
+		HeapTupleHeaderSetNextTid(onpage_tup, &tup->t_self);
+		HeapTupleHeaderSetHeapLatest(onpage_tup, newoff);
 	}
 
 	/* If heaptup is a private copy, release it. */
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 5242dee..2142273 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -789,7 +789,8 @@ retry:
 			  DirtySnapshot.speculativeToken &&
 			  TransactionIdPrecedes(GetCurrentTransactionId(), xwait))))
 		{
-			ctid_wait = tup->t_data->t_ctid;
+			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
+				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
+			else
+				ItemPointerCopy(&tup->t_self, &ctid_wait);
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index f5cd65d..44a501f 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -2592,7 +2592,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		 * As above, it should be safe to examine xmax and t_ctid without the
 		 * buffer content lock, because they can't be changing.
 		 */
-		if (ItemPointerEquals(&tuple.t_self, &tuple.t_data->t_ctid))
+		if (HeapTupleHeaderIsHeapLatest(tuple.t_data, &tuple.t_self))
 		{
 			/* deleted, so forget about it */
 			ReleaseBuffer(buffer);
@@ -2600,7 +2600,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		}
 
 		/* updated, so look at the updated row */
-		tuple.t_self = tuple.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tuple.t_data, &tuple.t_self);
 		/* updated row should have xmin matching this xmax */
 		priorXmax = HeapTupleHeaderGetUpdateXid(tuple.t_data);
 		ReleaseBuffer(buffer);
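The EvalPlanQualFetch hunk above shows the general pattern every chain follower now uses: test "is this the latest version?" before reading the next TID, since for the latest tuple t_ctid holds the root offset rather than a forward link. In toy form (invented toy_* names, a flat array in place of heap pages):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy tuple version: either the latest, or it points at the next version. */
typedef struct
{
	bool		is_latest;	/* stands in for HeapTupleHeaderIsHeapLatest() */
	uint16_t	next_off;	/* stands in for HeapTupleHeaderGetNextTid() */
} ToyVersion;

/*
 * Follow the update chain from start_off (1-based) to the latest version.
 * The "am I latest?" test must come before fetching the next TID, because
 * for the latest tuple the ctid field no longer stores a forward link.
 */
static uint16_t
toy_follow_chain(const ToyVersion *versions, uint16_t start_off)
{
	uint16_t	off = start_off;

	while (!versions[off - 1].is_latest)
		off = versions[off - 1].next_off;
	return off;
}
```

Reading next_off from a latest tuple would misinterpret the cached root offset as a forward link, which is why the order of the two tests matters everywhere in the patch.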
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 7e85510..5540e12 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -190,6 +190,7 @@ extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
+extern OffsetNumber heap_get_root_tuple(Page page, OffsetNumber target_offnum);
 extern void heap_get_root_tuples(Page page, OffsetNumber *root_offsets);
 
 /* in heap/syncscan.c */
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index b285f17..e6019d5 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -193,6 +193,8 @@ typedef struct xl_heap_update
 	uint8		flags;
 	TransactionId new_xmax;		/* xmax of the new tuple */
 	OffsetNumber new_offnum;	/* new tuple's offset */
+	OffsetNumber root_offnum;	/* offset of the root line pointer in case of
+								   HOT or WARM update */
 
 	/*
 	 * If XLOG_HEAP_CONTAINS_OLD_TUPLE or XLOG_HEAP_CONTAINS_OLD_KEY flags are
@@ -200,7 +202,7 @@ typedef struct xl_heap_update
 	 */
 } xl_heap_update;
 
-#define SizeOfHeapUpdate	(offsetof(xl_heap_update, new_offnum) + sizeof(OffsetNumber))
+#define SizeOfHeapUpdate	(offsetof(xl_heap_update, root_offnum) + sizeof(OffsetNumber))
 
 /*
  * This is what we need to know about vacuum page cleanup/redirect
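The SizeOfHeapUpdate change follows the usual pattern of offsetof(last fixed member) + sizeof(that member), so the WAL record's fixed part grows by one OffsetNumber now that root_offnum is the last member. A toy mirror (invented names, plain stdint types in place of the real ones):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy mirror of xl_heap_update's fixed part, with plain stdint types. */
typedef struct
{
	uint32_t	old_xmax;
	uint16_t	old_offnum;
	uint8_t		old_infobits_set;
	uint8_t		flags;
	uint32_t	new_xmax;
	uint16_t	new_offnum;
	uint16_t	root_offnum;	/* the new trailing field */
} ToyXlHeapUpdate;

/* SizeOf pattern: offset of the last fixed member plus its size. */
#define TOY_SIZE_OF_HEAP_UPDATE \
	(offsetof(ToyXlHeapUpdate, root_offnum) + sizeof(uint16_t))
```

Keeping SizeOf keyed to the last member rather than sizeof(struct) avoids counting trailing padding in the WAL record.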
diff --git a/src/include/access/hio.h b/src/include/access/hio.h
index 2824f23..921cb37 100644
--- a/src/include/access/hio.h
+++ b/src/include/access/hio.h
@@ -35,8 +35,8 @@ typedef struct BulkInsertStateData
 }	BulkInsertStateData;
 
 
-extern void RelationPutHeapTuple(Relation relation, Buffer buffer,
-					 HeapTuple tuple, bool token);
+extern OffsetNumber RelationPutHeapTuple(Relation relation, Buffer buffer,
+					 HeapTuple tuple, bool token, OffsetNumber root_offnum);
 extern Buffer RelationGetBufferForTuple(Relation relation, Size len,
 						  Buffer otherBuffer, int options,
 						  BulkInsertState bistate,
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index a6c7e31..7552186 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,13 +260,19 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x1800 are available */
+/* bits 0x0800 are available */
+#define HEAP_LATEST_TUPLE		0x1000	/*
+										 * This is the last tuple in chain and
+										 * ip_posid points to the root line
+										 * pointer
+										 */
 #define HEAP_KEYS_UPDATED		0x2000	/* tuple was updated and key cols
 										 * modified, or tuple deleted */
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xE000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+
 
 /*
  * HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins.  It is
@@ -504,6 +510,43 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+/*
+ * Mark this as the last tuple in the HOT chain. Before PG v10 we used to
+ * store the TID of the tuple itself in the t_ctid field to mark the end of
+ * the chain. Starting with PG v10, we instead set the HEAP_LATEST_TUPLE flag
+ * to identify the last tuple, and store the root line pointer of the HOT
+ * chain in the t_ctid field.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderSetHeapLatest(tup, offnum) \
+do { \
+	AssertMacro(OffsetNumberIsValid(offnum)); \
+	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE; \
+	ItemPointerSetOffsetNumber(&(tup)->t_ctid, (offnum)); \
+} while (0)
+
+#define HeapTupleHeaderClearHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 &= ~HEAP_LATEST_TUPLE \
+)
+
+/*
+ * Starting from PostgreSQL 10, the latest tuple in an update chain has
+ * HEAP_LATEST_TUPLE set; but tuples upgraded from earlier versions do not.
+ * For those, we determine whether a tuple is the latest by testing whether
+ * its t_ctid points to itself.
+ *
+ * Note: beware of multiple evaluations of "tup" and "tid" arguments.
+ */
+#define HeapTupleHeaderIsHeapLatest(tup, tid) \
+( \
+  (((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0) || \
+  ((ItemPointerGetBlockNumber(&(tup)->t_ctid) == ItemPointerGetBlockNumber(tid)) && \
+   (ItemPointerGetOffsetNumber(&(tup)->t_ctid) == ItemPointerGetOffsetNumber(tid))) \
+)
+
+
 #define HeapTupleHeaderSetHeapOnly(tup) \
 ( \
   (tup)->t_infomask2 |= HEAP_ONLY_TUPLE \
@@ -542,6 +585,56 @@ do { \
 
 
 /*
+ * Set the next TID in the t_ctid chain, and clear the HEAP_LATEST_TUPLE flag
+ * since we now have a new tuple in the chain and this is no longer the last
+ * tuple of the chain.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderSetNextTid(tup, tid) \
+do { \
+	ItemPointerCopy((tid), &((tup)->t_ctid)); \
+	HeapTupleHeaderClearHeapLatest((tup)); \
+} while (0)
+
+/*
+ * Get TID of next tuple in the update chain. Caller must have checked that
+ * we are not already at the end of the chain because in that case t_ctid may
+ * actually store the root line pointer of the HOT chain.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderGetNextTid(tup, next_ctid) \
+do { \
+	AssertMacro(!((tup)->t_infomask2 & HEAP_LATEST_TUPLE)); \
+	ItemPointerCopy(&(tup)->t_ctid, (next_ctid)); \
+} while (0)
+
+/*
+ * Get the root line pointer of the HOT chain. The caller should have confirmed
+ * that the root offset is cached before calling this macro.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderGetRootOffset(tup) \
+( \
+	AssertMacro(((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0), \
+	ItemPointerGetOffsetNumber(&(tup)->t_ctid) \
+)
+
+/*
+ * Return whether the tuple has a cached root offset.  We don't use
+ * HeapTupleHeaderIsHeapLatest because that one also considers the case of
+ * t_ctid pointing to itself, for tuples migrated from pre-v10 clusters. Here
+ * we are only interested in tuples that carry the HEAP_LATEST_TUPLE flag.
+ */
+#define HeapTupleHeaderHasRootOffset(tup) \
+( \
+	((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0 \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
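The pg_upgrade story rests entirely on HeapTupleHeaderIsHeapLatest's two-part test. A toy version of that check (invented toy_* names, simplified struct, not the real macro):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define TOY_LATEST_TUPLE 0x1000

typedef struct
{
	uint32_t	ctid_block;
	uint16_t	ctid_offnum;
	uint16_t	infomask2;
} ToyTuple;

/*
 * Two-part test mirroring HeapTupleHeaderIsHeapLatest: either the new flag
 * is set, or (for tuples written before the flag existed) t_ctid points
 * back at the tuple itself.
 */
static bool
toy_is_heap_latest(const ToyTuple *tup, uint32_t self_block, uint16_t self_off)
{
	if (tup->infomask2 & TOY_LATEST_TUPLE)
		return true;
	return tup->ctid_block == self_block && tup->ctid_offnum == self_off;
}
```

Note the fallback only fires for tuples that were never rewritten after the upgrade; once a tuple is marked with the flag, the self-pointing convention is no longer relied upon.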
Attachment: 0001_interesting_attrs_v18.patch (application/octet-stream)
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 8526137..fd4291b 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -96,11 +96,8 @@ static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
 				HeapTuple newtup, HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
-static void HeapSatisfiesHOTandKeyUpdate(Relation relation,
-							 Bitmapset *hot_attrs,
-							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
-							 bool *satisfies_id,
+static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
+							 Bitmapset *interesting_cols,
 							 HeapTuple oldtup, HeapTuple newtup);
 static bool heap_acquire_tuplock(Relation relation, ItemPointer tid,
 					 LockTupleMode mode, LockWaitPolicy wait_policy,
@@ -3471,6 +3468,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *interesting_attrs;
+	Bitmapset  *modified_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3488,9 +3487,6 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				pagefree;
 	bool		have_tuple_lock = false;
 	bool		iscombo;
-	bool		satisfies_hot;
-	bool		satisfies_key;
-	bool		satisfies_id;
 	bool		use_hot_update = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
@@ -3517,21 +3513,30 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				 errmsg("cannot update tuples during a parallel operation")));
 
 	/*
-	 * Fetch the list of attributes to be checked for HOT update.  This is
-	 * wasted effort if we fail to update or have to put the new tuple on a
-	 * different page.  But we must compute the list before obtaining buffer
-	 * lock --- in the worst case, if we are doing an update on one of the
-	 * relevant system catalogs, we could deadlock if we try to fetch the list
-	 * later.  In any case, the relcache caches the data so this is usually
-	 * pretty cheap.
+	 * Fetch the list of attributes to be checked for various operations.
 	 *
-	 * Note that we get a copy here, so we need not worry about relcache flush
-	 * happening midway through.
+	 * For HOT considerations, this is wasted effort if we fail to update or
+	 * have to put the new tuple on a different page.  But we must compute the
+	 * list before obtaining buffer lock --- in the worst case, if we are doing
+	 * an update on one of the relevant system catalogs, we could deadlock if
+	 * we try to fetch the list later.  In any case, the relcache caches the
+	 * data so this is usually pretty cheap.
+	 *
+	 * We also need columns used by the replica identity, the columns that
+	 * are considered the "key" of rows in the table, and columns that are
+	 * part of indirect indexes.
+	 *
+	 * Note that we get copies of each bitmap, so we need not worry about
+	 * relcache flush happening midway through.
 	 */
 	hot_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_ALL);
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	interesting_attrs = bms_add_members(NULL, hot_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
+
 
 	block = ItemPointerGetBlockNumber(otid);
 	buffer = ReadBuffer(relation, block);
@@ -3552,7 +3557,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Assert(ItemIdIsNormal(lp));
 
 	/*
-	 * Fill in enough data in oldtup for HeapSatisfiesHOTandKeyUpdate to work
+	 * Fill in enough data in oldtup for HeapDetermineModifiedColumns to work
 	 * properly.
 	 */
 	oldtup.t_tableOid = RelationGetRelid(relation);
@@ -3578,6 +3583,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 		Assert(!(newtup->t_data->t_infomask & HEAP_HASOID));
 	}
 
+	/* Determine columns modified by the update. */
+	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
+												  &oldtup, newtup);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3589,10 +3598,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	 * is updates that don't manipulate key columns, not those that
 	 * serendipitiously arrive at the same key values.
 	 */
-	HeapSatisfiesHOTandKeyUpdate(relation, hot_attrs, key_attrs, id_attrs,
-								 &satisfies_hot, &satisfies_key,
-								 &satisfies_id, &oldtup, newtup);
-	if (satisfies_key)
+	if (!bms_overlap(modified_attrs, key_attrs))
 	{
 		*lockmode = LockTupleNoKeyExclusive;
 		mxact_status = MultiXactStatusNoKeyUpdate;
@@ -3831,6 +3837,8 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(modified_attrs);
+		bms_free(interesting_attrs);
 		return result;
 	}
 
@@ -4135,7 +4143,7 @@ l2:
 		 * to do a HOT update.  Check if any of the index columns have been
 		 * changed.  If not, then HOT update is possible.
 		 */
-		if (satisfies_hot)
+		if (!bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
 	}
 	else
@@ -4150,7 +4158,9 @@ l2:
 	 * ExtractReplicaIdentity() will return NULL if nothing needs to be
 	 * logged.
 	 */
-	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup, !satisfies_id, &old_key_copied);
+	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup,
+										   bms_overlap(modified_attrs, id_attrs),
+										   &old_key_copied);
 
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
@@ -4298,13 +4308,15 @@ l2:
 	bms_free(hot_attrs);
 	bms_free(key_attrs);
 	bms_free(id_attrs);
+	bms_free(modified_attrs);
+	bms_free(interesting_attrs);
 
 	return HeapTupleMayBeUpdated;
 }
 
 /*
  * Check if the specified attribute's value is same in both given tuples.
- * Subroutine for HeapSatisfiesHOTandKeyUpdate.
+ * Subroutine for HeapDetermineModifiedColumns.
  */
 static bool
 heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
@@ -4338,7 +4350,7 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 
 	/*
 	 * Extract the corresponding values.  XXX this is pretty inefficient if
-	 * there are many indexed columns.  Should HeapSatisfiesHOTandKeyUpdate do
+	 * there are many indexed columns.  Should HeapDetermineModifiedColumns do
 	 * a single heap_deform_tuple call on each tuple, instead?	But that
 	 * doesn't work for system columns ...
 	 */
@@ -4383,114 +4395,30 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 /*
  * Check which columns are being updated.
  *
- * This simultaneously checks conditions for HOT updates, for FOR KEY
- * SHARE updates, and REPLICA IDENTITY concerns.  Since much of the time they
- * will be checking very similar sets of columns, and doing the same tests on
- * them, it makes sense to optimize and do them together.
- *
- * We receive three bitmapsets comprising the three sets of columns we're
- * interested in.  Note these are destructively modified; that is OK since
- * this is invoked at most once in heap_update.
+ * Given an updated tuple, determine (and return into the output bitmapset),
+ * from those listed as interesting, the set of columns that changed.
  *
- * hot_result is set to TRUE if it's okay to do a HOT update (i.e. it does not
- * modified indexed columns); key_result is set to TRUE if the update does not
- * modify columns used in the key; id_result is set to TRUE if the update does
- * not modify columns in any index marked as the REPLICA IDENTITY.
+ * The input bitmapset is destructively modified; that is OK since this is
+ * invoked at most once in heap_update.
  */
-static void
-HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
-							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
-							 bool *satisfies_id,
+static Bitmapset *
+HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
 							 HeapTuple oldtup, HeapTuple newtup)
 {
-	int			next_hot_attnum;
-	int			next_key_attnum;
-	int			next_id_attnum;
-	bool		hot_result = true;
-	bool		key_result = true;
-	bool		id_result = true;
-
-	/* If REPLICA IDENTITY is set to FULL, id_attrs will be empty. */
-	Assert(bms_is_subset(id_attrs, key_attrs));
-	Assert(bms_is_subset(key_attrs, hot_attrs));
-
-	/*
-	 * If one of these sets contains no remaining bits, bms_first_member will
-	 * return -1, and after adding FirstLowInvalidHeapAttributeNumber (which
-	 * is negative!)  we'll get an attribute number that can't possibly be
-	 * real, and thus won't match any actual attribute number.
-	 */
-	next_hot_attnum = bms_first_member(hot_attrs);
-	next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
-	next_key_attnum = bms_first_member(key_attrs);
-	next_key_attnum += FirstLowInvalidHeapAttributeNumber;
-	next_id_attnum = bms_first_member(id_attrs);
-	next_id_attnum += FirstLowInvalidHeapAttributeNumber;
+	int		attnum;
+	Bitmapset *modified = NULL;
 
-	for (;;)
+	while ((attnum = bms_first_member(interesting_cols)) >= 0)
 	{
-		bool		changed;
-		int			check_now;
-
-		/*
-		 * Since the HOT attributes are a superset of the key attributes and
-		 * the key attributes are a superset of the id attributes, this logic
-		 * is guaranteed to identify the next column that needs to be checked.
-		 */
-		if (hot_result && next_hot_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_hot_attnum;
-		else if (key_result && next_key_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_key_attnum;
-		else if (id_result && next_id_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_id_attnum;
-		else
-			break;
+		attnum += FirstLowInvalidHeapAttributeNumber;
 
-		/* See whether it changed. */
-		changed = !heap_tuple_attr_equals(RelationGetDescr(relation),
-										  check_now, oldtup, newtup);
-		if (changed)
-		{
-			if (check_now == next_hot_attnum)
-				hot_result = false;
-			if (check_now == next_key_attnum)
-				key_result = false;
-			if (check_now == next_id_attnum)
-				id_result = false;
-
-			/* if all are false now, we can stop checking */
-			if (!hot_result && !key_result && !id_result)
-				break;
-		}
-
-		/*
-		 * Advance the next attribute numbers for the sets that contain the
-		 * attribute we just checked.  As we work our way through the columns,
-		 * the next_attnum values will rise; but when each set becomes empty,
-		 * bms_first_member() will return -1 and the attribute number will end
-		 * up with a value less than FirstLowInvalidHeapAttributeNumber.
-		 */
-		if (hot_result && check_now == next_hot_attnum)
-		{
-			next_hot_attnum = bms_first_member(hot_attrs);
-			next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
-		if (key_result && check_now == next_key_attnum)
-		{
-			next_key_attnum = bms_first_member(key_attrs);
-			next_key_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
-		if (id_result && check_now == next_id_attnum)
-		{
-			next_id_attnum = bms_first_member(id_attrs);
-			next_id_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
+		if (!heap_tuple_attr_equals(RelationGetDescr(relation),
+								   attnum, oldtup, newtup))
+			modified = bms_add_member(modified,
+									  attnum - FirstLowInvalidHeapAttributeNumber);
 	}
 
-	*satisfies_hot = hot_result;
-	*satisfies_key = key_result;
-	*satisfies_id = id_result;
+	return modified;
 }
 
 /*
Attachment: 0005_warm_updates_v18.patch (application/octet-stream)
diff --git a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c
index f2eda67..b356e2b 100644
--- a/contrib/bloom/blutils.c
+++ b/contrib/bloom/blutils.c
@@ -142,6 +142,7 @@ blhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/contrib/bloom/blvacuum.c b/contrib/bloom/blvacuum.c
index 04abd0f..ff50361 100644
--- a/contrib/bloom/blvacuum.c
+++ b/contrib/bloom/blvacuum.c
@@ -88,7 +88,7 @@ blbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 		while (itup < itupEnd)
 		{
 			/* Do we have to delete this tuple? */
-			if (callback(&itup->heapPtr, callback_state))
+			if (callback(&itup->heapPtr, false, callback_state) == IBDCR_DELETE)
 			{
 				/* Yes; adjust count of tuples that will be left on page */
 				BloomPageGetOpaque(page)->maxoff--;
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index b22563b..b4a1465 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -116,6 +116,7 @@ brinhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/gin/ginvacuum.c b/src/backend/access/gin/ginvacuum.c
index c9ccfee..8ed71c5 100644
--- a/src/backend/access/gin/ginvacuum.c
+++ b/src/backend/access/gin/ginvacuum.c
@@ -56,7 +56,8 @@ ginVacuumItemPointers(GinVacuumState *gvs, ItemPointerData *items,
 	 */
 	for (i = 0; i < nitem; i++)
 	{
-		if (gvs->callback(items + i, gvs->callback_state))
+		if (gvs->callback(items + i, false, gvs->callback_state) ==
+				IBDCR_DELETE)
 		{
 			gvs->result->tuples_removed += 1;
 			if (!tmpitems)
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c
index 6593771..843389b 100644
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -94,6 +94,7 @@ gisthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/gist/gistvacuum.c b/src/backend/access/gist/gistvacuum.c
index 77d9d12..0955db6 100644
--- a/src/backend/access/gist/gistvacuum.c
+++ b/src/backend/access/gist/gistvacuum.c
@@ -202,7 +202,8 @@ gistbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 				iid = PageGetItemId(page, i);
 				idxtuple = (IndexTuple) PageGetItem(page, iid);
 
-				if (callback(&(idxtuple->t_tid), callback_state))
+				if (callback(&(idxtuple->t_tid), false, callback_state) ==
+						IBDCR_DELETE)
 					todelete[ntodelete++] = i;
 				else
 					stats->num_index_tuples += 1;
diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c
index cfcec34..2274237 100644
--- a/src/backend/access/hash/hash.c
+++ b/src/backend/access/hash/hash.c
@@ -75,6 +75,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->ambuild = hashbuild;
 	amroutine->ambuildempty = hashbuildempty;
 	amroutine->aminsert = hashinsert;
+	amroutine->amwarminsert = hashwarminsert;
 	amroutine->ambulkdelete = hashbulkdelete;
 	amroutine->amvacuumcleanup = hashvacuumcleanup;
 	amroutine->amcanreturn = NULL;
@@ -92,6 +93,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = hashrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -233,11 +235,11 @@ hashbuildCallback(Relation index,
  *	Hash on the heap tuple's key, form an index tuple with hash code.
  *	Find the appropriate location for the new tuple, and put it there.
  */
-bool
-hashinsert(Relation rel, Datum *values, bool *isnull,
+static bool
+hashinsert_internal(Relation rel, Datum *values, bool *isnull,
 		   ItemPointer ht_ctid, Relation heapRel,
 		   IndexUniqueCheck checkUnique,
-		   IndexInfo *indexInfo)
+		   IndexInfo *indexInfo, bool warm_update)
 {
 	Datum		index_values[1];
 	bool		index_isnull[1];
@@ -253,6 +255,11 @@ hashinsert(Relation rel, Datum *values, bool *isnull,
 	itup = index_form_tuple(RelationGetDescr(rel), index_values, index_isnull);
 	itup->t_tid = *ht_ctid;
 
+	if (warm_update)
+		ItemPointerSetFlags(&itup->t_tid, HASH_INDEX_WARM_POINTER);
+	else
+		ItemPointerClearFlags(&itup->t_tid);
+
 	_hash_doinsert(rel, itup, heapRel);
 
 	pfree(itup);
@@ -260,6 +267,26 @@ hashinsert(Relation rel, Datum *values, bool *isnull,
 	return false;
 }
 
+bool
+hashinsert(Relation rel, Datum *values, bool *isnull,
+		   ItemPointer ht_ctid, Relation heapRel,
+		   IndexUniqueCheck checkUnique,
+		   IndexInfo *indexInfo)
+{
+	return hashinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, false);
+}
+
+bool
+hashwarminsert(Relation rel, Datum *values, bool *isnull,
+		   ItemPointer ht_ctid, Relation heapRel,
+		   IndexUniqueCheck checkUnique,
+		   IndexInfo *indexInfo)
+{
+	return hashinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, true);
+
+}
 
 /*
  *	hashgettuple() -- Get the next tuple in the scan.
@@ -274,6 +301,8 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 	OffsetNumber offnum;
 	ItemPointer current;
 	bool		res;
+	IndexTuple	itup;
+
 
 	/* Hash indexes are always lossy since we store only the hash code */
 	scan->xs_recheck = true;
@@ -316,8 +345,6 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
 			 offnum <= maxoffnum;
 			 offnum = OffsetNumberNext(offnum))
 		{
-			IndexTuple	itup;
-
 			itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 			if (ItemPointerEquals(&(so->hashso_heappos), &(itup->t_tid)))
 				break;
@@ -789,6 +816,8 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 		Page		page;
 		OffsetNumber deletable[MaxOffsetNumber];
 		int			ndeletable = 0;
+		OffsetNumber clearwarm[MaxOffsetNumber];
+		int			nclearwarm = 0;
 		bool		retain_pin = false;
 
 		vacuum_delay_point();
@@ -806,20 +835,35 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 			IndexTuple	itup;
 			Bucket		bucket;
 			bool		kill_tuple = false;
+			bool		clear_tuple = false;
+			int			flags;
+			bool		is_warm;
+			IndexBulkDeleteCallbackResult	result;
 
 			itup = (IndexTuple) PageGetItem(page,
 											PageGetItemId(page, offno));
 			htup = &(itup->t_tid);
 
+			flags = ItemPointerGetFlags(&itup->t_tid);
+			is_warm = ((flags & HASH_INDEX_WARM_POINTER) != 0);
+
 			/*
 			 * To remove the dead tuples, we strictly want to rely on results
 			 * of callback function.  refer btvacuumpage for detailed reason.
 			 */
-			if (callback && callback(htup, callback_state))
+			if (callback)
 			{
-				kill_tuple = true;
-				if (tuples_removed)
-					*tuples_removed += 1;
+				result = callback(htup, is_warm, callback_state);
+				if (result == IBDCR_DELETE)
+				{
+					kill_tuple = true;
+					if (tuples_removed)
+						*tuples_removed += 1;
+				}
+				else if (result == IBDCR_CLEAR_WARM)
+				{
+					clear_tuple = true;
+				}
 			}
 			else if (split_cleanup)
 			{
@@ -842,6 +886,12 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 				}
 			}
 
+			if (clear_tuple)
+			{
+				/* clear the WARM pointer */
+				clearwarm[nclearwarm++] = offno;
+			}
+
 			if (kill_tuple)
 			{
 				/* mark the item for deletion */
@@ -866,12 +916,27 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 		/*
 		 * Apply deletions, advance to next page and write page if needed.
 		 */
-		if (ndeletable > 0)
+		if (ndeletable > 0 || nclearwarm > 0)
 		{
 			/* No ereport(ERROR) until changes are logged */
 			START_CRIT_SECTION();
 
-			PageIndexMultiDelete(page, deletable, ndeletable);
+			/*
+			 * Clear the WARM pointers.
+			 *
+			 * We must do this before dealing with the dead items because
+			 * PageIndexMultiDelete may move items around to compactify the
+			 * array and hence offnums recorded earlier won't make any sense
+			 * after PageIndexMultiDelete is called.
+			 */
+			if (nclearwarm > 0)
+				_hash_clear_items(page, clearwarm, nclearwarm);
+
+			/*
+			 * And delete the deletable items
+			 */
+			if (ndeletable > 0)
+				PageIndexMultiDelete(page, deletable, ndeletable);
 			bucket_dirty = true;
 
 			/*
@@ -892,6 +957,7 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 				XLogRecPtr	recptr;
 
 				xlrec.is_primary_bucket_page = (buf == bucket_buf) ? true : false;
+				xlrec.nclearitems = nclearwarm;
 
 				XLogBeginInsert();
 				XLogRegisterData((char *) &xlrec, SizeOfHashDelete);
@@ -904,6 +970,8 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 					XLogRegisterBuffer(0, bucket_buf, REGBUF_STANDARD | REGBUF_NO_IMAGE);
 
 				XLogRegisterBuffer(1, buf, REGBUF_STANDARD);
+				XLogRegisterBufData(1, (char *) clearwarm,
+									nclearwarm * sizeof(OffsetNumber));
 				XLogRegisterBufData(1, (char *) deletable,
 									ndeletable * sizeof(OffsetNumber));
 
diff --git a/src/backend/access/hash/hash_xlog.c b/src/backend/access/hash/hash_xlog.c
index 8647e8c..fe89ee1 100644
--- a/src/backend/access/hash/hash_xlog.c
+++ b/src/backend/access/hash/hash_xlog.c
@@ -840,6 +840,7 @@ hash_xlog_delete(XLogReaderState *record)
 	/* replay the record for deleting entries in bucket page */
 	if (action == BLK_NEEDS_REDO)
 	{
+		uint16		nclearwarm = xldata->nclearitems;
 		char	   *ptr;
 		Size		len;
 
@@ -849,12 +850,17 @@ hash_xlog_delete(XLogReaderState *record)
 
 		if (len > 0)
 		{
+			OffsetNumber *clearwarm;
 			OffsetNumber *unused;
 			OffsetNumber *unend;
 
-			unused = (OffsetNumber *) ptr;
+			clearwarm = (OffsetNumber *) ptr;
+			unused = clearwarm + nclearwarm;
 			unend = (OffsetNumber *) ((char *) ptr + len);
 
+			if (nclearwarm)
+				_hash_clear_items(page, clearwarm, nclearwarm);
+
 			if ((unend - unused) > 0)
 				PageIndexMultiDelete(page, unused, unend - unused);
 		}
diff --git a/src/backend/access/hash/hashpage.c b/src/backend/access/hash/hashpage.c
index 622cc4b..e689f90 100644
--- a/src/backend/access/hash/hashpage.c
+++ b/src/backend/access/hash/hashpage.c
@@ -1576,3 +1576,17 @@ _hash_getbucketbuf_from_hashkey(Relation rel, uint32 hashkey, int access,
 
 	return buf;
 }
+
+void _hash_clear_items(Page page, OffsetNumber *clearitemnos,
+					   uint16 nclearitems)
+{
+	int			i;
+	IndexTuple	itup;
+
+	for (i = 0; i < nclearitems; i++)
+	{
+		itup = (IndexTuple) PageGetItem(page,
+				PageGetItemId(page, clearitemnos[i]));
+		ItemPointerClearFlags(&itup->t_tid);
+	}
+}
diff --git a/src/backend/access/hash/hashsearch.c b/src/backend/access/hash/hashsearch.c
index 2d92049..330ccc5 100644
--- a/src/backend/access/hash/hashsearch.c
+++ b/src/backend/access/hash/hashsearch.c
@@ -59,6 +59,8 @@ _hash_next(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
 	return true;
 }
 
@@ -367,6 +369,9 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
+
 	return true;
 }
 
diff --git a/src/backend/access/hash/hashutil.c b/src/backend/access/hash/hashutil.c
index 2e99719..48464b8 100644
--- a/src/backend/access/hash/hashutil.c
+++ b/src/backend/access/hash/hashutil.c
@@ -17,9 +17,11 @@
 #include "access/hash.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
 #include "storage/buf_internals.h"
+#include "utils/datum.h"
 
 #define CALC_NEW_BUCKET(old_bucket, lowmask) \
 			old_bucket | (lowmask + 1)
@@ -514,3 +516,70 @@ _hash_kill_items(IndexScanDesc scan)
 		MarkBufferDirtyHint(so->hashso_curbuf, true);
 	}
 }
+
+/*
+ * Recheck if the heap tuple satisfies the key stored in the index tuple
+ */
+bool
+hashrecheck(Relation indexRel, IndexInfo *indexInfo, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	bool		isavail[INDEX_MAX_KEYS];
+	Datum		values2[INDEX_MAX_KEYS];
+	bool		isnull2[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	FormIndexPlainDatum(indexInfo, heapRel, heapTuple, values, isnull, isavail);
+
+	/*
+	 * HASH indexes compute a hash value of the key and store that in the
+	 * index. So we must first obtain the hash of the value obtained from the
+	 * heap and then do a comparison
+	 */
+	_hash_convert_tuple(indexRel, values, isnull, values2, isnull2);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		if (!isavail[i - 1])
+			continue;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL then they are equal
+		 */
+		if (isnull2[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If either is NULL then they are not equal
+		 */
+		if (isnull2[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now do a raw memory comparison
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values2[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	return equal;
+}
diff --git a/src/backend/access/heap/README.WARM b/src/backend/access/heap/README.WARM
new file mode 100644
index 0000000..7569227
--- /dev/null
+++ b/src/backend/access/heap/README.WARM
@@ -0,0 +1,305 @@
+src/backend/access/heap/README.WARM
+
+Write Amplification Reduction Method (WARM)
+===========================================
+
+The Heap Only Tuple (HOT) feature greatly reduced redundant index
+entries and allowed re-use of the dead space occupied by previously
+updated or deleted tuples (see src/backend/access/heap/README.HOT).
+
+One of the necessary conditions for satisfying HOT update is that the
+update must not change a column used in any of the indexes on the table.
+The condition is sometimes hard to meet, especially for complex
+workloads with several indexes on large yet frequently updated tables.
+Worse, sometimes only one or two index columns may be updated, but the
+regular non-HOT update will still insert a new index entry in every
+index on the table, irrespective of whether the key pertaining to the
+index changed or not.
+
+WARM is a technique devised to address these problems.
+
+
+Update Chains With Multiple Index Entries Pointing to the Root
+--------------------------------------------------------------
+
+When a non-HOT update is caused by an index key change, a new index
+entry must be inserted for the changed index. But if the index key
+hasn't changed for other indexes, we don't really need to insert a new
+entry. Even though the existing index entry is pointing to the old
+tuple, the new tuple is reachable via the t_ctid chain. To keep things
+simple, a WARM update requires that the heap block must have enough
+space to store the new version of the tuple. This is the same
+requirement as for HOT updates.
+
+In WARM, we ensure that every index entry always points to the root of
+the WARM chain. In fact, a WARM chain looks exactly like a HOT chain
+except for the fact that there could be multiple index entries pointing
+to the root of the chain. So when a new entry is inserted in an index
+for the updated tuple during a WARM update, the new entry is made to
+point to the root of the WARM chain.
+
+For example, consider a table with two columns and an index on each
+column. When a tuple is first inserted into the table, each index has
+exactly one entry pointing to the tuple.
+
+	lp [1]
+	[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's entry (aaaa) also points to 1
+
+Now if the tuple's second column is updated and if there is room on the
+page, we perform a WARM update. Index1 does not get any new entry, and
+Index2's new entry still points to the root of the chain.
+
+	lp [1]  [2]
+	[1111, aaaa]->[1111, bbbb]
+
+	Index1's entry (1111) points to 1
+	Index2's old entry (aaaa) points to 1
+	Index2's new entry (bbbb) also points to 1
+
+"An update chain which has more than one index entry pointing to its
+root line pointer is called a WARM chain, and the action that creates
+a WARM chain is called a WARM update."
+
+Since all indexes always point to the root of the WARM chain, even when
+there is more than one index entry, WARM chains can be pruned and
+dead tuples can be removed without a need to do corresponding index
+cleanup.
+
+While this solves the problem of pruning dead tuples from a HOT/WARM
+chain, it also opens up a new technical challenge because now we have a
+situation where a heap tuple is reachable from multiple index entries,
+each having a different index key. While MVCC still ensures that only
+valid tuples are returned, a tuple with a wrong index key may be
+returned because of wrong index entries. In the above example, tuple
+[1111, bbbb] is reachable from both keys (aaaa) as well as (bbbb). For
+this reason, tuples returned from a WARM chain must always be rechecked
+for index key-match.
+
+Recheck Index Key Against Heap Tuple
+------------------------------------
+
+Since every Index AM has its own notion of index tuples, each Index AM
+must implement its own method to recheck heap tuples. For example, a
+hash index stores the hash value of the column, and hence the hash AM's
+recheck routine must first compute the hash of the heap attribute and
+then compare it against the value stored in the index tuple.
+
+The patch currently implements recheck routines for hash and btree
+indexes. If a table has an index which doesn't support a recheck
+routine, WARM updates are disabled on that table.
+
+Problem With Duplicate (key, ctid) Index Entries
+------------------------------------------------
+
+The index-key recheck logic works as long as there are no duplicate
+index entries with the same key pointing to the same WARM chain.
+Otherwise, the same valid tuple will be reachable via multiple index
+entries, each satisfying the index key check. In the above example, if
+the tuple [1111, bbbb] is
+again updated to [1111, aaaa] and if we insert a new index entry (aaaa)
+pointing to the root line pointer, we will end up with the following
+structure:
+
+	lp [1]  [2]  [3]
+	[1111, aaaa]->[1111, bbbb]->[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's oldest entry (aaaa) points to 1
+	Index2's old entry (bbbb) also points to 1
+	Index2's new entry (aaaa) also points to 1
+
+We must solve this problem to ensure that the same tuple is not
+reachable via multiple index pointers. There are a couple of ways to
+address this issue:
+
+1. Do not allow WARM update to a tuple from a WARM chain. This
+guarantees that there can never be duplicate index entries to the same
+root line pointer because we must have checked for old and new index
+keys while doing the first WARM update.
+
+2. Do not allow duplicate (key, ctid) index pointers. In the above
+example, since (aaaa, 1) already exists in the index, we must not insert
+a duplicate index entry.
+
+The patch currently implements option 1, i.e. it does not allow a WARM
+update to a tuple from a WARM chain. HOT updates are still fine because
+they do not add a new index entry.
+
+Even with the restriction, this is a significant improvement because
+the number of regular (non-HOT, non-WARM) updates can be cut in half.
+
+Expression and Partial Indexes
+------------------------------
+
+Expressions may evaluate to the same value even if the underlying column
+values have changed. A simple example is an index on "lower(col)" which
+will evaluate to the same value if the new heap value differs only in
+case. So we cannot rely solely on the heap column check to
+decide whether or not to insert a new index entry for expression
+indexes. Similarly, for partial indexes, the predicate expression must
+be evaluated to decide whether or not to cause a new index entry when
+columns referred to in the predicate expression change.
+
+(None of this is currently implemented; we simply disallow WARM updates
+if a column used in an expression index or an index predicate has
+changed.)
+
+
+Efficiently Finding the Root Line Pointer
+-----------------------------------------
+
+During WARM update, we must be able to find the root line pointer of the
+tuple being updated. It must be noted that the t_ctid field in the heap
+tuple header is usually used to find the next tuple in the update chain.
+But the tuple that we are updating must be the last tuple in the update
+chain, and in that case the t_ctid field usually points to the tuple itself.
+So in theory, we could use the t_ctid to store additional information in
+the last tuple of the update chain, if the information about the tuple
+being the last tuple is stored elsewhere.
+
+We now utilize another bit from t_infomask2 to explicitly identify that
+this is the last tuple in the update chain.
+
+HEAP_LATEST_TUPLE - When this bit is set, the tuple is the last tuple in
+the update chain. The OffsetNumber part of t_ctid points to the root
+line pointer of the chain when HEAP_LATEST_TUPLE flag is set.
+
+If an UPDATE operation is aborted, the last tuple in the update chain
+becomes dead, and the tuple which again becomes the last valid tuple in
+the chain no longer carries the root line pointer information (its
+t_ctid was overwritten to point to the aborted tuple). In such rare
+cases, the root line pointer must be found the hard way, by scanning
+the entire heap page.
+
+Tracking WARM Chains
+--------------------
+
+When a tuple is WARM updated, the old, the new and every subsequent
+tuple in the chain are marked with a special HEAP_WARM_UPDATED flag. We
+use the last remaining bit in t_infomask2 to store this information.
+
+When a tuple is returned from a WARM chain, the caller must do additional
+checks to ensure that the tuple matches the index key. Even if the tuple
+precedes the WARM update in the chain, it must still be rechecked for the index
+key match (case when old tuple is returned by the new index key). So we must
+follow the update chain everytime to the end to see check if this is a WARM
+chain.
+
+Converting WARM chains back to HOT chains (VACUUM ?)
+----------------------------------------------------
+
+The current implementation of WARM allows only one WARM update per
+chain. This simplifies the design and addresses certain issues around
+duplicate scans. But this also implies that the benefit of WARM will be
+no more than 50%, which is still significant, but if we could return
+WARM chains back to normal status, we could do far more WARM updates.
+
+A distinct property of a WARM chain is that at least one index has more
+than one live index entry pointing to the root of the chain. In other
+words, if we can remove duplicate entry from every index or conclusively
+prove that there are no duplicate index entries for the root line
+pointer, the chain can again be marked as HOT.
+
+Here is one idea:
+
+A WARM chain has two parts, separated by the tuple that caused WARM
+update. All tuples in each part have matching index keys, but certain
+index keys may not match between the two parts. Let's say we mark heap
+tuples in the second part with a special HEAP_WARM_TUPLE flag. Similarly, the
+new index entries caused by the first WARM update are also marked with
+INDEX_WARM_POINTER flags.
+
+There are two distinct parts of the WARM chain. The first part where none of
+the tuples have HEAP_WARM_TUPLE flag set and the second part where every tuple
+has the flag set. Each of these parts satisfy HOT property on its own i.e. all
+tuples have the same value for indexed columns. But these two parts are
+separated by the WARM update which breaks HOT property for one or more indexes.
+
+Heap chain: [1] [2] [3] [4]
+			[aaaa, 1111] -> [aaaa, 1111] -> [bbbb, 1111]W -> [bbbb, 1111]W
+
+Index1: 	(aaaa) points to 1 (satisfies only tuples without W)
+			(bbbb)W points to 1 (satisfies only tuples marked with W)
+
+Index2:		(1111) points to 1 (satisfies tuples with and without W)
+
+
+It's clear that for indexes with both pointers, a heap tuple without
+HEAP_WARM_TUPLE flag will be reachable from the index pointer cleared of
+INDEX_WARM_POINTER flag and that with HEAP_WARM_TUPLE flag will be reachable
+from the pointer with INDEX_WARM_POINTER. But for indexes which did not create
+a new entry, tuples with and without the HEAP_WARM_TUPLE flag will be reachable
+from the original index pointer which doesn't have the INDEX_WARM_POINTER flag.
+(there is no pointer with INDEX_WARM_POINTER in such indexes).
+
+During the first heap scan of VACUUM, we look for tuples with HEAP_WARM_UPDATED
+set.  If all or none of the live tuples in the chain are marked with
+HEAP_WARM_TUPLE flag, then the chain is a candidate for HOT conversion. We
+remember the root line pointer and whether the tuples in the chain had
+HEAP_WARM_TUPLE flags set or not.
+
+If we have a WARM chain with HEAP_WARM_TUPLE set, then our goal is to remove
+the index pointers without INDEX_WARM_POINTER flags and vice versa. But there
+is a catch. For Index2 above, there is only one pointer and it does not have
+the INDEX_WARM_POINTER flag set. Since all heap tuples are reachable only via
+this pointer, it must not be removed. IOW we should remove index pointer
+without INDEX_WARM_POINTER iff another index pointer with INDEX_WARM_POINTER
+exists. Since index vacuum may visit these pointers in any order, we will need
+another index pass to remove dead index pointers. So in the first index pass we
+check which WARM candidates have 2 index pointers. In the second pass, we
+remove the dead pointer and clear the INDEX_WARM_POINTER flag if that's the
+surviving index pointer.
+
+During the second heap scan, we fix the WARM chain by clearing the
+HEAP_WARM_UPDATED and HEAP_WARM_TUPLE flags on its tuples.
+
+There are some more problems around aborted vacuums. For example, if vacuum
+aborts after clearing INDEX_WARM_POINTER flag but before removing the other
+index pointer, we will end up with two index pointers and none of those will
+have INDEX_WARM_POINTER set.  But since the HEAP_WARM_UPDATED flag on the heap
+tuple is still set, further WARM updates to the chain will be blocked. I guess
+we will need some special handling for the case of multiple index pointers
+where none has the INDEX_WARM_POINTER flag set. We can either leave
+these WARM chains alone and let them die with a subsequent non-WARM update or
+must apply heap-recheck logic during index vacuum to find the dead pointer.
+Given that vacuum-aborts are not common, I am inclined to leave this case
+unhandled. We must still check for presence of multiple index pointers without
+INDEX_WARM_POINTER flags and ensure that we don't accidentally remove either of
+these pointers and also must not clear WARM chains.
+
+CREATE INDEX CONCURRENTLY
+-------------------------
+
+Currently CREATE INDEX CONCURRENTLY (CIC) is implemented as a 3-phase
+process.  In the first phase, we create a catalog entry for the new index
+so that the index is visible to all other backends, but don't yet use
+it for either reads or writes.  We do, however, ensure that no new broken HOT
+chains are created by new transactions. In the second phase, we build
+the new index using a MVCC snapshot and then make the index available
+for inserts. We then do another pass over the index and insert any
+missing tuples, each time indexing only its root line pointer. See
+README.HOT for details about how HOT impacts CIC and how various
+challenges are tackled.
+
+WARM poses another challenge because it allows creation of HOT chains
+even when an index key is changed. But since the index is not ready for
+insertion until the second phase is over, we might end up with a
+situation where the HOT chain has tuples with different index columns,
+yet only one of these values is indexed by the new index. Note that
+during the third phase, we only index tuples whose root line pointer is
+missing from the index. But we can't easily check if the existing index
+tuple is actually indexing the heap tuple visible to the new MVCC
+snapshot. Finding that information will require us to query the index
+again for every tuple in the chain, especially if it's a WARM tuple.
+This would require repeated access to the index. Another option would be
+to return index keys along with the heap TIDs when index is scanned for
+collecting all indexed TIDs during third phase. We can then compare the
+heap tuple against the already indexed key and decide whether or not to
+index the new tuple.
+
+We solve this problem more simply by disallowing WARM updates until the
+index is ready for insertion. We don't need to disallow WARM on a
+wholesale basis; only those updates that change the columns of the
+new index are prevented from being WARM updates.
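The index-vacuum rule described earlier (remove a pointer only if the chain stays reachable through the other pointer) can be sketched as follows. All names and flag values below are hypothetical illustrations of the invariant, not code from the patch:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative flag; in the design above this bit lives on the index pointer. */
#define INDEX_WARM_POINTER 0x01

typedef struct IndexPtr
{
    bool exists;                /* does this index pointer exist at all? */
    int  flags;                 /* INDEX_WARM_POINTER or 0 */
} IndexPtr;

/*
 * May 'victim' be removed during the second index pass?  The pointer whose
 * flavour matches the surviving chain (WARM chain => INDEX_WARM_POINTER
 * pointer, clear chain => clear pointer) must always stay; the other one
 * may go only if the surviving pointer actually exists.
 */
static bool
pointer_is_removable(const IndexPtr *victim, const IndexPtr *other,
                     bool chain_is_warm)
{
    bool victim_is_warm = (victim->flags & INDEX_WARM_POINTER) != 0;
    bool other_is_warm = (other->flags & INDEX_WARM_POINTER) != 0;

    if (victim_is_warm == chain_is_warm)
        return false;           /* surviving flavour: never removable */

    return other->exists && other_is_warm == chain_is_warm;
}
```

For Index2 in the example above only the clear pointer exists, so even though the chain is WARM that pointer is not removable and all heap tuples stay reachable.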
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 26a7af4..c86fbc6 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -1974,6 +1974,206 @@ heap_fetch(Relation relation,
 }
 
 /*
+ * Check status of a (possibly) WARM chain.
+ *
+ * This function looks at a HOT/WARM chain starting at tid and returns a bitmask
+ * of information. We only follow the chain as long as it's known to be valid
+ * HOT chain. Information returned by the function consists of:
+ *
+ *  HCWC_WARM_UPDATED_TUPLE - a tuple with HEAP_WARM_UPDATED is found somewhere
+ *  						  in the chain. Note that when a tuple is WARM
+ *  						  updated, both old and new versions are marked
+ *  						  with this flag.
+ *
+ *  HCWC_WARM_TUPLE  - a tuple with HEAP_WARM_TUPLE is found somewhere in
+ *					  the chain.
+ *
+ *  HCWC_CLEAR_TUPLE - a tuple without HEAP_WARM_TUPLE is found somewhere in
+ *  					 the chain.
+ *
+ *	If stop_at_warm is true, we stop when the first HEAP_WARM_UPDATED tuple is
+ *	found and return information collected so far.
+ */
+HeapCheckWarmChainStatus
+heap_check_warm_chain(Page dp, ItemPointer tid, bool stop_at_warm)
+{
+	TransactionId				prev_xmax = InvalidTransactionId;
+	OffsetNumber				offnum;
+	HeapTupleData				heapTuple;
+	HeapCheckWarmChainStatus	status = 0;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+		{
+			if (ItemIdIsRedirected(lp))
+			{
+				/* Follow the redirect */
+				offnum = ItemIdGetRedirect(lp);
+				continue;
+			}
+			/* else must be end of chain */
+			break;
+		}
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		if (HeapTupleHeaderIsWarmUpdated(heapTuple.t_data))
+		{
+			/* We found a WARM_UPDATED tuple */
+			status |= HCWC_WARM_UPDATED_TUPLE;
+
+			/*
+			 * If we've been told to stop at the first WARM_UPDATED tuple, just
+			 * return whatever information we have collected so far.
+			 */
+			if (stop_at_warm)
+				return status;
+
+			/*
+			 * Remember whether it's a CLEAR or a WARM tuple.
+			 */
+			if (HeapTupleHeaderIsWarm(heapTuple.t_data))
+				status |= HCWC_WARM_TUPLE;
+			else
+				status |= HCWC_CLEAR_TUPLE;
+		}
+		else
+			/* Must be a regular, non-WARM tuple */
+			status |= HCWC_CLEAR_TUPLE;
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
+		/*
+		 * It can't be a HOT chain if the tuple contains root line pointer
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	/* End of chain; return whatever status bits we collected. */
+	return status;
+}
+
+/*
+ * Scan through the WARM chain starting at tid and reset all WARM related
+ * flags. At the end, the chain will have all characteristics of a regular HOT
+ * chain.
+ *
+ * Return the number of cleared offnums. Cleared offnums are returned in the
+ * passed-in cleared_offnums array. The caller must ensure that the array is
+ * large enough to hold the maximum number of offnums that can be cleared by
+ * this invocation of heap_clear_warm_chain().
+ */
+int
+heap_clear_warm_chain(Page dp, ItemPointer tid, OffsetNumber *cleared_offnums)
+{
+	TransactionId				prev_xmax = InvalidTransactionId;
+	OffsetNumber				offnum;
+	HeapTupleData				heapTuple;
+	int							num_cleared = 0;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+		{
+			if (ItemIdIsRedirected(lp))
+			{
+				/* Follow the redirect */
+				offnum = ItemIdGetRedirect(lp);
+				continue;
+			}
+			/* else must be end of chain */
+			break;
+		}
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		/*
+		 * Clear WARM_UPDATED and WARM flags.
+		 */
+		if (HeapTupleHeaderIsWarmUpdated(heapTuple.t_data))
+		{
+			HeapTupleHeaderClearWarmUpdated(heapTuple.t_data);
+			HeapTupleHeaderClearWarm(heapTuple.t_data);
+			cleared_offnums[num_cleared++] = offnum;
+		}
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
+		/*
+		 * It can't be a HOT chain if the tuple contains root line pointer
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	return num_cleared;
+}
+
+/*
  *	heap_hot_search_buffer	- search HOT chain for tuple satisfying snapshot
  *
  * On entry, *tid is the TID of a tuple (either a simple tuple, or the root
@@ -1993,11 +2193,14 @@ heap_fetch(Relation relation,
  * Unlike heap_fetch, the caller must already have pin and (at least) share
  * lock on the buffer; it is still pinned/locked at exit.  Also unlike
  * heap_fetch, we do not report any pgstats count; caller may do so if wanted.
+ *
+ * recheck should be set to false on entry by the caller; it will be set to
+ * true on exit if a WARM tuple is encountered.
  */
 bool
 heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 					   Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call)
+					   bool *all_dead, bool first_call, bool *recheck)
 {
 	Page		dp = (Page) BufferGetPage(buffer);
 	TransactionId prev_xmax = InvalidTransactionId;
@@ -2051,9 +2254,12 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		ItemPointerSetOffsetNumber(&heapTuple->t_self, offnum);
 
 		/*
-		 * Shouldn't see a HEAP_ONLY tuple at chain start.
+		 * Shouldn't see a HEAP_ONLY tuple at chain start, unless we are
+		 * dealing with a WARM updated tuple, in which case deferred triggers
+		 * may request to fetch a WARM tuple from the middle of a chain.
 		 */
-		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple))
+		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple) &&
+				!HeapTupleIsWarmUpdated(heapTuple))
 			break;
 
 		/*
@@ -2066,6 +2272,20 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 			break;
 
 		/*
+		 * Check if there exists a WARM tuple somewhere down the chain and set
+		 * recheck to TRUE.
+		 *
+		 * XXX This is not very efficient right now, and we should look for
+		 * possible improvements here.
+		 */
+		if (recheck && *recheck == false)
+		{
+			HeapCheckWarmChainStatus status;
+			status = heap_check_warm_chain(dp, &heapTuple->t_self, true);
+			*recheck = HCWC_IS_WARM_UPDATED(status);
+		}
+
+		/*
 		 * When first_call is true (and thus, skip is initially false) we'll
 		 * return the first tuple we find.  But on later passes, heapTuple
 		 * will initially be pointing to the tuple we returned last time.
@@ -2114,7 +2334,8 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		 * Check to see if HOT chain continues past this tuple; if so fetch
 		 * the next offnum and loop around.
 		 */
-		if (HeapTupleIsHotUpdated(heapTuple))
+		if (HeapTupleIsHotUpdated(heapTuple) &&
+			!HeapTupleHeaderHasRootOffset(heapTuple->t_data))
 		{
 			Assert(ItemPointerGetBlockNumber(&heapTuple->t_data->t_ctid) ==
 				   ItemPointerGetBlockNumber(tid));
@@ -2138,18 +2359,41 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
  */
 bool
 heap_hot_search(ItemPointer tid, Relation relation, Snapshot snapshot,
-				bool *all_dead)
+				bool *all_dead, bool *recheck, Buffer *cbuffer,
+				HeapTuple heapTuple)
 {
 	bool		result;
 	Buffer		buffer;
-	HeapTupleData heapTuple;
+	ItemPointerData ret_tid = *tid;
 
 	buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	LockBuffer(buffer, BUFFER_LOCK_SHARE);
-	result = heap_hot_search_buffer(tid, relation, buffer, snapshot,
-									&heapTuple, all_dead, true);
-	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-	ReleaseBuffer(buffer);
+	result = heap_hot_search_buffer(&ret_tid, relation, buffer, snapshot,
+									heapTuple, all_dead, true, recheck);
+
+	/*
+	 * If we are returning a potential candidate tuple from this chain and the
+	 * caller has requested the "recheck" hint, keep the buffer locked and
+	 * pinned. The caller must release the lock and pin on the buffer in all
+	 * such cases.
+	 */
+	if (!result || !recheck || !(*recheck))
+	{
+		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+		ReleaseBuffer(buffer);
+	}
+
+	/*
+	 * Set the caller-supplied tid to the actual location of the tuple being
+	 * returned.
+	 */
+	if (result)
+	{
+		*tid = ret_tid;
+		if (cbuffer)
+			*cbuffer = buffer;
+	}
+
 	return result;
 }
 
@@ -2792,7 +3036,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		{
 			XLogRecPtr	recptr;
 			xl_heap_multi_insert *xlrec;
-			uint8		info = XLOG_HEAP2_MULTI_INSERT;
+			uint8		info = XLOG_HEAP_MULTI_INSERT;
 			char	   *tupledata;
 			int			totaldatalen;
 			char	   *scratchptr = scratch;
@@ -2889,7 +3133,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			/* filtering by origin on a row level is much more efficient */
 			XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN);
 
-			recptr = XLogInsert(RM_HEAP2_ID, info);
+			recptr = XLogInsert(RM_HEAP_ID, info);
 
 			PageSetLSN(page, recptr);
 		}
@@ -3313,7 +3557,9 @@ l1:
 	}
 
 	/* store transaction information of xact deleting the tuple */
-	tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	tp.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(tp.t_data))
+		tp.t_data->t_infomask &= ~HEAP_MOVED;
 	tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 	tp.t_data->t_infomask |= new_infomask;
 	tp.t_data->t_infomask2 |= new_infomask2;
@@ -3508,15 +3754,18 @@ simple_heap_delete(Relation relation, ItemPointer tid)
 HTSU_Result
 heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode)
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update)
 {
 	HTSU_Result result;
 	TransactionId xid = GetCurrentTransactionId();
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *exprindx_attrs;
 	Bitmapset  *interesting_attrs;
 	Bitmapset  *modified_attrs;
+	Bitmapset  *notready_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3537,6 +3786,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		have_tuple_lock = false;
 	bool		iscombo;
 	bool		use_hot_update = false;
+	bool		use_warm_update = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
 	bool		all_visible_cleared_new = false;
@@ -3561,6 +3811,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot update tuples during a parallel operation")));
 
+	/* Assume no-warm update */
+	if (warm_update)
+		*warm_update = false;
+
 	/*
 	 * Fetch the list of attributes to be checked for various operations.
 	 *
@@ -3582,10 +3836,17 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	exprindx_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_EXPR_PREDICATE);
+	notready_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_NOTREADY);
+
+
 	interesting_attrs = bms_add_members(NULL, hot_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
-
+	interesting_attrs = bms_add_members(interesting_attrs, exprindx_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, notready_attrs);
 
 	block = ItemPointerGetBlockNumber(otid);
 	offnum = ItemPointerGetOffsetNumber(otid);
@@ -3637,6 +3898,9 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
 												  &oldtup, newtup);
 
+	if (modified_attrsp)
+		*modified_attrsp = bms_copy(modified_attrs);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3892,6 +4156,7 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(exprindx_attrs);
 		bms_free(modified_attrs);
 		bms_free(interesting_attrs);
 		return result;
@@ -4057,7 +4322,9 @@ l2:
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
-		oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		oldtup.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(oldtup.t_data))
+			oldtup.t_data->t_infomask &= ~HEAP_MOVED;
 		oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		HeapTupleClearHotUpdated(&oldtup);
 		/* ... and store info about transaction updating this tuple */
@@ -4210,6 +4477,24 @@ l2:
 		 */
 		if (!bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
+		else
+		{
+			/*
+			 * If no WARM updates yet on this chain, let this update be a WARM
+			 * update.
+			 *
+			 * We check for both warm and warm updated tuples since if the
+			 * previous WARM update aborted, we may still have added
+			 * another index entry for this HOT chain. In such situations, we
+			 * must not attempt a WARM update.
+			 */
+			if (relation->rd_supportswarm &&
+				!bms_overlap(modified_attrs, exprindx_attrs) &&
+				!bms_is_subset(hot_attrs, modified_attrs) &&
+				!bms_overlap(notready_attrs, modified_attrs) &&
+				!HeapTupleIsWarmUpdated(&oldtup))
+				use_warm_update = true;
+		}
 	}
 	else
 	{
@@ -4256,6 +4541,32 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+
+		/*
+		 * Even if we are doing a HOT update, we must carry forward the WARM
+		 * flag because we may have already inserted another index entry
+		 * pointing to our root and a third entry may create duplicates.
+		 *
+		 * Note: If we ever have a mechanism to avoid duplicate <key, TID> in
+		 * indexes, we could look at relaxing this restriction and allow even
+		 * more WARM updates.
+		 */
+		if (HeapTupleIsWarmUpdated(&oldtup))
+		{
+			HeapTupleSetWarmUpdated(heaptup);
+			HeapTupleSetWarmUpdated(newtup);
+		}
+
+		/*
+		 * If the old tuple is a WARM tuple then mark the new tuple as a WARM
+		 * tuple as well.
+		 */
+		if (HeapTupleIsWarm(&oldtup))
+		{
+			HeapTupleSetWarm(heaptup);
+			HeapTupleSetWarm(newtup);
+		}
+
 		/*
 		 * For HOT (or WARM) updated tuples, we store the offset of the root
 		 * line pointer of this chain in the ip_posid field of the new tuple.
@@ -4268,12 +4579,45 @@ l2:
 		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
 			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
+	else if (use_warm_update)
+	{
+		/* Mark the old tuple as HOT-updated */
+		HeapTupleSetHotUpdated(&oldtup);
+		HeapTupleSetWarmUpdated(&oldtup);
+
+		/* And mark the new tuple as heap-only */
+		HeapTupleSetHeapOnly(heaptup);
+		/* Mark the new tuple as WARM tuple */
+		HeapTupleSetWarmUpdated(heaptup);
+		/* This update also starts the WARM chain */
+		HeapTupleSetWarm(heaptup);
+		Assert(!HeapTupleIsWarm(&oldtup));
+
+		/* Mark the caller's copy too, in case different from heaptup */
+		HeapTupleSetHeapOnly(newtup);
+		HeapTupleSetWarmUpdated(newtup);
+		HeapTupleSetWarm(newtup);
+
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
+		/* Let the caller know we did a WARM update */
+		if (warm_update)
+			*warm_update = true;
+	}
 	else
 	{
 		/* Make sure tuples are correctly marked as not-HOT */
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		HeapTupleClearWarmUpdated(heaptup);
+		HeapTupleClearWarmUpdated(newtup);
+		HeapTupleClearWarm(heaptup);
+		HeapTupleClearWarm(newtup);
 		root_offnum = InvalidOffsetNumber;
 	}
 
@@ -4292,7 +4636,9 @@ l2:
 	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
-	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	oldtup.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(oldtup.t_data))
+		oldtup.t_data->t_infomask &= ~HEAP_MOVED;
 	oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 	/* ... and store info about transaction updating this tuple */
 	Assert(TransactionIdIsValid(xmax_old_tuple));
@@ -4383,7 +4729,10 @@ l2:
 	if (have_tuple_lock)
 		UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode);
 
-	pgstat_count_heap_update(relation, use_hot_update);
+	/*
+	 * Count HOT and WARM updates separately
+	 */
+	pgstat_count_heap_update(relation, use_hot_update, use_warm_update);
 
 	/*
 	 * If heaptup is a private copy, release it.  Don't forget to copy t_self
@@ -4523,7 +4872,8 @@ HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
  * via ereport().
  */
 void
-simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
+simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup,
+		Bitmapset **modified_attrs, bool *warm_update)
 {
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
@@ -4532,7 +4882,7 @@ simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
 	result = heap_update(relation, otid, tup,
 						 GetCurrentCommandId(true), InvalidSnapshot,
 						 true /* wait for commit */ ,
-						 &hufd, &lockmode);
+						 &hufd, &lockmode, modified_attrs, warm_update);
 	switch (result)
 	{
 		case HeapTupleSelfUpdated:
@@ -6209,7 +6559,9 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	PageSetPrunable(page, RecentGlobalXmin);
 
 	/* store transaction information of xact deleting the tuple */
-	tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	tp.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(tp.t_data))
+		tp.t_data->t_infomask &= ~HEAP_MOVED;
 	tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 
 	/*
@@ -6783,7 +7135,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid,
 	 * Old-style VACUUM FULL is gone, but we have to keep this code as long as
 	 * we support having MOVED_OFF/MOVED_IN tuples in the database.
 	 */
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 
@@ -6802,7 +7154,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid,
 			 * have failed; whereas a non-dead MOVED_IN tuple must mean the
 			 * xvac transaction succeeded.
 			 */
-			if (tuple->t_infomask & HEAP_MOVED_OFF)
+			if (HeapTupleHeaderIsMovedOff(tuple))
 				frz->frzflags |= XLH_INVALID_XVAC;
 			else
 				frz->frzflags |= XLH_FREEZE_XVAC;
@@ -7272,7 +7624,7 @@ heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple)
 			return true;
 	}
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 		if (TransactionIdIsNormal(xid))
@@ -7355,7 +7707,7 @@ heap_tuple_needs_freeze(HeapTupleHeader tuple, TransactionId cutoff_xid,
 			return true;
 	}
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 		if (TransactionIdIsNormal(xid) &&
@@ -7381,7 +7733,7 @@ HeapTupleHeaderAdvanceLatestRemovedXid(HeapTupleHeader tuple,
 	TransactionId xmax = HeapTupleHeaderGetUpdateXid(tuple);
 	TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		if (TransactionIdPrecedes(*latestRemovedXid, xvac))
 			*latestRemovedXid = xvac;
@@ -7430,6 +7782,36 @@ log_heap_cleanup_info(RelFileNode rnode, TransactionId latestRemovedXid)
 }
 
 /*
+ * Perform XLogInsert for a heap-warm-clear operation.  Caller must already
+ * have modified the buffer and marked it dirty.
+ */
+XLogRecPtr
+log_heap_warmclear(Relation reln, Buffer buffer,
+			   OffsetNumber *cleared, int ncleared)
+{
+	xl_heap_warmclear	xlrec;
+	XLogRecPtr			recptr;
+
+	/* Caller should not call me on a non-WAL-logged relation */
+	Assert(RelationNeedsWAL(reln));
+
+	xlrec.ncleared = ncleared;
+
+	XLogBeginInsert();
+	XLogRegisterData((char *) &xlrec, SizeOfHeapWarmClear);
+
+	XLogRegisterBuffer(0, buffer, REGBUF_STANDARD);
+
+	if (ncleared > 0)
+		XLogRegisterBufData(0, (char *) cleared,
+							ncleared * sizeof(OffsetNumber));
+
+	recptr = XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_WARMCLEAR);
+
+	return recptr;
+}
+
+/*
  * Perform XLogInsert for a heap-clean operation.  Caller must already
  * have modified the buffer and marked it dirty.
  *
@@ -7584,6 +7966,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	bool		need_tuple_data = RelationIsLogicallyLogged(reln);
 	bool		init;
 	int			bufflags;
+	bool		warm_update = false;
 
 	/* Caller should not call me on a non-WAL-logged relation */
 	Assert(RelationNeedsWAL(reln));
@@ -7595,6 +7978,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	else
 		info = XLOG_HEAP_UPDATE;
 
+	if (HeapTupleIsWarmUpdated(newtup))
+		warm_update = true;
+
 	/*
 	 * If the old and new tuple are on the same page, we only need to log the
 	 * parts of the new tuple that were changed.  That saves on the amount of
@@ -7668,6 +8054,8 @@ log_heap_update(Relation reln, Buffer oldbuf,
 				xlrec.flags |= XLH_UPDATE_CONTAINS_OLD_KEY;
 		}
 	}
+	if (warm_update)
+		xlrec.flags |= XLH_UPDATE_WARM_UPDATE;
 
 	/* If new tuple is the single and first tuple on page... */
 	if (ItemPointerGetOffsetNumber(&(newtup->t_self)) == FirstOffsetNumber &&
@@ -8082,6 +8470,60 @@ heap_xlog_clean(XLogReaderState *record)
 		XLogRecordPageWithFreeSpace(rnode, blkno, freespace);
 }
 
+
+/*
+ * Handles HEAP2_WARMCLEAR record type
+ */
+static void
+heap_xlog_warmclear(XLogReaderState *record)
+{
+	XLogRecPtr	lsn = record->EndRecPtr;
+	xl_heap_warmclear	*xlrec = (xl_heap_warmclear *) XLogRecGetData(record);
+	Buffer		buffer;
+	RelFileNode rnode;
+	BlockNumber blkno;
+	XLogRedoAction action;
+
+	XLogRecGetBlockTag(record, 0, &rnode, NULL, &blkno);
+
+	/*
+	 * If we have a full-page image, restore it (using a cleanup lock) and
+	 * we're done.
+	 */
+	action = XLogReadBufferForRedoExtended(record, 0, RBM_NORMAL, true,
+										   &buffer);
+	if (action == BLK_NEEDS_REDO)
+	{
+		Page		page = (Page) BufferGetPage(buffer);
+		OffsetNumber *cleared;
+		int			ncleared;
+		Size		datalen;
+		int			i;
+
+		cleared = (OffsetNumber *) XLogRecGetBlockData(record, 0, &datalen);
+
+		ncleared = xlrec->ncleared;
+
+		for (i = 0; i < ncleared; i++)
+		{
+			ItemId			lp;
+			OffsetNumber	offnum = cleared[i];
+			HeapTupleData	heapTuple;
+
+			lp = PageGetItemId(page, offnum);
+			heapTuple.t_data = (HeapTupleHeader) PageGetItem(page, lp);
+
+			HeapTupleHeaderClearWarmUpdated(heapTuple.t_data);
+			HeapTupleHeaderClearWarm(heapTuple.t_data);
+		}
+
+		PageSetLSN(page, lsn);
+		MarkBufferDirty(buffer);
+	}
+	if (BufferIsValid(buffer))
+		UnlockReleaseBuffer(buffer);
+}
+
 /*
  * Replay XLOG_HEAP2_VISIBLE record.
  *
@@ -8328,7 +8770,9 @@ heap_xlog_delete(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		HeapTupleHeaderClearHotUpdated(htup);
 		fix_infomask_from_infobits(xlrec->infobits_set,
@@ -8349,7 +8793,7 @@ heap_xlog_delete(XLogReaderState *record)
 		if (!HeapTupleHeaderHasRootOffset(htup))
 		{
 			OffsetNumber	root_offnum;
-			root_offnum = heap_get_root_tuple(page, xlrec->offnum); 
+			root_offnum = heap_get_root_tuple(page, xlrec->offnum);
 			HeapTupleHeaderSetHeapLatest(htup, root_offnum);
 		}
 
@@ -8645,16 +9089,22 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 	Size		freespace = 0;
 	XLogRedoAction oldaction;
 	XLogRedoAction newaction;
+	bool		warm_update = false;
 
 	/* initialize to keep the compiler quiet */
 	oldtup.t_data = NULL;
 	oldtup.t_len = 0;
 
+	if (xlrec->flags & XLH_UPDATE_WARM_UPDATE)
+		warm_update = true;
+
 	XLogRecGetBlockTag(record, 0, &rnode, NULL, &newblk);
 	if (XLogRecGetBlockTag(record, 1, NULL, NULL, &oldblk))
 	{
 		/* HOT updates are never done across pages */
 		Assert(!hot_update);
+		/* WARM updates are never done across pages */
+		Assert(!warm_update);
 	}
 	else
 		oldblk = newblk;
@@ -8714,6 +9164,11 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 								   &htup->t_infomask2);
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
+
+		/* Mark the old tuple as WARM updated */
+		if (warm_update)
+			HeapTupleHeaderSetWarmUpdated(htup);
+
 		/* Set forward chain link in t_ctid */
 		HeapTupleHeaderSetNextTid(htup, &newtid);
 
@@ -8849,6 +9304,10 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
 
+		/* Mark the new tuple as WARM updated */
+		if (warm_update)
+			HeapTupleHeaderSetWarmUpdated(htup);
+
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
@@ -8976,7 +9435,9 @@ heap_xlog_lock(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		fix_infomask_from_infobits(xlrec->infobits_set, &htup->t_infomask,
 								   &htup->t_infomask2);
@@ -9055,7 +9516,9 @@ heap_xlog_lock_updated(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		fix_infomask_from_infobits(xlrec->infobits_set, &htup->t_infomask,
 								   &htup->t_infomask2);
@@ -9124,6 +9587,9 @@ heap_redo(XLogReaderState *record)
 		case XLOG_HEAP_INSERT:
 			heap_xlog_insert(record);
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			heap_xlog_multi_insert(record);
+			break;
 		case XLOG_HEAP_DELETE:
 			heap_xlog_delete(record);
 			break;
@@ -9152,7 +9618,7 @@ heap2_redo(XLogReaderState *record)
 {
 	uint8		info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
 
-	switch (info & XLOG_HEAP_OPMASK)
+	switch (info & XLOG_HEAP2_OPMASK)
 	{
 		case XLOG_HEAP2_CLEAN:
 			heap_xlog_clean(record);
@@ -9166,9 +9632,6 @@ heap2_redo(XLogReaderState *record)
 		case XLOG_HEAP2_VISIBLE:
 			heap_xlog_visible(record);
 			break;
-		case XLOG_HEAP2_MULTI_INSERT:
-			heap_xlog_multi_insert(record);
-			break;
 		case XLOG_HEAP2_LOCK_UPDATED:
 			heap_xlog_lock_updated(record);
 			break;
@@ -9182,6 +9645,9 @@ heap2_redo(XLogReaderState *record)
 		case XLOG_HEAP2_REWRITE:
 			heap_xlog_logical_rewrite(record);
 			break;
+		case XLOG_HEAP2_WARMCLEAR:
+			heap_xlog_warmclear(record);
+			break;
 		default:
 			elog(PANIC, "heap2_redo: unknown op code %u", info);
 	}
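The HeapCheckWarmChainStatus bitmask that heap_check_warm_chain() accumulates above feeds VACUUM's all-or-none conversion test. A minimal sketch of that test, with illustrative flag values (the real definitions belong in the patched heap headers):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative bit values for the status bitmask discussed above. */
typedef int HeapCheckWarmChainStatus;
#define HCWC_WARM_UPDATED_TUPLE 0x01
#define HCWC_WARM_TUPLE         0x02
#define HCWC_CLEAR_TUPLE        0x04

/*
 * A WARM chain may be converted back to a plain HOT chain only when all
 * of its live tuples are WARM, or none are.  A mix means both index
 * pointer flavours still reach live tuples, so neither may be dropped.
 */
static bool
chain_convertible(HeapCheckWarmChainStatus status)
{
    bool has_warm  = (status & HCWC_WARM_TUPLE) != 0;
    bool has_clear = (status & HCWC_CLEAR_TUPLE) != 0;

    if (!(status & HCWC_WARM_UPDATED_TUPLE))
        return false;           /* chain was never WARM updated */

    return has_warm != has_clear;   /* all-or-none, never a mix */
}
```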
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index f54337c..4e8ed79 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -834,6 +834,13 @@ heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				continue;
 
+			/*
+			 * If the tuple has a root line pointer, it must be the end of the
+			 * chain.
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			/* Set up to scan the HOT-chain */
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index 2d3ae9b..bd469ee 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -404,6 +404,14 @@ rewrite_heap_tuple(RewriteState state,
 		old_tuple->t_data->t_infomask & HEAP_XACT_MASK;
 
 	/*
+	 * We must clear the HEAP_WARM_TUPLE flag on the new tuple if the
+	 * HEAP_WARM_UPDATED flag was cleared above.
+	 */
+	if (HeapTupleHeaderIsWarmUpdated(old_tuple->t_data))
+		HeapTupleHeaderClearWarm(new_tuple->t_data);
+
+
+	/*
 	 * While we have our hands on the tuple, we may as well freeze any
 	 * eligible xmin or xmax, so that future VACUUM effort can be saved.
 	 */
@@ -428,7 +436,7 @@ rewrite_heap_tuple(RewriteState state,
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
 
-		/* 
+		/*
 		 * We've already checked that this is not the last tuple in the chain,
 		 * so fetch the next TID in the chain.
 		 */
@@ -737,7 +745,7 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		/* 
+		/*
 		 * Set t_ctid just to ensure that block number is copied correctly, but
 		 * then immediately mark the tuple as the latest.
 		 */
diff --git a/src/backend/access/heap/tuptoaster.c b/src/backend/access/heap/tuptoaster.c
index 19e7048..47b01eb 100644
--- a/src/backend/access/heap/tuptoaster.c
+++ b/src/backend/access/heap/tuptoaster.c
@@ -1620,7 +1620,8 @@ toast_save_datum(Relation rel, Datum value,
 							 toastrel,
 							 toastidxs[i]->rd_index->indisunique ?
 							 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-							 NULL);
+							 NULL,
+							 false);
 		}
 
 		/*
diff --git a/src/backend/access/index/genam.c b/src/backend/access/index/genam.c
index a91fda7..d523c8f 100644
--- a/src/backend/access/index/genam.c
+++ b/src/backend/access/index/genam.c
@@ -127,6 +127,8 @@ RelationGetIndexScan(Relation indexRelation, int nkeys, int norderbys)
 	scan->xs_cbuf = InvalidBuffer;
 	scan->xs_continue_hot = false;
 
+	scan->indexInfo = NULL;
+
 	return scan;
 }
 
diff --git a/src/backend/access/index/indexam.c b/src/backend/access/index/indexam.c
index cc5ac8b..04018fe 100644
--- a/src/backend/access/index/indexam.c
+++ b/src/backend/access/index/indexam.c
@@ -197,7 +197,8 @@ index_insert(Relation indexRelation,
 			 ItemPointer heap_t_ctid,
 			 Relation heapRelation,
 			 IndexUniqueCheck checkUnique,
-			 IndexInfo *indexInfo)
+			 IndexInfo *indexInfo,
+			 bool warm_update)
 {
 	RELATION_CHECKS;
 	CHECK_REL_PROCEDURE(aminsert);
@@ -207,6 +208,12 @@ index_insert(Relation indexRelation,
 									   (HeapTuple) NULL,
 									   InvalidBuffer);
 
+	if (warm_update)
+	{
+		Assert(indexRelation->rd_amroutine->amwarminsert != NULL);
+		return indexRelation->rd_amroutine->amwarminsert(indexRelation, values,
+				isnull, heap_t_ctid, heapRelation, checkUnique, indexInfo);
+	}
 	return indexRelation->rd_amroutine->aminsert(indexRelation, values, isnull,
 												 heap_t_ctid, heapRelation,
 												 checkUnique, indexInfo);
@@ -234,6 +241,25 @@ index_beginscan(Relation heapRelation,
 	scan->heapRelation = heapRelation;
 	scan->xs_snapshot = snapshot;
 
+	/*
+	 * If the index supports recheck, make sure that the index tuple is
+	 * saved during index scans. Also build and cache the IndexInfo which
+	 * is used by the amrecheck routine.
+	 *
+	 * XXX Ideally, we should look at all indexes on the table and check if
+	 * WARM is at all supported on the base table. If WARM is not supported
+	 * then we don't need to do any recheck. RelationGetIndexAttrBitmap() does
+	 * do that and sets rd_supportswarm after looking at all indexes. But we
+	 * don't know if the function was called earlier in the session when we're
+	 * here. We can't call it now because there exists a risk of causing
+	 * deadlock.
+	 */
+	if (indexRelation->rd_amroutine->amrecheck)
+	{
+		scan->xs_want_itup = true;
+		scan->indexInfo = BuildIndexInfo(indexRelation);
+	}
+
 	return scan;
 }
 
@@ -358,6 +384,10 @@ index_endscan(IndexScanDesc scan)
 	if (scan->xs_temp_snap)
 		UnregisterSnapshot(scan->xs_snapshot);
 
+	/* Free cached IndexInfo, if any */
+	if (scan->indexInfo)
+		pfree(scan->indexInfo);
+
 	/* Release the scan data structure itself */
 	IndexScanEnd(scan);
 }
@@ -535,7 +565,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
 	/*
 	 * The AM's amgettuple proc finds the next index entry matching the scan
 	 * keys, and puts the TID into scan->xs_ctup.t_self.  It should also set
-	 * scan->xs_recheck and possibly scan->xs_itup/scan->xs_hitup, though we
+	 * scan->xs_tuple_recheck and possibly scan->xs_itup/scan->xs_hitup, though we
 	 * pay no attention to those fields here.
 	 */
 	found = scan->indexRelation->rd_amroutine->amgettuple(scan, direction);
@@ -574,7 +604,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
  * dropped in a future index_getnext_tid, index_fetch_heap or index_endscan
  * call).
  *
- * Note: caller must check scan->xs_recheck, and perform rechecking of the
+ * Note: caller must check scan->xs_tuple_recheck, and perform rechecking of the
  * scan keys if required.  We do not do that here because we don't have
  * enough information to do it efficiently in the general case.
  * ----------------
@@ -585,6 +615,7 @@ index_fetch_heap(IndexScanDesc scan)
 	ItemPointer tid = &scan->xs_ctup.t_self;
 	bool		all_dead = false;
 	bool		got_heap_tuple;
+	bool		tuple_recheck;
 
 	/* We can skip the buffer-switching logic if we're in mid-HOT chain. */
 	if (!scan->xs_continue_hot)
@@ -603,6 +634,8 @@ index_fetch_heap(IndexScanDesc scan)
 			heap_page_prune_opt(scan->heapRelation, scan->xs_cbuf);
 	}
 
+	tuple_recheck = false;
+
 	/* Obtain share-lock on the buffer so we can examine visibility */
 	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_SHARE);
 	got_heap_tuple = heap_hot_search_buffer(tid, scan->heapRelation,
@@ -610,32 +643,60 @@ index_fetch_heap(IndexScanDesc scan)
 											scan->xs_snapshot,
 											&scan->xs_ctup,
 											&all_dead,
-											!scan->xs_continue_hot);
-	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+											!scan->xs_continue_hot,
+											&tuple_recheck);
 
 	if (got_heap_tuple)
 	{
+		bool res = true;
+
+		/*
+		 * OK, we got a tuple which satisfies the snapshot, but if it's part
+		 * of a WARM chain, we must do additional checks to ensure that we
+		 * are indeed returning a correct tuple. Note that if the index AM
+		 * does not implement the amrecheck method, then we skip any
+		 * additional checks, since WARM must be disabled on such tables.
+		 */
+		if (tuple_recheck && scan->xs_itup &&
+			scan->indexRelation->rd_amroutine->amrecheck)
+		{
+			res = scan->indexRelation->rd_amroutine->amrecheck(
+						scan->indexRelation,
+						scan->indexInfo,
+						scan->xs_itup,
+						scan->heapRelation,
+						&scan->xs_ctup);
+		}
+
+		LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+
 		/*
 		 * Only in a non-MVCC snapshot can more than one member of the HOT
 		 * chain be visible.
 		 */
 		scan->xs_continue_hot = !IsMVCCSnapshot(scan->xs_snapshot);
 		pgstat_count_heap_fetch(scan->indexRelation);
-		return &scan->xs_ctup;
+
+		if (res)
+			return &scan->xs_ctup;
 	}
+	else
+	{
+		LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
 
-	/* We've reached the end of the HOT chain. */
-	scan->xs_continue_hot = false;
+		/* We've reached the end of the HOT chain. */
+		scan->xs_continue_hot = false;
 
-	/*
-	 * If we scanned a whole HOT chain and found only dead tuples, tell index
-	 * AM to kill its entry for that TID (this will take effect in the next
-	 * amgettuple call, in index_getnext_tid).  We do not do this when in
-	 * recovery because it may violate MVCC to do so.  See comments in
-	 * RelationGetIndexScan().
-	 */
-	if (!scan->xactStartedInRecovery)
-		scan->kill_prior_tuple = all_dead;
+		/*
+		 * If we scanned a whole HOT chain and found only dead tuples, tell index
+		 * AM to kill its entry for that TID (this will take effect in the next
+		 * amgettuple call, in index_getnext_tid).  We do not do this when in
+		 * recovery because it may violate MVCC to do so.  See comments in
+		 * RelationGetIndexScan().
+		 */
+		if (!scan->xactStartedInRecovery)
+			scan->kill_prior_tuple = all_dead;
+	}
 
 	return NULL;
 }
diff --git a/src/backend/access/nbtree/nbtinsert.c b/src/backend/access/nbtree/nbtinsert.c
index 6dca810..328184b 100644
--- a/src/backend/access/nbtree/nbtinsert.c
+++ b/src/backend/access/nbtree/nbtinsert.c
@@ -20,12 +20,12 @@
 #include "access/nbtxlog.h"
 #include "access/transam.h"
 #include "access/xloginsert.h"
+#include "catalog/index.h"
 #include "miscadmin.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
 #include "utils/tqual.h"
 
-
 typedef struct
 {
 	/* context data for _bt_checksplitloc */
@@ -250,6 +250,10 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 	BTPageOpaque opaque;
 	Buffer		nbuf = InvalidBuffer;
 	bool		found = false;
+	Buffer		buffer;
+	HeapTupleData	heapTuple;
+	bool		recheck = false;
+	IndexInfo	*indexInfo = BuildIndexInfo(rel);
 
 	/* Assume unique until we find a duplicate */
 	*is_unique = true;
@@ -309,6 +313,8 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				curitup = (IndexTuple) PageGetItem(page, curitemid);
 				htid = curitup->t_tid;
 
+				recheck = false;
+
 				/*
 				 * If we are doing a recheck, we expect to find the tuple we
 				 * are rechecking.  It's not a duplicate, but we have to keep
@@ -326,112 +332,153 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				 * have just a single index entry for the entire chain.
 				 */
 				else if (heap_hot_search(&htid, heapRel, &SnapshotDirty,
-										 &all_dead))
+							&all_dead, &recheck, &buffer,
+							&heapTuple))
 				{
 					TransactionId xwait;
+					bool result = true;
 
 					/*
-					 * It is a duplicate. If we are only doing a partial
-					 * check, then don't bother checking if the tuple is being
-					 * updated in another transaction. Just return the fact
-					 * that it is a potential conflict and leave the full
-					 * check till later.
+					 * If the tuple was WARM updated, we may again see our own
+					 * tuple. Since WARM updates don't create new index
+					 * entries, our own tuple is only reachable via the old
+					 * index pointer.
 					 */
-					if (checkUnique == UNIQUE_CHECK_PARTIAL)
+					if (checkUnique == UNIQUE_CHECK_EXISTING &&
+							ItemPointerCompare(&htid, &itup->t_tid) == 0)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						*is_unique = false;
-						return InvalidTransactionId;
+						found = true;
+						result = false;
+						if (recheck)
+							UnlockReleaseBuffer(buffer);
 					}
-
-					/*
-					 * If this tuple is being updated by other transaction
-					 * then we have to wait for its commit/abort.
-					 */
-					xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
-						SnapshotDirty.xmin : SnapshotDirty.xmax;
-
-					if (TransactionIdIsValid(xwait))
+					else if (recheck)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						/* Tell _bt_doinsert to wait... */
-						*speculativeToken = SnapshotDirty.speculativeToken;
-						return xwait;
+						result = btrecheck(rel, indexInfo, curitup, heapRel, &heapTuple);
+						UnlockReleaseBuffer(buffer);
 					}
 
-					/*
-					 * Otherwise we have a definite conflict.  But before
-					 * complaining, look to see if the tuple we want to insert
-					 * is itself now committed dead --- if so, don't complain.
-					 * This is a waste of time in normal scenarios but we must
-					 * do it to support CREATE INDEX CONCURRENTLY.
-					 *
-					 * We must follow HOT-chains here because during
-					 * concurrent index build, we insert the root TID though
-					 * the actual tuple may be somewhere in the HOT-chain.
-					 * While following the chain we might not stop at the
-					 * exact tuple which triggered the insert, but that's OK
-					 * because if we find a live tuple anywhere in this chain,
-					 * we have a unique key conflict.  The other live tuple is
-					 * not part of this chain because it had a different index
-					 * entry.
-					 */
-					htid = itup->t_tid;
-					if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL))
-					{
-						/* Normal case --- it's still live */
-					}
-					else
+					if (result)
 					{
 						/*
-						 * It's been deleted, so no error, and no need to
-						 * continue searching
+						 * It is a duplicate. If we are only doing a partial
+						 * check, then don't bother checking if the tuple is being
+						 * updated in another transaction. Just return the fact
+						 * that it is a potential conflict and leave the full
+						 * check till later.
 						 */
-						break;
-					}
+						if (checkUnique == UNIQUE_CHECK_PARTIAL)
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							*is_unique = false;
+							return InvalidTransactionId;
+						}
 
-					/*
-					 * Check for a conflict-in as we would if we were going to
-					 * write to this page.  We aren't actually going to write,
-					 * but we want a chance to report SSI conflicts that would
-					 * otherwise be masked by this unique constraint
-					 * violation.
-					 */
-					CheckForSerializableConflictIn(rel, NULL, buf);
+						/*
+						 * If this tuple is being updated by other transaction
+						 * then we have to wait for its commit/abort.
+						 */
+						xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
+							SnapshotDirty.xmin : SnapshotDirty.xmax;
+
+						if (TransactionIdIsValid(xwait))
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							/* Tell _bt_doinsert to wait... */
+							*speculativeToken = SnapshotDirty.speculativeToken;
+							return xwait;
+						}
 
-					/*
-					 * This is a definite conflict.  Break the tuple down into
-					 * datums and report the error.  But first, make sure we
-					 * release the buffer locks we're holding ---
-					 * BuildIndexValueDescription could make catalog accesses,
-					 * which in the worst case might touch this same index and
-					 * cause deadlocks.
-					 */
-					if (nbuf != InvalidBuffer)
-						_bt_relbuf(rel, nbuf);
-					_bt_relbuf(rel, buf);
+						/*
+						 * Otherwise we have a definite conflict.  But before
+						 * complaining, look to see if the tuple we want to insert
+						 * is itself now committed dead --- if so, don't complain.
+						 * This is a waste of time in normal scenarios but we must
+						 * do it to support CREATE INDEX CONCURRENTLY.
+						 *
+						 * We must follow HOT-chains here because during
+						 * concurrent index build, we insert the root TID though
+						 * the actual tuple may be somewhere in the HOT-chain.
+						 * While following the chain we might not stop at the
+						 * exact tuple which triggered the insert, but that's OK
+						 * because if we find a live tuple anywhere in this chain,
+						 * we have a unique key conflict.  The other live tuple is
+						 * not part of this chain because it had a different index
+						 * entry.
+						 */
+						recheck = false;
+						ItemPointerCopy(&itup->t_tid, &htid);
+						if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL,
+									&recheck, &buffer, &heapTuple))
+						{
+							bool result = true;
+							if (recheck)
+							{
+								/*
+								 * Recheck if the tuple actually satisfies the
+								 * index key. Otherwise, we might be following
+								 * a wrong index pointer and mustn't entertain
+								 * this tuple.
+								 */
+								result = btrecheck(rel, indexInfo, itup, heapRel, &heapTuple);
+								UnlockReleaseBuffer(buffer);
+							}
+							if (!result)
+								break;
+							/* Normal case --- it's still live */
+						}
+						else
+						{
+							/*
+							 * It's been deleted, so no error, and no need to
+							 * continue searching.
+							 */
+							break;
+						}
 
-					{
-						Datum		values[INDEX_MAX_KEYS];
-						bool		isnull[INDEX_MAX_KEYS];
-						char	   *key_desc;
-
-						index_deform_tuple(itup, RelationGetDescr(rel),
-										   values, isnull);
-
-						key_desc = BuildIndexValueDescription(rel, values,
-															  isnull);
-
-						ereport(ERROR,
-								(errcode(ERRCODE_UNIQUE_VIOLATION),
-								 errmsg("duplicate key value violates unique constraint \"%s\"",
-										RelationGetRelationName(rel)),
-							   key_desc ? errdetail("Key %s already exists.",
-													key_desc) : 0,
-								 errtableconstraint(heapRel,
-											 RelationGetRelationName(rel))));
+						/*
+						 * Check for a conflict-in as we would if we were going to
+						 * write to this page.  We aren't actually going to write,
+						 * but we want a chance to report SSI conflicts that would
+						 * otherwise be masked by this unique constraint
+						 * violation.
+						 */
+						CheckForSerializableConflictIn(rel, NULL, buf);
+
+						/*
+						 * This is a definite conflict.  Break the tuple down into
+						 * datums and report the error.  But first, make sure we
+						 * release the buffer locks we're holding ---
+						 * BuildIndexValueDescription could make catalog accesses,
+						 * which in the worst case might touch this same index and
+						 * cause deadlocks.
+						 */
+						if (nbuf != InvalidBuffer)
+							_bt_relbuf(rel, nbuf);
+						_bt_relbuf(rel, buf);
+
+						{
+							Datum		values[INDEX_MAX_KEYS];
+							bool		isnull[INDEX_MAX_KEYS];
+							char	   *key_desc;
+
+							index_deform_tuple(itup, RelationGetDescr(rel),
+									values, isnull);
+
+							key_desc = BuildIndexValueDescription(rel, values,
+									isnull);
+
+							ereport(ERROR,
+									(errcode(ERRCODE_UNIQUE_VIOLATION),
+									 errmsg("duplicate key value violates unique constraint \"%s\"",
+										 RelationGetRelationName(rel)),
+									 key_desc ? errdetail("Key %s already exists.",
+										 key_desc) : 0,
+									 errtableconstraint(heapRel,
+										 RelationGetRelationName(rel))));
+						}
 					}
 				}
 				else if (all_dead)
diff --git a/src/backend/access/nbtree/nbtpage.c b/src/backend/access/nbtree/nbtpage.c
index f815fd4..ce1bea0 100644
--- a/src/backend/access/nbtree/nbtpage.c
+++ b/src/backend/access/nbtree/nbtpage.c
@@ -766,29 +766,20 @@ _bt_page_recyclable(Page page)
 }
 
 /*
- * Delete item(s) from a btree page during VACUUM.
+ * Delete item(s) and clear WARM item(s) on a btree page during VACUUM.
  *
  * This must only be used for deleting leaf items.  Deleting an item on a
  * non-leaf page has to be done as part of an atomic action that includes
- * deleting the page it points to.
+ * deleting the page it points to. We don't ever clear pointers on a non-leaf
+ * page.
  *
  * This routine assumes that the caller has pinned and locked the buffer.
  * Also, the given itemnos *must* appear in increasing order in the array.
- *
- * We record VACUUMs and b-tree deletes differently in WAL. InHotStandby
- * we need to be able to pin all of the blocks in the btree in physical
- * order when replaying the effects of a VACUUM, just as we do for the
- * original VACUUM itself. lastBlockVacuumed allows us to tell whether an
- * intermediate range of blocks has had no changes at all by VACUUM,
- * and so must be scanned anyway during replay. We always write a WAL record
- * for the last block in the index, whether or not it contained any items
- * to be removed. This allows us to scan right up to end of index to
- * ensure correct locking.
  */
 void
-_bt_delitems_vacuum(Relation rel, Buffer buf,
-					OffsetNumber *itemnos, int nitems,
-					BlockNumber lastBlockVacuumed)
+_bt_handleitems_vacuum(Relation rel, Buffer buf,
+					OffsetNumber *delitemnos, int ndelitems,
+					OffsetNumber *clearitemnos, int nclearitems)
 {
 	Page		page = BufferGetPage(buf);
 	BTPageOpaque opaque;
@@ -796,9 +787,20 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 	/* No ereport(ERROR) until changes are logged */
 	START_CRIT_SECTION();
 
+	/*
+	 * Clear the WARM pointers.
+	 *
+	 * We must do this before dealing with the dead items because
+	 * PageIndexMultiDelete may move items around to compactify the array and
+	 * hence offnums recorded earlier won't make any sense after
+	 * PageIndexMultiDelete is called.
+	 */
+	if (nclearitems > 0)
+		_bt_clear_items(page, clearitemnos, nclearitems);
+
 	/* Fix the page */
-	if (nitems > 0)
-		PageIndexMultiDelete(page, itemnos, nitems);
+	if (ndelitems > 0)
+		PageIndexMultiDelete(page, delitemnos, ndelitems);
 
 	/*
 	 * We can clear the vacuum cycle ID since this page has certainly been
@@ -824,7 +826,8 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 		XLogRecPtr	recptr;
 		xl_btree_vacuum xlrec_vacuum;
 
-		xlrec_vacuum.lastBlockVacuumed = lastBlockVacuumed;
+		xlrec_vacuum.ndelitems = ndelitems;
+		xlrec_vacuum.nclearitems = nclearitems;
 
 		XLogBeginInsert();
 		XLogRegisterBuffer(0, buf, REGBUF_STANDARD);
@@ -835,8 +838,11 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 		 * is.  When XLogInsert stores the whole buffer, the offsets array
 		 * need not be stored too.
 		 */
-		if (nitems > 0)
-			XLogRegisterBufData(0, (char *) itemnos, nitems * sizeof(OffsetNumber));
+		if (ndelitems > 0)
+			XLogRegisterBufData(0, (char *) delitemnos, ndelitems * sizeof(OffsetNumber));
+
+		if (nclearitems > 0)
+			XLogRegisterBufData(0, (char *) clearitemnos, nclearitems * sizeof(OffsetNumber));
 
 		recptr = XLogInsert(RM_BTREE_ID, XLOG_BTREE_VACUUM);
 
@@ -1882,3 +1888,18 @@ _bt_unlink_halfdead_page(Relation rel, Buffer leafbuf, bool *rightsib_empty)
 
 	return true;
 }
+
+void
+_bt_clear_items(Page page, OffsetNumber *clearitemnos, uint16 nclearitems)
+{
+	int			i;
+	ItemId		itemid;
+	IndexTuple	itup;
+
+	for (i = 0; i < nclearitems; i++)
+	{
+		itemid = PageGetItemId(page, clearitemnos[i]);
+		itup = (IndexTuple) PageGetItem(page, itemid);
+		ItemPointerClearFlags(&itup->t_tid);
+	}
+}
diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c
index 775f2ff..6d558af 100644
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -146,6 +146,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->ambuild = btbuild;
 	amroutine->ambuildempty = btbuildempty;
 	amroutine->aminsert = btinsert;
+	amroutine->amwarminsert = btwarminsert;
 	amroutine->ambulkdelete = btbulkdelete;
 	amroutine->amvacuumcleanup = btvacuumcleanup;
 	amroutine->amcanreturn = btcanreturn;
@@ -163,6 +164,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = btestimateparallelscan;
 	amroutine->aminitparallelscan = btinitparallelscan;
 	amroutine->amparallelrescan = btparallelrescan;
+	amroutine->amrecheck = btrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -315,11 +317,12 @@ btbuildempty(Relation index)
  *		Descend the tree recursively, find the appropriate location for our
  *		new tuple, and put it there.
  */
-bool
-btinsert(Relation rel, Datum *values, bool *isnull,
+static bool
+btinsert_internal(Relation rel, Datum *values, bool *isnull,
 		 ItemPointer ht_ctid, Relation heapRel,
 		 IndexUniqueCheck checkUnique,
-		 IndexInfo *indexInfo)
+		 IndexInfo *indexInfo,
+		 bool warm_update)
 {
 	bool		result;
 	IndexTuple	itup;
@@ -328,6 +331,11 @@ btinsert(Relation rel, Datum *values, bool *isnull,
 	itup = index_form_tuple(RelationGetDescr(rel), values, isnull);
 	itup->t_tid = *ht_ctid;
 
+	if (warm_update)
+		ItemPointerSetFlags(&itup->t_tid, BTREE_INDEX_WARM_POINTER);
+	else
+		ItemPointerClearFlags(&itup->t_tid);
+
 	result = _bt_doinsert(rel, itup, checkUnique, heapRel);
 
 	pfree(itup);
@@ -335,6 +343,26 @@ btinsert(Relation rel, Datum *values, bool *isnull,
 	return result;
 }
 
+bool
+btinsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 IndexInfo *indexInfo)
+{
+	return btinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, false);
+}
+
+bool
+btwarminsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 IndexInfo *indexInfo)
+{
+	return btinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, true);
+}
+
 /*
  *	btgettuple() -- Get the next tuple in the scan.
  */
@@ -1103,7 +1131,7 @@ btvacuumscan(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 								 RBM_NORMAL, info->strategy);
 		LockBufferForCleanup(buf);
 		_bt_checkpage(rel, buf);
-		_bt_delitems_vacuum(rel, buf, NULL, 0, vstate.lastBlockVacuumed);
+		_bt_handleitems_vacuum(rel, buf, NULL, 0, NULL, 0);
 		_bt_relbuf(rel, buf);
 	}
 
@@ -1201,6 +1229,8 @@ restart:
 	{
 		OffsetNumber deletable[MaxOffsetNumber];
 		int			ndeletable;
+		OffsetNumber clearwarm[MaxOffsetNumber];
+		int			nclearwarm;
 		OffsetNumber offnum,
 					minoff,
 					maxoff;
@@ -1239,7 +1269,7 @@ restart:
 		 * Scan over all items to see which ones need deleted according to the
 		 * callback function.
 		 */
-		ndeletable = 0;
+		ndeletable = nclearwarm = 0;
 		minoff = P_FIRSTDATAKEY(opaque);
 		maxoff = PageGetMaxOffsetNumber(page);
 		if (callback)
@@ -1250,6 +1280,9 @@ restart:
 			{
 				IndexTuple	itup;
 				ItemPointer htup;
+				int			flags;
+				bool		is_warm = false;
+				IndexBulkDeleteCallbackResult	result;
 
 				itup = (IndexTuple) PageGetItem(page,
 												PageGetItemId(page, offnum));
@@ -1276,16 +1309,36 @@ restart:
 				 * applies to *any* type of index that marks index tuples as
 				 * killed.
 				 */
-				if (callback(htup, callback_state))
+				flags = ItemPointerGetFlags(&itup->t_tid);
+				is_warm = ((flags & BTREE_INDEX_WARM_POINTER) != 0);
+
+				if (is_warm)
+					stats->num_warm_pointers++;
+				else
+					stats->num_clear_pointers++;
+
+				result = callback(htup, is_warm, callback_state);
+				if (result == IBDCR_DELETE)
+				{
+					if (is_warm)
+						stats->warm_pointers_removed++;
+					else
+						stats->clear_pointers_removed++;
 					deletable[ndeletable++] = offnum;
+				}
+				else if (result == IBDCR_CLEAR_WARM)
+				{
+					clearwarm[nclearwarm++] = offnum;
+				}
 			}
 		}
 
 		/*
-		 * Apply any needed deletes.  We issue just one _bt_delitems_vacuum()
-		 * call per page, so as to minimize WAL traffic.
+		 * Apply any needed deletes and clearing.  We issue just one
+		 * _bt_handleitems_vacuum() call per page, so as to minimize WAL
+		 * traffic.
 		 */
-		if (ndeletable > 0)
+		if (ndeletable > 0 || nclearwarm > 0)
 		{
 			/*
 			 * Notice that the issued XLOG_BTREE_VACUUM WAL record includes
@@ -1301,8 +1354,8 @@ restart:
 			 * doesn't seem worth the amount of bookkeeping it'd take to avoid
 			 * that.
 			 */
-			_bt_delitems_vacuum(rel, buf, deletable, ndeletable,
-								vstate->lastBlockVacuumed);
+			_bt_handleitems_vacuum(rel, buf, deletable, ndeletable,
+								clearwarm, nclearwarm);
 
 			/*
 			 * Remember highest leaf page number we've issued a
@@ -1312,6 +1365,7 @@ restart:
 				vstate->lastBlockVacuumed = blkno;
 
 			stats->tuples_removed += ndeletable;
+			stats->pointers_cleared += nclearwarm;
 			/* must recompute maxoff */
 			maxoff = PageGetMaxOffsetNumber(page);
 		}
diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c
index 5b259a3..8dab5a8 100644
--- a/src/backend/access/nbtree/nbtutils.c
+++ b/src/backend/access/nbtree/nbtutils.c
@@ -20,11 +20,13 @@
 #include "access/nbtree.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
 #include "miscadmin.h"
 #include "utils/array.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 
 typedef struct BTSortArrayContext
@@ -2069,3 +2071,64 @@ btproperty(Oid index_oid, int attno,
 			return false;		/* punt to generic code */
 	}
 }
+
+/*
+ * Check if the index tuple's key matches the one computed from the given heap
+ * tuple's attribute
+ */
+bool
+btrecheck(Relation indexRel, IndexInfo *indexInfo, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	bool		isavail[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	FormIndexPlainDatum(indexInfo, heapRel, heapTuple, values, isnull, isavail);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		/* No need to compare if the attribute value is not available */
+		if (!isavail[i - 1])
+			continue;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL, then they are equal
+		 */
+		if (isnull[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If just one is NULL, then they are not equal
+		 */
+		if (isnull[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now just do a raw memory comparison. If the index tuple was formed
+		 * using this heap tuple, the computed index values must match
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	return equal;
+}
diff --git a/src/backend/access/nbtree/nbtxlog.c b/src/backend/access/nbtree/nbtxlog.c
index ac60db0..92be5c8 100644
--- a/src/backend/access/nbtree/nbtxlog.c
+++ b/src/backend/access/nbtree/nbtxlog.c
@@ -390,83 +390,9 @@ btree_xlog_vacuum(XLogReaderState *record)
 	Buffer		buffer;
 	Page		page;
 	BTPageOpaque opaque;
-#ifdef UNUSED
 	xl_btree_vacuum *xlrec = (xl_btree_vacuum *) XLogRecGetData(record);
 
 	/*
-	 * This section of code is thought to be no longer needed, after analysis
-	 * of the calling paths. It is retained to allow the code to be reinstated
-	 * if a flaw is revealed in that thinking.
-	 *
-	 * If we are running non-MVCC scans using this index we need to do some
-	 * additional work to ensure correctness, which is known as a "pin scan"
-	 * described in more detail in next paragraphs. We used to do the extra
-	 * work in all cases, whereas we now avoid that work in most cases. If
-	 * lastBlockVacuumed is set to InvalidBlockNumber then we skip the
-	 * additional work required for the pin scan.
-	 *
-	 * Avoiding this extra work is important since it requires us to touch
-	 * every page in the index, so is an O(N) operation. Worse, it is an
-	 * operation performed in the foreground during redo, so it delays
-	 * replication directly.
-	 *
-	 * If queries might be active then we need to ensure every leaf page is
-	 * unpinned between the lastBlockVacuumed and the current block, if there
-	 * are any.  This prevents replay of the VACUUM from reaching the stage of
-	 * removing heap tuples while there could still be indexscans "in flight"
-	 * to those particular tuples for those scans which could be confused by
-	 * finding new tuples at the old TID locations (see nbtree/README).
-	 *
-	 * It might be worth checking if there are actually any backends running;
-	 * if not, we could just skip this.
-	 *
-	 * Since VACUUM can visit leaf pages out-of-order, it might issue records
-	 * with lastBlockVacuumed >= block; that's not an error, it just means
-	 * nothing to do now.
-	 *
-	 * Note: since we touch all pages in the range, we will lock non-leaf
-	 * pages, and also any empty (all-zero) pages that may be in the index. It
-	 * doesn't seem worth the complexity to avoid that.  But it's important
-	 * that HotStandbyActiveInReplay() will not return true if the database
-	 * isn't yet consistent; so we need not fear reading still-corrupt blocks
-	 * here during crash recovery.
-	 */
-	if (HotStandbyActiveInReplay() && BlockNumberIsValid(xlrec->lastBlockVacuumed))
-	{
-		RelFileNode thisrnode;
-		BlockNumber thisblkno;
-		BlockNumber blkno;
-
-		XLogRecGetBlockTag(record, 0, &thisrnode, NULL, &thisblkno);
-
-		for (blkno = xlrec->lastBlockVacuumed + 1; blkno < thisblkno; blkno++)
-		{
-			/*
-			 * We use RBM_NORMAL_NO_LOG mode because it's not an error
-			 * condition to see all-zero pages.  The original btvacuumpage
-			 * scan would have skipped over all-zero pages, noting them in FSM
-			 * but not bothering to initialize them just yet; so we mustn't
-			 * throw an error here.  (We could skip acquiring the cleanup lock
-			 * if PageIsNew, but it's probably not worth the cycles to test.)
-			 *
-			 * XXX we don't actually need to read the block, we just need to
-			 * confirm it is unpinned. If we had a special call into the
-			 * buffer manager we could optimise this so that if the block is
-			 * not in shared_buffers we confirm it as unpinned. Optimizing
-			 * this is now moot, since in most cases we avoid the scan.
-			 */
-			buffer = XLogReadBufferExtended(thisrnode, MAIN_FORKNUM, blkno,
-											RBM_NORMAL_NO_LOG);
-			if (BufferIsValid(buffer))
-			{
-				LockBufferForCleanup(buffer);
-				UnlockReleaseBuffer(buffer);
-			}
-		}
-	}
-#endif
-
-	/*
 	 * Like in btvacuumpage(), we need to take a cleanup lock on every leaf
 	 * page. See nbtree/README for details.
 	 */
@@ -482,19 +408,30 @@ btree_xlog_vacuum(XLogReaderState *record)
 
 		if (len > 0)
 		{
-			OffsetNumber *unused;
-			OffsetNumber *unend;
+			OffsetNumber *offnums = (OffsetNumber *) ptr;
 
-			unused = (OffsetNumber *) ptr;
-			unend = (OffsetNumber *) ((char *) ptr + len);
+			/*
+			 * Clear the WARM pointers.
+			 *
+			 * We must do this before dealing with the dead items because
+			 * PageIndexMultiDelete may move items around to compactify the
+			 * array and hence offnums recorded earlier won't make any sense
+			 * after PageIndexMultiDelete is called.
+			 */
+			if (xlrec->nclearitems > 0)
+				_bt_clear_items(page, offnums + xlrec->ndelitems,
+						xlrec->nclearitems);
 
-			if ((unend - unused) > 0)
-				PageIndexMultiDelete(page, unused, unend - unused);
+			/*
+			 * And handle the deleted items too
+			 */
+			if (xlrec->ndelitems > 0)
+				PageIndexMultiDelete(page, offnums, xlrec->ndelitems);
 		}
 
 		/*
 		 * Mark the page as not containing any LP_DEAD items --- see comments
-		 * in _bt_delitems_vacuum().
+		 * in _bt_handleitems_vacuum().
 		 */
 		opaque = (BTPageOpaque) PageGetSpecialPointer(page);
 		opaque->btpo_flags &= ~BTP_HAS_GARBAGE;
diff --git a/src/backend/access/rmgrdesc/heapdesc.c b/src/backend/access/rmgrdesc/heapdesc.c
index 44d2d63..d373e61 100644
--- a/src/backend/access/rmgrdesc/heapdesc.c
+++ b/src/backend/access/rmgrdesc/heapdesc.c
@@ -44,6 +44,12 @@ heap_desc(StringInfo buf, XLogReaderState *record)
 
 		appendStringInfo(buf, "off %u", xlrec->offnum);
 	}
+	else if (info == XLOG_HEAP_MULTI_INSERT)
+	{
+		xl_heap_multi_insert *xlrec = (xl_heap_multi_insert *) rec;
+
+		appendStringInfo(buf, "%d tuples", xlrec->ntuples);
+	}
 	else if (info == XLOG_HEAP_DELETE)
 	{
 		xl_heap_delete *xlrec = (xl_heap_delete *) rec;
@@ -102,7 +108,7 @@ heap2_desc(StringInfo buf, XLogReaderState *record)
 	char	   *rec = XLogRecGetData(record);
 	uint8		info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
 
-	info &= XLOG_HEAP_OPMASK;
+	info &= XLOG_HEAP2_OPMASK;
 	if (info == XLOG_HEAP2_CLEAN)
 	{
 		xl_heap_clean *xlrec = (xl_heap_clean *) rec;
@@ -129,12 +135,6 @@ heap2_desc(StringInfo buf, XLogReaderState *record)
 		appendStringInfo(buf, "cutoff xid %u flags %d",
 						 xlrec->cutoff_xid, xlrec->flags);
 	}
-	else if (info == XLOG_HEAP2_MULTI_INSERT)
-	{
-		xl_heap_multi_insert *xlrec = (xl_heap_multi_insert *) rec;
-
-		appendStringInfo(buf, "%d tuples", xlrec->ntuples);
-	}
 	else if (info == XLOG_HEAP2_LOCK_UPDATED)
 	{
 		xl_heap_lock_updated *xlrec = (xl_heap_lock_updated *) rec;
@@ -171,6 +171,12 @@ heap_identify(uint8 info)
 		case XLOG_HEAP_INSERT | XLOG_HEAP_INIT_PAGE:
 			id = "INSERT+INIT";
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			id = "MULTI_INSERT";
+			break;
+		case XLOG_HEAP_MULTI_INSERT | XLOG_HEAP_INIT_PAGE:
+			id = "MULTI_INSERT+INIT";
+			break;
 		case XLOG_HEAP_DELETE:
 			id = "DELETE";
 			break;
@@ -219,12 +225,6 @@ heap2_identify(uint8 info)
 		case XLOG_HEAP2_VISIBLE:
 			id = "VISIBLE";
 			break;
-		case XLOG_HEAP2_MULTI_INSERT:
-			id = "MULTI_INSERT";
-			break;
-		case XLOG_HEAP2_MULTI_INSERT | XLOG_HEAP_INIT_PAGE:
-			id = "MULTI_INSERT+INIT";
-			break;
 		case XLOG_HEAP2_LOCK_UPDATED:
 			id = "LOCK_UPDATED";
 			break;
diff --git a/src/backend/access/rmgrdesc/nbtdesc.c b/src/backend/access/rmgrdesc/nbtdesc.c
index fbde9d6..6b2c5d6 100644
--- a/src/backend/access/rmgrdesc/nbtdesc.c
+++ b/src/backend/access/rmgrdesc/nbtdesc.c
@@ -48,8 +48,8 @@ btree_desc(StringInfo buf, XLogReaderState *record)
 			{
 				xl_btree_vacuum *xlrec = (xl_btree_vacuum *) rec;
 
-				appendStringInfo(buf, "lastBlockVacuumed %u",
-								 xlrec->lastBlockVacuumed);
+				appendStringInfo(buf, "ndelitems %u, nclearitems %u",
+								 xlrec->ndelitems, xlrec->nclearitems);
 				break;
 			}
 		case XLOG_BTREE_DELETE:
diff --git a/src/backend/access/spgist/spgutils.c b/src/backend/access/spgist/spgutils.c
index e57ac49..59ef7f3 100644
--- a/src/backend/access/spgist/spgutils.c
+++ b/src/backend/access/spgist/spgutils.c
@@ -72,6 +72,7 @@ spghandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/spgist/spgvacuum.c b/src/backend/access/spgist/spgvacuum.c
index cce9b3f..711d351 100644
--- a/src/backend/access/spgist/spgvacuum.c
+++ b/src/backend/access/spgist/spgvacuum.c
@@ -155,7 +155,8 @@ vacuumLeafPage(spgBulkDeleteState *bds, Relation index, Buffer buffer,
 		{
 			Assert(ItemPointerIsValid(&lt->heapPtr));
 
-			if (bds->callback(&lt->heapPtr, bds->callback_state))
+			if (bds->callback(&lt->heapPtr, false, bds->callback_state) ==
+					IBDCR_DELETE)
 			{
 				bds->stats->tuples_removed += 1;
 				deletable[i] = true;
@@ -425,7 +426,8 @@ vacuumLeafRoot(spgBulkDeleteState *bds, Relation index, Buffer buffer)
 		{
 			Assert(ItemPointerIsValid(&lt->heapPtr));
 
-			if (bds->callback(&lt->heapPtr, bds->callback_state))
+			if (bds->callback(&lt->heapPtr, false, bds->callback_state) ==
+					IBDCR_DELETE)
 			{
 				bds->stats->tuples_removed += 1;
 				toDelete[xlrec.nDelete] = i;
@@ -902,10 +904,10 @@ spgbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 }
 
 /* Dummy callback to delete no tuples during spgvacuumcleanup */
-static bool
-dummy_callback(ItemPointer itemptr, void *state)
+static IndexBulkDeleteCallbackResult
+dummy_callback(ItemPointer itemptr, bool is_warm, void *state)
 {
-	return false;
+	return IBDCR_KEEP;
 }
 
 /*
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 8d42a34..67e68d1 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -54,6 +54,7 @@
 #include "nodes/makefuncs.h"
 #include "nodes/nodeFuncs.h"
 #include "optimizer/clauses.h"
+#include "optimizer/var.h"
 #include "parser/parser.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
@@ -114,7 +115,7 @@ static void IndexCheckExclusion(Relation heapRelation,
 					IndexInfo *indexInfo);
 static inline int64 itemptr_encode(ItemPointer itemptr);
 static inline void itemptr_decode(ItemPointer itemptr, int64 encoded);
-static bool validate_index_callback(ItemPointer itemptr, void *opaque);
+static IndexBulkDeleteCallbackResult validate_index_callback(ItemPointer itemptr, bool is_warm, void *opaque);
 static void validate_index_heapscan(Relation heapRelation,
 						Relation indexRelation,
 						IndexInfo *indexInfo,
@@ -1691,6 +1692,20 @@ BuildIndexInfo(Relation index)
 	ii->ii_AmCache = NULL;
 	ii->ii_Context = CurrentMemoryContext;
 
+	/* build a bitmap of all table attributes referenced by this index */
+	for (i = 0; i < ii->ii_NumIndexAttrs; i++)
+	{
+		AttrNumber attr = ii->ii_KeyAttrNumbers[i];
+		ii->ii_indxattrs = bms_add_member(ii->ii_indxattrs, attr -
+				FirstLowInvalidHeapAttributeNumber);
+	}
+
+	/* Collect all attributes used in expressions, too */
+	pull_varattnos((Node *) ii->ii_Expressions, 1, &ii->ii_indxattrs);
+
+	/* Collect all attributes in the index predicate, too */
+	pull_varattnos((Node *) ii->ii_Predicate, 1, &ii->ii_indxattrs);
+
 	return ii;
 }
 
@@ -1816,6 +1831,50 @@ FormIndexDatum(IndexInfo *indexInfo,
 		elog(ERROR, "wrong number of index expressions");
 }
 
+/*
+ * This is the same as FormIndexDatum, except that we don't compute any
+ * expression attributes and hence it can be used when executor interfaces are
+ * not available. If the i'th attribute is available then isavail[i] is set to
+ * true, else it is set to false. The caller must always check whether an
+ * attribute value is available before trying to do anything useful with it.
+ */
+void
+FormIndexPlainDatum(IndexInfo *indexInfo,
+			   Relation heapRel,
+			   HeapTuple heapTup,
+			   Datum *values,
+			   bool *isnull,
+			   bool *isavail)
+{
+	int			i;
+
+	for (i = 0; i < indexInfo->ii_NumIndexAttrs; i++)
+	{
+		int			keycol = indexInfo->ii_KeyAttrNumbers[i];
+		Datum		iDatum;
+		bool		isNull;
+
+		if (keycol != 0)
+		{
+			/*
+			 * Plain index column; get the value we need directly from the
+			 * heap tuple.
+			 */
+			iDatum = heap_getattr(heapTup, keycol, RelationGetDescr(heapRel), &isNull);
+			values[i] = iDatum;
+			isnull[i] = isNull;
+			isavail[i] = true;
+		}
+		else
+		{
+			/*
+			 * This is an expression attribute which can't be computed by us.
+			 * So just inform the caller about it.
+			 */
+			isavail[i] = false;
+		}
+	}
+}
 
 /*
  * index_update_stats --- update pg_class entry after CREATE INDEX or REINDEX
@@ -2934,15 +2993,15 @@ itemptr_decode(ItemPointer itemptr, int64 encoded)
 /*
  * validate_index_callback - bulkdelete callback to collect the index TIDs
  */
-static bool
-validate_index_callback(ItemPointer itemptr, void *opaque)
+static IndexBulkDeleteCallbackResult
+validate_index_callback(ItemPointer itemptr, bool is_warm, void *opaque)
 {
 	v_i_state  *state = (v_i_state *) opaque;
 	int64		encoded = itemptr_encode(itemptr);
 
 	tuplesort_putdatum(state->tuplesort, Int64GetDatum(encoded), false);
 	state->itups += 1;
-	return false;				/* never actually delete anything */
+	return IBDCR_KEEP;				/* never actually delete anything */
 }
 
 /*
@@ -3163,7 +3222,8 @@ validate_index_heapscan(Relation heapRelation,
 						 heapRelation,
 						 indexInfo->ii_Unique ?
 						 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-						 indexInfo);
+						 indexInfo,
+						 false);
 
 			state->tups_inserted += 1;
 		}
diff --git a/src/backend/catalog/indexing.c b/src/backend/catalog/indexing.c
index abc344a..6392f33 100644
--- a/src/backend/catalog/indexing.c
+++ b/src/backend/catalog/indexing.c
@@ -66,10 +66,15 @@ CatalogCloseIndexes(CatalogIndexState indstate)
  *
  * This should be called for each inserted or updated catalog tuple.
  *
+ * If the tuple was WARM updated, modified_attrs contains the set of columns
+ * changed by the update. We must not insert new index entries for indexes
+ * which do not refer to any of the modified columns.
+ *
  * This is effectively a cut-down version of ExecInsertIndexTuples.
  */
 static void
-CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
+CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple,
+		Bitmapset *modified_attrs, bool warm_update)
 {
 	int			i;
 	int			numIndexes;
@@ -79,12 +84,28 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 	IndexInfo **indexInfoArray;
 	Datum		values[INDEX_MAX_KEYS];
 	bool		isnull[INDEX_MAX_KEYS];
+	ItemPointerData root_tid;
 
-	/* HOT update does not require index inserts */
-	if (HeapTupleIsHeapOnly(heapTuple))
+	/*
+	 * A HOT update does not require index inserts, but a WARM update may
+	 * need them for some indexes.
+	 */
+	if (HeapTupleIsHeapOnly(heapTuple) && !warm_update)
 		return;
 
 	/*
+	 * If we've done a WARM update, then we must index the TID of the root line
+	 * pointer and not the actual TID of the new tuple.
+	 */
+	if (warm_update)
+		ItemPointerSet(&root_tid,
+				ItemPointerGetBlockNumber(&(heapTuple->t_self)),
+				HeapTupleHeaderGetRootOffset(heapTuple->t_data));
+	else
+		ItemPointerCopy(&heapTuple->t_self, &root_tid);
+
+
+	/*
 	 * Get information from the state structure.  Fall out if nothing to do.
 	 */
 	numIndexes = indstate->ri_NumIndices;
@@ -112,6 +133,17 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 			continue;
 
 		/*
+		 * If we've done a WARM update, then we must not insert a new index
+		 * tuple if none of the index keys have changed. This is not just an
+		 * optimization, but a requirement for WARM to work correctly.
+		 */
+		if (warm_update)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
+		/*
 		 * Expressional and partial indexes on system catalogs are not
 		 * supported, nor exclusion constraints, nor deferred uniqueness
 		 */
@@ -136,11 +168,12 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 		index_insert(relationDescs[i],	/* index relation */
 					 values,	/* array of index Datums */
 					 isnull,	/* is-null flags */
-					 &(heapTuple->t_self),		/* tid of heap tuple */
+					 &root_tid,
 					 heapRelation,
 					 relationDescs[i]->rd_index->indisunique ?
 					 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-					 indexInfo);
+					 indexInfo,
+					 warm_update);
 	}
 
 	ExecDropSingleTupleTableSlot(slot);
@@ -168,7 +201,7 @@ CatalogTupleInsert(Relation heapRel, HeapTuple tup)
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 	CatalogCloseIndexes(indstate);
 
 	return oid;
@@ -190,7 +223,7 @@ CatalogTupleInsertWithInfo(Relation heapRel, HeapTuple tup,
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 
 	return oid;
 }
@@ -210,12 +243,14 @@ void
 CatalogTupleUpdate(Relation heapRel, ItemPointer otid, HeapTuple tup)
 {
 	CatalogIndexState indstate;
+	bool	warm_update;
+	Bitmapset	*modified_attrs;
 
 	indstate = CatalogOpenIndexes(heapRel);
 
-	simple_heap_update(heapRel, otid, tup);
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, modified_attrs, warm_update);
 	CatalogCloseIndexes(indstate);
 }
 
@@ -231,9 +266,12 @@ void
 CatalogTupleUpdateWithInfo(Relation heapRel, ItemPointer otid, HeapTuple tup,
 						   CatalogIndexState indstate)
 {
-	simple_heap_update(heapRel, otid, tup);
+	Bitmapset  *modified_attrs;
+	bool		warm_update;
+
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, modified_attrs, warm_update);
 }
 
 /*
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index b6552da..15d0fe4 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -498,6 +498,7 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_tuples_deleted(C.oid) AS n_tup_del,
             pg_stat_get_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_tuples_warm_updated(C.oid) AS n_tup_warm_upd,
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
@@ -528,7 +529,8 @@ CREATE VIEW pg_stat_xact_all_tables AS
             pg_stat_get_xact_tuples_inserted(C.oid) AS n_tup_ins,
             pg_stat_get_xact_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_xact_tuples_deleted(C.oid) AS n_tup_del,
-            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd
+            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_xact_tuples_warm_updated(C.oid) AS n_tup_warm_upd
     FROM pg_class C LEFT JOIN
          pg_index I ON C.oid = I.indrelid
          LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
diff --git a/src/backend/commands/constraint.c b/src/backend/commands/constraint.c
index e2544e5..330b661 100644
--- a/src/backend/commands/constraint.c
+++ b/src/backend/commands/constraint.c
@@ -40,6 +40,7 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	TriggerData *trigdata = castNode(TriggerData, fcinfo->context);
 	const char *funcname = "unique_key_recheck";
 	HeapTuple	new_row;
+	HeapTupleData heapTuple;
 	ItemPointerData tmptid;
 	Relation	indexRel;
 	IndexInfo  *indexInfo;
@@ -102,7 +103,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	 * removed.
 	 */
 	tmptid = new_row->t_self;
-	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL))
+	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL,
+				NULL, NULL, &heapTuple))
 	{
 		/*
 		 * All rows in the HOT chain are dead, so skip the check.
@@ -166,7 +168,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 		 */
 		index_insert(indexRel, values, isnull, &(new_row->t_self),
 					 trigdata->tg_relation, UNIQUE_CHECK_EXISTING,
-					 indexInfo);
+					 indexInfo,
+					 false);
 	}
 	else
 	{
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index ba89b29..120e261 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -2681,6 +2681,8 @@ CopyFrom(CopyState cstate)
 					if (resultRelInfo->ri_NumIndices > 0)
 						recheckIndexes = ExecInsertIndexTuples(slot,
 															&(tuple->t_self),
+															&(tuple->t_self),
+															NULL,
 															   estate,
 															   false,
 															   NULL,
@@ -2835,6 +2837,7 @@ CopyFromInsertBatch(CopyState cstate, EState *estate, CommandId mycid,
 			ExecStoreTuple(bufferedTuples[i], myslot, InvalidBuffer, false);
 			recheckIndexes =
 				ExecInsertIndexTuples(myslot, &(bufferedTuples[i]->t_self),
+									  &(bufferedTuples[i]->t_self), NULL,
 									  estate, false, NULL, NIL);
 			ExecARInsertTriggers(estate, resultRelInfo,
 								 bufferedTuples[i],
diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c
index 9618032..1b2abd4 100644
--- a/src/backend/commands/indexcmds.c
+++ b/src/backend/commands/indexcmds.c
@@ -694,7 +694,14 @@ DefineIndex(Oid relationId,
 	 * visible to other transactions before we start to build the index. That
 	 * will prevent them from making incompatible HOT updates.  The new index
 	 * will be marked not indisready and not indisvalid, so that no one else
-	 * tries to either insert into it or use it for queries.
+	 * tries to either insert into it or use it for queries. In addition to
+	 * that, WARM updates will be disallowed if an update modifies one of
+	 * the columns used by this new index. This is necessary to ensure that we
+	 * don't create WARM tuples which do not have a corresponding entry in this
+	 * index. It must be noted that during the second phase, we will index only
+	 * those heap tuples whose root line pointer is not already in the index;
+	 * hence it's important that all tuples in a given chain have the same
+	 * value for every indexed column (including those of this new index).
 	 *
 	 * We must commit our current transaction so that the index becomes
 	 * visible; then start another.  Note that all the data structures we just
@@ -742,7 +749,10 @@ DefineIndex(Oid relationId,
 	 * marked as "not-ready-for-inserts".  The index is consulted while
 	 * deciding HOT-safety though.  This arrangement ensures that no new HOT
 	 * chains can be created where the new tuple and the old tuple in the
-	 * chain have different index keys.
+	 * chain have different index keys. Also, the new index is consulted for
+	 * deciding whether a WARM update is possible, and WARM update is not done
+	 * if a column used by this index is being updated. This ensures that we
+	 * don't create WARM tuples which are not indexed by this index.
 	 *
 	 * We now take a new snapshot, and build the index using all tuples that
 	 * are visible in this snapshot.  We can be sure that any HOT updates to
@@ -777,7 +787,8 @@ DefineIndex(Oid relationId,
 	/*
 	 * Update the pg_index row to mark the index as ready for inserts. Once we
 	 * commit this transaction, any new transactions that open the table must
-	 * insert new entries into the index for insertions and non-HOT updates.
+	 * insert new entries into the index for insertions and non-HOT updates,
+	 * or WARM updates where this index needs a new entry.
 	 */
 	index_set_state_flags(indexRelationId, INDEX_CREATE_SET_READY);
 
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index b74e493..2b054f7 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -104,6 +104,39 @@
  */
 #define PREFETCH_SIZE			((BlockNumber) 32)
 
+/*
+ * Structure to track WARM chains that can be converted into HOT chains during
+ * this run.
+ *
+ * To reduce the space requirement, we're using bitfields. But the way things
+ * are laid out, we're still wasting 1 byte per candidate chain.
+ */
+typedef struct LVWarmChain
+{
+	ItemPointerData	chain_tid;			/* root of the chain */
+
+	/*
+	 * 1 - if the chain contains only post-warm tuples
+	 * 0 - if the chain contains only pre-warm tuples
+	 */
+	uint8			is_postwarm_chain:2;
+
+	/* 1 - if this chain can't be cleared of WARM tuples */
+	uint8			keep_warm_chain:2;
+
+	/*
+	 * Number of CLEAR pointers to this root TID found so far - must never be
+	 * more than 2.
+	 */
+	uint8			num_clear_pointers:2;
+
+	/*
+	 * Number of WARM pointers to this root TID found so far - must never be
+	 * more than 1.
+	 */
+	uint8			num_warm_pointers:2;
+} LVWarmChain;
+
 typedef struct LVRelStats
 {
 	/* hasindex = true means two-pass strategy; false means one-pass */
@@ -122,6 +155,14 @@ typedef struct LVRelStats
 	BlockNumber pages_removed;
 	double		tuples_deleted;
 	BlockNumber nonempty_pages; /* actually, last nonempty page + 1 */
+
+	/* List of candidate WARM chains that can be converted into HOT chains */
+	/* NB: this list is ordered by TID of the root pointers */
+	int				num_warm_chains;	/* current # of entries */
+	int				max_warm_chains;	/* # slots allocated in array */
+	LVWarmChain 	*warm_chains;		/* array of LVWarmChain */
+	double			num_non_convertible_warm_chains;
+
 	/* List of TIDs of tuples we intend to delete */
 	/* NB: this list is ordered by TID address */
 	int			num_dead_tuples;	/* current # of entries */
@@ -150,6 +191,7 @@ static void lazy_scan_heap(Relation onerel, int options,
 static void lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats);
 static bool lazy_check_needs_freeze(Buffer buf, bool *hastup);
 static void lazy_vacuum_index(Relation indrel,
+				  bool clear_warm,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats);
 static void lazy_cleanup_index(Relation indrel,
@@ -157,6 +199,10 @@ static void lazy_cleanup_index(Relation indrel,
 				   LVRelStats *vacrelstats);
 static int lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 				 int tupindex, LVRelStats *vacrelstats, Buffer *vmbuffer);
+static int lazy_warmclear_page(Relation onerel, BlockNumber blkno,
+				 Buffer buffer, int chainindex, LVRelStats *vacrelstats,
+				 Buffer *vmbuffer, bool check_all_visible);
+static void lazy_reset_warm_pointer_count(LVRelStats *vacrelstats);
 static bool should_attempt_truncation(LVRelStats *vacrelstats);
 static void lazy_truncate_heap(Relation onerel, LVRelStats *vacrelstats);
 static BlockNumber count_nondeletable_pages(Relation onerel,
@@ -164,8 +210,15 @@ static BlockNumber count_nondeletable_pages(Relation onerel,
 static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks);
 static void lazy_record_dead_tuple(LVRelStats *vacrelstats,
 					   ItemPointer itemptr);
-static bool lazy_tid_reaped(ItemPointer itemptr, void *state);
+static void lazy_record_warm_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr);
+static void lazy_record_clear_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr);
+static IndexBulkDeleteCallbackResult lazy_tid_reaped(ItemPointer itemptr, bool is_warm, void *state);
+static IndexBulkDeleteCallbackResult lazy_indexvac_phase1(ItemPointer itemptr, bool is_warm, void *state);
+static IndexBulkDeleteCallbackResult lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state);
 static int	vac_cmp_itemptr(const void *left, const void *right);
+static int vac_cmp_warm_chain(const void *left, const void *right);
 static bool heap_page_is_all_visible(Relation rel, Buffer buf,
 					 TransactionId *visibility_cutoff_xid, bool *all_frozen);
 
@@ -690,8 +743,10 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		 * If we are close to overrunning the available space for dead-tuple
 		 * TIDs, pause and do a cycle of vacuuming before we tackle this page.
 		 */
-		if ((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
-			vacrelstats->num_dead_tuples > 0)
+		if (((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
+			vacrelstats->num_dead_tuples > 0) ||
+			((vacrelstats->max_warm_chains - vacrelstats->num_warm_chains) < MaxHeapTuplesPerPage &&
+			 vacrelstats->num_warm_chains > 0))
 		{
 			const int	hvp_index[] = {
 				PROGRESS_VACUUM_PHASE,
@@ -721,6 +776,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			/* Remove index entries */
 			for (i = 0; i < nindexes; i++)
 				lazy_vacuum_index(Irel[i],
+								  (vacrelstats->num_warm_chains > 0),
 								  &indstats[i],
 								  vacrelstats);
 
@@ -743,6 +799,9 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			 * valid.
 			 */
 			vacrelstats->num_dead_tuples = 0;
+			vacrelstats->num_warm_chains = 0;
+			memset(vacrelstats->warm_chains, 0,
+					vacrelstats->max_warm_chains * sizeof (LVWarmChain));
 			vacrelstats->num_index_scans++;
 
 			/* Report that we are once again scanning the heap */
@@ -947,15 +1006,31 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 				continue;
 			}
 
+			ItemPointerSet(&(tuple.t_self), blkno, offnum);
+
 			/* Redirect items mustn't be touched */
 			if (ItemIdIsRedirected(itemid))
 			{
+				HeapCheckWarmChainStatus status = heap_check_warm_chain(page,
+						&tuple.t_self, false);
+				if (HCWC_IS_WARM_UPDATED(status))
+				{
+					/*
+					 * A chain which is either completely WARM or completely
+					 * CLEAR is a candidate for chain conversion. Remember the
+					 * chain and whether it has all WARM tuples or not.
+					 */
+					if (HCWC_IS_ALL_WARM(status))
+						lazy_record_warm_chain(vacrelstats, &tuple.t_self);
+					else if (HCWC_IS_ALL_CLEAR(status))
+						lazy_record_clear_chain(vacrelstats, &tuple.t_self);
+					else
+						vacrelstats->num_non_convertible_warm_chains++;
+				}
 				hastup = true;	/* this page won't be truncatable */
 				continue;
 			}
 
-			ItemPointerSet(&(tuple.t_self), blkno, offnum);
-
 			/*
 			 * DEAD item pointers are to be vacuumed normally; but we don't
 			 * count them in tups_vacuumed, else we'd be double-counting (at
@@ -975,6 +1050,26 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			tuple.t_len = ItemIdGetLength(itemid);
 			tuple.t_tableOid = RelationGetRelid(onerel);
 
+			if (!HeapTupleIsHeapOnly(&tuple))
+			{
+				HeapCheckWarmChainStatus status = heap_check_warm_chain(page,
+						&tuple.t_self, false);
+				if (HCWC_IS_WARM_UPDATED(status))
+				{
+					/*
+					 * A chain which is either completely WARM or completely
+					 * CLEAR is a candidate for chain conversion. Remember the
+					 * chain and whether it has all WARM tuples or not.
+					 */
+					if (HCWC_IS_ALL_WARM(status))
+						lazy_record_warm_chain(vacrelstats, &tuple.t_self);
+					else if (HCWC_IS_ALL_CLEAR(status))
+						lazy_record_clear_chain(vacrelstats, &tuple.t_self);
+					else
+						vacrelstats->num_non_convertible_warm_chains++;
+				}
+			}
+
 			tupgone = false;
 
 			switch (HeapTupleSatisfiesVacuum(&tuple, OldestXmin, buf))
@@ -1040,6 +1135,19 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 							break;
 						}
 
+						/*
+						 * If this tuple was ever WARM updated or is a WARM
+						 * tuple, there could be multiple index entries
+						 * pointing to the root of this chain. We can't do
+						 * index-only scans for such tuples without verifying
+						 * index key check. So mark the page as !all_visible
+						 */
+						if (HeapTupleHeaderIsWarmUpdated(tuple.t_data))
+						{
+							all_visible = false;
+							break;
+						}
+
 						/* Track newest xmin on page. */
 						if (TransactionIdFollows(xmin, visibility_cutoff_xid))
 							visibility_cutoff_xid = xmin;
@@ -1282,7 +1390,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 
 	/* If any tuples need to be deleted, perform final vacuum cycle */
 	/* XXX put a threshold on min number of tuples here? */
-	if (vacrelstats->num_dead_tuples > 0)
+	if (vacrelstats->num_dead_tuples > 0 || vacrelstats->num_warm_chains > 0)
 	{
 		const int	hvp_index[] = {
 			PROGRESS_VACUUM_PHASE,
@@ -1300,6 +1408,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		/* Remove index entries */
 		for (i = 0; i < nindexes; i++)
 			lazy_vacuum_index(Irel[i],
+							  (vacrelstats->num_warm_chains > 0),
 							  &indstats[i],
 							  vacrelstats);
 
@@ -1367,7 +1476,10 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
  *
  *		This routine marks dead tuples as unused and compacts out free
  *		space on their pages.  Pages not having dead tuples recorded from
- *		lazy_scan_heap are not visited at all.
+ *		lazy_scan_heap are not visited at all. This routine also converts
+ *		candidate WARM chains to HOT chains by clearing WARM-related flags. The
+ *		candidate chains are determined by the preceding index scans after
+ *		looking at the data collected by the first heap scan.
  *
  * Note: the reason for doing this as a second pass is we cannot remove
  * the tuples until we've removed their index entries, and we want to
@@ -1376,7 +1488,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 static void
 lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 {
-	int			tupindex;
+	int			tupindex, chainindex;
 	int			npages;
 	PGRUsage	ru0;
 	Buffer		vmbuffer = InvalidBuffer;
@@ -1385,33 +1497,69 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 	npages = 0;
 
 	tupindex = 0;
-	while (tupindex < vacrelstats->num_dead_tuples)
+	chainindex = 0;
+	while (tupindex < vacrelstats->num_dead_tuples ||
+		   chainindex < vacrelstats->num_warm_chains)
 	{
-		BlockNumber tblk;
+		BlockNumber tblk, chainblk, vacblk;
 		Buffer		buf;
 		Page		page;
 		Size		freespace;
 
 		vacuum_delay_point();
 
-		tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
-		buf = ReadBufferExtended(onerel, MAIN_FORKNUM, tblk, RBM_NORMAL,
+		tblk = chainblk = InvalidBlockNumber;
+		if (chainindex < vacrelstats->num_warm_chains)
+			chainblk =
+				ItemPointerGetBlockNumber(&(vacrelstats->warm_chains[chainindex].chain_tid));
+
+		if (tupindex < vacrelstats->num_dead_tuples)
+			tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
+
+		if (tblk == InvalidBlockNumber)
+			vacblk = chainblk;
+		else if (chainblk == InvalidBlockNumber)
+			vacblk = tblk;
+		else
+			vacblk = Min(chainblk, tblk);
+
+		Assert(vacblk != InvalidBlockNumber);
+
+		buf = ReadBufferExtended(onerel, MAIN_FORKNUM, vacblk, RBM_NORMAL,
 								 vac_strategy);
-		if (!ConditionalLockBufferForCleanup(buf))
+
+
+		if (vacblk == chainblk)
+			LockBufferForCleanup(buf);
+		else if (!ConditionalLockBufferForCleanup(buf))
 		{
 			ReleaseBuffer(buf);
 			++tupindex;
 			continue;
 		}
-		tupindex = lazy_vacuum_page(onerel, tblk, buf, tupindex, vacrelstats,
-									&vmbuffer);
+
+		/*
+		 * Convert WARM chains on this page. This should be done before
+		 * vacuuming the page to ensure that we can correctly set visibility
+		 * bits after clearing WARM chains.
+		 *
+		 * If we are going to vacuum this page then don't check for
+		 * all-visibility just yet.
+		 */
+		if (vacblk == chainblk)
+			chainindex = lazy_warmclear_page(onerel, chainblk, buf, chainindex,
+					vacrelstats, &vmbuffer, chainblk != tblk);
+
+		if (vacblk == tblk)
+			tupindex = lazy_vacuum_page(onerel, tblk, buf, tupindex, vacrelstats,
+					&vmbuffer);
 
 		/* Now that we've compacted the page, record its available space */
 		page = BufferGetPage(buf);
 		freespace = PageGetHeapFreeSpace(page);
 
 		UnlockReleaseBuffer(buf);
-		RecordPageWithFreeSpace(onerel, tblk, freespace);
+		RecordPageWithFreeSpace(onerel, vacblk, freespace);
 		npages++;
 	}
 
@@ -1430,6 +1578,107 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 }
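
To make the merge above easier to review: lazy_vacuum_heap now walks two sorted work lists (dead tuples and candidate WARM chains) in lock step, always visiting the lower block number first. A minimal standalone sketch of just that selection logic (types reduced to bare essentials, names illustrative, not from the patch):

```c
#include <assert.h>

typedef unsigned int BlockNumber;
#define InvalidBlockNumber ((BlockNumber) 0xFFFFFFFF)
#define Min(x, y) ((x) < (y) ? (x) : (y))

/*
 * Choose the next heap block to visit, given the block at the head of
 * each sorted work list (InvalidBlockNumber when a list is exhausted).
 */
static BlockNumber
next_vacuum_block(BlockNumber chainblk, BlockNumber tblk)
{
	if (tblk == InvalidBlockNumber)
		return chainblk;
	if (chainblk == InvalidBlockNumber)
		return tblk;
	return Min(chainblk, tblk);
}
```

Because both lists are sorted by TID, taking the minimum on each iteration guarantees every block is pinned and cleanup-locked at most once per pass.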
 
 /*
+ *	lazy_warmclear_page() -- clear various WARM bits on the tuples.
+ *
+ * Caller must hold pin and buffer cleanup lock on the buffer.
+ *
+ * chainindex is the index in vacrelstats->warm_chains of the first candidate
+ * chain on this page.  We assume the rest follow sequentially.
+ * The return value is the first chainindex past the chains of this page.
+ *
+ * If check_all_visible is set then we also check if the page has now become
+ * all visible and update visibility map.
+ */
+static int
+lazy_warmclear_page(Relation onerel, BlockNumber blkno, Buffer buffer,
+				 int chainindex, LVRelStats *vacrelstats, Buffer *vmbuffer,
+				 bool check_all_visible)
+{
+	Page			page = BufferGetPage(buffer);
+	OffsetNumber	cleared_offnums[MaxHeapTuplesPerPage];
+	int				num_cleared = 0;
+	TransactionId	visibility_cutoff_xid;
+	bool			all_frozen;
+
+	pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_WARMCLEARED, blkno);
+
+	START_CRIT_SECTION();
+
+	for (; chainindex < vacrelstats->num_warm_chains; chainindex++)
+	{
+		BlockNumber tblk;
+		LVWarmChain	*chain;
+
+		chain = &vacrelstats->warm_chains[chainindex];
+
+		tblk = ItemPointerGetBlockNumber(&chain->chain_tid);
+		if (tblk != blkno)
+			break;				/* past end of tuples for this block */
+
+		/*
+		 * Since a heap page can have no more than MaxHeapTuplesPerPage
+		 * offnums and we process each offnum only once, a
+		 * MaxHeapTuplesPerPage-sized array is enough to hold all tuples
+		 * cleared on this page.
+		 */
+		if (!chain->keep_warm_chain)
+			num_cleared += heap_clear_warm_chain(page, &chain->chain_tid,
+					cleared_offnums + num_cleared);
+	}
+
+	/*
+	 * Mark buffer dirty before we write WAL.
+	 */
+	MarkBufferDirty(buffer);
+
+	/* XLOG stuff */
+	if (RelationNeedsWAL(onerel))
+	{
+		XLogRecPtr	recptr;
+
+		recptr = log_heap_warmclear(onerel, buffer,
+								cleared_offnums, num_cleared);
+		PageSetLSN(page, recptr);
+	}
+
+	END_CRIT_SECTION();
+
+	/* If not checking for all-visibility then we're done */
+	if (!check_all_visible)
+		return chainindex;
+
+	/*
+	 * The following code should match the corresponding code in
+	 * lazy_vacuum_page.
+	 */
+	if (heap_page_is_all_visible(onerel, buffer, &visibility_cutoff_xid,
+								 &all_frozen))
+		PageSetAllVisible(page);
+
+	/*
+	 * All the changes to the heap page have been done. If the all-visible
+	 * flag is now set, also set the VM all-visible bit (and, if possible, the
+	 * all-frozen bit) unless this has already been done previously.
+	 */
+	if (PageIsAllVisible(page))
+	{
+		uint8		vm_status = visibilitymap_get_status(onerel, blkno, vmbuffer);
+		uint8		flags = 0;
+
+		/* Set the VM all-visible and all-frozen bits in flags, if needed */
+		if ((vm_status & VISIBILITYMAP_ALL_VISIBLE) == 0)
+			flags |= VISIBILITYMAP_ALL_VISIBLE;
+		if ((vm_status & VISIBILITYMAP_ALL_FROZEN) == 0 && all_frozen)
+			flags |= VISIBILITYMAP_ALL_FROZEN;
+
+		Assert(BufferIsValid(*vmbuffer));
+		if (flags != 0)
+			visibilitymap_set(onerel, blkno, buffer, InvalidXLogRecPtr,
+							  *vmbuffer, visibility_cutoff_xid, flags);
+	}
+	return chainindex;
+}
+
+/*
  *	lazy_vacuum_page() -- free dead tuples on a page
  *					 and repair its fragmentation.
  *
@@ -1582,6 +1831,24 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
 	return false;
 }
 
+/*
+ * Reset counters tracking number of WARM and CLEAR pointers per candidate TID.
+ * These counters are maintained per index and cleared when the next index is
+ * picked up for cleanup.
+ *
+ * We don't touch the keep_warm_chain since once a chain is known to be
+ * non-convertible, we must remember that across all indexes.
+ */
+static void
+lazy_reset_warm_pointer_count(LVRelStats *vacrelstats)
+{
+	int i;
+	for (i = 0; i < vacrelstats->num_warm_chains; i++)
+	{
+		LVWarmChain *chain = &vacrelstats->warm_chains[i];
+		chain->num_clear_pointers = chain->num_warm_pointers = 0;
+	}
+}
 
 /*
  *	lazy_vacuum_index() -- vacuum one index relation.
@@ -1591,6 +1858,7 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
  */
 static void
 lazy_vacuum_index(Relation indrel,
+				  bool clear_warm,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats)
 {
@@ -1606,15 +1874,87 @@ lazy_vacuum_index(Relation indrel,
 	ivinfo.num_heap_tuples = vacrelstats->old_rel_tuples;
 	ivinfo.strategy = vac_strategy;
 
-	/* Do bulk deletion */
-	*stats = index_bulk_delete(&ivinfo, *stats,
-							   lazy_tid_reaped, (void *) vacrelstats);
+	/*
+	 * If told, convert WARM chains into HOT chains.
+	 *
+	 * We must have already collected candidate WARM chains i.e. chains that
+	 * have either all tuples with HEAP_WARM_TUPLE flag set or none.
+	 *
+	 * This works in two phases. In the first phase, we do a complete index
+	 * scan and collect information about index pointers to the candidate
+	 * chains, but we don't do conversion. To be precise, we count the number
+	 * of WARM and CLEAR index pointers to each candidate chain and use that
+	 * knowledge to arrive at a decision and do the actual conversion during
+	 * the second phase (we kill known dead pointers though in this phase).
+	 *
+	 * In the second phase, for each candidate chain we check if we have seen a
+	 * WARM index pointer. For such chains, we kill the CLEAR pointer and
+	 * convert the WARM pointer into a CLEAR pointer. The heap tuples are
+	 * cleared of WARM flags in the second heap scan. If we did not find any
+	 * WARM pointer to a WARM chain, that means that the chain is reachable
+	 * from the CLEAR pointer (because, say, a WARM update did not add an entry
+	 * for this index). In that case, we do nothing.  There is a third case
+	 * where we find two CLEAR pointers to a candidate chain. This can happen
+	 * because of aborted vacuums. We don't handle that case yet, but it should
+	 * be possible to apply the same recheck logic and find which of the clear
+	 * pointers is redundant and should be removed.
+	 *
+	 * For CLEAR chains, we just kill the WARM pointer, if it exists, and keep
+	 * the CLEAR pointer.
+	 */
+	if (clear_warm)
+	{
+		/*
+		 * Before starting the index scan, reset the counters of WARM and CLEAR
+		 * pointers, probably carried forward from the previous index.
+		 */
+		lazy_reset_warm_pointer_count(vacrelstats);
+
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_indexvac_phase1, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to remove %d row versions, found "
+						"%0.f warm pointers, %0.f clear pointers, removed "
+						"%0.f warm pointers, removed %0.f clear pointers",
+						RelationGetRelationName(indrel),
+						vacrelstats->num_dead_tuples,
+						(*stats)->num_warm_pointers,
+						(*stats)->num_clear_pointers,
+						(*stats)->warm_pointers_removed,
+						(*stats)->clear_pointers_removed)));
+
+		(*stats)->num_warm_pointers = 0;
+		(*stats)->num_clear_pointers = 0;
+		(*stats)->warm_pointers_removed = 0;
+		(*stats)->clear_pointers_removed = 0;
+		(*stats)->pointers_cleared = 0;
+
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_indexvac_phase2, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to convert WARM pointers, found "
+						"%0.f WARM pointers, %0.f CLEAR pointers, removed "
+						"%0.f WARM pointers, removed %0.f CLEAR pointers, "
+						"cleared %0.f WARM pointers",
+						RelationGetRelationName(indrel),
+						(*stats)->num_warm_pointers,
+						(*stats)->num_clear_pointers,
+						(*stats)->warm_pointers_removed,
+						(*stats)->clear_pointers_removed,
+						(*stats)->pointers_cleared)));
+	}
+	else
+	{
+		/* Do bulk deletion */
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_tid_reaped, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to remove %d row versions",
+						RelationGetRelationName(indrel),
+						vacrelstats->num_dead_tuples),
+				 errdetail("%s.", pg_rusage_show(&ru0))));
+	}
 
-	ereport(elevel,
-			(errmsg("scanned index \"%s\" to remove %d row versions",
-					RelationGetRelationName(indrel),
-					vacrelstats->num_dead_tuples),
-			 errdetail("%s.", pg_rusage_show(&ru0))));
 }
 
 /*
@@ -1988,9 +2328,11 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 
 	if (vacrelstats->hasindex)
 	{
-		maxtuples = (vac_work_mem * 1024L) / sizeof(ItemPointerData);
+		maxtuples = (vac_work_mem * 1024L) / (sizeof(ItemPointerData) +
+				sizeof(LVWarmChain));
 		maxtuples = Min(maxtuples, INT_MAX);
-		maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));
+		maxtuples = Min(maxtuples, MaxAllocSize / (sizeof(ItemPointerData) +
+					sizeof(LVWarmChain)));
 
 		/* curious coding here to ensure the multiplication can't overflow */
 		if ((BlockNumber) (maxtuples / LAZY_ALLOC_TUPLES) > relblocks)
@@ -2008,6 +2350,57 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 	vacrelstats->max_dead_tuples = (int) maxtuples;
 	vacrelstats->dead_tuples = (ItemPointer)
 		palloc(maxtuples * sizeof(ItemPointerData));
+
+	/*
+	 * XXX Cheat for now and allocate the same size array for tracking warm
+	 * chains. maxtuples must have been already adjusted above to ensure we
+	 * don't cross vac_work_mem.
+	 */
+	vacrelstats->num_warm_chains = 0;
+	vacrelstats->max_warm_chains = (int) maxtuples;
+	vacrelstats->warm_chains = (LVWarmChain *)
+		palloc0(maxtuples * sizeof(LVWarmChain));
+
+}
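
The adjusted sizing above charges each slot for both an ItemPointerData and an LVWarmChain, since the warm_chains array is (for now) allocated with the same length as dead_tuples. A hedged sketch of that arithmetic, using placeholder struct layouts (the real definitions differ):

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>

/* Placeholder layouts; the real structs differ. */
typedef struct { unsigned short bi_hi, bi_lo, offnum; } ItemPointerData;
typedef struct
{
	ItemPointerData chain_tid;
	int		num_warm_pointers;
	int		num_clear_pointers;
	char	flags;
} LVWarmChain;

#define MaxAllocSize	((size_t) 0x3fffffff)	/* 1 GB - 1, as in palloc */
#define Min(x, y)		((x) < (y) ? (x) : (y))

/*
 * How many entries fit in vac_work_mem when each slot must pay for one
 * dead-tuple TID plus one WARM-chain record.
 */
static long
warm_max_tuples(long vac_work_mem_kb)
{
	size_t	per_entry = sizeof(ItemPointerData) + sizeof(LVWarmChain);
	long	maxtuples = (vac_work_mem_kb * 1024L) / (long) per_entry;

	maxtuples = Min(maxtuples, (long) INT_MAX);
	maxtuples = Min(maxtuples, (long) (MaxAllocSize / per_entry));
	return maxtuples;
}
```

Dividing by the combined entry size up front is what keeps the two palloc'd arrays together under the vac_work_mem budget.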
+
+/*
+ * lazy_record_clear_chain - remember one CLEAR chain
+ */
+static void
+lazy_record_clear_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr)
+{
+	/*
+	 * The array shouldn't overflow under normal behavior, but perhaps it
+	 * could if we are given a really small maintenance_work_mem. In that
+	 * case, just forget the last few tuples (we'll get 'em next time).
+	 */
+	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	{
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 0;
+		vacrelstats->num_warm_chains++;
+	}
+}
+
+/*
+ * lazy_record_warm_chain - remember one WARM chain
+ */
+static void
+lazy_record_warm_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr)
+{
+	/*
+	 * The array shouldn't overflow under normal behavior, but perhaps it
+	 * could if we are given a really small maintenance_work_mem. In that
+	 * case, just forget the last few tuples (we'll get 'em next time).
+	 */
+	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	{
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 1;
+		vacrelstats->num_warm_chains++;
+	}
 }
 
 /*
@@ -2038,8 +2431,8 @@ lazy_record_dead_tuple(LVRelStats *vacrelstats,
  *
  *		Assumes dead_tuples array is in sorted order.
  */
-static bool
-lazy_tid_reaped(ItemPointer itemptr, void *state)
+static IndexBulkDeleteCallbackResult
+lazy_tid_reaped(ItemPointer itemptr, bool is_warm, void *state)
 {
 	LVRelStats *vacrelstats = (LVRelStats *) state;
 	ItemPointer res;
@@ -2050,7 +2443,193 @@ lazy_tid_reaped(ItemPointer itemptr, void *state)
 								sizeof(ItemPointerData),
 								vac_cmp_itemptr);
 
-	return (res != NULL);
+	return (res != NULL) ? IBDCR_DELETE : IBDCR_KEEP;
+}
+
+/*
+ *	lazy_indexvac_phase1() -- run first pass of index vacuum
+ *
+ *		This has the right signature to be an IndexBulkDeleteCallback.
+ */
+static IndexBulkDeleteCallbackResult
+lazy_indexvac_phase1(ItemPointer itemptr, bool is_warm, void *state)
+{
+	LVRelStats		*vacrelstats = (LVRelStats *) state;
+	ItemPointer		res;
+	LVWarmChain	*chain;
+
+	res = (ItemPointer) bsearch((void *) itemptr,
+								(void *) vacrelstats->dead_tuples,
+								vacrelstats->num_dead_tuples,
+								sizeof(ItemPointerData),
+								vac_cmp_itemptr);
+
+	if (res != NULL)
+		return IBDCR_DELETE;
+
+	chain = (LVWarmChain *) bsearch((void *) itemptr,
+								(void *) vacrelstats->warm_chains,
+								vacrelstats->num_warm_chains,
+								sizeof(LVWarmChain),
+								vac_cmp_warm_chain);
+	if (chain != NULL)
+	{
+		if (is_warm)
+			chain->num_warm_pointers++;
+		else
+			chain->num_clear_pointers++;
+	}
+	return IBDCR_KEEP;
+}
+
+/*
+ *	lazy_indexvac_phase2() -- run second pass of index vacuum
+ *
+ *		This has the right signature to be an IndexBulkDeleteCallback.
+ */
+static IndexBulkDeleteCallbackResult
+lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state)
+{
+	LVRelStats		*vacrelstats = (LVRelStats *) state;
+	LVWarmChain	*chain;
+
+	chain = (LVWarmChain *) bsearch((void *) itemptr,
+								(void *) vacrelstats->warm_chains,
+								vacrelstats->num_warm_chains,
+								sizeof(LVWarmChain),
+								vac_cmp_warm_chain);
+
+	if (chain != NULL && (chain->keep_warm_chain != 1))
+	{
+		/*
+		 * At no point can there be more than one WARM pointer to any chain,
+		 * nor more than two CLEAR pointers.
+		 */
+		Assert(chain->num_warm_pointers <= 1);
+		Assert(chain->num_clear_pointers <= 2);
+
+		if (chain->is_postwarm_chain == 1)
+		{
+			if (is_warm)
+			{
+				/*
+				 * A WARM pointer, pointing to a WARM chain.
+				 *
+				 * Clear the warm pointer (and delete the CLEAR pointer). We
+				 * may have already seen the CLEAR pointer in the scan and
+				 * deleted that or we may see it later in the scan. It doesn't
+				 * matter if we fail at any point because we won't clear up
+				 * WARM bits on the heap tuples until we have dealt with the
+				 * index pointers cleanly.
+				 */
+				return IBDCR_CLEAR_WARM;
+			}
+			else
+			{
+				/*
+				 * CLEAR pointer to a WARM chain.
+				 */
+				if (chain->num_warm_pointers > 0)
+				{
+					/*
+					 * If there exists a WARM pointer to the chain, we can
+					 * delete the CLEAR pointer and clear the WARM bits on the
+					 * heap tuples.
+					 */
+					return IBDCR_DELETE;
+				}
+				else if (chain->num_clear_pointers == 1)
+				{
+					/*
+					 * If this is the only pointer to a WARM chain, we must
+					 * keep the CLEAR pointer.
+					 *
+					 * The presence of the WARM chain indicates that the WARM
+					 * update must have committed. But during the update
+					 * this index was probably not updated and hence it
+					 * contains just one, original CLEAR pointer to the chain.
+					 * We should be able to clear the WARM bits on heap tuples
+					 * unless we later find another index which prevents the
+					 * cleanup.
+					 */
+					return IBDCR_KEEP;
+				}
+			}
+		}
+		else
+		{
+			/*
+			 * This is a CLEAR chain.
+			 */
+			if (is_warm)
+			{
+				/*
+				 * A WARM pointer to a CLEAR chain.
+				 *
+				 * This can happen when a WARM update is aborted. Later the HOT
+				 * chain is pruned leaving behind only CLEAR tuples in the
+				 * chain. But the WARM index pointer inserted in the index
+				 * remains and it must now be deleted before we clear WARM bits
+				 * from the heap tuple.
+				 */
+				return IBDCR_DELETE;
+			}
+
+			/*
+			 * CLEAR pointer to a CLEAR chain.
+			 *
+			 * If this is the only surviving CLEAR pointer, keep it and clear
+			 * the WARM bits from the heap tuples.
+			 */
+			if (chain->num_clear_pointers == 1)
+				return IBDCR_KEEP;
+
+			/*
+			 * If there is more than one CLEAR pointer to this chain, we could
+			 * apply the recheck logic, kill the redundant CLEAR pointer and
+			 * convert the chain. But that's not yet done.
+			 */
+		}
+
+		/*
+		 * For everything else, we must keep the WARM bits and also keep the
+		 * index pointers.
+		 */
+		chain->keep_warm_chain = 1;
+		return IBDCR_KEEP;
+	}
+	return IBDCR_KEEP;
+}
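
The branch structure of lazy_indexvac_phase2 reduces to a small decision table over (chain kind, pointer kind, pointer counts). A standalone sketch of just that decision, with the keep_warm_chain side effect of the real code omitted (illustrative names, not part of the patch):

```c
#include <assert.h>
#include <stdbool.h>

typedef enum
{
	IBDCR_KEEP,
	IBDCR_DELETE,
	IBDCR_CLEAR_WARM
} IndexBulkDeleteCallbackResult;

/*
 * Decide the fate of one index pointer in the second index pass.
 * The keep_warm_chain side effect of the real code is left out.
 */
static IndexBulkDeleteCallbackResult
warm_phase2_action(bool postwarm_chain, bool warm_pointer,
				   int num_warm_pointers, int num_clear_pointers)
{
	if (postwarm_chain)
	{
		if (warm_pointer)
			return IBDCR_CLEAR_WARM;	/* becomes the lone CLEAR pointer */
		if (num_warm_pointers > 0)
			return IBDCR_DELETE;		/* redundant CLEAR pointer */
		if (num_clear_pointers == 1)
			return IBDCR_KEEP;			/* index never got a WARM pointer */
	}
	else
	{
		if (warm_pointer)
			return IBDCR_DELETE;		/* left over from an aborted update */
		if (num_clear_pointers == 1)
			return IBDCR_KEEP;
	}
	return IBDCR_KEEP;					/* undecided: chain stays WARM */
}
```

The final fall-through is the "two CLEAR pointers" case described above: the pointer is kept and (in the real code) the chain is flagged so its heap tuples are not cleared.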
+
+/*
+ * Comparator routines for use with qsort() and bsearch(). Similar to
+ * vac_cmp_itemptr, but right hand argument is LVWarmChain struct pointer.
+ */
+static int
+vac_cmp_warm_chain(const void *left, const void *right)
+{
+	BlockNumber lblk,
+				rblk;
+	OffsetNumber loff,
+				roff;
+
+	lblk = ItemPointerGetBlockNumber((ItemPointer) left);
+	rblk = ItemPointerGetBlockNumber(&((LVWarmChain *) right)->chain_tid);
+
+	if (lblk < rblk)
+		return -1;
+	if (lblk > rblk)
+		return 1;
+
+	loff = ItemPointerGetOffsetNumber((ItemPointer) left);
+	roff = ItemPointerGetOffsetNumber(&((LVWarmChain *) right)->chain_tid);
+
+	if (loff < roff)
+		return -1;
+	if (loff > roff)
+		return 1;
+
+	return 0;
 }
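
Note the comparator's asymmetry: bsearch() hands it the bare ItemPointer key as the left argument and an LVWarmChain array element as the right. A reduced example of the same pattern, with simplified stand-in types:

```c
#include <assert.h>
#include <stdlib.h>

typedef struct { unsigned blk; unsigned short off; } Tid;
typedef struct { Tid chain_tid; int keep_warm_chain; } Chain;

/*
 * bsearch() passes the search key as 'left' and an array element as
 * 'right', so the two sides deliberately have different types.
 */
static int
cmp_tid_chain(const void *left, const void *right)
{
	const Tid  *key = (const Tid *) left;
	const Tid  *elem = &((const Chain *) right)->chain_tid;

	if (key->blk != elem->blk)
		return (key->blk < elem->blk) ? -1 : 1;
	if (key->off != elem->off)
		return (key->off < elem->off) ? -1 : 1;
	return 0;
}

static Chain *
find_chain(Tid key, Chain *chains, size_t nchains)
{
	return (Chain *) bsearch(&key, chains, nchains, sizeof(Chain),
							 cmp_tid_chain);
}
```

The array must of course be kept sorted by chain_tid, which holds here because the chains are recorded in heap-scan (TID) order.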
 
 /*
@@ -2166,6 +2745,18 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
 						break;
 					}
 
+					/*
+					 * If this or any other tuple in the chain was ever WARM
+					 * updated, there could be multiple index entries pointing
+					 * to the root of this chain. We can't do index-only scans
+					 * for such tuples without rechecking the index keys, so
+					 * mark the page as !all_visible.
+					 */
+					if (HeapTupleHeaderIsWarmUpdated(tuple.t_data))
+					{
+						all_visible = false;
+					}
+
 					/* Track newest xmin on page. */
 					if (TransactionIdFollows(xmin, *visibility_cutoff_xid))
 						*visibility_cutoff_xid = xmin;
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 2142273..3e49a8f 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -270,6 +270,8 @@ ExecCloseIndices(ResultRelInfo *resultRelInfo)
 List *
 ExecInsertIndexTuples(TupleTableSlot *slot,
 					  ItemPointer tupleid,
+					  ItemPointer root_tid,
+					  Bitmapset *modified_attrs,
 					  EState *estate,
 					  bool noDupErr,
 					  bool *specConflict,
@@ -324,6 +326,17 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 		if (!indexInfo->ii_ReadyForInserts)
 			continue;
 
+		/*
+		 * If modified_attrs is set, we only insert index entries for those
+		 * indexes whose key columns have changed. All other indexes can use
+		 * their existing index pointers to look up the new tuple.
+		 */
+		if (modified_attrs)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
 		/* Check for partial index */
 		if (indexInfo->ii_Predicate != NIL)
 		{
@@ -389,10 +402,11 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 			index_insert(indexRelation, /* index relation */
 						 values,	/* array of index Datums */
 						 isnull,	/* null flags */
-						 tupleid,		/* tid of heap tuple */
+						 root_tid,		/* tid of heap or root tuple */
 						 heapRelation,	/* heap relation */
 						 checkUnique,	/* type of uniqueness check to do */
-						 indexInfo);	/* index AM may need this */
+						 indexInfo,	/* index AM may need this */
+						 (modified_attrs != NULL));	/* WARM update? */
 
 		/*
 		 * If the index has an associated exclusion constraint, check that.
@@ -791,6 +805,9 @@ retry:
 		{
 			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
 				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
+			else
+				ItemPointerCopy(&tup->t_self, &ctid_wait);
+
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index f20d728..747e4ce 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -399,6 +399,8 @@ ExecSimpleRelationInsert(EState *estate, TupleTableSlot *slot)
 
 		if (resultRelInfo->ri_NumIndices > 0)
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &(tuple->t_self),
+												   NULL,
 												   estate, false, NULL,
 												   NIL);
 
@@ -445,6 +447,8 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 	if (!skip_tuple)
 	{
 		List	   *recheckIndexes = NIL;
+		bool		warm_update;
+		Bitmapset  *modified_attrs;
 
 		/* Check the constraints of the tuple */
 		if (rel->rd_att->constr)
@@ -455,13 +459,35 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 
 		/* OK, update the tuple and index entries for it */
 		simple_heap_update(rel, &searchslot->tts_tuple->t_self,
-						   slot->tts_tuple);
+						   slot->tts_tuple, &modified_attrs, &warm_update);
 
 		if (resultRelInfo->ri_NumIndices > 0 &&
-			!HeapTupleIsHeapOnly(slot->tts_tuple))
+			(!HeapTupleIsHeapOnly(slot->tts_tuple) || warm_update))
+		{
+			ItemPointerData root_tid;
+
+			/*
+			 * If we did a WARM update then we must index the tuple using its
+			 * root line pointer and not the tuple TID itself.
+			 */
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self,
+						&root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
+
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL,
 												   NIL);
+		}
 
 		/* AFTER ROW UPDATE Triggers */
 		ExecARUpdateTriggers(estate, resultRelInfo,
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index 2e9ff7d..f7bb6ca 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -39,6 +39,7 @@
 
 #include "access/relscan.h"
 #include "access/transam.h"
+#include "access/valid.h"
 #include "executor/execdebug.h"
 #include "executor/nodeBitmapHeapscan.h"
 #include "pgstat.h"
@@ -395,11 +396,27 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 			OffsetNumber offnum = tbmres->offsets[curslot];
 			ItemPointerData tid;
 			HeapTupleData heapTuple;
+			bool recheck = false;
 
 			ItemPointerSet(&tid, page, offnum);
 			if (heap_hot_search_buffer(&tid, scan->rs_rd, buffer, snapshot,
-									   &heapTuple, NULL, true))
-				scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+									   &heapTuple, NULL, true, &recheck))
+			{
+				bool valid = true;
+
+				if (scan->rs_key)
+					HeapKeyTest(&heapTuple, RelationGetDescr(scan->rs_rd),
+							scan->rs_nkeys, scan->rs_key, valid);
+				if (valid)
+					scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+
+				/*
+				 * If the heap tuple needs a recheck because of a WARM update,
+				 * it's a lossy case.
+				 */
+				if (recheck)
+					tbmres->recheck = true;
+			}
 		}
 	}
 	else
diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c
index cb6aff9..dff4086 100644
--- a/src/backend/executor/nodeIndexscan.c
+++ b/src/backend/executor/nodeIndexscan.c
@@ -142,8 +142,8 @@ IndexNext(IndexScanState *node)
 					   false);	/* don't pfree */
 
 		/*
-		 * If the index was lossy, we have to recheck the index quals using
-		 * the fetched tuple.
+		 * If the index was lossy or the tuple was WARM, we have to recheck
+		 * the index quals using the fetched tuple.
 		 */
 		if (scandesc->xs_recheck)
 		{
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index 95e1589..a1f3440 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -512,6 +512,7 @@ ExecInsert(ModifyTableState *mtstate,
 
 			/* insert index entries for tuple */
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												 &(tuple->t_self), NULL,
 												 estate, true, &specConflict,
 												   arbiterIndexes);
 
@@ -558,6 +559,7 @@ ExecInsert(ModifyTableState *mtstate,
 			/* insert index entries for tuple */
 			if (resultRelInfo->ri_NumIndices > 0)
 				recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+													   &(tuple->t_self), NULL,
 													   estate, false, NULL,
 													   arbiterIndexes);
 		}
@@ -891,6 +893,9 @@ ExecUpdate(ItemPointer tupleid,
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
 	List	   *recheckIndexes = NIL;
+	Bitmapset  *modified_attrs = NULL;
+	ItemPointerData	root_tid;
+	bool		warm_update;
 
 	/*
 	 * abort the operation if not running transactions
@@ -1007,7 +1012,7 @@ lreplace:;
 							 estate->es_output_cid,
 							 estate->es_crosscheck_snapshot,
 							 true /* wait for commit */ ,
-							 &hufd, &lockmode);
+							 &hufd, &lockmode, &modified_attrs, &warm_update);
 		switch (result)
 		{
 			case HeapTupleSelfUpdated:
@@ -1094,10 +1099,28 @@ lreplace:;
 		 * the t_self field.
 		 *
 		 * If it's a HOT update, we mustn't insert new index entries.
+		 *
+		 * If it's a WARM update, then we must insert new entries with TID
+		 * pointing to the root of the WARM chain.
 		 */
-		if (resultRelInfo->ri_NumIndices > 0 && !HeapTupleIsHeapOnly(tuple))
+		if (resultRelInfo->ri_NumIndices > 0 &&
+			(!HeapTupleIsHeapOnly(tuple) || warm_update))
+		{
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self, &root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL, NIL);
+		}
 	}
 
 	if (canSetTag)
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index 3a50488..806d812 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -1824,7 +1824,7 @@ pgstat_count_heap_insert(Relation rel, PgStat_Counter n)
  * pgstat_count_heap_update - count a tuple update
  */
 void
-pgstat_count_heap_update(Relation rel, bool hot)
+pgstat_count_heap_update(Relation rel, bool hot, bool warm)
 {
 	PgStat_TableStatus *pgstat_info = rel->pgstat_info;
 
@@ -1842,6 +1842,8 @@ pgstat_count_heap_update(Relation rel, bool hot)
 		/* t_tuples_hot_updated is nontransactional, so just advance it */
 		if (hot)
 			pgstat_info->t_counts.t_tuples_hot_updated++;
+		else if (warm)
+			pgstat_info->t_counts.t_tuples_warm_updated++;
 	}
 }
 
@@ -4324,6 +4326,7 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->tuples_updated = 0;
 		result->tuples_deleted = 0;
 		result->tuples_hot_updated = 0;
+		result->tuples_warm_updated = 0;
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
 		result->changes_since_analyze = 0;
@@ -5433,6 +5436,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated = tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted = tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated = tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated = tabmsg->t_counts.t_tuples_warm_updated;
 			tabentry->n_live_tuples = tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples = tabmsg->t_counts.t_delta_dead_tuples;
 			tabentry->changes_since_analyze = tabmsg->t_counts.t_changed_tuples;
@@ -5460,6 +5464,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated += tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted += tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated += tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated += tabmsg->t_counts.t_tuples_warm_updated;
 			/* If table was truncated, first reset the live/dead counters */
 			if (tabmsg->t_counts.t_truncated)
 			{
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index 5c13d26..7a9b48a 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -347,7 +347,7 @@ DecodeStandbyOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 static void
 DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 {
-	uint8		info = XLogRecGetInfo(buf->record) & XLOG_HEAP_OPMASK;
+	uint8		info = XLogRecGetInfo(buf->record) & XLOG_HEAP2_OPMASK;
 	TransactionId xid = XLogRecGetXid(buf->record);
 	SnapBuild  *builder = ctx->snapshot_builder;
 
@@ -359,10 +359,6 @@ DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 
 	switch (info)
 	{
-		case XLOG_HEAP2_MULTI_INSERT:
-			if (SnapBuildProcessChange(builder, xid, buf->origptr))
-				DecodeMultiInsert(ctx, buf);
-			break;
 		case XLOG_HEAP2_NEW_CID:
 			{
 				xl_heap_new_cid *xlrec;
@@ -390,6 +386,7 @@ DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 		case XLOG_HEAP2_CLEANUP_INFO:
 		case XLOG_HEAP2_VISIBLE:
 		case XLOG_HEAP2_LOCK_UPDATED:
+		case XLOG_HEAP2_WARMCLEAR:
 			break;
 		default:
 			elog(ERROR, "unexpected RM_HEAP2_ID record type: %u", info);
@@ -418,6 +415,10 @@ DecodeHeapOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 			if (SnapBuildProcessChange(builder, xid, buf->origptr))
 				DecodeInsert(ctx, buf);
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			if (SnapBuildProcessChange(builder, xid, buf->origptr))
+				DecodeMultiInsert(ctx, buf);
+			break;
 
 			/*
 			 * Treat HOT update as normal updates. There is no useful
@@ -809,7 +810,7 @@ DecodeDelete(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 }
 
 /*
- * Decode XLOG_HEAP2_MULTI_INSERT_insert record into multiple tuplebufs.
+ * Decode XLOG_HEAP_MULTI_INSERT record into multiple tuplebufs.
  *
  * Currently MULTI_INSERT will always contain the full tuples.
  */
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index a987d0d..b8677f3 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -145,6 +145,22 @@ pg_stat_get_tuples_hot_updated(PG_FUNCTION_ARGS)
 
 
 Datum
+pg_stat_get_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_StatTabEntry *tabentry;
+
+	if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+
+Datum
 pg_stat_get_live_tuples(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
@@ -1644,6 +1660,21 @@ pg_stat_get_xact_tuples_hot_updated(PG_FUNCTION_ARGS)
 }
 
 Datum
+pg_stat_get_xact_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_TableStatus *tabentry;
+
+	if ((tabentry = find_tabstat_entry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->t_counts.t_tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+Datum
 pg_stat_get_xact_blocks_fetched(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index ce55fc5..64dbaaa 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -2338,6 +2338,7 @@ RelationDestroyRelation(Relation relation, bool remember_tupdesc)
 	list_free_deep(relation->rd_fkeylist);
 	list_free(relation->rd_indexlist);
 	bms_free(relation->rd_indexattr);
+	bms_free(relation->rd_exprindexattr);
 	bms_free(relation->rd_keyattr);
 	bms_free(relation->rd_pkattr);
 	bms_free(relation->rd_idattr);
@@ -4352,6 +4353,13 @@ RelationGetIndexList(Relation relation)
 		return list_copy(relation->rd_indexlist);
 
 	/*
+	 * If the index list was invalidated, we better also invalidate the index
+	 * attribute list (which should automatically invalidate other attributes
+	 * such as primary key and replica identity)
+	 */
+	relation->rd_indexattr = NULL;
+
+	/*
 	 * We build the list we intend to return (in the caller's context) while
 	 * doing the scan.  After successfully completing the scan, we copy that
 	 * list into the relcache entry.  This avoids cache-context memory leakage
@@ -4759,15 +4767,19 @@ Bitmapset *
 RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 {
 	Bitmapset  *indexattrs;		/* indexed columns */
+	Bitmapset  *exprindexattrs;	/* indexed columns in expression/predicate
+									 indexes */
 	Bitmapset  *uindexattrs;	/* columns in unique indexes */
 	Bitmapset  *pkindexattrs;	/* columns in the primary index */
 	Bitmapset  *idindexattrs;	/* columns in the replica identity */
+	Bitmapset  *indxnotreadyattrs;	/* columns in not ready indexes */
 	List	   *indexoidlist;
 	List	   *newindexoidlist;
 	Oid			relpkindex;
 	Oid			relreplindex;
 	ListCell   *l;
 	MemoryContext oldcxt;
+	bool		supportswarm = true;	/* true if the table can be WARM updated */
 
 	/* Quick exit if we already computed the result. */
 	if (relation->rd_indexattr != NULL)
@@ -4782,6 +4794,10 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				return bms_copy(relation->rd_pkattr);
 			case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 				return bms_copy(relation->rd_idattr);
+			case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+				return bms_copy(relation->rd_exprindexattr);
+			case INDEX_ATTR_BITMAP_NOTREADY:
+				return bms_copy(relation->rd_indxnotreadyattr);
 			default:
 				elog(ERROR, "unknown attrKind %u", attrKind);
 		}
@@ -4822,9 +4838,11 @@ restart:
 	 * won't be returned at all by RelationGetIndexList.
 	 */
 	indexattrs = NULL;
+	exprindexattrs = NULL;
 	uindexattrs = NULL;
 	pkindexattrs = NULL;
 	idindexattrs = NULL;
+	indxnotreadyattrs = NULL;
 	foreach(l, indexoidlist)
 	{
 		Oid			indexOid = lfirst_oid(l);
@@ -4861,6 +4879,10 @@ restart:
 				indexattrs = bms_add_member(indexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
 
+				if (!indexInfo->ii_ReadyForInserts)
+					indxnotreadyattrs = bms_add_member(indxnotreadyattrs,
+							   attrnum - FirstLowInvalidHeapAttributeNumber);
+
 				if (isKey)
 					uindexattrs = bms_add_member(uindexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
@@ -4876,10 +4898,29 @@ restart:
 		}
 
 		/* Collect all attributes used in expressions, too */
-		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &exprindexattrs);
 
 		/* Collect all attributes in the index predicate, too */
-		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &exprindexattrs);
+
+		/*
+		 * indexattrs should include attributes referenced in index expressions
+		 * and predicates too.
+		 */
+		indexattrs = bms_add_members(indexattrs, exprindexattrs);
+
+		if (!indexInfo->ii_ReadyForInserts)
+			indxnotreadyattrs = bms_add_members(indxnotreadyattrs,
+					exprindexattrs);
+
+		/*
+		 * Check if the index has an amrecheck method defined. If it does not,
+		 * the index does not support WARM updates, and WARM must be disabled
+		 * for the whole table.
+		 */
+		if (!indexDesc->rd_amroutine->amrecheck)
+			supportswarm = false;
+
 
 		index_close(indexDesc, AccessShareLock);
 	}
@@ -4912,15 +4953,22 @@ restart:
 		goto restart;
 	}
 
+	/* Remember if the table can do WARM updates */
+	relation->rd_supportswarm = supportswarm;
+
 	/* Don't leak the old values of these bitmaps, if any */
 	bms_free(relation->rd_indexattr);
 	relation->rd_indexattr = NULL;
+	bms_free(relation->rd_exprindexattr);
+	relation->rd_exprindexattr = NULL;
 	bms_free(relation->rd_keyattr);
 	relation->rd_keyattr = NULL;
 	bms_free(relation->rd_pkattr);
 	relation->rd_pkattr = NULL;
 	bms_free(relation->rd_idattr);
 	relation->rd_idattr = NULL;
+	bms_free(relation->rd_indxnotreadyattr);
+	relation->rd_indxnotreadyattr = NULL;
 
 	/*
 	 * Now save copies of the bitmaps in the relcache entry.  We intentionally
@@ -4933,7 +4981,9 @@ restart:
 	relation->rd_keyattr = bms_copy(uindexattrs);
 	relation->rd_pkattr = bms_copy(pkindexattrs);
 	relation->rd_idattr = bms_copy(idindexattrs);
-	relation->rd_indexattr = bms_copy(indexattrs);
+	relation->rd_exprindexattr = bms_copy(exprindexattrs);
+	relation->rd_indexattr = bms_copy(bms_union(indexattrs, exprindexattrs));
+	relation->rd_indxnotreadyattr = bms_copy(indxnotreadyattrs);
 	MemoryContextSwitchTo(oldcxt);
 
 	/* We return our original working copy for caller to play with */
@@ -4947,6 +4997,10 @@ restart:
 			return bms_copy(relation->rd_pkattr);
 		case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 			return idindexattrs;
+		case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+			return exprindexattrs;
+		case INDEX_ATTR_BITMAP_NOTREADY:
+			return indxnotreadyattrs;
 		default:
 			elog(ERROR, "unknown attrKind %u", attrKind);
 			return NULL;
@@ -5559,6 +5613,7 @@ load_relcache_init_file(bool shared)
 		rel->rd_keyattr = NULL;
 		rel->rd_pkattr = NULL;
 		rel->rd_idattr = NULL;
+		rel->rd_indxnotreadyattr = NULL;
 		rel->rd_pubactions = NULL;
 		rel->rd_createSubid = InvalidSubTransactionId;
 		rel->rd_newRelfilenodeSubid = InvalidSubTransactionId;
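The relcache hunks above split the indexed-attribute bitmap into a plain part and an expression/predicate part, and record whether every index AM provides `amrecheck`. As a rough sketch of how an update path could use those bitmaps to choose between HOT, WARM, and a regular (cold) update, here is a simplified standalone model. A plain `uint32_t` mask stands in for PostgreSQL's `Bitmapset`, and the function and type names (`classify_update`, `RelIndexAttrs`) are hypothetical; the decision rules are an illustration of the design discussed in this thread, not the patch's exact logic:

```c
#include <stdbool.h>
#include <stdint.h>

typedef enum { UPDATE_HOT, UPDATE_WARM, UPDATE_COLD } UpdateKind;

/* Simplified stand-ins for the relcache bitmaps: bit i represents
 * attribute i (PostgreSQL uses Bitmapset; a uint32_t mask is enough here). */
typedef struct
{
	uint32_t	indexattrs;		/* all indexed columns (rd_indexattr) */
	uint32_t	exprindexattrs;	/* columns in expression/predicate indexes */
	bool		supportswarm;	/* every index AM provides amrecheck */
} RelIndexAttrs;

static UpdateKind
classify_update(const RelIndexAttrs *rel, uint32_t modified)
{
	if ((modified & rel->indexattrs) == 0)
		return UPDATE_HOT;		/* no indexed column changed: plain HOT update */
	if (!rel->supportswarm)
		return UPDATE_COLD;		/* some index AM cannot recheck WARM tuples */
	if (modified & rel->exprindexattrs)
		return UPDATE_COLD;		/* expression/predicate indexes: play it safe */
	return UPDATE_WARM;			/* insert new entries only in affected indexes */
}
```

The point of keeping `rd_exprindexattr` separate is visible in the third test: an update touching only an expression-index column falls back to a cold update rather than attempting WARM.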
diff --git a/src/backend/utils/time/combocid.c b/src/backend/utils/time/combocid.c
index baff998..6a2e2f2 100644
--- a/src/backend/utils/time/combocid.c
+++ b/src/backend/utils/time/combocid.c
@@ -106,7 +106,7 @@ HeapTupleHeaderGetCmin(HeapTupleHeader tup)
 {
 	CommandId	cid = HeapTupleHeaderGetRawCommandId(tup);
 
-	Assert(!(tup->t_infomask & HEAP_MOVED));
+	Assert(!(HeapTupleHeaderIsMoved(tup)));
 	Assert(TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXmin(tup)));
 
 	if (tup->t_infomask & HEAP_COMBOCID)
@@ -120,7 +120,7 @@ HeapTupleHeaderGetCmax(HeapTupleHeader tup)
 {
 	CommandId	cid = HeapTupleHeaderGetRawCommandId(tup);
 
-	Assert(!(tup->t_infomask & HEAP_MOVED));
+	Assert(!(HeapTupleHeaderIsMoved(tup)));
 
 	/*
 	 * Because GetUpdateXid() performs memory allocations if xmax is a
diff --git a/src/backend/utils/time/tqual.c b/src/backend/utils/time/tqual.c
index 519f3b6..e54d0df 100644
--- a/src/backend/utils/time/tqual.c
+++ b/src/backend/utils/time/tqual.c
@@ -186,7 +186,7 @@ HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer)
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -205,7 +205,7 @@ HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer)
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -377,7 +377,7 @@ HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -396,7 +396,7 @@ HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -471,7 +471,7 @@ HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid,
 			return HeapTupleInvisible;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -490,7 +490,7 @@ HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -753,7 +753,7 @@ HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -772,7 +772,7 @@ HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -974,7 +974,7 @@ HeapTupleSatisfiesMVCC(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -993,7 +993,7 @@ HeapTupleSatisfiesMVCC(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -1180,7 +1180,7 @@ HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin,
 		if (HeapTupleHeaderXminInvalid(tuple))
 			return HEAPTUPLE_DEAD;
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_OFF)
+		else if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -1198,7 +1198,7 @@ HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin,
 						InvalidTransactionId);
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
diff --git a/src/include/access/amapi.h b/src/include/access/amapi.h
index f919cf8..8b7af1e 100644
--- a/src/include/access/amapi.h
+++ b/src/include/access/amapi.h
@@ -13,6 +13,7 @@
 #define AMAPI_H
 
 #include "access/genam.h"
+#include "access/itup.h"
 
 /*
  * We don't wish to include planner header files here, since most of an index
@@ -74,6 +75,14 @@ typedef bool (*aminsert_function) (Relation indexRelation,
 											   Relation heapRelation,
 											   IndexUniqueCheck checkUnique,
 											   struct IndexInfo *indexInfo);
+/* insert this WARM tuple */
+typedef bool (*amwarminsert_function) (Relation indexRelation,
+											   Datum *values,
+											   bool *isnull,
+											   ItemPointer heap_tid,
+											   Relation heapRelation,
+											   IndexUniqueCheck checkUnique,
+											   struct IndexInfo *indexInfo);
 
 /* bulk delete */
 typedef IndexBulkDeleteResult *(*ambulkdelete_function) (IndexVacuumInfo *info,
@@ -152,6 +161,11 @@ typedef void (*aminitparallelscan_function) (void *target);
 /* (re)start parallel index scan */
 typedef void (*amparallelrescan_function) (IndexScanDesc scan);
 
+/* recheck index tuple and heap tuple match */
+typedef bool (*amrecheck_function) (Relation indexRel,
+		struct IndexInfo *indexInfo, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+
 /*
  * API struct for an index AM.  Note this must be stored in a single palloc'd
  * chunk of memory.
@@ -198,6 +212,7 @@ typedef struct IndexAmRoutine
 	ambuild_function ambuild;
 	ambuildempty_function ambuildempty;
 	aminsert_function aminsert;
+	amwarminsert_function amwarminsert;
 	ambulkdelete_function ambulkdelete;
 	amvacuumcleanup_function amvacuumcleanup;
 	amcanreturn_function amcanreturn;	/* can be NULL */
@@ -217,6 +232,9 @@ typedef struct IndexAmRoutine
 	amestimateparallelscan_function amestimateparallelscan;		/* can be NULL */
 	aminitparallelscan_function aminitparallelscan;		/* can be NULL */
 	amparallelrescan_function amparallelrescan; /* can be NULL */
+
+	/* interface function to support WARM */
+	amrecheck_function amrecheck;		/* can be NULL */
 } IndexAmRoutine;
 
 
diff --git a/src/include/access/genam.h b/src/include/access/genam.h
index f467b18..965be45 100644
--- a/src/include/access/genam.h
+++ b/src/include/access/genam.h
@@ -75,12 +75,29 @@ typedef struct IndexBulkDeleteResult
 	bool		estimated_count;	/* num_index_tuples is an estimate */
 	double		num_index_tuples;		/* tuples remaining */
 	double		tuples_removed; /* # removed during vacuum operation */
+	double		num_warm_pointers;	/* # WARM pointers found */
+	double		num_clear_pointers;	/* # CLEAR pointers found */
+	double		pointers_cleared;	/* # WARM pointers cleared */
+	double		warm_pointers_removed;	/* # WARM pointers removed */
+	double		clear_pointers_removed;	/* # CLEAR pointers removed */
 	BlockNumber pages_deleted;	/* # unused pages in index */
 	BlockNumber pages_free;		/* # pages available for reuse */
 } IndexBulkDeleteResult;
 
+/*
+ * IndexBulkDeleteCallback should return one of the following
+ */
+typedef enum IndexBulkDeleteCallbackResult
+{
+	IBDCR_KEEP,			/* index tuple should be preserved */
+	IBDCR_DELETE,		/* index tuple should be deleted */
+	IBDCR_CLEAR_WARM	/* index tuple should be cleared of WARM bit */
+} IndexBulkDeleteCallbackResult;
+
 /* Typedef for callback function to determine if a tuple is bulk-deletable */
-typedef bool (*IndexBulkDeleteCallback) (ItemPointer itemptr, void *state);
+typedef IndexBulkDeleteCallbackResult (*IndexBulkDeleteCallback) (
+										 ItemPointer itemptr,
+										 bool is_warm, void *state);
 
 /* struct definitions appear in relscan.h */
 typedef struct IndexScanDescData *IndexScanDesc;
@@ -135,7 +152,8 @@ extern bool index_insert(Relation indexRelation,
 			 ItemPointer heap_t_ctid,
 			 Relation heapRelation,
 			 IndexUniqueCheck checkUnique,
-			 struct IndexInfo *indexInfo);
+			 struct IndexInfo *indexInfo,
+			 bool warm_update);
 
 extern IndexScanDesc index_beginscan(Relation heapRelation,
 				Relation indexRelation,
diff --git a/src/include/access/hash.h b/src/include/access/hash.h
index eb1df57..f2094e3 100644
--- a/src/include/access/hash.h
+++ b/src/include/access/hash.h
@@ -281,6 +281,11 @@ typedef HashMetaPageData *HashMetaPage;
 #define HASHPROC		1
 #define HASHNProcs		1
 
+/*
+ * Flags overloaded on t_tid.ip_posid field. They are managed by
+ * ItemPointerSetFlags and corresponding routines.
+ */
+#define HASH_INDEX_WARM_POINTER	0x01
 
 /* public routines */
 
@@ -291,6 +296,10 @@ extern bool hashinsert(Relation rel, Datum *values, bool *isnull,
 		   ItemPointer ht_ctid, Relation heapRel,
 		   IndexUniqueCheck checkUnique,
 		   struct IndexInfo *indexInfo);
+extern bool hashwarminsert(Relation rel, Datum *values, bool *isnull,
+		   ItemPointer ht_ctid, Relation heapRel,
+		   IndexUniqueCheck checkUnique,
+		   struct IndexInfo *indexInfo);
 extern bool hashgettuple(IndexScanDesc scan, ScanDirection dir);
 extern int64 hashgetbitmap(IndexScanDesc scan, TIDBitmap *tbm);
 extern IndexScanDesc hashbeginscan(Relation rel, int nkeys, int norderbys);
@@ -360,6 +369,8 @@ extern void _hash_expandtable(Relation rel, Buffer metabuf);
 extern void _hash_finish_split(Relation rel, Buffer metabuf, Buffer obuf,
 				   Bucket obucket, uint32 maxbucket, uint32 highmask,
 				   uint32 lowmask);
+extern void _hash_clear_items(Page page, OffsetNumber *clearitemnos,
+				   uint16 nclearitems);
 
 /* hashsearch.c */
 extern bool _hash_next(IndexScanDesc scan, ScanDirection dir);
@@ -404,4 +415,8 @@ extern void hashbucketcleanup(Relation rel, Bucket cur_bucket,
 				  bool bucket_has_garbage,
 				  IndexBulkDeleteCallback callback, void *callback_state);
 
+/* hash.c */
+extern bool hashrecheck(Relation indexRel, struct IndexInfo *indexInfo,
+		IndexTuple indexTuple, Relation heapRel, HeapTuple heapTuple);
+
 #endif   /* HASH_H */
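The comment above hash.h's `HASH_INDEX_WARM_POINTER` (and nbtree.h's matching `BTREE_INDEX_WARM_POINTER`) says the flag is overloaded on `t_tid.ip_posid`. The idea is that valid heap offsets never come close to filling the 16-bit field, so its top bits are free to carry per-pointer state. The sketch below is a hypothetical model of that technique only: the helper names, the mask, and the 13-bit split are illustrative assumptions, not the patch's `ItemPointerSetFlags` implementation:

```c
#include <stdint.h>

typedef uint16_t OffsetNumber;

/* Heap page offsets are far below 2^13 for any supported block size, so
 * (in this model) the top three bits of ip_posid can hold flags such as
 * the WARM-pointer bit (0x01, mirroring HASH_INDEX_WARM_POINTER). */
#define POSID_OFFSET_MASK	0x1FFFu
#define POSID_FLAG_SHIFT	13

static OffsetNumber
posid_set_flags(OffsetNumber posid, uint16_t flags)
{
	return (OffsetNumber) ((posid & POSID_OFFSET_MASK) |
						   (uint16_t) (flags << POSID_FLAG_SHIFT));
}

static uint16_t
posid_get_flags(OffsetNumber posid)
{
	return (uint16_t) (posid >> POSID_FLAG_SHIFT);
}

static OffsetNumber
posid_get_offset(OffsetNumber posid)
{
	return (OffsetNumber) (posid & POSID_OFFSET_MASK);
}
```

Packing the flag into the existing TID keeps index tuples the same size, which is why the patch can mark pointers as WARM without an on-disk format change to the index AMs.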
diff --git a/src/include/access/hash_xlog.h b/src/include/access/hash_xlog.h
index dfd9237..0549a5a 100644
--- a/src/include/access/hash_xlog.h
+++ b/src/include/access/hash_xlog.h
@@ -199,9 +199,10 @@ typedef struct xl_hash_delete
 {
 	bool		is_primary_bucket_page; /* TRUE if the operation is for
 										 * primary bucket page */
+	uint16		nclearitems;			/* # of items to clear of WARM bits */
 }	xl_hash_delete;
 
-#define SizeOfHashDelete	(offsetof(xl_hash_delete, is_primary_bucket_page) + sizeof(bool))
+#define SizeOfHashDelete	(offsetof(xl_hash_delete, nclearitems) + sizeof(uint16))
 
 /*
  * This is what we need for metapage update operation.
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 5540e12..2217af9 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -72,6 +72,20 @@ typedef struct HeapUpdateFailureData
 	CommandId	cmax;
 } HeapUpdateFailureData;
 
+typedef int HeapCheckWarmChainStatus;
+
+#define HCWC_CLEAR_TUPLE		0x0001
+#define	HCWC_WARM_TUPLE			0x0002
+#define HCWC_WARM_UPDATED_TUPLE	0x0004
+
+#define HCWC_IS_MIXED(status) \
+	(((status) & (HCWC_CLEAR_TUPLE | HCWC_WARM_TUPLE)) == (HCWC_CLEAR_TUPLE | HCWC_WARM_TUPLE))
+#define HCWC_IS_ALL_WARM(status) \
+	(((status) & HCWC_CLEAR_TUPLE) == 0)
+#define HCWC_IS_ALL_CLEAR(status) \
+	(((status) & HCWC_WARM_TUPLE) == 0)
+#define HCWC_IS_WARM_UPDATED(status) \
+	(((status) & HCWC_WARM_UPDATED_TUPLE) != 0)
 
 /* ----------------
  *		function prototypes for heap access method
@@ -137,9 +151,10 @@ extern bool heap_fetch(Relation relation, Snapshot snapshot,
 		   Relation stats_relation);
 extern bool heap_hot_search_buffer(ItemPointer tid, Relation relation,
 					   Buffer buffer, Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call);
+					   bool *all_dead, bool first_call, bool *recheck);
 extern bool heap_hot_search(ItemPointer tid, Relation relation,
-				Snapshot snapshot, bool *all_dead);
+				Snapshot snapshot, bool *all_dead,
+				bool *recheck, Buffer *buffer, HeapTuple heapTuple);
 
 extern void heap_get_latest_tid(Relation relation, Snapshot snapshot,
 					ItemPointer tid);
@@ -161,7 +176,8 @@ extern void heap_abort_speculative(Relation relation, HeapTuple tuple);
 extern HTSU_Result heap_update(Relation relation, ItemPointer otid,
 			HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode);
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update);
 extern HTSU_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
 				CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
 				bool follow_update,
@@ -176,10 +192,16 @@ extern bool heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple);
 extern Oid	simple_heap_insert(Relation relation, HeapTuple tup);
 extern void simple_heap_delete(Relation relation, ItemPointer tid);
 extern void simple_heap_update(Relation relation, ItemPointer otid,
-				   HeapTuple tup);
+				   HeapTuple tup,
+				   Bitmapset **modified_attrs,
+				   bool *warm_update);
 
 extern void heap_sync(Relation relation);
 extern void heap_update_snapshot(HeapScanDesc scan, Snapshot snapshot);
+extern HeapCheckWarmChainStatus heap_check_warm_chain(Page dp,
+				   ItemPointer tid, bool stop_at_warm);
+extern int heap_clear_warm_chain(Page dp, ItemPointer tid,
+				   OffsetNumber *cleared_offnums);
 
 /* in heap/pruneheap.c */
 extern void heap_page_prune_opt(Relation relation, Buffer buffer);
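The `HeapCheckWarmChainStatus` flags in the heapam.h hunk are meant to be OR-ed together while walking a chain, so that one pass yields "all clear", "all WARM", or "mixed". A standalone model of that accumulation follows; `check_chain` and the flag walk are hypothetical stand-ins for what `heap_check_warm_chain()` would do while following ctid links, and `HCWC_IS_MIXED` here uses the both-bits-set reading (a chain is mixed only when it contains both kinds of tuples):

```c
typedef int HeapCheckWarmChainStatus;

/* Same flag values as the heapam.h hunk above. */
#define HCWC_CLEAR_TUPLE		0x0001
#define HCWC_WARM_TUPLE			0x0002
#define HCWC_WARM_UPDATED_TUPLE	0x0004

/* A chain is "mixed" when it contains both clear and WARM tuples. */
#define HCWC_IS_MIXED(s) \
	(((s) & (HCWC_CLEAR_TUPLE | HCWC_WARM_TUPLE)) == \
	 (HCWC_CLEAR_TUPLE | HCWC_WARM_TUPLE))
#define HCWC_IS_ALL_WARM(s)		(((s) & HCWC_CLEAR_TUPLE) == 0)
#define HCWC_IS_ALL_CLEAR(s)	(((s) & HCWC_WARM_TUPLE) == 0)

/* Accumulate per-tuple flags over a (simulated) chain. */
static HeapCheckWarmChainStatus
check_chain(const int *tuple_flags, int ntuples)
{
	HeapCheckWarmChainStatus status = 0;
	int			i;

	for (i = 0; i < ntuples; i++)
		status |= tuple_flags[i];
	return status;
}
```

Because the flags are independent bits, a single OR over the chain is enough; no second pass is needed to distinguish the three outcomes.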
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index e6019d5..66fd0ea 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -32,7 +32,7 @@
 #define XLOG_HEAP_INSERT		0x00
 #define XLOG_HEAP_DELETE		0x10
 #define XLOG_HEAP_UPDATE		0x20
-/* 0x030 is free, was XLOG_HEAP_MOVE */
+#define XLOG_HEAP_MULTI_INSERT	0x30
 #define XLOG_HEAP_HOT_UPDATE	0x40
 #define XLOG_HEAP_CONFIRM		0x50
 #define XLOG_HEAP_LOCK			0x60
@@ -47,18 +47,23 @@
 /*
  * We ran out of opcodes, so heapam.c now has a second RmgrId.  These opcodes
  * are associated with RM_HEAP2_ID, but are not logically different from
- * the ones above associated with RM_HEAP_ID.  XLOG_HEAP_OPMASK applies to
- * these, too.
+ * the ones above associated with RM_HEAP_ID.
+ *
+ * In PG 10, we moved XLOG_HEAP2_MULTI_INSERT to RM_HEAP_ID. That frees up the
+ * 0x80 bit in RM_HEAP2_ID, potentially making room for another 8 opcodes in
+ * RM_HEAP2_ID.
  */
 #define XLOG_HEAP2_REWRITE		0x00
 #define XLOG_HEAP2_CLEAN		0x10
 #define XLOG_HEAP2_FREEZE_PAGE	0x20
 #define XLOG_HEAP2_CLEANUP_INFO 0x30
 #define XLOG_HEAP2_VISIBLE		0x40
-#define XLOG_HEAP2_MULTI_INSERT 0x50
+#define XLOG_HEAP2_WARMCLEAR	0x50
 #define XLOG_HEAP2_LOCK_UPDATED 0x60
 #define XLOG_HEAP2_NEW_CID		0x70
 
+#define XLOG_HEAP2_OPMASK		0x70
+
 /*
  * xl_heap_insert/xl_heap_multi_insert flag values, 8 bits are available.
  */
@@ -80,6 +85,7 @@
 #define XLH_UPDATE_CONTAINS_NEW_TUPLE			(1<<4)
 #define XLH_UPDATE_PREFIX_FROM_OLD				(1<<5)
 #define XLH_UPDATE_SUFFIX_FROM_OLD				(1<<6)
+#define XLH_UPDATE_WARM_UPDATE					(1<<7)
 
 /* convenience macro for checking whether any form of old tuple was logged */
 #define XLH_UPDATE_CONTAINS_OLD						\
@@ -225,6 +231,14 @@ typedef struct xl_heap_clean
 
 #define SizeOfHeapClean (offsetof(xl_heap_clean, ndead) + sizeof(uint16))
 
+typedef struct xl_heap_warmclear
+{
+	uint16		ncleared;
+	/* OFFSET NUMBERS are in the block reference 0 */
+} xl_heap_warmclear;
+
+#define SizeOfHeapWarmClear (offsetof(xl_heap_warmclear, ncleared) + sizeof(uint16))
+
 /*
  * Cleanup_info is required in some cases during a lazy VACUUM.
  * Used for reporting the results of HeapTupleHeaderAdvanceLatestRemovedXid()
@@ -388,6 +402,8 @@ extern XLogRecPtr log_heap_clean(Relation reln, Buffer buffer,
 			   OffsetNumber *nowdead, int ndead,
 			   OffsetNumber *nowunused, int nunused,
 			   TransactionId latestRemovedXid);
+extern XLogRecPtr log_heap_warmclear(Relation reln, Buffer buffer,
+			   OffsetNumber *cleared, int ncleared);
 extern XLogRecPtr log_heap_freeze(Relation reln, Buffer buffer,
 				TransactionId cutoff_xid, xl_heap_freeze_tuple *tuples,
 				int ntuples);
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 4d614b7..bcefba6 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -201,6 +201,21 @@ struct HeapTupleHeaderData
 										 * upgrade support */
 #define HEAP_MOVED (HEAP_MOVED_OFF | HEAP_MOVED_IN)
 
+/*
+ * A WARM chain usually consists of two parts, separated by the WARM update.
+ * Each part is a HOT chain in itself, i.e. all indexed columns have the same
+ * value within it. We need a mechanism to identify which part a tuple belongs
+ * to. Checking HeapTupleHeaderIsWarmUpdated() alone is not enough, because
+ * during a WARM update both the old and the new tuple are marked as
+ * WARM-updated.
+ *
+ * So we need another infomask bit. We reuse the infomask bit that was
+ * earlier used by old-style VACUUM FULL. This is safe because the
+ * HEAP_WARM_TUPLE flag is only ever set along with HEAP_WARM_UPDATED. So if
+ * both HEAP_WARM_TUPLE and HEAP_WARM_UPDATED are set, we know the tuple
+ * belongs to the second part of the WARM chain.
+ */
+#define HEAP_WARM_TUPLE			0x4000
 #define HEAP_XACT_MASK			0xFFF0	/* visibility-related bits */
 
 /*
@@ -260,7 +275,11 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x0800 are available */
+#define HEAP_WARM_UPDATED		0x0800	/*
+										 * This or a prior version of this
+										 * tuple in the current HOT chain was
+										 * once WARM updated
+										 */
 #define HEAP_LATEST_TUPLE		0x1000	/*
 										 * This is the last tuple in chain and
 										 * ip_posid points to the root line
@@ -271,7 +290,7 @@ struct HeapTupleHeaderData
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF800	/* visibility-related bits */
 
 
 /*
@@ -396,7 +415,7 @@ struct HeapTupleHeaderData
 /* SetCmin is reasonably simple since we never need a combo CID */
 #define HeapTupleHeaderSetCmin(tup, cid) \
 do { \
-	Assert(!((tup)->t_infomask & HEAP_MOVED)); \
+	Assert(!HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_cid = (cid); \
 	(tup)->t_infomask &= ~HEAP_COMBOCID; \
 } while (0)
@@ -404,7 +423,7 @@ do { \
 /* SetCmax must be used after HeapTupleHeaderAdjustCmax; see combocid.c */
 #define HeapTupleHeaderSetCmax(tup, cid, iscombo) \
 do { \
-	Assert(!((tup)->t_infomask & HEAP_MOVED)); \
+	Assert(!HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_cid = (cid); \
 	if (iscombo) \
 		(tup)->t_infomask |= HEAP_COMBOCID; \
@@ -414,7 +433,7 @@ do { \
 
 #define HeapTupleHeaderGetXvac(tup) \
 ( \
-	((tup)->t_infomask & HEAP_MOVED) ? \
+	HeapTupleHeaderIsMoved(tup) ? \
 		(tup)->t_choice.t_heap.t_field3.t_xvac \
 	: \
 		InvalidTransactionId \
@@ -422,7 +441,7 @@ do { \
 
 #define HeapTupleHeaderSetXvac(tup, xid) \
 do { \
-	Assert((tup)->t_infomask & HEAP_MOVED); \
+	Assert(HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_xvac = (xid); \
 } while (0)
 
@@ -510,6 +529,21 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+#define HeapTupleHeaderSetWarmUpdated(tup) \
+do { \
+	(tup)->t_infomask2 |= HEAP_WARM_UPDATED; \
+} while (0)
+
+#define HeapTupleHeaderClearWarmUpdated(tup) \
+do { \
+	(tup)->t_infomask2 &= ~HEAP_WARM_UPDATED; \
+} while (0)
+
+#define HeapTupleHeaderIsWarmUpdated(tup) \
+( \
+  ((tup)->t_infomask2 & HEAP_WARM_UPDATED) != 0 \
+)
+
 /*
  * Mark this as the last tuple in the HOT chain. Before PG v10 we used to store
  * the TID of the tuple itself in t_ctid field to mark the end of the chain.
@@ -635,6 +669,58 @@ do { \
 )
 
 /*
+ * Macros to check if a tuple was moved off/in by old-style VACUUM FULL from
+ * the pre-9.0 era. Such tuples must not have the HEAP_WARM_TUPLE flag set.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderIsMovedOff(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED_OFF) \
+)
+
+#define HeapTupleHeaderIsMovedIn(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED_IN) \
+)
+
+#define HeapTupleHeaderIsMoved(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED) \
+)
+
+/*
+ * Check if tuple belongs to the second part of the WARM chain.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderIsWarm(tuple) \
+( \
+	HeapTupleHeaderIsWarmUpdated(tuple) && \
+	(((tuple)->t_infomask & HEAP_WARM_TUPLE) != 0) \
+)
+
+/*
+ * Mark tuple as a member of the second part of the chain. Must only be done on
+ * a tuple which is already marked as WARM-updated.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderSetWarm(tuple) \
+( \
+	AssertMacro(HeapTupleHeaderIsWarmUpdated(tuple)), \
+	(tuple)->t_infomask |= HEAP_WARM_TUPLE \
+)
+
+#define HeapTupleHeaderClearWarm(tuple) \
+( \
+	(tuple)->t_infomask &= ~HEAP_WARM_TUPLE \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
@@ -785,6 +871,24 @@ struct MinimalTupleData
 #define HeapTupleClearHeapOnly(tuple) \
 		HeapTupleHeaderClearHeapOnly((tuple)->t_data)
 
+#define HeapTupleIsWarmUpdated(tuple) \
+		HeapTupleHeaderIsWarmUpdated((tuple)->t_data)
+
+#define HeapTupleSetWarmUpdated(tuple) \
+		HeapTupleHeaderSetWarmUpdated((tuple)->t_data)
+
+#define HeapTupleClearWarmUpdated(tuple) \
+		HeapTupleHeaderClearWarmUpdated((tuple)->t_data)
+
+#define HeapTupleIsWarm(tuple) \
+		HeapTupleHeaderIsWarm((tuple)->t_data)
+
+#define HeapTupleSetWarm(tuple) \
+		HeapTupleHeaderSetWarm((tuple)->t_data)
+
+#define HeapTupleClearWarm(tuple) \
+		HeapTupleHeaderClearWarm((tuple)->t_data)
+
 #define HeapTupleGetOid(tuple) \
 		HeapTupleHeaderGetOid((tuple)->t_data)
 
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index f9304db..163180d 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -427,6 +427,12 @@ typedef BTScanOpaqueData *BTScanOpaque;
 #define SK_BT_NULLS_FIRST	(INDOPTION_NULLS_FIRST << SK_BT_INDOPTION_SHIFT)
 
 /*
+ * Flags overloaded on t_tid.ip_posid field. They are managed by
+ * ItemPointerSetFlags and corresponding routines.
+ */
+#define BTREE_INDEX_WARM_POINTER	0x01
+
+/*
  * external entry points for btree, in nbtree.c
  */
 extern IndexBuildResult *btbuild(Relation heap, Relation index,
@@ -436,6 +442,10 @@ extern bool btinsert(Relation rel, Datum *values, bool *isnull,
 		 ItemPointer ht_ctid, Relation heapRel,
 		 IndexUniqueCheck checkUnique,
 		 struct IndexInfo *indexInfo);
+extern bool btwarminsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 struct IndexInfo *indexInfo);
 extern IndexScanDesc btbeginscan(Relation rel, int nkeys, int norderbys);
 extern Size btestimateparallelscan(void);
 extern void btinitparallelscan(void *target);
@@ -487,10 +497,12 @@ extern void _bt_pageinit(Page page, Size size);
 extern bool _bt_page_recyclable(Page page);
 extern void _bt_delitems_delete(Relation rel, Buffer buf,
 					OffsetNumber *itemnos, int nitems, Relation heapRel);
-extern void _bt_delitems_vacuum(Relation rel, Buffer buf,
-					OffsetNumber *itemnos, int nitems,
-					BlockNumber lastBlockVacuumed);
+extern void _bt_handleitems_vacuum(Relation rel, Buffer buf,
+					OffsetNumber *delitemnos, int ndelitems,
+					OffsetNumber *clearitemnos, int nclearitems);
 extern int	_bt_pagedel(Relation rel, Buffer buf);
+extern void	_bt_clear_items(Page page, OffsetNumber *clearitemnos,
+					uint16 nclearitems);
 
 /*
  * prototypes for functions in nbtsearch.c
@@ -537,6 +549,9 @@ extern bytea *btoptions(Datum reloptions, bool validate);
 extern bool btproperty(Oid index_oid, int attno,
 		   IndexAMProperty prop, const char *propname,
 		   bool *res, bool *isnull);
+extern bool btrecheck(Relation indexRel, struct IndexInfo *indexInfo,
+		IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * prototypes for functions in nbtvalidate.c
diff --git a/src/include/access/nbtxlog.h b/src/include/access/nbtxlog.h
index d6a3085..7efd0d7 100644
--- a/src/include/access/nbtxlog.h
+++ b/src/include/access/nbtxlog.h
@@ -142,34 +142,20 @@ typedef struct xl_btree_reuse_page
 /*
  * This is what we need to know about vacuum of individual leaf index tuples.
  * The WAL record can represent deletion of any number of index tuples on a
- * single index page when executed by VACUUM.
- *
- * For MVCC scans, lastBlockVacuumed will be set to InvalidBlockNumber.
- * For a non-MVCC index scans there is an additional correctness requirement
- * for applying these changes during recovery, which is that we must do one
- * of these two things for every block in the index:
- *		* lock the block for cleanup and apply any required changes
- *		* EnsureBlockUnpinned()
- * The purpose of this is to ensure that no index scans started before we
- * finish scanning the index are still running by the time we begin to remove
- * heap tuples.
- *
- * Any changes to any one block are registered on just one WAL record. All
- * blocks that we need to run EnsureBlockUnpinned() are listed as a block range
- * starting from the last block vacuumed through until this one. Individual
- * block numbers aren't given.
+ * single index page when executed by VACUUM. It also covers index tuples
+ * whose WARM bits are cleared by VACUUM.
  *
  * Note that the *last* WAL record in any vacuum of an index is allowed to
  * have a zero length array of offsets. Earlier records must have at least one.
  */
 typedef struct xl_btree_vacuum
 {
-	BlockNumber lastBlockVacuumed;
-
-	/* TARGET OFFSET NUMBERS FOLLOW */
+	uint16		ndelitems;
+	uint16		nclearitems;
+	/* ndelitems + nclearitems TARGET OFFSET NUMBERS FOLLOW */
 } xl_btree_vacuum;
 
-#define SizeOfBtreeVacuum	(offsetof(xl_btree_vacuum, lastBlockVacuumed) + sizeof(BlockNumber))
+#define SizeOfBtreeVacuum	(offsetof(xl_btree_vacuum, nclearitems) + sizeof(uint16))
 
 /*
  * This is what we need to know about marking an empty branch for deletion.
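The reworked xl_btree_vacuum record above replaces lastBlockVacuumed with two counts followed by a single offset array. As a hedged illustration of that layout (names like split_vacuum_offsets are invented for this sketch and are not the patch's actual redo code):

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of the revised WAL record header: two counts, then a single
 * array of ndelitems + nclearitems target offset numbers. */
typedef struct xl_btree_vacuum_sketch
{
    uint16_t ndelitems;     /* index tuples deleted by VACUUM */
    uint16_t nclearitems;   /* index tuples whose WARM bits are cleared */
    /* ndelitems + nclearitems TARGET OFFSET NUMBERS FOLLOW */
} xl_btree_vacuum_sketch;

/* Locate the two logical sub-arrays inside the single offsets array
 * that follows the fixed header. */
void
split_vacuum_offsets(const xl_btree_vacuum_sketch *rec,
                     const uint16_t *offsets,
                     const uint16_t **delitems,
                     const uint16_t **clearitems)
{
    *delitems = offsets;                    /* first ndelitems entries */
    *clearitems = offsets + rec->ndelitems; /* next nclearitems entries */
}

/* Size of the variable-length payload after the fixed header. */
size_t
vacuum_offsets_size(const xl_btree_vacuum_sketch *rec)
{
    return (size_t) (rec->ndelitems + rec->nclearitems) * sizeof(uint16_t);
}
```

Storing both groups in one array keeps the record compact; the redo side only needs the two counts to recover the boundary between "delete" and "clear WARM bit" offsets.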
diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h
index 3fc726d..fa178d3 100644
--- a/src/include/access/relscan.h
+++ b/src/include/access/relscan.h
@@ -104,6 +104,9 @@ typedef struct IndexScanDescData
 	/* index access method's private state */
 	void	   *opaque;			/* access-method-specific info */
 
+	/* IndexInfo structure for this index */
+	struct IndexInfo  *indexInfo;
+
 	/*
 	 * In an index-only scan, a successful amgettuple call must fill either
 	 * xs_itup (and xs_itupdesc) or xs_hitup (and xs_hitupdesc) to provide the
@@ -119,7 +122,7 @@ typedef struct IndexScanDescData
 	HeapTupleData xs_ctup;		/* current heap tuple, if any */
 	Buffer		xs_cbuf;		/* current heap buffer in scan, if any */
 	/* NB: if xs_cbuf is not InvalidBuffer, we hold a pin on that buffer */
-	bool		xs_recheck;		/* T means scan keys must be rechecked */
+	bool		xs_recheck;		/* T means scan keys must be rechecked for each tuple */
 
 	/*
 	 * When fetching with an ordering operator, the values of the ORDER BY
diff --git a/src/include/catalog/index.h b/src/include/catalog/index.h
index 20bec90..f92ec29 100644
--- a/src/include/catalog/index.h
+++ b/src/include/catalog/index.h
@@ -89,6 +89,13 @@ extern void FormIndexDatum(IndexInfo *indexInfo,
 			   Datum *values,
 			   bool *isnull);
 
+extern void FormIndexPlainDatum(IndexInfo *indexInfo,
+			   Relation heapRel,
+			   HeapTuple heapTup,
+			   Datum *values,
+			   bool *isnull,
+			   bool *isavail);
+
 extern void index_build(Relation heapRelation,
 			Relation indexRelation,
 			IndexInfo *indexInfo,
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index 836d6ff..0ca6e22 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -2769,6 +2769,8 @@ DATA(insert OID = 1933 (  pg_stat_get_tuples_deleted	PGNSP PGUID 12 1 0 0 0 f f
 DESCR("statistics: number of tuples deleted");
 DATA(insert OID = 1972 (  pg_stat_get_tuples_hot_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated");
+DATA(insert OID = 3355 (  pg_stat_get_tuples_warm_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated");
 DATA(insert OID = 2878 (  pg_stat_get_live_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_live_tuples _null_ _null_ _null_ ));
 DESCR("statistics: number of live tuples");
 DATA(insert OID = 2879 (  pg_stat_get_dead_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_dead_tuples _null_ _null_ _null_ ));
@@ -2921,6 +2923,8 @@ DATA(insert OID = 3042 (  pg_stat_get_xact_tuples_deleted		PGNSP PGUID 12 1 0 0
 DESCR("statistics: number of tuples deleted in current transaction");
 DATA(insert OID = 3043 (  pg_stat_get_xact_tuples_hot_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated in current transaction");
+DATA(insert OID = 3356 (  pg_stat_get_xact_tuples_warm_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated in current transaction");
 DATA(insert OID = 3044 (  pg_stat_get_xact_blocks_fetched		PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_fetched _null_ _null_ _null_ ));
 DESCR("statistics: number of blocks fetched in current transaction");
 DATA(insert OID = 3045 (  pg_stat_get_xact_blocks_hit			PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_hit _null_ _null_ _null_ ));
diff --git a/src/include/commands/progress.h b/src/include/commands/progress.h
index 9472ecc..b355b61 100644
--- a/src/include/commands/progress.h
+++ b/src/include/commands/progress.h
@@ -25,6 +25,7 @@
 #define PROGRESS_VACUUM_NUM_INDEX_VACUUMS		4
 #define PROGRESS_VACUUM_MAX_DEAD_TUPLES			5
 #define PROGRESS_VACUUM_NUM_DEAD_TUPLES			6
+#define PROGRESS_VACUUM_HEAP_BLKS_WARMCLEARED	7
 
 /* Phases of vacuum (as advertised via PROGRESS_VACUUM_PHASE) */
 #define PROGRESS_VACUUM_PHASE_SCAN_HEAP			1
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index 02dbe7b..c4495a3 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -382,6 +382,7 @@ extern void UnregisterExprContextCallback(ExprContext *econtext,
 extern void ExecOpenIndices(ResultRelInfo *resultRelInfo, bool speculative);
 extern void ExecCloseIndices(ResultRelInfo *resultRelInfo);
 extern List *ExecInsertIndexTuples(TupleTableSlot *slot, ItemPointer tupleid,
+					  ItemPointer root_tid, Bitmapset *modified_attrs,
 					  EState *estate, bool noDupErr, bool *specConflict,
 					  List *arbiterIndexes);
 extern bool ExecCheckIndexConstraints(TupleTableSlot *slot, EState *estate,
diff --git a/src/include/executor/nodeIndexscan.h b/src/include/executor/nodeIndexscan.h
index ea3f3a5..ebeec74 100644
--- a/src/include/executor/nodeIndexscan.h
+++ b/src/include/executor/nodeIndexscan.h
@@ -41,5 +41,4 @@ extern void ExecIndexEvalRuntimeKeys(ExprContext *econtext,
 extern bool ExecIndexEvalArrayKeys(ExprContext *econtext,
 					   IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 extern bool ExecIndexAdvanceArrayKeys(IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
-
 #endif   /* NODEINDEXSCAN_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index f856f60..cd09553 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -66,6 +66,7 @@ typedef struct IndexInfo
 	NodeTag		type;
 	int			ii_NumIndexAttrs;
 	AttrNumber	ii_KeyAttrNumbers[INDEX_MAX_KEYS];
+	Bitmapset  *ii_indxattrs;	/* bitmap of all columns used in this index */
 	List	   *ii_Expressions; /* list of Expr */
 	List	   *ii_ExpressionsState;	/* list of ExprState */
 	List	   *ii_Predicate;	/* list of Expr */
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index f2daf32..af8a3ba 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -105,6 +105,7 @@ typedef struct PgStat_TableCounts
 	PgStat_Counter t_tuples_updated;
 	PgStat_Counter t_tuples_deleted;
 	PgStat_Counter t_tuples_hot_updated;
+	PgStat_Counter t_tuples_warm_updated;
 	bool		t_truncated;
 
 	PgStat_Counter t_delta_live_tuples;
@@ -625,6 +626,7 @@ typedef struct PgStat_StatTabEntry
 	PgStat_Counter tuples_updated;
 	PgStat_Counter tuples_deleted;
 	PgStat_Counter tuples_hot_updated;
+	PgStat_Counter tuples_warm_updated;
 
 	PgStat_Counter n_live_tuples;
 	PgStat_Counter n_dead_tuples;
@@ -1257,7 +1259,7 @@ pgstat_report_wait_end(void)
 	(pgStatBlockWriteTime += (n))
 
 extern void pgstat_count_heap_insert(Relation rel, PgStat_Counter n);
-extern void pgstat_count_heap_update(Relation rel, bool hot);
+extern void pgstat_count_heap_update(Relation rel, bool hot, bool warm);
 extern void pgstat_count_heap_delete(Relation rel);
 extern void pgstat_count_truncate(Relation rel);
 extern void pgstat_update_heap_dead_tuples(Relation rel, int delta);
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index a617a7c..fbac7c0 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -138,9 +138,14 @@ typedef struct RelationData
 
 	/* data managed by RelationGetIndexAttrBitmap: */
 	Bitmapset  *rd_indexattr;	/* identifies columns used in indexes */
+	Bitmapset  *rd_exprindexattr; /* identifies columns used in expression or
+									 predicate indexes */
+	Bitmapset  *rd_indxnotreadyattr;	/* columns used by indexes not yet
+										   ready */
 	Bitmapset  *rd_keyattr;		/* cols that can be ref'd by foreign keys */
 	Bitmapset  *rd_pkattr;		/* cols included in primary key */
 	Bitmapset  *rd_idattr;		/* included in replica identity index */
+	bool		rd_supportswarm;/* True if the table can be WARM updated */
 
 	PublicationActions  *rd_pubactions;	/* publication actions */
 
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index da36b67..d18bd09 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -50,7 +50,9 @@ typedef enum IndexAttrBitmapKind
 	INDEX_ATTR_BITMAP_ALL,
 	INDEX_ATTR_BITMAP_KEY,
 	INDEX_ATTR_BITMAP_PRIMARY_KEY,
-	INDEX_ATTR_BITMAP_IDENTITY_KEY
+	INDEX_ATTR_BITMAP_IDENTITY_KEY,
+	INDEX_ATTR_BITMAP_EXPR_PREDICATE,
+	INDEX_ATTR_BITMAP_NOTREADY
 } IndexAttrBitmapKind;
 
 extern Bitmapset *RelationGetIndexAttrBitmap(Relation relation,
diff --git a/src/test/modules/Makefile b/src/test/modules/Makefile
index 3ce9904..347c4ce 100644
--- a/src/test/modules/Makefile
+++ b/src/test/modules/Makefile
@@ -15,6 +15,7 @@ SUBDIRS = \
 		  test_pg_dump \
 		  test_rls_hooks \
 		  test_shm_mq \
+		  warm \
 		  worker_spi
 
 all: submake-generated-headers
diff --git a/src/test/regress/expected/alter_generic.out b/src/test/regress/expected/alter_generic.out
index b01be59..37719c9 100644
--- a/src/test/regress/expected/alter_generic.out
+++ b/src/test/regress/expected/alter_generic.out
@@ -161,15 +161,15 @@ ALTER SERVER alt_fserv1 RENAME TO alt_fserv3;   -- OK
 SELECT fdwname FROM pg_foreign_data_wrapper WHERE fdwname like 'alt_fdw%';
  fdwname  
 ----------
- alt_fdw2
  alt_fdw3
+ alt_fdw2
 (2 rows)
 
 SELECT srvname FROM pg_foreign_server WHERE srvname like 'alt_fserv%';
   srvname   
 ------------
- alt_fserv2
  alt_fserv3
+ alt_fserv2
 (2 rows)
 
 --
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index bd13ae6..44c59ae 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1732,6 +1732,7 @@ pg_stat_all_tables| SELECT c.oid AS relid,
     pg_stat_get_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_tuples_deleted(c.oid) AS n_tup_del,
     pg_stat_get_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_tuples_warm_updated(c.oid) AS n_tup_warm_upd,
     pg_stat_get_live_tuples(c.oid) AS n_live_tup,
     pg_stat_get_dead_tuples(c.oid) AS n_dead_tup,
     pg_stat_get_mod_since_analyze(c.oid) AS n_mod_since_analyze,
@@ -1875,6 +1876,7 @@ pg_stat_sys_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1918,6 +1920,7 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1955,7 +1958,8 @@ pg_stat_xact_all_tables| SELECT c.oid AS relid,
     pg_stat_get_xact_tuples_inserted(c.oid) AS n_tup_ins,
     pg_stat_get_xact_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_xact_tuples_deleted(c.oid) AS n_tup_del,
-    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd
+    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_xact_tuples_warm_updated(c.oid) AS n_tup_warm_upd
    FROM ((pg_class c
      LEFT JOIN pg_index i ON ((c.oid = i.indrelid)))
      LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
@@ -1971,7 +1975,8 @@ pg_stat_xact_sys_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname = ANY (ARRAY['pg_catalog'::name, 'information_schema'::name])) OR (pg_stat_xact_all_tables.schemaname ~ '^pg_toast'::text));
 pg_stat_xact_user_functions| SELECT p.oid AS funcid,
@@ -1993,7 +1998,8 @@ pg_stat_xact_user_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname <> ALL (ARRAY['pg_catalog'::name, 'information_schema'::name])) AND (pg_stat_xact_all_tables.schemaname !~ '^pg_toast'::text));
 pg_statio_all_indexes| SELECT c.oid AS relid,
diff --git a/src/test/regress/expected/warm.out b/src/test/regress/expected/warm.out
new file mode 100644
index 0000000..6391891
--- /dev/null
+++ b/src/test/regress/expected/warm.out
@@ -0,0 +1,367 @@
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+-- This could be a HOT update since no index key is updated, but the
+-- page may not have enough free space, making it a non-HOT update
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab1
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Check if index only scan works correctly
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab1
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx1 on updtst_tab1
+   Index Cond: (b = 140001)
+(2 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab1;
+------------------
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab2
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE a = 1;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab2
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE b = 701;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+VACUUM updtst_tab2;
+EXPLAIN (costs off) SELECT b FROM updtst_tab2 WHERE b = 701;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx2 on updtst_tab2
+   Index Cond: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab2 WHERE b = 701;
+  b  
+-----
+ 701
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab2;
+------------------
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 1;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 701;
+       QUERY PLAN        
+-------------------------
+ Seq Scan on updtst_tab3
+   Filter: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 701;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+  b   
+------
+ 1421
+(1 row)
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+SET enable_seqscan = false;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    98
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 2;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 702;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx3 on updtst_tab3
+   Index Cond: (b = 702)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 702;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+  b   
+------
+ 1422
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab3;
+------------------
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+explain select * from test_warm where lower(a) = 'test';
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on test_warm  (cost=4.18..12.65 rows=4 width=64)
+   Recheck Cond: (lower(a) = 'test'::text)
+   ->  Bitmap Index Scan on test_warmindx  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (lower(a) = 'test'::text)
+(4 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+select *, ctid from test_warm where a = 'test';
+ a | b | ctid 
+---+---+------
+(0 rows)
+
+select *, ctid from test_warm where a = 'TEST';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+                                   QUERY PLAN                                    
+---------------------------------------------------------------------------------
+ Index Scan using test_warmindx on test_warm  (cost=0.15..20.22 rows=4 width=64)
+   Index Cond: (lower(a) = 'test'::text)
+(2 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+DROP TABLE test_warm;
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index ea7b5b4..7cc0d21 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -42,6 +42,8 @@ test: create_type
 test: create_table
 test: create_function_2
 
+test: warm
+
 # ----------
 # Load huge amounts of data
 # We should split the data files into single files and then
diff --git a/src/test/regress/sql/warm.sql b/src/test/regress/sql/warm.sql
new file mode 100644
index 0000000..3a078dd
--- /dev/null
+++ b/src/test/regress/sql/warm.sql
@@ -0,0 +1,170 @@
+
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+
+-- This could be a HOT update since no index key is updated, but the
+-- page may not have enough free space, making it a non-HOT update
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Check if index only scan works correctly
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab1;
+
+------------------
+
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE a = 1;
+
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE b = 701;
+
+VACUUM updtst_tab2;
+EXPLAIN (costs off) SELECT b FROM updtst_tab2 WHERE b = 701;
+SELECT b FROM updtst_tab2 WHERE b = 701;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab2;
+------------------
+
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+SELECT * FROM updtst_tab3 WHERE a = 1;
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+SET enable_seqscan = false;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+SELECT * FROM updtst_tab3 WHERE a = 2;
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab3;
+------------------
+
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where a = 'test';
+select *, ctid from test_warm where a = 'TEST';
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+DROP TABLE test_warm;
#119Robert Haas
robertmhaas@gmail.com
In reply to: Pavan Deolasee (#117)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 21, 2017 at 8:41 AM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

>> Yeah. So what's the deal with this? Is somebody working on figuring
>> out a different approach that would reduce this overhead? Are we
>> going to defer WARM to v11? Or is the intent to just ignore the 5-10%
>> slowdown on a single-column update and commit everything anyway?
>
> I think I should clarify something. The test case does a single-column
> update, but it also has columns which are very wide, has an index on many
> columns (and it updates a column early in the list). In addition, in the
> test Mithun updated all 10 million rows of the table in a single
> transaction, used an UNLOGGED table, and fsync was turned off.
>
> TBH I see many artificial scenarios here. It will be very useful if he
> can rerun the query with some of these restrictions lifted. I'm all for
> addressing whatever we can, but I am not sure this test demonstrates
> real-world usage.

That's a very fair point, but if these patches - or some of them - are
going to get committed, then these things need to get discussed. Let's
not just have nothing-nothing-nothing and then a giant unagreed code drop.

I think that very wide columns and highly indexed tables are not
particularly unrealistic, nor do I think updating all the rows is
particularly unrealistic. Sure, it's not everything, but it's
something. Now, I would agree that all of that PLUS unlogged tables
with fsync=off is not too realistic. What kind of regression would we
observe if we eliminated those last two variables?

Having said that, may be if we can do a few things to reduce the overhead.

- Check if the page has enough free space to perform a HOT/WARM update. If
not, don't look for all index keys.
- Pass bitmaps separately for each index and bail out early if we conclude
neither HOT nor WARM is possible. In this case since there is just one index
and as soon as we check the second column we know neither HOT nor WARM is
possible, we will return early. It might complicate the API a lot, but I can
give it a shot if that's what is needed to make progress.
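[Editor's note] The two bail-out ideas above can be sketched in isolation. This is a toy standalone model, not the patch's actual code: the function `decide_update_kind`, the per-column `modified` array, and the sentinel-terminated index-column descriptors are all invented here for illustration.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Toy sketch: decide whether an UPDATE can be HOT (no index key changed),
 * WARM (some but not all indexes have a changed key), or neither, bailing
 * out as early as possible.
 *
 * modified[i]    -- true if heap column i was changed by the UPDATE
 * index_cols[j]  -- column numbers used by index j, terminated by -1
 */
typedef enum { UPD_HOT, UPD_WARM, UPD_NONHOT } UpdateKind;

static UpdateKind
decide_update_kind(const bool *modified,
                   const int *const *index_cols, int nindexes,
                   size_t page_free_space, size_t new_tuple_len)
{
    bool some_changed = false;
    bool some_unchanged = false;

    /*
     * Idea 1: if the page cannot fit the new tuple version anyway, neither
     * HOT nor WARM is possible, so skip the per-index key checks entirely.
     */
    if (page_free_space < new_tuple_len)
        return UPD_NONHOT;

    /*
     * Idea 2: examine one index at a time and stop as soon as the answer
     * can no longer change.
     */
    for (int idx = 0; idx < nindexes; idx++)
    {
        bool changed = false;

        for (const int *col = index_cols[idx]; *col >= 0; col++)
        {
            if (modified[*col])
            {
                changed = true;
                break;          /* one changed key column is enough */
            }
        }

        if (changed)
            some_changed = true;
        else
            some_unchanged = true;

        /* Seen both kinds: the result is WARM whatever remains. */
        if (some_changed && some_unchanged)
            return UPD_WARM;
    }

    /* All indexes unchanged -> HOT; all changed -> neither. */
    return some_changed ? UPD_NONHOT : UPD_HOT;
}
```

With a single index whose key is updated, the loop returns after the first index, which is the early exit Pavan describes for the one-index test case.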

I think that whether the code ends up getting contorted is an
important consideration here. For example, if the first of the things
you mention can be done without making the code ugly, then I think
that would be worth doing; it's likely to help fairly often in
real-world cases. The problem with making the code contorted and
ugly, as you say that the second idea would require, is that it can
easily mask bugs.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#120Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Amit Kapila (#115)
Re: Patch: Write Amplification Reduction Method (WARM)

Amit Kapila wrote:

I think it is because heap_getattr() is not that cheap. We have
noticed a similar problem during development of the scan key push-down
work [1].

One possibility to reduce the cost of that is to use whole tuple deform
instead of repeated individual heap_getattr() calls. Since we don't
actually need *all* attrs, we can create a version of heap_deform_tuple
that takes an attribute number as argument and decodes up to that point.
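[Editor's note] To see why this helps: with variable-width attributes, locating attribute k means walking attributes 0..k-1, so n independent heap_getattr() calls cost O(n^2) overall, while one partial deform is O(n). Here is a toy model of the idea, not the real heap_deform_tuple; alignment, NULL bitmaps, and attcacheoff are all ignored:

```c
#include <stddef.h>

/*
 * Toy partial deform: compute the byte offset of each of the first
 * natts_to_deform attributes in a single left-to-right pass, caching
 * them in offsets[].  attlen[i] is the (variable) width of attribute i.
 */
static void
deform_upto(const size_t *attlen, int natts_to_deform, size_t *offsets)
{
    size_t off = 0;

    for (int i = 0; i < natts_to_deform; i++)
    {
        offsets[i] = off;       /* attribute i starts here */
        off += attlen[i];       /* skip over it to reach attribute i+1 */
    }
}
```

Once the offsets are cached, each attribute lookup is O(1), which is the benefit Alvaro is after without paying to decode attributes past the one actually needed.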

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#121Amit Kapila
amit.kapila16@gmail.com
In reply to: Robert Haas (#119)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 21, 2017 at 6:55 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Tue, Mar 21, 2017 at 8:41 AM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

Yeah. So what's the deal with this? Is somebody working on figuring
out a different approach that would reduce this overhead? Are we
going to defer WARM to v11? Or is the intent to just ignore the 5-10%
slowdown on a single column update and commit everything anyway?

I think I should clarify something. The test case does a single-column
update, but the table also has very wide columns and an index on many
columns (and the update touches a column early in the list). In addition, in
the test Mithun updated all 10 million rows of the table in a single
transaction, used an UNLOGGED table, and fsync was turned off.

TBH I see many artificial scenarios here. It will be very useful if he can
rerun the query with some of these restrictions lifted. I'm all for
addressing whatever we can, but I am not sure if this test demonstrates a
real world usage.

That's a very fair point, but if these patches - or some of them - are
going to get committed then these things need to get discussed. Let's
not just have nothing-nothing-nothing giant unagreed code drop.

I think that very wide columns and highly indexed tables are not
particularly unrealistic, nor do I think updating all the rows is
particularly unrealistic. Sure, it's not everything, but it's
something. Now, I would agree that all of that PLUS unlogged tables
with fsync=off is not too realistic. What kind of regression would we
observe if we eliminated those last two variables?

Sure, we can try that. I think we need to try it with
synchronous_commit = off, otherwise, WAL writes completely overshadows
everything.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#122Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#121)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 21, 2017 at 10:01 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:

I think that very wide columns and highly indexed tables are not
particularly unrealistic, nor do I think updating all the rows is
particularly unrealistic. Sure, it's not everything, but it's
something. Now, I would agree that all of that PLUS unlogged tables
with fsync=off is not too realistic. What kind of regression would we
observe if we eliminated those last two variables?

Sure, we can try that. I think we need to try it with
synchronous_commit = off, otherwise, WAL writes completely overshadows
everything.

synchronous_commit = off is a much more realistic scenario than fsync = off.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#123Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Robert Haas (#119)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 21, 2017 at 6:55 PM, Robert Haas <robertmhaas@gmail.com> wrote:

I think that very wide columns and highly indexed tables are not
particularly unrealistic, nor do I think updating all the rows is
particularly unrealistic.

Ok. But those who update 10M rows in a single transaction, would they
really notice 5-10% variation? I think it probably makes sense to run those
updates in smaller transactions and see if the regression is still visible
(otherwise tweaking synchronous_commit is moot anyway).

Sure, it's not everything, but it's
something. Now, I would agree that all of that PLUS unlogged tables
with fsync=off is not too realistic. What kind of regression would we
observe if we eliminated those last two variables?

Hard to say. I didn't find any regression on the machines available to me
even with the original test case that I used, which was a pretty bad case
to start with (sure, Mithun tweaked it further to create an even worse
scenario). Maybe on the kind of machines he has access to, it will show up
even with those changes.

I think that whether the code ends up getting contorted is an
important consideration here. For example, if the first of the things
you mention can be done without making the code ugly, then I think
that would be worth doing; it's likely to help fairly often in
real-world cases. The problem with making the code contorted and
ugly, as you say that the second idea would require, is that it can
easily mask bugs.

Agree. That's probably one reason why Alvaro wrote the patch to start with.
I'll give the first of those two options a try.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#124Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Robert Haas (#122)
Re: Patch: Write Amplification Reduction Method (WARM)

Robert Haas wrote:

On Tue, Mar 21, 2017 at 10:01 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:

Sure, we can try that. I think we need to try it with
synchronous_commit = off, otherwise, WAL writes completely overshadows
everything.

synchronous_commit = off is a much more realistic scenario than fsync = off.

Sure, synchronous_commit=off is a reasonable case. But I say if we lose
a few % on the case where you update only the first indexed column of a
large number of very wide, all-indexed columns, and this is only noticeable
if you don't write WAL and only if you update all the rows in the table,
then I don't see much reason for concern.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#125Robert Haas
robertmhaas@gmail.com
In reply to: Alvaro Herrera (#124)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 21, 2017 at 10:21 AM, Alvaro Herrera
<alvherre@2ndquadrant.com> wrote:

Robert Haas wrote:

On Tue, Mar 21, 2017 at 10:01 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:

Sure, we can try that. I think we need to try it with
synchronous_commit = off, otherwise, WAL writes completely overshadows
everything.

synchronous_commit = off is a much more realistic scenario than fsync = off.

Sure, synchronous_commit=off is a reasonable case. But I say if we lose
a few % on the case where you update only the first indexed column of a
large number of very wide, all-indexed columns, and this is only noticeable
if you don't write WAL and only if you update all the rows in the table,
then I don't see much reason for concern.

If the WAL writing hides the loss, then I agree that's not a big
concern. But if the loss is still visible even when WAL is written,
then I'm not so sure.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#126Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#116)
Re: Patch: Write Amplification Reduction Method (WARM)

On 2017-03-21 08:04:11 -0400, Robert Haas wrote:

On Tue, Mar 21, 2017 at 6:56 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:

Hmm, that test case isn't all that synthetic. It's just a single
column bulk update, which isn't anything all that crazy, and 5-10%
isn't nothing.

I'm kinda surprised it made that much difference, though.

I think it is because heap_getattr() is not that cheap. We have
noticed a similar problem during development of the scan key push-down
work [1].

Yeah. So what's the deal with this? Is somebody working on figuring
out a different approach that would reduce this overhead?

I think one reasonable thing would be to use slots here, and use
slot_getsomeattrs(), with a pre-computed offset, for doing the
deforming. Given that more than one place has run into the issue of
deforming cost via the heap_* routines, that seems like something we're
going to have to do. Additionally, the patches I had for JITed deforming
all integrated at the slot layer, so it'd be a good thing from that angle
as well.

Deforming all columns at once would also be a boon for the accompanying
index_getattr calls.

Greetings,

Andres Freund


#127Andres Freund
andres@anarazel.de
In reply to: Pavan Deolasee (#123)
Re: Patch: Write Amplification Reduction Method (WARM)

On 2017-03-21 19:49:07 +0530, Pavan Deolasee wrote:

On Tue, Mar 21, 2017 at 6:55 PM, Robert Haas <robertmhaas@gmail.com> wrote:

I think that very wide columns and highly indexed tables are not
particularly unrealistic, nor do I think updating all the rows is
particularly unrealistic.

Ok. But those who update 10M rows in a single transaction, would they
really notice 5-10% variation?

Yes. It's very common in ETL, and that's quite performance sensitive.

Greetings,

Andres Freund


#128Bruce Momjian
bruce@momjian.us
In reply to: Robert Haas (#119)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 21, 2017 at 09:25:49AM -0400, Robert Haas wrote:

On Tue, Mar 21, 2017 at 8:41 AM, Pavan Deolasee

TBH I see many artificial scenarios here. It will be very useful if he can
rerun the query with some of these restrictions lifted. I'm all for
addressing whatever we can, but I am not sure if this test demonstrates a
real world usage.

That's a very fair point, but if these patches - or some of them - are
going to get committed then these things need to get discussed. Let's
not just have nothing-nothing-nothing giant unagreed code drop.

First, let me say I love this feature for PG 10, along with
multi-variate statistics.

However, not to be a bummer on this, but the persistent question I have
is whether we are locking ourselves into a feature that can only do
_one_ index-change per WARM chain before a lazy vacuum is required. Are
we ever going to figure out how to do more changes per WARM chain in the
future, and is our use of so many bits for this feature going to
restrict our ability to do that in the future.

I know we have talked about it, but not recently, and if everyone else
is fine with it, I am too, but I have to ask these questions.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +


#129Robert Haas
robertmhaas@gmail.com
In reply to: Bruce Momjian (#128)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 21, 2017 at 12:49 PM, Bruce Momjian <bruce@momjian.us> wrote:

On Tue, Mar 21, 2017 at 09:25:49AM -0400, Robert Haas wrote:

On Tue, Mar 21, 2017 at 8:41 AM, Pavan Deolasee

TBH I see many artificial scenarios here. It will be very useful if he can
rerun the query with some of these restrictions lifted. I'm all for
addressing whatever we can, but I am not sure if this test demonstrates a
real world usage.

That's a very fair point, but if these patches - or some of them - are
going to get committed then these things need to get discussed. Let's
not just have nothing-nothing-nothing giant unagreed code drop.

First, let me say I love this feature for PG 10, along with
multi-variate statistics.

However, not to be a bummer on this, but the persistent question I have
is whether we are locking ourselves into a feature that can only do
_one_ index-change per WARM chain before a lazy vacuum is required. Are
we ever going to figure out how to do more changes per WARM chain in the
future, and is our use of so many bits for this feature going to
restrict our ability to do that in the future.

I know we have talked about it, but not recently, and if everyone else
is fine with it, I am too, but I have to ask these questions.

I think that's a good question. I previously expressed similar
concerns. On the one hand, it's hard to ignore the fact that, in the
cases where this wins, it already buys us a lot of performance
improvement. On the other hand, as you say (and as I said), it eats
up a lot of bits, and that limits what we can do in the future. On
the one hand, there is a saying that a bird in the hand is worth two
in the bush. On the other hand, there is also a saying that one
should not paint oneself into the corner.

I'm not sure we've had any really substantive discussion of these
issues. Pavan's response to my previous comments was basically "well,
I think it's worth it", which is entirely reasonable, because he
presumably wouldn't have written the patch that way if he thought it
sucked. But it might not be the only opinion.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#130Peter Geoghegan
pg@bowt.ie
In reply to: Robert Haas (#129)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 21, 2017 at 10:04 AM, Robert Haas <robertmhaas@gmail.com> wrote:

I think that's a good question. I previously expressed similar
concerns. On the one hand, it's hard to ignore the fact that, in the
cases where this wins, it already buys us a lot of performance
improvement. On the other hand, as you say (and as I said), it eats
up a lot of bits, and that limits what we can do in the future. On
the one hand, there is a saying that a bird in the hand is worth two
in the bush. On the other hand, there is also a saying that one
should not paint oneself into the corner.

Are we really saying that there can be no incompatible change to the
on-disk representation for the rest of eternity? I can see why that's
something to avoid indefinitely, but I wouldn't like to rule it out.

--
Peter Geoghegan


#131Robert Haas
robertmhaas@gmail.com
In reply to: Peter Geoghegan (#130)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 21, 2017 at 1:08 PM, Peter Geoghegan <pg@bowt.ie> wrote:

On Tue, Mar 21, 2017 at 10:04 AM, Robert Haas <robertmhaas@gmail.com> wrote:

I think that's a good question. I previously expressed similar
concerns. On the one hand, it's hard to ignore the fact that, in the
cases where this wins, it already buys us a lot of performance
improvement. On the other hand, as you say (and as I said), it eats
up a lot of bits, and that limits what we can do in the future. On
the one hand, there is a saying that a bird in the hand is worth two
in the bush. On the other hand, there is also a saying that one
should not paint oneself into the corner.

Are we really saying that there can be no incompatible change to the
on-disk representation for the rest of eternity? I can see why that's
something to avoid indefinitely, but I wouldn't like to rule it out.

Well, I don't want to rule it out either, but if we do a release to
which you can't pg_upgrade, it's going to be really painful for a lot
of users. Many users can't realistically upgrade using pg_dump, ever.
So they'll be stuck on the release before the one that breaks
compatibility for a very long time.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#132Bruce Momjian
bruce@momjian.us
In reply to: Robert Haas (#129)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 21, 2017 at 01:04:14PM -0400, Robert Haas wrote:

I know we have talked about it, but not recently, and if everyone else
is fine with it, I am too, but I have to ask these questions.

I think that's a good question. I previously expressed similar
concerns. On the one hand, it's hard to ignore the fact that, in the
cases where this wins, it already buys us a lot of performance
improvement. On the other hand, as you say (and as I said), it eats
up a lot of bits, and that limits what we can do in the future. On
the one hand, there is a saying that a bird in the hand is worth two
in the bush. On the other hand, there is also a saying that one
should not paint oneself into the corner.

I'm not sure we've had any really substantive discussion of these
issues. Pavan's response to my previous comments was basically "well,
I think it's worth it", which is entirely reasonable, because he
presumably wouldn't have written the patch that way if he thought it
sucked. But it might not be the only opinion.

Early in the discussion we talked about allowing multiple changes per
WARM chain if they all changed the same index and were in the same
direction so there were no duplicates, but it was complicated. There
was also discussion about checking the index during INSERT/UPDATE to see
if there was a duplicate. However, those ideas never led to further
discussion.

I know the current patch yields good results, but only on a narrow test
case, so I am not ready to just stop asking questions based on the opinion
of the author or test results alone.

If someone came to me and said, "We have thought about allowing more
than one index change per WARM chain, and if we can ever do it, it will
probably be done this way, and we have the bits for it," I would be more
comfortable.

One interesting side-issue is that indirect indexes have a similar
problem with duplicate index entries, and there is no plan on how to fix
that either. I guess I just don't feel we have explored the
duplicate-index-entry problem enough for me to be comfortable.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com



#133Bruce Momjian
bruce@momjian.us
In reply to: Robert Haas (#131)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 21, 2017 at 01:14:00PM -0400, Robert Haas wrote:

On Tue, Mar 21, 2017 at 1:08 PM, Peter Geoghegan <pg@bowt.ie> wrote:

On Tue, Mar 21, 2017 at 10:04 AM, Robert Haas <robertmhaas@gmail.com> wrote:

I think that's a good question. I previously expressed similar
concerns. On the one hand, it's hard to ignore the fact that, in the
cases where this wins, it already buys us a lot of performance
improvement. On the other hand, as you say (and as I said), it eats
up a lot of bits, and that limits what we can do in the future. On
the one hand, there is a saying that a bird in the hand is worth two
in the bush. On the other hand, there is also a saying that one
should not paint oneself into the corner.

Are we really saying that there can be no incompatible change to the
on-disk representation for the rest of eternity? I can see why that's
something to avoid indefinitely, but I wouldn't like to rule it out.

Well, I don't want to rule it out either, but if we do a release to
which you can't pg_upgrade, it's going to be really painful for a lot
of users. Many users can't realistically upgrade using pg_dump, ever.
So they'll be stuck on the release before the one that breaks
compatibility for a very long time.

Right. If we weren't setting tuple and tid bits we could improve it
easily in PG 11, but if we use them for a single-change WARM chain for
PG 10, we might need bits that are not available to improve it later.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com



#134Petr Jelinek
petr.jelinek@2ndquadrant.com
In reply to: Robert Haas (#131)
Re: Patch: Write Amplification Reduction Method (WARM)

On 21/03/17 18:14, Robert Haas wrote:

On Tue, Mar 21, 2017 at 1:08 PM, Peter Geoghegan <pg@bowt.ie> wrote:

On Tue, Mar 21, 2017 at 10:04 AM, Robert Haas <robertmhaas@gmail.com> wrote:

I think that's a good question. I previously expressed similar
concerns. On the one hand, it's hard to ignore the fact that, in the
cases where this wins, it already buys us a lot of performance
improvement. On the other hand, as you say (and as I said), it eats
up a lot of bits, and that limits what we can do in the future. On
the one hand, there is a saying that a bird in the hand is worth two
in the bush. On the other hand, there is also a saying that one
should not paint oneself into the corner.

Are we really saying that there can be no incompatible change to the
on-disk representation for the rest of eternity? I can see why that's
something to avoid indefinitely, but I wouldn't like to rule it out.

Well, I don't want to rule it out either, but if we do a release to
which you can't pg_upgrade, it's going to be really painful for a lot
of users. Many users can't realistically upgrade using pg_dump, ever.
So they'll be stuck on the release before the one that breaks
compatibility for a very long time.

This is why I like the idea of pluggable storage; if we ever get that, it
would buy us the ability to implement a completely different heap format
without breaking pg_upgrade.

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#135Petr Jelinek
petr.jelinek@2ndquadrant.com
In reply to: Bruce Momjian (#133)
Re: Patch: Write Amplification Reduction Method (WARM)

On 21/03/17 18:19, Bruce Momjian wrote:

On Tue, Mar 21, 2017 at 01:14:00PM -0400, Robert Haas wrote:

On Tue, Mar 21, 2017 at 1:08 PM, Peter Geoghegan <pg@bowt.ie> wrote:

On Tue, Mar 21, 2017 at 10:04 AM, Robert Haas <robertmhaas@gmail.com> wrote:

I think that's a good question. I previously expressed similar
concerns. On the one hand, it's hard to ignore the fact that, in the
cases where this wins, it already buys us a lot of performance
improvement. On the other hand, as you say (and as I said), it eats
up a lot of bits, and that limits what we can do in the future. On
the one hand, there is a saying that a bird in the hand is worth two
in the bush. On the other hand, there is also a saying that one
should not paint oneself into the corner.

Are we really saying that there can be no incompatible change to the
on-disk representation for the rest of eternity? I can see why that's
something to avoid indefinitely, but I wouldn't like to rule it out.

Well, I don't want to rule it out either, but if we do a release to
which you can't pg_upgrade, it's going to be really painful for a lot
of users. Many users can't realistically upgrade using pg_dump, ever.
So they'll be stuck on the release before the one that breaks
compatibility for a very long time.

Right. If we weren't setting tuple and tid bits we could imrpove it
easily in PG 11, but if we use them for a single-change WARM chain for
PG 10, we might need bits that are not available to improve it later.

I thought there are still a couple of bits available.

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#136Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Bruce Momjian (#132)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 21, 2017 at 10:47 PM, Bruce Momjian <bruce@momjian.us> wrote:

On Tue, Mar 21, 2017 at 01:04:14PM -0400, Robert Haas wrote:

I know we have talked about it, but not recently, and if everyone else
is fine with it, I am too, but I have to ask these questions.

I think that's a good question. I previously expressed similar
concerns. On the one hand, it's hard to ignore the fact that, in the
cases where this wins, it already buys us a lot of performance
improvement. On the other hand, as you say (and as I said), it eats
up a lot of bits, and that limits what we can do in the future. On
the one hand, there is a saying that a bird in the hand is worth two
in the bush. On the other hand, there is also a saying that one
should not paint oneself into the corner.

I'm not sure we've had any really substantive discussion of these
issues. Pavan's response to my previous comments was basically "well,
I think it's worth it", which is entirely reasonable, because he
presumably wouldn't have written the patch that way if he thought it
sucked. But it might not be the only opinion.

Early in the discussion we talked about allowing multiple changes per
WARM chain if they all changed the same index and were in the same
direction so there were no duplicates, but it was complicated. There
was also discussion about checking the index during INSERT/UPDATE to see
if there was a duplicate. However, those ideas never led to further
discussion.

Well, once I started thinking about how to do vacuum etc., I realised that
any mechanism which allows unlimited (or even a handful of) updates per
chain is going to be very complex and error-prone. But if someone has ideas
to do that, I am open. I must say though, it will make an already complex
problem even more complex.

I know the current patch yields good results, but only on a narrow test
case,

Hmm. I am kinda surprised you say that, because I never thought it was a
narrow test case that we are targeting here. But maybe I'm wrong.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#137Robert Haas
robertmhaas@gmail.com
In reply to: Petr Jelinek (#134)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 21, 2017 at 2:03 PM, Petr Jelinek
<petr.jelinek@2ndquadrant.com> wrote:

This is why I like the idea of pluggable storage, if we ever get that it
would buy us ability to implement completely different heap format
without breaking pg_upgrade.

You probably won't be surprised to hear that I agree. :-)

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#138Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Robert Haas (#129)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 21, 2017 at 10:34 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Tue, Mar 21, 2017 at 12:49 PM, Bruce Momjian <bruce@momjian.us> wrote:

On Tue, Mar 21, 2017 at 09:25:49AM -0400, Robert Haas wrote:

On Tue, Mar 21, 2017 at 8:41 AM, Pavan Deolasee

TBH I see many artificial scenarios here. It will be very useful if

he can

rerun the query with some of these restrictions lifted. I'm all for
addressing whatever we can, but I am not sure if this test

demonstrates a

real world usage.

That's a very fair point, but if these patches - or some of them - are
going to get committed then these things need to get discussed. Let's
not just have nothing-nothing-nothing giant unagreed code drop.

First, let me say I love this feature for PG 10, along with
multi-variate statistics.

However, not to be a bummer on this, but the persistent question I have
is whether we are locking ourselves into a feature that can only do
_one_ index-change per WARM chain before a lazy vacuum is required. Are
we ever going to figure out how to do more changes per WARM chain in the
future, and is our use of so many bits for this feature going to
restrict our ability to do that in the future.

I know we have talked about it, but not recently, and if everyone else
is fine with it, I am too, but I have to ask these questions.

I think that's a good question. I previously expressed similar
concerns. On the one hand, it's hard to ignore the fact that, in the
cases where this wins, it already buys us a lot of performance
improvement. On the other hand, as you say (and as I said), it eats
up a lot of bits, and that limits what we can do in the future.

I think we can save a few bits, at some additional cost and/or
complexity. It all depends on what matters to us more. For example, we can
choose not to use the HEAP_LATEST_TUPLE bit and instead always find the root
tuple the hard way. Since only WARM would ever need to find that
information, maybe it's OK, since WARM's other advantages will justify that.
Or we cache the information computed during an earlier heap_prune_page call
and use that (just guessing that we can make it work, no concrete idea at
this moment).

We can also save the HEAP_WARM_UPDATED flag since it is required only for
the abort-handling case. We can find a way to push that information down to
the old tuple if the UPDATE aborts and we detect the broken chain. Again, not
fully thought through, but doable. Of course, we will have to carefully
evaluate all code paths and make sure that we never lose that information.

If the consumption of bits becomes a deal breaker, then I would first trade
the HEAP_LATEST_TUPLE bit and then HEAP_WARM_UPDATED, purely from a
correctness perspective.

Thanks,
Pavan
--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#139Bruce Momjian
bruce@momjian.us
In reply to: Petr Jelinek (#135)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 21, 2017 at 07:05:15PM +0100, Petr Jelinek wrote:

Well, I don't want to rule it out either, but if we do a release to
which you can't pg_upgrade, it's going to be really painful for a lot
of users. Many users can't realistically upgrade using pg_dump, ever.
So they'll be stuck on the release before the one that breaks
compatibility for a very long time.

Right. If we weren't setting tuple and tid bits we could improve it
easily in PG 11, but if we use them for a single-change WARM chain for
PG 10, we might need bits that are not available to improve it later.

I thought there are still a couple of bits available.

Yes, there are. The issue is that we don't know how we would improve it
so we don't know how many bits we need, and my concern is that we
haven't discussed the improvement ideas enough to know we have done the
best we can for PG 10.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#140Bruce Momjian
bruce@momjian.us
In reply to: Pavan Deolasee (#136)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 21, 2017 at 11:45:09PM +0530, Pavan Deolasee wrote:

Early in the discussion we talked about allowing multiple changes per
WARM chain if they all changed the same index and were in the same
direction so there were no duplicates, but it was complicated. There
was also discussion about checking the index during INSERT/UPDATE to see
if there was a duplicate. However, those ideas never led to further
discussion.

Well, once I started thinking about how to do vacuum etc., I realised that
any mechanism which allows unlimited (or even a handful of) updates per
chain is going to be very complex and error prone. But if someone has ideas
to do that, I am open. I must say though, it will make an already complex
problem even more complex.

Yes, that is where we got stuck. Have enough people studied the issue
to know that there are no simple answers?

I know the current patch yields good results, but only on a narrow test
case,

Hmm. I am kinda surprised you say that, because I never thought it was a
narrow test case that we are targeting here. But maybe I'm wrong.

Well, it is really a question of how often you want to do a second WARM
update (not possible) vs. the frequency of lazy vacuum. I assumed that
would be a 100X or 10kX difference, but I am not sure myself either. My
initial guess was that only allowing a single WARM update between lazy
vacuums would show no improvement in real-world workloads, but maybe I
am wrong.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com


#141Bruce Momjian
bruce@momjian.us
In reply to: Pavan Deolasee (#138)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 21, 2017 at 11:54:25PM +0530, Pavan Deolasee wrote:

We can also save HEAP_WARM_UPDATED flag since this is required only for
abort-handling case. We can find a way to push that information down to the old
tuple if UPDATE aborts and we detect the broken chain. Again, not fully thought
through, but doable. Of course, we will have to carefully evaluate all code
paths and make sure that we don't lose that information ever.

If the consumption of bits become a deal breaker then I would first trade the
HEAP_LATEST_TUPLE bit and then HEAP_WARM_UPDATED just from correctness
perspective.

I don't think it makes sense to try and save bits and add complexity
when we have no idea if we will ever use them, but again, I am back to
my original question of whether we have done sufficient research, and if
everyone says "yes", I am fine with that.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com


#142Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Bruce Momjian (#141)
Re: Patch: Write Amplification Reduction Method (WARM)

Bruce Momjian wrote:

I don't think it makes sense to try and save bits and add complexity
when we have no idea if we will ever use them,

If we find ourselves in dire need of additional bits, there is a known
mechanism to get back 2 bits from old-style VACUUM FULL. I assume that
the reason nobody has bothered to write the code for that is that
there's not *that* much interest.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#143Bruce Momjian
bruce@momjian.us
In reply to: Alvaro Herrera (#142)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 21, 2017 at 04:43:58PM -0300, Alvaro Herrera wrote:

Bruce Momjian wrote:

I don't think it makes sense to try and save bits and add complexity
when we have no idea if we will ever use them,

If we find ourselves in dire need of additional bits, there is a known
mechanism to get back 2 bits from old-style VACUUM FULL. I assume that
the reason nobody has bothered to write the code for that is that
there's not *that* much interest.

We have no way of tracking if users still have pages that used the bits
via pg_upgrade before they were removed.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com


#144Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Bruce Momjian (#143)
Re: Patch: Write Amplification Reduction Method (WARM)

Bruce Momjian wrote:

On Tue, Mar 21, 2017 at 04:43:58PM -0300, Alvaro Herrera wrote:

Bruce Momjian wrote:

I don't think it makes sense to try and save bits and add complexity
when we have no idea if we will ever use them,

If we find ourselves in dire need of additional bits, there is a known
mechanism to get back 2 bits from old-style VACUUM FULL. I assume that
the reason nobody has bothered to write the code for that is that
there's not *that* much interest.

We have no way of tracking if users still have pages that used the bits
via pg_upgrade before they were removed.

Yes, that's exactly the code that needs to be written.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#145Bruce Momjian
bruce@momjian.us
In reply to: Alvaro Herrera (#144)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 21, 2017 at 04:56:16PM -0300, Alvaro Herrera wrote:

Bruce Momjian wrote:

On Tue, Mar 21, 2017 at 04:43:58PM -0300, Alvaro Herrera wrote:

Bruce Momjian wrote:

I don't think it makes sense to try and save bits and add complexity
when we have no idea if we will ever use them,

If we find ourselves in dire need of additional bits, there is a known
mechanism to get back 2 bits from old-style VACUUM FULL. I assume that
the reason nobody has bothered to write the code for that is that
there's not *that* much interest.

We have no way of tracking if users still have pages that used the bits
via pg_upgrade before they were removed.

Yes, that's exactly the code that needs to be written.

Yes, but once it is written it will take years before those bits can be
used on most installations.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com


#146Mithun Cy
mithun.cy@enterprisedb.com
In reply to: Robert Haas (#125)
1 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 21, 2017 at 8:10 PM, Robert Haas <robertmhaas@gmail.com> wrote:

If the WAL writing hides the loss, then I agree that's not a big
concern. But if the loss is still visible even when WAL is written,
then I'm not so sure.

The test table schema was taken from earlier tests that Pavan has posted
[1], hence it is UNLOGGED; all I tried to do was stress the tests. Instead
of updating 1 row at a time through pgbench (for which neither I nor Pavan
saw any regression), I tried to update all the rows in a single statement.
I have changed the settings as recommended and did a quick test as above on
our machine by removing the UNLOGGED word in the CREATE TABLE statement.

Patch Tested : Only 0001_interesting_attrs_v18.patch in [2]

Machine: Scylla [Last time I did the same tests on IBM power2, but it is
not immediately available, so I am trying on another Intel-based
performance machine.]
============
[mithun.cy@scylla bin]$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 56
On-line CPU(s) list: 0-55
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz
Stepping: 2
CPU MHz: 1235.800
BogoMIPS: 4594.35
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 35840K
NUMA node0 CPU(s): 0-13,28-41
NUMA node1 CPU(s): 14-27,42-55

[mithun.cy@scylla bin]$ cat /proc/meminfo
MemTotal: 65687464 kB

Postgresql.conf non default settings
===========================
shared_buffers = 24 GB
max_wal_size = 10GB
min_wal_size = 5GB
synchronous_commit=off
autovacuum = off /* manually doing VACUUM FULL before every update */

This system has 2 storage devices; I have kept the data directory on a
spinning disk and pg_wal on an SSD.

Tests :

DROP TABLE IF EXISTS testtab;

CREATE TABLE testtab (

col1 integer,

col2 text,

col3 float,

col4 text,

col5 text,

col6 char(30),

col7 text,

col8 date,

col9 text,

col10 text

);

INSERT INTO testtab

SELECT generate_series(1,10000000),

md5(random()::text),

random(),

md5(random()::text),

md5(random()::text),

md5(random()::text)::char(30),

md5(random()::text),

now(),

md5(random()::text),

md5(random()::text);

CREATE INDEX testindx ON testtab (col1, col2, col3, col4, col5, col6, col7,
col8, col9);
Performance measurement tests: ran 12 times to eliminate run-to-run
latencies.
==========================
VACUUM FULL;
BEGIN;
UPDATE testtab SET col2 = md5(random()::text);
ROLLBACK;

The recorded response times show a substantial increase, from 10% to 25%,
after the patch.

[1]: Re: rewrite HeapSatisfiesHOTAndKey </messages/by-id/CABOikdMUQQs4BnJ4Ws-ObOEDh8vhNp13Y1caK_i8seSHKPjbhw@mail.gmail.com>
[2]: Re: Patch: Write Amplification Reduction Method (WARM) </messages/by-id/CABOikdP1yeicUPH0NByjrg2Sv3ZtJXWyFPSqwppid8G3kLVKjw@mail.gmail.com>
--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com

Attachments:

WARM_test.odsapplication/vnd.oasis.opendocument.spreadsheet; name=WARM_test.odsDownload
#147Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Mithun Cy (#146)
1 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Mar 22, 2017 at 3:51 AM, Mithun Cy <mithun.cy@enterprisedb.com>
wrote:

On Tue, Mar 21, 2017 at 8:10 PM, Robert Haas <robertmhaas@gmail.com>
wrote:

If the WAL writing hides the loss, then I agree that's not a big
concern. But if the loss is still visible even when WAL is written,
then I'm not so sure.

The test table schema was taken from earlier tests that Pavan has posted
[1], hence it is UNLOGGED; all I tried to do was stress the tests. Instead
of updating 1 row at a time through pgbench (for which neither I nor Pavan
saw any regression), I tried to update all the rows in a single statement.

Sorry, I did not mean to suggest that you set it up wrongly, I was just
trying to point out that the test case itself may not be very practical.
But given your recent numbers, the regression is clearly non-trivial and
something we must address.

I have changed the settings as recommended and did a quick test as above
on our machine by removing the UNLOGGED word in the CREATE TABLE statement.

Patch Tested : Only 0001_interesting_attrs_v18.patch in [2]

The recorded response times show a substantial increase, from 10% to 25%,
after the patch.

Thanks for repeating the tests. They are very useful. It might make sense
to reverse the order or do 6 tests each and alternate between patched and
unpatched master just to get rid of any other anomaly.

BTW may I request another test with the attached patch? In this patch, we
check PageIsFull() even before deciding which attributes to check for
modification. If the page is already full, there is hardly any chance of
doing a HOT update (there could be a corner case where the new tuple is
smaller than the tuple used in the previous UPDATE and we have just enough
space to do a HOT update this time, but I think that's too narrow).

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0001_interesting_attrs_v19.patchapplication/octet-stream; name=0001_interesting_attrs_v19.patchDownload
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 8526137..36f7ac8 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -96,11 +96,8 @@ static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
 				HeapTuple newtup, HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
-static void HeapSatisfiesHOTandKeyUpdate(Relation relation,
-							 Bitmapset *hot_attrs,
-							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
-							 bool *satisfies_id,
+static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
+							 Bitmapset *interesting_cols,
 							 HeapTuple oldtup, HeapTuple newtup);
 static bool heap_acquire_tuplock(Relation relation, ItemPointer tid,
 					 LockTupleMode mode, LockWaitPolicy wait_policy,
@@ -3471,6 +3468,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *interesting_attrs;
+	Bitmapset  *modified_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3488,10 +3487,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				pagefree;
 	bool		have_tuple_lock = false;
 	bool		iscombo;
-	bool		satisfies_hot;
-	bool		satisfies_key;
-	bool		satisfies_id;
 	bool		use_hot_update = false;
+	bool		hot_attrs_checked = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
 	bool		all_visible_cleared_new = false;
@@ -3517,26 +3514,51 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				 errmsg("cannot update tuples during a parallel operation")));
 
 	/*
-	 * Fetch the list of attributes to be checked for HOT update.  This is
-	 * wasted effort if we fail to update or have to put the new tuple on a
-	 * different page.  But we must compute the list before obtaining buffer
-	 * lock --- in the worst case, if we are doing an update on one of the
-	 * relevant system catalogs, we could deadlock if we try to fetch the list
-	 * later.  In any case, the relcache caches the data so this is usually
-	 * pretty cheap.
+	 * Fetch the list of attributes to be checked for various operations.
 	 *
-	 * Note that we get a copy here, so we need not worry about relcache flush
-	 * happening midway through.
+	 * For HOT considerations, this is wasted effort if we fail to update or
+	 * have to put the new tuple on a different page.  But we must compute the
+	 * list before obtaining buffer lock --- in the worst case, if we are doing
+	 * an update on one of the relevant system catalogs, we could deadlock if
+	 * we try to fetch the list later.  In any case, the relcache caches the
+	 * data so this is usually pretty cheap.
+	 *
+	 * We also need columns used by the replica identity, the columns that
+	 * are considered the "key" of rows in the table, and columns that are
+	 * part of indirect indexes.
+	 *
+	 * Note that we get copies of each bitmap, so we need not worry about
+	 * relcache flush happening midway through.
 	 */
 	hot_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_ALL);
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
 
+
 	block = ItemPointerGetBlockNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
+	interesting_attrs = NULL;
+	/*
+	 * If the page is already full, there is hardly any chance of doing a HOT
+	 * update on this page. It might be wasteful effort to look for index
+	 * column updates only to later reject HOT updates for lack of space in the
+	 * same page. So we be conservative and only fetch hot_attrs if the page is
+	 * not already full. Since we are already holding a pin on the buffer,
+	 * there is no chance that the buffer can get cleaned up concurrently and
+	 * even if that was possible, in the worst case we lose a chance to do a
+	 * HOT update.
+	 */
+	if (!PageIsFull(page))
+	{
+		interesting_attrs = bms_add_members(interesting_attrs, hot_attrs);
+		hot_attrs_checked = true;
+	}
+	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
+
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
 	 * be necessary.  Since we haven't got the lock yet, someone else might be
@@ -3552,7 +3574,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Assert(ItemIdIsNormal(lp));
 
 	/*
-	 * Fill in enough data in oldtup for HeapSatisfiesHOTandKeyUpdate to work
+	 * Fill in enough data in oldtup for HeapDetermineModifiedColumns to work
 	 * properly.
 	 */
 	oldtup.t_tableOid = RelationGetRelid(relation);
@@ -3578,6 +3600,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 		Assert(!(newtup->t_data->t_infomask & HEAP_HASOID));
 	}
 
+	/* Determine columns modified by the update. */
+	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
+												  &oldtup, newtup);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3589,10 +3615,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	 * is updates that don't manipulate key columns, not those that
 	 * serendipitiously arrive at the same key values.
 	 */
-	HeapSatisfiesHOTandKeyUpdate(relation, hot_attrs, key_attrs, id_attrs,
-								 &satisfies_hot, &satisfies_key,
-								 &satisfies_id, &oldtup, newtup);
-	if (satisfies_key)
+	if (!bms_overlap(modified_attrs, key_attrs))
 	{
 		*lockmode = LockTupleNoKeyExclusive;
 		mxact_status = MultiXactStatusNoKeyUpdate;
@@ -3831,6 +3854,8 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(modified_attrs);
+		bms_free(interesting_attrs);
 		return result;
 	}
 
@@ -4133,9 +4158,10 @@ l2:
 		/*
 		 * Since the new tuple is going into the same page, we might be able
 		 * to do a HOT update.  Check if any of the index columns have been
-		 * changed.  If not, then HOT update is possible.
+		 * changed. If the page was already full, we may have skipped checking
+		 * for index columns. If so, HOT update is possible.
 		 */
-		if (satisfies_hot)
+		if (hot_attrs_checked && !bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
 	}
 	else
@@ -4150,7 +4176,9 @@ l2:
 	 * ExtractReplicaIdentity() will return NULL if nothing needs to be
 	 * logged.
 	 */
-	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup, !satisfies_id, &old_key_copied);
+	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup,
+										   bms_overlap(modified_attrs, id_attrs),
+										   &old_key_copied);
 
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
@@ -4298,13 +4326,15 @@ l2:
 	bms_free(hot_attrs);
 	bms_free(key_attrs);
 	bms_free(id_attrs);
+	bms_free(modified_attrs);
+	bms_free(interesting_attrs);
 
 	return HeapTupleMayBeUpdated;
 }
 
 /*
  * Check if the specified attribute's value is same in both given tuples.
- * Subroutine for HeapSatisfiesHOTandKeyUpdate.
+ * Subroutine for HeapDetermineModifiedColumns.
  */
 static bool
 heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
@@ -4338,7 +4368,7 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 
 	/*
 	 * Extract the corresponding values.  XXX this is pretty inefficient if
-	 * there are many indexed columns.  Should HeapSatisfiesHOTandKeyUpdate do
+	 * there are many indexed columns.  Should HeapDetermineModifiedColumns do
 	 * a single heap_deform_tuple call on each tuple, instead?	But that
 	 * doesn't work for system columns ...
 	 */
@@ -4383,114 +4413,30 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 /*
  * Check which columns are being updated.
  *
- * This simultaneously checks conditions for HOT updates, for FOR KEY
- * SHARE updates, and REPLICA IDENTITY concerns.  Since much of the time they
- * will be checking very similar sets of columns, and doing the same tests on
- * them, it makes sense to optimize and do them together.
- *
- * We receive three bitmapsets comprising the three sets of columns we're
- * interested in.  Note these are destructively modified; that is OK since
- * this is invoked at most once in heap_update.
+ * Given an updated tuple, determine (and return into the output bitmapset),
+ * from those listed as interesting, the set of columns that changed.
  *
- * hot_result is set to TRUE if it's okay to do a HOT update (i.e. it does not
- * modified indexed columns); key_result is set to TRUE if the update does not
- * modify columns used in the key; id_result is set to TRUE if the update does
- * not modify columns in any index marked as the REPLICA IDENTITY.
+ * The input bitmapset is destructively modified; that is OK since this is
+ * invoked at most once in heap_update.
  */
-static void
-HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
-							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
-							 bool *satisfies_id,
+static Bitmapset *
+HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
 							 HeapTuple oldtup, HeapTuple newtup)
 {
-	int			next_hot_attnum;
-	int			next_key_attnum;
-	int			next_id_attnum;
-	bool		hot_result = true;
-	bool		key_result = true;
-	bool		id_result = true;
-
-	/* If REPLICA IDENTITY is set to FULL, id_attrs will be empty. */
-	Assert(bms_is_subset(id_attrs, key_attrs));
-	Assert(bms_is_subset(key_attrs, hot_attrs));
-
-	/*
-	 * If one of these sets contains no remaining bits, bms_first_member will
-	 * return -1, and after adding FirstLowInvalidHeapAttributeNumber (which
-	 * is negative!)  we'll get an attribute number that can't possibly be
-	 * real, and thus won't match any actual attribute number.
-	 */
-	next_hot_attnum = bms_first_member(hot_attrs);
-	next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
-	next_key_attnum = bms_first_member(key_attrs);
-	next_key_attnum += FirstLowInvalidHeapAttributeNumber;
-	next_id_attnum = bms_first_member(id_attrs);
-	next_id_attnum += FirstLowInvalidHeapAttributeNumber;
+	int		attnum;
+	Bitmapset *modified = NULL;
 
-	for (;;)
+	while ((attnum = bms_first_member(interesting_cols)) >= 0)
 	{
-		bool		changed;
-		int			check_now;
+		attnum += FirstLowInvalidHeapAttributeNumber;
 
-		/*
-		 * Since the HOT attributes are a superset of the key attributes and
-		 * the key attributes are a superset of the id attributes, this logic
-		 * is guaranteed to identify the next column that needs to be checked.
-		 */
-		if (hot_result && next_hot_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_hot_attnum;
-		else if (key_result && next_key_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_key_attnum;
-		else if (id_result && next_id_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_id_attnum;
-		else
-			break;
-
-		/* See whether it changed. */
-		changed = !heap_tuple_attr_equals(RelationGetDescr(relation),
-										  check_now, oldtup, newtup);
-		if (changed)
-		{
-			if (check_now == next_hot_attnum)
-				hot_result = false;
-			if (check_now == next_key_attnum)
-				key_result = false;
-			if (check_now == next_id_attnum)
-				id_result = false;
-
-			/* if all are false now, we can stop checking */
-			if (!hot_result && !key_result && !id_result)
-				break;
-		}
-
-		/*
-		 * Advance the next attribute numbers for the sets that contain the
-		 * attribute we just checked.  As we work our way through the columns,
-		 * the next_attnum values will rise; but when each set becomes empty,
-		 * bms_first_member() will return -1 and the attribute number will end
-		 * up with a value less than FirstLowInvalidHeapAttributeNumber.
-		 */
-		if (hot_result && check_now == next_hot_attnum)
-		{
-			next_hot_attnum = bms_first_member(hot_attrs);
-			next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
-		if (key_result && check_now == next_key_attnum)
-		{
-			next_key_attnum = bms_first_member(key_attrs);
-			next_key_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
-		if (id_result && check_now == next_id_attnum)
-		{
-			next_id_attnum = bms_first_member(id_attrs);
-			next_id_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
+		if (!heap_tuple_attr_equals(RelationGetDescr(relation),
+								   attnum, oldtup, newtup))
+			modified = bms_add_member(modified,
+									  attnum - FirstLowInvalidHeapAttributeNumber);
 	}
 
-	*satisfies_hot = hot_result;
-	*satisfies_key = key_result;
-	*satisfies_id = id_result;
+	return modified;
 }
 
 /*
#148Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Pavan Deolasee (#147)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Mar 22, 2017 at 8:43 AM, Pavan Deolasee <pavan.deolasee@gmail.com>
wrote:

BTW may I request another test with the attached patch? In this patch, we
check PageIsFull() even before deciding which attributes to check for
modification. If the page is already full, there is hardly any chance of
doing a HOT update (there could be a corner case where the new tuple is
smaller than the tuple used in the previous UPDATE and we have just enough
space to do a HOT update this time, but I think that's too narrow).

I would also request you to do a slightly different test where, instead of
updating the second column, we update the last column of the index, i.e.
col9. I would really appreciate it if you could share results with both
master and the v19 patch.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#149Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Mithun Cy (#146)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Mar 22, 2017 at 3:51 AM, Mithun Cy <mithun.cy@enterprisedb.com>
wrote:

CREATE INDEX testindx ON testtab (col1, col2, col3, col4, col5, col6,
col7, col8, col9);
Performance measurement tests: ran 12 times to eliminate run-to-run
latencies.
==========================
VACUUM FULL;
BEGIN;
UPDATE testtab SET col2 = md5(random()::text);
ROLLBACK;

The recorded response times show a substantial increase, from 10% to 25%,
after the patch.

After doing some tests on my side, I now think that there is something else
going on, unrelated to the patch. I ran the same benchmark on an AWS
i2.xlarge machine with 32GB RAM, shared_buffers set to 16GB, max_wal_size
to 256GB, checkpoint_timeout to 60min and autovacuum off.

I compared master and v19, every time running 6 runs of the test. The
database was restarted whenever changing binaries, tables were
dropped/recreated and a checkpoint taken after each restart (but not
between 2 runs, which I believe is what you did too... but correct me if
that's a wrong assumption).

Instead of col2, I am updating col9, but that's probably not too relevant.

VACUUM FULL;
BEGIN;
UPDATE testtab SET col9 = md5(random()::text);
ROLLBACK;

First set of 6 runs with master:
163629.8
181183.8
194788.1
194606.1
194589.9
196002.6

(database restart, table drop/create, checkpoint)
First set of 6 runs with v19:
190566.55
228274.489
238110.202
239304.681
258748.189
284882.4

(database restart, table drop/create, checkpoint)
Second set of 6 runs with master:
232267.5
298259.6
312315.1
341817.3
360729.2
385210.7

This looks quite weird to me. Obviously these numbers are completely
non-comparable. Even the time for VACUUM FULL goes up with every run.

Maybe we can blame it on the AWS instance completely, but the pattern in
your tests looks very similar, where the numbers slowly and steadily keep
going up. If you do a complete retest but run v18/v19 first and then
master, maybe we'll see a completely opposite picture?

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#150Amit Kapila
amit.kapila16@gmail.com
In reply to: Pavan Deolasee (#118)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 21, 2017 at 6:47 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

Please find attached rebased patches.

A few comments on 0005_warm_updates_v18.patch:

1.
@@ -806,20 +835,35 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
{
..
- if (callback && callback(htup, callback_state))
+ if (callback)
  {
- kill_tuple = true;
-
- if (tuples_removed)
- *tuples_removed += 1;
+ result = callback(htup, is_warm, callback_state);
+ if (result == IBDCR_DELETE)
+ {
+ kill_tuple = true;
+ if (tuples_removed)
+ *tuples_removed += 1;
+ }
+ else if (result == IBDCR_CLEAR_WARM)
+ {
+ clear_tuple = true;
+ }
  }
  else if (split_cleanup)
..
}

I think this will break the existing mechanism of split cleanup. We
need to check for split cleanup if the tuple is not deletable
by the callback. This is not merely an optimization but a must
condition because we will clear the split-cleanup flag after this
bucket is scanned completely.

2.
- PageIndexMultiDelete(page, deletable, ndeletable);
+ /*
+  * Clear the WARM pointers.
+  *
+  * We must do this before dealing with the dead items because
+  * PageIndexMultiDelete may move items around to compactify the
+  * array and hence offnums recorded earlier won't make any sense
+  * after PageIndexMultiDelete is called.
+  */
+ if (nclearwarm > 0)
+ 	_hash_clear_items(page, clearwarm, nclearwarm);
+
+ /*
+  * And delete the deletable items
+  */
+ if (ndeletable > 0)
+ 	PageIndexMultiDelete(page, deletable, ndeletable);

I think this assumes that the items where we need to clear warm flag
are not deletable, otherwise what is the need to clear the flag if we
are going to delete the tuple. The deletable tuple can have a warm
flag if it is deletable due to split cleanup.

3.
+ /*
+  * HASH indexes compute a hash value of the key and store that in the
+  * index. So we must first obtain the hash of the value obtained from the
+  * heap and then do a comparison.
+  */
+ _hash_convert_tuple(indexRel, values, isnull, values2, isnull2);

I think here you need to handle the case where the heap has a NULL value,
as the hash index doesn't contain NULL values; otherwise, the code in the
function below can return true, which is not right.

4.
+bool
+hashrecheck(Relation indexRel, IndexInfo *indexInfo, IndexTuple indexTuple,
+ Relation heapRel, HeapTuple heapTuple)
{
..
+ att = indexRel->rd_att->attrs[i - 1];
+ if (!datumIsEqual(values2[i - 1], indxvalue, att->attbyval,
+ att->attlen))
+ {
+ equal = false;
+ break;
+ }
..
}

Hash values are always uint32 and attlen can be different for
different datatypes, so the above doesn't seem to be the right way
to do the comparison.

5.
@@ -274,6 +301,8 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
  OffsetNumber offnum;
  ItemPointer current;
  bool res;
+ IndexTuple itup;

  /* Hash indexes are always lossy since we store only the hash code */
  scan->xs_recheck = true;
@@ -316,8 +345,6 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
  offnum <= maxoffnum;
  offnum = OffsetNumberNext(offnum))
  {
- IndexTuple itup;
-

Why above change?

6.
+ *stats = index_bulk_delete(&ivinfo, *stats,
+ 		lazy_indexvac_phase1, (void *) vacrelstats);
+ ereport(elevel,
+ 		(errmsg("scanned index \"%s\" to remove %d row version, found "
+ 				"%0.f warm pointers, %0.f clear pointers, removed "
+ 				"%0.f warm pointers, removed %0.f clear pointers",
+ 				RelationGetRelationName(indrel),
+ 				vacrelstats->num_dead_tuples,
+ 				(*stats)->num_warm_pointers,
+ 				(*stats)->num_clear_pointers,
+ 				(*stats)->warm_pointers_removed,
+ 				(*stats)->clear_pointers_removed)));
+
+ (*stats)->num_warm_pointers = 0;
+ (*stats)->num_clear_pointers = 0;
+ (*stats)->warm_pointers_removed = 0;
+ (*stats)->clear_pointers_removed = 0;
+ (*stats)->pointers_cleared = 0;
+
+ *stats = index_bulk_delete(&ivinfo, *stats,
+ 		lazy_indexvac_phase2, (void *) vacrelstats);

To convert WARM chains, we need to do two index passes for all the
indexes. I think it can substantially increase the random I/O. I
think this can help us in doing more WARM updates, but I don't see how
the downside of that (increased random I/O) will be acceptable for all
kinds of cases.

+exists. Since index vacuum may visit these pointers in any order, we will need
+another index pass to remove dead index pointers. So in the first index pass we
+check which WARM candidates have 2 index pointers. In the second pass, we
+remove the dead pointer and clear the INDEX_WARM_POINTER flag if that's the
+surviving index pointer.

I think there is some mismatch between the README and the code. In the
README, it is mentioned that dead pointers will be removed in the second
phase, but I think the first-phase code lazy_indexvac_phase1() will also
allow deleting the dead pointers (it can return IBDCR_DELETE, which will
allow the index AM to remove dead items). Am I missing something here?

7.
+ * For CLEAR chains, we just kill the WARM pointer, if it exist,s and keep
+ * the CLEAR pointer.

typo (exist,s)

8.
+/*
+ * lazy_indexvac_phase2() -- run first pass of index vacuum

Shouldn't this be -- run the second pass

9.
- indexInfo); /* index AM may need this */
+indexInfo, /* index AM may need this */
+(modified_attrs != NULL)); /* type of uniqueness check to do */

comment for the last parameter seems to be wrong.

10.
+follow the update chain everytime to the end to see check if this is a WARM
+chain.

"see check" - seems one of those words is sufficient to explain the meaning.

11.
+chain. This simplifies the design and addresses certain issues around
+duplicate scans.

"duplicate scans" - shouldn't be duplicate key scans.

12.
+index on the table, irrespective of whether the key pertaining to the
+index changed or not.

typo.
/index changed/index is changed

13.
+For example, if we have a table with two columns and two indexes on each
+of the column. When a tuple is first inserted the table, we have exactly

typo.
/inserted the table/inserted in the table

14.
+ lp [1]  [2]
+ [1111, aaaa]->[111, bbbb]

Here, after the update, the first column should be 1111.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#151Mithun Cy
mithun.cy@enterprisedb.com
In reply to: Pavan Deolasee (#149)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Mar 22, 2017 at 3:44 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

This looks quite weird to me. Obviously these numbers are completely
non-comparable. Even the time for VACUUM FULL goes up with every run.

May be we can blame it on AWS instance completely, but the pattern in your
tests looks very similar where the number slowly and steadily keeps going
up. If you do complete retest but run v18/v19 first and then run master, may
be we'll see a complete opposite picture?

For those tests I ran them in the order <Master, patch18, patch18,
Master> and both sets of time numbers were the same. One different thing
I did was deleting the data directory between tests and creating the
database from scratch. Unfortunately the machine I tested this on is not
available. I will run the same tests with v19 once I get the machine and
report back to you.

--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#152Greg Stark
stark@mit.edu
In reply to: Bruce Momjian (#145)
Re: Patch: Write Amplification Reduction Method (WARM)

On 21 March 2017 at 20:04, Bruce Momjian <bruce@momjian.us> wrote:

Yes, but once it is written it will take years before those bits can be
used on most installations.

Well, the problem isn't most installations. On most installations it
should be pretty straightforward to check the oldest database xid and
compare that to when the database was migrated to post-9.0. (Actually
there may be some additional code to write, but it's just ensuring that
the bits are actually cleared and not just ignored; even so, databases
do generally need to be vacuumed more often than on the order of years.)

The problem is that somebody tomorrow could upgrade an 8.4 database to
10.0. In general it seems even versions we don't support get extra
support for migrating away from. I assume it's better to help support
upgrading than to continue to have users using unsupported versions...
And even if you're not concerned about 8.4 someone could still upgrade
9.4 for years to come.

It probably does make sense to pick a version, say 10.0, and have it go
out of its way to ensure it cleans up the MOVED_IN/MOVED_OFF bits, so that
we can be sure that any database pg_upgraded from 10.0+ doesn't
have any left. Then at least we'll know when the bits are available
again.

--
greg

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#153Mithun Cy
mithun.cy@enterprisedb.com
In reply to: Pavan Deolasee (#147)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Mar 22, 2017 at 8:43 AM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

Sorry, I did not mean to suggest that you set it up wrongly, I was just
trying to point out that the test case itself may not be very practical.

That is cool, np! I was just trying to explain why those tests were
made, in case others wondered about it.

--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#154Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Mithun Cy (#151)
1 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Mar 22, 2017 at 4:53 PM, Mithun Cy <mithun.cy@enterprisedb.com>
wrote:

On Wed, Mar 22, 2017 at 3:44 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

This looks quite weird to me. Obviously these numbers are completely
non-comparable. Even the time for VACUUM FULL goes up with every run.

May be we can blame it on AWS instance completely, but the pattern in your
tests looks very similar where the number slowly and steadily keeps going
up. If you do complete retest but run v18/v19 first and then run master,
may be we'll see a complete opposite picture?

For those tests I have done tests in the order --- <Master, patch18,
patch18, Master> both the time numbers were same.

Hmm, interesting.

One different thing
I did was I was deleting the data directory between tests and creating
the database from scratch. Unfortunately the machine I tested this is
not available. I will test same with v19 once I get the machine and
report you back.

Ok, no problem. I did some tests on AWS i2.xlarge instance (4 vCPU, 30GB
RAM, attached SSD) and results are shown below. But I think it is important
to get independent validation from your side too, just to ensure I am not
making any mistake in measurement. I've attached naively put together
scripts which I used to run the benchmark. If you find them useful, please
adjust the paths and run on your machine.

I reverted to an UNLOGGED table because with WAL the results looked very
weird (as posted earlier), even when I was taking a CHECKPOINT before each
set and had set max_wal_size and checkpoint_timeout high enough to avoid
any checkpoint during the run. Anyway, that's a matter for separate
investigation and not related to this patch.

I did two kinds of tests.
a) update last column of the index
b) update second column of the index

v19 does considerably better than even master for the last-column update
case and is pretty much in line for the second-column update test. The
reason is very clear: v19 determines early in the cycle that the buffer is
already full and there is very little chance of doing a HOT update on the
page. In that case, it does not check any columns for modification. Master,
on the other hand, will scan through all 9 columns (for the last-column
update case) and incur the same kind of overhead doing wasteful work.

The first/second/fourth columns show response time in ms and the third and
fifth columns show percentage difference over master. (I hope the table
looks fine, trying some text-table generator tool :-). Apologies if it
looks messed up.)

+-------------------------------------------------------+
|                  Second column update                 |
+-------------------------------------------------------+
|   Master  |         v18         |         v19         |
+-----------+---------------------+---------------------+
| 96657.681 | 108122.868 | 11.86% | 96873.642  | 0.22%  |
+-----------+------------+--------+------------+--------+
| 98546.35  | 110021.27  | 11.64% | 97569.187  | -0.99% |
+-----------+------------+--------+------------+--------+
| 99297.231 | 110550.054 | 11.33% | 100241.261 | 0.95%  |
+-----------+------------+--------+------------+--------+
| 97196.556 | 110235.808 | 13.42% | 97216.207  | 0.02%  |
+-----------+------------+--------+------------+--------+
| 99072.432 | 110477.327 | 11.51% | 97950.687  | -1.13% |
+-----------+------------+--------+------------+--------+
| 96730.056 | 109217.939 | 12.91% | 96929.617  | 0.21%  |
+-----------+------------+--------+------------+--------+
+-------------------------------------------------------+
|                   Last column update                  |
+-------------------------------------------------------+
|   Master   |         v18        |         v19         |
+------------+--------------------+---------------------+
| 112545.537 | 116563.608 | 3.57% | 103067.276 | -8.42% |
+------------+------------+-------+------------+--------+
| 110169.224 | 115753.991 | 5.07% | 104411.44  | -5.23% |
+------------+------------+-------+------------+--------+
| 112280.874 | 116437.11  | 3.70% | 104868.98  | -6.60% |
+------------+------------+-------+------------+--------+
| 113171.556 | 116813.262 | 3.22% | 103907.012 | -8.19% |
+------------+------------+-------+------------+--------+
| 110721.881 | 117442.709 | 6.07% | 104124.131 | -5.96% |
+------------+------------+-------+------------+--------+
| 112138.601 | 115834.549 | 3.30% | 104858.624 | -6.49% |
+------------+------------+-------+------------+--------+

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

interest_attrs_tests.tar.gzapplication/x-gzip; name=interest_attrs_tests.tar.gzDownload
#155Mithun Cy
mithun.cy@enterprisedb.com
In reply to: Pavan Deolasee (#154)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Mar 23, 2017 at 12:19 AM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Wed, Mar 22, 2017 at 4:53 PM, Mithun Cy <mithun.cy@enterprisedb.com>
wrote:
Ok, no problem. I did some tests on AWS i2.xlarge instance (4 vCPU, 30GB
RAM, attached SSD) and results are shown below. But I think it is important
to get independent validation from your side too, just to ensure I am not
making any mistake in measurement. I've attached naively put together
scripts which I used to run the benchmark. If you find them useful, please
adjust the paths and run on your machine.

JFYI, looking at your postgresql.conf: I have synchronous_commit = off,
but the same is on with your run (for logged tables); the rest remains
the same. Once I get the machine, probably next morning, I will run the
same tests on v19.
--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#156Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#150)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Mar 22, 2017 at 4:06 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Mar 21, 2017 at 6:47 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

Please find attached rebased patches.

Few comments on 0005_warm_updates_v18.patch:

Few more comments on 0005_warm_updates_v18.patch:
1.
@@ -234,6 +241,25 @@ index_beginscan(Relation heapRelation,
scan->heapRelation = heapRelation;
scan->xs_snapshot = snapshot;

+ /*
+  * If the index supports recheck, make sure that index tuple is saved
+  * during index scans. Also build and cache IndexInfo which is used by
+  * amrecheck routine.
+  *
+  * XXX Ideally, we should look at all indexes on the table and check if
+  * WARM is at all supported on the base table. If WARM is not supported
+  * then we don't need to do any recheck. RelationGetIndexAttrBitmap() does
+  * do that and sets rd_supportswarm after looking at all indexes. But we
+  * don't know if the function was called earlier in the session when we're
+  * here. We can't call it now because there exists a risk of causing
+  * deadlock.
+  */
+ if (indexRelation->rd_amroutine->amrecheck)
+ {
+ 	scan->xs_want_itup = true;
+ 	scan->indexInfo = BuildIndexInfo(indexRelation);
+ }
+

Don't we need to do this rechecking during parallel scans? Also what
about bitmap heap scans?

2.
+++ b/src/backend/access/nbtree/nbtinsert.c
-
 typedef struct

The above change is not required.

3.
+_bt_clear_items(Page page, OffsetNumber *clearitemnos, uint16 nclearitems)
+void _hash_clear_items(Page page, OffsetNumber *clearitemnos,
+   uint16 nclearitems)

Both the above functions look exactly the same; isn't it better to have a
single function like page_clear_items? If you want separation for
different index types, then we can have one common function which can
be called from different index types.

4.
- if (callback(htup, callback_state))
+ flags = ItemPointerGetFlags(&itup->t_tid);
+ is_warm = ((flags & BTREE_INDEX_WARM_POINTER) != 0);
+
+ if (is_warm)
+ stats->num_warm_pointers++;
+ else
+ stats->num_clear_pointers++;
+
+ result = callback(htup, is_warm, callback_state);
+ if (result == IBDCR_DELETE)
+ {
+ if (is_warm)
+ stats->warm_pointers_removed++;
+ else
+ stats->clear_pointers_removed++;

The patch looks to be inconsistent in collecting stats for btree and
hash. I don't see the above stats getting updated in the hash index code.

5.
+btrecheck(Relation indexRel, IndexInfo *indexInfo, IndexTuple indexTuple,
+ Relation heapRel, HeapTuple heapTuple)
{
..
+ if (!datumIsEqual(values[i - 1], indxvalue, att->attbyval,
+ att->attlen))
..
}

Will this work if the index is using non-default collation?

6.
+++ b/src/backend/access/nbtree/nbtxlog.c
@@ -390,83 +390,9 @@ btree_xlog_vacuum(XLogReaderState *record)
-#ifdef UNUSED
  xl_btree_vacuum *xlrec = (xl_btree_vacuum *) XLogRecGetData(record);

/*
- * This section of code is thought to be no longer needed, after analysis
- * of the calling paths. It is retained to allow the code to be reinstated
- * if a flaw is revealed in that thinking.
- *
..

Why does this patch need to remove the above code under #ifdef UNUSED?

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#157Amit Kapila
amit.kapila16@gmail.com
In reply to: Pavan Deolasee (#154)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Mar 23, 2017 at 12:19 AM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

Ok, no problem. I did some tests on AWS i2.xlarge instance (4 vCPU, 30GB
RAM, attached SSD) and results are shown below. But I think it is important
to get independent validation from your side too, just to ensure I am not
making any mistake in measurement. I've attached naively put together
scripts which I used to run the benchmark. If you find them useful, please
adjust the paths and run on your machine.

I reverted back to UNLOGGED table because with WAL the results looked very
weird (as posted earlier) even when I was taking a CHECKPOINT before each
set and had set max_wal_size and checkpoint_timeout high enough to avoid any
checkpoint during the run. Anyways, that's a matter of separate
investigation and not related to this patch.

I did two kinds of tests.
a) update last column of the index
b) update second column of the index

v19 does considerably better than even master for the last column update
case and pretty much inline for the second column update test. The reason is
very clear because v19 determines early in the cycle that the buffer is
already full and there is very little chance of doing a HOT update on the
page. In that case, it does not check any columns for modification.

That sounds like you are dodging the actual problem. I mean you can
put that same PageIsFull() check in master code as well and then you
will most probably again see the same regression. Also, I think if we
test at fillfactor 80 or 75 (which is not unrealistic considering an
update-intensive workload), then we might again see regression.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#158Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Amit Kapila (#150)
1 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Mar 22, 2017 at 4:06 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Tue, Mar 21, 2017 at 6:47 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

Please find attached rebased patches.

Few comments on 0005_warm_updates_v18.patch:

Thanks a lot Amit for review comments.

1.
@@ -806,20 +835,35 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
{
..
- if (callback && callback(htup, callback_state))
+ if (callback)
  {
- kill_tuple = true;
- if (tuples_removed)
- *tuples_removed += 1;
+ result = callback(htup, is_warm, callback_state);
+ if (result == IBDCR_DELETE)
+ {
+ kill_tuple = true;
+ if (tuples_removed)
+ *tuples_removed += 1;
+ }
+ else if (result == IBDCR_CLEAR_WARM)
+ {
+ clear_tuple = true;
+ }
  }
  else if (split_cleanup)
..
}

I think this will break the existing mechanism of split cleanup. We
need to check for split cleanup if the tuple is not deletable
by the callback. This is not merely an optimization but a must
condition because we will clear the split-cleanup flag after this
bucket is scanned completely.

Ok, I see. Fixed, but please check if this looks good.

2.
- PageIndexMultiDelete(page, deletable, ndeletable);
+ /*
+  * Clear the WARM pointers.
+  *
+  * We must do this before dealing with the dead items because
+  * PageIndexMultiDelete may move items around to compactify the
+  * array and hence offnums recorded earlier won't make any sense
+  * after PageIndexMultiDelete is called.
+  */
+ if (nclearwarm > 0)
+ 	_hash_clear_items(page, clearwarm, nclearwarm);
+
+ /*
+  * And delete the deletable items
+  */
+ if (ndeletable > 0)
+ 	PageIndexMultiDelete(page, deletable, ndeletable);

I think this assumes that the items where we need to clear warm flag
are not deletable, otherwise what is the need to clear the flag if we
are going to delete the tuple. The deletable tuple can have a warm
flag if it is deletable due to split cleanup.

Yes. Since the callback will either say IBDCR_DELETE or IBDCR_CLEAR_WARM, I
don't think we will ever have a situation where a tuple is deleted as well
as cleared. I also checked that the bucket split code should carry the WARM
flag correctly to the new bucket.

Based on your first comment, I believe the rearranged code will take care
of deleting a tuple even if the WARM flag is set, if the deletion is
because of a bucket split.

3.

+ /*
+  * HASH indexes compute a hash value of the key and store that in the
+  * index. So we must first obtain the hash of the value obtained from the
+  * heap and then do a comparison.
+  */
+ _hash_convert_tuple(indexRel, values, isnull, values2, isnull2);

I think here, you need to handle the case where heap has a NULL value
as the hash index doesn't contain NULL values, otherwise, the code in
below function can return true which is not right.

I think we can simply conclude that hashrecheck has failed the equality if
the heap has a NULL value, because such a tuple should not have been
reached via the hash index unless a non-NULL hash key was later updated to
a NULL key, right?

4.
+bool
+hashrecheck(Relation indexRel, IndexInfo *indexInfo, IndexTuple
indexTuple,
+ Relation heapRel, HeapTuple heapTuple)
{
..
+ att = indexRel->rd_att->attrs[i - 1];
+ if (!datumIsEqual(values2[i - 1], indxvalue, att->attbyval,
+ att->attlen))
+ {
+ equal = false;
+ break;
+ }
..
}

Hash values are always uint32 and attlen can be different for
different datatypes, so I think above doesn't seem to be the right way
to do the comparison.

Since we're referring to the attr from the index relation, wouldn't it tell
us the attribute specs of what gets stored in the index and not what's
there in the heap? I could be wrong, but some quick tests show me that
pg_attribute->attlen for the index relation always returns 4, irrespective
of the underlying data type in the heap.

5.
@@ -274,6 +301,8 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
  OffsetNumber offnum;
  ItemPointer current;
  bool res;
+ IndexTuple itup;

  /* Hash indexes are always lossy since we store only the hash code */
  scan->xs_recheck = true;
@@ -316,8 +345,6 @@ hashgettuple(IndexScanDesc scan, ScanDirection dir)
  offnum <= maxoffnum;
  offnum = OffsetNumberNext(offnum))
  {
- IndexTuple itup;
-

Why above change?

Seems spurious. Fixed.

6.
+ *stats = index_bulk_delete(&ivinfo, *stats,
+ 		lazy_indexvac_phase1, (void *) vacrelstats);
+ ereport(elevel,
+ 		(errmsg("scanned index \"%s\" to remove %d row version, found "
+ 				"%0.f warm pointers, %0.f clear pointers, removed "
+ 				"%0.f warm pointers, removed %0.f clear pointers",
+ 				RelationGetRelationName(indrel),
+ 				vacrelstats->num_dead_tuples,
+ 				(*stats)->num_warm_pointers,
+ 				(*stats)->num_clear_pointers,
+ 				(*stats)->warm_pointers_removed,
+ 				(*stats)->clear_pointers_removed)));
+
+ (*stats)->num_warm_pointers = 0;
+ (*stats)->num_clear_pointers = 0;
+ (*stats)->warm_pointers_removed = 0;
+ (*stats)->clear_pointers_removed = 0;
+ (*stats)->pointers_cleared = 0;
+
+ *stats = index_bulk_delete(&ivinfo, *stats,
+ 		lazy_indexvac_phase2, (void *) vacrelstats);

To convert WARM chains, we need to do two index passes for all the
indexes. I think it can substantially increase the random I/O. I
think this can help us in doing more WARM updates, but I don't see how
the downside of that (increased random I/O) will be acceptable for all
kind of cases.

Yes, this is a very fair point. The way I proposed to address this upthread
is by introducing a set of threshold/scale GUCs specific to WARM, so users
can control when to invoke WARM cleanup. Only if the WARM cleanup is
required do we do 2 index scans. Otherwise vacuum will work the way it
works today, without any additional overhead.

We already have some intelligence to skip the second index scan if we did
not find any WARM candidate chains during the first heap scan. This should
take care of the majority of users, who never update their indexed columns.
For others, we need either a knob or some built-in way to deduce whether to
do WARM cleanup or not.

Does that seem worthwhile?
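
To make that concrete, the knobs could look something like the autovacuum
threshold/scale pair (these names are purely illustrative; no such GUCs
exist in the patch as posted):

```
# Hypothetical settings: trigger the extra WARM-cleanup index passes only
# when at least 1000 WARM chains exist and they exceed 10% of the table.
warm_cleanup_threshold = 1000
warm_cleanup_scale_factor = 0.1
```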

+exists. Since index vacuum may visit these pointers in any order, we will need
+another index pass to remove dead index pointers. So in the first index pass we
+check which WARM candidates have 2 index pointers. In the second pass, we
+remove the dead pointer and clear the INDEX_WARM_POINTER flag if that's the
+surviving index pointer.

I think there is some mismatch between README and code. In README, it
is mentioned that dead pointers will be removed in the second phase,
but I think the first phase code lazy_indexvac_phase1() will also
allow to delete the dead pointers (it can return IBDCR_DELETE which
will allow index am to remove dead items.). Am I missing something
here?

Hmm, fixed the README. Clearly we do allow removal of dead pointers which
are known to be certainly dead in the first index pass itself. Some other
pointers can be removed during the second scan, once we know about the
existence or non-existence of WARM index pointers.

7.
+ * For CLEAR chains, we just kill the WARM pointer, if it exist,s and keep
+ * the CLEAR pointer.

typo (exist,s)

Fixed.

8.
+/*
+ * lazy_indexvac_phase2() -- run first pass of index vacuum

Shouldn't this be -- run the second pass

Yes, fixed.

9.
- indexInfo); /* index AM may need this */
+indexInfo, /* index AM may need this */
+(modified_attrs != NULL)); /* type of uniqueness check to do */

comment for the last parameter seems to be wrong.

Fixed.

10.
+follow the update chain everytime to the end to see check if this is a
WARM
+chain.

"see check" - seems one of those words is sufficient to explain the
meaning.

Fixed.

11.
+chain. This simplifies the design and addresses certain issues around
+duplicate scans.

"duplicate scans" - shouldn't be duplicate key scans.

Ok, seems better. Fixed.

12.
+index on the table, irrespective of whether the key pertaining to the
+index changed or not.

typo.
/index changed/index is changed

Fixed.

13.
+For example, if we have a table with two columns and two indexes on each
+of the column. When a tuple is first inserted the table, we have exactly

typo.
/inserted the table/inserted in the table

Fixed.

14.
+ lp [1]  [2]
+ [1111, aaaa]->[111, bbbb]

Here, after the update, the first column should be 1111.

Fixed.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0005_warm_updates_v19.patchapplication/octet-stream; name=0005_warm_updates_v19.patchDownload
diff --git b/contrib/bloom/blutils.c a/contrib/bloom/blutils.c
index f2eda67..b356e2b 100644
--- b/contrib/bloom/blutils.c
+++ a/contrib/bloom/blutils.c
@@ -142,6 +142,7 @@ blhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git b/contrib/bloom/blvacuum.c a/contrib/bloom/blvacuum.c
index 04abd0f..ff50361 100644
--- b/contrib/bloom/blvacuum.c
+++ a/contrib/bloom/blvacuum.c
@@ -88,7 +88,7 @@ blbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 		while (itup < itupEnd)
 		{
 			/* Do we have to delete this tuple? */
-			if (callback(&itup->heapPtr, callback_state))
+			if (callback(&itup->heapPtr, false, callback_state) == IBDCR_DELETE)
 			{
 				/* Yes; adjust count of tuples that will be left on page */
 				BloomPageGetOpaque(page)->maxoff--;
diff --git b/src/backend/access/brin/brin.c a/src/backend/access/brin/brin.c
index b22563b..b4a1465 100644
--- b/src/backend/access/brin/brin.c
+++ a/src/backend/access/brin/brin.c
@@ -116,6 +116,7 @@ brinhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git b/src/backend/access/gin/ginvacuum.c a/src/backend/access/gin/ginvacuum.c
index c9ccfee..8ed71c5 100644
--- b/src/backend/access/gin/ginvacuum.c
+++ a/src/backend/access/gin/ginvacuum.c
@@ -56,7 +56,8 @@ ginVacuumItemPointers(GinVacuumState *gvs, ItemPointerData *items,
 	 */
 	for (i = 0; i < nitem; i++)
 	{
-		if (gvs->callback(items + i, gvs->callback_state))
+		if (gvs->callback(items + i, false, gvs->callback_state) ==
+				IBDCR_DELETE)
 		{
 			gvs->result->tuples_removed += 1;
 			if (!tmpitems)
diff --git b/src/backend/access/gist/gist.c a/src/backend/access/gist/gist.c
index 6593771..843389b 100644
--- b/src/backend/access/gist/gist.c
+++ a/src/backend/access/gist/gist.c
@@ -94,6 +94,7 @@ gisthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git b/src/backend/access/gist/gistvacuum.c a/src/backend/access/gist/gistvacuum.c
index 77d9d12..0955db6 100644
--- b/src/backend/access/gist/gistvacuum.c
+++ a/src/backend/access/gist/gistvacuum.c
@@ -202,7 +202,8 @@ gistbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 				iid = PageGetItemId(page, i);
 				idxtuple = (IndexTuple) PageGetItem(page, iid);
 
-				if (callback(&(idxtuple->t_tid), callback_state))
+				if (callback(&(idxtuple->t_tid), false, callback_state) ==
+						IBDCR_DELETE)
 					todelete[ntodelete++] = i;
 				else
 					stats->num_index_tuples += 1;
diff --git b/src/backend/access/hash/hash.c a/src/backend/access/hash/hash.c
index cfcec34..a930a1c 100644
--- b/src/backend/access/hash/hash.c
+++ a/src/backend/access/hash/hash.c
@@ -75,6 +75,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->ambuild = hashbuild;
 	amroutine->ambuildempty = hashbuildempty;
 	amroutine->aminsert = hashinsert;
+	amroutine->amwarminsert = hashwarminsert;
 	amroutine->ambulkdelete = hashbulkdelete;
 	amroutine->amvacuumcleanup = hashvacuumcleanup;
 	amroutine->amcanreturn = NULL;
@@ -92,6 +93,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = hashrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -233,11 +235,11 @@ hashbuildCallback(Relation index,
  *	Hash on the heap tuple's key, form an index tuple with hash code.
  *	Find the appropriate location for the new tuple, and put it there.
  */
-bool
-hashinsert(Relation rel, Datum *values, bool *isnull,
+static bool
+hashinsert_internal(Relation rel, Datum *values, bool *isnull,
 		   ItemPointer ht_ctid, Relation heapRel,
 		   IndexUniqueCheck checkUnique,
-		   IndexInfo *indexInfo)
+		   IndexInfo *indexInfo, bool warm_update)
 {
 	Datum		index_values[1];
 	bool		index_isnull[1];
@@ -253,6 +255,11 @@ hashinsert(Relation rel, Datum *values, bool *isnull,
 	itup = index_form_tuple(RelationGetDescr(rel), index_values, index_isnull);
 	itup->t_tid = *ht_ctid;
 
+	if (warm_update)
+		ItemPointerSetFlags(&itup->t_tid, HASH_INDEX_WARM_POINTER);
+	else
+		ItemPointerClearFlags(&itup->t_tid);
+
 	_hash_doinsert(rel, itup, heapRel);
 
 	pfree(itup);
@@ -260,6 +267,26 @@ hashinsert(Relation rel, Datum *values, bool *isnull,
 	return false;
 }
 
+bool
+hashinsert(Relation rel, Datum *values, bool *isnull,
+		   ItemPointer ht_ctid, Relation heapRel,
+		   IndexUniqueCheck checkUnique,
+		   IndexInfo *indexInfo)
+{
+	return hashinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, false);
+}
+
+bool
+hashwarminsert(Relation rel, Datum *values, bool *isnull,
+		   ItemPointer ht_ctid, Relation heapRel,
+		   IndexUniqueCheck checkUnique,
+		   IndexInfo *indexInfo)
+{
+	return hashinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, true);
+}
 
 /*
  *	hashgettuple() -- Get the next tuple in the scan.
@@ -536,6 +563,9 @@ hashbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 	Relation	rel = info->index;
 	double		tuples_removed;
 	double		num_index_tuples;
+	double		warm_pointers_removed;
+	double		clear_pointers_removed;
+	double		pointers_cleared;
 	double		orig_ntuples;
 	Bucket		orig_maxbucket;
 	Bucket		cur_maxbucket;
@@ -544,7 +574,8 @@ hashbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 	HashMetaPage metap;
 	HashMetaPage cachedmetap;
 
-	tuples_removed = 0;
+	tuples_removed = warm_pointers_removed = clear_pointers_removed = 0;
+	pointers_cleared = 0;
 	num_index_tuples = 0;
 
 	/*
@@ -627,7 +658,9 @@ loop_top:
 						  cachedmetap->hashm_maxbucket,
 						  cachedmetap->hashm_highmask,
 						  cachedmetap->hashm_lowmask, &tuples_removed,
-						  &num_index_tuples, split_cleanup,
+						  &num_index_tuples, &warm_pointers_removed,
+						  &clear_pointers_removed, &pointers_cleared,
+						  split_cleanup,
 						  callback, callback_state);
 
 		_hash_dropbuf(rel, bucket_buf);
@@ -709,6 +742,8 @@ loop_top:
 	stats->estimated_count = false;
 	stats->num_index_tuples = num_index_tuples;
 	stats->tuples_removed += tuples_removed;
+	stats->warm_pointers_removed += warm_pointers_removed;
+	stats->clear_pointers_removed += clear_pointers_removed;
 	/* hashvacuumcleanup will fill in num_pages */
 
 	return stats;
@@ -764,6 +799,9 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 				  BlockNumber bucket_blkno, BufferAccessStrategy bstrategy,
 				  uint32 maxbucket, uint32 highmask, uint32 lowmask,
 				  double *tuples_removed, double *num_index_tuples,
+				  double *warm_pointers_removed,
+				  double *clear_pointers_removed,
+				  double *pointers_cleared,
 				  bool split_cleanup,
 				  IndexBulkDeleteCallback callback, void *callback_state)
 {
@@ -789,6 +827,8 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 		Page		page;
 		OffsetNumber deletable[MaxOffsetNumber];
 		int			ndeletable = 0;
+		OffsetNumber clearwarm[MaxOffsetNumber];
+		int			nclearwarm = 0;
 		bool		retain_pin = false;
 
 		vacuum_delay_point();
@@ -806,22 +846,44 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 			IndexTuple	itup;
 			Bucket		bucket;
 			bool		kill_tuple = false;
+			bool		clear_tuple = false;
+			int			flags;
+			bool		is_warm;
+			IndexBulkDeleteCallbackResult	result;
 
 			itup = (IndexTuple) PageGetItem(page,
 											PageGetItemId(page, offno));
 			htup = &(itup->t_tid);
 
+			flags = ItemPointerGetFlags(&itup->t_tid);
+			is_warm = ((flags & HASH_INDEX_WARM_POINTER) != 0);
+
 			/*
 			 * To remove the dead tuples, we strictly want to rely on results
 			 * of callback function.  refer btvacuumpage for detailed reason.
 			 */
-			if (callback && callback(htup, callback_state))
+			if (callback)
 			{
-				kill_tuple = true;
-				if (tuples_removed)
-					*tuples_removed += 1;
+				result = callback(htup, is_warm, callback_state);
+				if (result == IBDCR_DELETE)
+				{
+					kill_tuple = true;
+					if (tuples_removed)
+						*tuples_removed += 1;
+					if (is_warm && warm_pointers_removed)
+						*warm_pointers_removed += 1;
+					else if (!is_warm && clear_pointers_removed)
+						*clear_pointers_removed += 1;
+				}
+				else if (result == IBDCR_CLEAR_WARM)
+				{
+					clear_tuple = true;
+					if (pointers_cleared)
+						*pointers_cleared += 1;
+				}
 			}
-			else if (split_cleanup)
+
+			if (!kill_tuple && split_cleanup)
 			{
 				/* delete the tuples that are moved by split. */
 				bucket = _hash_hashkey2bucket(_hash_get_indextuple_hashkey(itup),
@@ -849,6 +911,9 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 			}
 			else
 			{
+				if (clear_tuple)
+					/* clear the WARM pointer */
+					clearwarm[nclearwarm++] = offno;
 				/* we're keeping it, so count it */
 				if (num_index_tuples)
 					*num_index_tuples += 1;
@@ -866,12 +931,27 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 		/*
 		 * Apply deletions, advance to next page and write page if needed.
 		 */
-		if (ndeletable > 0)
+		if (ndeletable > 0 || nclearwarm > 0)
 		{
 			/* No ereport(ERROR) until changes are logged */
 			START_CRIT_SECTION();
 
-			PageIndexMultiDelete(page, deletable, ndeletable);
+			/*
+			 * Clear the WARM pointers.
+			 *
+			 * We must do this before dealing with the dead items because
+			 * PageIndexMultiDelete may move items around to compactify the
+			 * array and hence offnums recorded earlier won't make any sense
+			 * after PageIndexMultiDelete is called.
+			 */
+			if (nclearwarm > 0)
+				_hash_clear_items(page, clearwarm, nclearwarm);
+
+			/*
+			 * And delete the deletable items
+			 */
+			if (ndeletable > 0)
+				PageIndexMultiDelete(page, deletable, ndeletable);
 			bucket_dirty = true;
 
 			/*
@@ -892,6 +972,7 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 				XLogRecPtr	recptr;
 
 				xlrec.is_primary_bucket_page = (buf == bucket_buf) ? true : false;
+				xlrec.nclearitems = nclearwarm;
 
 				XLogBeginInsert();
 				XLogRegisterData((char *) &xlrec, SizeOfHashDelete);
@@ -904,6 +985,8 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 					XLogRegisterBuffer(0, bucket_buf, REGBUF_STANDARD | REGBUF_NO_IMAGE);
 
 				XLogRegisterBuffer(1, buf, REGBUF_STANDARD);
+				XLogRegisterBufData(1, (char *) clearwarm,
+									nclearwarm * sizeof(OffsetNumber));
 				XLogRegisterBufData(1, (char *) deletable,
 									ndeletable * sizeof(OffsetNumber));
 
diff --git b/src/backend/access/hash/hash_xlog.c a/src/backend/access/hash/hash_xlog.c
index 8647e8c..fe89ee1 100644
--- b/src/backend/access/hash/hash_xlog.c
+++ a/src/backend/access/hash/hash_xlog.c
@@ -840,6 +840,7 @@ hash_xlog_delete(XLogReaderState *record)
 	/* replay the record for deleting entries in bucket page */
 	if (action == BLK_NEEDS_REDO)
 	{
+		uint16		nclearwarm = xldata->nclearitems;
 		char	   *ptr;
 		Size		len;
 
@@ -849,12 +850,17 @@ hash_xlog_delete(XLogReaderState *record)
 
 		if (len > 0)
 		{
+			OffsetNumber *clearwarm;
 			OffsetNumber *unused;
 			OffsetNumber *unend;
 
-			unused = (OffsetNumber *) ptr;
+			clearwarm = (OffsetNumber *) ptr;
+			unused = clearwarm + nclearwarm;
 			unend = (OffsetNumber *) ((char *) ptr + len);
 
+			if (nclearwarm)
+				_hash_clear_items(page, clearwarm, nclearwarm);
+
 			if ((unend - unused) > 0)
 				PageIndexMultiDelete(page, unused, unend - unused);
 		}
diff --git b/src/backend/access/hash/hashpage.c a/src/backend/access/hash/hashpage.c
index 622cc4b..c349b28 100644
--- b/src/backend/access/hash/hashpage.c
+++ a/src/backend/access/hash/hashpage.c
@@ -754,8 +754,8 @@ restart_expand:
 		LockBuffer(metabuf, BUFFER_LOCK_UNLOCK);
 
 		hashbucketcleanup(rel, old_bucket, buf_oblkno, start_oblkno, NULL,
-						  maxbucket, highmask, lowmask, NULL, NULL, true,
-						  NULL, NULL);
+						  maxbucket, highmask, lowmask, NULL, NULL, NULL,
+						  NULL, NULL, true, NULL, NULL);
 
 		_hash_dropbuf(rel, buf_oblkno);
 
@@ -1576,3 +1576,13 @@ _hash_getbucketbuf_from_hashkey(Relation rel, uint32 hashkey, int access,
 
 	return buf;
 }
+
+/*
+ * Currently just a wrapper around PageIndexClearWarmTuples, but in theory each
+ * index AM may have its own way to handle WARM tuples.
+ */
+void
+_hash_clear_items(Page page, OffsetNumber *clearitemnos,
+					   uint16 nclearitems)
+{
+	PageIndexClearWarmTuples(page, clearitemnos, nclearitems);
+}
diff --git b/src/backend/access/hash/hashsearch.c a/src/backend/access/hash/hashsearch.c
index 2d92049..330ccc5 100644
--- b/src/backend/access/hash/hashsearch.c
+++ a/src/backend/access/hash/hashsearch.c
@@ -59,6 +59,8 @@ _hash_next(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
 	return true;
 }
 
@@ -367,6 +369,9 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
+
 	return true;
 }
 
diff --git b/src/backend/access/hash/hashutil.c a/src/backend/access/hash/hashutil.c
index 2e99719..caac8a9 100644
--- b/src/backend/access/hash/hashutil.c
+++ a/src/backend/access/hash/hashutil.c
@@ -17,9 +17,11 @@
 #include "access/hash.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
 #include "utils/lsyscache.h"
 #include "utils/rel.h"
 #include "storage/buf_internals.h"
+#include "utils/datum.h"
 
 #define CALC_NEW_BUCKET(old_bucket, lowmask) \
 			old_bucket | (lowmask + 1)
@@ -514,3 +516,77 @@ _hash_kill_items(IndexScanDesc scan)
 		MarkBufferDirtyHint(so->hashso_curbuf, true);
 	}
 }
+
+/*
+ * Recheck if the heap tuple satisfies the key stored in the index tuple
+ */
+bool
+hashrecheck(Relation indexRel, IndexInfo *indexInfo, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	bool		isavail[INDEX_MAX_KEYS];
+	Datum		values2[INDEX_MAX_KEYS];
+	bool		isnull2[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	FormIndexPlainDatum(indexInfo, heapRel, heapTuple, values, isnull, isavail);
+
+	/*
+	 * HASH indexes compute a hash value of the key and store that in the
+	 * index. So we must first obtain the hash of the value obtained from the
+	 * heap and then do a comparison.
+	 *
+	 * If the heap value is NULL then the recheck must fail: the hash index
+	 * never inserts any entry for a NULL value, so a NULL in the heap means
+	 * the tuple cannot satisfy this index key.
+	 */
+	if (!_hash_convert_tuple(indexRel, values, isnull, values2, isnull2))
+		return false;
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		if (!isavail[i - 1])
+			continue;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL then they are equal
+		 */
+		if (isnull2[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If either is NULL then they are not equal
+		 */
+		if (isnull2[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now do a raw memory comparison
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values2[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	return equal;
+}
diff --git b/src/backend/access/heap/README.WARM a/src/backend/access/heap/README.WARM
new file mode 100644
index 0000000..7c93a70
--- /dev/null
+++ a/src/backend/access/heap/README.WARM
@@ -0,0 +1,308 @@
+src/backend/access/heap/README.WARM
+
+Write Amplification Reduction Method (WARM)
+===========================================
+
+The Heap Only Tuple (HOT) feature greatly reduces redundant index
+entries and allows re-use of the dead space occupied by previously
+updated or deleted tuples (see src/backend/access/heap/README.HOT).
+
+One of the necessary conditions for satisfying HOT update is that the
+update must not change a column used in any of the indexes on the table.
+The condition is sometimes hard to meet, especially for complex
+workloads with several indexes on large yet frequently updated tables.
+Worse, sometimes only one or two index columns may be updated, but the
+regular non-HOT update will still insert a new index entry in every
+index on the table, irrespective of whether the key pertaining to the
+index is changed or not.
+
+WARM is a technique devised to address these problems.
+
+
+Update Chains With Multiple Index Entries Pointing to the Root
+--------------------------------------------------------------
+
+When a non-HOT update is caused by an index key change, a new index
+entry must be inserted for the changed index. But if the index key
+hasn't changed for other indexes, we don't really need to insert a new
+entry. Even though the existing index entry is pointing to the old
+tuple, the new tuple is reachable via the t_ctid chain. To keep things
+simple, a WARM update requires that the heap block must have enough
+space to store the new version of the tuple. This is same as HOT
+updates.
+
+In WARM, we ensure that every index entry always points to the root of
+the WARM chain. In fact, a WARM chain looks exactly like a HOT chain
+except for the fact that there could be multiple index entries pointing
+to the root of the chain. So when a new entry is inserted in an index
+for the updated tuple during a WARM update, the new entry is made to
+point to the root of the WARM chain.
+
+For example, consider a table with two columns and one index on each
+column. When a tuple is first inserted into the table, each index has
+exactly one entry pointing to the tuple.
+
+	lp [1]
+	[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's entry (aaaa) also points to 1
+
+Now if the tuple's second column is updated and there is room on the
+page, we perform a WARM update. Index1 does not get any new entry, and
+Index2's new entry still points to the root tuple of the chain.
+
+	lp [1]  [2]
+	[1111, aaaa]->[1111, bbbb]
+
+	Index1's entry (1111) points to 1
+	Index2's old entry (aaaa) points to 1
+	Index2's new entry (bbbb) also points to 1
+
+"An update chain which has more than one index entry pointing to its
+root line pointer is called a WARM chain, and the action that creates a
+WARM chain is called a WARM update."
+
+Since all index entries always point to the root of the WARM chain, even
+when an index has more than one entry for it, WARM chains can be pruned
+and dead tuples can be removed without a need for corresponding index
+cleanup.
+
+While this solves the problem of pruning dead tuples from a HOT/WARM
+chain, it also opens up a new technical challenge because now we have a
+situation where a heap tuple is reachable from multiple index entries,
+each having a different index key. While MVCC still ensures that only
+valid tuples are returned, a tuple with a wrong index key may be
+returned because of wrong index entries. In the above example, tuple
+[1111, bbbb] is reachable from both keys (aaaa) as well as (bbbb). For
+this reason, tuples returned from a WARM chain must always be rechecked
+for index key-match.
+
+Recheck Index Key Against Heap Tuple
+------------------------------------
+
+Since every index AM has its own notion of index tuples, each index AM
+must implement its own method to recheck heap tuples. For example, a
+hash index stores the hash value of the column, so the recheck routine
+for the hash AM must first compute the hash value of the heap attribute
+and then compare it against the value stored in the index tuple.
+
+The patch currently implements recheck routines for hash and btree
+indexes. If a table has an index whose AM doesn't provide a recheck
+routine, WARM updates are disabled on that table.
+
+Problem With Duplicate (key, ctid) Index Entries
+------------------------------------------------
+
+The index-key recheck logic works only as long as no two index entries
+with the same key point to the same WARM chain. Otherwise, the same
+valid tuple would be reachable via multiple index entries, each of which
+satisfies the index key check. In the above example, if the tuple
+[1111, bbbb] is again updated to [1111, aaaa] and we insert a new index
+entry (aaaa) pointing to the root line pointer, we end up with the
+following structure:
+
+	lp [1]  [2]  [3]
+	[1111, aaaa]->[1111, bbbb]->[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's oldest entry (aaaa) points to 1
+	Index2's old entry (bbbb) also points to 1
+	Index2's new entry (aaaa) also points to 1
+
+We must solve this problem to ensure that the same tuple is not
+reachable via multiple index pointers. There are a couple of ways to
+address this issue:
+
+1. Do not allow WARM update to a tuple from a WARM chain. This
+guarantees that there can never be duplicate index entries to the same
+root line pointer because we must have checked for old and new index
+keys while doing the first WARM update.
+
+2. Do not allow duplicate (key, ctid) index pointers. In the above
+example, since (aaaa, 1) already exists in the index, we must not insert
+a duplicate index entry.
+
+The patch currently implements option 1, i.e. it does not allow a WARM
+update to a tuple that is already part of a WARM chain. HOT updates are
+still fine because they do not add a new index entry.
+
+Even with this restriction, WARM is a significant improvement because
+the number of regular (non-HOT) updates can be cut in half.
+
+Expression and Partial Indexes
+------------------------------
+
+Expressions may evaluate to the same value even if the underlying column
+values have changed. A simple example is an index on "lower(col)", which
+returns the same value if the new heap value differs only in case. So we
+cannot rely solely on the heap column check to decide whether or not to
+insert a new index entry for expression indexes. Similarly, for partial
+indexes, the predicate expression must be evaluated to decide whether or
+not to insert a new index entry when columns referred to in the
+predicate change.
+
+(Neither is currently implemented; we simply disallow WARM updates if a
+column used by an expression index or an index predicate has changed.)
+
+
+Efficiently Finding the Root Line Pointer
+-----------------------------------------
+
+During WARM update, we must be able to find the root line pointer of the
+tuple being updated. It must be noted that the t_ctid field in the heap
+tuple header is usually used to find the next tuple in the update chain.
+But the tuple that we are updating must be the last tuple in the update
+chain, and in that case the t_ctid field normally points to the tuple
+itself. So in theory, we can use t_ctid to store additional information
+in the last tuple of the update chain, as long as the fact that the
+tuple is the last one is recorded elsewhere.
+
+We now utilize another bit from t_infomask2 to explicitly identify that
+this is the last tuple in the update chain.
+
+HEAP_LATEST_TUPLE - When this bit is set, the tuple is the last tuple in
+the update chain. The OffsetNumber part of t_ctid points to the root
+line pointer of the chain when HEAP_LATEST_TUPLE flag is set.
+
+If an UPDATE operation is aborted, the last tuple in the update chain
+becomes dead, and the root line pointer information is lost from the
+tuple that remains the last valid tuple in the chain. In such rare
+cases, the root line pointer must be found the hard way, by scanning
+the entire heap page.
+
+Tracking WARM Chains
+--------------------
+
+When a tuple is WARM updated, the old, the new and every subsequent tuple in
+the chain is marked with a special HEAP_WARM_UPDATED flag. We use the last
+remaining bit in t_infomask2 to store this information.
+
+When a tuple is returned from a WARM chain, the caller must do additional
+checks to ensure that the tuple matches the index key. Even if the tuple
+precedes the WARM update in the chain, it must still be rechecked for the index
+key match (this covers the case where an old tuple is returned via the new
+index key). So we must always follow the update chain to the end to check
+whether it is a WARM chain.
+
+Converting WARM chains back to HOT chains (VACUUM ?)
+----------------------------------------------------
+
+The current implementation of WARM allows only one WARM update per
+chain. This simplifies the design and addresses certain issues around
+duplicate key scans. But this also implies that the benefit of WARM will be
+no more than 50%, which is still significant, but if we could return
+WARM chains back to normal status, we could do far more WARM updates.
+
+A distinct property of a WARM chain is that at least one index has more
+than one live index entries pointing to the root of the chain. In other
+words, if we can remove duplicate entry from every index or conclusively
+prove that there are no duplicate index entries for the root line
+pointer, the chain can again be marked as HOT.
+
+Here is one idea:
+
+A WARM chain has two parts, separated by the tuple that caused the WARM
+update. All tuples in each part have matching index keys, but certain
+index keys may not match between the two parts. Let's say we mark heap
+tuples in the second part with a special HEAP_WARM_TUPLE flag. Similarly,
+the new index entries caused by the first WARM update are marked with the
+INDEX_WARM_POINTER flag.
+
+The WARM chain thus has two distinct parts: a first part where none of
+the tuples have the HEAP_WARM_TUPLE flag set, and a second part where
+every tuple has the flag set. Each part satisfies the HOT property on
+its own, i.e. all
+tuples have the same value for indexed columns. But these two parts are
+separated by the WARM update which breaks HOT property for one or more indexes.
+
+Heap chain: [1] [2] [3] [4]
+			[aaaa, 1111] -> [aaaa, 1111] -> [bbbb, 1111]W -> [bbbb, 1111]W
+
+Index1: 	(aaaa) points to 1 (satisfies only tuples without W)
+			(bbbb)W points to 1 (satisfies only tuples marked with W)
+
+Index2:		(1111) points to 1 (satisfies tuples with and without W)
+
+
+It's clear that for indexes with both pointers, a heap tuple without
+HEAP_WARM_TUPLE flag will be reachable from the index pointer cleared of
+INDEX_WARM_POINTER flag and that with HEAP_WARM_TUPLE flag will be reachable
+from the pointer with INDEX_WARM_POINTER. But for indexes which did not create
+a new entry, tuples with and without the HEAP_WARM_TUPLE flag will be reachable
+from the original index pointer which doesn't have the INDEX_WARM_POINTER flag.
+(there is no pointer with INDEX_WARM_POINTER in such indexes).
+
+During the first heap scan of VACUUM, we look for tuples with HEAP_WARM_UPDATED
+set.  If all or none of the live tuples in the chain are marked with
+HEAP_WARM_TUPLE flag, then the chain is a candidate for HOT conversion. We
+remember the root line pointer and whether the tuples in the chain had
+HEAP_WARM_TUPLE flags set or not.
+
+If we have a WARM chain with HEAP_WARM_TUPLE set, then our goal is to remove
+the index pointers without INDEX_WARM_POINTER flags and vice versa. But there
+is a catch. For Index2 above, there is only one pointer and it does not have
+the INDEX_WARM_POINTER flag set. Since all heap tuples are reachable only via
+this pointer, it must not be removed. In other words, we should remove an
+index pointer without INDEX_WARM_POINTER only if another index pointer with
+INDEX_WARM_POINTER exists. Since index vacuum may visit these pointers in
+any order, we will need
+another index pass to detect redundant index pointers, which can safely be
+removed because all live tuples are reachable via the other index pointer. So
+in the first index pass we check which WARM candidates have 2 index pointers.
+In the second pass, we remove the redundant pointer and clear the
+INDEX_WARM_POINTER flag if that's the surviving index pointer. Note that
+all index pointers, either CLEAR or WARM, to dead tuples are removed during the
+first index scan itself.
+
+During the second heap scan, we fix WARM chain by clearing HEAP_WARM_UPDATED
+and HEAP_WARM_TUPLE flags on tuples.
+
+There are some more problems around aborted vacuums. For example, if vacuum
+aborts after clearing INDEX_WARM_POINTER flag but before removing the other
+index pointer, we will end up with two index pointers and none of those will
+have INDEX_WARM_POINTER set.  But since the HEAP_WARM_UPDATED flag on the heap
+tuple is still set, further WARM updates to the chain will be blocked. We
+will need some special handling for the case of multiple index pointers
+where none of them has the INDEX_WARM_POINTER flag set. We can either
+leave these WARM chains alone and let them die with a subsequent non-WARM
+update, or apply heap-recheck logic during index vacuum to find the dead
+pointer. Given that vacuum aborts are not common, I am inclined to leave
+this case unhandled. We must still check for the presence of multiple
+index pointers without INDEX_WARM_POINTER flags, ensure that we don't
+accidentally remove either of these pointers, and must not clear such
+WARM chains.
+
+CREATE INDEX CONCURRENTLY
+-------------------------
+
+Currently CREATE INDEX CONCURRENTLY (CIC) is implemented as a 3-phase
+process.  In the first phase, we create a catalog entry for the new
+index so that the index is visible to all other backends, but we don't
+yet use it for either reads or writes.  We do, however, ensure that no
+new broken HOT chains are created by new transactions. In the second
+phase, we build the new index using an MVCC snapshot and then make the
+index available for inserts. We then do another pass over the index and
+insert any missing tuples, each time indexing only the root line
+pointer. See README.HOT for details about how HOT impacts CIC and how
+the various challenges are tackled.
+
+WARM poses another challenge because it allows creation of HOT chains
+even when an index key is changed. But since the index is not ready for
+insertion until the second phase is over, we might end up with a
+situation where the HOT chain has tuples with different values for the
+indexed columns, yet only one of those values is indexed by the new index.
+Note that
+during the third phase, we only index tuples whose root line pointer is
+missing from the index. But we can't easily check if the existing index
+tuple is actually indexing the heap tuple visible to the new MVCC
+snapshot. Finding that information will require us to query the index
+again for every tuple in the chain, especially if it's a WARM tuple.
+This would require repeated access to the index. Another option would be
+to return index keys along with the heap TIDs when index is scanned for
+collecting all indexed TIDs during third phase. We can then compare the
+heap tuple against the already indexed key and decide whether or not to
+index the new tuple.
+
+We solve this problem more simply by disallowing WARM updates until the
+index is ready for insertion. We don't need to disallow WARM wholesale;
+only those updates that change the columns of the new index are
+prevented from being WARM updates.
diff --git b/src/backend/access/heap/heapam.c a/src/backend/access/heap/heapam.c
index 26a7af4..c86fbc6 100644
--- b/src/backend/access/heap/heapam.c
+++ a/src/backend/access/heap/heapam.c
@@ -1974,6 +1974,206 @@ heap_fetch(Relation relation,
 }
 
 /*
+ * Check status of a (possibly) WARM chain.
+ *
+ * This function looks at a HOT/WARM chain starting at tid and returns a
+ * bitmask of information. We only follow the chain as long as it's known to
+ * be a valid HOT chain. The information returned by the function consists of:
+ *
+ *  HCWC_WARM_UPDATED_TUPLE - a tuple with HEAP_WARM_UPDATED is found somewhere
+ *  						  in the chain. Note that when a tuple is WARM
+ *  						  updated, both old and new versions are marked
+ *  						  with this flag.
+ *
+ *  HCWC_WARM_TUPLE  - a tuple with HEAP_WARM_TUPLE is found somewhere in
+ *					  the chain.
+ *
+ *  HCWC_CLEAR_TUPLE - a tuple without HEAP_WARM_TUPLE is found somewhere in
+ *  					 the chain.
+ *
+ *	If stop_at_warm is true, we stop when the first HEAP_WARM_UPDATED tuple is
+ *	found and return information collected so far.
+ */
+HeapCheckWarmChainStatus
+heap_check_warm_chain(Page dp, ItemPointer tid, bool stop_at_warm)
+{
+	TransactionId				prev_xmax = InvalidTransactionId;
+	OffsetNumber				offnum;
+	HeapTupleData				heapTuple;
+	HeapCheckWarmChainStatus	status = 0;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+		{
+			if (ItemIdIsRedirected(lp))
+			{
+				/* Follow the redirect */
+				offnum = ItemIdGetRedirect(lp);
+				continue;
+			}
+			/* else must be end of chain */
+			break;
+		}
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		if (HeapTupleHeaderIsWarmUpdated(heapTuple.t_data))
+		{
+			/* We found a WARM_UPDATED tuple */
+			status |= HCWC_WARM_UPDATED_TUPLE;
+
+			/*
+			 * If we've been told to stop at the first WARM_UPDATED tuple, just
+			 * return whatever information collected so far.
+			 */
+			if (stop_at_warm)
+				return status;
+
+			/*
+			 * Remember whether it's a CLEAR or a WARM tuple.
+			 */
+			if (HeapTupleHeaderIsWarm(heapTuple.t_data))
+				status |= HCWC_WARM_TUPLE;
+			else
+				status |= HCWC_CLEAR_TUPLE;
+		}
+		else
+			/* Must be a regular, non-WARM tuple */
+			status |= HCWC_CLEAR_TUPLE;
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
+		/*
+		 * A tuple that stores its root line pointer must be the end of the chain.
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	/* Return the status collected over the chain */
+	return status;
+}
+
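A reviewer's aside: the HCWC_* status bits above compose into a small bitmask
that callers test (e.g. via HCWC_IS_WARM_UPDATED() later in this patch). The
flag values and macro below are assumptions for illustration only; the real
definitions live in the patch's header changes, which are not shown in this
hunk.

```c
#include <assert.h>
#include <stdint.h>

/* Assumed flag values; the patch's header defines the real ones. */
typedef uint8_t HeapCheckWarmChainStatus;
#define HCWC_WARM_UPDATED_TUPLE	0x01	/* chain has a WARM-updated tuple */
#define HCWC_WARM_TUPLE			0x02	/* chain has a HEAP_WARM_TUPLE member */
#define HCWC_CLEAR_TUPLE		0x04	/* chain has a member without the flag */

#define HCWC_IS_WARM_UPDATED(s)	(((s) & HCWC_WARM_UPDATED_TUPLE) != 0)

/* A chain is fully converted once every member carries the WARM flag. */
static int
chain_fully_warm(HeapCheckWarmChainStatus s)
{
	return (s & HCWC_WARM_TUPLE) && !(s & HCWC_CLEAR_TUPLE);
}
```

The point of returning both HCWC_WARM_TUPLE and HCWC_CLEAR_TUPLE is that a
caller can distinguish a mixed chain (conversion still in progress) from one
where every member has been converted.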
+/*
+ * Scan through the WARM chain starting at tid and reset all WARM related
+ * flags. At the end, the chain will have all characteristics of a regular HOT
+ * chain.
+ *
+ * Return the number of cleared offnums. Cleared offnums are returned in the
+ * passed-in cleared_offnums array. The caller must ensure that the array is
+ * large enough to hold the maximum number of offnums that can be cleared by
+ * this invocation of heap_clear_warm_chain().
+ */
+int
+heap_clear_warm_chain(Page dp, ItemPointer tid, OffsetNumber *cleared_offnums)
+{
+	TransactionId				prev_xmax = InvalidTransactionId;
+	OffsetNumber				offnum;
+	HeapTupleData				heapTuple;
+	int							num_cleared = 0;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+		{
+			if (ItemIdIsRedirected(lp))
+			{
+				/* Follow the redirect */
+				offnum = ItemIdGetRedirect(lp);
+				continue;
+			}
+			/* else must be end of chain */
+			break;
+		}
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+		/*
+		 * Clear WARM_UPDATED and WARM flags.
+		 */
+		if (HeapTupleHeaderIsWarmUpdated(heapTuple.t_data))
+		{
+			HeapTupleHeaderClearWarmUpdated(heapTuple.t_data);
+			HeapTupleHeaderClearWarm(heapTuple.t_data);
+			cleared_offnums[num_cleared++] = offnum;
+		}
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
+		/*
+		 * A tuple that stores its root line pointer must be the end of the chain.
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	return num_cleared;
+}
+
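To make the clearing contract concrete, here is a minimal toy model of the
walk heap_clear_warm_chain() performs: follow the chain, clear the WARM bits,
and report which members were touched so the caller can WAL-log them. All
types, flag values, and names below are simplified stand-ins, not the patch's
actual structures.

```c
#include <assert.h>
#include <stdint.h>

#define WARM_FLAGS	0x30	/* stand-in for the two WARM infomask bits */
#define NO_NEXT		(-1)

/* Toy tuple: just an infomask and a forward chain link (array index). */
typedef struct
{
	uint16_t	infomask;
	int			next;
} ToyTuple;

/*
 * Walk the chain starting at 'start', clear the WARM bits, and record which
 * members were cleared, mirroring heap_clear_warm_chain()'s contract. The
 * caller must size 'cleared' for the worst case (every chain member).
 */
static int
clear_warm_chain(ToyTuple *tuples, int start, int *cleared)
{
	int			num_cleared = 0;

	for (int i = start; i != NO_NEXT; i = tuples[i].next)
	{
		if (tuples[i].infomask & WARM_FLAGS)
		{
			tuples[i].infomask &= ~WARM_FLAGS;
			cleared[num_cleared++] = i;
		}
	}
	return num_cleared;
}
```

The returned list is what the patch later feeds to log_heap_warmclear(), so
that redo can clear exactly the same offnums.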
+/*
  *	heap_hot_search_buffer	- search HOT chain for tuple satisfying snapshot
  *
  * On entry, *tid is the TID of a tuple (either a simple tuple, or the root
@@ -1993,11 +2193,14 @@ heap_fetch(Relation relation,
  * Unlike heap_fetch, the caller must already have pin and (at least) share
  * lock on the buffer; it is still pinned/locked at exit.  Also unlike
  * heap_fetch, we do not report any pgstats count; caller may do so if wanted.
+ *
+ * recheck should be set false on entry by caller, will be set true on exit
+ * if a WARM tuple is encountered.
  */
 bool
 heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 					   Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call)
+					   bool *all_dead, bool first_call, bool *recheck)
 {
 	Page		dp = (Page) BufferGetPage(buffer);
 	TransactionId prev_xmax = InvalidTransactionId;
@@ -2051,9 +2254,12 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		ItemPointerSetOffsetNumber(&heapTuple->t_self, offnum);
 
 		/*
-		 * Shouldn't see a HEAP_ONLY tuple at chain start.
+		 * Shouldn't see a HEAP_ONLY tuple at chain start, unless we are
+		 * dealing with a WARM-updated tuple, in which case deferred
+		 * triggers may fetch a WARM tuple from the middle of a chain.
 		 */
-		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple))
+		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple) &&
+				!HeapTupleIsWarmUpdated(heapTuple))
 			break;
 
 		/*
@@ -2066,6 +2272,20 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 			break;
 
 		/*
+		 * Check if there exists a WARM tuple somewhere down the chain and set
+		 * recheck to TRUE.
+		 *
+		 * XXX This is not very efficient right now, and we should look for
+		 * possible improvements here.
+		 */
+		if (recheck && !(*recheck))
+		{
+			HeapCheckWarmChainStatus status;
+
+			status = heap_check_warm_chain(dp, &heapTuple->t_self, true);
+			*recheck = HCWC_IS_WARM_UPDATED(status);
+		}
+
+		/*
 		 * When first_call is true (and thus, skip is initially false) we'll
 		 * return the first tuple we find.  But on later passes, heapTuple
 		 * will initially be pointing to the tuple we returned last time.
@@ -2114,7 +2334,8 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		 * Check to see if HOT chain continues past this tuple; if so fetch
 		 * the next offnum and loop around.
 		 */
-		if (HeapTupleIsHotUpdated(heapTuple))
+		if (HeapTupleIsHotUpdated(heapTuple) &&
+			!HeapTupleHeaderHasRootOffset(heapTuple->t_data))
 		{
 			Assert(ItemPointerGetBlockNumber(&heapTuple->t_data->t_ctid) ==
 				   ItemPointerGetBlockNumber(tid));
@@ -2138,18 +2359,41 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
  */
 bool
 heap_hot_search(ItemPointer tid, Relation relation, Snapshot snapshot,
-				bool *all_dead)
+				bool *all_dead, bool *recheck, Buffer *cbuffer,
+				HeapTuple heapTuple)
 {
 	bool		result;
 	Buffer		buffer;
-	HeapTupleData heapTuple;
+	ItemPointerData ret_tid = *tid;
 
 	buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	LockBuffer(buffer, BUFFER_LOCK_SHARE);
-	result = heap_hot_search_buffer(tid, relation, buffer, snapshot,
-									&heapTuple, all_dead, true);
-	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-	ReleaseBuffer(buffer);
+	result = heap_hot_search_buffer(&ret_tid, relation, buffer, snapshot,
+									heapTuple, all_dead, true, recheck);
+
+	/*
+	 * If we are returning a potential candidate tuple from this chain and the
+	 * caller has requested the "recheck" hint, keep the buffer locked and
+	 * pinned. In all such cases, the caller must release the lock and pin
+	 * on the buffer.
+	 */
+	if (!result || !recheck || !(*recheck))
+	{
+		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+		ReleaseBuffer(buffer);
+	}
+
+	/*
+	 * Update the caller-supplied tid with the actual location of the tuple
+	 * being returned.
+	 */
+	if (result)
+	{
+		*tid = ret_tid;
+		if (cbuffer)
+			*cbuffer = buffer;
+	}
+
 	return result;
 }
 
@@ -2792,7 +3036,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		{
 			XLogRecPtr	recptr;
 			xl_heap_multi_insert *xlrec;
-			uint8		info = XLOG_HEAP2_MULTI_INSERT;
+			uint8		info = XLOG_HEAP_MULTI_INSERT;
 			char	   *tupledata;
 			int			totaldatalen;
 			char	   *scratchptr = scratch;
@@ -2889,7 +3133,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			/* filtering by origin on a row level is much more efficient */
 			XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN);
 
-			recptr = XLogInsert(RM_HEAP2_ID, info);
+			recptr = XLogInsert(RM_HEAP_ID, info);
 
 			PageSetLSN(page, recptr);
 		}
@@ -3313,7 +3557,9 @@ l1:
 	}
 
 	/* store transaction information of xact deleting the tuple */
-	tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	tp.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(tp.t_data))
+		tp.t_data->t_infomask &= ~HEAP_MOVED;
 	tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 	tp.t_data->t_infomask |= new_infomask;
 	tp.t_data->t_infomask2 |= new_infomask2;
@@ -3508,15 +3754,18 @@ simple_heap_delete(Relation relation, ItemPointer tid)
 HTSU_Result
 heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode)
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update)
 {
 	HTSU_Result result;
 	TransactionId xid = GetCurrentTransactionId();
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *exprindx_attrs;
 	Bitmapset  *interesting_attrs;
 	Bitmapset  *modified_attrs;
+	Bitmapset  *notready_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3537,6 +3786,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		have_tuple_lock = false;
 	bool		iscombo;
 	bool		use_hot_update = false;
+	bool		use_warm_update = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
 	bool		all_visible_cleared_new = false;
@@ -3561,6 +3811,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot update tuples during a parallel operation")));
 
+	/* Assume a non-WARM update */
+	if (warm_update)
+		*warm_update = false;
+
 	/*
 	 * Fetch the list of attributes to be checked for various operations.
 	 *
@@ -3582,10 +3836,17 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	exprindx_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_EXPR_PREDICATE);
+	notready_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_NOTREADY);
+
 	interesting_attrs = bms_add_members(NULL, hot_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
-
+	interesting_attrs = bms_add_members(interesting_attrs, exprindx_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, notready_attrs);
 
 	block = ItemPointerGetBlockNumber(otid);
 	offnum = ItemPointerGetOffsetNumber(otid);
@@ -3637,6 +3898,9 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
 												  &oldtup, newtup);
 
+	if (modified_attrsp)
+		*modified_attrsp = bms_copy(modified_attrs);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3892,6 +4156,7 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(exprindx_attrs);
+		bms_free(notready_attrs);
 		bms_free(modified_attrs);
 		bms_free(interesting_attrs);
 		return result;
@@ -4057,7 +4322,9 @@ l2:
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
-		oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		oldtup.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(oldtup.t_data))
+			oldtup.t_data->t_infomask &= ~HEAP_MOVED;
 		oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		HeapTupleClearHotUpdated(&oldtup);
 		/* ... and store info about transaction updating this tuple */
@@ -4210,6 +4477,24 @@ l2:
 		 */
 		if (!bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
+		else
+		{
+			/*
+			 * If no WARM updates yet on this chain, let this update be a WARM
+			 * update.
+			 *
+			 * We check the WARM-updated flag because, even if a previous
+			 * WARM update aborted, we may already have added another index
+			 * entry for this HOT chain. In such situations, we must not
+			 * attempt another WARM update.
+			 */
+			if (relation->rd_supportswarm &&
+				!bms_overlap(modified_attrs, exprindx_attrs) &&
+				!bms_is_subset(hot_attrs, modified_attrs) &&
+				!bms_overlap(notready_attrs, modified_attrs) &&
+				!HeapTupleIsWarmUpdated(&oldtup))
+				use_warm_update = true;
+		}
 	}
 	else
 	{
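The eligibility test added above reduces to a handful of bitmapset conditions.
As a reading aid, here is a condensed stand-in using plain bitmasks in place
of Bitmapset; every name in this sketch is illustrative, and the bit layout is
an assumption, not the patch's representation.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Attribute sets as plain bitmasks standing in for Bitmapset. */
typedef struct
{
	uint32_t	modified;		/* columns changed by this UPDATE */
	uint32_t	hot_attrs;		/* columns used by any index */
	uint32_t	exprindx_attrs;	/* columns in expression/predicate indexes */
	uint32_t	notready_attrs;	/* columns of not-yet-ready indexes */
} UpdateAttrs;

/*
 * Mirrors the decision in heap_update(): HOT applies when no indexed column
 * changed; otherwise WARM applies only when the table supports it, no
 * expression or not-ready index is involved, at least one indexed column is
 * unchanged, and the chain has not already been WARM-updated.
 */
static bool
can_use_warm(const UpdateAttrs *a, bool supports_warm, bool already_warm)
{
	if (!(a->modified & a->hot_attrs))
		return false;			/* a plain HOT update applies instead */
	return supports_warm &&
		!(a->modified & a->exprindx_attrs) &&
		(a->hot_attrs & ~a->modified) != 0 &&	/* not all indexed cols changed */
		!(a->modified & a->notready_attrs) &&
		!already_warm;
}
```

The "not all indexed columns changed" term corresponds to
!bms_is_subset(hot_attrs, modified_attrs) in the patch: WARM only pays off if
at least one index can keep its old entry.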
@@ -4256,6 +4541,32 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+
+		/*
+		 * Even if we are doing a HOT update, we must carry forward the WARM
+		 * flag because we may have already inserted another index entry
+		 * pointing to our root and a third entry may create duplicates.
+		 *
+		 * Note: If we ever have a mechanism to avoid duplicate <key, TID> in
+		 * indexes, we could look at relaxing this restriction and allow even
+		 * more WARM updates.
+		 */
+		if (HeapTupleIsWarmUpdated(&oldtup))
+		{
+			HeapTupleSetWarmUpdated(heaptup);
+			HeapTupleSetWarmUpdated(newtup);
+		}
+
+		/*
+		 * If the old tuple is a WARM tuple then mark the new tuple as a WARM
+		 * tuple as well.
+		 */
+		if (HeapTupleIsWarm(&oldtup))
+		{
+			HeapTupleSetWarm(heaptup);
+			HeapTupleSetWarm(newtup);
+		}
+
 		/*
 		 * For HOT (or WARM) updated tuples, we store the offset of the root
 		 * line pointer of this chain in the ip_posid field of the new tuple.
@@ -4268,12 +4579,45 @@ l2:
 		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
 			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
+	else if (use_warm_update)
+	{
+		/* Mark the old tuple as HOT-updated */
+		HeapTupleSetHotUpdated(&oldtup);
+		HeapTupleSetWarmUpdated(&oldtup);
+
+		/* And mark the new tuple as heap-only */
+		HeapTupleSetHeapOnly(heaptup);
+		/* Mark the new tuple as WARM tuple */
+		HeapTupleSetWarmUpdated(heaptup);
+		/* This update also starts the WARM chain */
+		HeapTupleSetWarm(heaptup);
+		Assert(!HeapTupleIsWarm(&oldtup));
+
+		/* Mark the caller's copy too, in case different from heaptup */
+		HeapTupleSetHeapOnly(newtup);
+		HeapTupleSetWarmUpdated(newtup);
+		HeapTupleSetWarm(newtup);
+
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
+		/* Let the caller know we did a WARM update */
+		if (warm_update)
+			*warm_update = true;
+	}
 	else
 	{
 		/* Make sure tuples are correctly marked as not-HOT */
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		HeapTupleClearWarmUpdated(heaptup);
+		HeapTupleClearWarmUpdated(newtup);
+		HeapTupleClearWarm(heaptup);
+		HeapTupleClearWarm(newtup);
 		root_offnum = InvalidOffsetNumber;
 	}
 
@@ -4292,7 +4636,9 @@ l2:
 	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
-	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	oldtup.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(oldtup.t_data))
+		oldtup.t_data->t_infomask &= ~HEAP_MOVED;
 	oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 	/* ... and store info about transaction updating this tuple */
 	Assert(TransactionIdIsValid(xmax_old_tuple));
@@ -4383,7 +4729,10 @@ l2:
 	if (have_tuple_lock)
 		UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode);
 
-	pgstat_count_heap_update(relation, use_hot_update);
+	/*
+	 * Count HOT and WARM updates separately
+	 */
+	pgstat_count_heap_update(relation, use_hot_update, use_warm_update);
 
 	/*
 	 * If heaptup is a private copy, release it.  Don't forget to copy t_self
@@ -4523,7 +4872,8 @@ HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
  * via ereport().
  */
 void
-simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
+simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup,
+		Bitmapset **modified_attrs, bool *warm_update)
 {
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
@@ -4532,7 +4882,7 @@ simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
 	result = heap_update(relation, otid, tup,
 						 GetCurrentCommandId(true), InvalidSnapshot,
 						 true /* wait for commit */ ,
-						 &hufd, &lockmode);
+						 &hufd, &lockmode, modified_attrs, warm_update);
 	switch (result)
 	{
 		case HeapTupleSelfUpdated:
@@ -6209,7 +6559,9 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	PageSetPrunable(page, RecentGlobalXmin);
 
 	/* store transaction information of xact deleting the tuple */
-	tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	tp.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(tp.t_data))
+		tp.t_data->t_infomask &= ~HEAP_MOVED;
 	tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 
 	/*
@@ -6783,7 +7135,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid,
 	 * Old-style VACUUM FULL is gone, but we have to keep this code as long as
 	 * we support having MOVED_OFF/MOVED_IN tuples in the database.
 	 */
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 
@@ -6802,7 +7154,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid,
 			 * have failed; whereas a non-dead MOVED_IN tuple must mean the
 			 * xvac transaction succeeded.
 			 */
-			if (tuple->t_infomask & HEAP_MOVED_OFF)
+			if (HeapTupleHeaderIsMovedOff(tuple))
 				frz->frzflags |= XLH_INVALID_XVAC;
 			else
 				frz->frzflags |= XLH_FREEZE_XVAC;
@@ -7272,7 +7624,7 @@ heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple)
 			return true;
 	}
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 		if (TransactionIdIsNormal(xid))
@@ -7355,7 +7707,7 @@ heap_tuple_needs_freeze(HeapTupleHeader tuple, TransactionId cutoff_xid,
 			return true;
 	}
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 		if (TransactionIdIsNormal(xid) &&
@@ -7381,7 +7733,7 @@ HeapTupleHeaderAdvanceLatestRemovedXid(HeapTupleHeader tuple,
 	TransactionId xmax = HeapTupleHeaderGetUpdateXid(tuple);
 	TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		if (TransactionIdPrecedes(*latestRemovedXid, xvac))
 			*latestRemovedXid = xvac;
@@ -7430,6 +7782,36 @@ log_heap_cleanup_info(RelFileNode rnode, TransactionId latestRemovedXid)
 }
 
 /*
+ * Perform XLogInsert for a heap-warm-clear operation.  Caller must already
+ * have modified the buffer and marked it dirty.
+ */
+XLogRecPtr
+log_heap_warmclear(Relation reln, Buffer buffer,
+			   OffsetNumber *cleared, int ncleared)
+{
+	xl_heap_warmclear	xlrec;
+	XLogRecPtr			recptr;
+
+	/* Caller should not call me on a non-WAL-logged relation */
+	Assert(RelationNeedsWAL(reln));
+
+	xlrec.ncleared = ncleared;
+
+	XLogBeginInsert();
+	XLogRegisterData((char *) &xlrec, SizeOfHeapWarmClear);
+
+	XLogRegisterBuffer(0, buffer, REGBUF_STANDARD);
+
+	if (ncleared > 0)
+		XLogRegisterBufData(0, (char *) cleared,
+							ncleared * sizeof(OffsetNumber));
+
+	recptr = XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_WARMCLEAR);
+
+	return recptr;
+}
+
+/*
  * Perform XLogInsert for a heap-clean operation.  Caller must already
  * have modified the buffer and marked it dirty.
  *
@@ -7584,6 +7966,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	bool		need_tuple_data = RelationIsLogicallyLogged(reln);
 	bool		init;
 	int			bufflags;
+	bool		warm_update = false;
 
 	/* Caller should not call me on a non-WAL-logged relation */
 	Assert(RelationNeedsWAL(reln));
@@ -7595,6 +7978,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	else
 		info = XLOG_HEAP_UPDATE;
 
+	if (HeapTupleIsWarmUpdated(newtup))
+		warm_update = true;
+
 	/*
 	 * If the old and new tuple are on the same page, we only need to log the
 	 * parts of the new tuple that were changed.  That saves on the amount of
@@ -7668,6 +8054,8 @@ log_heap_update(Relation reln, Buffer oldbuf,
 				xlrec.flags |= XLH_UPDATE_CONTAINS_OLD_KEY;
 		}
 	}
+	if (warm_update)
+		xlrec.flags |= XLH_UPDATE_WARM_UPDATE;
 
 	/* If new tuple is the single and first tuple on page... */
 	if (ItemPointerGetOffsetNumber(&(newtup->t_self)) == FirstOffsetNumber &&
@@ -8082,6 +8470,60 @@ heap_xlog_clean(XLogReaderState *record)
 		XLogRecordPageWithFreeSpace(rnode, blkno, freespace);
 }
 
+
+/*
+ * Handles HEAP2_WARMCLEAR record type
+ */
+static void
+heap_xlog_warmclear(XLogReaderState *record)
+{
+	XLogRecPtr	lsn = record->EndRecPtr;
+	xl_heap_warmclear	*xlrec = (xl_heap_warmclear *) XLogRecGetData(record);
+	Buffer		buffer;
+	RelFileNode rnode;
+	BlockNumber blkno;
+	XLogRedoAction action;
+
+	XLogRecGetBlockTag(record, 0, &rnode, NULL, &blkno);
+
+	/*
+	 * If we have a full-page image, restore it (using a cleanup lock) and
+	 * we're done.
+	 */
+	action = XLogReadBufferForRedoExtended(record, 0, RBM_NORMAL, true,
+										   &buffer);
+	if (action == BLK_NEEDS_REDO)
+	{
+		Page		page = (Page) BufferGetPage(buffer);
+		OffsetNumber *cleared;
+		int			ncleared;
+		Size		datalen;
+		int			i;
+
+		cleared = (OffsetNumber *) XLogRecGetBlockData(record, 0, &datalen);
+
+		ncleared = xlrec->ncleared;
+
+		for (i = 0; i < ncleared; i++)
+		{
+			ItemId			lp;
+			OffsetNumber	offnum = cleared[i];
+			HeapTupleData	heapTuple;
+
+			lp = PageGetItemId(page, offnum);
+			heapTuple.t_data = (HeapTupleHeader) PageGetItem(page, lp);
+
+			HeapTupleHeaderClearWarmUpdated(heapTuple.t_data);
+			HeapTupleHeaderClearWarm(heapTuple.t_data);
+		}
+
+		PageSetLSN(page, lsn);
+		MarkBufferDirty(buffer);
+	}
+	if (BufferIsValid(buffer))
+		UnlockReleaseBuffer(buffer);
+}
+
 /*
  * Replay XLOG_HEAP2_VISIBLE record.
  *
@@ -8328,7 +8770,9 @@ heap_xlog_delete(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		HeapTupleHeaderClearHotUpdated(htup);
 		fix_infomask_from_infobits(xlrec->infobits_set,
@@ -8349,7 +8793,7 @@ heap_xlog_delete(XLogReaderState *record)
 		if (!HeapTupleHeaderHasRootOffset(htup))
 		{
 			OffsetNumber	root_offnum;
-			root_offnum = heap_get_root_tuple(page, xlrec->offnum); 
+			root_offnum = heap_get_root_tuple(page, xlrec->offnum);
 			HeapTupleHeaderSetHeapLatest(htup, root_offnum);
 		}
 
@@ -8645,16 +9089,22 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 	Size		freespace = 0;
 	XLogRedoAction oldaction;
 	XLogRedoAction newaction;
+	bool		warm_update = false;
 
 	/* initialize to keep the compiler quiet */
 	oldtup.t_data = NULL;
 	oldtup.t_len = 0;
 
+	if (xlrec->flags & XLH_UPDATE_WARM_UPDATE)
+		warm_update = true;
+
 	XLogRecGetBlockTag(record, 0, &rnode, NULL, &newblk);
 	if (XLogRecGetBlockTag(record, 1, NULL, NULL, &oldblk))
 	{
 		/* HOT updates are never done across pages */
 		Assert(!hot_update);
+		/* WARM updates are never done across pages */
+		Assert(!warm_update);
 	}
 	else
 		oldblk = newblk;
@@ -8714,6 +9164,11 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 								   &htup->t_infomask2);
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
+
+		/* Mark the old tuple as WARM-updated */
+		if (warm_update)
+			HeapTupleHeaderSetWarmUpdated(htup);
+
 		/* Set forward chain link in t_ctid */
 		HeapTupleHeaderSetNextTid(htup, &newtid);
 
@@ -8849,6 +9304,10 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
 
+		/* Mark the new tuple as WARM-updated */
+		if (warm_update)
+			HeapTupleHeaderSetWarmUpdated(htup);
+
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
@@ -8976,7 +9435,9 @@ heap_xlog_lock(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		fix_infomask_from_infobits(xlrec->infobits_set, &htup->t_infomask,
 								   &htup->t_infomask2);
@@ -9055,7 +9516,9 @@ heap_xlog_lock_updated(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		fix_infomask_from_infobits(xlrec->infobits_set, &htup->t_infomask,
 								   &htup->t_infomask2);
@@ -9124,6 +9587,9 @@ heap_redo(XLogReaderState *record)
 		case XLOG_HEAP_INSERT:
 			heap_xlog_insert(record);
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			heap_xlog_multi_insert(record);
+			break;
 		case XLOG_HEAP_DELETE:
 			heap_xlog_delete(record);
 			break;
@@ -9152,7 +9618,7 @@ heap2_redo(XLogReaderState *record)
 {
 	uint8		info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
 
-	switch (info & XLOG_HEAP_OPMASK)
+	switch (info & XLOG_HEAP2_OPMASK)
 	{
 		case XLOG_HEAP2_CLEAN:
 			heap_xlog_clean(record);
@@ -9166,9 +9632,6 @@ heap2_redo(XLogReaderState *record)
 		case XLOG_HEAP2_VISIBLE:
 			heap_xlog_visible(record);
 			break;
-		case XLOG_HEAP2_MULTI_INSERT:
-			heap_xlog_multi_insert(record);
-			break;
 		case XLOG_HEAP2_LOCK_UPDATED:
 			heap_xlog_lock_updated(record);
 			break;
@@ -9182,6 +9645,9 @@ heap2_redo(XLogReaderState *record)
 		case XLOG_HEAP2_REWRITE:
 			heap_xlog_logical_rewrite(record);
 			break;
+		case XLOG_HEAP2_WARMCLEAR:
+			heap_xlog_warmclear(record);
+			break;
 		default:
 			elog(PANIC, "heap2_redo: unknown op code %u", info);
 	}
diff --git b/src/backend/access/heap/pruneheap.c a/src/backend/access/heap/pruneheap.c
index f54337c..4e8ed79 100644
--- b/src/backend/access/heap/pruneheap.c
+++ a/src/backend/access/heap/pruneheap.c
@@ -834,6 +834,13 @@ heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				continue;
 
+			/*
+			 * If the tuple has a root line pointer, it must be the end of
+			 * the chain.
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			/* Set up to scan the HOT-chain */
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
diff --git b/src/backend/access/heap/rewriteheap.c a/src/backend/access/heap/rewriteheap.c
index 2d3ae9b..bd469ee 100644
--- b/src/backend/access/heap/rewriteheap.c
+++ a/src/backend/access/heap/rewriteheap.c
@@ -404,6 +404,14 @@ rewrite_heap_tuple(RewriteState state,
 		old_tuple->t_data->t_infomask & HEAP_XACT_MASK;
 
 	/*
+	 * We must clear the HEAP_WARM_TUPLE flag if the HEAP_WARM_UPDATED flag
+	 * was cleared above.
+	 */
+	if (HeapTupleHeaderIsWarmUpdated(old_tuple->t_data))
+		HeapTupleHeaderClearWarm(new_tuple->t_data);
+
+	/*
 	 * While we have our hands on the tuple, we may as well freeze any
 	 * eligible xmin or xmax, so that future VACUUM effort can be saved.
 	 */
@@ -428,7 +436,7 @@ rewrite_heap_tuple(RewriteState state,
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
 
-		/* 
+		/*
 		 * We've already checked that this is not the last tuple in the chain,
 		 * so fetch the next TID in the chain.
 		 */
@@ -737,7 +745,7 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		/* 
+		/*
 		 * Set t_ctid just to ensure that block number is copied correctly, but
 		 * then immediately mark the tuple as the latest.
 		 */
diff --git b/src/backend/access/heap/tuptoaster.c a/src/backend/access/heap/tuptoaster.c
index 19e7048..47b01eb 100644
--- b/src/backend/access/heap/tuptoaster.c
+++ a/src/backend/access/heap/tuptoaster.c
@@ -1620,7 +1620,8 @@ toast_save_datum(Relation rel, Datum value,
 							 toastrel,
 							 toastidxs[i]->rd_index->indisunique ?
 							 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-							 NULL);
+							 NULL,
+							 false);
 		}
 
 		/*
diff --git b/src/backend/access/index/genam.c a/src/backend/access/index/genam.c
index a91fda7..d523c8f 100644
--- b/src/backend/access/index/genam.c
+++ a/src/backend/access/index/genam.c
@@ -127,6 +127,8 @@ RelationGetIndexScan(Relation indexRelation, int nkeys, int norderbys)
 	scan->xs_cbuf = InvalidBuffer;
 	scan->xs_continue_hot = false;
 
+	scan->indexInfo = NULL;
+
 	return scan;
 }
 
diff --git b/src/backend/access/index/indexam.c a/src/backend/access/index/indexam.c
index cc5ac8b..d048714 100644
--- b/src/backend/access/index/indexam.c
+++ a/src/backend/access/index/indexam.c
@@ -197,7 +197,8 @@ index_insert(Relation indexRelation,
 			 ItemPointer heap_t_ctid,
 			 Relation heapRelation,
 			 IndexUniqueCheck checkUnique,
-			 IndexInfo *indexInfo)
+			 IndexInfo *indexInfo,
+			 bool warm_update)
 {
 	RELATION_CHECKS;
 	CHECK_REL_PROCEDURE(aminsert);
@@ -207,6 +208,12 @@ index_insert(Relation indexRelation,
 									   (HeapTuple) NULL,
 									   InvalidBuffer);
 
+	if (warm_update)
+	{
+		Assert(indexRelation->rd_amroutine->amwarminsert != NULL);
+		return indexRelation->rd_amroutine->amwarminsert(indexRelation, values,
+				isnull, heap_t_ctid, heapRelation, checkUnique, indexInfo);
+	}
 	return indexRelation->rd_amroutine->aminsert(indexRelation, values, isnull,
 												 heap_t_ctid, heapRelation,
 												 checkUnique, indexInfo);
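The dispatch added to index_insert() follows the usual amroutine
function-pointer pattern: WARM inserts go through a new optional callback,
everything else through the existing one. Below is a minimal self-contained
sketch of the same shape; the types, names, and counting stubs are all
simplified inventions for illustration, not PostgreSQL's actual API.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Simplified stand-in for IndexAmRoutine: a normal insert entry point plus
 * an optional WARM-aware one, left NULL when the AM does not support WARM.
 */
typedef struct AmRoutine
{
	bool		(*aminsert) (int key, long tid);
	bool		(*amwarminsert) (int key, long tid);
} AmRoutine;

/* Route WARM updates to amwarminsert, everything else to aminsert. */
static bool
index_insert_sketch(const AmRoutine *am, int key, long tid, bool warm_update)
{
	if (warm_update)
	{
		/* WARM requires explicit AM support, as the patch's Assert enforces */
		assert(am->amwarminsert != NULL);
		return am->amwarminsert(key, tid);
	}
	return am->aminsert(key, tid);
}

/* Counting stubs so the dispatch can be observed. */
static int	normal_calls = 0;
static int	warm_calls = 0;

static bool
normal_insert(int key, long tid)
{
	(void) key;
	(void) tid;
	normal_calls++;
	return true;
}

static bool
warm_insert(int key, long tid)
{
	(void) key;
	(void) tid;
	warm_calls++;
	return true;
}
```

Keeping the WARM path behind a separate callback means AMs that never opt in
(i.e. leave the pointer NULL) are untouched, which matches how the patch gates
WARM on index AM support elsewhere.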
@@ -291,6 +298,25 @@ index_beginscan_internal(Relation indexRelation,
 	scan->parallel_scan = pscan;
 	scan->xs_temp_snap = temp_snap;
 
+	/*
+	 * If the index supports recheck, make sure that the index tuple is
+	 * saved during index scans. Also build and cache the IndexInfo used by
+	 * the amrecheck routine.
+	 *
+	 * XXX Ideally, we should look at all indexes on the table and check if
+	 * WARM is at all supported on the base table. If WARM is not supported
+	 * then we don't need to do any recheck. RelationGetIndexAttrBitmap() does
+	 * do that and sets rd_supportswarm after looking at all indexes. But we
+	 * don't know if the function was called earlier in the session when we're
+	 * here. We can't call it now because there exists a risk of causing
+	 * deadlock.
+	 */
+	if (indexRelation->rd_amroutine->amrecheck)
+	{
+		scan->xs_want_itup = true;
+		scan->indexInfo = BuildIndexInfo(indexRelation);
+	}
+
 	return scan;
 }
 
@@ -358,6 +384,10 @@ index_endscan(IndexScanDesc scan)
 	if (scan->xs_temp_snap)
 		UnregisterSnapshot(scan->xs_snapshot);
 
+	/* Free cached IndexInfo, if any */
+	if (scan->indexInfo)
+		pfree(scan->indexInfo);
+
 	/* Release the scan data structure itself */
 	IndexScanEnd(scan);
 }
@@ -535,7 +565,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
 	/*
 	 * The AM's amgettuple proc finds the next index entry matching the scan
 	 * keys, and puts the TID into scan->xs_ctup.t_self.  It should also set
-	 * scan->xs_recheck and possibly scan->xs_itup/scan->xs_hitup, though we
+	 * scan->xs_tuple_recheck and possibly scan->xs_itup/scan->xs_hitup, though we
 	 * pay no attention to those fields here.
 	 */
 	found = scan->indexRelation->rd_amroutine->amgettuple(scan, direction);
@@ -574,7 +604,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
  * dropped in a future index_getnext_tid, index_fetch_heap or index_endscan
  * call).
  *
- * Note: caller must check scan->xs_recheck, and perform rechecking of the
+ * Note: caller must check scan->xs_tuple_recheck, and perform rechecking of the
  * scan keys if required.  We do not do that here because we don't have
  * enough information to do it efficiently in the general case.
  * ----------------
@@ -585,6 +615,7 @@ index_fetch_heap(IndexScanDesc scan)
 	ItemPointer tid = &scan->xs_ctup.t_self;
 	bool		all_dead = false;
 	bool		got_heap_tuple;
+	bool		tuple_recheck;
 
 	/* We can skip the buffer-switching logic if we're in mid-HOT chain. */
 	if (!scan->xs_continue_hot)
@@ -603,6 +634,8 @@ index_fetch_heap(IndexScanDesc scan)
 			heap_page_prune_opt(scan->heapRelation, scan->xs_cbuf);
 	}
 
+	tuple_recheck = false;
+
 	/* Obtain share-lock on the buffer so we can examine visibility */
 	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_SHARE);
 	got_heap_tuple = heap_hot_search_buffer(tid, scan->heapRelation,
@@ -610,32 +643,60 @@ index_fetch_heap(IndexScanDesc scan)
 											scan->xs_snapshot,
 											&scan->xs_ctup,
 											&all_dead,
-											!scan->xs_continue_hot);
-	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+											!scan->xs_continue_hot,
+											&tuple_recheck);
 
 	if (got_heap_tuple)
 	{
+		bool res = true;
+
+		/*
+		 * OK, we got a tuple which satisfies the snapshot, but if it's part
+		 * of a WARM chain, we must do additional checks to ensure that we
+		 * are indeed returning a correct tuple. Note that if the index AM
+		 * does not implement the amrecheck method, then we don't do any
+		 * additional checks, since WARM must be disabled on such tables.
+		 */
+		if (tuple_recheck && scan->xs_itup &&
+			scan->indexRelation->rd_amroutine->amrecheck)
+		{
+			res = scan->indexRelation->rd_amroutine->amrecheck(
+						scan->indexRelation,
+						scan->indexInfo,
+						scan->xs_itup,
+						scan->heapRelation,
+						&scan->xs_ctup);
+		}
+
+		LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+
 		/*
 		 * Only in a non-MVCC snapshot can more than one member of the HOT
 		 * chain be visible.
 		 */
 		scan->xs_continue_hot = !IsMVCCSnapshot(scan->xs_snapshot);
 		pgstat_count_heap_fetch(scan->indexRelation);
-		return &scan->xs_ctup;
+
+		if (res)
+			return &scan->xs_ctup;
 	}
+	else
+	{
+		LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
 
-	/* We've reached the end of the HOT chain. */
-	scan->xs_continue_hot = false;
+		/* We've reached the end of the HOT chain. */
+		scan->xs_continue_hot = false;
 
-	/*
-	 * If we scanned a whole HOT chain and found only dead tuples, tell index
-	 * AM to kill its entry for that TID (this will take effect in the next
-	 * amgettuple call, in index_getnext_tid).  We do not do this when in
-	 * recovery because it may violate MVCC to do so.  See comments in
-	 * RelationGetIndexScan().
-	 */
-	if (!scan->xactStartedInRecovery)
-		scan->kill_prior_tuple = all_dead;
+		/*
+		 * If we scanned a whole HOT chain and found only dead tuples, tell index
+		 * AM to kill its entry for that TID (this will take effect in the next
+		 * amgettuple call, in index_getnext_tid).  We do not do this when in
+		 * recovery because it may violate MVCC to do so.  See comments in
+		 * RelationGetIndexScan().
+		 */
+		if (!scan->xactStartedInRecovery)
+			scan->kill_prior_tuple = all_dead;
+	}
 
 	return NULL;
 }
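The new control flow in index_fetch_heap() can be condensed to one rule: a fetched heap tuple is returned only when it needs no WARM recheck, or when the AM's amrecheck callback confirms the index key still matches. A minimal standalone sketch of that decision (all names here are illustrative, not PostgreSQL APIs):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, condensed model of the decision made in index_fetch_heap():
 * a fetched heap tuple is returned only if it needs no recheck, or if the
 * (possibly absent) amrecheck callback confirms that the index key still
 * matches.  None of these names are real PostgreSQL APIs. */
typedef bool (*recheck_fn) (const void *index_tuple, const void *heap_tuple);

static bool
fetch_should_return(bool got_heap_tuple, bool tuple_recheck,
					recheck_fn amrecheck,
					const void *index_tuple, const void *heap_tuple)
{
	if (!got_heap_tuple)
		return false;			/* reached the end of the HOT/WARM chain */
	if (!tuple_recheck || amrecheck == NULL)
		return true;			/* plain HOT tuple, or WARM not in use */
	return amrecheck(index_tuple, heap_tuple);
}
```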
diff --git b/src/backend/access/nbtree/nbtinsert.c a/src/backend/access/nbtree/nbtinsert.c
index 6dca810..463d4bf 100644
--- b/src/backend/access/nbtree/nbtinsert.c
+++ a/src/backend/access/nbtree/nbtinsert.c
@@ -20,6 +20,7 @@
 #include "access/nbtxlog.h"
 #include "access/transam.h"
 #include "access/xloginsert.h"
+#include "catalog/index.h"
 #include "miscadmin.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
@@ -250,6 +251,10 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 	BTPageOpaque opaque;
 	Buffer		nbuf = InvalidBuffer;
 	bool		found = false;
+	Buffer		buffer;
+	HeapTupleData	heapTuple;
+	bool		recheck = false;
+	IndexInfo	*indexInfo = BuildIndexInfo(rel);
 
 	/* Assume unique until we find a duplicate */
 	*is_unique = true;
@@ -309,6 +314,8 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				curitup = (IndexTuple) PageGetItem(page, curitemid);
 				htid = curitup->t_tid;
 
+				recheck = false;
+
 				/*
 				 * If we are doing a recheck, we expect to find the tuple we
 				 * are rechecking.  It's not a duplicate, but we have to keep
@@ -326,112 +333,153 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				 * have just a single index entry for the entire chain.
 				 */
 				else if (heap_hot_search(&htid, heapRel, &SnapshotDirty,
-										 &all_dead))
+							&all_dead, &recheck, &buffer,
+							&heapTuple))
 				{
 					TransactionId xwait;
+					bool result = true;
 
 					/*
-					 * It is a duplicate. If we are only doing a partial
-					 * check, then don't bother checking if the tuple is being
-					 * updated in another transaction. Just return the fact
-					 * that it is a potential conflict and leave the full
-					 * check till later.
+					 * If the tuple was WARM updated, we may see our own
+					 * tuple again. Since WARM updates don't create new
+					 * index entries, our own tuple is only reachable via
+					 * the old index pointer.
 					 */
-					if (checkUnique == UNIQUE_CHECK_PARTIAL)
+					if (checkUnique == UNIQUE_CHECK_EXISTING &&
+							ItemPointerCompare(&htid, &itup->t_tid) == 0)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						*is_unique = false;
-						return InvalidTransactionId;
+						found = true;
+						result = false;
+						if (recheck)
+							UnlockReleaseBuffer(buffer);
 					}
-
-					/*
-					 * If this tuple is being updated by other transaction
-					 * then we have to wait for its commit/abort.
-					 */
-					xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
-						SnapshotDirty.xmin : SnapshotDirty.xmax;
-
-					if (TransactionIdIsValid(xwait))
+					else if (recheck)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						/* Tell _bt_doinsert to wait... */
-						*speculativeToken = SnapshotDirty.speculativeToken;
-						return xwait;
+						result = btrecheck(rel, indexInfo, curitup, heapRel, &heapTuple);
+						UnlockReleaseBuffer(buffer);
 					}
 
-					/*
-					 * Otherwise we have a definite conflict.  But before
-					 * complaining, look to see if the tuple we want to insert
-					 * is itself now committed dead --- if so, don't complain.
-					 * This is a waste of time in normal scenarios but we must
-					 * do it to support CREATE INDEX CONCURRENTLY.
-					 *
-					 * We must follow HOT-chains here because during
-					 * concurrent index build, we insert the root TID though
-					 * the actual tuple may be somewhere in the HOT-chain.
-					 * While following the chain we might not stop at the
-					 * exact tuple which triggered the insert, but that's OK
-					 * because if we find a live tuple anywhere in this chain,
-					 * we have a unique key conflict.  The other live tuple is
-					 * not part of this chain because it had a different index
-					 * entry.
-					 */
-					htid = itup->t_tid;
-					if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL))
-					{
-						/* Normal case --- it's still live */
-					}
-					else
+					if (result)
 					{
 						/*
-						 * It's been deleted, so no error, and no need to
-						 * continue searching
+						 * It is a duplicate. If we are only doing a partial
+						 * check, then don't bother checking if the tuple is being
+						 * updated in another transaction. Just return the fact
+						 * that it is a potential conflict and leave the full
+						 * check till later.
 						 */
-						break;
-					}
+						if (checkUnique == UNIQUE_CHECK_PARTIAL)
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							*is_unique = false;
+							return InvalidTransactionId;
+						}
 
-					/*
-					 * Check for a conflict-in as we would if we were going to
-					 * write to this page.  We aren't actually going to write,
-					 * but we want a chance to report SSI conflicts that would
-					 * otherwise be masked by this unique constraint
-					 * violation.
-					 */
-					CheckForSerializableConflictIn(rel, NULL, buf);
+						/*
+						 * If this tuple is being updated by other transaction
+						 * then we have to wait for its commit/abort.
+						 */
+						xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
+							SnapshotDirty.xmin : SnapshotDirty.xmax;
+
+						if (TransactionIdIsValid(xwait))
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							/* Tell _bt_doinsert to wait... */
+							*speculativeToken = SnapshotDirty.speculativeToken;
+							return xwait;
+						}
 
-					/*
-					 * This is a definite conflict.  Break the tuple down into
-					 * datums and report the error.  But first, make sure we
-					 * release the buffer locks we're holding ---
-					 * BuildIndexValueDescription could make catalog accesses,
-					 * which in the worst case might touch this same index and
-					 * cause deadlocks.
-					 */
-					if (nbuf != InvalidBuffer)
-						_bt_relbuf(rel, nbuf);
-					_bt_relbuf(rel, buf);
+						/*
+						 * Otherwise we have a definite conflict.  But before
+						 * complaining, look to see if the tuple we want to insert
+						 * is itself now committed dead --- if so, don't complain.
+						 * This is a waste of time in normal scenarios but we must
+						 * do it to support CREATE INDEX CONCURRENTLY.
+						 *
+						 * We must follow HOT-chains here because during
+						 * concurrent index build, we insert the root TID though
+						 * the actual tuple may be somewhere in the HOT-chain.
+						 * While following the chain we might not stop at the
+						 * exact tuple which triggered the insert, but that's OK
+						 * because if we find a live tuple anywhere in this chain,
+						 * we have a unique key conflict.  The other live tuple is
+						 * not part of this chain because it had a different index
+						 * entry.
+						 */
+						recheck = false;
+						ItemPointerCopy(&itup->t_tid, &htid);
+						if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL,
+									&recheck, &buffer, &heapTuple))
+						{
+							bool result = true;
+							if (recheck)
+							{
+								/*
+								 * Recheck if the tuple actually satisfies the
+								 * index key. Otherwise, we might be following
+								 * a wrong index pointer and mustn't entertain
+								 * this tuple.
+								 */
+								result = btrecheck(rel, indexInfo, itup, heapRel, &heapTuple);
+								UnlockReleaseBuffer(buffer);
+							}
+							if (!result)
+								break;
+							/* Normal case --- it's still live */
+						}
+						else
+						{
+							/*
+							 * It's been deleted, so no error, and no need to
+							 * continue searching.
+							 */
+							break;
+						}
 
-					{
-						Datum		values[INDEX_MAX_KEYS];
-						bool		isnull[INDEX_MAX_KEYS];
-						char	   *key_desc;
-
-						index_deform_tuple(itup, RelationGetDescr(rel),
-										   values, isnull);
-
-						key_desc = BuildIndexValueDescription(rel, values,
-															  isnull);
-
-						ereport(ERROR,
-								(errcode(ERRCODE_UNIQUE_VIOLATION),
-								 errmsg("duplicate key value violates unique constraint \"%s\"",
-										RelationGetRelationName(rel)),
-							   key_desc ? errdetail("Key %s already exists.",
-													key_desc) : 0,
-								 errtableconstraint(heapRel,
-											 RelationGetRelationName(rel))));
+						/*
+						 * Check for a conflict-in as we would if we were going to
+						 * write to this page.  We aren't actually going to write,
+						 * but we want a chance to report SSI conflicts that would
+						 * otherwise be masked by this unique constraint
+						 * violation.
+						 */
+						CheckForSerializableConflictIn(rel, NULL, buf);
+
+						/*
+						 * This is a definite conflict.  Break the tuple down into
+						 * datums and report the error.  But first, make sure we
+						 * release the buffer locks we're holding ---
+						 * BuildIndexValueDescription could make catalog accesses,
+						 * which in the worst case might touch this same index and
+						 * cause deadlocks.
+						 */
+						if (nbuf != InvalidBuffer)
+							_bt_relbuf(rel, nbuf);
+						_bt_relbuf(rel, buf);
+
+						{
+							Datum		values[INDEX_MAX_KEYS];
+							bool		isnull[INDEX_MAX_KEYS];
+							char	   *key_desc;
+
+							index_deform_tuple(itup, RelationGetDescr(rel),
+									values, isnull);
+
+							key_desc = BuildIndexValueDescription(rel, values,
+									isnull);
+
+							ereport(ERROR,
+									(errcode(ERRCODE_UNIQUE_VIOLATION),
+									 errmsg("duplicate key value violates unique constraint \"%s\"",
+										 RelationGetRelationName(rel)),
+									 key_desc ? errdetail("Key %s already exists.",
+										 key_desc) : 0,
+									 errtableconstraint(heapRel,
+										 RelationGetRelationName(rel))));
+						}
 					}
 				}
 				else if (all_dead)
diff --git b/src/backend/access/nbtree/nbtpage.c a/src/backend/access/nbtree/nbtpage.c
index f815fd4..061c8d4 100644
--- b/src/backend/access/nbtree/nbtpage.c
+++ a/src/backend/access/nbtree/nbtpage.c
@@ -766,29 +766,20 @@ _bt_page_recyclable(Page page)
 }
 
 /*
- * Delete item(s) from a btree page during VACUUM.
+ * Delete item(s) and clear WARM item(s) on a btree page during VACUUM.
  *
  * This must only be used for deleting leaf items.  Deleting an item on a
  * non-leaf page has to be done as part of an atomic action that includes
- * deleting the page it points to.
+ * deleting the page it points to. We don't ever clear pointers on a non-leaf
+ * page.
  *
  * This routine assumes that the caller has pinned and locked the buffer.
  * Also, the given itemnos *must* appear in increasing order in the array.
- *
- * We record VACUUMs and b-tree deletes differently in WAL. InHotStandby
- * we need to be able to pin all of the blocks in the btree in physical
- * order when replaying the effects of a VACUUM, just as we do for the
- * original VACUUM itself. lastBlockVacuumed allows us to tell whether an
- * intermediate range of blocks has had no changes at all by VACUUM,
- * and so must be scanned anyway during replay. We always write a WAL record
- * for the last block in the index, whether or not it contained any items
- * to be removed. This allows us to scan right up to end of index to
- * ensure correct locking.
  */
 void
-_bt_delitems_vacuum(Relation rel, Buffer buf,
-					OffsetNumber *itemnos, int nitems,
-					BlockNumber lastBlockVacuumed)
+_bt_handleitems_vacuum(Relation rel, Buffer buf,
+					OffsetNumber *delitemnos, int ndelitems,
+					OffsetNumber *clearitemnos, int nclearitems)
 {
 	Page		page = BufferGetPage(buf);
 	BTPageOpaque opaque;
@@ -796,9 +787,20 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 	/* No ereport(ERROR) until changes are logged */
 	START_CRIT_SECTION();
 
+	/*
+	 * Clear the WARM pointers.
+	 *
+	 * We must do this before dealing with the dead items because
+	 * PageIndexMultiDelete may move items around to compactify the array and
+	 * hence offnums recorded earlier won't make any sense after
+	 * PageIndexMultiDelete is called.
+	 */
+	if (nclearitems > 0)
+		_bt_clear_items(page, clearitemnos, nclearitems);
+
 	/* Fix the page */
-	if (nitems > 0)
-		PageIndexMultiDelete(page, itemnos, nitems);
+	if (ndelitems > 0)
+		PageIndexMultiDelete(page, delitemnos, ndelitems);
 
 	/*
 	 * We can clear the vacuum cycle ID since this page has certainly been
@@ -824,7 +826,8 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 		XLogRecPtr	recptr;
 		xl_btree_vacuum xlrec_vacuum;
 
-		xlrec_vacuum.lastBlockVacuumed = lastBlockVacuumed;
+		xlrec_vacuum.ndelitems = ndelitems;
+		xlrec_vacuum.nclearitems = nclearitems;
 
 		XLogBeginInsert();
 		XLogRegisterBuffer(0, buf, REGBUF_STANDARD);
@@ -835,8 +838,11 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 		 * is.  When XLogInsert stores the whole buffer, the offsets array
 		 * need not be stored too.
 		 */
-		if (nitems > 0)
-			XLogRegisterBufData(0, (char *) itemnos, nitems * sizeof(OffsetNumber));
+		if (ndelitems > 0)
+			XLogRegisterBufData(0, (char *) delitemnos, ndelitems * sizeof(OffsetNumber));
+
+		if (nclearitems > 0)
+			XLogRegisterBufData(0, (char *) clearitemnos, nclearitems * sizeof(OffsetNumber));
 
 		recptr = XLogInsert(RM_BTREE_ID, XLOG_BTREE_VACUUM);
 
@@ -1882,3 +1888,13 @@ _bt_unlink_halfdead_page(Relation rel, Buffer leafbuf, bool *rightsib_empty)
 
 	return true;
 }
+
+/*
+ * Currently just a wrapper around PageIndexClearWarmTuples, but in theory
+ * each index AM may have its own way to handle WARM tuples.
+ */
+void
+_bt_clear_items(Page page, OffsetNumber *clearitemnos, uint16 nclearitems)
+{
+	PageIndexClearWarmTuples(page, clearitemnos, nclearitems);
+}
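The comment in _bt_handleitems_vacuum() notes that WARM bits must be cleared before PageIndexMultiDelete compacts the item array, because the recorded offsets go stale once items move. The constraint can be illustrated with a toy model (a real btree page is of course not an int array; all names here are made up for the demo):

```c
#include <assert.h>

/* Toy model of the ordering constraint in _bt_handleitems_vacuum(): the
 * "page" is an int array, deletion compacts it, so offsets recorded before
 * the delete only stay valid if WARM bits are cleared first. */
#define WARM_FLAG 0x100

static void
clear_items(int *items, const int *clearoffs, int nclear)
{
	for (int i = 0; i < nclear; i++)
		items[clearoffs[i]] &= ~WARM_FLAG;
}

static int
multi_delete(int *items, int nitems, const int *deloffs, int ndel)
{
	/* compact the array, dropping the (ascending) offsets in deloffs */
	int			out = 0,
				d = 0;

	for (int i = 0; i < nitems; i++)
	{
		if (d < ndel && deloffs[d] == i)
		{
			d++;
			continue;
		}
		items[out++] = items[i];
	}
	return out;
}

/* Clear offset 2, then delete offset 1, on a fixed four-item page; return
 * the item left at 'slot'.  Doing the delete first would shift item 30 to
 * offset 1, so the recorded clear offset 2 would hit the wrong item. */
static int
clear_then_delete_demo(int slot)
{
	int			items[4] = {10 | WARM_FLAG, 20, 30 | WARM_FLAG, 40};
	int			clearoffs[1] = {2};
	int			deloffs[1] = {1};
	int			n;

	clear_items(items, clearoffs, 1);
	n = multi_delete(items, 4, deloffs, 1);
	return slot < n ? items[slot] : -1;
}
```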
diff --git b/src/backend/access/nbtree/nbtree.c a/src/backend/access/nbtree/nbtree.c
index 775f2ff..6d558af 100644
--- b/src/backend/access/nbtree/nbtree.c
+++ a/src/backend/access/nbtree/nbtree.c
@@ -146,6 +146,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->ambuild = btbuild;
 	amroutine->ambuildempty = btbuildempty;
 	amroutine->aminsert = btinsert;
+	amroutine->amwarminsert = btwarminsert;
 	amroutine->ambulkdelete = btbulkdelete;
 	amroutine->amvacuumcleanup = btvacuumcleanup;
 	amroutine->amcanreturn = btcanreturn;
@@ -163,6 +164,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = btestimateparallelscan;
 	amroutine->aminitparallelscan = btinitparallelscan;
 	amroutine->amparallelrescan = btparallelrescan;
+	amroutine->amrecheck = btrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -315,11 +317,12 @@ btbuildempty(Relation index)
  *		Descend the tree recursively, find the appropriate location for our
  *		new tuple, and put it there.
  */
-bool
-btinsert(Relation rel, Datum *values, bool *isnull,
+static bool
+btinsert_internal(Relation rel, Datum *values, bool *isnull,
 		 ItemPointer ht_ctid, Relation heapRel,
 		 IndexUniqueCheck checkUnique,
-		 IndexInfo *indexInfo)
+		 IndexInfo *indexInfo,
+		 bool warm_update)
 {
 	bool		result;
 	IndexTuple	itup;
@@ -328,6 +331,11 @@ btinsert(Relation rel, Datum *values, bool *isnull,
 	itup = index_form_tuple(RelationGetDescr(rel), values, isnull);
 	itup->t_tid = *ht_ctid;
 
+	if (warm_update)
+		ItemPointerSetFlags(&itup->t_tid, BTREE_INDEX_WARM_POINTER);
+	else
+		ItemPointerClearFlags(&itup->t_tid);
+
 	result = _bt_doinsert(rel, itup, checkUnique, heapRel);
 
 	pfree(itup);
@@ -335,6 +343,26 @@ btinsert(Relation rel, Datum *values, bool *isnull,
 	return result;
 }
 
+bool
+btinsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 IndexInfo *indexInfo)
+{
+	return btinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, false);
+}
+
+bool
+btwarminsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 IndexInfo *indexInfo)
+{
+	return btinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, true);
+}
+
 /*
  *	btgettuple() -- Get the next tuple in the scan.
  */
@@ -1103,7 +1131,7 @@ btvacuumscan(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 								 RBM_NORMAL, info->strategy);
 		LockBufferForCleanup(buf);
 		_bt_checkpage(rel, buf);
-		_bt_delitems_vacuum(rel, buf, NULL, 0, vstate.lastBlockVacuumed);
+		_bt_handleitems_vacuum(rel, buf, NULL, 0, NULL, 0);
 		_bt_relbuf(rel, buf);
 	}
 
@@ -1201,6 +1229,8 @@ restart:
 	{
 		OffsetNumber deletable[MaxOffsetNumber];
 		int			ndeletable;
+		OffsetNumber clearwarm[MaxOffsetNumber];
+		int			nclearwarm;
 		OffsetNumber offnum,
 					minoff,
 					maxoff;
@@ -1239,7 +1269,7 @@ restart:
 		 * Scan over all items to see which ones need deleted according to the
 		 * callback function.
 		 */
-		ndeletable = 0;
+		ndeletable = nclearwarm = 0;
 		minoff = P_FIRSTDATAKEY(opaque);
 		maxoff = PageGetMaxOffsetNumber(page);
 		if (callback)
@@ -1250,6 +1280,9 @@ restart:
 			{
 				IndexTuple	itup;
 				ItemPointer htup;
+				int			flags;
+				bool		is_warm = false;
+				IndexBulkDeleteCallbackResult	result;
 
 				itup = (IndexTuple) PageGetItem(page,
 												PageGetItemId(page, offnum));
@@ -1276,16 +1309,36 @@ restart:
 				 * applies to *any* type of index that marks index tuples as
 				 * killed.
 				 */
-				if (callback(htup, callback_state))
+				flags = ItemPointerGetFlags(&itup->t_tid);
+				is_warm = ((flags & BTREE_INDEX_WARM_POINTER) != 0);
+
+				if (is_warm)
+					stats->num_warm_pointers++;
+				else
+					stats->num_clear_pointers++;
+
+				result = callback(htup, is_warm, callback_state);
+				if (result == IBDCR_DELETE)
+				{
+					if (is_warm)
+						stats->warm_pointers_removed++;
+					else
+						stats->clear_pointers_removed++;
 					deletable[ndeletable++] = offnum;
+				}
+				else if (result == IBDCR_CLEAR_WARM)
+				{
+					clearwarm[nclearwarm++] = offnum;
+				}
 			}
 		}
 
 		/*
-		 * Apply any needed deletes.  We issue just one _bt_delitems_vacuum()
-		 * call per page, so as to minimize WAL traffic.
+		 * Apply any needed deletes and clearing.  We issue just one
+		 * _bt_handleitems_vacuum() call per page, so as to minimize WAL
+		 * traffic.
 		 */
-		if (ndeletable > 0)
+		if (ndeletable > 0 || nclearwarm > 0)
 		{
 			/*
 			 * Notice that the issued XLOG_BTREE_VACUUM WAL record includes
@@ -1301,8 +1354,8 @@ restart:
 			 * doesn't seem worth the amount of bookkeeping it'd take to avoid
 			 * that.
 			 */
-			_bt_delitems_vacuum(rel, buf, deletable, ndeletable,
-								vstate->lastBlockVacuumed);
+			_bt_handleitems_vacuum(rel, buf, deletable, ndeletable,
+								clearwarm, nclearwarm);
 
 			/*
 			 * Remember highest leaf page number we've issued a
@@ -1312,6 +1365,7 @@ restart:
 				vstate->lastBlockVacuumed = blkno;
 
 			stats->tuples_removed += ndeletable;
+			stats->pointers_cleared += nclearwarm;
 			/* must recompute maxoff */
 			maxoff = PageGetMaxOffsetNumber(page);
 		}
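The btvacuumpage() loop above now handles a three-way verdict per index tuple instead of the old boolean callback: keep, delete, or merely clear the WARM bit. A self-contained sketch of that partitioning (names are illustrative simplifications of the patch's IndexBulkDeleteCallbackResult, not the real API):

```c
#include <assert.h>

/* Sketch of the three-way verdict handling in the btvacuumpage() loop:
 * given per-offset callback results, collect offsets to delete and offsets
 * whose WARM bit should merely be cleared. */
typedef enum
{
	IBDCR_KEEP,
	IBDCR_DELETE,
	IBDCR_CLEAR_WARM
}			CbResult;

static int
partition_verdicts(const CbResult *verdicts, int n,
				   int *deletable, int *ndeletable, int *clearwarm)
{
	int			nclear = 0;

	*ndeletable = 0;
	for (int off = 0; off < n; off++)
	{
		if (verdicts[off] == IBDCR_DELETE)
			deletable[(*ndeletable)++] = off;
		else if (verdicts[off] == IBDCR_CLEAR_WARM)
			clearwarm[nclear++] = off;
		/* IBDCR_KEEP: leave the index tuple untouched */
	}
	return nclear;
}

/* Fixed five-offset page; returns ndeletable * 10 + nclearwarm. */
static int
partition_demo(void)
{
	CbResult	v[5] = {IBDCR_KEEP, IBDCR_DELETE, IBDCR_CLEAR_WARM,
						IBDCR_DELETE, IBDCR_KEEP};
	int			del[5],
				clr[5],
				ndel;
	int			nclr = partition_verdicts(v, 5, del, &ndel, clr);

	return ndel * 10 + nclr;
}
```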
diff --git b/src/backend/access/nbtree/nbtutils.c a/src/backend/access/nbtree/nbtutils.c
index 5b259a3..8dab5a8 100644
--- b/src/backend/access/nbtree/nbtutils.c
+++ a/src/backend/access/nbtree/nbtutils.c
@@ -20,11 +20,13 @@
 #include "access/nbtree.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "catalog/index.h"
 #include "miscadmin.h"
 #include "utils/array.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 
 typedef struct BTSortArrayContext
@@ -2069,3 +2071,64 @@ btproperty(Oid index_oid, int attno,
 			return false;		/* punt to generic code */
 	}
 }
+
+/*
+ * Check if the index tuple's key matches the key computed from the given
+ * heap tuple's attributes.
+ */
+bool
+btrecheck(Relation indexRel, IndexInfo *indexInfo, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	bool		isavail[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+
+	FormIndexPlainDatum(indexInfo, heapRel, heapTuple, values, isnull, isavail);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue;
+		bool	indxisnull;
+
+		/* No need to compare if the attribute value is not available */
+		if (!isavail[i - 1])
+			continue;
+
+		indxvalue = index_getattr(indexTuple, i, indexRel->rd_att, &indxisnull);
+
+		/*
+		 * If both are NULL, then they are equal
+		 */
+		if (isnull[i - 1] && indxisnull)
+			continue;
+
+		/*
+		 * If just one is NULL, then they are not equal
+		 */
+		if (isnull[i - 1] || indxisnull)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now just do a raw memory comparison. If the index tuple was formed
+		 * using this heap tuple, the computed index values must match.
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(values[i - 1], indxvalue, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	return equal;
+}
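btrecheck() above reduces to three per-attribute rules: two NULLs match, exactly one NULL is a mismatch, and otherwise the raw values are compared. A simplified standalone sketch, using strings in place of Datums (strcmp stands in for datumIsEqual; these names are for illustration only):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Simplified model of btrecheck()'s per-attribute comparison.  The real
 * code uses datumIsEqual with the attribute's byval/length; here strings
 * stand in for Datums. */
static bool
recheck_attrs(const char **heap_vals, const bool *heap_nulls,
			  const char **index_vals, const bool *index_nulls, int natts)
{
	for (int i = 0; i < natts; i++)
	{
		if (heap_nulls[i] && index_nulls[i])
			continue;			/* both NULL: treated as equal */
		if (heap_nulls[i] || index_nulls[i])
			return false;		/* one NULL, one not: not equal */
		if (strcmp(heap_vals[i], index_vals[i]) != 0)
			return false;		/* values differ */
	}
	return true;				/* index tuple still matches the heap tuple */
}

/* Single-attribute convenience wrapper for the demo. */
static bool
recheck_one(const char *heap_val, bool heap_null,
			const char *index_val, bool index_null)
{
	const char *hv[1] = {heap_val};
	const char *iv[1] = {index_val};
	bool		hn[1] = {heap_null};
	bool		inull[1] = {index_null};

	return recheck_attrs(hv, hn, iv, inull, 1);
}
```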
diff --git b/src/backend/access/nbtree/nbtxlog.c a/src/backend/access/nbtree/nbtxlog.c
index ac60db0..92be5c8 100644
--- b/src/backend/access/nbtree/nbtxlog.c
+++ a/src/backend/access/nbtree/nbtxlog.c
@@ -390,83 +390,9 @@ btree_xlog_vacuum(XLogReaderState *record)
 	Buffer		buffer;
 	Page		page;
 	BTPageOpaque opaque;
-#ifdef UNUSED
 	xl_btree_vacuum *xlrec = (xl_btree_vacuum *) XLogRecGetData(record);
 
 	/*
-	 * This section of code is thought to be no longer needed, after analysis
-	 * of the calling paths. It is retained to allow the code to be reinstated
-	 * if a flaw is revealed in that thinking.
-	 *
-	 * If we are running non-MVCC scans using this index we need to do some
-	 * additional work to ensure correctness, which is known as a "pin scan"
-	 * described in more detail in next paragraphs. We used to do the extra
-	 * work in all cases, whereas we now avoid that work in most cases. If
-	 * lastBlockVacuumed is set to InvalidBlockNumber then we skip the
-	 * additional work required for the pin scan.
-	 *
-	 * Avoiding this extra work is important since it requires us to touch
-	 * every page in the index, so is an O(N) operation. Worse, it is an
-	 * operation performed in the foreground during redo, so it delays
-	 * replication directly.
-	 *
-	 * If queries might be active then we need to ensure every leaf page is
-	 * unpinned between the lastBlockVacuumed and the current block, if there
-	 * are any.  This prevents replay of the VACUUM from reaching the stage of
-	 * removing heap tuples while there could still be indexscans "in flight"
-	 * to those particular tuples for those scans which could be confused by
-	 * finding new tuples at the old TID locations (see nbtree/README).
-	 *
-	 * It might be worth checking if there are actually any backends running;
-	 * if not, we could just skip this.
-	 *
-	 * Since VACUUM can visit leaf pages out-of-order, it might issue records
-	 * with lastBlockVacuumed >= block; that's not an error, it just means
-	 * nothing to do now.
-	 *
-	 * Note: since we touch all pages in the range, we will lock non-leaf
-	 * pages, and also any empty (all-zero) pages that may be in the index. It
-	 * doesn't seem worth the complexity to avoid that.  But it's important
-	 * that HotStandbyActiveInReplay() will not return true if the database
-	 * isn't yet consistent; so we need not fear reading still-corrupt blocks
-	 * here during crash recovery.
-	 */
-	if (HotStandbyActiveInReplay() && BlockNumberIsValid(xlrec->lastBlockVacuumed))
-	{
-		RelFileNode thisrnode;
-		BlockNumber thisblkno;
-		BlockNumber blkno;
-
-		XLogRecGetBlockTag(record, 0, &thisrnode, NULL, &thisblkno);
-
-		for (blkno = xlrec->lastBlockVacuumed + 1; blkno < thisblkno; blkno++)
-		{
-			/*
-			 * We use RBM_NORMAL_NO_LOG mode because it's not an error
-			 * condition to see all-zero pages.  The original btvacuumpage
-			 * scan would have skipped over all-zero pages, noting them in FSM
-			 * but not bothering to initialize them just yet; so we mustn't
-			 * throw an error here.  (We could skip acquiring the cleanup lock
-			 * if PageIsNew, but it's probably not worth the cycles to test.)
-			 *
-			 * XXX we don't actually need to read the block, we just need to
-			 * confirm it is unpinned. If we had a special call into the
-			 * buffer manager we could optimise this so that if the block is
-			 * not in shared_buffers we confirm it as unpinned. Optimizing
-			 * this is now moot, since in most cases we avoid the scan.
-			 */
-			buffer = XLogReadBufferExtended(thisrnode, MAIN_FORKNUM, blkno,
-											RBM_NORMAL_NO_LOG);
-			if (BufferIsValid(buffer))
-			{
-				LockBufferForCleanup(buffer);
-				UnlockReleaseBuffer(buffer);
-			}
-		}
-	}
-#endif
-
-	/*
 	 * Like in btvacuumpage(), we need to take a cleanup lock on every leaf
 	 * page. See nbtree/README for details.
 	 */
@@ -482,19 +408,30 @@ btree_xlog_vacuum(XLogReaderState *record)
 
 		if (len > 0)
 		{
-			OffsetNumber *unused;
-			OffsetNumber *unend;
+			OffsetNumber *offnums = (OffsetNumber *) ptr;
 
-			unused = (OffsetNumber *) ptr;
-			unend = (OffsetNumber *) ((char *) ptr + len);
+			/*
+			 * Clear the WARM pointers.
+			 *
+			 * We must do this before dealing with the dead items because
+			 * PageIndexMultiDelete may move items around to compactify the
+			 * array and hence offnums recorded earlier won't make any sense
+			 * after PageIndexMultiDelete is called.
+			 */
+			if (xlrec->nclearitems > 0)
+				_bt_clear_items(page, offnums + xlrec->ndelitems,
+						xlrec->nclearitems);
 
-			if ((unend - unused) > 0)
-				PageIndexMultiDelete(page, unused, unend - unused);
+			/*
+			 * And handle the deleted items too
+			 */
+			if (xlrec->ndelitems > 0)
+				PageIndexMultiDelete(page, offnums, xlrec->ndelitems);
 		}
 
 		/*
 		 * Mark the page as not containing any LP_DEAD items --- see comments
-		 * in _bt_delitems_vacuum().
+		 * in _bt_handleitems_vacuum().
 		 */
 		opaque = (BTPageOpaque) PageGetSpecialPointer(page);
 		opaque->btpo_flags &= ~BTP_HAS_GARBAGE;
diff --git b/src/backend/access/rmgrdesc/heapdesc.c a/src/backend/access/rmgrdesc/heapdesc.c
index 44d2d63..d373e61 100644
--- b/src/backend/access/rmgrdesc/heapdesc.c
+++ a/src/backend/access/rmgrdesc/heapdesc.c
@@ -44,6 +44,12 @@ heap_desc(StringInfo buf, XLogReaderState *record)
 
 		appendStringInfo(buf, "off %u", xlrec->offnum);
 	}
+	else if (info == XLOG_HEAP_MULTI_INSERT)
+	{
+		xl_heap_multi_insert *xlrec = (xl_heap_multi_insert *) rec;
+
+		appendStringInfo(buf, "%d tuples", xlrec->ntuples);
+	}
 	else if (info == XLOG_HEAP_DELETE)
 	{
 		xl_heap_delete *xlrec = (xl_heap_delete *) rec;
@@ -102,7 +108,7 @@ heap2_desc(StringInfo buf, XLogReaderState *record)
 	char	   *rec = XLogRecGetData(record);
 	uint8		info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
 
-	info &= XLOG_HEAP_OPMASK;
+	info &= XLOG_HEAP2_OPMASK;
 	if (info == XLOG_HEAP2_CLEAN)
 	{
 		xl_heap_clean *xlrec = (xl_heap_clean *) rec;
@@ -129,12 +135,6 @@ heap2_desc(StringInfo buf, XLogReaderState *record)
 		appendStringInfo(buf, "cutoff xid %u flags %d",
 						 xlrec->cutoff_xid, xlrec->flags);
 	}
-	else if (info == XLOG_HEAP2_MULTI_INSERT)
-	{
-		xl_heap_multi_insert *xlrec = (xl_heap_multi_insert *) rec;
-
-		appendStringInfo(buf, "%d tuples", xlrec->ntuples);
-	}
 	else if (info == XLOG_HEAP2_LOCK_UPDATED)
 	{
 		xl_heap_lock_updated *xlrec = (xl_heap_lock_updated *) rec;
@@ -171,6 +171,12 @@ heap_identify(uint8 info)
 		case XLOG_HEAP_INSERT | XLOG_HEAP_INIT_PAGE:
 			id = "INSERT+INIT";
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			id = "MULTI_INSERT";
+			break;
+		case XLOG_HEAP_MULTI_INSERT | XLOG_HEAP_INIT_PAGE:
+			id = "MULTI_INSERT+INIT";
+			break;
 		case XLOG_HEAP_DELETE:
 			id = "DELETE";
 			break;
@@ -219,12 +225,6 @@ heap2_identify(uint8 info)
 		case XLOG_HEAP2_VISIBLE:
 			id = "VISIBLE";
 			break;
-		case XLOG_HEAP2_MULTI_INSERT:
-			id = "MULTI_INSERT";
-			break;
-		case XLOG_HEAP2_MULTI_INSERT | XLOG_HEAP_INIT_PAGE:
-			id = "MULTI_INSERT+INIT";
-			break;
 		case XLOG_HEAP2_LOCK_UPDATED:
 			id = "LOCK_UPDATED";
 			break;
diff --git b/src/backend/access/rmgrdesc/nbtdesc.c a/src/backend/access/rmgrdesc/nbtdesc.c
index fbde9d6..6b2c5d6 100644
--- b/src/backend/access/rmgrdesc/nbtdesc.c
+++ a/src/backend/access/rmgrdesc/nbtdesc.c
@@ -48,8 +48,8 @@ btree_desc(StringInfo buf, XLogReaderState *record)
 			{
 				xl_btree_vacuum *xlrec = (xl_btree_vacuum *) rec;
 
-				appendStringInfo(buf, "lastBlockVacuumed %u",
-								 xlrec->lastBlockVacuumed);
+				appendStringInfo(buf, "ndelitems %u, nclearitems %u",
+								 xlrec->ndelitems, xlrec->nclearitems);
 				break;
 			}
 		case XLOG_BTREE_DELETE:
diff --git b/src/backend/access/spgist/spgutils.c a/src/backend/access/spgist/spgutils.c
index e57ac49..59ef7f3 100644
--- b/src/backend/access/spgist/spgutils.c
+++ a/src/backend/access/spgist/spgutils.c
@@ -72,6 +72,7 @@ spghandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git b/src/backend/access/spgist/spgvacuum.c a/src/backend/access/spgist/spgvacuum.c
index cce9b3f..711d351 100644
--- b/src/backend/access/spgist/spgvacuum.c
+++ a/src/backend/access/spgist/spgvacuum.c
@@ -155,7 +155,8 @@ vacuumLeafPage(spgBulkDeleteState *bds, Relation index, Buffer buffer,
 		{
 			Assert(ItemPointerIsValid(&lt->heapPtr));
 
-			if (bds->callback(&lt->heapPtr, bds->callback_state))
+			if (bds->callback(&lt->heapPtr, false, bds->callback_state) ==
+					IBDCR_DELETE)
 			{
 				bds->stats->tuples_removed += 1;
 				deletable[i] = true;
@@ -425,7 +426,8 @@ vacuumLeafRoot(spgBulkDeleteState *bds, Relation index, Buffer buffer)
 		{
 			Assert(ItemPointerIsValid(&lt->heapPtr));
 
-			if (bds->callback(&lt->heapPtr, bds->callback_state))
+			if (bds->callback(&lt->heapPtr, false, bds->callback_state) ==
+					IBDCR_DELETE)
 			{
 				bds->stats->tuples_removed += 1;
 				toDelete[xlrec.nDelete] = i;
@@ -902,10 +904,10 @@ spgbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 }
 
 /* Dummy callback to delete no tuples during spgvacuumcleanup */
-static bool
-dummy_callback(ItemPointer itemptr, void *state)
+static IndexBulkDeleteCallbackResult
+dummy_callback(ItemPointer itemptr, bool is_warm, void *state)
 {
-	return false;
+	return IBDCR_KEEP;
 }
 
 /*
diff --git b/src/backend/catalog/index.c a/src/backend/catalog/index.c
index 8d42a34..67e68d1 100644
--- b/src/backend/catalog/index.c
+++ a/src/backend/catalog/index.c
@@ -54,6 +54,7 @@
 #include "nodes/makefuncs.h"
 #include "nodes/nodeFuncs.h"
 #include "optimizer/clauses.h"
+#include "optimizer/var.h"
 #include "parser/parser.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
@@ -114,7 +115,7 @@ static void IndexCheckExclusion(Relation heapRelation,
 					IndexInfo *indexInfo);
 static inline int64 itemptr_encode(ItemPointer itemptr);
 static inline void itemptr_decode(ItemPointer itemptr, int64 encoded);
-static bool validate_index_callback(ItemPointer itemptr, void *opaque);
+static IndexBulkDeleteCallbackResult validate_index_callback(ItemPointer itemptr, bool is_warm, void *opaque);
 static void validate_index_heapscan(Relation heapRelation,
 						Relation indexRelation,
 						IndexInfo *indexInfo,
@@ -1691,6 +1692,20 @@ BuildIndexInfo(Relation index)
 	ii->ii_AmCache = NULL;
 	ii->ii_Context = CurrentMemoryContext;
 
+	/* build a bitmap of all table attributes referred by this index */
+	for (i = 0; i < ii->ii_NumIndexAttrs; i++)
+	{
+		AttrNumber attr = ii->ii_KeyAttrNumbers[i];
+		ii->ii_indxattrs = bms_add_member(ii->ii_indxattrs, attr -
+				FirstLowInvalidHeapAttributeNumber);
+	}
+
+	/* Collect all attributes used in expressions, too */
+	pull_varattnos((Node *) ii->ii_Expressions, 1, &ii->ii_indxattrs);
+
+	/* Collect all attributes in the index predicate, too */
+	pull_varattnos((Node *) ii->ii_Predicate, 1, &ii->ii_indxattrs);
+
 	return ii;
 }
 
@@ -1816,6 +1831,50 @@ FormIndexDatum(IndexInfo *indexInfo,
 		elog(ERROR, "wrong number of index expressions");
 }
 
+/*
+ * This is the same as FormIndexDatum, but we don't compute any expression
+ * attributes and hence it can be used when executor interfaces are not
+ * available. If the i'th attribute is available then isavail[i] is set to
+ * true, else it is set to false. The caller must always check that an
+ * attribute value is available before trying to use it.
+ */
+void
+FormIndexPlainDatum(IndexInfo *indexInfo,
+			   Relation heapRel,
+			   HeapTuple heapTup,
+			   Datum *values,
+			   bool *isnull,
+			   bool *isavail)
+{
+	int			i;
+
+	for (i = 0; i < indexInfo->ii_NumIndexAttrs; i++)
+	{
+		int			keycol = indexInfo->ii_KeyAttrNumbers[i];
+		Datum		iDatum;
+		bool		isNull;
+
+		if (keycol != 0)
+		{
+			/*
+			 * Plain index column; get the value we need directly from the
+			 * heap tuple.
+			 */
+			iDatum = heap_getattr(heapTup, keycol, RelationGetDescr(heapRel), &isNull);
+			values[i] = iDatum;
+			isnull[i] = isNull;
+			isavail[i] = true;
+		}
+		else
+		{
+			/*
+			 * This is an expression attribute which can't be computed by us.
+			 * So just inform the caller about it.
+			 */
+			isavail[i] = false;
+		}
+	}
+}
 
 /*
  * index_update_stats --- update pg_class entry after CREATE INDEX or REINDEX
@@ -2934,15 +2993,15 @@ itemptr_decode(ItemPointer itemptr, int64 encoded)
 /*
  * validate_index_callback - bulkdelete callback to collect the index TIDs
  */
-static bool
-validate_index_callback(ItemPointer itemptr, void *opaque)
+static IndexBulkDeleteCallbackResult
+validate_index_callback(ItemPointer itemptr, bool is_warm, void *opaque)
 {
 	v_i_state  *state = (v_i_state *) opaque;
 	int64		encoded = itemptr_encode(itemptr);
 
 	tuplesort_putdatum(state->tuplesort, Int64GetDatum(encoded), false);
 	state->itups += 1;
-	return false;				/* never actually delete anything */
+	return IBDCR_KEEP;				/* never actually delete anything */
 }
 
 /*
@@ -3163,7 +3222,8 @@ validate_index_heapscan(Relation heapRelation,
 						 heapRelation,
 						 indexInfo->ii_Unique ?
 						 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-						 indexInfo);
+						 indexInfo,
+						 false);
 
 			state->tups_inserted += 1;
 		}
diff --git b/src/backend/catalog/indexing.c a/src/backend/catalog/indexing.c
index abc344a..6392f33 100644
--- b/src/backend/catalog/indexing.c
+++ a/src/backend/catalog/indexing.c
@@ -66,10 +66,15 @@ CatalogCloseIndexes(CatalogIndexState indstate)
  *
  * This should be called for each inserted or updated catalog tuple.
  *
+ * If the tuple was WARM updated, modified_attrs contains the set of
+ * columns modified by the update. We must not insert new index entries for
+ * indexes which do not refer to any of the modified columns.
+ *
  * This is effectively a cut-down version of ExecInsertIndexTuples.
  */
 static void
-CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
+CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple,
+		Bitmapset *modified_attrs, bool warm_update)
 {
 	int			i;
 	int			numIndexes;
@@ -79,12 +84,28 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 	IndexInfo **indexInfoArray;
 	Datum		values[INDEX_MAX_KEYS];
 	bool		isnull[INDEX_MAX_KEYS];
+	ItemPointerData root_tid;
 
-	/* HOT update does not require index inserts */
-	if (HeapTupleIsHeapOnly(heapTuple))
+	/*
+	 * A HOT update does not require index inserts, but a WARM update may
+	 * require them for some indexes.
+	 */
+	if (HeapTupleIsHeapOnly(heapTuple) && !warm_update)
 		return;
 
 	/*
+	 * If we've done a WARM update, then we must index the TID of the root line
+	 * pointer and not the actual TID of the new tuple.
+	 */
+	if (warm_update)
+		ItemPointerSet(&root_tid,
+				ItemPointerGetBlockNumber(&(heapTuple->t_self)),
+				HeapTupleHeaderGetRootOffset(heapTuple->t_data));
+	else
+		ItemPointerCopy(&heapTuple->t_self, &root_tid);
+
+
+	/*
 	 * Get information from the state structure.  Fall out if nothing to do.
 	 */
 	numIndexes = indstate->ri_NumIndices;
@@ -112,6 +133,17 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 			continue;
 
 		/*
+		 * If we've done a WARM update, then we must not insert a new index
+		 * tuple if none of the index keys have changed. This is not just an
+		 * optimization, but a requirement for WARM to work correctly.
+		 */
+		if (warm_update)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
+		/*
 		 * Expressional and partial indexes on system catalogs are not
 		 * supported, nor exclusion constraints, nor deferred uniqueness
 		 */
@@ -136,11 +168,12 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 		index_insert(relationDescs[i],	/* index relation */
 					 values,	/* array of index Datums */
 					 isnull,	/* is-null flags */
-					 &(heapTuple->t_self),		/* tid of heap tuple */
+					 &root_tid,
 					 heapRelation,
 					 relationDescs[i]->rd_index->indisunique ?
 					 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-					 indexInfo);
+					 indexInfo,
+					 warm_update);
 	}
 
 	ExecDropSingleTupleTableSlot(slot);
@@ -168,7 +201,7 @@ CatalogTupleInsert(Relation heapRel, HeapTuple tup)
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 	CatalogCloseIndexes(indstate);
 
 	return oid;
@@ -190,7 +223,7 @@ CatalogTupleInsertWithInfo(Relation heapRel, HeapTuple tup,
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 
 	return oid;
 }
@@ -210,12 +243,14 @@ void
 CatalogTupleUpdate(Relation heapRel, ItemPointer otid, HeapTuple tup)
 {
 	CatalogIndexState indstate;
+	bool	warm_update;
+	Bitmapset	*modified_attrs;
 
 	indstate = CatalogOpenIndexes(heapRel);
 
-	simple_heap_update(heapRel, otid, tup);
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, modified_attrs, warm_update);
 	CatalogCloseIndexes(indstate);
 }
 
@@ -231,9 +266,12 @@ void
 CatalogTupleUpdateWithInfo(Relation heapRel, ItemPointer otid, HeapTuple tup,
 						   CatalogIndexState indstate)
 {
-	simple_heap_update(heapRel, otid, tup);
+	Bitmapset  *modified_attrs;
+	bool		warm_update;
+
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, modified_attrs, warm_update);
 }
 
 /*
diff --git b/src/backend/catalog/system_views.sql a/src/backend/catalog/system_views.sql
index b6552da..15d0fe4 100644
--- b/src/backend/catalog/system_views.sql
+++ a/src/backend/catalog/system_views.sql
@@ -498,6 +498,7 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_tuples_deleted(C.oid) AS n_tup_del,
             pg_stat_get_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_tuples_warm_updated(C.oid) AS n_tup_warm_upd,
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
@@ -528,7 +529,8 @@ CREATE VIEW pg_stat_xact_all_tables AS
             pg_stat_get_xact_tuples_inserted(C.oid) AS n_tup_ins,
             pg_stat_get_xact_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_xact_tuples_deleted(C.oid) AS n_tup_del,
-            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd
+            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_xact_tuples_warm_updated(C.oid) AS n_tup_warm_upd
     FROM pg_class C LEFT JOIN
          pg_index I ON C.oid = I.indrelid
          LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
diff --git b/src/backend/commands/constraint.c a/src/backend/commands/constraint.c
index e2544e5..330b661 100644
--- b/src/backend/commands/constraint.c
+++ a/src/backend/commands/constraint.c
@@ -40,6 +40,7 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	TriggerData *trigdata = castNode(TriggerData, fcinfo->context);
 	const char *funcname = "unique_key_recheck";
 	HeapTuple	new_row;
+	HeapTupleData heapTuple;
 	ItemPointerData tmptid;
 	Relation	indexRel;
 	IndexInfo  *indexInfo;
@@ -102,7 +103,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	 * removed.
 	 */
 	tmptid = new_row->t_self;
-	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL))
+	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL,
+				NULL, NULL, &heapTuple))
 	{
 		/*
 		 * All rows in the HOT chain are dead, so skip the check.
@@ -166,7 +168,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 		 */
 		index_insert(indexRel, values, isnull, &(new_row->t_self),
 					 trigdata->tg_relation, UNIQUE_CHECK_EXISTING,
-					 indexInfo);
+					 indexInfo,
+					 false);
 	}
 	else
 	{
diff --git b/src/backend/commands/copy.c a/src/backend/commands/copy.c
index ba89b29..120e261 100644
--- b/src/backend/commands/copy.c
+++ a/src/backend/commands/copy.c
@@ -2681,6 +2681,8 @@ CopyFrom(CopyState cstate)
 					if (resultRelInfo->ri_NumIndices > 0)
 						recheckIndexes = ExecInsertIndexTuples(slot,
 															&(tuple->t_self),
+															&(tuple->t_self),
+															NULL,
 															   estate,
 															   false,
 															   NULL,
@@ -2835,6 +2837,7 @@ CopyFromInsertBatch(CopyState cstate, EState *estate, CommandId mycid,
 			ExecStoreTuple(bufferedTuples[i], myslot, InvalidBuffer, false);
 			recheckIndexes =
 				ExecInsertIndexTuples(myslot, &(bufferedTuples[i]->t_self),
+									  &(bufferedTuples[i]->t_self), NULL,
 									  estate, false, NULL, NIL);
 			ExecARInsertTriggers(estate, resultRelInfo,
 								 bufferedTuples[i],
diff --git b/src/backend/commands/indexcmds.c a/src/backend/commands/indexcmds.c
index 9618032..1b2abd4 100644
--- b/src/backend/commands/indexcmds.c
+++ a/src/backend/commands/indexcmds.c
@@ -694,7 +694,14 @@ DefineIndex(Oid relationId,
 	 * visible to other transactions before we start to build the index. That
 	 * will prevent them from making incompatible HOT updates.  The new index
 	 * will be marked not indisready and not indisvalid, so that no one else
-	 * tries to either insert into it or use it for queries.
+	 * tries to either insert into it or use it for queries. In addition to
+	 * that, WARM updates will be disallowed if an update is modifying one of
+	 * the columns used by this new index. This is necessary to ensure that we
+	 * don't create WARM tuples which do not have a corresponding entry in this
+	 * index. Note that during the second phase, we will index only those heap
+	 * tuples whose root line pointer is not already in the index; hence it is
+	 * important that all tuples in a given chain have the same value for every
+	 * indexed column (including those of this new index).
 	 *
 	 * We must commit our current transaction so that the index becomes
 	 * visible; then start another.  Note that all the data structures we just
@@ -742,7 +749,10 @@ DefineIndex(Oid relationId,
 	 * marked as "not-ready-for-inserts".  The index is consulted while
 	 * deciding HOT-safety though.  This arrangement ensures that no new HOT
 	 * chains can be created where the new tuple and the old tuple in the
-	 * chain have different index keys.
+	 * chain have different index keys. Also, the new index is consulted when
+	 * deciding whether a WARM update is possible, and a WARM update is not
+	 * done if a column used by this index is being updated. This ensures that
+	 * we don't create WARM tuples which are not indexed by this index.
 	 *
 	 * We now take a new snapshot, and build the index using all tuples that
 	 * are visible in this snapshot.  We can be sure that any HOT updates to
@@ -777,7 +787,8 @@ DefineIndex(Oid relationId,
 	/*
 	 * Update the pg_index row to mark the index as ready for inserts. Once we
 	 * commit this transaction, any new transactions that open the table must
-	 * insert new entries into the index for insertions and non-HOT updates.
+	 * insert new entries into the index for insertions and non-HOT updates, or
+	 * for WARM updates where this index needs a new entry.
 	 */
 	index_set_state_flags(indexRelationId, INDEX_CREATE_SET_READY);
 
diff --git b/src/backend/commands/vacuumlazy.c a/src/backend/commands/vacuumlazy.c
index b74e493..025a024 100644
--- b/src/backend/commands/vacuumlazy.c
+++ a/src/backend/commands/vacuumlazy.c
@@ -104,6 +104,39 @@
  */
 #define PREFETCH_SIZE			((BlockNumber) 32)
 
+/*
+ * Structure to track WARM chains that can be converted into HOT chains during
+ * this run.
+ *
+ * To reduce the space requirement, we're using bitfields. But the way things
+ * are laid out, we're still wasting 1 byte per candidate chain.
+ */
+typedef struct LVWarmChain
+{
+	ItemPointerData	chain_tid;			/* root of the chain */
+
+	/*
+	 * 1 - if the chain contains only post-warm tuples
+	 * 0 - if the chain contains only pre-warm tuples
+	 */
+	uint8			is_postwarm_chain:2;
+
+	/* 1 - if this chain can't be cleared of WARM tuples */
+	uint8			keep_warm_chain:2;
+
+	/*
+	 * Number of CLEAR pointers to this root TID found so far - must never be
+	 * more than 2.
+	 */
+	uint8			num_clear_pointers:2;
+
+	/*
+	 * Number of WARM pointers to this root TID found so far - must never be
+	 * more than 1.
+	 */
+	uint8			num_warm_pointers:2;
+} LVWarmChain;
+
 typedef struct LVRelStats
 {
 	/* hasindex = true means two-pass strategy; false means one-pass */
@@ -122,6 +155,14 @@ typedef struct LVRelStats
 	BlockNumber pages_removed;
 	double		tuples_deleted;
 	BlockNumber nonempty_pages; /* actually, last nonempty page + 1 */
+
+	/* List of candidate WARM chains that can be converted into HOT chains */
+	/* NB: this list is ordered by TID of the root pointers */
+	int				num_warm_chains;	/* current # of entries */
+	int				max_warm_chains;	/* # slots allocated in array */
+	LVWarmChain 	*warm_chains;		/* array of LVWarmChain */
+	double			num_non_convertible_warm_chains;
+
 	/* List of TIDs of tuples we intend to delete */
 	/* NB: this list is ordered by TID address */
 	int			num_dead_tuples;	/* current # of entries */
@@ -150,6 +191,7 @@ static void lazy_scan_heap(Relation onerel, int options,
 static void lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats);
 static bool lazy_check_needs_freeze(Buffer buf, bool *hastup);
 static void lazy_vacuum_index(Relation indrel,
+				  bool clear_warm,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats);
 static void lazy_cleanup_index(Relation indrel,
@@ -157,6 +199,10 @@ static void lazy_cleanup_index(Relation indrel,
 				   LVRelStats *vacrelstats);
 static int lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 				 int tupindex, LVRelStats *vacrelstats, Buffer *vmbuffer);
+static int lazy_warmclear_page(Relation onerel, BlockNumber blkno,
+				 Buffer buffer, int chainindex, LVRelStats *vacrelstats,
+				 Buffer *vmbuffer, bool check_all_visible);
+static void lazy_reset_warm_pointer_count(LVRelStats *vacrelstats);
 static bool should_attempt_truncation(LVRelStats *vacrelstats);
 static void lazy_truncate_heap(Relation onerel, LVRelStats *vacrelstats);
 static BlockNumber count_nondeletable_pages(Relation onerel,
@@ -164,8 +210,15 @@ static BlockNumber count_nondeletable_pages(Relation onerel,
 static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks);
 static void lazy_record_dead_tuple(LVRelStats *vacrelstats,
 					   ItemPointer itemptr);
-static bool lazy_tid_reaped(ItemPointer itemptr, void *state);
+static void lazy_record_warm_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr);
+static void lazy_record_clear_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr);
+static IndexBulkDeleteCallbackResult lazy_tid_reaped(ItemPointer itemptr, bool is_warm, void *state);
+static IndexBulkDeleteCallbackResult lazy_indexvac_phase1(ItemPointer itemptr, bool is_warm, void *state);
+static IndexBulkDeleteCallbackResult lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state);
 static int	vac_cmp_itemptr(const void *left, const void *right);
+static int vac_cmp_warm_chain(const void *left, const void *right);
 static bool heap_page_is_all_visible(Relation rel, Buffer buf,
 					 TransactionId *visibility_cutoff_xid, bool *all_frozen);
 
@@ -690,8 +743,10 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		 * If we are close to overrunning the available space for dead-tuple
 		 * TIDs, pause and do a cycle of vacuuming before we tackle this page.
 		 */
-		if ((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
-			vacrelstats->num_dead_tuples > 0)
+		if (((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
+			vacrelstats->num_dead_tuples > 0) ||
+			((vacrelstats->max_warm_chains - vacrelstats->num_warm_chains) < MaxHeapTuplesPerPage &&
+			 vacrelstats->num_warm_chains > 0))
 		{
 			const int	hvp_index[] = {
 				PROGRESS_VACUUM_PHASE,
@@ -721,6 +776,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			/* Remove index entries */
 			for (i = 0; i < nindexes; i++)
 				lazy_vacuum_index(Irel[i],
+								  (vacrelstats->num_warm_chains > 0),
 								  &indstats[i],
 								  vacrelstats);
 
@@ -743,6 +799,9 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			 * valid.
 			 */
 			vacrelstats->num_dead_tuples = 0;
+			vacrelstats->num_warm_chains = 0;
+			memset(vacrelstats->warm_chains, 0,
+					vacrelstats->max_warm_chains * sizeof (LVWarmChain));
 			vacrelstats->num_index_scans++;
 
 			/* Report that we are once again scanning the heap */
@@ -947,15 +1006,31 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 				continue;
 			}
 
+			ItemPointerSet(&(tuple.t_self), blkno, offnum);
+
 			/* Redirect items mustn't be touched */
 			if (ItemIdIsRedirected(itemid))
 			{
+				HeapCheckWarmChainStatus status = heap_check_warm_chain(page,
+						&tuple.t_self, false);
+				if (HCWC_IS_WARM_UPDATED(status))
+				{
+					/*
+					 * A chain which is either completely WARM or completely
+					 * CLEAR is a candidate for chain conversion. Remember the
+					 * chain and whether all its tuples are WARM or not.
+					 */
+					if (HCWC_IS_ALL_WARM(status))
+						lazy_record_warm_chain(vacrelstats, &tuple.t_self);
+					else if (HCWC_IS_ALL_CLEAR(status))
+						lazy_record_clear_chain(vacrelstats, &tuple.t_self);
+					else
+						vacrelstats->num_non_convertible_warm_chains++;
+				}
 				hastup = true;	/* this page won't be truncatable */
 				continue;
 			}
 
-			ItemPointerSet(&(tuple.t_self), blkno, offnum);
-
 			/*
 			 * DEAD item pointers are to be vacuumed normally; but we don't
 			 * count them in tups_vacuumed, else we'd be double-counting (at
@@ -975,6 +1050,26 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			tuple.t_len = ItemIdGetLength(itemid);
 			tuple.t_tableOid = RelationGetRelid(onerel);
 
+			if (!HeapTupleIsHeapOnly(&tuple))
+			{
+				HeapCheckWarmChainStatus status = heap_check_warm_chain(page,
+						&tuple.t_self, false);
+				if (HCWC_IS_WARM_UPDATED(status))
+				{
+					/*
+					 * A chain which is either completely WARM or completely
+					 * CLEAR is a candidate for chain conversion. Remember the
+					 * chain and whether it is all-WARM or all-CLEAR.
+					 */
+					if (HCWC_IS_ALL_WARM(status))
+						lazy_record_warm_chain(vacrelstats, &tuple.t_self);
+					else if (HCWC_IS_ALL_CLEAR(status))
+						lazy_record_clear_chain(vacrelstats, &tuple.t_self);
+					else
+						vacrelstats->num_non_convertible_warm_chains++;
+				}
+			}
+
 			tupgone = false;
 
 			switch (HeapTupleSatisfiesVacuum(&tuple, OldestXmin, buf))
@@ -1040,6 +1135,19 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 							break;
 						}
 
+						/*
+						 * If this tuple was ever WARM updated or is a WARM
+						 * tuple, there could be multiple index entries
+						 * pointing to the root of this chain. We can't do
+						 * index-only scans for such tuples without rechecking
+						 * the index keys, so mark the page as !all_visible.
+						 */
+						if (HeapTupleHeaderIsWarmUpdated(tuple.t_data))
+						{
+							all_visible = false;
+							break;
+						}
+
 						/* Track newest xmin on page. */
 						if (TransactionIdFollows(xmin, visibility_cutoff_xid))
 							visibility_cutoff_xid = xmin;
@@ -1282,7 +1390,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 
 	/* If any tuples need to be deleted, perform final vacuum cycle */
 	/* XXX put a threshold on min number of tuples here? */
-	if (vacrelstats->num_dead_tuples > 0)
+	if (vacrelstats->num_dead_tuples > 0 || vacrelstats->num_warm_chains > 0)
 	{
 		const int	hvp_index[] = {
 			PROGRESS_VACUUM_PHASE,
@@ -1300,6 +1408,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		/* Remove index entries */
 		for (i = 0; i < nindexes; i++)
 			lazy_vacuum_index(Irel[i],
+							  (vacrelstats->num_warm_chains > 0),
 							  &indstats[i],
 							  vacrelstats);
 
@@ -1367,7 +1476,10 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
  *
  *		This routine marks dead tuples as unused and compacts out free
  *		space on their pages.  Pages not having dead tuples recorded from
- *		lazy_scan_heap are not visited at all.
+ *		lazy_scan_heap are not visited at all. This routine also converts
+ *		candidate WARM chains to HOT chains by clearing WARM-related flags. The
+ *		candidate chains are determined by the preceding index scans after
+ *		looking at the data collected by the first heap scan.
  *
  * Note: the reason for doing this as a second pass is we cannot remove
  * the tuples until we've removed their index entries, and we want to
@@ -1376,7 +1488,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 static void
 lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 {
-	int			tupindex;
+	int			tupindex, chainindex;
 	int			npages;
 	PGRUsage	ru0;
 	Buffer		vmbuffer = InvalidBuffer;
@@ -1385,33 +1497,69 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 	npages = 0;
 
 	tupindex = 0;
-	while (tupindex < vacrelstats->num_dead_tuples)
+	chainindex = 0;
+	while (tupindex < vacrelstats->num_dead_tuples ||
+		   chainindex < vacrelstats->num_warm_chains)
 	{
-		BlockNumber tblk;
+		BlockNumber tblk, chainblk, vacblk;
 		Buffer		buf;
 		Page		page;
 		Size		freespace;
 
 		vacuum_delay_point();
 
-		tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
-		buf = ReadBufferExtended(onerel, MAIN_FORKNUM, tblk, RBM_NORMAL,
+		tblk = chainblk = InvalidBlockNumber;
+		if (chainindex < vacrelstats->num_warm_chains)
+			chainblk =
+				ItemPointerGetBlockNumber(&(vacrelstats->warm_chains[chainindex].chain_tid));
+
+		if (tupindex < vacrelstats->num_dead_tuples)
+			tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
+
+		if (tblk == InvalidBlockNumber)
+			vacblk = chainblk;
+		else if (chainblk == InvalidBlockNumber)
+			vacblk = tblk;
+		else
+			vacblk = Min(chainblk, tblk);
+
+		Assert(vacblk != InvalidBlockNumber);
+
+		buf = ReadBufferExtended(onerel, MAIN_FORKNUM, vacblk, RBM_NORMAL,
 								 vac_strategy);
-		if (!ConditionalLockBufferForCleanup(buf))
+
+
+		if (vacblk == chainblk)
+			LockBufferForCleanup(buf);
+		else if (!ConditionalLockBufferForCleanup(buf))
 		{
 			ReleaseBuffer(buf);
 			++tupindex;
 			continue;
 		}
-		tupindex = lazy_vacuum_page(onerel, tblk, buf, tupindex, vacrelstats,
-									&vmbuffer);
+
+		/*
+		 * Convert WARM chains on this page. This should be done before
+		 * vacuuming the page to ensure that we can correctly set visibility
+		 * bits after clearing WARM chains.
+		 *
+		 * If we are going to vacuum this page then don't check for
+		 * all-visibility just yet.
+		 */
+		if (vacblk == chainblk)
+			chainindex = lazy_warmclear_page(onerel, chainblk, buf, chainindex,
+					vacrelstats, &vmbuffer, chainblk != tblk);
+
+		if (vacblk == tblk)
+			tupindex = lazy_vacuum_page(onerel, tblk, buf, tupindex, vacrelstats,
+					&vmbuffer);
 
 		/* Now that we've compacted the page, record its available space */
 		page = BufferGetPage(buf);
 		freespace = PageGetHeapFreeSpace(page);
 
 		UnlockReleaseBuffer(buf);
-		RecordPageWithFreeSpace(onerel, tblk, freespace);
+		RecordPageWithFreeSpace(onerel, vacblk, freespace);
 		npages++;
 	}
 
@@ -1430,6 +1578,107 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 }
 
 /*
+ *	lazy_warmclear_page() -- clear various WARM bits on the tuples.
+ *
+ * Caller must hold pin and buffer cleanup lock on the buffer.
+ *
+ * chainindex is the index in vacrelstats->warm_chains of the first candidate
+ * WARM chain on this page.  We assume the rest follow sequentially.
+ * The return value is the first chainindex after the chains of this page.
+ *
+ * If check_all_visible is set then we also check if the page has now become
+ * all visible and update visibility map.
+ */
+static int
+lazy_warmclear_page(Relation onerel, BlockNumber blkno, Buffer buffer,
+				 int chainindex, LVRelStats *vacrelstats, Buffer *vmbuffer,
+				 bool check_all_visible)
+{
+	Page			page = BufferGetPage(buffer);
+	OffsetNumber	cleared_offnums[MaxHeapTuplesPerPage];
+	int				num_cleared = 0;
+	TransactionId	visibility_cutoff_xid;
+	bool			all_frozen;
+
+	pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_WARMCLEARED, blkno);
+
+	START_CRIT_SECTION();
+
+	for (; chainindex < vacrelstats->num_warm_chains ; chainindex++)
+	{
+		BlockNumber tblk;
+		LVWarmChain	*chain;
+
+		chain = &vacrelstats->warm_chains[chainindex];
+
+		tblk = ItemPointerGetBlockNumber(&chain->chain_tid);
+		if (tblk != blkno)
+			break;				/* past end of tuples for this block */
+
+		/*
+		 * Since a heap page can have no more than MaxHeapTuplesPerPage
+		 * offnums and we process each offnum only once, an array of
+		 * MaxHeapTuplesPerPage entries is enough to hold all tuples cleared
+		 */
+		if (!chain->keep_warm_chain)
+			num_cleared += heap_clear_warm_chain(page, &chain->chain_tid,
+					cleared_offnums + num_cleared);
+	}
+
+	/*
+	 * Mark buffer dirty before we write WAL.
+	 */
+	MarkBufferDirty(buffer);
+
+	/* XLOG stuff */
+	if (RelationNeedsWAL(onerel))
+	{
+		XLogRecPtr	recptr;
+
+		recptr = log_heap_warmclear(onerel, buffer,
+								cleared_offnums, num_cleared);
+		PageSetLSN(page, recptr);
+	}
+
+	END_CRIT_SECTION();
+
+	/* If not checking for all-visibility then we're done */
+	if (!check_all_visible)
+		return chainindex;
+
+	/*
+	 * The following code should match the corresponding code in
+	 * lazy_vacuum_page.
+	 */
+	if (heap_page_is_all_visible(onerel, buffer, &visibility_cutoff_xid,
+								 &all_frozen))
+		PageSetAllVisible(page);
+
+	/*
+	 * All the changes to the heap page have been done. If the all-visible
+	 * flag is now set, also set the VM all-visible bit (and, if possible, the
+	 * all-frozen bit) unless this has already been done previously.
+	 */
+	if (PageIsAllVisible(page))
+	{
+		uint8		vm_status = visibilitymap_get_status(onerel, blkno, vmbuffer);
+		uint8		flags = 0;
+
+		/* Set the VM all-visible and all-frozen bits in flags, as needed */
+		if ((vm_status & VISIBILITYMAP_ALL_VISIBLE) == 0)
+			flags |= VISIBILITYMAP_ALL_VISIBLE;
+		if ((vm_status & VISIBILITYMAP_ALL_FROZEN) == 0 && all_frozen)
+			flags |= VISIBILITYMAP_ALL_FROZEN;
+
+		Assert(BufferIsValid(*vmbuffer));
+		if (flags != 0)
+			visibilitymap_set(onerel, blkno, buffer, InvalidXLogRecPtr,
+							  *vmbuffer, visibility_cutoff_xid, flags);
+	}
+	return chainindex;
+}
+
+/*
  *	lazy_vacuum_page() -- free dead tuples on a page
  *					 and repair its fragmentation.
  *
@@ -1582,6 +1831,24 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
 	return false;
 }
 
+/*
+ * Reset counters tracking number of WARM and CLEAR pointers per candidate TID.
+ * These counters are maintained per index and cleared when the next index is
+ * picked up for cleanup.
+ *
+ * We don't touch the keep_warm_chain since once a chain is known to be
+ * non-convertible, we must remember that across all indexes.
+ */
+static void
+lazy_reset_warm_pointer_count(LVRelStats *vacrelstats)
+{
+	int i;
+	for (i = 0; i < vacrelstats->num_warm_chains; i++)
+	{
+		LVWarmChain *chain = &vacrelstats->warm_chains[i];
+		chain->num_clear_pointers = chain->num_warm_pointers = 0;
+	}
+}
 
 /*
  *	lazy_vacuum_index() -- vacuum one index relation.
@@ -1591,6 +1858,7 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
  */
 static void
 lazy_vacuum_index(Relation indrel,
+				  bool clear_warm,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats)
 {
@@ -1606,15 +1874,87 @@ lazy_vacuum_index(Relation indrel,
 	ivinfo.num_heap_tuples = vacrelstats->old_rel_tuples;
 	ivinfo.strategy = vac_strategy;
 
-	/* Do bulk deletion */
-	*stats = index_bulk_delete(&ivinfo, *stats,
-							   lazy_tid_reaped, (void *) vacrelstats);
+	/*
+	 * If told, convert WARM chains into HOT chains.
+	 *
+	 * We must have already collected candidate WARM chains i.e. chains that
+	 * have either all tuples with HEAP_WARM_TUPLE flag set or none.
+	 *
+	 * This works in two phases. In the first phase, we do a complete index
+	 * scan and collect information about index pointers to the candidate
+	 * chains, but we don't do conversion. To be precise, we count the number
+	 * of WARM and CLEAR index pointers to each candidate chain and use that
+	 * knowledge to decide on, and perform, the actual conversion during the
+	 * second phase (known dead pointers are killed in this phase, though).
+	 *
+	 * In the second phase, for each candidate chain we check if we have seen a
+	 * WARM index pointer. For such chains, we kill the CLEAR pointer and
+	 * convert the WARM pointer into a CLEAR pointer. The heap tuples are
+	 * cleared of WARM flags in the second heap scan. If we did not find any
+	 * WARM pointer to a WARM chain, then the chain is reachable only via the
+	 * CLEAR pointer (because, say, the WARM update did not add a new entry
+	 * for this index). In that case, we do nothing. There is a third case
+	 * where we find two CLEAR pointers to a candidate chain. This can happen
+	 * because of aborted vacuums. We don't handle that case yet, but it should
+	 * be possible to apply the same recheck logic and find which of the clear
+	 * pointers is redundant and should be removed.
+	 *
+	 * For CLEAR chains, we just kill the WARM pointer, if it exists, and keep
+	 * the CLEAR pointer.
+	 */
+	if (clear_warm)
+	{
+		/*
+		 * Before starting the index scan, reset the counters of WARM and CLEAR
+		 * pointers, probably carried forward from the previous index.
+		 */
+		lazy_reset_warm_pointer_count(vacrelstats);
+
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_indexvac_phase1, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to remove %d row versions, found "
+						"%.0f warm pointers, %.0f clear pointers, removed "
+						"%.0f warm pointers, removed %.0f clear pointers",
+						RelationGetRelationName(indrel),
+						vacrelstats->num_dead_tuples,
+						(*stats)->num_warm_pointers,
+						(*stats)->num_clear_pointers,
+						(*stats)->warm_pointers_removed,
+						(*stats)->clear_pointers_removed)));
+
+		(*stats)->num_warm_pointers = 0;
+		(*stats)->num_clear_pointers = 0;
+		(*stats)->warm_pointers_removed = 0;
+		(*stats)->clear_pointers_removed = 0;
+		(*stats)->pointers_cleared = 0;
+
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_indexvac_phase2, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to convert WARM pointers, found "
+						"%.0f WARM pointers, %.0f CLEAR pointers, removed "
+						"%.0f WARM pointers, removed %.0f CLEAR pointers, "
+						"cleared %.0f WARM pointers",
+						RelationGetRelationName(indrel),
+						(*stats)->num_warm_pointers,
+						(*stats)->num_clear_pointers,
+						(*stats)->warm_pointers_removed,
+						(*stats)->clear_pointers_removed,
+						(*stats)->pointers_cleared)));
+	}
+	else
+	{
+		/* Do bulk deletion */
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_tid_reaped, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to remove %d row versions",
+						RelationGetRelationName(indrel),
+						vacrelstats->num_dead_tuples),
+				 errdetail("%s.", pg_rusage_show(&ru0))));
+	}
 
-	ereport(elevel,
-			(errmsg("scanned index \"%s\" to remove %d row versions",
-					RelationGetRelationName(indrel),
-					vacrelstats->num_dead_tuples),
-			 errdetail("%s.", pg_rusage_show(&ru0))));
 }
 
 /*
@@ -1988,9 +2328,11 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 
 	if (vacrelstats->hasindex)
 	{
-		maxtuples = (vac_work_mem * 1024L) / sizeof(ItemPointerData);
+		maxtuples = (vac_work_mem * 1024L) / (sizeof(ItemPointerData) +
+				sizeof(LVWarmChain));
 		maxtuples = Min(maxtuples, INT_MAX);
-		maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));
+		maxtuples = Min(maxtuples, MaxAllocSize / (sizeof(ItemPointerData) +
+					sizeof(LVWarmChain)));
 
 		/* curious coding here to ensure the multiplication can't overflow */
 		if ((BlockNumber) (maxtuples / LAZY_ALLOC_TUPLES) > relblocks)
@@ -2008,6 +2350,57 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 	vacrelstats->max_dead_tuples = (int) maxtuples;
 	vacrelstats->dead_tuples = (ItemPointer)
 		palloc(maxtuples * sizeof(ItemPointerData));
+
+	/*
+	 * XXX Cheat for now and allocate the same size array for tracking warm
+	 * chains. maxtuples has already been adjusted above to ensure we don't
+	 * exceed vac_work_mem.
+	 */
+	vacrelstats->num_warm_chains = 0;
+	vacrelstats->max_warm_chains = (int) maxtuples;
+	vacrelstats->warm_chains = (LVWarmChain *)
+		palloc0(maxtuples * sizeof(LVWarmChain));
+
+}
+
+/*
+ * lazy_record_clear_chain - remember one CLEAR chain
+ */
+static void
+lazy_record_clear_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr)
+{
+	/*
+	 * The array shouldn't overflow under normal behavior, but perhaps it
+	 * could if we are given a really small maintenance_work_mem. In that
+	 * case, just forget the last few tuples (we'll get 'em next time).
+	 */
+	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	{
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 0;
+		vacrelstats->num_warm_chains++;
+	}
+}
+
+/*
+ * lazy_record_warm_chain - remember one WARM chain
+ */
+static void
+lazy_record_warm_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr)
+{
+	/*
+	 * The array shouldn't overflow under normal behavior, but perhaps it
+	 * could if we are given a really small maintenance_work_mem. In that
+	 * case, just forget the last few tuples (we'll get 'em next time).
+	 */
+	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	{
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 1;
+		vacrelstats->num_warm_chains++;
+	}
 }
 
 /*
@@ -2038,8 +2431,8 @@ lazy_record_dead_tuple(LVRelStats *vacrelstats,
  *
  *		Assumes dead_tuples array is in sorted order.
  */
-static bool
-lazy_tid_reaped(ItemPointer itemptr, void *state)
+static IndexBulkDeleteCallbackResult
+lazy_tid_reaped(ItemPointer itemptr, bool is_warm, void *state)
 {
 	LVRelStats *vacrelstats = (LVRelStats *) state;
 	ItemPointer res;
@@ -2050,7 +2443,193 @@ lazy_tid_reaped(ItemPointer itemptr, void *state)
 								sizeof(ItemPointerData),
 								vac_cmp_itemptr);
 
-	return (res != NULL);
+	return (res != NULL) ? IBDCR_DELETE : IBDCR_KEEP;
+}
+
+/*
+ *	lazy_indexvac_phase1() -- run first pass of index vacuum
+ *
+ *		This has the right signature to be an IndexBulkDeleteCallback.
+ */
+static IndexBulkDeleteCallbackResult
+lazy_indexvac_phase1(ItemPointer itemptr, bool is_warm, void *state)
+{
+	LVRelStats		*vacrelstats = (LVRelStats *) state;
+	ItemPointer		res;
+	LVWarmChain	*chain;
+
+	res = (ItemPointer) bsearch((void *) itemptr,
+								(void *) vacrelstats->dead_tuples,
+								vacrelstats->num_dead_tuples,
+								sizeof(ItemPointerData),
+								vac_cmp_itemptr);
+
+	if (res != NULL)
+		return IBDCR_DELETE;
+
+	chain = (LVWarmChain *) bsearch((void *) itemptr,
+								(void *) vacrelstats->warm_chains,
+								vacrelstats->num_warm_chains,
+								sizeof(LVWarmChain),
+								vac_cmp_warm_chain);
+	if (chain != NULL)
+	{
+		if (is_warm)
+			chain->num_warm_pointers++;
+		else
+			chain->num_clear_pointers++;
+	}
+	return IBDCR_KEEP;
+}
+
+/*
+ *	lazy_indexvac_phase2() -- run second pass of index vacuum
+ *
+ *		This has the right signature to be an IndexBulkDeleteCallback.
+ */
+static IndexBulkDeleteCallbackResult
+lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state)
+{
+	LVRelStats		*vacrelstats = (LVRelStats *) state;
+	LVWarmChain	*chain;
+
+	chain = (LVWarmChain *) bsearch((void *) itemptr,
+								(void *) vacrelstats->warm_chains,
+								vacrelstats->num_warm_chains,
+								sizeof(LVWarmChain),
+								vac_cmp_warm_chain);
+
+	if (chain != NULL && (chain->keep_warm_chain != 1))
+	{
+		/*
+		 * At no point can there be more than 1 WARM pointer to any chain, nor
+		 * more than 2 CLEAR pointers.
+		 */
+		Assert(chain->num_warm_pointers <= 1);
+		Assert(chain->num_clear_pointers <= 2);
+
+		if (chain->is_postwarm_chain == 1)
+		{
+			if (is_warm)
+			{
+				/*
+				 * A WARM pointer, pointing to a WARM chain.
+				 *
+				 * Clear the warm pointer (and delete the CLEAR pointer). We
+				 * may have already seen the CLEAR pointer in the scan and
+				 * deleted that or we may see it later in the scan. It doesn't
+				 * matter if we fail at any point because we won't clear up
+				 * WARM bits on the heap tuples until we have dealt with the
+				 * index pointers cleanly.
+				 */
+				return IBDCR_CLEAR_WARM;
+			}
+			else
+			{
+				/*
+				 * CLEAR pointer to a WARM chain.
+				 */
+				if (chain->num_warm_pointers > 0)
+				{
+					/*
+					 * If there exists a WARM pointer to the chain, we can
+					 * delete the CLEAR pointer and clear the WARM bits on the
+					 * heap tuples.
+					 */
+					return IBDCR_DELETE;
+				}
+				else if (chain->num_clear_pointers == 1)
+				{
+					/*
+					 * If this is the only pointer to a WARM chain, we must
+					 * keep the CLEAR pointer.
+					 *
+					 * The presence of a WARM chain indicates that the WARM
+					 * update must have committed. But this index was probably
+					 * not updated during that update and hence it contains
+					 * just the one original CLEAR pointer to the chain.
+					 * We should be able to clear the WARM bits on heap tuples
+					 * unless we later find another index which prevents the
+					 * cleanup.
+					 */
+					return IBDCR_KEEP;
+				}
+			}
+		}
+		else
+		{
+			/*
+			 * This is a CLEAR chain.
+			 */
+			if (is_warm)
+			{
+				/*
+				 * A WARM pointer to a CLEAR chain.
+				 *
+				 * This can happen when a WARM update is aborted. Later the HOT
+				 * chain is pruned leaving behind only CLEAR tuples in the
+				 * chain. But the WARM index pointer inserted in the index
+				 * remains and it must now be deleted before we clear WARM bits
+				 * from the heap tuple.
+				 */
+				return IBDCR_DELETE;
+			}
+
+			/*
+			 * CLEAR pointer to a CLEAR chain.
+			 *
+			 * If this is the only surviving CLEAR pointer, keep it and clear
+			 * the WARM bits from the heap tuples.
+			 */
+			if (chain->num_clear_pointers == 1)
+				return IBDCR_KEEP;
+
+			/*
+			 * If there are more than 1 CLEAR pointers to this chain, we can
+			 * apply the recheck logic and kill the redundant CLEAR pointer and
+			 * convert the chain. But that's not yet done.
+			 */
+		}
+
+		/*
+		 * For everything else, we must keep the WARM bits and also keep the
+		 * index pointers.
+		 */
+		chain->keep_warm_chain = 1;
+		return IBDCR_KEEP;
+	}
+	return IBDCR_KEEP;
+}
+
+/*
+ * Comparator routine for use with bsearch(). Similar to vac_cmp_itemptr,
+ * but the right-hand argument is an LVWarmChain struct pointer.
+ */
+static int
+vac_cmp_warm_chain(const void *left, const void *right)
+{
+	BlockNumber lblk,
+				rblk;
+	OffsetNumber loff,
+				roff;
+
+	lblk = ItemPointerGetBlockNumber((ItemPointer) left);
+	rblk = ItemPointerGetBlockNumber(&((LVWarmChain *) right)->chain_tid);
+
+	if (lblk < rblk)
+		return -1;
+	if (lblk > rblk)
+		return 1;
+
+	loff = ItemPointerGetOffsetNumber((ItemPointer) left);
+	roff = ItemPointerGetOffsetNumber(&((LVWarmChain *) right)->chain_tid);
+
+	if (loff < roff)
+		return -1;
+	if (loff > roff)
+		return 1;
+
+	return 0;
 }
 
 /*
@@ -2166,6 +2745,18 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
 						break;
 					}
 
+					/*
+					 * If this or any other tuple in the chain was ever WARM
+					 * updated, there could be multiple index entries pointing
+					 * to the root of this chain. We can't allow index-only
+					 * scans for such tuples without rechecking the index
+					 * keys, so mark the page as !all_visible.
+					 */
+					if (HeapTupleHeaderIsWarmUpdated(tuple.t_data))
+					{
+						all_visible = false;
+					}
+
 					/* Track newest xmin on page. */
 					if (TransactionIdFollows(xmin, *visibility_cutoff_xid))
 						*visibility_cutoff_xid = xmin;
diff --git b/src/backend/executor/execIndexing.c a/src/backend/executor/execIndexing.c
index 2142273..0beca6d 100644
--- b/src/backend/executor/execIndexing.c
+++ a/src/backend/executor/execIndexing.c
@@ -270,6 +270,8 @@ ExecCloseIndices(ResultRelInfo *resultRelInfo)
 List *
 ExecInsertIndexTuples(TupleTableSlot *slot,
 					  ItemPointer tupleid,
+					  ItemPointer root_tid,
+					  Bitmapset *modified_attrs,
 					  EState *estate,
 					  bool noDupErr,
 					  bool *specConflict,
@@ -324,6 +326,17 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 		if (!indexInfo->ii_ReadyForInserts)
 			continue;
 
+		/*
+		 * If modified_attrs is set, we only insert index entries for those
+		 * indexes whose columns have changed. All other indexes can use their
+		 * existing index pointers to look up the new tuple.
+		 */
+		if (modified_attrs)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
 		/* Check for partial index */
 		if (indexInfo->ii_Predicate != NIL)
 		{
@@ -389,10 +402,11 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 			index_insert(indexRelation, /* index relation */
 						 values,	/* array of index Datums */
 						 isnull,	/* null flags */
-						 tupleid,		/* tid of heap tuple */
+						 root_tid,		/* tid of heap or root tuple */
 						 heapRelation,	/* heap relation */
 						 checkUnique,	/* type of uniqueness check to do */
-						 indexInfo);	/* index AM may need this */
+						 indexInfo,	/* index AM may need this */
+						 (modified_attrs != NULL));	/* is it a WARM update? */
 
 		/*
 		 * If the index has an associated exclusion constraint, check that.
@@ -791,6 +805,9 @@ retry:
 		{
 			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
 				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
+			else
+				ItemPointerCopy(&tup->t_self, &ctid_wait);
+
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git b/src/backend/executor/execReplication.c a/src/backend/executor/execReplication.c
index f20d728..747e4ce 100644
--- b/src/backend/executor/execReplication.c
+++ a/src/backend/executor/execReplication.c
@@ -399,6 +399,8 @@ ExecSimpleRelationInsert(EState *estate, TupleTableSlot *slot)
 
 		if (resultRelInfo->ri_NumIndices > 0)
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &(tuple->t_self),
+												   NULL,
 												   estate, false, NULL,
 												   NIL);
 
@@ -445,6 +447,8 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 	if (!skip_tuple)
 	{
 		List	   *recheckIndexes = NIL;
+		bool		warm_update;
+		Bitmapset  *modified_attrs;
 
 		/* Check the constraints of the tuple */
 		if (rel->rd_att->constr)
@@ -455,13 +459,35 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 
 		/* OK, update the tuple and index entries for it */
 		simple_heap_update(rel, &searchslot->tts_tuple->t_self,
-						   slot->tts_tuple);
+						   slot->tts_tuple, &modified_attrs, &warm_update);
 
 		if (resultRelInfo->ri_NumIndices > 0 &&
-			!HeapTupleIsHeapOnly(slot->tts_tuple))
+			(!HeapTupleIsHeapOnly(slot->tts_tuple) || warm_update))
+		{
+			ItemPointerData root_tid;
+
+			/*
+			 * If we did a WARM update, we must index the tuple using its
+			 * root line pointer rather than the tuple's own TID.
+			 */
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self,
+						&root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
+
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL,
 												   NIL);
+		}
 
 		/* AFTER ROW UPDATE Triggers */
 		ExecARUpdateTriggers(estate, resultRelInfo,
diff --git b/src/backend/executor/nodeBitmapHeapscan.c a/src/backend/executor/nodeBitmapHeapscan.c
index 2e9ff7d..f7bb6ca 100644
--- b/src/backend/executor/nodeBitmapHeapscan.c
+++ a/src/backend/executor/nodeBitmapHeapscan.c
@@ -39,6 +39,7 @@
 
 #include "access/relscan.h"
 #include "access/transam.h"
+#include "access/valid.h"
 #include "executor/execdebug.h"
 #include "executor/nodeBitmapHeapscan.h"
 #include "pgstat.h"
@@ -395,11 +396,27 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 			OffsetNumber offnum = tbmres->offsets[curslot];
 			ItemPointerData tid;
 			HeapTupleData heapTuple;
+			bool recheck = false;
 
 			ItemPointerSet(&tid, page, offnum);
 			if (heap_hot_search_buffer(&tid, scan->rs_rd, buffer, snapshot,
-									   &heapTuple, NULL, true))
-				scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+									   &heapTuple, NULL, true, &recheck))
+			{
+				bool valid = true;
+
+				if (scan->rs_key)
+					HeapKeyTest(&heapTuple, RelationGetDescr(scan->rs_rd),
+							scan->rs_nkeys, scan->rs_key, valid);
+				if (valid)
+					scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+
+				/*
+				 * If the heap tuple needs a recheck because of a WARM update,
+				 * treat it as a lossy case and force a recheck.
+				 */
+				if (recheck)
+					tbmres->recheck = true;
+			}
 		}
 	}
 	else
diff --git b/src/backend/executor/nodeIndexscan.c a/src/backend/executor/nodeIndexscan.c
index cb6aff9..dff4086 100644
--- b/src/backend/executor/nodeIndexscan.c
+++ a/src/backend/executor/nodeIndexscan.c
@@ -142,8 +142,8 @@ IndexNext(IndexScanState *node)
 					   false);	/* don't pfree */
 
 		/*
-		 * If the index was lossy, we have to recheck the index quals using
-		 * the fetched tuple.
+		 * If the index was lossy or the tuple was WARM, we have to recheck
+		 * the index quals using the fetched tuple.
 		 */
 		if (scandesc->xs_recheck)
 		{
diff --git b/src/backend/executor/nodeModifyTable.c a/src/backend/executor/nodeModifyTable.c
index 95e1589..a1f3440 100644
--- b/src/backend/executor/nodeModifyTable.c
+++ a/src/backend/executor/nodeModifyTable.c
@@ -512,6 +512,7 @@ ExecInsert(ModifyTableState *mtstate,
 
 			/* insert index entries for tuple */
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												 &(tuple->t_self), NULL,
 												 estate, true, &specConflict,
 												   arbiterIndexes);
 
@@ -558,6 +559,7 @@ ExecInsert(ModifyTableState *mtstate,
 			/* insert index entries for tuple */
 			if (resultRelInfo->ri_NumIndices > 0)
 				recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+													   &(tuple->t_self), NULL,
 													   estate, false, NULL,
 													   arbiterIndexes);
 		}
@@ -891,6 +893,9 @@ ExecUpdate(ItemPointer tupleid,
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
 	List	   *recheckIndexes = NIL;
+	Bitmapset  *modified_attrs = NULL;
+	ItemPointerData	root_tid;
+	bool		warm_update;
 
 	/*
 	 * abort the operation if not running transactions
@@ -1007,7 +1012,7 @@ lreplace:;
 							 estate->es_output_cid,
 							 estate->es_crosscheck_snapshot,
 							 true /* wait for commit */ ,
-							 &hufd, &lockmode);
+							 &hufd, &lockmode, &modified_attrs, &warm_update);
 		switch (result)
 		{
 			case HeapTupleSelfUpdated:
@@ -1094,10 +1099,28 @@ lreplace:;
 		 * the t_self field.
 		 *
 		 * If it's a HOT update, we mustn't insert new index entries.
+		 *
+		 * If it's a WARM update, then we must insert new entries with TID
+		 * pointing to the root of the WARM chain.
 		 */
-		if (resultRelInfo->ri_NumIndices > 0 && !HeapTupleIsHeapOnly(tuple))
+		if (resultRelInfo->ri_NumIndices > 0 &&
+			(!HeapTupleIsHeapOnly(tuple) || warm_update))
+		{
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self, &root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL, NIL);
+		}
 	}
 
 	if (canSetTag)
diff --git b/src/backend/postmaster/pgstat.c a/src/backend/postmaster/pgstat.c
index 3a50488..806d812 100644
--- b/src/backend/postmaster/pgstat.c
+++ a/src/backend/postmaster/pgstat.c
@@ -1824,7 +1824,7 @@ pgstat_count_heap_insert(Relation rel, PgStat_Counter n)
  * pgstat_count_heap_update - count a tuple update
  */
 void
-pgstat_count_heap_update(Relation rel, bool hot)
+pgstat_count_heap_update(Relation rel, bool hot, bool warm)
 {
 	PgStat_TableStatus *pgstat_info = rel->pgstat_info;
 
@@ -1842,6 +1842,8 @@ pgstat_count_heap_update(Relation rel, bool hot)
 		/* t_tuples_hot_updated is nontransactional, so just advance it */
 		if (hot)
 			pgstat_info->t_counts.t_tuples_hot_updated++;
+		else if (warm)
+			pgstat_info->t_counts.t_tuples_warm_updated++;
 	}
 }
 
@@ -4324,6 +4326,7 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->tuples_updated = 0;
 		result->tuples_deleted = 0;
 		result->tuples_hot_updated = 0;
+		result->tuples_warm_updated = 0;
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
 		result->changes_since_analyze = 0;
@@ -5433,6 +5436,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated = tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted = tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated = tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated = tabmsg->t_counts.t_tuples_warm_updated;
 			tabentry->n_live_tuples = tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples = tabmsg->t_counts.t_delta_dead_tuples;
 			tabentry->changes_since_analyze = tabmsg->t_counts.t_changed_tuples;
@@ -5460,6 +5464,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated += tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted += tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated += tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated += tabmsg->t_counts.t_tuples_warm_updated;
 			/* If table was truncated, first reset the live/dead counters */
 			if (tabmsg->t_counts.t_truncated)
 			{
diff --git b/src/backend/replication/logical/decode.c a/src/backend/replication/logical/decode.c
index 5c13d26..7a9b48a 100644
--- b/src/backend/replication/logical/decode.c
+++ a/src/backend/replication/logical/decode.c
@@ -347,7 +347,7 @@ DecodeStandbyOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 static void
 DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 {
-	uint8		info = XLogRecGetInfo(buf->record) & XLOG_HEAP_OPMASK;
+	uint8		info = XLogRecGetInfo(buf->record) & XLOG_HEAP2_OPMASK;
 	TransactionId xid = XLogRecGetXid(buf->record);
 	SnapBuild  *builder = ctx->snapshot_builder;
 
@@ -359,10 +359,6 @@ DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 
 	switch (info)
 	{
-		case XLOG_HEAP2_MULTI_INSERT:
-			if (SnapBuildProcessChange(builder, xid, buf->origptr))
-				DecodeMultiInsert(ctx, buf);
-			break;
 		case XLOG_HEAP2_NEW_CID:
 			{
 				xl_heap_new_cid *xlrec;
@@ -390,6 +386,7 @@ DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 		case XLOG_HEAP2_CLEANUP_INFO:
 		case XLOG_HEAP2_VISIBLE:
 		case XLOG_HEAP2_LOCK_UPDATED:
+		case XLOG_HEAP2_WARMCLEAR:
 			break;
 		default:
 			elog(ERROR, "unexpected RM_HEAP2_ID record type: %u", info);
@@ -418,6 +415,10 @@ DecodeHeapOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 			if (SnapBuildProcessChange(builder, xid, buf->origptr))
 				DecodeInsert(ctx, buf);
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			if (SnapBuildProcessChange(builder, xid, buf->origptr))
+				DecodeMultiInsert(ctx, buf);
+			break;
 
 			/*
 			 * Treat HOT update as normal updates. There is no useful
@@ -809,7 +810,7 @@ DecodeDelete(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 }
 
 /*
- * Decode XLOG_HEAP2_MULTI_INSERT_insert record into multiple tuplebufs.
+ * Decode an XLOG_HEAP_MULTI_INSERT record into multiple tuplebufs.
  *
  * Currently MULTI_INSERT will always contain the full tuples.
  */
diff --git b/src/backend/storage/page/bufpage.c a/src/backend/storage/page/bufpage.c
index fdf045a..8d23e92 100644
--- b/src/backend/storage/page/bufpage.c
+++ a/src/backend/storage/page/bufpage.c
@@ -1151,6 +1151,29 @@ PageIndexTupleOverwrite(Page page, OffsetNumber offnum,
 	return true;
 }
 
+/*
+ * PageIndexClearWarmTuples
+ *
+ * Clear the given WARM pointers by resetting the flags stored in their TID
+ * fields. We assume the TID flags store nothing other than the WARM
+ * information, so clearing all flag bits is safe. If that changes, this
+ * routine must be changed as well.
+ */
+void
+PageIndexClearWarmTuples(Page page, OffsetNumber *clearitemnos,
+						 uint16 nclearitems)
+{
+	int			i;
+	ItemId		itemid;
+	IndexTuple	itup;
+
+	for (i = 0; i < nclearitems; i++)
+	{
+		itemid = PageGetItemId(page, clearitemnos[i]);
+		itup = (IndexTuple) PageGetItem(page, itemid);
+		ItemPointerClearFlags(&itup->t_tid);
+	}
+}
 
 /*
  * Set checksum for a page in shared buffers.
diff --git b/src/backend/utils/adt/pgstatfuncs.c a/src/backend/utils/adt/pgstatfuncs.c
index a987d0d..b8677f3 100644
--- b/src/backend/utils/adt/pgstatfuncs.c
+++ a/src/backend/utils/adt/pgstatfuncs.c
@@ -145,6 +145,22 @@ pg_stat_get_tuples_hot_updated(PG_FUNCTION_ARGS)
 
 
 Datum
+pg_stat_get_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_StatTabEntry *tabentry;
+
+	if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+
+Datum
 pg_stat_get_live_tuples(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
@@ -1644,6 +1660,21 @@ pg_stat_get_xact_tuples_hot_updated(PG_FUNCTION_ARGS)
 }
 
 Datum
+pg_stat_get_xact_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_TableStatus *tabentry;
+
+	if ((tabentry = find_tabstat_entry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->t_counts.t_tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+Datum
 pg_stat_get_xact_blocks_fetched(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
diff --git b/src/backend/utils/cache/relcache.c a/src/backend/utils/cache/relcache.c
index ce55fc5..64dbaaa 100644
--- b/src/backend/utils/cache/relcache.c
+++ a/src/backend/utils/cache/relcache.c
@@ -2338,6 +2338,7 @@ RelationDestroyRelation(Relation relation, bool remember_tupdesc)
 	list_free_deep(relation->rd_fkeylist);
 	list_free(relation->rd_indexlist);
 	bms_free(relation->rd_indexattr);
+	bms_free(relation->rd_exprindexattr);
 	bms_free(relation->rd_keyattr);
 	bms_free(relation->rd_pkattr);
 	bms_free(relation->rd_idattr);
@@ -4352,6 +4353,13 @@ RelationGetIndexList(Relation relation)
 		return list_copy(relation->rd_indexlist);
 
 	/*
+	 * If the index list was invalidated, we had better also invalidate the
+	 * index attribute list (which should automatically invalidate related
+	 * bitmaps such as the primary key and replica identity).
+	 */
+	relation->rd_indexattr = NULL;
+
+	/*
 	 * We build the list we intend to return (in the caller's context) while
 	 * doing the scan.  After successfully completing the scan, we copy that
 	 * list into the relcache entry.  This avoids cache-context memory leakage
@@ -4759,15 +4767,19 @@ Bitmapset *
 RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 {
 	Bitmapset  *indexattrs;		/* indexed columns */
+	Bitmapset  *exprindexattrs;	/* indexed columns in expression/predicate
+									 indexes */
 	Bitmapset  *uindexattrs;	/* columns in unique indexes */
 	Bitmapset  *pkindexattrs;	/* columns in the primary index */
 	Bitmapset  *idindexattrs;	/* columns in the replica identity */
+	Bitmapset  *indxnotreadyattrs;	/* columns in not ready indexes */
 	List	   *indexoidlist;
 	List	   *newindexoidlist;
 	Oid			relpkindex;
 	Oid			relreplindex;
 	ListCell   *l;
 	MemoryContext oldcxt;
+	bool		supportswarm = true;	/* true if the table can be WARM updated */
 
 	/* Quick exit if we already computed the result. */
 	if (relation->rd_indexattr != NULL)
@@ -4782,6 +4794,10 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				return bms_copy(relation->rd_pkattr);
 			case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 				return bms_copy(relation->rd_idattr);
+			case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+				return bms_copy(relation->rd_exprindexattr);
+			case INDEX_ATTR_BITMAP_NOTREADY:
+				return bms_copy(relation->rd_indxnotreadyattr);
 			default:
 				elog(ERROR, "unknown attrKind %u", attrKind);
 		}
@@ -4822,9 +4838,11 @@ restart:
 	 * won't be returned at all by RelationGetIndexList.
 	 */
 	indexattrs = NULL;
+	exprindexattrs = NULL;
 	uindexattrs = NULL;
 	pkindexattrs = NULL;
 	idindexattrs = NULL;
+	indxnotreadyattrs = NULL;
 	foreach(l, indexoidlist)
 	{
 		Oid			indexOid = lfirst_oid(l);
@@ -4861,6 +4879,10 @@ restart:
 				indexattrs = bms_add_member(indexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
 
+				if (!indexInfo->ii_ReadyForInserts)
+					indxnotreadyattrs = bms_add_member(indxnotreadyattrs,
+							   attrnum - FirstLowInvalidHeapAttributeNumber);
+
 				if (isKey)
 					uindexattrs = bms_add_member(uindexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
@@ -4876,10 +4898,29 @@ restart:
 		}
 
 		/* Collect all attributes used in expressions, too */
-		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &exprindexattrs);
 
 		/* Collect all attributes in the index predicate, too */
-		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &exprindexattrs);
+
+		/*
+		 * indexattrs should include attributes referenced in index expressions
+		 * and predicates too.
+		 */
+		indexattrs = bms_add_members(indexattrs, exprindexattrs);
+
+		if (!indexInfo->ii_ReadyForInserts)
+			indxnotreadyattrs = bms_add_members(indxnotreadyattrs,
+					exprindexattrs);
+
+		/*
+		 * Check if the index has an amrecheck method defined. If it does not,
+		 * the index does not support WARM updates, so completely disable
+		 * WARM updates on such tables.
+		 */
+		if (!indexDesc->rd_amroutine->amrecheck)
+			supportswarm = false;
+
 
 		index_close(indexDesc, AccessShareLock);
 	}
@@ -4912,15 +4953,22 @@ restart:
 		goto restart;
 	}
 
+	/* Remember if the table can do WARM updates */
+	relation->rd_supportswarm = supportswarm;
+
 	/* Don't leak the old values of these bitmaps, if any */
 	bms_free(relation->rd_indexattr);
 	relation->rd_indexattr = NULL;
+	bms_free(relation->rd_exprindexattr);
+	relation->rd_exprindexattr = NULL;
 	bms_free(relation->rd_keyattr);
 	relation->rd_keyattr = NULL;
 	bms_free(relation->rd_pkattr);
 	relation->rd_pkattr = NULL;
 	bms_free(relation->rd_idattr);
 	relation->rd_idattr = NULL;
+	bms_free(relation->rd_indxnotreadyattr);
+	relation->rd_indxnotreadyattr = NULL;
 
 	/*
 	 * Now save copies of the bitmaps in the relcache entry.  We intentionally
@@ -4933,7 +4981,9 @@ restart:
 	relation->rd_keyattr = bms_copy(uindexattrs);
 	relation->rd_pkattr = bms_copy(pkindexattrs);
 	relation->rd_idattr = bms_copy(idindexattrs);
-	relation->rd_indexattr = bms_copy(indexattrs);
+	relation->rd_exprindexattr = bms_copy(exprindexattrs);
+	relation->rd_indexattr = bms_union(indexattrs, exprindexattrs);
+	relation->rd_indxnotreadyattr = bms_copy(indxnotreadyattrs);
 	MemoryContextSwitchTo(oldcxt);
 
 	/* We return our original working copy for caller to play with */
@@ -4947,6 +4997,10 @@ restart:
 			return bms_copy(relation->rd_pkattr);
 		case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 			return idindexattrs;
+		case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+			return exprindexattrs;
+		case INDEX_ATTR_BITMAP_NOTREADY:
+			return indxnotreadyattrs;
 		default:
 			elog(ERROR, "unknown attrKind %u", attrKind);
 			return NULL;
@@ -5559,6 +5613,7 @@ load_relcache_init_file(bool shared)
 		rel->rd_keyattr = NULL;
 		rel->rd_pkattr = NULL;
 		rel->rd_idattr = NULL;
+		rel->rd_indxnotreadyattr = NULL;
 		rel->rd_pubactions = NULL;
 		rel->rd_createSubid = InvalidSubTransactionId;
 		rel->rd_newRelfilenodeSubid = InvalidSubTransactionId;
diff --git b/src/backend/utils/time/combocid.c a/src/backend/utils/time/combocid.c
index baff998..6a2e2f2 100644
--- b/src/backend/utils/time/combocid.c
+++ a/src/backend/utils/time/combocid.c
@@ -106,7 +106,7 @@ HeapTupleHeaderGetCmin(HeapTupleHeader tup)
 {
 	CommandId	cid = HeapTupleHeaderGetRawCommandId(tup);
 
-	Assert(!(tup->t_infomask & HEAP_MOVED));
+	Assert(!(HeapTupleHeaderIsMoved(tup)));
 	Assert(TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXmin(tup)));
 
 	if (tup->t_infomask & HEAP_COMBOCID)
@@ -120,7 +120,7 @@ HeapTupleHeaderGetCmax(HeapTupleHeader tup)
 {
 	CommandId	cid = HeapTupleHeaderGetRawCommandId(tup);
 
-	Assert(!(tup->t_infomask & HEAP_MOVED));
+	Assert(!(HeapTupleHeaderIsMoved(tup)));
 
 	/*
 	 * Because GetUpdateXid() performs memory allocations if xmax is a
diff --git b/src/backend/utils/time/tqual.c a/src/backend/utils/time/tqual.c
index 519f3b6..e54d0df 100644
--- b/src/backend/utils/time/tqual.c
+++ a/src/backend/utils/time/tqual.c
@@ -186,7 +186,7 @@ HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer)
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -205,7 +205,7 @@ HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer)
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -377,7 +377,7 @@ HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -396,7 +396,7 @@ HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -471,7 +471,7 @@ HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid,
 			return HeapTupleInvisible;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -490,7 +490,7 @@ HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -753,7 +753,7 @@ HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -772,7 +772,7 @@ HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -974,7 +974,7 @@ HeapTupleSatisfiesMVCC(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -993,7 +993,7 @@ HeapTupleSatisfiesMVCC(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -1180,7 +1180,7 @@ HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin,
 		if (HeapTupleHeaderXminInvalid(tuple))
 			return HEAPTUPLE_DEAD;
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_OFF)
+		else if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -1198,7 +1198,7 @@ HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin,
 						InvalidTransactionId);
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
diff --git b/src/include/access/amapi.h a/src/include/access/amapi.h
index f919cf8..8b7af1e 100644
--- b/src/include/access/amapi.h
+++ a/src/include/access/amapi.h
@@ -13,6 +13,7 @@
 #define AMAPI_H
 
 #include "access/genam.h"
+#include "access/itup.h"
 
 /*
  * We don't wish to include planner header files here, since most of an index
@@ -74,6 +75,14 @@ typedef bool (*aminsert_function) (Relation indexRelation,
 											   Relation heapRelation,
 											   IndexUniqueCheck checkUnique,
 											   struct IndexInfo *indexInfo);
+/* insert an index entry for a WARM update, marking it as a WARM pointer */
+typedef bool (*amwarminsert_function) (Relation indexRelation,
+											   Datum *values,
+											   bool *isnull,
+											   ItemPointer heap_tid,
+											   Relation heapRelation,
+											   IndexUniqueCheck checkUnique,
+											   struct IndexInfo *indexInfo);
 
 /* bulk delete */
 typedef IndexBulkDeleteResult *(*ambulkdelete_function) (IndexVacuumInfo *info,
@@ -152,6 +161,11 @@ typedef void (*aminitparallelscan_function) (void *target);
 /* (re)start parallel index scan */
 typedef void (*amparallelrescan_function) (IndexScanDesc scan);
 
+/* recheck that an index tuple still matches the given heap tuple */
+typedef bool (*amrecheck_function) (Relation indexRel,
+		struct IndexInfo *indexInfo, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+
 /*
  * API struct for an index AM.  Note this must be stored in a single palloc'd
  * chunk of memory.
@@ -198,6 +212,7 @@ typedef struct IndexAmRoutine
 	ambuild_function ambuild;
 	ambuildempty_function ambuildempty;
 	aminsert_function aminsert;
+	amwarminsert_function amwarminsert;
 	ambulkdelete_function ambulkdelete;
 	amvacuumcleanup_function amvacuumcleanup;
 	amcanreturn_function amcanreturn;	/* can be NULL */
@@ -217,6 +232,9 @@ typedef struct IndexAmRoutine
 	amestimateparallelscan_function amestimateparallelscan;		/* can be NULL */
 	aminitparallelscan_function aminitparallelscan;		/* can be NULL */
 	amparallelrescan_function amparallelrescan; /* can be NULL */
+
+	/* interface function to support WARM */
+	amrecheck_function amrecheck;		/* can be NULL */
 } IndexAmRoutine;
 
 
diff --git b/src/include/access/genam.h a/src/include/access/genam.h
index f467b18..965be45 100644
--- b/src/include/access/genam.h
+++ a/src/include/access/genam.h
@@ -75,12 +75,29 @@ typedef struct IndexBulkDeleteResult
 	bool		estimated_count;	/* num_index_tuples is an estimate */
 	double		num_index_tuples;		/* tuples remaining */
 	double		tuples_removed; /* # removed during vacuum operation */
+	double		num_warm_pointers;	/* # WARM pointers found */
+	double		num_clear_pointers;	/* # CLEAR pointers found */
+	double		pointers_cleared;	/* # WARM pointers cleared */
+	double		warm_pointers_removed;	/* # WARM pointers removed */
+	double		clear_pointers_removed;	/* # CLEAR pointers removed */
 	BlockNumber pages_deleted;	/* # unused pages in index */
 	BlockNumber pages_free;		/* # pages available for reuse */
 } IndexBulkDeleteResult;
 
+/*
+ * IndexBulkDeleteCallback should return one of the following
+ */
+typedef enum IndexBulkDeleteCallbackResult
+{
+	IBDCR_KEEP,			/* index tuple should be preserved */
+	IBDCR_DELETE,		/* index tuple should be deleted */
+	IBDCR_CLEAR_WARM	/* index tuple should be cleared of WARM bit */
+} IndexBulkDeleteCallbackResult;
+
 /* Typedef for callback function to determine if a tuple is bulk-deletable */
-typedef bool (*IndexBulkDeleteCallback) (ItemPointer itemptr, void *state);
+typedef IndexBulkDeleteCallbackResult (*IndexBulkDeleteCallback) (
+										 ItemPointer itemptr,
+										 bool is_warm, void *state);
 
 /* struct definitions appear in relscan.h */
 typedef struct IndexScanDescData *IndexScanDesc;
@@ -135,7 +152,8 @@ extern bool index_insert(Relation indexRelation,
 			 ItemPointer heap_t_ctid,
 			 Relation heapRelation,
 			 IndexUniqueCheck checkUnique,
-			 struct IndexInfo *indexInfo);
+			 struct IndexInfo *indexInfo,
+			 bool warm_update);
 
 extern IndexScanDesc index_beginscan(Relation heapRelation,
 				Relation indexRelation,
diff --git b/src/include/access/hash.h a/src/include/access/hash.h
index eb1df57..12a4bb6 100644
--- b/src/include/access/hash.h
+++ a/src/include/access/hash.h
@@ -281,6 +281,11 @@ typedef HashMetaPageData *HashMetaPage;
 #define HASHPROC		1
 #define HASHNProcs		1
 
+/*
+ * Flags overloaded onto the t_tid.ip_posid field. They are managed by
+ * ItemPointerSetFlags and the corresponding routines.
+ */
+#define HASH_INDEX_WARM_POINTER	0x01
 
 /* public routines */
 
@@ -291,6 +296,10 @@ extern bool hashinsert(Relation rel, Datum *values, bool *isnull,
 		   ItemPointer ht_ctid, Relation heapRel,
 		   IndexUniqueCheck checkUnique,
 		   struct IndexInfo *indexInfo);
+extern bool hashwarminsert(Relation rel, Datum *values, bool *isnull,
+		   ItemPointer ht_ctid, Relation heapRel,
+		   IndexUniqueCheck checkUnique,
+		   struct IndexInfo *indexInfo);
 extern bool hashgettuple(IndexScanDesc scan, ScanDirection dir);
 extern int64 hashgetbitmap(IndexScanDesc scan, TIDBitmap *tbm);
 extern IndexScanDesc hashbeginscan(Relation rel, int nkeys, int norderbys);
@@ -360,6 +369,8 @@ extern void _hash_expandtable(Relation rel, Buffer metabuf);
 extern void _hash_finish_split(Relation rel, Buffer metabuf, Buffer obuf,
 				   Bucket obucket, uint32 maxbucket, uint32 highmask,
 				   uint32 lowmask);
+extern void _hash_clear_items(Page page, OffsetNumber *clearitemnos,
+				   uint16 nclearitems);
 
 /* hashsearch.c */
 extern bool _hash_next(IndexScanDesc scan, ScanDirection dir);
@@ -401,7 +412,14 @@ extern void hashbucketcleanup(Relation rel, Bucket cur_bucket,
 				  BufferAccessStrategy bstrategy,
 				  uint32 maxbucket, uint32 highmask, uint32 lowmask,
 				  double *tuples_removed, double *num_index_tuples,
+				  double *warm_pointers_removed,
+				  double *clear_pointers_removed,
+				  double *pointers_cleared,
 				  bool bucket_has_garbage,
 				  IndexBulkDeleteCallback callback, void *callback_state);
 
+/* hash.c */
+extern bool hashrecheck(Relation indexRel, struct IndexInfo *indexInfo,
+		IndexTuple indexTuple, Relation heapRel, HeapTuple heapTuple);
+
 #endif   /* HASH_H */
diff --git b/src/include/access/hash_xlog.h a/src/include/access/hash_xlog.h
index dfd9237..0549a5a 100644
--- b/src/include/access/hash_xlog.h
+++ a/src/include/access/hash_xlog.h
@@ -199,9 +199,10 @@ typedef struct xl_hash_delete
 {
 	bool		is_primary_bucket_page; /* TRUE if the operation is for
 										 * primary bucket page */
+	uint16		nclearitems;			/* # of items to clear of WARM bits */
 }	xl_hash_delete;
 
-#define SizeOfHashDelete	(offsetof(xl_hash_delete, is_primary_bucket_page) + sizeof(bool))
+#define SizeOfHashDelete	(offsetof(xl_hash_delete, nclearitems) + sizeof(uint16))
 
 /*
  * This is what we need for metapage update operation.
diff --git b/src/include/access/heapam.h a/src/include/access/heapam.h
index 5540e12..2217af9 100644
--- b/src/include/access/heapam.h
+++ a/src/include/access/heapam.h
@@ -72,6 +72,20 @@ typedef struct HeapUpdateFailureData
 	CommandId	cmax;
 } HeapUpdateFailureData;
 
+typedef int HeapCheckWarmChainStatus;
+
+#define HCWC_CLEAR_TUPLE		0x0001
+#define	HCWC_WARM_TUPLE			0x0002
+#define HCWC_WARM_UPDATED_TUPLE	0x0004
+
+#define HCWC_IS_MIXED(status) \
+	((((status) & HCWC_CLEAR_TUPLE) != 0) && (((status) & HCWC_WARM_TUPLE) != 0))
+#define HCWC_IS_ALL_WARM(status) \
+	(((status) & HCWC_CLEAR_TUPLE) == 0)
+#define HCWC_IS_ALL_CLEAR(status) \
+	(((status) & HCWC_WARM_TUPLE) == 0)
+#define HCWC_IS_WARM_UPDATED(status) \
+	(((status) & HCWC_WARM_UPDATED_TUPLE) != 0)
 
 /* ----------------
  *		function prototypes for heap access method
@@ -137,9 +151,10 @@ extern bool heap_fetch(Relation relation, Snapshot snapshot,
 		   Relation stats_relation);
 extern bool heap_hot_search_buffer(ItemPointer tid, Relation relation,
 					   Buffer buffer, Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call);
+					   bool *all_dead, bool first_call, bool *recheck);
 extern bool heap_hot_search(ItemPointer tid, Relation relation,
-				Snapshot snapshot, bool *all_dead);
+				Snapshot snapshot, bool *all_dead,
+				bool *recheck, Buffer *buffer, HeapTuple heapTuple);
 
 extern void heap_get_latest_tid(Relation relation, Snapshot snapshot,
 					ItemPointer tid);
@@ -161,7 +176,8 @@ extern void heap_abort_speculative(Relation relation, HeapTuple tuple);
 extern HTSU_Result heap_update(Relation relation, ItemPointer otid,
 			HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode);
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update);
 extern HTSU_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
 				CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
 				bool follow_update,
@@ -176,10 +192,16 @@ extern bool heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple);
 extern Oid	simple_heap_insert(Relation relation, HeapTuple tup);
 extern void simple_heap_delete(Relation relation, ItemPointer tid);
 extern void simple_heap_update(Relation relation, ItemPointer otid,
-				   HeapTuple tup);
+				   HeapTuple tup,
+				   Bitmapset **modified_attrs,
+				   bool *warm_update);
 
 extern void heap_sync(Relation relation);
 extern void heap_update_snapshot(HeapScanDesc scan, Snapshot snapshot);
+extern HeapCheckWarmChainStatus heap_check_warm_chain(Page dp,
+				   ItemPointer tid, bool stop_at_warm);
+extern int heap_clear_warm_chain(Page dp, ItemPointer tid,
+				   OffsetNumber *cleared_offnums);
 
 /* in heap/pruneheap.c */
 extern void heap_page_prune_opt(Relation relation, Buffer buffer);
diff --git b/src/include/access/heapam_xlog.h a/src/include/access/heapam_xlog.h
index e6019d5..66fd0ea 100644
--- b/src/include/access/heapam_xlog.h
+++ a/src/include/access/heapam_xlog.h
@@ -32,7 +32,7 @@
 #define XLOG_HEAP_INSERT		0x00
 #define XLOG_HEAP_DELETE		0x10
 #define XLOG_HEAP_UPDATE		0x20
-/* 0x030 is free, was XLOG_HEAP_MOVE */
+#define XLOG_HEAP_MULTI_INSERT	0x30
 #define XLOG_HEAP_HOT_UPDATE	0x40
 #define XLOG_HEAP_CONFIRM		0x50
 #define XLOG_HEAP_LOCK			0x60
@@ -47,18 +47,23 @@
 /*
  * We ran out of opcodes, so heapam.c now has a second RmgrId.  These opcodes
  * are associated with RM_HEAP2_ID, but are not logically different from
- * the ones above associated with RM_HEAP_ID.  XLOG_HEAP_OPMASK applies to
- * these, too.
+ * the ones above associated with RM_HEAP_ID.
+ *
+ * In PG 10, we moved XLOG_HEAP2_MULTI_INSERT to RM_HEAP_ID. That frees up the
+ * 0x80 bit in RM_HEAP2_ID, potentially making room for another 8 opcodes
+ * there.
  */
 #define XLOG_HEAP2_REWRITE		0x00
 #define XLOG_HEAP2_CLEAN		0x10
 #define XLOG_HEAP2_FREEZE_PAGE	0x20
 #define XLOG_HEAP2_CLEANUP_INFO 0x30
 #define XLOG_HEAP2_VISIBLE		0x40
-#define XLOG_HEAP2_MULTI_INSERT 0x50
+#define XLOG_HEAP2_WARMCLEAR	0x50
 #define XLOG_HEAP2_LOCK_UPDATED 0x60
 #define XLOG_HEAP2_NEW_CID		0x70
 
+#define XLOG_HEAP2_OPMASK		0x70
+
 /*
  * xl_heap_insert/xl_heap_multi_insert flag values, 8 bits are available.
  */
@@ -80,6 +85,7 @@
 #define XLH_UPDATE_CONTAINS_NEW_TUPLE			(1<<4)
 #define XLH_UPDATE_PREFIX_FROM_OLD				(1<<5)
 #define XLH_UPDATE_SUFFIX_FROM_OLD				(1<<6)
+#define XLH_UPDATE_WARM_UPDATE					(1<<7)
 
 /* convenience macro for checking whether any form of old tuple was logged */
 #define XLH_UPDATE_CONTAINS_OLD						\
@@ -225,6 +231,14 @@ typedef struct xl_heap_clean
 
 #define SizeOfHeapClean (offsetof(xl_heap_clean, ndead) + sizeof(uint16))
 
+typedef struct xl_heap_warmclear
+{
+	uint16		ncleared;
+	/* OFFSET NUMBERS are in the block reference 0 */
+} xl_heap_warmclear;
+
+#define SizeOfHeapWarmClear (offsetof(xl_heap_warmclear, ncleared) + sizeof(uint16))
+
 /*
  * Cleanup_info is required in some cases during a lazy VACUUM.
  * Used for reporting the results of HeapTupleHeaderAdvanceLatestRemovedXid()
@@ -388,6 +402,8 @@ extern XLogRecPtr log_heap_clean(Relation reln, Buffer buffer,
 			   OffsetNumber *nowdead, int ndead,
 			   OffsetNumber *nowunused, int nunused,
 			   TransactionId latestRemovedXid);
+extern XLogRecPtr log_heap_warmclear(Relation reln, Buffer buffer,
+			   OffsetNumber *cleared, int ncleared);
 extern XLogRecPtr log_heap_freeze(Relation reln, Buffer buffer,
 				TransactionId cutoff_xid, xl_heap_freeze_tuple *tuples,
 				int ntuples);
diff --git b/src/include/access/htup_details.h a/src/include/access/htup_details.h
index 4d614b7..bcefba6 100644
--- b/src/include/access/htup_details.h
+++ a/src/include/access/htup_details.h
@@ -201,6 +201,21 @@ struct HeapTupleHeaderData
 										 * upgrade support */
 #define HEAP_MOVED (HEAP_MOVED_OFF | HEAP_MOVED_IN)
 
+/*
+ * A WARM chain usually consists of two parts, each of which is a HOT chain
+ * in itself, i.e. all indexed columns have the same value within that part;
+ * a WARM update is what separates the two parts. We need a way to tell which
+ * part a given tuple belongs to. HeapTupleHeaderIsWarmUpdated() alone is not
+ * enough, because during a WARM update both the old and the new tuple are
+ * marked as WARM-updated.
+ *
+ * So we need another infomask bit. We reuse the bit that old-style VACUUM
+ * FULL used to set. This is safe because the HEAP_WARM_TUPLE flag is always
+ * set together with HEAP_WARM_UPDATED; hence, when both HEAP_WARM_TUPLE and
+ * HEAP_WARM_UPDATED are set, we know the tuple belongs to the second part of
+ * the WARM chain.
+ */
+#define HEAP_WARM_TUPLE			0x4000
 #define HEAP_XACT_MASK			0xFFF0	/* visibility-related bits */
 
 /*
@@ -260,7 +275,11 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x0800 are available */
+#define HEAP_WARM_UPDATED		0x0800	/*
+										 * This or a prior version of this
+										 * tuple in the current HOT chain was
+										 * once WARM updated
+										 */
 #define HEAP_LATEST_TUPLE		0x1000	/*
 										 * This is the last tuple in chain and
 										 * ip_posid points to the root line
@@ -271,7 +290,7 @@ struct HeapTupleHeaderData
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF800	/* visibility-related bits */
 
 
 /*
@@ -396,7 +415,7 @@ struct HeapTupleHeaderData
 /* SetCmin is reasonably simple since we never need a combo CID */
 #define HeapTupleHeaderSetCmin(tup, cid) \
 do { \
-	Assert(!((tup)->t_infomask & HEAP_MOVED)); \
+	Assert(!HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_cid = (cid); \
 	(tup)->t_infomask &= ~HEAP_COMBOCID; \
 } while (0)
@@ -404,7 +423,7 @@ do { \
 /* SetCmax must be used after HeapTupleHeaderAdjustCmax; see combocid.c */
 #define HeapTupleHeaderSetCmax(tup, cid, iscombo) \
 do { \
-	Assert(!((tup)->t_infomask & HEAP_MOVED)); \
+	Assert(!HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_cid = (cid); \
 	if (iscombo) \
 		(tup)->t_infomask |= HEAP_COMBOCID; \
@@ -414,7 +433,7 @@ do { \
 
 #define HeapTupleHeaderGetXvac(tup) \
 ( \
-	((tup)->t_infomask & HEAP_MOVED) ? \
+	HeapTupleHeaderIsMoved(tup) ? \
 		(tup)->t_choice.t_heap.t_field3.t_xvac \
 	: \
 		InvalidTransactionId \
@@ -422,7 +441,7 @@ do { \
 
 #define HeapTupleHeaderSetXvac(tup, xid) \
 do { \
-	Assert((tup)->t_infomask & HEAP_MOVED); \
+	Assert(HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_xvac = (xid); \
 } while (0)
 
@@ -510,6 +529,21 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+#define HeapTupleHeaderSetWarmUpdated(tup) \
+do { \
+	(tup)->t_infomask2 |= HEAP_WARM_UPDATED; \
+} while (0)
+
+#define HeapTupleHeaderClearWarmUpdated(tup) \
+do { \
+	(tup)->t_infomask2 &= ~HEAP_WARM_UPDATED; \
+} while (0)
+
+#define HeapTupleHeaderIsWarmUpdated(tup) \
+( \
+  ((tup)->t_infomask2 & HEAP_WARM_UPDATED) != 0 \
+)
+
 /*
  * Mark this as the last tuple in the HOT chain. Before PG v10 we used to store
  * the TID of the tuple itself in t_ctid field to mark the end of the chain.
@@ -635,6 +669,58 @@ do { \
 )
 
 /*
+ * Macros to check whether a tuple was moved off/in by old-style VACUUM FULL
+ * from the pre-9.0 era. Such a tuple can never have HEAP_WARM_UPDATED set.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderIsMovedOff(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED_OFF) \
+)
+
+#define HeapTupleHeaderIsMovedIn(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED_IN) \
+)
+
+#define HeapTupleHeaderIsMoved(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED) \
+)
+
+/*
+ * Check if tuple belongs to the second part of the WARM chain.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderIsWarm(tuple) \
+( \
+	HeapTupleHeaderIsWarmUpdated(tuple) && \
+	(((tuple)->t_infomask & HEAP_WARM_TUPLE) != 0) \
+)
+
+/*
+ * Mark the tuple as a member of the second part of the chain. This must only
+ * be done on a tuple that is already marked as WARM-updated.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderSetWarm(tuple) \
+( \
+	AssertMacro(HeapTupleHeaderIsWarmUpdated(tuple)), \
+	(tuple)->t_infomask |= HEAP_WARM_TUPLE \
+)
+
+#define HeapTupleHeaderClearWarm(tuple) \
+( \
+	(tuple)->t_infomask &= ~HEAP_WARM_TUPLE \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
@@ -785,6 +871,24 @@ struct MinimalTupleData
 #define HeapTupleClearHeapOnly(tuple) \
 		HeapTupleHeaderClearHeapOnly((tuple)->t_data)
 
+#define HeapTupleIsWarmUpdated(tuple) \
+		HeapTupleHeaderIsWarmUpdated((tuple)->t_data)
+
+#define HeapTupleSetWarmUpdated(tuple) \
+		HeapTupleHeaderSetWarmUpdated((tuple)->t_data)
+
+#define HeapTupleClearWarmUpdated(tuple) \
+		HeapTupleHeaderClearWarmUpdated((tuple)->t_data)
+
+#define HeapTupleIsWarm(tuple) \
+		HeapTupleHeaderIsWarm((tuple)->t_data)
+
+#define HeapTupleSetWarm(tuple) \
+		HeapTupleHeaderSetWarm((tuple)->t_data)
+
+#define HeapTupleClearWarm(tuple) \
+		HeapTupleHeaderClearWarm((tuple)->t_data)
+
 #define HeapTupleGetOid(tuple) \
 		HeapTupleHeaderGetOid((tuple)->t_data)
 
diff --git b/src/include/access/nbtree.h a/src/include/access/nbtree.h
index f9304db..163180d 100644
--- b/src/include/access/nbtree.h
+++ a/src/include/access/nbtree.h
@@ -427,6 +427,12 @@ typedef BTScanOpaqueData *BTScanOpaque;
 #define SK_BT_NULLS_FIRST	(INDOPTION_NULLS_FIRST << SK_BT_INDOPTION_SHIFT)
 
 /*
+ * Flags overloaded onto the t_tid.ip_posid field. They are managed by
+ * ItemPointerSetFlags and the corresponding routines.
+ */
+#define BTREE_INDEX_WARM_POINTER	0x01
+
+/*
  * external entry points for btree, in nbtree.c
  */
 extern IndexBuildResult *btbuild(Relation heap, Relation index,
@@ -436,6 +442,10 @@ extern bool btinsert(Relation rel, Datum *values, bool *isnull,
 		 ItemPointer ht_ctid, Relation heapRel,
 		 IndexUniqueCheck checkUnique,
 		 struct IndexInfo *indexInfo);
+extern bool btwarminsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 struct IndexInfo *indexInfo);
 extern IndexScanDesc btbeginscan(Relation rel, int nkeys, int norderbys);
 extern Size btestimateparallelscan(void);
 extern void btinitparallelscan(void *target);
@@ -487,10 +497,12 @@ extern void _bt_pageinit(Page page, Size size);
 extern bool _bt_page_recyclable(Page page);
 extern void _bt_delitems_delete(Relation rel, Buffer buf,
 					OffsetNumber *itemnos, int nitems, Relation heapRel);
-extern void _bt_delitems_vacuum(Relation rel, Buffer buf,
-					OffsetNumber *itemnos, int nitems,
-					BlockNumber lastBlockVacuumed);
+extern void _bt_handleitems_vacuum(Relation rel, Buffer buf,
+					OffsetNumber *delitemnos, int ndelitems,
+					OffsetNumber *clearitemnos, int nclearitems);
 extern int	_bt_pagedel(Relation rel, Buffer buf);
+extern void	_bt_clear_items(Page page, OffsetNumber *clearitemnos,
+					uint16 nclearitems);
 
 /*
  * prototypes for functions in nbtsearch.c
@@ -537,6 +549,9 @@ extern bytea *btoptions(Datum reloptions, bool validate);
 extern bool btproperty(Oid index_oid, int attno,
 		   IndexAMProperty prop, const char *propname,
 		   bool *res, bool *isnull);
+extern bool btrecheck(Relation indexRel, struct IndexInfo *indexInfo,
+		IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * prototypes for functions in nbtvalidate.c
diff --git b/src/include/access/nbtxlog.h a/src/include/access/nbtxlog.h
index d6a3085..7efd0d7 100644
--- b/src/include/access/nbtxlog.h
+++ a/src/include/access/nbtxlog.h
@@ -142,34 +142,20 @@ typedef struct xl_btree_reuse_page
 /*
  * This is what we need to know about vacuum of individual leaf index tuples.
  * The WAL record can represent deletion of any number of index tuples on a
- * single index page when executed by VACUUM.
- *
- * For MVCC scans, lastBlockVacuumed will be set to InvalidBlockNumber.
- * For a non-MVCC index scans there is an additional correctness requirement
- * for applying these changes during recovery, which is that we must do one
- * of these two things for every block in the index:
- *		* lock the block for cleanup and apply any required changes
- *		* EnsureBlockUnpinned()
- * The purpose of this is to ensure that no index scans started before we
- * finish scanning the index are still running by the time we begin to remove
- * heap tuples.
- *
- * Any changes to any one block are registered on just one WAL record. All
- * blocks that we need to run EnsureBlockUnpinned() are listed as a block range
- * starting from the last block vacuumed through until this one. Individual
- * block numbers aren't given.
+ * single index page when executed by VACUUM. It also lists tuples whose
+ * WARM bits are to be cleared by VACUUM.
  *
  * Note that the *last* WAL record in any vacuum of an index is allowed to
  * have a zero length array of offsets. Earlier records must have at least one.
  */
 typedef struct xl_btree_vacuum
 {
-	BlockNumber lastBlockVacuumed;
-
-	/* TARGET OFFSET NUMBERS FOLLOW */
+	uint16		ndelitems;
+	uint16		nclearitems;
+	/* ndelitems + nclearitems TARGET OFFSET NUMBERS FOLLOW */
 } xl_btree_vacuum;
 
-#define SizeOfBtreeVacuum	(offsetof(xl_btree_vacuum, lastBlockVacuumed) + sizeof(BlockNumber))
+#define SizeOfBtreeVacuum	(offsetof(xl_btree_vacuum, nclearitems) + sizeof(uint16))
 
 /*
  * This is what we need to know about marking an empty branch for deletion.
diff --git b/src/include/access/relscan.h a/src/include/access/relscan.h
index 3fc726d..fa178d3 100644
--- b/src/include/access/relscan.h
+++ a/src/include/access/relscan.h
@@ -104,6 +104,9 @@ typedef struct IndexScanDescData
 	/* index access method's private state */
 	void	   *opaque;			/* access-method-specific info */
 
+	/* IndexInfo structure for this index */
+	struct IndexInfo  *indexInfo;
+
 	/*
 	 * In an index-only scan, a successful amgettuple call must fill either
 	 * xs_itup (and xs_itupdesc) or xs_hitup (and xs_hitupdesc) to provide the
@@ -119,7 +122,7 @@ typedef struct IndexScanDescData
 	HeapTupleData xs_ctup;		/* current heap tuple, if any */
 	Buffer		xs_cbuf;		/* current heap buffer in scan, if any */
 	/* NB: if xs_cbuf is not InvalidBuffer, we hold a pin on that buffer */
-	bool		xs_recheck;		/* T means scan keys must be rechecked */
+	bool		xs_recheck;		/* T means scan keys must be rechecked for each tuple */
 
 	/*
 	 * When fetching with an ordering operator, the values of the ORDER BY
diff --git b/src/include/catalog/index.h a/src/include/catalog/index.h
index 20bec90..f92ec29 100644
--- b/src/include/catalog/index.h
+++ a/src/include/catalog/index.h
@@ -89,6 +89,13 @@ extern void FormIndexDatum(IndexInfo *indexInfo,
 			   Datum *values,
 			   bool *isnull);
 
+extern void FormIndexPlainDatum(IndexInfo *indexInfo,
+			   Relation heapRel,
+			   HeapTuple heapTup,
+			   Datum *values,
+			   bool *isnull,
+			   bool *isavail);
+
 extern void index_build(Relation heapRelation,
 			Relation indexRelation,
 			IndexInfo *indexInfo,
diff --git b/src/include/catalog/pg_proc.h a/src/include/catalog/pg_proc.h
index 836d6ff..0ca6e22 100644
--- b/src/include/catalog/pg_proc.h
+++ a/src/include/catalog/pg_proc.h
@@ -2769,6 +2769,8 @@ DATA(insert OID = 1933 (  pg_stat_get_tuples_deleted	PGNSP PGUID 12 1 0 0 0 f f
 DESCR("statistics: number of tuples deleted");
 DATA(insert OID = 1972 (  pg_stat_get_tuples_hot_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated");
+DATA(insert OID = 3355 (  pg_stat_get_tuples_warm_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated");
 DATA(insert OID = 2878 (  pg_stat_get_live_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_live_tuples _null_ _null_ _null_ ));
 DESCR("statistics: number of live tuples");
 DATA(insert OID = 2879 (  pg_stat_get_dead_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_dead_tuples _null_ _null_ _null_ ));
@@ -2921,6 +2923,8 @@ DATA(insert OID = 3042 (  pg_stat_get_xact_tuples_deleted		PGNSP PGUID 12 1 0 0
 DESCR("statistics: number of tuples deleted in current transaction");
 DATA(insert OID = 3043 (  pg_stat_get_xact_tuples_hot_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated in current transaction");
+DATA(insert OID = 3356 (  pg_stat_get_xact_tuples_warm_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated in current transaction");
 DATA(insert OID = 3044 (  pg_stat_get_xact_blocks_fetched		PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_fetched _null_ _null_ _null_ ));
 DESCR("statistics: number of blocks fetched in current transaction");
 DATA(insert OID = 3045 (  pg_stat_get_xact_blocks_hit			PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_hit _null_ _null_ _null_ ));
diff --git b/src/include/commands/progress.h a/src/include/commands/progress.h
index 9472ecc..b355b61 100644
--- b/src/include/commands/progress.h
+++ a/src/include/commands/progress.h
@@ -25,6 +25,7 @@
 #define PROGRESS_VACUUM_NUM_INDEX_VACUUMS		4
 #define PROGRESS_VACUUM_MAX_DEAD_TUPLES			5
 #define PROGRESS_VACUUM_NUM_DEAD_TUPLES			6
+#define PROGRESS_VACUUM_HEAP_BLKS_WARMCLEARED	7
 
 /* Phases of vacuum (as advertised via PROGRESS_VACUUM_PHASE) */
 #define PROGRESS_VACUUM_PHASE_SCAN_HEAP			1
diff --git b/src/include/executor/executor.h a/src/include/executor/executor.h
index 02dbe7b..c4495a3 100644
--- b/src/include/executor/executor.h
+++ a/src/include/executor/executor.h
@@ -382,6 +382,7 @@ extern void UnregisterExprContextCallback(ExprContext *econtext,
 extern void ExecOpenIndices(ResultRelInfo *resultRelInfo, bool speculative);
 extern void ExecCloseIndices(ResultRelInfo *resultRelInfo);
 extern List *ExecInsertIndexTuples(TupleTableSlot *slot, ItemPointer tupleid,
+					  ItemPointer root_tid, Bitmapset *modified_attrs,
 					  EState *estate, bool noDupErr, bool *specConflict,
 					  List *arbiterIndexes);
 extern bool ExecCheckIndexConstraints(TupleTableSlot *slot, EState *estate,
diff --git b/src/include/executor/nodeIndexscan.h a/src/include/executor/nodeIndexscan.h
index ea3f3a5..ebeec74 100644
--- b/src/include/executor/nodeIndexscan.h
+++ a/src/include/executor/nodeIndexscan.h
@@ -41,5 +41,4 @@ extern void ExecIndexEvalRuntimeKeys(ExprContext *econtext,
 extern bool ExecIndexEvalArrayKeys(ExprContext *econtext,
 					   IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 extern bool ExecIndexAdvanceArrayKeys(IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
-
 #endif   /* NODEINDEXSCAN_H */
diff --git b/src/include/nodes/execnodes.h a/src/include/nodes/execnodes.h
index f856f60..cd09553 100644
--- b/src/include/nodes/execnodes.h
+++ a/src/include/nodes/execnodes.h
@@ -66,6 +66,7 @@ typedef struct IndexInfo
 	NodeTag		type;
 	int			ii_NumIndexAttrs;
 	AttrNumber	ii_KeyAttrNumbers[INDEX_MAX_KEYS];
+	Bitmapset  *ii_indxattrs;	/* bitmap of all columns used in this index */
 	List	   *ii_Expressions; /* list of Expr */
 	List	   *ii_ExpressionsState;	/* list of ExprState */
 	List	   *ii_Predicate;	/* list of Expr */
diff --git b/src/include/pgstat.h a/src/include/pgstat.h
index f2daf32..af8a3ba 100644
--- b/src/include/pgstat.h
+++ a/src/include/pgstat.h
@@ -105,6 +105,7 @@ typedef struct PgStat_TableCounts
 	PgStat_Counter t_tuples_updated;
 	PgStat_Counter t_tuples_deleted;
 	PgStat_Counter t_tuples_hot_updated;
+	PgStat_Counter t_tuples_warm_updated;
 	bool		t_truncated;
 
 	PgStat_Counter t_delta_live_tuples;
@@ -625,6 +626,7 @@ typedef struct PgStat_StatTabEntry
 	PgStat_Counter tuples_updated;
 	PgStat_Counter tuples_deleted;
 	PgStat_Counter tuples_hot_updated;
+	PgStat_Counter tuples_warm_updated;
 
 	PgStat_Counter n_live_tuples;
 	PgStat_Counter n_dead_tuples;
@@ -1257,7 +1259,7 @@ pgstat_report_wait_end(void)
 	(pgStatBlockWriteTime += (n))
 
 extern void pgstat_count_heap_insert(Relation rel, PgStat_Counter n);
-extern void pgstat_count_heap_update(Relation rel, bool hot);
+extern void pgstat_count_heap_update(Relation rel, bool hot, bool warm);
 extern void pgstat_count_heap_delete(Relation rel);
 extern void pgstat_count_truncate(Relation rel);
 extern void pgstat_update_heap_dead_tuples(Relation rel, int delta);
diff --git b/src/include/storage/bufpage.h a/src/include/storage/bufpage.h
index e956dc3..1852195 100644
--- b/src/include/storage/bufpage.h
+++ a/src/include/storage/bufpage.h
@@ -433,6 +433,8 @@ extern void PageIndexMultiDelete(Page page, OffsetNumber *itemnos, int nitems);
 extern void PageIndexTupleDeleteNoCompact(Page page, OffsetNumber offset);
 extern bool PageIndexTupleOverwrite(Page page, OffsetNumber offnum,
 						Item newtup, Size newsize);
+extern void PageIndexClearWarmTuples(Page page, OffsetNumber *clearitemnos,
+						uint16 nclearitems);
 extern char *PageSetChecksumCopy(Page page, BlockNumber blkno);
 extern void PageSetChecksumInplace(Page page, BlockNumber blkno);
 
diff --git b/src/include/utils/rel.h a/src/include/utils/rel.h
index a617a7c..fbac7c0 100644
--- b/src/include/utils/rel.h
+++ a/src/include/utils/rel.h
@@ -138,9 +138,14 @@ typedef struct RelationData
 
 	/* data managed by RelationGetIndexAttrBitmap: */
 	Bitmapset  *rd_indexattr;	/* identifies columns used in indexes */
+	Bitmapset  *rd_exprindexattr; /* identifies columns used in expression or
+									 predicate indexes */
+	Bitmapset  *rd_indxnotreadyattr;	/* columns used by indexes not yet
+										   ready */
 	Bitmapset  *rd_keyattr;		/* cols that can be ref'd by foreign keys */
 	Bitmapset  *rd_pkattr;		/* cols included in primary key */
 	Bitmapset  *rd_idattr;		/* included in replica identity index */
+	bool		rd_supportswarm;/* True if the table can be WARM updated */
 
 	PublicationActions  *rd_pubactions;	/* publication actions */
 
diff --git b/src/include/utils/relcache.h a/src/include/utils/relcache.h
index da36b67..d18bd09 100644
--- b/src/include/utils/relcache.h
+++ a/src/include/utils/relcache.h
@@ -50,7 +50,9 @@ typedef enum IndexAttrBitmapKind
 	INDEX_ATTR_BITMAP_ALL,
 	INDEX_ATTR_BITMAP_KEY,
 	INDEX_ATTR_BITMAP_PRIMARY_KEY,
-	INDEX_ATTR_BITMAP_IDENTITY_KEY
+	INDEX_ATTR_BITMAP_IDENTITY_KEY,
+	INDEX_ATTR_BITMAP_EXPR_PREDICATE,
+	INDEX_ATTR_BITMAP_NOTREADY
 } IndexAttrBitmapKind;
 
 extern Bitmapset *RelationGetIndexAttrBitmap(Relation relation,
diff --git b/src/test/regress/expected/alter_generic.out a/src/test/regress/expected/alter_generic.out
index b01be59..37719c9 100644
--- b/src/test/regress/expected/alter_generic.out
+++ a/src/test/regress/expected/alter_generic.out
@@ -161,15 +161,15 @@ ALTER SERVER alt_fserv1 RENAME TO alt_fserv3;   -- OK
 SELECT fdwname FROM pg_foreign_data_wrapper WHERE fdwname like 'alt_fdw%';
  fdwname  
 ----------
- alt_fdw2
  alt_fdw3
+ alt_fdw2
 (2 rows)
 
 SELECT srvname FROM pg_foreign_server WHERE srvname like 'alt_fserv%';
   srvname   
 ------------
- alt_fserv2
  alt_fserv3
+ alt_fserv2
 (2 rows)
 
 --
diff --git b/src/test/regress/expected/rules.out a/src/test/regress/expected/rules.out
index bd13ae6..44c59ae 100644
--- b/src/test/regress/expected/rules.out
+++ a/src/test/regress/expected/rules.out
@@ -1732,6 +1732,7 @@ pg_stat_all_tables| SELECT c.oid AS relid,
     pg_stat_get_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_tuples_deleted(c.oid) AS n_tup_del,
     pg_stat_get_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_tuples_warm_updated(c.oid) AS n_tup_warm_upd,
     pg_stat_get_live_tuples(c.oid) AS n_live_tup,
     pg_stat_get_dead_tuples(c.oid) AS n_dead_tup,
     pg_stat_get_mod_since_analyze(c.oid) AS n_mod_since_analyze,
@@ -1875,6 +1876,7 @@ pg_stat_sys_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1918,6 +1920,7 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1955,7 +1958,8 @@ pg_stat_xact_all_tables| SELECT c.oid AS relid,
     pg_stat_get_xact_tuples_inserted(c.oid) AS n_tup_ins,
     pg_stat_get_xact_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_xact_tuples_deleted(c.oid) AS n_tup_del,
-    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd
+    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_xact_tuples_warm_updated(c.oid) AS n_tup_warm_upd
    FROM ((pg_class c
      LEFT JOIN pg_index i ON ((c.oid = i.indrelid)))
      LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
@@ -1971,7 +1975,8 @@ pg_stat_xact_sys_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname = ANY (ARRAY['pg_catalog'::name, 'information_schema'::name])) OR (pg_stat_xact_all_tables.schemaname ~ '^pg_toast'::text));
 pg_stat_xact_user_functions| SELECT p.oid AS funcid,
@@ -1993,7 +1998,8 @@ pg_stat_xact_user_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname <> ALL (ARRAY['pg_catalog'::name, 'information_schema'::name])) AND (pg_stat_xact_all_tables.schemaname !~ '^pg_toast'::text));
 pg_statio_all_indexes| SELECT c.oid AS relid,
diff --git b/src/test/regress/expected/warm.out a/src/test/regress/expected/warm.out
new file mode 100644
index 0000000..6391891
--- /dev/null
+++ a/src/test/regress/expected/warm.out
@@ -0,0 +1,367 @@
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+-- This should be a HOT update as non-index key is updated, but the
+-- page won't have any free space, so probably a non-HOT update
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab1
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Check if index only scan works correctly
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab1
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx1 on updtst_tab1
+   Index Cond: (b = 140001)
+(2 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab1;
+------------------
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab2
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE a = 1;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab2
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE b = 701;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+VACUUM updtst_tab2;
+EXPLAIN (costs off) SELECT b FROM updtst_tab2 WHERE b = 701;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx2 on updtst_tab2
+   Index Cond: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab2 WHERE b = 701;
+  b  
+-----
+ 701
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab2;
+------------------
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 1;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 701;
+       QUERY PLAN        
+-------------------------
+ Seq Scan on updtst_tab3
+   Filter: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 701;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+  b   
+------
+ 1421
+(1 row)
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+SET enable_seqscan = false;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    98
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 2;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 702;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx3 on updtst_tab3
+   Index Cond: (b = 702)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 702;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+  b   
+------
+ 1422
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab3;
+------------------
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+explain select * from test_warm where lower(a) = 'test';
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on test_warm  (cost=4.18..12.65 rows=4 width=64)
+   Recheck Cond: (lower(a) = 'test'::text)
+   ->  Bitmap Index Scan on test_warmindx  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (lower(a) = 'test'::text)
+(4 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+select *, ctid from test_warm where a = 'test';
+ a | b | ctid 
+---+---+------
+(0 rows)
+
+select *, ctid from test_warm where a = 'TEST';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+                                   QUERY PLAN                                    
+---------------------------------------------------------------------------------
+ Index Scan using test_warmindx on test_warm  (cost=0.15..20.22 rows=4 width=64)
+   Index Cond: (lower(a) = 'test'::text)
+(2 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+DROP TABLE test_warm;
diff --git b/src/test/regress/parallel_schedule a/src/test/regress/parallel_schedule
index ea7b5b4..7cc0d21 100644
--- b/src/test/regress/parallel_schedule
+++ a/src/test/regress/parallel_schedule
@@ -42,6 +42,8 @@ test: create_type
 test: create_table
 test: create_function_2
 
+test: warm
+
 # ----------
 # Load huge amounts of data
 # We should split the data files into single files and then
diff --git b/src/test/regress/sql/warm.sql a/src/test/regress/sql/warm.sql
new file mode 100644
index 0000000..3a078dd
--- /dev/null
+++ a/src/test/regress/sql/warm.sql
@@ -0,0 +1,170 @@
+
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+
+-- This should be a HOT update as non-index key is updated, but the
+-- page won't have any free space, so probably a non-HOT update
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Check if index only scan works correctly
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab1;
+
+------------------
+
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE a = 1;
+
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE b = 701;
+
+VACUUM updtst_tab2;
+EXPLAIN (costs off) SELECT b FROM updtst_tab2 WHERE b = 701;
+SELECT b FROM updtst_tab2 WHERE b = 701;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab2;
+------------------
+
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+SELECT * FROM updtst_tab3 WHERE a = 1;
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+SET enable_seqscan = false;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+SELECT * FROM updtst_tab3 WHERE a = 2;
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab3;
+------------------
+
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where a = 'test';
+select *, ctid from test_warm where a = 'TEST';
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+DROP TABLE test_warm;
#159Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Amit Kapila (#156)
Re: Patch: Write Amplification Reduction Method (WARM)

Thanks Amit. v19 addresses some of the comments below.

On Thu, Mar 23, 2017 at 10:28 AM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Wed, Mar 22, 2017 at 4:06 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Tue, Mar 21, 2017 at 6:47 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

Please find attached rebased patches.

Few comments on 0005_warm_updates_v18.patch:

Few more comments on 0005_warm_updates_v18.patch:
1.
@@ -234,6 +241,25 @@ index_beginscan(Relation heapRelation,
scan->heapRelation = heapRelation;
scan->xs_snapshot = snapshot;

+	/*
+	 * If the index supports recheck, make sure that index tuple is saved
+	 * during index scans. Also build and cache IndexInfo which is used by
+	 * amrecheck routine.
+	 *
+	 * XXX Ideally, we should look at all indexes on the table and check if
+	 * WARM is at all supported on the base table. If WARM is not supported
+	 * then we don't need to do any recheck. RelationGetIndexAttrBitmap() does
+	 * do that and sets rd_supportswarm after looking at all indexes. But we
+	 * don't know if the function was called earlier in the session when we're
+	 * here. We can't call it now because there exists a risk of causing
+	 * deadlock.
+	 */
+	if (indexRelation->rd_amroutine->amrecheck)
+	{
+		scan->xs_want_itup = true;
+		scan->indexInfo = BuildIndexInfo(indexRelation);
+	}
+

Don't we need to do this rechecking during parallel scans? Also what
about bitmap heap scans?

Yes, we need to handle parallel scans. Bitmap scans are not a problem
because they can never return the same TID twice. I fixed this by moving
the setup inside index_beginscan_internal.

2.
+++ b/src/backend/access/nbtree/nbtinsert.c
-
typedef struct

Above change is not required.

Sure. Fixed.

3.
+void _bt_clear_items(Page page, OffsetNumber *clearitemnos, uint16 nclearitems)
+void _hash_clear_items(Page page, OffsetNumber *clearitemnos,
+					   uint16 nclearitems)

Both the above functions look exactly the same; isn't it better to have a
single function like page_clear_items? If you want separation for
different index types, we can have one common function that is called
from the different index types.

Yes, makes sense. Moved that to bufpage.c. The reason I originally had
index-specific versions is that I started by putting the WARM flag in the
IndexTuple header. But since hash indexes do not have that bit free, I moved
everything to a TID bit-flag. I still left the index-specific wrappers, but
they just call PageIndexClearWarmTuples.
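To illustrate the consolidation being discussed, here is a minimal sketch of a shared "clear WARM flag on listed line pointers" routine. This is a simplified stand-in, not the actual bufpage.c code: the flag macro, the flat flags array, and the function name are all hypothetical, whereas the real patch manipulates TID flag bits inside index tuples on a page.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for the WARM bit carried in a pointer's TID flags. */
#define WARM_FLAG 0x1

typedef uint16_t OffsetNumber;

/*
 * Sketch of a generic PageIndexClearWarmTuples(): clear the WARM bit on
 * each listed item, so the btree and hash wrappers can share one loop
 * instead of duplicating it per index AM.
 */
static void
page_clear_warm_items(uint16_t *itemflags, const OffsetNumber *clearitemnos,
					  uint16_t nclearitems)
{
	for (uint16_t i = 0; i < nclearitems; i++)
		itemflags[clearitemnos[i]] &= (uint16_t) ~WARM_FLAG;
}
```

The index-specific wrappers then reduce to one-line calls into this common routine.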

4.
- if (callback(htup, callback_state))
+ flags = ItemPointerGetFlags(&itup->t_tid);
+ is_warm = ((flags & BTREE_INDEX_WARM_POINTER) != 0);
+
+ if (is_warm)
+ stats->num_warm_pointers++;
+ else
+ stats->num_clear_pointers++;
+
+ result = callback(htup, is_warm, callback_state);
+ if (result == IBDCR_DELETE)
+ {
+ if (is_warm)
+ stats->warm_pointers_removed++;
+ else
+ stats->clear_pointers_removed++;

The patch looks to be inconsistent in collecting stats for btree and
hash. I don't see the above stats getting updated in the hash index code.

Fixed. The hashbucketcleanup signature is getting a bit too long. Maybe
we should move some of these counters into a structure and pass that
around. Not done here though.
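The structure suggested above could look like the following sketch. The type and field names are illustrative (taken from the counters quoted in the review comment), not the patch's actual definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical grouping of the per-vacuum WARM pointer counters that the
 * patch currently threads through hashbucketcleanup() individually. */
typedef struct WarmVacuumStats
{
	uint64_t	num_warm_pointers;
	uint64_t	num_clear_pointers;
	uint64_t	warm_pointers_removed;
	uint64_t	clear_pointers_removed;
} WarmVacuumStats;

/* Mirrors the counting logic quoted from the btree vacuum path: classify
 * the pointer first, then count it as removed if the callback said so. */
static void
count_pointer(WarmVacuumStats *stats, bool is_warm, bool deleted)
{
	if (is_warm)
		stats->num_warm_pointers++;
	else
		stats->num_clear_pointers++;

	if (deleted)
	{
		if (is_warm)
			stats->warm_pointers_removed++;
		else
			stats->clear_pointers_removed++;
	}
}
```

Passing one pointer to such a struct would keep the btree and hash cleanup paths symmetric and make it harder to forget a counter in one of them.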

5.
+btrecheck(Relation indexRel, IndexInfo *indexInfo, IndexTuple indexTuple,
+ Relation heapRel, HeapTuple heapTuple)
{
..
+ if (!datumIsEqual(values[i - 1], indxvalue, att->attbyval,
+ att->attlen))
..
}

Will this work if the index is using non-default collation?

Not sure I understand that. Can you please elaborate?
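As background on the collation question: datumIsEqual() compares the raw bytes of two datums, so a collation never changes its result. What it can do is report "not equal" for values a collation-aware comparison would treat as equal (a case-insensitive collation, for example). A minimal model of that byte-level comparison, with a hypothetical function name:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Simplified model of datumIsEqual() for a by-reference, fixed-length
 * datum: a raw byte comparison with no collation involvement at all. */
static bool
datum_binary_equal(const void *a, const void *b, size_t len)
{
	return memcmp(a, b, len) == 0;
}
```

So "test" and "TEST" are binary-unequal even under a collation that considers them equal; whether that stricter notion of equality is acceptable for the recheck is exactly the point being debated here.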

6.
+++ b/src/backend/access/nbtree/nbtxlog.c
@@ -390,83 +390,9 @@ btree_xlog_vacuum(XLogReaderState *record)
-#ifdef UNUSED
xl_btree_vacuum *xlrec = (xl_btree_vacuum *) XLogRecGetData(record);

/*
- * This section of code is thought to be no longer needed, after analysis
- * of the calling paths. It is retained to allow the code to be reinstated
- * if a flaw is revealed in that thinking.
- *
..

Why does this patch need to remove the above code under #ifdef UNUSED?

Yeah, it isn't strictly necessary. But that dead code was getting in the
way, so I decided to strip it out. I can put it back if that's an issue, or
remove it as a separate commit first.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#160Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Amit Kapila (#157)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Mar 23, 2017 at 3:02 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

That sounds like you are dodging the actual problem. I mean you can
put that same PageIsFull() check in master code as well and then you
will most probably again see the same regression.

Well, I don't see it that way. There was a specific concern about a specific
workload that WARM might regress, and I think this change addresses that.
Sure, if you pick that one piece, put it in master first, and then compare
against the rest of the WARM code, you will see a regression. But I thought
what we were worried about was WARM causing a regression for some existing
user, who might see her workload running 10% slower; this change mitigates
that.

Also, I think if we
test at fillfactor 80 or 75 (which is not unrealistic considering an
update-intensive workload), then we might again see regression.

Yeah, we might, but it will be less than before, maybe 2% instead of 10%.
And by doing this we are further narrowing an already narrow test case. I
think we need to see things in totality and weigh the cost-benefit
trade-offs. There are numbers for very common workloads where WARM provides
20%, 30% or even more than 100% improvement.

Andres and Alvaro already have other ideas to address this problem even
further. And as I said, we can pass in index-specific information and make
that routine bail out even earlier. We need to accept that WARM will need
to do more attribute checks than master, especially when there is more than
one index on the table, and sometimes those checks will go to waste. I am OK
with providing a table-specific knob to disable WARM, but I'm not sure if
others would like that idea.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#161Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Pavan Deolasee (#160)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Mar 23, 2017 at 4:08 PM, Pavan Deolasee <pavan.deolasee@gmail.com>
wrote:

On Thu, Mar 23, 2017 at 3:02 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

That sounds like you are dodging the actual problem. I mean you can
put that same PageIsFull() check in master code as well and then you
will most probably again see the same regression.

Well, I don't see it that way. There was a specific concern about a
specific workload that WARM might regress, and I think this change addresses
that. Sure, if you pick that one piece, put it in master first and then
compare against the rest of the WARM code, you will see a regression.

BTW, the PageIsFull() check may not help as much in master as it does with
WARM: in master we bail out early anyway after a couple of column checks. It
may help reduce the 10% drop that we see while updating the last index
column, but if we compare master and WARM with the patch applied, the
regression should be quite nominal.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#162Amit Kapila
amit.kapila16@gmail.com
In reply to: Pavan Deolasee (#158)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Mar 23, 2017 at 3:44 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Wed, Mar 22, 2017 at 4:06 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

3.
+ /*
+  * HASH indexes compute a hash value of the key and store that in the
+  * index. So we must first obtain the hash of the value obtained from the
+  * heap and then do a comparison.
+  */
+ _hash_convert_tuple(indexRel, values, isnull, values2, isnull2);

I think here you need to handle the case where the heap has a NULL value,
as the hash index doesn't contain NULL values; otherwise, the code in the
function below can return true, which is not right.

I think we can simply conclude that hashrecheck has failed the equality
test if the heap has a NULL value, because such a tuple should not have
been reached via the hash index unless a non-NULL hash key was later
updated to a NULL key, right?

Right.

6.
+ *stats = index_bulk_delete(&ivinfo, *stats,
+                            lazy_indexvac_phase1, (void *) vacrelstats);
+ ereport(elevel,
+         (errmsg("scanned index \"%s\" to remove %d row version, found "
+                 "%0.f warm pointers, %0.f clear pointers, removed "
+                 "%0.f warm pointers, removed %0.f clear pointers",
+                 RelationGetRelationName(indrel),
+                 vacrelstats->num_dead_tuples,
+                 (*stats)->num_warm_pointers,
+                 (*stats)->num_clear_pointers,
+                 (*stats)->warm_pointers_removed,
+                 (*stats)->clear_pointers_removed)));
+
+ (*stats)->num_warm_pointers = 0;
+ (*stats)->num_clear_pointers = 0;
+ (*stats)->warm_pointers_removed = 0;
+ (*stats)->clear_pointers_removed = 0;
+ (*stats)->pointers_cleared = 0;
+
+ *stats = index_bulk_delete(&ivinfo, *stats,
+                            lazy_indexvac_phase2, (void *) vacrelstats);

To convert WARM chains, we need to do two index passes over all the
indexes. I think it can substantially increase random I/O. This can help
us do more WARM updates, but I don't see how the downside (increased
random I/O) will be acceptable for all kinds of cases.

Yes, this is a very fair point. The way I proposed to address this upthread
is by introducing a set of threshold/scale GUCs specific to WARM, so users
can control when to invoke WARM cleanup. We do the two index scans only if
WARM cleanup is required; otherwise vacuum works the way it works today,
without any additional overhead.

I am not sure on what basis a user can set such parameters; it will be
quite difficult to tune them. I think the point is that whatever threshold
we keep, once it is crossed, vacuum will perform two scans of all the
indexes. IIUC, this conversion of WARM chains is required so that future
updates can be WARM, or is there any other reason? I see this as a big
penalty for future updates.

We already have some intelligence to skip the second index scan if we did
not find any WARM candidate chains during the first heap scan. This should
take care of the majority of users who never update their indexed columns.
For the others, we need either a knob or some built-in way to deduce
whether to do WARM cleanup or not.

Does that seem worthwhile?

Is there any consensus on your proposal? I feel this needs somewhat
broader discussion; you and I can't take a call on this point alone. I
request others to share their opinion on this point as well.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#163Mithun Cy
mithun.cy@enterprisedb.com
In reply to: Pavan Deolasee (#154)
1 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

Hi Pavan,
On Thu, Mar 23, 2017 at 12:19 AM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

Ok, no problem. I did some tests on AWS i2.xlarge instance (4 vCPU, 30GB
RAM, attached SSD) and results are shown below. But I think it is important
to get independent validation from your side too, just to ensure I am not
making any mistake in measurement. I've attached naively put together
scripts which I used to run the benchmark. If you find them useful, please
adjust the paths and run on your machine.

I did a similar test and it appears your v19 is fine; it does not cause
any regression. On the other hand, I also ran tests reducing the table
fillfactor to 80, where I can see a small regression of 2-3% on average
when updating col2; when updating col9 I again do not see any regression.

--
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com

Attachments:

WARM_test_02.ods (application/vnd.oasis.opendocument.spreadsheet)
#164Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Mithun Cy (#163)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Mar 23, 2017 at 11:44 PM, Mithun Cy <mithun.cy@enterprisedb.com>
wrote:

Hi Pavan,
On Thu, Mar 23, 2017 at 12:19 AM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

Ok, no problem. I did some tests on AWS i2.xlarge instance (4 vCPU, 30GB
RAM, attached SSD) and results are shown below. But I think it is important
to get independent validation from your side too, just to ensure I am not
making any mistake in measurement. I've attached naively put together
scripts which I used to run the benchmark. If you find them useful, please
adjust the paths and run on your machine.

I did a similar test and it appears your v19 is fine; it does not cause
any regression. On the other hand, I also ran tests reducing the table
fillfactor to 80, where I can see a small regression of 2-3% on average
when updating col2; when updating col9 I again do not see any regression.

Thanks Mithun for repeating the tests and confirming that the v19 patch
looks ok.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#165Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Bruce Momjian (#140)
2 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Mar 22, 2017 at 12:30 AM, Bruce Momjian <bruce@momjian.us> wrote:

Well, it is really a question of how often you want to do a second WARM
update (not possible) vs. the frequency of lazy vacuum. I assumed that
would be a 100X or 10kX difference, but I am not sure myself either. My
initial guess was that only allowing a single WARM update between lazy
vacuums would show no improvement in real-world workloads, but maybe I
am wrong.

It's quite hard to say until we see many more benchmarks. As the author of
the patch, I might have gotten repetitive with my benchmarks. But I've seen
over 50% improvement in TPS even without chain conversion (a test with 6
indexes on a 12-column table).

With chain conversion, in my latest tests, I saw over 100% improvement. The
benchmark probably received 6-8 autovacuum cycles in an 8hr test. This was
with a large table which doesn't fit in memory, or barely fits in memory.
Graphs attached again in case you missed them (x-axis: test duration in
seconds; y-axis: moving average of TPS).

Maybe we should run another set with just 2 or 3 indexes on a 12-column
table and see how much that helps, if at all. Or maybe do a mix of HOT and
WARM updates. Or even just do HOT updates on small and large tables and
look for any regression. I will try to schedule some of those tests.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

Moderate_AV_4Indexes_100FF_SF800_Duration28800s.pdf (application/pdf)
*��B�:����U���dmCx�8I�HL�=�Mfh�u&�=g�*$#P�5�4HMB����n��,��_�$X�?tc���!-)Xn���W�U���^��s���U�^�;!�B^Dx����r��=�/����Wc���!�����_������C��-7Y���p
=����dE���0��
����n)[��������!-�^Q�;��uM��n4���eS�Y:Hl��WH�GY7������P��>n)���=��kV;�����|�6tn��}�N#�X�Y��?{�.���a�=������7�����w<��+����^{� �\=&�1�0P�����>����Y����L�B�0��<��+��@�,����R�!.��?�������J�(*���W��j�]=��J6+�������PY.�_�����A=�S�B�s��l>��S�:��0��QT�y�f��
8h�����]���j�s=(D�#b8���"dN�*Jq.N-H)���Y\��H�����7
�9���a4��[a9�'�<X|c�0N!?�hl�H����f/z�$/�%�L��h3rh����PQ*M8�6�����9a�2���{����H���%�C��|���0�)�����A�`|^ MC���m[
X�(me����Q����qh�qmO:o9�
�Sy3��%��:N��	�g��*lg��ijC�+���X��K)4kHA��"�)�5J�x@.����&W����G}� '�EeDX��1,9��L��8��5.O�;�<��
��K��9{h��S4a�d�I�Z6�jua'����R����S���*�\��J��
�������������Xz�����-��8���a�|�mM�P���d���~��C��
����D��LM�����UJ7�lK���<�N8���������!<&=�A�WM���15PS�������g�K�6�>��Y
&19�H�.�Z��M���M�J]4���3���")�A`� OT�Z?��1�p`X��5�j�N�����2)y%v�R������I^%������0b��?3��c�D|��4BG|z�l��K���Bx2X���n	�	t$h���5\ �l;����X�c",f\C��Uh4a������\k��������J}��^�t0i(�2�SB3�$sU��r�I�*�l�b
5��8`�h�r�,^j4T�lF!��E��g��C�K����(R��hy�p�T��<���Q����w���-a������u���%�2��"�@G��"��HO���OV;��3������8�LzM��3�:q)�&�7}�g��u������J����Q�����ZG"��N+?zN#��%Uau@qCR����X:|aq�"V.7�:e]nG;���3�_jrqQ�k���#V���;���n)�mY;#�I�����/�_F�"���M�<�\���4����N"1}���s"����%BA����u��X��]\�?�,���d9K)�����������d�(�Gk<"�������p�pB"'���KP+��Hr����''��D�u&�Q8���,hyXh+ �#o�[B�Lf�J���Hhj�Y#���"�B���ZT&m*#�����[
�����G��Y�D25�F�:(�R�., Y��!��KT�u�q��e��Xk��I]��W@��uZ���Va���H��5�g45���E�]$(}��*�>4�E�h��Qa����!�����R��F��C��%�6�{���;1�J�!�'�U�_3�E ,���QUA0�
����W�lt�96�_�L�i��\>��2��N��!������%DqX��;`l\�{AG��2w������w��c��j��{�.�,*����?�g �?A\�h��{EH��=��\!�C�.w=F�UV���)�&�Eb�b%��$���h���!�_g��q�E���x��J'�+7h����I!"U
�/qz�cC�����`��K5h�>�q~�o\S.�a��(*�!+�����:A��,43	���6���7\ �R��J�Wp$1S�y"Dm	""��'U��0�M����#�p0(n.}�>��)�������R���0��?U�(+�bk�D������5�Hv����Ker�@�����s���k�L
�f��7��\;O���*�6$�4K%N_�=���E�_9[�s�j�`]�Nl����;I&
c�������U,���8�)m��e3��(�fyu�V�����T��(#B���n��@������1�B�+�a�<j���R�^*	I���R�jg��7M����R0�kP��1�<f[�,���^_�R�������BNY�Be�2I�/Ie�����p�2@���\���X$}vE3W` �M15\=J��L�*1����ki��E �����,����5������~�����?����a0����4E���0z��#"~����2 �n7`��s)���=_
�1#��/���<�8X}��(
������T3aF8s�4c,�)�Q�A����5�c�u�B#�L�
(��fr]f:M�^�,�[�n:2|���y��f&��;;4�L�`�����"�"����2.in�Q��$*6��e��W�n!� ��hK�����U��*����9�m�5dEM�a���hM�*K���_��
E�FN�}��tm�&N�W6ac	����T6_v��4�i���y����+�f�-,��^S�am<�9�	�U!��c���)$��J6+0�]������o�����;�%��euE��VFeD~��Z�UC(��WB�_�������h�jU��@@��Z�6�sd���,� ���`a����xb�[t��F"_��^��Zc��h���m5�LJ�����7g����`<���Qrh����rF����1)�ABL�������R9���FB�?Zh*����0dd78"�	���e���y`�C`����tH
a
�jx�CK2\1+.n2�II�r�@=FBZ'�57���R0afw!����zK2LF����23�j��"�R���K��:���4��J��X#�l\ELb���J�����	pQ4�8}Y�G���s�Q��Y[�Tup)4+dJ��rl���LNv��)��]�F=|A�B{�������S��	��a���R�C�������\���=���bTi=4��V
�	y�^H8�z����d��wyq<.4c�`���2e�k�P��V��ib�H����g�Dc���k��{q����D`F2Y�t��wF:���$�z�,Y���$P��o19� �>�5(B�1%�1�`j�Q��h)��h!3=�h���/��`��HD�����Rl;��K.�%�qa��A�7�X8V-��I*��lM�SZ�����\3;#*����\�p)���h��*X���ZB�n(�~�m���8+	|�:��L��a�x��\�yP�K=5t^�vyj�BI���)�l�U���D��2N�"�`���)��)�a,�$z,���y��+��y�)b�5��y�h��<�`<��<�`\a[�<8�`A�dL��j�Q�<q��h���<�`XB�0ybP��bX���e'1fy�>���C�S�5�c�,�m0���/�"9	�����`!I��0e�`���:�Pp�{�p�����u����
��y���586n���v8��y����l\�<�!�)�<1e�z%yb�=���:��p�x���=�#8�l3-9y��F����y��4�2�o�8b�i��2����
�'�|�<��
�+���0<�P3����4����Bl�b���p&������y�57�Sx�4q���dy�HZ���D���x,���C���a�e6���-A���%7����&&�1�8n�6]8Q�x�
��T�\2!v�A�h��fTC7$'A�I��B�J4�?��"�f��K=�K�b�s��=���^�����^���A�G�/�iQ�/�K�Pi�Q��\S:��5DETY?L�K7>�^K���=U�T����Yy�:B���e"+���F��?�(�P�&��rG�a��T���41��_�"X�V����@gM ,�t��#�S
���d#��;�6!���dR*���G\�a�3*�9�K��Y�%6�D�Ub�ac+�-�C,���F�:�C,���s���YcT����	���#�7FC��p�!������k&���Ho0��l��
*�7��Mc'�}���[
5Df�
R��'zX����B�>=�d�o���C�`�g	!y^�c����F����}C�~�����zm�C\o6����I$���@����O���*��L�C�5<���������i����'���2��\�	
S��CF%VK����o�`����W�2o`k�}�����OY~��\���N�G�!�bX��������?Go���������k�a�#{�A��vI��H���P`��K�!-Y�G,w�Lw�oU��n����/l�kd9�)
���9k���0X��"�����OwY���`8lI����%q��0�8F������zZ���?n��R��z$����m������JqR�0�q�hwn����
L����\d#�q�s�|G�J�l���)���������q��+�d�T�$gB��H���E!�+�q�����������"��'3��	C|"#�������=b�k����*TO.L��Su
��)b\�p���(�4�^�y4eU�=,���|�\�H��[��!y^�i�U
��E�|25������:�2�XR�i��h8,�.�\�������P���sE@8\a+����aZ'�����`&\����W��2^B)��i	
Lf��MNW���J!A�t8d#{��^@W����Un&�Y�h�M�th�U��j\���9&*\E� �b
��C�Ts�������u��o�[�f��AT.��vg� *n���]AT08��\]AT�6�qQ�����+�:�rk��W�+�
�u���(*���F�h����'x� *��@�`0�.�Q�_i{b��k���"e�e�U�6!T�>zSR&��X������!TZc���p�2RB7x�P�~�v��A��
�����k"����*T�T)�FPAl&���D�@Vw�5���M4MQ>��ut�<���1��8��jc"�N
7HZ�t�u��u1�@�z���3������Qj�b�`j
�
#����&Tq�vPE$Y�
�.L�Mu�h�T$�E�M����&�
�~cmM����.�D��&��Pp��2T0x!�,@���ve%�
�9ap
��!,�sp	�.�!z���`�I�`Xa��	��AI&����T)��67�p�(�Bv��F7��;�(��@�n0ce���P�x�R��*��HR�`�%B��j�����*�b@�6��p,�	�����*iU�|��P�*����^���&V�����[�=eU�Br���S0D�F������y���
��L�(�S0h�`������i��a4x
��R��)�<�����i6N��O�h	�B^����'��� 
c�X����]�/�1x%�CJ&|���u���)(��JrsfXr@�w�)�*8����*j��'�
���k�k{6�n�!���������u�k��c�9�����
�6�

������������S�vEP�RW��P`��A��������=����������4L3��8}�`�u��������TY���A����C�)L�&�#��fJ������DL����`���8�p`&q�����L� ��,4p�dP0Zh� /��h��wT��n���ApX���$����N�y�P���`<=j��
�"����P����\0
m��u��I�7��[�yD�R���k��c��.i�)�71�f� �����H �"����V�qISw��
Z���,�ILK����6h)�@RR�6(��0�3p����6����R
F�-��"��g���-m��b�����1m���z;�+oPL��aQ$q��0
6�d^87�����b��T���q�aK5�/Ov����Lp���]b>0k?�R�(h�WS)��m[?�v{���������&N�O4��S�?[�*����<�d�4�(�d�!�5��d���B���@��COS�B���E�-u��+f�%��������Of
���B��QSbL���M� 8,Q�	
f
�%<�O������00�5h����h� n|��0:��4�{Q�5H[�T5*v���s���7���u�0�5a4\�������P#^�o��*��������a)qR_�Y��%�)��A19���H��B��H���)�Y���e�L�`z����bC\���F�d@����1����R�~��3f�TR�`=VK2��X��f�"�����H[��N�2�LN:��2�p����-i���m���,W�R�#�;)���<�4e������������(�?lw�*�J$�yVk����H�"�?8����|���ypEU���-�<q���RQ�I�a(�����&��$�`F8.Lfci�����i�����"<���b�����@H��1'gP^U��9�b�z��ei��9��7g�>��#Q�c�������H�d��)����>��Ri�`���I��K�0���>�fA�yqt?	�Y�Q���2���q��I����Yp��9SW� R��{c�]����JY������!b���/��Cw#Su+�-�cjh\�aTh�7,�,!T����aK�P�����>A��A_M�+ 	<.ke��1������D�'_Px���dIGZP�I�XL�^x��5g�����U��P��:�8Jc��^�\
q��M�� &�����j�������$]�Vd+h�����!�=G�)\p�O�	#�rwB�#�!s�����C.�;�3)�����<<�)`6�9GC`$z����%���9"�gcC�>���Am�k���,�d�7���#�Rs�iX���[�c0XL��:G�����s�(`����D	�M�p�'����h��$�bH����D�J:�u�(�?�B��$\b�(������;�,
���j
�%m����,h����#��z��A"�E�+k�_p��������X{�����A"-q�����z��t�����1�zpHd�QOCbe^�\b�����7��){��X�Y�s�(;s���b�������E�sDY�� .��#�`s���sDe~H�:GD���X\4�#>a�9G���ig��r�Yh9G�=*���z��|�0oa[�]��B�nU�$2��M��������������=��%3v��!���\<�A����<��+!r�����`�������kd�L,�/;�TY����{�{Yb$F1c��X\����Wn8��|��|?+���h�}�v��jZa�z���Q��+�������7�d��d�K\Qu
�n-/A0FFD���A���u<�
�"���a'�`�M���J����S��IU<�����Ht�rP$��Z<��c<'�b���q������R���!�R��Ni(� ��fH���Q�x� ��}4�3��s"t9l3��4�(Lg�������K�UX�����p�����=�3v��%M�Az����P�R�.Dx���2�y�)�0���'��Q��m�����P��	�=
�n�a��0W)T�~.��V'��2�0���)�DrJ��,^V@��������{�&�"L�s�O�o����b��
�*��8.i&�!6-�]e�5��P&8u�=wl!�e8V?L��������kO�=�4�b
��!���Z�u��}M����lE!41o��)����hH��1l/������hf��uP���4<�)����$
AQ���S�&}#j��I�_����� RF������g��1�P�A�E��k�Va�XN��>��<|�������Ct#F�^*.\����:#�t��2����$���+��Z��I��*��,�Pv�`������
p����22D��l��7[�I��b���8��0���i�_C���U�����!��6';��O8��o�7�t�#��2<��4+�"/�X��g��R�����T��=[O�?$��� ���N�O
�4%����pm�s(�����hopDp�d��X����/c�H�	;3L��-�M��A������ZKs�������>X���}��G�0P6��t���q6Dr�S��|`�����S��gap
s�p0����&1�1���N����1�=�B�SfNnx����O�4�(�-����6D��E9�2����i��.�q�+�����D[�Rj)bZT�A�Z�@�\6��R�����L!��1_��tz��g2<t7?sQb�����p�e����0�y
�������|!:�V)�5���~�������/$��V�z���h>�������C$�Q�������n��B���i�/�h�,r��������xN�#��*����}A�p�$g�����	�cF-"&�<\���9(N�u����w5[E�M�^�~��lh�(Qh��_���:R�@	�\9��U�������3y^�a��b|�>�:����TM@�	�����G��m5�����qy�����y
�C�������j�������S(�N`���u����[!W5�}�O\&�.��O���Y������(@�
�����~���m��A�<Q�N�[./��Td��9�aJA��������k�!�Z��]��LMsX(};<8T����:j�o�#����j�Ad���sx����g!�\F�K1Z���Q4��E���x�Q.�
��)-&�f�&7��fV#�(_�c�6mj�B��x�2m#M�O~�SZ��~�L=	c�8aN��/k&����1ql;w��#������H����g~CKa��&9����&�}B��������n��c��W�.��F}���A4�%vH@���$f�a���Q��+8�LL���b;�X@�A�.��U��%����m����g���S�8������lHl�h�a�$�q5@��4��z����hsp�����8��YK��e�Ly..2��(>�0>�>��R�hR��&�����#�]��Bd�(�Fs-�#�I��r���������~0	�J�bH!����O��:Z�e��E�a�����K&�����/��,w)�a[K!k�����Z�*;�s���!jl�����(��1bp.n�l";��C=%�	b���������a�����N����%�QnTQr�Uv<��0��cxW1�}p�0��-��Ei5���������:�_,�����rS����H��IW�0L����Nq�r���>/��W�T����X�m������
#k�A!��1N����bz��2�B�<f���B1�5�*�,��\8�����qX����o��+>��w�-����2c�z�-trX�q��V��X/�����}�����A��:�L���9���_d��&|����[�8�r)�#;��F���G���N��7�c�El��*Dz���<2�����=S����0���%M�s�C������*�K��B8��]���op��Y5�`TFf����g�Y�b0���T��
=w�qBl���[�A	��U���D#x ���>����7���7�G���������z��Zo48���"�#�p%���6���pv
�WK�q�L����'R�������b����,<�#j��-�Rl���f�30�zXY8q��`�
��[�P�r�X�s����,���a�����u����2��������+&h���Q/���%6�%x���1��7+KM"Y���D]b+\(6��k����V��h,N\����M���q�V8�l���RG�h����#`4�Qw)��u��q�����@&��'8K���C���_^��}���*�g(�'\x�
���������a���d+����k���t�2e�$5i��,����"i��o��Qq�������c:�8�V%�}����X�X��AZ�����w��dWtEbF#�`�F��[���g	8�A{)���~���M��,�=�k;����!��dJid�i�C
m)��a���;.�E���.����uRS��������C=GZ1�����@�T��G��������
}0f��)<z����
�����`�d1UI�T��y������~/[+���q�^[�;gZ0}�o�F[�y���Q��~��&�g�c�7�L�x�3���c6O���|4F5�������U��\��b=�]/�#H���a|�x��R�	{��_�p�j������������������>��(�\�q�Q�_�F�������>��i-I��H�P�<E���\?���3T,0����`�����J��J�����t��m��.9��%)C�:��A�0B!�<F���b�q`Aq�8�����,&����d��3�a�2*��&�]8�-7���sT:�0�q��xv�H�����E�t
F6��`�����Gc�z���b������9�'�;DL�0W����N�h	C��U,&/�~&Y=o�`So�8M�|,�������!���P���
C����O��Od)V(-(/��d'�����-z-�>"�e(����n��\v��#��}����a��~�������tkH�O0K� #��q����l���c�c�r��`��m�Q�pb�^m��!�0�*������++�-K�eg�$2���a3LC�C`������+���Y	�n��(F�������~S��mAdv�n
�b��n��)���u����
��p(I����g�������dd�/s��2���L1�G�'���z���ebX�}�8�
[O�3��bV�C��Qy����*��TD`�*1t�]���H����V"F��=����h�`���A�.�;Y��#��#�{t(���K��H�& �}�R
��P2��0�	�4���jIk�0oJ���
�5h����R]8$<J�bE�z���$�IA�����ud��KS,�d@�^�2������=�>XZ���A1�U�aVZ��1���!n��:��ymw��8r!2|=��R�-��Yhiia(��������i	^Q���e�40�R�2���`�#"������ �S�!��(t����������G9���Z+F-�b�/�������<��������Hf+���*d�T�1��W"�G"����?K��Ws��0��9}�q.1���Y���cp�a!"X�k����^*D��8w�4������$2���b�'YJ�#��0J4k�/�	�����N?��c� sa:��?B9�!���Q�A����S��a�uJ����bqJ3���5�QY	b0P3�K������LwCDrJR���V,&�(U/ejq�/�[$�Wv&vO��0:GEA���I��Z}��}/���~@-��������w[*��X�����g��.2|��X��.\��*��E���K-0<���X�L�w�X�c�	�)��B��;!�`���:r�&4����_���=._�1��9b��CC�Mw��X|��/n)N��N��p���0��-���R�=O���?;(���Se�4Cb�CC�S�)�Y���R4���d1�����sc�bP�������opW�N*�M��bo)�����Y�]�c)1����ikJ9{I��%>������PkK~���b	e�#>|��g������f��p�8�:k9�1lt�K��Fs�,���n���U���E���h�p�f�Rk��^���6N
nY����u]��ArYhM��t18N�i���=�)iL�~������9e�Q���+�(���|���8jm�>=j���!��l���Usfi)�A�%�Ip��q����+��35����1Tc������&pp<F�*��n&�-q�C9&�b���t�'��"Y%����u���O1}� �&�gy�[��.c1b1�-�w�p�t�3)H�����&C�I���]*�=�ipJ=1	�1���������" 4��b�8�e0�)l!��D���,���E���&�AYO4���`��VC3hU8�IE�ooV63e��R�E�|K<c�q�>��7I��5RaMiiQ��%8N��|R�
0����H����S�7�� &HI��W�D�0A;��|C�������>�jy�
j�$�h=����R�~�qb�E�Ag��I���a��y��+�]g)"�����
�;��2;��5'���&4}:�TH�{�7���W��u�����+I��
�����Y�$7'���������r�E�(;3��q�a%�=�]�v3%�'E,S�2�Z�8���CW9�/�&�VL��bb�K%�O]�yf����AD���2�f1���6�{5��XgP���L��!*��G=l��IJz��_�?NZ'K5y(!��-�X}�W���CT����r\1M�=sN�F�GW����+8x ��LS��F*#c<m�Z^0�mqe:����U�cS�>�$Y��b��e2�����Ab����bH_<��=��&G�L��#:*;.��c*o�W�X,�J ��I��@���K=.eL<QN��Z(���c������C���Rdp������\5���v&BjwR�!�01����!�&�E�[�P�cx���@[&�o2u?o�R����te�20�l���
�g=p]�S0����q��@M�r����Gj�qC@�QD��L2�_����"�EE�����
�NCb�c=��Y$��1J�a+��l��<&���B��%�
1��QKsV�|�@�;#Zn~{��h<W/"�������V���Z@��*�����#k=&�M�-�)����x�)�b�`���tE�J� �wS
IO��-	pN::�vhfe��Y��'Ht@�e�8Q��x�%��P� P���p�RfV[�� AB0��,0�pZ&�#���\�-�	������������Lj����E�L"��[.�6���e!Ns�C`y��	�����(nK�--���QC�r7�6��'%��7<CA
X����#�e
���S������C#	8uI������ZM�	f[S8���2��V�x|�����,����c,�V��Q	,k,|��L����*
�q(B?����`����Ea2��A� \D����
��#���T��A6��)������p��Fz��c��z�������~��-"�F1�&��k��^G����P�Ft����vQ�S�J��Y/�*�;�����<�5(0d�9(b�����y7.Z?�oK�����<m���K�>�~~u2�1��Gv�4f�
����V'_����IS��bL�A���w#���Q�����������U��h&kVLb����T�XChn�v��:]t����%R��F���'�"r���87`�o/�,�`�=��
_`4��[
���F�Q�E���pxW�5Th m|J-�6G#�k�C���n���"N�0��60hR��Z�(%n���;.	6C�L+�lI�T�^[�m/W;�i5L��.�e�f)�Y1���0��[��'�z���&���dBH�o�u���.n�]"S_(���\��g�����k
2M-5����
���r2��U]����r��v�'oj��c.�����Q���M���}��b����S��������#:??X|�@A�������+k(0=+0h~�k���R�0�T�]88��|�r
�CXj8p��=D���<L�$�pl"�M��/�r�A��V�q�("���*�,K���bIx�Vs������J���K�D�����O�<N�B@�
���]]EC���[������\���Ac�p-��4	�r�C1Sa��LR����'�)b#s��I
�d��%A,�-c�E����0����#�'������SD���-����eR�����F�"`<�(75�y�]�1B}`p�(���a61H���,�7���"�la�D�X�r����%�'��H��1�bh"`�$��>J��EK1R[h�eA��U#9-�b2��\v�2�?�Q�b�y�D�Xq
�C}�1bG��y,`|���N0�Qp8�(t�����uzA;}B��/f!C�B���"aWE"?,N�LS��!2ULS�R��	����9��29G�@��,L6��������_����li������.���
��9�F[�(e�����1(���n��mp��� ��I'L��%��pri*�i~�0���8�����v�� &����
��T��.Mq�Nh�9�0�O�\���+?c��K�V�u���iL�{�\2d���}���P����G&�� o�ff$0�,�PF��L1�C&P2��KqH��0v�gZb,q��/���gF��)�N����b��0��M,��eu��:aYH�*�x���y����;^���\3D������������>�3l����j���Rk�:���q�@L.�a-U�g\�wq>,�>�4��2c�b���#6����8��y�����b��� 	 --���l[��C7H�p��%0�E����N�C������$(�&������*�(+U}����`*Y��F"G�"�I/ba���"����xV7;$(��$�
��B��P�Z�8x��-3'vV�J� Nz���,H�N���id���O\i@�����3�K!"���2��:N�l���vX��h�%��^
��awLt��1�x?�*�\t��2Y��p�Of�\���m2XpD������� ����Q�Iz�&���q�����/g��%����!9[d]�=E1��F@���R1����_�L�5i�zx`(D]a�'m��e=�����1|&z8{b�2NRI�#l��l��b��T���ss8��c����Af7�=F�-#W��F�h���f0<sHXTP�G�+b{�������L	F���m`�G7��P���yv���=��E���J���9w���=c�5
q��Y�����a��9|��)���2�~.�9��$������`OW���B#?�?��LJK�]	��)��4ZfEf���<�/�	�!��h�������R8^0����@5.���"��#'i	�
�����a��)���"^ ����=@�3b@�ov����B0<��@@���%�`�x�M�4��xr�����G�,���1��W`��M�`��tQyY�X�d*��t�����6GV���\1}��j�4�/s/�?����x��rb ��m���E�,��p��b,�r�A��l���vX
���!09_�`�aXd�)D����T�,@����6!�D��*�L��E��AQ�uj�lp�g�b�j��#�N�d���jC[����~���&��`jz������
�C�\J�����8\W���6`P����0	a:���U���aB�!�BC1��Z4��M�=�\��m�b�7� 	X�����S���|�rh��)j�,��1���
������yIi��HU��(�	�g��v�A��%\���cT4��B���p�u����7����|F�7�������&����������dW /�}��Cn���pO�i9��]X�-t���)Au"&�*	fBm�i5r8�U�R	0k�C�-�pv��b�����S'f��EL���e�����f�.1o^�F�����P�p(PF��,S.N�(\� >���q��	yC3]���u'��E5��D��s���f�;��P�t��
��k`v��2��!$�blrmw�i-*��?i����+JK��7�e�:���p�7��e��|LI��/��#�b�����eR_�g�S���%��r2)$�"K��L����%��Z���2�V�i/J"�����XF�1������%����&���v/���`5F�!����Q1�d1@J~�ha�R�V��*�vE^���T���
T��I9tR�p�7���#��|��R���!��L���!< �S)�����L��%�1�h�S!�F	�N�k�����q]��dA�
��`�����U��hP�7��Rl�	�B�*��S�MCZ���jb�y�����u�C���k�F� I��EK1!��_qO������f�:Tq����t��1�Ke�hwvf�,8�o��dXpa��9��b�6.����m�MW��3�4�N0��JK3���U��M�����%	[�g��C,,mc���W<Z�;x��&���y�����o�!\yN���=>R(���Qu�#)>��;�m��(1\�b�k�/:�m[���l'
�]��`�f�pY�_��0����i����V�ckL�5HK����@��������9����w��c~Pg�$v�4�����`Vk����1����'!��y���L����9���W�Q�p���^eP����Y� �'^��x$]2����uz<,�2����Ju�;cG!�)>Q"���T����_�j��p����D
E|��U���%&��B�lhY
���� .�p����
s��!/d0�xMn���M�q���>�&�`$���y�q�����A
pw+iCW)�C��I �r�e0�)�A������mF�7�QJv:F
c*T��[n�L'A���kC�G��>�&�����!��]����Y8�%b:s�[kD�������`�������O����
���]e`q��+��m�1\N��zO��]�Xd�U���F��gC�F�6���n���"�P��_�VU�U��I�����qX%�C��a�k�(0Jm/��2F��G���������$,I�������O*W��`2��a�4�m>a.���O�qX��{�d��fLd��;�;+�"f��UV0��Z��p����d���| �:�2/����8�������dz��W����J,���������FT�
a��u%�;[���bb����*��Y/�����'�Y�ua�OP�H�vL�xU�	���\������f�a�U�1����Up��cn��*i���/�<p�!���4�Yt��a�4y)3�*�Y��Mn���z�5��n�A��J��������_�un$j��_�%F���_��f�m��<�����
���q����W�t1j��_�R��_������
���n��#�3���Qy�N���r���rXA�Ug�Z�Px�n@�qXa�3�9���a�����
���?i�>�����d�:J��EQ�v8�*���SFu�Q.%q�W]��c�����y���W)��b�U�26��c�[��X����C5��x�Cx?�"�M
��tDA\I�[���M(�fH��_��;�Onv�����������~X���w9��<]d�;�R�?C��Vf�Nl�m"m�T��J���5�m����`�1��S�z=�H�p�:���r���1����C�*�IG���	.5dC�rQ�b�C(�-���k|�������2���0����f��`AE���!O9��s��m�3����;�9l�	�
��M9���C%�O�/>�
y\8������n������gy���,''Y��P���}��w����������oD#���U�L��(�Pq2���������H�����_������tZ���u�� ���}�#H��X����%�;�bp��cO�3����#�5n-�����M��"�����}�0-���p���M=nx�R�k��M�F�dz�2�@����`/��z��?��H�9U����u�����z%��1�V��48���� ���<Ks���#
e�2��-�����,����Z��+&�i�\D��&B7\&�q�hI��pmmM"{����:���HS�=���\s�H�"����f��Y��z��/\��p�i��a�L)�+D��#`��ed��+�-+|�V�S��2�������a���3�����-�*���vtU�|���}��C�?`�\H	%���eL�2H!e�p����[��-�B��������P��1��������s��D�	D^~�)�Of�r0N/,�qK���!����-_��$�8���-��FGj��p�)��
�Lh��GA!���*Ap&���e6��c�Nx�u��r����2��]I"����Z����������=��v���jp�������@�n/2.NO^�U��<���C������m�8x�9U
����B����$���n��R��s���9S��/+�s��2�fF����`fvw������n�Q?l>��^��J�~��p��fhM������s��+!���@�y`E=�6�����:�!�@TD���0�``T:�7����V�8��r�'�%������������i�W���e���4�v��f�IxZ���e/���f-]��/
�1��RM��
�R$A���4�J4���"�C-*-��2�1�d��p���Vv��}SO	F��)��YM`�H�S;R�EU���YY�XUY�]mILZ�'��U���9Z1����*0������x������6(&Q����jKD��x n!qX�]�,`0��p�k������R$8(T*�s�,�����I�%qX�c5p�i&b8��p���.s�+Nq`u�u2n�����jT����5��O��wH���N'M��d{��H�����+��'�oQ}#��+�k%�G��-���U9>�3J���8�)���7�f04�``�/�1:��r��<�/�5[���Ei�VR�2%i�&���6��/O-�{j56G��)-E�UJ�C������D��RV���=���D���������q�U��Y�*g��L��YkN��Nw��P��������^�rBd�7w1e��G�f�7cq*xIq���DK10�����j�	�<���������18�a;�]�2P� �B(un�p����vT��$�q����n���Z�{�����S�a)D��E)x��c��F%����T1v��q��"����1�3���J�`094���c��;���&4��f]y��S7D&�,�}�^�������u�T�hB�p"�q�?��������z�W�@�@����#�n[J��
b[�V��3��
;N�@��4N[k�`v�\P��������i�� ��E�����������9�u�!���
�e�@Q���������1p���p�]�_G�C&��B�
f�Q��FpdQL�P���0���e�,v������M}�d�����uz�XS7&��,�7�1�j�`j�a�;�
�1d��[A�<Z�2��#������A|�z���z0�w�Zz\� Pw/���aku�E����1�����f��e���=��	��X�Y$����`��8��m ���)
Q�J�z"�
����qS-�F6�����I�N��-����F�d��d��q�c�w�%EQ�h>��c(�D�������p�L�����=`�S�<_B[�o��a'��.�����1�5xF�6�!��Dx�61��T.����F|�m��pq��;��7�,��N
���Uq�W��~�a��Z���)��j1x�Uv�wF��������Q���1�*I��������K��je�����("���
�����6�����7(���X�������8��J�T[=�������:�f���t� x�<��hx[�?LkB���8��P���D��������O0`�<���U�Z�!�o����"���9�|��C�|<����)n3�>�nA�*��7��]!�&�mg����S��f���#������g� *���!�T���}�����u��2R7���S�a�Vn���T�A)����
#z���2_�������]1Rm������a0
����g�&�A�5C'�����Fw����W�����?�V:�.!���0����Y�x;0qw�>���O�]�j���oG)�/���X=/�&����o3
4�� ��k�k���0���?�%Z���
�G�p����&"K�����
��@����<����������<���jmP};i5���,�����|�?B��<\�Z5_�L�x{�5���5�>�'�����T�~��B��5�a������Z���C���&��������}������O��~0�d�8�����9�8}Q��j������jmP�}}�������0���1�Bz�O��Z�i<���c^��3���������S}>����7�qs������3gX���=##$�;���^�U�m\G��oML��Bst�� 
e�@�p�vj�D���G%;������\��!��~���7|�����P�'3y��L1I���2����e0�W1jmp���=4���LA���p.3z!�@�Y�q\�C�0e����"V���2|d��	�4�3|��_�l��Q��:��e+8����%��{m���/(���/����
"9�Z��M8xO��i����&M�{<�OQ��[�)q;H��s���LX�����KB�P�� 
���];����t�
>T����J����������C���2�?Y�G�g�u��a	��:������e������m*]e��;��c=�>����6���C�`l��?�;������ �s�.��Z�{��zG���B_ca����F�!���>���)F5h����A��/���hb,x{^M���q���F.[U�?_����2���;�,�t/���U���CC�_�0h���9��A�H�8u����|�����ib����XP,���x��>�z�&~vB2i��
�])@�q��
8#Y��q7(����G_��y ����7O���������.L#q�_����!����[�6��������u,�(���_
��������mK#�����l8�lR/p��<C��J\���$a_6[��4#�v��1��H�����[�n ���=���R��vZ�Iw����1J�!�(���	�������m]�7����K�.M��yA�"��D������$A�8��p����`���QoC�����vf1��{��x2���0��c�#f�(;���>.a��,�U�9�w\��o�l�t.(�">�@���:��:c��j��)�S)�T���6_�G���R�
J���|�X�|�g�	�Et�r6`O��j&���� ����O��&b��m��z��X����
7IaaZ�'�����*v�X��d�(
�[����>�j�3��e�A��`��m��X<C^���A�-#��?�<�A+���e������XD#ka����w 2����D�!9�����J`�D�SE��W9���+��o��N`&���.��4
(��a2Kx�����q�������P{��� �)�-�D�����'q�&�z��Q�s�
x�F�����
t��
<�G�H `�����S���e�5�L.��8>a���5WW-� ������xj#>�dw����4�������g�f��gax���T�V������R�)���f<��rW��EN�~���Qj2���������n��g������8����1���n���Z�i�^��7$����.�������A�|Xa�md��0����\�qe�BXJ���4?�r�1t��������6����?���/���������D"��'�*�x�(i�p<����	����n��_��o�	���=!�`@zd/M����`������i��gO@|�
i���[��J�M�u���/{��3����,�����H������X%S���A��arn��h��{�>>�%�E�[���0�N%�f&�:��}Rg���"#���������7��(��<�V9�B�q��-�GH�����xL8���gt��z������`}|hc��/8i:�Z����msQ���
2��"�?���[�0��b�����H��$	x|���,���JB������_H�+���6�O(�Pif���N
��?����q��:�Ls���@�>z��� ���u��c��v��$������N0�h��Ps�,.�,f-����x��s�����Y<�>�h�}0���7���3�6�!�*��~���q����!�/�
�e�������Yli����8���)�������B0)/�9�2� z��0l�x�P��c+�G+&�X*��l�i�?E^T���	?t_F���6�D�q�>�RV�����!����,����Vt������Nb���������
,�_k(q?w.FbC;���e���h$2(�"O�E�����B1k�f�
�I�V��a::���j	X[����,����|n��'d��C������K����R��bw�N$����P (�g;:~$v��#DIC�y�����w�v��[C���.�q�e�����og,�D��������k��8'�{<�\�`V[F����,���$S�V6P:\/�c!~HGzI�L�2�Pq�+���S������C+�ch���Q�N�5$B�3�E,p��Z16�v^�A�����b"�+�3<��	E�C���x�U��%����G.�2"m`k+J$x�x�_gKx���
,���V:�����
 �d8�q����J�M*�T�N�� �U�N�#���������/`��b�����n���J2Py��+BU�d~����k��LN5����0��[���h,���^V�V���2���(/����l��� ������6a��4k>�����ut�9#�����2|���9�{��r���~{h����8Y�W/������	5:�^C
��!������ZN�m`	�H����7cl����0 ,>;x��G[]O��l��v%m\�J���
<����%��#��X2<�I��X�R�0BY�DX0d�8���G��s�Vb�X__��Dz\�_`�0�;�����O��3aE��SY5�q���t.�P��o�#z��_��
�I�d$xi%��^pp]@D+��q<��
e�m�J����*�n��#������������"��E<�����������!����J���`��W:�P�R�	�����]T4�'/�#D���������O���
,����J���	����8�h�s �r9����R*rjY�;q8���d7�i��CH��|h~�"����b�~��%���G�R�4Z.�����2���N�#m����  c}{&�b��^K��V�o��yQ��(��:	&+���C�k��S�zL�v���7�;����z51��5D��i��"#:2�8D���X�&�_;��P"�ReN�g\�,��p�q���|eU����s�	G;�4��M�8q4W5���@%���@_��
��8'�R2.��,�M��y�/Q/hO��=;��g�9
�T"����v�K����V�w�"z�HL�;f?oK���
�p��J���%`������M2�,�e�|�m���"��p��46���^��|1T�?�3K��k3�j7T2���&:#��YdV^���M"��l3P,�(�8���w4��`������*
(��XC�>O�,�����Y�����.w��X�/�<�Y����@��.�K+k"��"MC�K�_�P�gp�rP�m��z��DTdEe)�'2Z�t�)������
>'��?Z1=d��%��G��3J��d�a-����EL��-H�k���@os�>�s*�e�E��@a���JY�5Q���T�u�1Z�������`�m�_h%.���.��JB@�$���o J�)"����l����EC[�
�\\_ZY\�=%nZ��qc�,��xzi%�ic�2����.�q@S1����Ng�~}+�6�)�/�d���8���&��#o; ��A��^qA��;�|����~vR:�vI��`���`i�wv'��<;��9e������sEz��|U��]�������o��
u���]y6���2���eA�MQp��L�G������G��|?��t�Y"���(<�6��&f�co�q�-�a��	m���w&��GJ��e9����)��<3����r~J�Z�^i�����8���z��p�lk�)uR��z|,w��@g�_�I���T:�2)X\
�� ���|�^Wt���n�$�o�&���uU���n~��^���qS�=��l�V�~�T|2~�W9�;���C����1|c~���r� w���4�0��yTw"�T�O���&���5��^�a0��]�o��7����������X�����Y�v�+J�Og.���9,����V`�����w66�&�
��q%���]��U�N��S�D7�So��rP��J��7�����X>�9xD+�����W�����s���^}�U�����ZY<���a��L�t��������������29����%|KKI
�(���L��sYn�����~���8����:�pT��?P��xk��U��|��<<r�C��x���\6;���hc��JD:�	�D���Y�R.�����a����"�=��x�EAn`�P����J��$gfB���h���
��*�e�������)�����K�>�XN/�W�4�Ee+�j~��5�BFv�q�7O���� �U�W���s]l��B���/`��@Q4�,$��c`/d��h�f�*>)�x������zq�v�=�;yn������������0g�f�I�e�Vx�.|����Y�6��_�N;%}c�3�17������AtwJI\��T��O�G{+��6N���#���c����fb��>	D���bB��%iRA(E�Ww����207�VC�8l��G���lC	��4{0E)w4yH�%��)/����sUZ��,�?���X�Q��w���c�9���i�~�P�_��+QJz]�n(���'�)M�����@f�=�����P�JBhl����B�vrj�Nx�a_�����!���l�9f��~'���|i((��r�-?��b]����?��$�$�c+vG�ejn`��+�ZW�xt��V�;��:��Z��kX�K�����C�l� L��]�me~������D�����������]o;pL�~��gn�]����P���j\����\�t�����]�s�_������yU�w��~2�bs�|�7����������F����$�P<3B0P
|��kyW� w�<��M�c^���$�Q<>����s�bM�j���*��p�\��g�jp}}heqC����t��FKC@BfoD�v�Jv�OhF�3��bl����
t4�������!�K��m�W(f�<#�8�-d+IE<�s��m7�d\_Z�\��?�Z�r}@�(#���N+&�z����\���rv���m���e���h�j�
,���\�v~'���e�~� �_[���y1��(!�����4�Q{�\Z".�������RC��hK���q�J�t9o0Z��i��
�Q7��~���W���K+�����ZD$BFS�i�A�����*Z!.��-�3kD2f}l`��+����f���a��
�����w������k
�K��d�!;<�����/M�8��K�A4������p)�����d�;^+x�8�
�k��h(���e��������Y���jl2�e���vj����&V�A>G������]�.��'>0�����KV~���z�Q]��}��r�	���!�wZ�L�9�	0�+��z:���q1g����x5v�l/�/f������c��<�E2q�*Z�h�~Q��M����3s���S+�G�p���H�`i�k�����l@f�Cz������:��4]`����D�����w�V_s���-_*t�����`2�*e*~��M������v��������x"^������$BO^dZ��'�Ie����y
c�X��*	���	~�2(;����������}�p�GS�@�r?��YdW=���4��qP6�'�6*
�c;G&RM'��<�U��Q���J[�^Do����^[�;Z�IL���|��8�I�v�<�����}��YX[�g����?����|���.j2�(�/�x��	]N~�j8h,U�	?�[O�������g[y������p�������5�	�\N:9��%#B�)�����(�w���\������P<�tIC�0J��pi������D����h��'��xq��:�*�`�"�:������r=�~����^]O�������0�2!���cig�����6��
��n>��Zt�s Hb�`��1�:vm�����pW�i�:�.���C��R���sCv�V�:��0�t3K�w�Uj�f�!'��DRt(������-'77��\_3��}�}�L����J�B&_��:��d�&��!n)E<T����l����G~����4����^0Nu0��2�d��&_�c!@�[��1��-�X5�[�.!�h:��do��]C�b`i��=�:�NM��C�x�LI�R:�����n�d�oMt:���|��$��+T�����D�r��rcA�t�@�J���������T>3��[N���AN��:|
$'��3}���%6� 7�S���zl%.Mn�_��M�2�_�#_8�����[��������]����&��F�-�Q'�<7QH�"��e�!`#�w�gG�{�E<�^1���*��;2������3����
n�J��#:�X�����0���P#{K�q���;���lA��9rh�K+}~��6�B�gl����;Iws���$�\)~���'���P9	�X�-g:�O��KS)3� (�,(�as�`K��������;�9
2�
!T�����n�'@�R@����5g.8�!WZ������	�a��F���#�`n�����	�aF��p�X���c��@9�]��:t|����,��O�`�C��	<d�\���f(�X��A@#RA���a�e-�`�����%i�['
���0?������rC��p��\>��S��N�j;���z�Q�������-�?����5�jGk��Z�C��q13�N�gbd�������	�nS��.%.��7~�����R��r��Kg������L�
�;���������Wl�
��~3�������	��N`�u�����>��z�Hh�s����g'�Q��9F��}}5\w�r#���X�i���@���X"6�Y��;Z1nZ	�@4��D4h9D|����R0��C|P����x������S���q�%csA����!�F�J@��>���T28m�����P�sE��TK�7d�lx0�M�C�<��q�Q:��G:��7F��|]��������;\"^����<,&
��Pahy3@p�4�9�����V�?��I���b�A1=����&��b�U.	*:�&G���(A�4������M�U-�f ������&��t�8W��8K(P�O4����������~BgWp@U�;��g���s��Om������!Ig`m%`��<jH����S��0!"�k��D��2����R+|��a��r�O/�8���a ����_�^���'�_Y�e@2n�8V�����T�@�7k�_�8�:���K��0j*��3iN=	Uu	#D��^7vOb�z����
<�	vG�m��8[ib��|@�A�����N��}q��z�E�����\�@���Y_�3�*��T6Pf�;�/������@�R��b��Q�+
�sH���D�����
\����y�����r�
D�}�{��d����b����v����z^	tO9f���~�r���GR.��^h.������r?.f���
�lM�x�T}�$,�:��!��j����J���B�n��O��L�`�Zh������6a�E)����/��~����',������A�^�bU�j���������e�V��\�\��_Z�	(�5I����
�+������o���S�y�k+v��	�/�D�t{Nj1B�Q�U��zUFE<����s�U���q}}h�sA����i��P�B�2���f'�����%|��8�-g�(������1NE�D\�s����^�"g��)4sz<��\�����k�����O��g��q�D�K��b����a3����5x�Z�(^A��A�@����vj���U���l�o�*!��/�G,)��O���H�>?�����d�SM�s �f�]��d������q,�@��	����	P:��$	p�s�3�cU�����������\���`��g��rHd[�L�48^
�����f�l�8n4�DwE*\`�c����t�����v�V�2�uT$�r���#��$���7��V�oR������T��P������d[$�e}R�V��S�m'��K!�:�(�cGa���l�rlw�\�=�Kvw�J\��tE����S�3�"�gg�# -�R�I�Vz�oO�0kh:^�m�+��aE���rqBW�)p2o?T�,��\�P��T��
:Ni�+_����������3�LDm7��%�V��������U.���� C{k��me�W;h+��Y�yp��B�E��sI�RV�iTq��7�xm��J���Z��t�{���'/d�#���!#N�kO���+�:7�q�[Aa����g�<D,�H���b��~����h�*��_`ga��;�/@��=��91����������tVp��������+���^��J~#���c+vG+���/0�P���y&���(�M\U]�$~�~����U����$��a�Bu7� �r��X��������*T�*�6!�rv^�s�I�v�{+Y��������!c��mr��{�U�,{K,��,��x
J�;���B&Y�v	�]�!,����f|Q���w�v#P�����':w��u������p�uC6�z/Ex
�*���8��wE$�i������n��SC��+`���K���u���8�\8��F�����������!����c�]P
���(�l�hV� :(?T\3�����~����o����+��z�����K
yx�0�����&" ��n������!yO��q�6A_4���
��B��.t���}�V����@y��r���%����������6#q%���&����u.��H\��mI��Y|���)S��7��`��WZ��%�9�/�t|�p�����7��-�P�3��|W�!g� �������<�/]F�!4�}���b"P�B�b� ��f3�L����$����R`w���A�������-�9�`-&"��A���;})������*���55�3����[/[+>�C<��	�/��0����)��������������7l`��W�W��x
���K!CD����|fl���~x�%d��v�78�^��	��~G/����!Ev��tHq�����*�m��%���B+Q� ���v�t���N��|PM���������>�������,d����4������)�nzV��2���U����ED��x�?��j�?|�e������tI�uq�BRf�%\OER�+��,�G��Y������^���^�����"�Qqi��G�#k�%���SQ�)��W6�o G��n.:����?-�$ ���Sn;_�����y<���D1R�,b�:~=�A<�o�B_5��.;��"�<�`�+h�	.)~���NES�}�EL�"�Y���Gt�����SC��������w����cC�#��8;3
#Y"�6.�i��:P���,)�W��q���(#h+��Yd|�J�8������� 8CK����;�$@�br����R�WNf�KK���0�.vbl0�i(*����!eD��n021/p��B�p.����,n��f]3�en����J*��@��x���v�U1�i�������r��gRL�{
h+|���K+>G!�1���s���sqa���
�{z����Ow���]"�����o�4��8���Ro�PT����$W��q�%d��7�j�X;lC�Ac�����O+7���H�-e*��
f}x/�����w;���X:pZ�E���J�V&,�Z���_+���w�
t����f����S�F[m`��k�,<�R��_��J�;X�W�������)&[�3������O<�����At��
/��y5��sA�Q�*��j�h���vt�����C��?�Q�������4�K+ubC�{�9q��v�?��O�"�����V�t2������W��hQ�����V�y2��(gbH�`���:����3%p�.��Z�;,-������'�{YK�s�����n�U�]�2*K�Z��s2�����C�BbI��k;&�2�}�k=UW����%C>���lHtf�Cx��$/@�wp��	���Q4@D��8�>��US8�\��3���{�P��_6x��|�"9�KXs����2^�)n���Uo�WA�"�s��t��.�\U�w�����G�X}�/M��(�JD*aK�����^,!O�`�l���z�G���6"������{Y�8��K�{((�O�Vz����M�pI;N
�Y��r9�VB2������r��`1fM�������<r����
F3Ei&"zn����y���;!�2c��VP�u]Q6\�U�A:>�W ���s�=�oM�����d����g�@���F�8���0O9�*�q�w����'���N��q�u��j�Op	�;����b��������T
����C��	1wK%f����C�g��'���[��aD�V��g��

�g�oG1��h��PV�����������@�q��R�����r��8D	;��0h��;���7Y__(����z�wh,
qV+<~Q�uV��j��z�>�}��g&�)p/
�K�N��4^��D��;1	�Y�o����U����3:� D������rUdR�To�����|{u�(���(���:�������
l]�L�q��/�s�\�8������_X�9�$�O&^f�3��m�:�WM�P�;�������cN��
���H!{xh�v�!\_�9b_�`�,��*���w��
/��D���UU�-�CBy���h�T
s���If>�T�A�����,V�^��p��������&��H�����i}p��,�����x�o��l�*ss���xV^lkE���[K���Z��2�ie>�M?��R\�S����Mj��x~>#"�Z�T�����s�b�8VQegPL��	i.�2mW"��eW��?`���Avp�]�A�;�8F�aE�J�������x�%�Z�?Lrv�����'��F��M������	O>9���}�������{�-qn�^�$hX?����<Q��+H���O=���
��Z,��C�mCSP+�d�
����@]�e��w��l�;gjJ������i���)t��}p'%	���tlb���/:��1J����'����Z�L�,13�j�v(��'>T_+$	}�6�rY��.��+���$Y�.�ib���@�>��k����-^������Y��-�-x�&"��R��>M
�1�&"�<G������U<0=��cU������}A`����v�M������V
����?p�M���24�>����)r?/jq�.�*����JT����q��]v���_%M%Z�iQ�Ax�����I�"D�%�=��j����R�
����/%Ae���;<H��W��IQ��F��}��t����(*��'LW�\M%T�b�O�j�X><����s���Y%z$��*����2�����X]�Q��s��t,i��F����L������v�$�ir:	������H!��t�q"2R��Ob�FMXs����Xs����7-���b�hs2�HW�h��3�������	kKss.�#x�M\3��������A�N�9�:^�=��B��I�L�I\��6E@�`uQ$��Z��$,pf%$q
d��H'5J����1a�����/��`��O���4u�`�Vv�_S�[��C��#�h�K��A�9m��$Ln:�Tb���Xd%��r*P�����7���\g��NQ �j�Y�t�/V��3[q�c��u��zQ�?m�;v+���S}a���\�IXa��`��������-���^Z����;�e3a�c��+.�������T~l��`AQ�
�j�T�d4���i�;�JCg����b�"��R0 :[�������@�]�jV�U:��l�����gU&P.�|"��.8m��T�&�Hn�)�e���)�h�c��iK�d_=�7�id��������=W�����m��#U2��Y���D^�'y��eX3M�{c�ZP���W�����N�����Z�jQ������vK5���+�x�!�������x�.�Y�Y}�>�3KV
 ������^Wd{6[/�������b�
V	����(�F���=U�4m�R���tG���+�p�����x�14/��&�%�����f3��y�!/��8>�����~�
�a��H,(�_���!{������[>�4NW�AG���A�(x�-t��k�/
�����`p��|���M��&��qm��R�I���^Lb��xN9%3��e�!U�3/�L{�4���rM��j�-	�
��M�����]�]��5k�����sb��Y��A�;K�Wug��y]4�C�P@T�AnA���"�.��t��@O��;i���(��:T����������0�A�(=W��IOM�kS�9d����������dil2����Y�3s�3k����0����/&=}Dbm1�Ue>'�u�$�����}�R��>�����n�����{6rqhOz�p�}�`D|w�&�y�v�`��>��V�������\2Jy5�z�����H��A���q��-LA`5��!�K��0^�����x|�;�.v7���$(a�������e����� �� �P:;�I*�7���p�|�W�q���r(h����<�)�qLD�
�����lt�i�UzQ?��r����a*�����W��KSjg��|z�7��E��|�	��� 'T�	��(2��������fj�2����_�NNV�Q}pp�5�I�.���>�B��>����V�4�[}����j�{��?|/�*�K^���ypi�N�b��
�-r��;[�p�U"{�V���������oq�.|1	+U��������S�}��$�����{'�:��l�.X-��j�c�Va�nMX�]����_�hW7!�dMO'�*����X�Es�����)��W�@}�0Q���tr�"�����/��������Lo�M�9���!�^�Z���������+�/��-�M��������3��p�J����6��W����hq�H�bB���%���%4���(�k5�DU�;��,�M
�h&�E���e����AeY!r�7���60-�UK� ���`-;�� FKImYU|���,�+bl:t�;
���;i�����o8�]���0��8��vl�
�E��D$iqA{a%cetH�s������"�:��
I����A�A�9]��J�%�P���'We����f�����I��X���Qz���&s`T?a� �eB������r���|x�x0H?-	+wb�E�cE�������
��*��P�ab��|�����������*�.�gB^\	i��dk��~�*���\��a���bk������7�4Uu�l�����(�M�U�Yvh�i	��4��&��Z���X�t��1��� F��+��R��ONU�pmp�2�J^d�xD8��L/���%��29e�����W����4��U&�\0�<���f,���=�]Fy�0$:��������q�[U�HB�n��,��&q}mEQ.����k��x��nS��s����+���EU���5���(+���v��7�<i��G��OX���z�2���MMvP�Xe�R�0�yD�����w��yAR5B>e��?�k���E�x���?�B+�)c�������>��mH�b��s9"b�z�����9�X:�����XMI�����8V��O�/D�v��X���"K�Q&g��SB5#6�:{a�E)w�Er�@����������)�������vs,?�B{??����9��A��DUd�����cYWf��P��A���r��E�Xy1+��=�<8)0�QW_����.2 ��a�^RW�C���6��6��cU��F�X	;P@�M�>v�$&=5C��!��(�4U6�H���������#��#������X�����9��H������]�Tc���F�\���7�IS�1�9 �e�&���.��~�Wg���
h��t����`����;�KJ�;���Q�}e�����Vv_����L�c���g�r�K���/n�~L.Tb��h���F^L*k�<9u�5^�1vG}�w����F�\��>����#���������1��Z	�jt�M���T�X�p��%=����{�Nje���F0K%���~H�Y+��>�6a�{�����+�T��^���(���h����}/�,�F7w��a�����t����"��aWF)��W�W�`��1����@���VB��Xm�����������/�����',�+:LkLT�^���U
=��0)R+�yH�-���~���x��j���K6��������e�c	�����]���%��&�����F��������j���E��6{��T�O������P���������� +g���O�x���#K����?]�>�0�'��'n���-�{z��F��f��<J��!;K5�m�Z���0��S������~�y�I�|h�6X���e��'���3����K�����iq��������p�DIT��H����87�t������=�*����s�D]p��� Y�DG��
X#�
p���5���o����,X6xi�,��M��L"�#I`u�l8�]���"��J�0�n����n�T������(��39�k�`�H���T��1��l3���7�WG
>[��G_&�����D��/[�%�
�;�?}��>��s��$��P�qV���z���g�T��5��\������O��"���,i������HbJb���JM<�b�x8U��:���:��W���p��D'���������������s�!F 4~��:7c�U�"a:hu[�n>�zn\g�����
��c�.���X��[/�����w~��G\R��*�t���o����p]
�W���<���i�t�|��@_�h�u�:A��wbmP$K��.�/m���-Y/��������CH:a����	�/�.��r��2V
 �i
�rIB����HB��:��vL�����
����y��?���~4��_�j��rO�eE��)	�_Y��P��s�7P.
��
X����RM�4���*�:��B������G�C��:��������������������k#��P%W�-��8?C�]T�.����}p��-��r2�+�H�P`�g��>n��|M�5��j��-��,m�t4b���r����y�b�l���N����Kl�o2@hzTWR���L���|q&��T��}rJ����	*�_�7�2Y�(s���c���0��Z���*���7���)���j�>4g�+i�$�U�j<��[�K�������_�������	��s����� z����b�����zBo�
|��R%�ie|��"�����M"���u���NE�8��	k���������#w�^��{Zrf�W��m�F�m�A���*�����-X���Ms$&b���z&�z�b}p!o�mP.�]l^�d������j�0yiH����\Ev���4eZ.cK����\]2�f�:H4mBzJ�%�*���}p%m
�P�''e�l���+�����1�n8}1N�6�2��~�������-�
��Y{�=�j���1m�o�p��6$�v?p_��5�T������i���1i�"[(��U�������*1�5T��^t�{.Q�S�1����)�Pw��o�����
Z{~�����`uq��O��`
�	�,�\}���R���$-��;i��W5�;(�c=^7�m�pA)����A��;F�e���/#}�(�����,_R�${����e5u�.#-����Og���/����.a�[+x�n>8)��3J�2�o8�����������k�;��TP~m�1m����3pB���q���n��X�������&�O(����QzP��O����izZ��tP�3�e�2d���
���J"��7K�t���A��� ;+5=�����7����i�X��/.��K�T���V<G�V-:OlP%
�b���=�������DU2n��������9��8�)z�5ae�XT�A,r���N�u�G��������7Gw�]dIK���E���Lwa��#��*�e�9H�>��K�+���IP��@:`�(e:A���)pY��
�2�2�
�XU��d�D�*Q��[�)	*K�����6��������0��d�&T�����[�n�G��LT�����:c�v�I��.�i��;����,p������&�KR���V�f�-�.�[�h���6X���� {��.b�Wl0��]������>�Jp\d��<����z.�
�D�r=ppf�����])�J9���^����M �)x�s����k.�[<�s��A�&�L���{	G���r:*��M���T�D�M�S����\����t�
��7.��'9��$@il��mV	���O�L.�&j>(&a�R�%��MM;��j��lz�;�TY;6}��5w�D�261?��]I2�dp��>������s%D����2���H����
�EH@����\SFJ������H�fj�|�a��������x�o"�Yzw
��K�iU
��V
JY(d���������i�C���W�HQ�/�Z;�"��:�*�����)�����n����*7TY���,87T���H�;��E&�:"�E���YQ�?��z[����Pe�,ju���<��1��W�����7�,���!�L�f����OhC���V���gt�baS�g�Cs�}2��
IFJld*���!�K�u3k��Y��0�f�|�e*a���r�r�0���J����c��-�-x�����c����br����A���P*sT�N
th3�%�o�o�^/������}�6k,�F�a�S�e���2�*��M�e�fK�9^����P��I�_$�H2E�Q�������:&�i�kP����Y��)f��L
���_6��~�c��y���[RG)&�T�K�
�����%4}�d�1W����kgI8n�5����%<�8�d�#8��[r��D���H,wXV����~<�&R;�SW&*�l��`��U�4�|pw�$���U$0�����}�/A����+���k|<�G�hr*e�b�����l0}�T5f��E}�)7$T������U��IN��]t�����xp%u�/Z������Q�|?���E	�����_�5t���Y�b���z��>�P������?s�4Z���Z!]�������������u)��PN���T�I��tt�i�Ue�^����� �����z���^0���� K
����<�
���o�;�"��>^ C�����X���%YC�����6�I��kX��$���U&dwFt����>�C_��U�nI{��m&Gs?�<�����W������l!l������`���z�
�poG����"�bbm�����im��{�E)w��(weT[���&i�)�O�W$p>{z��c8�(�*�Q-���z���
r�1t�5�]���UO�y	r�>`
?y;�����U���E�,=�L������L{l��&Q��k�B�P5������t���oBA��U����4di��R������ ~J����*�e.��(f���rw����}p�|�]��za?�2	U�)?0J�H��9g�F������1���EF�|������.��3�����^u�0=��)?�b}��:����Y<�7 ��*��
x���>-�901Au�a�TW�F��cY��=V;�V
!��b
�9���Y��`"�P}m�u�1ea���"�����:�\����
�6�9�M���*M��pvV�����$"Y��V�*2;����T��U�� RrY�����*qv����|#Lb��9��-,���w�%��.N��X�'�� ���|]$9��X�*���">~9���3'�<�
n\K�W	TA��\F��t�{=��vP������*����sdg�(�z����X����8k�JX�����C
\����tGVS��J���5j�PU�egh�p�g�@�K�����a�#{�D�u�d����F�������{v����:��2�C���s�����y�13����f��7��Q&]X�&���:3'�&L���}�.v'�����8������tFV����,o�O�>0���,�r���$���7���B����d�M�ZW��]�I>�����m��&O�A��*a�������K����*mz��1��bu	>H0g'���E��d�94�J�\�.�Uc�1�����Nea��vC��6�W������!{�M���`�9%�%���</��E����/!F��6��3-����s��%�E����6D9_Y%z�&gY8�h�8���=�-�I�k�1--��"��C�A�2�?�j�t�:�n���k�W��\MK5��9��i��T���_H
a��f��CN�WsP���������i(���K�M�0�.+�lZ(����{��A�����:�"7}���FaN  �q�����Ti�FiL������I?7�W�WF��B�u����I_�0��Ip�bv���j���� �;��,{���f�����Wx���{8��&���,��
&�����9[����"��9�^�qV-<`�Q=d4����e�l�di��n:���,�o>�����RF�
�
�87��d�F���(]``�A"U�<^�����df:\����m����O#�������3�����8���7��A���j�z?��8��N��b������h���A������x�|I������h\|0��3O:6�2,��K�Y]��	����S���g��n��`���Xz�'���.4	�����c�,����UR#�^�;�G�{��m�&�G�u�9F�&�b����C�"�#�5F������������}�����MW���b�
����@^�����a����Remb�
c���-����/��/"t\��P��q)i����q=^<�C~s||�g��J��{=�p��N���2����)G����MmK<�p�D�$��3`��&��=
�A%`������>�9[X=d�r	ds������#�Y��9�u�!R�G�u�
V	W�_�X�m6�CvX%RJp����r��
�"G���@:',�v,Sh���+��V���M/@��4�!�.����%�83�F����rfXN�7q�<��"<;4&5U�	��=�4Jo\��i�*�A�b�l:�p1��%�e@�]�>���Cf<�MWV�^����c��*Q%K�mI����lO�.�`\:�d�� ���aR�1��'MY���2��M����2����L�_'C�-�X�f���o<�����W,��.���]�/�Q7E��R8���S3�ew5G�cV �O�t�S2��Ch�:��Ka-=p�tF}�p����U�L��kq��g1�g��X������A�h�
��g)`�_e��6b������"�B\���%��	����W`��d�=��,�_�GH6����oH-�t�K;��b���?��}p%5]�K�4J�\���x��~�j�5Vc�+��R1�v�V����/��>t��^��qi��:���J�B(�B�3��?y/��9B�~ARi������}���r�<����	�7l�
Xj_��r�c_`��?m����j���p�Q�)=p�vP�y\��V
�^{D�[��g�1�����MW�^����dXc����Q�t�
���d�w[��%�%��6l�h�����m���������Z��P�yf���J��r'�et���Pq��\��!�Z}U���?�*(�+�6o�)��_��������1�cB�V��@�Q��WD����6^��tc����S���p�6�q�Q��M#P~����<���4~�I�W�&��xrR�P��|e�oS�w��G(C+_�?��0a�a�(>��{�J��.�p�U�$��p��]�����'�(-e�^j|v0��7Sc���k+>���F�]�
��*=s��zAB��R;L�U"Cy�E�pi����]�Q8��p����3lP%����K����c�*�>��<[��^���3�Ii�C���7-��{P�w����um�O�[�1���>[q!����l����sl���'��MK���~��^�d�B�i�oPz��'Y���y��J$Ke`8���A���W%�������MA����%g�BX%2�x^���L�MO���<�a`��+)�o������d1���K�c}p�6\vhcP�	�e������#�f�����E��A6����'��G1�u���E1C��"B�{��\���!�b>s����P��B?���J<����9��������5��c�4��0i���`��4	����2l1Q�x���cN�4�g�5$��y�����2�7?�.ii�H����E��X��b���X�%�����d�W.,lfX���|�m���� =M�3D�q��!���3��"�*�J��oXF��2����������NU&�i�I�723�I���X��G�[x&��R��x��4��G��&
$=�(n�p����V�=�6d�y��e���'���X	
LT�8G|�������7nl�L}p�}NS��DS�I�O��Q��Q��������F����w�`i�/�fs��5�dCz�n��<�����h$� �#d�!����9��	��H���z����l���x�?`��y�;�4�Wt��4���F�DIL9��>��������2���Jn�<�l�
^:���ln��Ea4(4:���'sXE[��O�<��6���Rn86H�U�wFI��Q�9��r�'��*�:�Uv���� _��k_���F�A[�an�F���.3	�K��u�r	d��<��?���	��m��|�?�]"[c���/�eu���^g�S���M�Yj��<��>�3.Ug��+@�~;���0����Jt�K�a���r��Lt���<�)"
�d�P����2s�"r~Y����l��$������
�l��%�����,��+6�Oz���w�	����z�������KN5D�p�`�P��{��Jt<��x���<���Ii9U�������OY%"d��"L\]R��:�3��E[���m��r�a��2�vR���������\���3k��	��R�F��,?��f��-�D�<��rW�K�
�U����,��^� ����|����d��s�V��b��+a��E���"���$	�gC�
���T�*������X���-��@�V��/�k�Uz�2[�{�����Q�UC��
�����H�[]<H���HJ��`��J���|Q�D��`T5�$���yYN����rR���T>��&����GN�|g��A�f�JT��abui9����${����N�c���g�	���i���3��;1� ��K����[��f��q��"j$M���sn�-�h��;�PO!G����	������D��k3R�z,�q ��h��H�:�����5��t�n.:�~#���6Ku����;���V�ht��x/i)VN$	NZ|0��	�'�DV����X���@����T��C6U/5�_*Y,����4���\Ge�I�����Q}��u9@Q?�P���Xy��r��r� �O�oZ��xQ���k����Z%j`�����I0���`
���� ��|�	:��II]��&��+�V��Q��Q�m��"<&�t/��_,��:�$F�l��R��YM��I�"8r����>Z$����K�7dK�L����k��D�3������["r����Sm����Jj���;�-R`��~&Y�^�Uye|p&a�s/�oXe�����YS%Iq�#���������V9��i?ua	1��������J:'�����>:V�\�"$T�5G�����Y"��$�/��4���P�����9�^���:-�Z1�@�1�6Y*V���-����|�|����A����r�fP�X���+#Z"
��9�8��|zk��������$AY���������}��|�����90���t?�B}�7���P=�x����������	�Q#F|��I>�
*3�,=�c-K}�����~��a���4�SRS��S��0����|.���W��~��]�v�
��0����������X@�`)Q[��A�����6�
������A}��$���t�=�&-J��pIj:?���L�w���y���
�*D{eA]�<���v�|q���f����=\��R+��Z�)�C��*����QG��������;����+�����Q�����XT����Fd0YD�j�������Z ^;QE�	��=!�}�=�!xO��u�I?��"��i�,�
X���	�a�������N>IQ,\�V��~�K���#�z��������
V��}���~O�4���:�}���ON8��#q7��v�u�����4,S�D��[s���u�H]�k�k��9`�����7RYOhr����4j]I�~�=� �)*���
G��y,���e��)4�)G��y�D��g�/�|�e�)�*=p�[6�dJ0�)�
���/\z#����=\�DM7�lX@�I��w�d	^�moRPX��A9~v�]�sLA	������m��>d������O]~s!-�����2d���8���#s�0���#yp�7Y�F���o���}�HV����,�c�P��\����,Dw��C0.������VHj������d���e�%���A�U;���B�������U"�N��c�:��J��~�%[jUcA�"�M������c�j����oKR���������~k����A�L2���9���s�����'��D*V�@'��,�(\LC��L-}J\I�{Q�6l��"O������&���D��r{��0J0����)�f�Z:��y�\���U���Pw�j]������dQD��&>��0)��`����C���i�E����n�\���
�`wXp��=��=��'q�:������I0J��	�C�����a�Mr�Gk�X]�����C�C�/>���>(MB���2�&~�e����&�]���F�:��OX=��H�U
�P��[�RNU�$���i��[)SG���GP���������q"��+&�8�<�c�z�(s#E\=���E;����*����F�j��d�XF��EwC�HN��*c�A�+nD�kf��qw������D�����z�b�J�s�D|�����S�D~������t,����I;i�C�m8z���8�����h�l�.�O��9������%?�~X����j�R��c�VY��q�sF*�P�����I-�
*���`�����x��kU?a����h����(\�|���k���9�)����\N�U����x��N�������$�:/��n�_���r�m��+��wZ]x4��ri)�1���6d(J�mH{�4j�e���|�����{���������r�a�������U������n��Qe]�.�+_	���B�����������RR��x������\Q|�A�i	��']��N2��J\��V��.B��a*����
�d�\������������a���Y��������I�+C�t&��*!t����8a`��m�T��iv>�mXK�y����,;�hRV j�:M��qW����gWc,�Sbpl�����~[�
��T��a�W.VG+���%7XR���U.�E��7-�����&�sz�N�d��I�"O��QG�����_3d�"�Ypu�M�E����6 �sd�^�Hc����J���<�ttBVH���x/Y��lP=��z�����
�����oW��P��4��cP�XE��#����q#��!5EH��B��e��M�vX��
���.~�k����I5:!�'�*�3.���_����Q��2��v�|�����2���J�n����*�,;�@m@]4��<F��K)���,���#���r��g\IM27��G0q��+�K�)���'���'o\��������?�@�&dOi�g��\v��.i�T�F�����7 g=��>��8�_
KN��l~��}`cJ�[*��*�A�NoM9d�%�����z���K"���
�
>�_��'�=,3g����*;��&`�c2�t���/��oe���#��uVF���y������r�P�7�����V�;AE���&���FIK��3l�������>��"d$'�d83�����s�q�j��r6�����KZ�E���j����Cs�x/��3?�G�.�by�i(�K[���z��I�:��:�2��.����+k�%�0l.'�0k�_v���kN�u��eT���i�c���IS�(��������J��tLNZ?'�g���V�\������g�V!Z������xtn���7:�}g��h',G�U��Sv����OY����d�}���"������"�:/��M@��C��
���<-�����~�jGau��c�{� �����U��s�wBP��{�m��95������'M;,X%�,w�B���=aX���$>vP�F�C���0�,:��\���[��H���O�f��)�%=E|�I���#��8A���h��Lw����5Rc>�?E@�`�;E'�|a�[�	����H��6�1�����Hv��"�yt�^�'��y����2j9�)�2���$:o��X>����0gK��E��t��,;��cQ�e��
.\�@��/����-�)$V��%�i�z��-M?���MG�I	X�X�/j��	h�)��:2Z���+#:�P�AAN������c��*3(w\e6F7�J���n�x#��h��|��X�hq,��V�u��d�:V������=�j]���!�5�Y3D��D���|a�;�D�u,�1|pR���ts� l������������L
:��
V����B�r�V+��Q����B���'���T".}����:�JMc&*���0�ss� ���d����Yu�!^��r]��)6!�R�i<tM�fo|���M���m:�����6Ojzk��� 6L�	C�D}C��2���{t���){om.������e�t�9��5G�'F��t���UCw������AMP�V��ej2�� j�$%����W+$��{cB����o>����!X���9H?��\5(��(���1��J4�j�"O���J����I�A���S��\��?I�R��@	f�V���>0��7�@RR;�N;��a���Sv|12���t�G���1���F�2�e�
���C;T���}�f���B���F1����=�����y+������R�+��Cz��jR���5){�6a�e��H]~���"�,�#-z�0mP_Kjj�wk
��#�%�Qy��X�
�D��������	*]7%Kj:/n��Z���	�$�SO���+w�V����x�nDV�Z&�.�����5��v�`
N�/:Q���T�]G�;dpR�Uz��.����0�&�N�{�CjR���9�D�&2�HK��c*�z��)[
���=k�HX�#���K�)sT�����;g]g�����B�*xJ���J�������7X%,���k�����	�8	i�A�i�A��
:��+[+����N5d�/��$�}�K�lV�$�yp�B�������T�3%����9���ab'��s=��C�v�e�Re���vrr�{=�&�YFY�N�a���q��U"�	�ab[%;���Q�sd���A:�L|��c���_��X���A�.�.��e��7H�K��$�/������wf&�� e���<����]B���5k���c]�,\
�PE��U~�����}�p0OB
���y�'�����"������S~`DK��I�=����;�Q��)��?pJ�<8-ez�4�2PA�C6A�;�����cMK��bq��]����aZ���P��>
|������p�
�2K�P�R���T?a������8E���lc]�)�WJ��S�������R�a�A�"��_zw(�������c�b�HQ~h��S88Sr���93�&	�v�����A��'���7fj��5����D�H��^sN��'h����a�������.���x����Bq�"��_���QV��Yqp��o5�=��)�@�Fk"��6��7~�q�����j��Y_a4H8���k$����?�$��N��q��������AT�W${e�e<WO�OT�A�pf����}���5MAX1��yp�BR��p�Uh��K�@}�J����,P-(f���K��?������51�����^����7����)�����]��?�>�����#�����M8]��@U��AK\f�{��������f�����q��,���-��p�x��D	�jh��
���+��������"{���X��������UR�����Em�1�'�X%�U�#Z��*a�YK��qUn�67cJ�f���8��WW0�D�I�@gY�W.V���`�H�����G�<�-�����@8����I�>HQUx�e^�.���z���S����[l��\M�0��y�U���<j��C"�Q�&�5��H���y`��$�W/�l�����8�����?����S�
j
.��Q�[~�{Sr�2�����������=����p���$M(|mH�|�p�������WY_�����4�;��K�N��^o��p��wI��`�o�X��d&R�����'C�}8�~a��v���1�J�N�[V�qO�_m��[�RB��n�
����X�>���jOO7P�0�zn�&\z�4\-z��w+�H�����]��8T]/����X8*��
X���d�A�PG�����F��T/m��P�%��w���2�U���>��~�s�;�O�`"/-������N�o��6�#2�h� ���2��
$\�����d�0VH�r��LQ�

x��WZvV�uE��]�<P�����\�N�9�����.8�������F�
YB&���/t���
�I��1�r�3�D�G)}�OgR�@
�ze��'B�����"�;Vx��:X��7��3GO���/�LCc�i
�����2�7~f���pb�\t��b/�z#�\�O(K��|�I�7n�E
�,M�h!��o�`,����0��,�PzCg��bP���}�B���(:�o"*��c�����
���*|���d�����`��l[M��{��	)����g����k�����6��s�2)�;#������;���	��gK�vl��������>��k'�=[s�U��4l�MB}p�-O�lP.��������2k5�r�S-.�H�����Y��>��W.*�mGV������J�}L�&�����M� �vi�������#z�p�'GZ!�/
9��[�!V6	�{X��%E�����p��e���)�{pF�{�����/.��e.�Y��r)d<<:y���������0��tU��6eQ�M���c<�{���=�_2����������05y����D�\C����U�s;�`�yQ�$HW	�P���Mk�%������
���/��{�cbf�1����<`J���p���*=P�p^73����M�y��$wP�
������m�K?��������F-��&j��&����o����������]�0���@8g�8��K�[6Ub��T	�CN��m���*1�'�^��X/������%F=J|'�/Y��_������VI�w��UrW��r�g�&�u�e'�q�^�f�ih�p��N2'�����4�[�����9��� v��U������0Z��57�����2�F��*����{�e>�v�d��^��������^dk����:�����2s�!�~�U��n�?�b��+�wg���z99�u�+��E�����5�����R��5�5r������z�V�I����n��C���b���b���C���89�sV�`�^�7.���Fn��Mm� �)=u�/����b!jMt�4�.l*�iK�H��j��\KQ��j!a=�3D��3�q����'��*�!�@|n��$�� C�������K�v��G�G���uO�y))�_�����r���
����7\�W������$��;�"���&��XaR���9%U}���LE%C��_��2dm���:�a"������������B��R����Y�'S
>�Tf!<z���@��G���u� �� �2�|a�u���t�Q��@�y�i��l��X�I��W�&>�la����X;�*)]N��=��9%U}�r�i�w������S~h����e�lSe���1�"q�6������P���i*���_��Z��+�e��������9F�
�%���k��<�2�r��C�<t��Q���c�7z
��D>�<*���R,�c
+���^���GkJ�"zF4m�z������zq��`E�
��8�u������v,���,Ht�@���x��U&�)�	����x��I�����/6��t<
1��K�z� ��C�d�6��".t0��O�����#yO�H���LN���7�?4��='�x�y��
rn�*C����e��~#��8��/�_��E�c�3���V��_�������eT�gB�e[	��4�*�N�|h��s�"��[�n%��l��"G��3���pfx��MG��I�'�����<�����xO�
F�Uz���1���7X�|a�Rd9���1Z����A�5���}�z���Kl�FX��J�@��X]~��EwY�SM�0�\|����.�������	��5������K�����c^��|���k	����%{P��e�lp��=L30s�Vt��j����*4��R�A|%���=[m��hZ.���Hp4����X�$�gLH#�{�Y[0��������=m�$���F���W�{1/��`�����<-��5#��Q������+�(�A���r,���k��1
mP-0
aG!���3A���>84b}��F��*��u��49������3M���D�k�p-�~I|%l���q(��z���'M�jo��7X�l���ra5��8�,:a�c��=���5��-\����SS��{������2X\F����LI��"j]?�s���g���
������K"R���x�\�'���e\��]\��wm���4�1��
��t�t����T�������a����=Nb�LB����8�&�}T����Xx�& ������C�u@�������=e���>�����s����s�eg ����G�Y��Y�]�g.��������`�U�x���,��������4�G��+�L4xi��x�3e���������T�a�Z'N�xmv�7����D4�����C�.���(�4$5hg}l*��\Z�}������Dt�����#�~�A��h������}�up{�?V����2�\���a�s��k�Dn}wQ~�U�)�Ud�Ue�2*��h�c���M@�w�S3�["�aV��d�;��l����M��m�5�
�)��i�.MDG����?�#u��58��|��W�^�`����D���=���R��O�G�x�#Fz:z��@|��k�Ybz�@��B����S���}R�������93�>O�Y���Z64�<WQW�\���o�%��6��x2&�&�����1�?��������@��N1<�8O�gy2���Q�� ���{��Q��(w\���(p��2:�g�0��F��z���d`������������b���}����Z�.�5Y�K�HQE����)[C/������5������]1)g^��Y9�F��_XP5���A��g3 \�Hy�{�����������6����s.�B�T`���n�gb���.]�T�������R6��fk� Z���F��if(����/�]��#���7�O&6�k���K�pt$m��l�:�b��Ak�#x�=Ho`�xX|�H�J]�F�=kH-{�;�k����`k0�V���P������S�5d��&�s�z����[�:�<�G���0(�����5%4BH����2��U�)?0�=p�L�����nQ\]�&9�rf��8L����a`NK�����Hj�����C�<��MQ�N����������
?K;%�/���t�s��@���(��U�3&����@_�f�a����,
\��e������;N���+iS��Xe���Pe~�(�\�,j�
g�+v�x1.U���$W����@�e���^#�ha��J�reX��K�tG�j���N>d�~�������Or������g_�r�*����+�:X2J'��X���$��r��aY������f���-[G���{`��m�W.Vt���Z�d���d�&>�
3R���2k��1[�����������S^��p%o�/NII��c�`Dp����K��n0
��e{�W�����[����+����c��
��20f�gl��Q�n:����<�=��6K �����2��>I)b$z�����!�f=�6����`q���Ki�mH)#Y9���y-I)]�N�q�d�}��@��Y^���6�wU�
^���cb*���s5<������P�4�D�z����o]����W�+�p��t>��m0*4k%9���qV@�&�^VhR�+�4����@�������@����u����!���>������OJ��K�����v��:��xR���I�
n����.X��nC�hY��[wb�2�d�Q�����?����h�� y��/X6x��;a�A���Q���!��Q�a� �VD��Ks;(���n�$P�
�I2R���=J�Q5�)��[K�4����������(�;r���p�
��xF�A�����hd��%�O,�D�������mV��[��������{`"�^�{�bu1FvHS��U����:n6WV��{�KM���l�%�����9� ��1s������,���\X%�'v�+vm�
]�����l��63�����e���_����8	��<�lP5���z���K[
�u��$�}0���/�d��+�HV��=jV��*4\Uypu���ai:1��<]�=�]�����/�l�r;����2��d��<�1��2����D����I�n���
��(�����,�w�������d���Si7rS����A��c��)�F%�-L>���PS���7��{b�����\�����U7-��RA���]
�/�_h V|�5D^�u�m���R�Z]���\
��e��=�<L�76:[�9D�m@�3�,��?���G:����L��*��m������U�� B������N�@
�p�Q`���H}p�Kf��`����Y���P�����<8�1����4����"O����o�0��)�������]����O2�r�vi�e�����
>������5{ f�����F������'x�����}"������k*�A�I�
��
�>��b�A&�����2=���t#��6&H��7���4��(w��G���;D�/~�y�	;�6�_|��������
slH����*sB!b��$K"�L���b6$Y)��!��!UP�;we�E�!~?I����Q�<���*0�w��}��J_��$S���~B���_>{wX��K�<���t����
C��(�|�Tb���`uX��U"���-^��C�s4M�&���0��b�O�<�)��������K.��AHQ5���U���GQz��e��M������ =M�u������h����bz<`T?�\R�g�(}Q#������M��F����e0M�<a�sT�)�1J������W�I9�.��{a�j[(����}�l�(Q&�NUf���2U�-���p
��nzC�������t7p2��C+��'�d(�����<��oe���LT.D��?f
����!g� h�{`E���f�������
��E�a���	=H�W��Q��-x�0�#�"���^��eX_('OJ�/3��"2��t�,9t8]�Fi����%;������Q�p����"B8��B
>�B+_e�d(B8=�T�A��.�9\{��Q�6��>�}�be4��$m��
JM�����G��4Ds�A����Ln�\���)w6��5�W��D�|U&��P���2�V������#N<Ud����|i7B��P�0���$jW'[��m�8�d�}p%%5}�&H'w|���)"7:�a5P,b��I��,�r��1�i����~���q�V!����.@�rc��!6X5,e Q�3@�k�c��A�*�v�W?�t�������Y%�&��A6���O�*��|�/<�����s9���v�
����[8����{�=���rii��5>�D��r��&�_~����������N"Ux���C�K��d��Sg���G"��+3�*�K���(�.�����xqJ�6�J�~��;��<r�����^���H�{/��px�z��O|��@�&P�A��B�'HUO��pM08���<p��>d2��g
�V1���������/�0y��*M���f���=h� �D��>�. �[��(
[����)Q-PG��q�.�����P��r�4�OP5x����!�Q����e�"���	o�/���K�(���z��R_�g4�p��P"������r�;��,|dlaz���9���N�H$�BZ���B���:n�5"�?�D���I��A��~�>+��Z���\2aq���^o�E��U��G*�m:��t(X�*�R�.�+�")��������Q������3,*s*���2
(o���v����{��T�A��M�`/��Q�%,+� �x
�8	98A��b�i����$6\�V�'T����� 1g�R�@���>Y�2*��,@��H�o��Z?����fU �`�} ;��iD+��+XX���=��3����A��H6����ae�����Jd��������)P%
�y��`�����CXdIm��1}�{��X5�`���T;R�h��H��6�����E|�^}�K����5���Q	�}�>���������
w�g�������SlR����r+]�Qe���/<41�C��,`<���9�("\�c��J|�r{�#�A���Bf*��T��K�����C�����2\[�L�'T��O�T �r�20)����i�����s"��s��@������R���S�j�r4�~\��<]�������
H9��G.�O��R�������WE�0#b��� ���5	=p��Y���cT���0�/4����!�g�
dG]�����$�g�^<�O�I���a-G�-����f�y�!�����$>9�� ��a���_P�g�
f����1x#xGB?�2����s�i�<���0�ug�F�,
L�^)8_KA���r�$[��9=>���H��
�$sf	${"!/HxT&T��\y��J�?9=���
�c�����B>"}!���;�Ta^�_�P�t��U)@��)D���DD�0���W����rz*���~��
X?8���T��C�#X%�f�%rz
�:�}$]x��$���L����AG�������%��@x���@��Kw�����!7������<1AM��3KC��� ����$
e�{��f�)��3��	o��)<^�x�4KF�}�-T7Tj�q+q&��j�Cdqg��J/�Bf��y%{D�k7��n>G�#���!�iBGS�*���8/p�$xsO��������o�����~� ���-��~y�g�D���(��G<,+��4����������H��D���o�I�L%��y�4T�J�\�U�����d�p=�E������P�'�m��o���zBv���Z���DvRL���r���<VA���E�d>��
L��_�)����<77�J����AA������o����9
|?�������T�J�������f}��Q���S.�?����(�y�d|��d�o"�1�Ga���U�z�)�ek�A���@���EB}��l��4���5�p��/>��gU���=�g:z����= �?�^����u���*$|\������F�,�K�e��J����P�������L��'�����>HO/�`�0�����~$�(!����D�����I��P�g7J�8���p^���P5�vd�c����|��~�U�����I����vH������Bx����������
�LS�y����|8��h��t��N1� �kJ#
����{������\���������D��^D��N�s>������z+^`9Uo�#7�a���VX-����^�U����$������\��#`�`O��t#K|�\��{}���i8���l?]��s��P
�Y�G*�C��5��<%>���%�q
���
�
������_L��_�+��C����*�a��%������#A�e��RJ%�{�lX+��:��/P	�B����`�����8!3i��)������1VO�)H��y��]�8}�X�
7_|�TyAb��Hy�����u'����6�
nd�#��w��3��A+�����Y�����
Yz�	��Ra��8�\�R��m��
c���;NA���a�kF^{���2eX���T���a���]J2��0mO]�t�1/��NF�r��f?����~;��,��_��t��J��jO��_
�����C��9w�����[�����!�?lb��&���RK�c���r4�|���%��]���y�q��X�`�����sA��%�?��'+p���&
x�>"bvO7������:y������8�P���8�X�;��e�VHQx�{�O.�/�V������tC�c�� oV�x�B-/�� �3�x(���K����)���o.�~�_@������S�����pR(�
R�3�}�y��h)J��Q`L++K�����4��������&��)/l���OQJZ���I:�|D�x�i��z9�������\����8X����$���{/�Qi��Ha�
���������b������)d��,{&w��
�0�������8�e�����f=JUk:J���);�O��W�@t��hO�y��@}���U���V�jp��?�a�P5�0���Z-_��P���?���^E}�����_ .M)Gom���/�V��r-����XM/L��m�*�����ABv�0`��6���������_����6�����wt�$�"����x�&���|�m�TX=���%�;�����bB/gf�Cz����{C���F�-Tn��g�[EI�D$8�xg�:b/E��K�Y1�FM�%</���~��Vh�`�gO��U�p`u�����,{Ob����������~��GwJt@Zl��h7���?�hn��r�~pI��s�����B{���#��?����y�[��P����3	x!���J��?���y���H7Qm#u�X�����
�s�U"w�����	��#�)������O���Du'	�k��k]��0�>"&!�x7hVDM�L%��������^���x��1�rz�5X�P�7JL������
7t�I���:]=���0r���x������\�4��4��`�9;�����z��D�@�H�������r�x�Pd�\��'q�dC�a���� ����
T9_~/e�����D�,��P���%���������_K(wuO��X�P�^h(6��qC���7�tgU/���U(����t�%]^����p,��}BdB�W���,�*�Y��>=Q	;�x���c
_�N�U������3� �,7�����Bv�oL�J�RY��/~�%"d;l�0a	)����H�aFG�U$���,RG�	�R�F����������`<�j�X8S����E*��=7� �)AS���|�:�O��x���f$��]�?�N��l��� �[�:���F�8�k?�VP�f�{�vp���>@�$;�?^>�_�%wuP�i��>������
��9��x�!Q7�E�����U~!�t ��CI����r����30.�-T�g_����Y�\�n�����m����9����c���?�:�Zw�LE����(�K���^0����Sd{����?����y�/Dg�L'�v�-S�V�~�w����7T�y�x�������kw$���n`w�{������Q�n���g{�/H����b�@��8����!�8?�>�`�M�#����*x�1�w��dy����J� �����=i�~)39&��a����O)Y�L������~�
�f���|G97}>��w������	�hmE���F/�O��pZ�2��7�j?���r��#?���	J%O9��������de�/�f�
�L��6x����@���6��#����P3� /�V	� /f?$J+}�������_���!����������mg�i6_��K��R�Ra���k@�\�]��1O���o��7���x�	`������;V����`<��*9��k�LN7ok����F��|��5o�n�L+z��'2.��?m�W�\����(��}���~r���3j��b=��A��\��P^���P���c������x���e�*daYJ9�l�139���*����|�/����/��
����?&i|��'�_L5r���
��!^�1�^0<?9[X��@C	e���1d���"���Q"6%��s�.����1��~z���"�&O�%�q�����}�j���)�~�����RP�=M�F��/W���*pk�7��(;���Cz��|�@y��3P���^�h�#�Gu�O��
��'/0tY�_^��*!��^`;�P��Yq�&�F�9g�^��&~Hh�����6�\��� �>�G/hkGC�=�J��(��(0s=��t2�,*�[�3�B��:yZ.�
Uj���@�����A"tN��B���e�v����t�Uz�h*��{��~�*�Q�7�*�	���R������TL}��c'
O�7:��\��T��#���P�jC��8K��M���P:�:nz�R����L���D\AUqGq<��i���C��~���O���Rt�F"����q/:�|����k��KR���
o-�`�p�F �`�~�F�>{��=��gF�Lo���b�*/<|���b�Sb�{*��y�M-����~A������������������>�!Y���<;�q��/���3���v.CI��rK5����4z��Q����@|>�^��c�2�,$�<��>o~����K���5<
f�6l���)1oLR�W'�'�a��
&,�GW5+?(>~\���OMWRtT���MAlV�E��������
�X����B��s��L���n�j������>=@�^�H?�������DE~*���	�O.�|��2�-k"�R�m�j���d�Cn~��c����)��q��S!��6�oz��	-�[�8����G���Q�-p|�oi�
����M�1V��3"!��q����|~QJ���S����������4>��'$e����"(����7l#��I�
zb�����i�U}��2�N��t����Y�]�a����##>���8E��Ue����/�0�Y=�w�=<�������A���meV�@�d-��n[|�/�'g,��ve��<���#K�^�1eM������
(����P��9��]����>����#���m������a������#!Y"�5��pu�#����Qe(���u�G��s�2h����3����0�n�\�+gZ��&�JA��?�7E������^�P��e��3�������/��������4C�������`�b7�(�����c��h��~�B
�q������9y�c5�\��b���sY�"e���r;�L2�|&$�H~���Z�y��e�Pr���H5�}bK�y���jx�[��u��Z?;��<��"�������@,��$�'����)3/��L�%y$����Kv��'��@y��aD�5i>�[6��-�g�
���P������'*�zV1�lN�;��gt�l}��0�P��y,
����G�>H��nF�
�B�`{&����`M��P}�J����;�wJ�\ W�?��~���*��*�
L/�Z4�����?{�����n�`���d$|�~��S�T����LEvX���*�����/���X����
G�����
���*~��3�`�:�k-vmyz�b�����1�S�f�����	�t�&T��1w�$�+�+$	�?*@��d���g�J*�6l��$��xb�t"���������$�K�S�	W��n6<���I�Zg�w:��h�20ZxN����2�����a�"U�~�+��b�1����������RJa��s�����.XUp;�+��h�s�J%O�;�=�;��_
S�p�lv�7$����P�9;x�2��G�ZM������p� �d_:'�����i@�`cR�	��������^����1X�n��4�����y�(�-L}Gky6����p��L�W��$��Jp��)C�����/P>�.��0g��a����w���3�3+\�N�^$����dt���[Y���1�V��[����w���6�p��J&p"K��KO��P*�����4���}*;�XN��
����A��\�J=���e���=���M���S�~c��`�w��?Y��`^����L��)/�=���BC��*630HE�����n1�
iC?��g#�����j-�d3����|�����$%����DQ�������	����+|�D�G��
G�>�D{���,^�8��^g>��\.��(fE�������'���>��G�9
��8^.��&t��������]P�����#	x�UI5����g����tHyz�
|Mo
ox�ty���j���hZ�#;fi�'�Lu��Ud�T��br�_�Q,����)G����,J��vC2g�@�G�Bbo$
����o�t����{R���"�
B^<�+0%��vC>;�^h���A!I�*����� �B��M���EiG�/����'�����,�|��T�	�F���y$���7��qYOL���p�]�W��]�����$�+��q@�����0�w�o���h�>e`+0b?.o�1�����
�G��X���R| ����c1��A��{�W�u&p��7�(=���/&����PK�l��'A����1��"�9�"��IS/
4�{�%%������w��Fn������0�n���d���@�#�e7V��1!t:<�!doD� ���vP4��9�	�r[�����s4)��	,���@�p�G�����=�K��&�����I��J6-�F"8���n���0�<�P�xS���_)�>��.�d�)P�d�c;W4:�
�yUZ���>^�&�_h��	�	�/��D?�����������mB��+�]/�m8��I�U���z�`Tp�<A�iW�e������J�d7����x�i5g�|����
�"
�q���~E�E�yr��0�c8���PXf�sR5�K(�����H�c�F�<��RG�p������@D�T�v�C�D�F?���Z��a���:�.T�|
������%�7�;�<��:q]`�X�}�
���B*���������F��m�{��b���G	�#"�7r���V	�����T��1E�P*
H�e�LA)f��]S�$���A�/�����j7�n�L+�����R�V
��O���s]Wf�����4��a��G��@f,�����wN��T
@��k*Gc�i��x=/]&�/��/��9��"�/-�����=�7��Pj��aL�3���W�R��.?Gd��N*��>�I���8f��r.��G�G�+G-`}�/����T��F�)�`�����1�|��[>��g����BS��vhv�����`:�OP5���������U^��3�
��[�^x7�%a_�{�";n�x{�J\`�������4�(��d��-���m��Q+z�bA\@�[oG*S�~Ut������__�����h���R��ed�{�����F����#���d
���������8��*��R��RR��	)���E�oe��
�|�R�V��R.ieq�(����A�����Lv3����31�Q��<mG���y-�o�����iT�p'��)�a���,��LJ)a�pZ�#<�{#.`}r=�I6KufFY^��\@����V��[���rA��2\�b�t�IKT�`�$�g�r����Uz_#��.�|��i��Q�bK���;�U,R���\��c�<=/�
8����c�>�����!EM2R�����������.��x�/��D���P�M0��O���	�O��?���5
g����
G�,�K�8����)�1���s*���m
|�X�h�H�#�
o��-Ro��2��i/��C���2�0a���*���K���
��"�
ef�g�}#R����m�	�G�'0*2"{��_���([r	�W�T���a��D��?1W����72���tC��Im��"�V�7��\6A��%���������W�g��V����t�~��K���{����y��b�����9+
�!�r�F\@��ks}���G��������uF�fk��}tp)^K��l8F�S��3��}!�z-X5��G;�^�06D��]����.��	�kN�FL��4��`E�a��@���>�%��v�U�������O�Y��E]�)Z/���u���f�B����6��������t�_j�y�wx\���\�9�A
U���Q������>=Q����)p5Osj�q���u���
og�����[��~�B��tC%Y.��4D�8�
��^�5`���z�x�����	����1
\���9����������p�U��p
������`�q��|u%U�W��	U���&w�S�(��B�H��^���W��f,���l��Q�����
������ �r����Xa�UaL��I)`��K��G
J��t���w�*s��3��R����|�(3����wt)$�����3X��U��
o"������������D�6�����
CEf|��6O�H��5'��g����-LY3\3TV����K�����u<*���l`�9)�a���z�m(u3���
��jqza2s�1���~!� �.���p�V)�Q�������J��f���&�R�N$�V��3��	�!����e.(
y���?I��u�~�1�+q��Y�\��$~k�!��3���B����y�"J��FK(e7F?��PJ�F�a�������]��V*�p?����,��`��4���R��W9�H����*���
e���7�
f�`�a��=Ra4d_h����$��y�K*����i�O�0��0an�s���+
��hR�����9u�pu����wK�:��OYr|�H�\��|	U��O��9�T���F��5gL�{�Rj�C�	E*�)5���J�`��,b�$�S��0%�G�1m^�����$��~�����U��= >F��^��-Du:Z����aHO|A����%!<@�%�Z8o���F3�R�����W0[���*{��"
X�_OQo�����������RO��~\R����x1�)�����;S��_�Y����:Y��#8S��]��>�3HQ��| ��
V$$���|�z����67b~zg��(�\��/��X��e�fd�}A�s���� ��M����"��B��}�Ji�G�b�
����b{�j���g�>����Q2a��N_�C��Z5�k�j����vt�I��~���������|,HI�j�/�t�	��>�����������(�
���=���sX��@��:�M'�!��83����0��^A�=R���^��;���(�A�Y�\�O�
��-$	��F8
������b;{�e��
���2x�x@�TL8�N�P��k����8�����XD�K&���=R������#�0������`��	�C���������������dp�D�@S2���q�m���2�puz�,J���J�M�0b�/������U�ii��������8�YoI�Yn�C~��dF�3=mt�����C��U�	��Z
}��$��
B^��P��>��l��
o�n"B=""�6�P���h'�����u���T0_]^PR2��u��M����m���������!������2K����:6M����bF�����j+����WM��A���v�*e�����A�����R�������V��O��Yil$(x4��Z_c+lmgD7Rg �4IW3�v�&����b����g,A%I�f���*��|��_
4�ak����i���AJ���4��z�;����Xq\�,9����������;��
��Dy��%�Rp�W�R������,.�7�2�P�a���F�;��0���H8��f���4""��P�:��X$���?�R�9ML�A�~�q E�_S��
TJ��R�M+���������)�lx$
�(5�������Dk�01^�8���m���v���	��GJ�^R	p�Tb��%�m���L,�taS��j�����������9@t����&4+���aU��?������'ue�r�5�?S�Qp���L�HYlZ�	�D�-LS	�g%�s)�iP���W����`�+(���?�2��+h�v*�7F���fg�m�>��3�?nK�� ��9���{��p<�^)���|dR�Uq�,��1#>\	�D����"3����ql0liW3@��M36���bK3��P<D���������;�a��%��y�������N~�7Af�}�bcs���0��(;�LN3a�SC���_kd��j�.�l�cz��Y���[��L^A����������-�3~�K�0�o�����0G������!~^���V�;�+�bH�f0��ld����H�����]����)w�0?R����8��x�s��#����P:��d���M6m��?��j�����c��qM��xJ����>[��A�3�AF�	�~,��Ee�Y�6�iDD�Y2�i9����d��t����.�l�t�N��D����1�wT�������1f.(P�S��1N��,V0D�!�H�}9K�`�"�ldz���T#�X���NF��YT�(E�����%��\��$_?�Z)igX7Rg ��-;(�y��';�33H�����I��h������(��Y��z���
Es~��Gs������A���Yg=��n��oG����1�
�x�i�C���J�?�LCLY�����(���+>��+���N�fC=""��E�@��)��_�tp��)��@ c�/|��
�D�#(�D�Y�G�0��%�V�P7��c���^a�_���a���NkD~p�����f������������>���:�ld�,I����Q#��]�,��@���:I���C�q"%7`�>��?:�����(2��G��E���;$T�K-���)Z���%�,�F��Aj�7dy��T����9�P%n�^�Q�4L,v��1V�����f���������A����M�	G'������+&H����8�Y�>B/1�:�%�q��V���;��*Lt[��x�� ���&��"�d@E���F����6���s`�:�
2�v���)E�YZ#���X��� <�y5hu�G�b\���K,H�l%���%���P��>���)Dd��1F@Y��B��s?��HXGK�M��v������FaK���5�h
"����l�,���
*i�����y]�0���e������'��-�t�����������%Fh�%�}\�w�����2Y/��q]����+n��#�,�������K	�J��H���*9f�^b���*}�>��A.�hH��)���^��d��""��Xos���k�� g����1����NID���l��u�}��`a�p����>��J�6ez��e���d���zx��,W:E�c����{��PG{#�w��czf�P{�I�Aq����D:���"7k|~
g:��_����r���q���zmT��6p����a��J��:k��R;����7���.9SD�'�f[Kz?����z�����F<�Q|O�bJr�)e3n��Kj�����=
��	���y�����`g.�+���������3�B ow�K.~���I�������4A�t��R�����]\S�w������X��=	�zq��n�I��7����KN[g-U���M��Q��V�H���l�
�JG���6�XS;�QA�s
F������j��)�#�aJ��G/�@E�U}�y�p��(������}���v����oE�i�HYlZ��}�g������e���7�<0��`<����L�@�t�6��������U�6|-�%h	>
���O�S�B�����K��5��FY����2�S;t�gL�: t���P��b7(1���;�q�r�9�hM���f��a���P7 ��D�R�>���$\������"�7n�,F�-T �E��8�
������&�9'���F��\Sj��z���
(���+�A��JC�2������
���{@(���t@h������X������M����������_�T,�*�K;9��*��RJ�v�����Q�������U6���m�+%}��q���.#��C#����n�C�{�(��
A�f��F�>nR0�v�����qcjb��)^�HW&�U�_����� R[�d������sEA:��>t�X�������?���,���~z������+e����q�%�M1����4c����\y��F��W�g�,?2=i����6|
c���
B��Q+�*��h4n�c�,�Nt��������1-��Xr@hek�G�b)����������Q���H��m�k�n��l!kD�x\�R������/�FlBO+(\�QH�:��.�lD+|p-�����/XL0^�����GP����Ko��.9�]�p.[�Yq���
eA��p�N6Q�������*�R"��/s]��eUfwj��~��O������zm#�z�6�o}Q l��N7��BR����l��F�JU�����(m�6���B����u4���lg��2lK�k�ST
�����!�d �c������K�w��}Q5��������f5�������+�QOH�a������F�}�#�=�x#k��v�V���mW#66�x�8�Rgr��4b�,Q���Qp+��X�I����,���z>F���N#6rIA���E@YR�{\�a�2��>�H.�����K�s������:&�Y
x�����ZT=Q���������;k '����{�0C���HNh���@���#e�E
V�-!Y�%��qkz0z��[�z�3g�p�#(������r����?�H�&�������a[RF��8s��X`}o<3!��<�{�U���(����,���������Yg�1��q�(gq1#/	*�mLS
�L�����	��+����d����?���=���F�^��nb���k��r6��<g���6s���)q�.�Y#���H#eU��\R�d���!��b{�����jw�����E2�Y��%KW���9���:�0�����Gv���p$�>��9Pjj��i2��5&��WY�CC&&�{�x����T�)m�!�C�BH��U�_�	r�Z�!�<���BO���jI���4��\��
j�W�b��]�)&)�\�]�l�P��NC_�P��o�����?B�%��cU)i���m���z2��Qd�R���Y����W���_���R�@�G�e���F"Li�/��������;V*�H`�����\��4���Z�����^����_�u?�3�jB�����&^��IA��Q<�������ra���j��uc��>��*���B�	Fm��{�����C� ����S��N���
�xx1��$�)�v6���(#�����iAE4E]�����i���kcaO�����C	�����)�P1�)!_)��&�M�g�1}0�YT{<+J2��WH:��Y���a������MCA���A��A���_� �&$�x��5����)=)@��? p\E4�2�YMX��G�_c�������Xv1�1���
���%��&
�F�K�}E�Y���������T��}J+�T���1��7�����:/�t��9� Rp���
]l#�}t���O��F����	OC_�#�����R5��~���h3b�Q#"."�/;�+<U��gO����(Cg��������rC+
���S`��f�>�T
[�I%�� +��Zq/�L"��3���v�f�(��V���?���a���-$��?�Tdc�/}����d�u�T��"��'uLh�*�2i=�4�ZT���_��U9���e\&S�'�Rm#�������&����~\:�Yl7$=<��:�!}����?���5�3��y����Wek���/f�B4��f�`�#�s�M��0��h�md���w���<�x����|���y���
����>�n����1��]>m$
���G��E)���Rki�JrI���V,7�:�9`E���W���7���C�{0��};�H��)�MW,���`k2
gYK�
���c>�HH��xN;�Z��v�[�~�-����HYl��q�'�*��"{�Q�9K��j�)��3K��2�����g	�F�����lI&�`,}1�������@�u-������}\���Q��f���� ]���fc2��DtT��q0���	)�+R��9������&������>�����}���dJ3����&��Z�XJ�9�}�7�����i���K�gv��]:�Z����Z^ ��N�^\�=nL�������vv_)�)�Bos��N���(�2�6
���L���!�zZ����4�FJ�)��ub��C�GlJ&>�g� �6�fq�����DbA��)m�#e�E
����QJqNC�
tCo[+�S�AB*���R�/?%�L�������@�����.���R�{�����]�B���L9�q��ku���Fb�
h���X�4�Q��)M�>�N5��;�W�xP�����'�)m�HYl#�}�Hf�uNE��%5#��}p`]��E�S��"Sh}�L3�������b��4czI-�e������NU\����e&��6����U\[R5e�_A������>��|R+)s0�=h����VY
��������G5dc�!����K��s:�����9����!w�����U�n��{d0��^�
=����6d�h���s�E�������P���+�l|lt�7�rZ�[�<�!�YVP�A���U90���R���2���jkS��J�o%��+t���5/�JI������������A���h�����1�,����1{��N&���H����-��lH�L��eW�4�����D����.� w���Ie����\R��>�����4����Q�DZ16r"{h���>�6[���R6�	����������:��V���PH���dQ�OiO�C�"��&"������m�$�\����rO����H�?�t�����YBz�q�a���_����R��S���^����Qo�|�0�c\N|�,�rl��~��$�i�xHE_�,�FP:�
GE�z`K`)U)#�\�=)M��2(��J���^�X��j�	AU*�������>K���%�Qv/����+k�'���V���L$���,�D���Fll:��}^V�i`)��(��&��<3~���g����_hb���#��?�
zV�#hx� ��n��{���Y�@�b�q��)�I�����1�W���MD�����2�cDD�Uh\�-m���R<��<&�Y	���$K�q�c�p��������b�c�j8������Rlz����5@�H$5 /~��{��q� �����#k���
����C����f����i>3�L�����Uo��5�ak����i�������d�7��+��Z�h��x��Y�3�t���&��zM��M��J�eR�%�*�m�Lv��O�D	����@�����j���$A��t����>�r�\��d�������C�D����;�-T;�DN"���`qe3��h�Lz�xq��H9�@�B��an�V�����U�i�Qyw���h1����W�1o�����c\��8*/<��2��3�6�����
���LRV�z]� �mI�g�Sj9YEi�t���p]"tq��_#�����IS���eC�]J��B)#4F�XP:��g6��=\�s$p}�%�;����A��\�)�����J(#"��Z��������I0?���=��������L'ed<��qLy;��jn�g6A4���(���k:8��A����5���n�s��6�.T�b4���Ls����������]b~�clyI��_)���_.i�[�i�F��K7)��BR�=d��j���a�qh�c�/]&�g��>��WA�w����1,q��I�9~�[����� ������������~���F#��&�Og#�?f����MD�I}p/�E�\�h@6R72
0D�Kb��O;����6R(�/D�1;l�X��la�`'�h>��������!���='a���
k���-oG�Yt��I���^�
��dF�
=��a�H���Y�#�w�������y����!hfRQ.��P�`yb�|)(����j����	��H6��AE��/�-*��Q�M�l�S�
J}��?�oD%v��!��w�[����M+8Nmf��8g��L��a�y8
��F���� B�w.����k@�1
7A��JGgcH`�Ub��A]�nfR:��Q^]�d�b�C�HN���{Ii��]l��	��M-����!��[
�;}/D�n�'�\RY3 9��@�X���R�F^k�&�����zf:@U:�u�������}�LVH����;���:R��/�w��K,�b,b�u-H5��<X�y�
�o�y$���}��+���v��F�����h�����-���>�sg^�R�!�����rY�&,m��1��|��h`�2w���V�i�����r�{�k6���E������]��*����
�K��Ky����f�x�����������sM����c�<���FlB/R�m��#F�F/��� KFL�R�@�G3*�����=WD�"'(����E��^���A ������o�) ��Jybp�>�R��Lw��.;��d��Kf��P\:#"��rQ����/�J��!�oG��J�	dX�R�&��x���I(g��?	e��&TV�VB��!��$�e+���7G�����vZ��m#��~�=��%�����������v����
n�����2�P.NjD��J�S�Bk����'����+m����eQ�G��Q�l�D������W��Rm���!���'���_������D��9���@Y3"^_����.p��*^��5��U�|���x#�aA��*�{����1���1�:p��,�w�B��9u��oA�P�D��XA$G:�	B���hk���<���Pp#]����,i\(�	��HwT)E�7)C.)���������D�
z�7n8�7��e�"S������)e������o��IA�yotl��FO]�����(j�F�A��$��x���i@�,���$s��}g�x8Z�l}g��?�;���
z_TM�mn}�h�P�eR�G���Q��;��r��+���q�@�1�8oQ�;�����|]�]�L��
��v���L����+�9��|�u�"����h+���_u� Rk�BJ�%�BX#����I*�zq�\�B��CS`#����~P!��[R6�!���h��G��I0�=d��[;oa���B`��%R��!4��d���f���<z���+�����}��khb�,������;1�/�o#����R�=�����{�,6\��q��N`�2caU"��U��>sJ�N��e����Hi��������X��X��1����q<i$H��:��T�Ei�=���cd������"L�lOg��%�h*����,'\c��Fg^�Z8V|�?,�i�Z�,�"�%]5b�r�k�<5Yj,R���gTR2�w���W�����j�lmW366�x�8��`�FB�*������"D��+�����\�4��SD��S��C���f�o�5	�9��dlr�!�|$|T����/�������X��B�D���}gX8�W�/��91��V~�m��������u����;��Dsx�����G%2��M�JE��g�,��d�|�J���T��2��
@��lgc$L����M%�hS#�iS�G�44�\��?F�ttM&V�c��x0����%H�p��s�5&WZ�����<����*��#Hc*�����[w�.52��g��Z��D>X�n���X�6���f���C���\�Xc�:�%�^L�g�����z�!���Q���,X����\R��k9��4���3�z�cHRv�i�F�}N\����cf�=R�Hi�(�d0e���d��^\E�N�-*/U�����L6�
,��/��������:���q������T.B�q��� 9'��(�:���2lZ�	�Hak2���B����t��q�'$S^7:�5�����,��PSv��5O����DA/� ��S�:�

}?������h]=b�Um�D,��X��@]�'15vU��S�:������U��������T�oL(�#tz�qm��pf�&��au��5��AW����P2�p�)�K���I��MJ0�U
S("�c���YZ�1�|��������&�_~���m�J�H���q�����y�fs��-��:����8sH�A�O^��A���S����Jy�^Fu�n�R+D�:��R��h9��!S�R��)5��vuE�c�4"'��qpE���}������x�I����n^���l$ �:>0������#e�i�&�	���\v��1���������nCz/B����@��/�������TD�"�"��%L���\�����<8�(d$x��������C�:�!A��G/�l��Q�q	Z�/�`S2��c%aL?��`L���^�@�#p,]&����H�MP�#w�5FAK������p����n��.��`�����d�HMq���+�1P������FJ��<�����c&*:�n����"�J�Ap�~���=���k8����q�R���J�']�t�Di��2����J�������#�Y�H��0���T��v��R��R*��(��[�q��
���8]��a�nj#k��~.���!���g1�����
0c3���B���x��d��^O�R��.D����L7.'8���k46�H���q��V���
�R�}`L�e����Z��ar���w�eA����{�DYn�r���%��R*sO������1���K�H����#H#�aY�H>�����"�w���lR�2������{VU��=��lif�/������u������!Ky����\��{;�~I����p�o)mW#���6R��]
������D!�q��U�8���v���?�vA�_9���B��?,��W������=�f ;
q����8�������Z�����a���h���s��{�����x��z����"�X�=�}$��V�,i����
%�o��i���JTp��H��2rh/��������<$����9l������,J�O��%9,��>8�1��`�Z+/Y3��(~F!_8�v��R���^�3c�QO<����i ����&�6�����"~����3V�iF�[����*�"��r;D1���q?������*��#�{8v?��9����N�#T������!
w`(�e�����_Bp2��od�g}��B]�")��+rIi����NBG#�P&��!l$H��n_G�K�9�����&]�!"�iD��^W��P�e�oR�|.�2���NhYR�X!~Y��lg*�R!�6��}�b���zKCBX+?�������������D��8e[x�W���J����A#��1���x��E:����J����WlJe����?��2��h�� "��}a!��+�V�D����D!a~���C�������w����6)e��M�Q�Jv�>��B�/���X�S��3�9�F��`��}{i��E�Q�!��
��
�pry�L�����R��$����FVA�s����l�������qq��I`'?Y�5|��_�!��h���t�v@�+���p���f<d�����aF`��9�6������NN��?�Y;vZb�?����2�����+��,vE�=b����}pfM��(�P6��\d5p�G��4���v5�q5l:dz�l�f���bH��s���$����E�+E&4TM�����:u5��zD���7��(�x�EC_b����J�����&���M�`�M�*��%%���Y�x�/V�O� ��{|�2nd������i�uII�Z��i�����S8�YO��cF�i<�0���v��@������+ �L�K�+������~4]<
}e��gu�������%
g���VQ�eA�%��vJ�#E�0�9����`�!����tU���l��fU~���r����N*�>Ux�]k���5w���T_a�������b���iDD���������A�!w�!�K�=C^:�����l{_u�9��Kp�24��k`)����#PT����Z���(jL��YC�)�����e������RB�R�)���;(�2�#����<�:2���gg��El\��1A$�0�[
�7()���HYB��::Y�$�����������uc�����bT�p��Z���Qa�_AS�o�����2�P%D��%�����n�`Tk���LG	Q��;(C��B�+c�+;���E�@S����W��)
���\Ib%����A��s+"��1LA�@�][��E
?z@'����F��w#k�����c�]N��]M���!�}t�60c�_g|O���wV�'
��lt"��z7t����0��&�����������VI��h�$�],t�
��G'���4&LX*d�%��[���!���>f4?8\�s��)4�X�1��P�fv�w�U�% �"s�C��v5��,���>�a�
I�� �-&4�q���P��H��^��vY���-����|�������8`lb�?hC�4%<8��06�'�P34>d���|z����8���t�&��R�4u��u=4QN,\�����=H����1��	yrt���D�Y���aR�G�et�nOI���=%�>x�1(��u�s#5 ��|�Rd�Rwx4�RB�R��M
%9��;:y��L�d`#�����d�	3�)~���]���"��$}|���Sp^G���_��.�~L�>��|}(P���FS��R��X�����M	dwZ�$��[��8;�'%3�e�?��yJ��Q��~�K	Xf��}�t�����pV�������=q$�_�/��C����o�����Y"��a"������=,����p�L���](����/�4��S�s��c�&�hl"B��R�m���9)%��F�I�����bG1�F�w�J�}�c���vF�R�M7(%}\c�Fer"���;��r��Q;h!���I)�\O���B�v5�!�����*��#����>��A�i�O,�>����q#U��������f2l�FC�FJ��Ja�2��������w�_qe����(���-d���k~AJ�����;��
����#|�F�r��8"W$,�� @�x�v�wE���8s��FllFbz���i���@!B<��Z|�B@��6t��1?S��1c��JI;��H��l��U
�-��-e��������(d��n�A�l�!�')����j�����K,�����j,���:>�+P����'5����~W�A��$�
_A�HX+�[��nLb-<GHza�x���
�����@Y���*X���,�O�����%���4?�����X��D�j��K�� G�%��$�@�h�X���������M��FT
�-FdRb��E���q�����1�w�jH_�����Pv���!.�����}�d�q������\Fh�T{���5�a�Y=�Lz�x�K���VS�A��/h�|_�/��K�{�����_���E�"����Qk
q��}�hX�}���g���4t�����'�:�MP�6�:�gS���GP���F\iff����2t�5��%C�H����c-]��2(��bX"./�4�D}�{
��������U��Q{���o�p=x��o�
�� s��{I��T���^C<�[k4)�S�v:������APk�0��&��CJ��� �
�r�E�S~5����5x����/�,�`K��cgd����2l���r��P��F�D��z�xY��z��������*���w�Xs��N3*t����q��������h���F�i\Y����4�$UH���t��:��4\�����H}.��H�V��G����~V���5�:�uZ���/�d��i��f*[����Mw������+J���8�}�T��X7���)H*�	d)\��&"�4 ���`1}�gA!/8�.0B��X�F��4�SE�C��F��Yr=��<������������qpec�~�)�������r�E!;S
�C�IVi��|���`��i�&��^B��S����"�4�&�';[t�A�%5�����5��nw�Rpi�F�3~�(e=���M(h��Db�(p��c�����5"dg/F
{�k�1B���W��g)p�W6l�b��vM�"�F�2;����!,����R�$���\R*����sRf���A�I���NT���-��%��Gn�f���igJF�b���KX�9�=�P�H�O��`�#���F��%��1���v5B:l!i�X�O#�o2����Q�	��\��4���}��;NP��J�X���
�%%}t�P��D`�Q9�b&�}p2c) c6i��Cr������A�m�J�H��!T�J�R�����f��4/�
����y�Bt������X���]�x�WJ��0���n �����W��Fn�4_2k�����cb6�eK;{�H�P
�k����B�W�
�1����5�>������1��>�Q6����7j��t�G����@��3hc���t}����Q��C�3�U����z+,i�����_oh8���B>�y�����z�"W��>s�X2t\�i�?&��X���A��%|��)q���b0)a��G���7^�Z�_T� �
=���(S�%/�����~�}������(��yp5 p���L`2����'�==W�F4lD~��
�)���L6'��H�)�^�4y�a����h��5$���9� �K�T����ae�2*�{�LLA������
g���v�@���A(�{� ��0*�
����H/�?&X������)���
}��|
j�#����@b���4R�
C���d{��6e�?o}[��D��g�o0������Y�<4}����\���44;�	������2� �$R�<������)�u�K�^�i����
_A����1]5��������L2I�O-	�K�JJ�8�hD`��r�-HX#"�ZT�~b����RB��p.X���,c�;������G',dK;�W)����;�`�2F�U<t����_S�������!5"_���������5�4�R�mIIW)�%���q�ff����.����Dz����$~L�aJ���aR�%�6CP}FIb��������'����B��5�����R�:R�6�!;���4JZ�;����1�����u�'�<u�	��AkL��:K�o�p	Z82���� �	8ec����b=�a����U�!��
��:R������{�,6�����&�=�32C��>AVx����S���	|�����-Sa������.�g-�bm��I2	v����4�6R�3���(��v���|���[)����@�B����nt�L��j�R=z?�o������a�����
0u�����(�Y}o��[��oU�W����1a�\md��<a&���^��&�r)�2) �)�ypW:8��A	K�/����h���A�2-����x�o�/�A9�!���2��J+P|�P���@H�j�}\��=�0k�S��KYldp�G5D�S��$&�U��<8��pz��2n�f0G�5�����5-�D������6Rp�u����J+�2Y�I���������w�����N��O�WKh�Y8����.�q���5��F]� �����+\���5�!���'Bo^
j�W���M�ed�2�����c+��)�=7%Z�_J(�����f��������#���c�O0�m2
r`);v@�k
;� 5-��j`R��~���{����HvBBt�f|U#�Hvo�`J�eA���[B8���a�����������3�K����f1���J�$��3�"�����������u���9^
�0#��s�Y��8D�-$����ype-(��M��t�4rv�5��.�0*� Q^C"(]G�O�����
�������S��MB���������
��N��R�G���&�|�71e�2���� D�;�Y��������X�O�q�����YgdG�m,p�#�X�t�eZZ�r��b���^�#��X�������+6��
������@�|p�b�%f���hQ#hIT�!���Z�64;mX+�]�
Z2�8S��8ib��u`�Zzk�F���G��M&4����|����������)��W���,�9�\���h�v�g\���jV��,�����	Wu�w�����
0	Tv��W��qsm��.���C�����N6r�R��1�������4b�(i�jP"jO�dFe�<3:��KI����������z$����pm��>Q/���	f������u��!^�Q����=�\�Y��>���4vmG$B_4�Z�"���GI���xB���FLI�<�!bt>��S&.��v�jGa;��Cu#.��nb�Q�O-����s�mg�!q���|a��������0fAuJ���2�"]G�K�A|�JD-~��(2M3
��ap��B���9`��~�f�c��W����-\�od
�S�O�X����	���M�>n�0�nb{c�f��V�+A.I�,�&@�����SV���GD��oX}Yu��1�F}s��X�%keB�R�M!�3s�D�L�+*��"�j?�50��LH
y[Js���\A���v��F��Ha:\��a�]M��tCH�8��1`2��I�|��i�?q����mPDj�����_�2���J�\n�/)_0�3�C�����������R���$�;����6R���9�r����)>H��Tn��H�4�<d�"d���m���R>��
����)��nE�@N���X2ipXSy3�g�I����>o���ms��G�,j��M�}|�����1!h3��1p�W�b�C����y���w��C��R���W-�lD��PS@�����
�B�U�?d{�������L�&e��M�9��3Q���+	��f�.
���������D������i�
������q8G�M[�$�#�����atUxHcbf�Sr�%����i��<l�2}���`�!f��i��H��s#���`^K0��6�7B�1A�������U�/����BN�r��W@�s�<��QX�/SV��������������k
��\X�/w=�OL2Q�}'&��cM��P��8��!�����!AH�zdc�!��K�s 3(�I�-c�%��H�{��7����<���� �*����|�N����L1Bh
��x_71��u<��T��L�������f;������m������n�>@���(��:8��B�b�v�_}�?.]Qy5��zE\������E�5�w+Z�
/��Iu�����__A�m��
��K�%��z$}�Y7���*���jya!/?����q�@�2}e�������]M��F�/b~����������5����(�-|�H�h������/H�v5cc�b�>��pq�4Rk�Wpj���a�9��yv���q#5��3���@�!�flR�\R����&6��A�CQ�u��;�F���5�1b��Gu��jNF��Hi��bc�n������&d�E�'kE�h�_^L=;^�~l�x�K�����,�$�j�.h�G����A�`)V���-���W�BOV�a{���RO8|��SD���"�\��r���f����~��}����C����Nj|c����	��
�8j,�8
k��8���c	���R��}J�*[�����O�Yo_2��;���R�)i��h���%���>nRP:'���55yl��F���Ek�@xH��|������v��a[R��A���{ *��m0x`0����H^��}5x���4����X�6|
#�x^���n��=P�o�Lby����0w?rjT�\8����`�m"�<�-�4c#_)#������>��>���wR~8�����T�*�H��������)�Y=�U)������mj�#�na�{j��!kd�A����V���i5c��36���`���(�?�r|`N��A��@P�����i
aM���~���$X�6���~���F�r'A.H�1ln�Z}pc
|�R�c��!3�����%��iWC�vp:�>n�0c�t����A�G�`X��Q��J�%��S6�&?�"1���L����}��*F�%,4�}��W�#�S
�Kp��+�����VQ�eY���:�����|�}D��������w<��d�����L��!���L�l��1Lj�
<���(��ML�x4N��3Pkgk��C�}
s���5~ak;#��:)���%��K����=g�^��]�C��B��7A�r����l��\�����F$lW#66���^b�2���r-����c*�H�L)�M%��R5����������Y
PD���x��S����#�=KG��8��A�Q\����S���DT}�r�xQ�%�3�lS&L��H__0R��y���VLk|�*/�D��|m;���KJ��W"�A1����H��c k��v
�
c�v��~m��-��RP����	��.y�;`\V��<I��>[#!��u����sj�?�uZOD���{�1�o{�T��QV�3� )��(n�u���NWP�����b<A���
k���7����
�l]:'����<�1yp0��(�����<���	,9���"(_��1#X������x� �l05D�V��{��ph�j��K�P*S���!�vlB3>~^���� *�VrN��(V��g/QP��������2�)��.��U6��_������T�<���0�4����-`���������	*�
jG�a�J�Z����@Q&1D��d���!���@KX�t|�FEK;�)R�1 ���K�p�����E�-2e��d�R��a��q��j��Wr�z��������$,c��%Bgz�����jSf�}jl�2�T�����R�N36��2B��d�9o�6�����kb5"��	�L�@��c$�}%V��]�y.6��	�������hM��ye���ja�$rv���z�S�k4E=��C���J>��"�����A���z�c�H
@W�(���Bg�
r3�����%�,��p��.GMlbf9�?4������H'K&"O��o)m�HYl���^'`�����5������[lB��1�7�Sa����s=P�������Py���2�%�J�~S�����X���������/J����J��F��Q�L�H�f�V����~|�j�B,HR�K�mgE�b3 ���h���6���aWj�(�6�
l${l|R)�_�!lm�HYl����m��0�Zd
�(j��?OR9���lv�x�H��k���UphM����a�u�����l~�	j%��1?�&F���T�B��5��)2WAmXcf38���%��`,,���+�{�S
)�8Hq[k���O��1�fo�����|�S����������<H(��6(�K� ��;�:�75<�����@b1����F��tt�������wk.��R��Z�f~������s�!��b%e��(7�I��q\��t�$�W�(���<88���D�g�a�T�
Hf��,g��h����MglB/c��fb����!��WIs`��������������}>��
k�K[�+�]5b0��0���Q�7�����>Y��F%Y��_W���(c��N���Q�9���E�Q[n�*���c� )pU %#�E����>�\�l�tx��
�
')�����P���HQ]�G�������Hz]L���p�J��O��&�2��������wI��r���P��A�Nb'�~�6�T���6f;{��a[R��5
�`�9^}�-a������>��������R5�7@������f5������j��!Q?;ZJ!�2��������=�x�:XB����������VYl����/$�/?p,���Pu�9���Z�I$�F�tP��}���b��.(�&�#oq<$���I.�%k�l��(���3�����
�������w]&5n�T�Q80���=d�����i�K�2E���~���xe��+�LK�������L���=����7��hQ8|�m����5�O};��}3�liW<Y6mx�8Q),���p]��er����H���
�K�H��S8O���v�������������b��e�U��*H���p�7R3�S�:>��6��.��.���>nR����r*��y��@r�Rx�8HU���1�m����T�0�c	���F� �d����T�sJN9z�F�(���������K|�$�^VK����-�L| G*C��_�e6�~��R����2l����>nRh ��5�?4?����4�0������5D{���CCQAm�
Z���>�d�$n �2&'��v��6}�+�v�5&��t��cZ+���#M$�q	����H;Xy "~�����E��*<dM�t��^�+%�;�����u��8N1���dc"�2Hypp��;���K������U!�{���8����������Q��1RB��H�A���@p�LA��w����f �f}x�a��4����FKt�i*+�F�
Mguv�X�~�`�g��5��5EK�5,�����]
y�GJ�]~,C�����]r�y�+||��QH8Y�*^~l��`�C�����w|�	�=������_��H(��z�629��I��BJT�#�����#W6��-;��=Me"��6�W�����@����u��j�~���$�#U��2����"X
��&�ps�S��}�)8�@��oU�1���SH��9��1m�86/��h���	�����@��q�P	�w+Q:�4w?v\_��G}��
w#�?�.� ,}�\2���e����ks�#���-@������}����/iWll�	������^��si���=iE�����	�������Yr&��e[s�Hy�j��5W��rM�(�F��5�|�M3�`��*\(I��v�x��	-����
�0��&hA���������/��f%E>�q�$_V�����j��6���X���������KD��Q?�(���q�����r�|,
���e(#y�GD`���~������4����QQxP���w��o���bih'2�H>���A����&
#���D�P�u	��C{�����<���my��9
Y������i�&���43�W��f��BLH�IG����������5))I��H��h�����Q#:h+��6<�/��x*�Y#�#��Yu\��g�Y�ll!9B��k%�s�h��N����\�t����#��F�$������~�2lKJ�8�B<�xNB�w�)�?2.n�ik���N0��c��L&�$�^j�RJ������v��*�N�A�>0v@3��c�2d���{�NR�+�����+66]�	�qpW
�1��P@��{���|kqq#50�r�0U	]��R��)x�>��������T�����H�U����f�E��}D�x@����Oq���60�������[>0y��!�4K������r�I~-f�����N;�*�}��R~v��c�C��v��������wTu�i#�w���=�al�W�b\��Q�9L����Q~��x ���}p���5A����,j K�>%?�K�O���gi)�M;�>�4������u.f���=�Xp������>%�������C�I��8m�i�)�'S��B�;e�+E���F���S���%d����a��HIg]p�*�w��
���T7C��Z�����N:"�E��;V� ����-R�a���������~����#:C���9/e$(�1��L�`P,�D�zDD�A��l�9�XQ<5��+�$\3�7:%6{��_c��mh$v��VP;�C5����p�I@�.!�
WA4R�*8$gZ
$s�<
�bqh��R�mI�?.=�L)�������P
}�?p��Oa��^�|`��V��A1L6���J@+B�W-��LJe5��{�<�E���07���K������\GPn��Qo����!��^���

Uo��a,�:47��2?k.�����ll�!�>>bR��Hj-��_{�L3@Q�JV��	�Y����eb���5"F�5/��LJ��]�	`,o@�03ee�j��lt+w�

�MP�WI~���}=��du-�-���Z���:��PlJ������)_�v��2li�J�J!����x�R��]`cE�A��k#0J%�U�jo���jK��[�����{�,6����Xj+|`b�����w�F�cDP�dc����HA�]J�%%����bv
Tu�.�A���k,���/��;�
~����������aLC�����tt�hE�Jd��������&���A�#"ZT]���)�yq����+e�"%C�>�RpDk��bF�+�X3�?ne�:�hJ�j��5$t��:�u��p4�����j0�%� 	�;B�BSR���';�1�(����+?n'#�J
B��	����qy$���K�����m.Y�+�3e��|�wG��sU li7�,����)�9ft��0aR^����lV�[L��{���,y��cm��R+��h�1�B����L�B%��_J��Bi��
�FF�|�3�=go�-�L��|���1���	H��8`=9N��������&{�QKV�H?7
�����?+������.Mc
���G���Xq�PXjeN����
>���?E�������CDSD$�@����!&��U���)"}��1(�D�����JE���Y�6��j���
��>�D
��'���1��I��7�BQ2�+��P5�/Hb�D�Y
���CU�%���TRG/@�,U��~c�$X0�R�OIG�$��	+m��1L#"�Z��l_&j��q��:��
���'�f�T$�R����k'����N��S�Ahv=�-��K/�F&:�	aW�<D���P
|����
����$���4�<a�Hi�Fdc��M����dZ���I��ud~:�1[�HC�>E���"�f*��9L��"������m�U���T�����h�i�b�)A��sF)�������U�P:��8b�!p�:yp��'��L�.�J��y�y�%;&�a�PS6:�k}��A��S����A�i��g���r��F��%���).9/8��v5dc�!��c����jx�C���sDXf{b��u�a#k�����EJ�/�M���i���Y��v��g���5B��=��������wR#�y�r_��aK;��B��R��5+P:'��;���������OB8�j#k���c���������z*}t�����Xp%0�����C�B�hB�6:���zqG���q~����x�%�}�c���k��X�����R�I�z`F��~�V��������df(c�����8�5"�����������:Dy���[�,2��Y�7En�qU�}����<d�9k����1&8�1�/����\��:�`!��x�� �=!>?����6�)/�
jGA�$�~�M]k3f�`U�iu�7y��n0�������a��	]�b�Pc�y�eA�
#k���C�������j��,�&���TM�3�jV4~k�����L*�����ri$�����>�����w������E����%_d}�-��4)y�0l#%}���K��x|�����BW0����3���v�G�^����9�F��e�#�j>C����d�%�U���2Ebo�����Z��B���2����""o�~��R���Va��k����o�(\�_J�=�Q��^C�&T���WD�����6*1~B!��Vs�a_}Oze������=��/,6���H)�6,���$�lZ&�a��d���L�����}�:���4������K���H�e�Z�
�*��V
jD����R}�x��yeZ����"n[4����zC/?og������<�:����@`���Z?3��
_A�Qo�y;���-Kw�h��grJ_�)���L-��t���X([�i�F����}\u��?c!hL9�dTX�F��f���U8I�=/�Z`��R�f �}tQ��7�����zb�a#��1��!�d��Qag�5��`}���J�
I���`}#�����wPh������8����A������
�%�P\��t@)�h$��4������&A�R5�0�a?,��f5`Q#�(�R����f`b�\�%V��nrL���%Q���������a3���:]#F�p��#.��:�P���a#5BT�r���a[��\Rp}�GWI�Lz�G+��L&��!��L*#F�Y@�<�����`f��R�Mg(%}\��u@(f�{�� �F���w���j���)|k/�Pm~E��Gp�����<�o��= ���b��������A)�,mTrw���s�9�bZmC�2��������7��[B�L-���T����W�"SM��^�f�"]���In��&�L ,U�%�� ��('(��;'�`�a�F�
x2&��U����s#uR�6���&�KB��� ���Qp��R�����v�l\}�+v)!_)x�C
X*#�

mB��=�A���I�j������Io1��Q&��</��WJ�/�
�APi\����as����J?�7s�n�-��M��-)����"�`[�N)F%
N��W�i��d�di���v/8���v��f�MJ�*%��~��F��t+,��I��;� ����7h����z �f���z	�,7��1���E�x�����$l/���$9N��n���_��0��~�fdj�vT:�0xN�F�
�Eq%(x�w�������O��C,�����w�"�����x(�{�:�������t��_3���T~5��ig��2lKJ�8�e�����~xoA�5y(Z������lGi��5"�#�rY�Xa<M��f����a��|�j:�tv!�J�M�0��]�%���������A������ ���q�4Zc�5c�}����rNc6�E�4����������1�C_C��$�\���E�w)WHot�U��^e��W�E}6)���~�(�r���a�D^�q�y'�}����O�0<�^)$�l� LM?T
#����|�SQ� %�ua��]��d�U!���[C7H���RA���WP;:hTc�����y��*@�1u	f����9��L���
����G�T�Hi�����j��0�����$��M�c�]q��e�fuAc���O]�������]��i����=
�u�/-�]||e-���lk����iY�
j�Pv�[���F��QZ2X�������V+��Fx�*x�����%|NH�R���2lK��1��h����X�=��XVF������g(M��'�>�UD���y��F��Q�$&��r[���F!H��Q`!E�q����w�F�9G�|N N{��
_A�Qo��"J� ��,
�HCr��<��;<c�1T��B��u!"(D6�	/��/�`3�:�����;����i�h}	��*��C6�1�����;O��ak;��H�.����3�Rla&^��d�o�i����Q�.��-Qnd����e�_G+�D�n�����iD�|�mX�H'4w�9��n�*V�CPd���1(W�~I���I�����D��u�J[r\z��~���E�%5�Bcv���
H�����
�%%}��P�PI8}-���_�CQ��<��d� �j��70l��Z6�X}Pl/C������zjD�*F��u9	�������r,�q/J�������>R�����U��H���X����z_TM���}�I�f��K��$L��x��>}w_�����Z�f��l�!9��{N)Xi������M�%����	t����5t��A����Y%�
c�9����0��&��L1#(]5"�f�������S0��U����y�Q��
�w\/u�i����W�����������g,���]��U��5"��������<������^��W�iKo�J_�Ta(i�`l����T���y	��v�����8��rd*����[��y��t(�"��~�v.Ve-�{-uV�0T����8�����ZbH+��Y�|D!�����K���yu�C����1�l������q�)�`�3��Rz�x�Z61Ez/w��~�Ba��.��-JM)�tZR,g]p>IS1$��}�����we�$�/E��wET� "��AQl�""���������+(�o���xqm����2���;�? �k�AJP��>BE��%�K�6&��^'<dL�������-Q
ftq6c,X�8Ud�\ZAm�
�,A���*eJ��$��JC�x�E�s�3R�f�//K����&,��:���F]@)1��B�����W�:���2�A�J="�������f��R�����>DP���YD-e�'*����,Q��;]
���_Q����5��_A���u�`FM,�P�6����q^����J��^S����A�X�K)Y;
q?�!Z����!����{k�|Z�5�PtUw����&�f��a��/Z\�on��������	&��X2��2t�DN����*(���5D,:�T��Y����p�N���Cc��2��:wG~[�Fz�����4R���0G��9#{n�D���s�_��Q�owL�bJ�N���:��+�h������q2���8=+]S�L�4�"]����5
CO���g���V���!���������r���=l�r�)}ua�!'��H�'������0��j�Xl�+%}��1}Q�g����T�3R��M]�M����E�s-�\�m	GD��tJ��a���K@D��{�<0���d��]d��.��#�o~:5������Q������a�UP;:��	��"�xN�F0(NQ��b�0���1;mDr��e>��3�8�
5f�7A�(�����w��������r=�<;4����G��x,Y
$su��M4����I6�	�$�41q�I��A,3��D�a0��1t����w���u���a��i�u�OG7����.3����4��*Q��[��r#�H����q?�{9lm�HYl#�zA�s8*����������d��%�0����
��	!����:P�6�C%��##�*�����(/&�	���=��=����Ru
?�&�C�k�]�T]��"~����:��J��l�AE{��<��j�����
g��;;�c!��3
k��D����U#T���?y�J>� p���o��-�����d�%Q�<�"��)���lB�����������mK�}���!���s�\�(M��#�s4D��R�<��flXN���f
e<�{L(�����L���]���D�V�M�lc�&�I�Tb0;^!������j�����}Q5
������.��D8L��xIg�~D}�+I�E��&�������U�)�|�����Fl$�P
5���z����l^gY-��^��[��E�vOu+��%�p�/�����+e�4bz�����C����P�yxH��[�2��U�2��'��������e�����j��0������y�X�	����s>k����E9���K�U[�I�u��7yp[t�2H��������@���st8O������)�`yZt�{c�n�1��Q+!~h;
��a�5�oK���A������&�L�������d����==�j?��WV����U�E&3<���U5�����{��v�\��Ja�21af�2�	���qEj]�F����S�AJm����j���OW)X=��� B!25�>�����H�����T�%���a��Fl���.�q-sa!)�)J<*u��Q(jg3���1����#���fU�e�K�A�De�����|�|�����D�h����l.�����o:�u���5����+���{`�o�/�j�����nA&���4,�����5Y=�4�R\����>�����e����T�c��bD��I��w;���m��,Uq�A�+���Q1�[y���*��b�o/o�����Fk������jq����	t����4��|�����@8�Ia �pU��;��8�3/���A|�����ZD�Gl^�$�����q����a#��4�� eE������a��Y��I��BP��kV�BP�+���,���N)M ����,��L��x!oQ`�r#�g����N������o{)�����G
���	�������M7lB�9������%�^�J�������|M����S�����~|�R�����"%�F�q�3M@�=���@���R���9���|8��8q�I�#�P��R�u���L\�m��}��Z�<d{hy\��RPz�rI�W�������L�K�d��!x3�Dz�X�����V�����[�i����<w�*���BPOGg0(���{��D���Y�!����g��e�f`��������.GH�-�0+f�`��3����/� �nU?�����q�x/c���Q��4j@Yo��K`P�Pu��+k�&a�hJS>BkL�����`&
MCU��,}+h	�$~s��V]�a��A������aUA6v
����w�{N�B��{�,���������@Q��Tz���"+
}�?����W�0?��ld��j��������`-���T�K���4J��Y_��I��6��!>��eh	���f�A��j//�X�RZ7���i;�����t��,��P5CLi�C�����_/��#b^�����\)<H,�����EF��?E�B*�������Of���*���q��vt����I���wt�V�����H]�E�6:{wa��Q)3�����hP�+c��E�����]��Hz`=�+CZ�0C�I�G�h�������%��X\1��a<��!\���D�g�I��!������+(
�X;�
jG�}bB�J�B��������1
g�d&�NW���#,T����+h�S�$���I�70V�SD�L����q5�8*-2��i�U��@��)!� m�88v��J�^�
�c#���~���Rv�|�	,tN"�7L�t���T�n���:�*o�����"�!�W���^���	�t��ky	�<��x�R�U�����)��� t���X����\������z�h��F6B�w�y*��mWC66�x�8�#������n�@�2��W>����'�������Y����a����q����J@�I���yP?����:/����N�����������G:Hc��?�xq��ym:l�3�I�:�D�\���?�����_��O�����!�\�����<w"������:/����z����uS�M��FF}?E5>MN� lm�HYl8����Flj�a��X�4�>0��Y���T�t�!�X�g���]�`�_l�	=/?�
s���H� V����YN
�,G�1���\�0������m�);�
jGht���+��Q"�g�����Nu���U������0)"T�p��j���'����o��R�3�	�����X@��9���@$U�#�K��&��M$������k���5,����V�3Z�$ZCxkI�GA�b��i�)�O;���	���,�@<�������#�n���V��J���g&�N�I��UOp�7��Z���
�
��N(��+Tf:���h������2B����K~���`P<&"�����b�735^A�%U>�*_vy���1��RB�R���������G�����?����w��8g�\T;���S�x�4����(�^x^b@2yP7{��(���Y%WhT��i���Fj��0����~�V��6���\R��>��d1#������(�C"5A��$���v�_%��4���]J�WJ�8��`R�L$�����f��>QF�f_�l ��������`�@\�)��P>/�;3�t~:IE���m�������kWL%�	�X(���(�n�
S���#e����_��r��R��(�oc�,w&��+:�}���F��T6�EA���F�H�M#�}���)��+Z�WI��h�����
���	~�r��^M�F�eR�%�*�	5��2��-R�{�A����J�R�Ob����}�)�T��FD�sF�m$�ei� &�y0q;�pGB����\�Y�H��m0�6��6�!;���_�@�����[fT�#WG��s����5�A���%M#W����<B���73_�'/������ypP��(|����dz2�B
A
F�;
b�.6��%��yFZ��A�Q�:V_\�u-�8N���p� z��cYW~Z%tU���w3al�~K��O���W�J��.)6R��n�n�(������a[R��A�:�k#�An��~jWP&����58k�zp#�E�}���_��v��a�l��������q����Q��aT(�����_�UA����tP
�L���3.A�� ���:6$� �4Xc:�.I�3k�q�!��H!;d���9cn���elC��f�1�K�|�T��O���IcY3���1@K� A��C�A���}-V�WJ�]����.G����02,jF�g�h^H|�N�G�{��&�o�:��7A�����������Xu`��i^tb����EF��� �	�K����
#�\���M#�u���F�A����Q�h
%56��,�}�%��(m;�)�M;���c����u��
���hPO�b�i��9�|��`'5����u�3ll
�N#*t����oH��T�g��Q��X��{b�50��YMlt������gp�GT���vb� w�Q�T1�#�un|9�
�"H��]���f��'��j�I�������X�2���Pb�?���9�W��O���~��(�q>�~K!H�����jE��Ja�I���+�VOI�`XF�"�]W�R�om����u�5q�6�����%������qb"�S��3$�(�A�t�D��wR�$���G>DD����)!]�?b���W���J���<f����Q��T������A��_�1�8y����t�c�9�V�I����e�C}&&�o��F�q���n�_�7Rf�|I)�n��p��JY%�\bH�WRH��X:h�u�K�yF~c�OoC�}���M����*B�8��=�5���9��������^9����i�D,&,�F��Y�8����p�R<��JE\YI���_�w'��U�_=������4������/A��QB��"� �2d
@��Rcv����������\6��|������\��W�*��5^Os�_�
1�:u����E`(�k�Y
�~������H����c�>�R��LL(���y����d�R@,I�S
��@h5���A�����1���� j��	=�d�����1*������g��\2��v���=R\��hv��BT,syP�(���FD���P�$=���~�b���f���b[R�}A�$��"�����Z��������!���`���V*�I��q��e;3FT/"k�����SC)�2����-������vY4�dbY)�e���P������C��LypM&A��4Y��5��Lso��@�2���)����s�#q�]1�������@���m���z^��I���}�1�K���\�����1@�c�W�>������)��~	���#(
���~x,���N�����W!��d��?�������y<�d��)lt�hZ���_��?�7����� �����xSPLWl��M��I����s���n%u�]�>��@� h5�l1h��R�#k0
��I�C�z�^}��c>dl����|��5�;��v��n<�5�����"���R��p�\�:��R���� G��~���t��|3ut.l4��u9��%A���s�\
��������t������~Q���m�>�x'����_
�������
�7';��E| �;Tp:����}~���uMt���H0���g����<��~n���'��~+I/���~'���ZC*�q��u�~bp�����q�Ws�2�(���y�-o��)���������cPOrF;t;TQqs������R�S���NM)��h���r���������+����y�>#��F��#X�x��v$+!�"4\!l7�� �9'�J+��Gt~��<=��0�	�W�i����};;tl�#�����'��/WR�h���x�����\��'�����������
|N���&2Eb���.G���`�A�zs^M�~�z^����Q���s���t�c�<��A#nx88������|{�|j��g�|#�6�>��j���V�����X�VB$�*�&Ul�7W_$*l�����u��PQ�������#�;|�n��n�z�����5��?��K��c��c`\�B%h��+v�v��goM����4Wi4Ww�e��q�*�3C��Gn��G�{�M��������"���-�K6��;����N3��������~e��������k�n�H�(�E9���]l���:��Z��w�O���)!n3���OC_~��	��OW�n���[��t�;�O�l��&��p�X��>c��~��YB��iu>�;�Z�^Y��N��K�=W��'���/���uK�GM�$b�G<��l����=���Z�b�@�W�|I��Z��>A{43M3+�J��� g�����K���~����xb����J&����I��*	!~��p:`u�C��wI���KI��-V!��3	NA�����5�����G�/����a�}���Z������,�Z��;��9��&�H�_��:D��5�Z�q��DU���<1�=
z�_|.�%��-�����O%$��! �v�s������fO�������>'�h����-���W1s�Rq�v6�F�Ga��gV|��C��\�x�?�sz_����Z~�"���G-�cY������[���-�D$�"���u��@5��l����������<����S�o`�I���H�y�}+���#&�g�c��8�5h�YTN-���������uO���m�b��.|cw�N�Id��������`H�n��2k�5��:~��R��7_a�$,w*��&�_/v[q��ry�v���"O�>s�9m56�M��x�������c��O���_�!�j�k���}��,�)��
��R��w�"�#�|�E���X�Z��Yj�I,��pK��F�e�'h�=8�m��U��L���7��S���g{4�uk����[����^s��)�O�<s�|D���}oJD��O5�F���)�\�s�dW�QY�5���
�Q�$�j������|�����o���;m��E}<l����>^~|�z��W%�A�-��h���V����
h'�@*b����g�P7�u���'�	e�5gO��G����4�}����
�(���TU$��u��������=#E�5
�p��/����N��h��������e���������
p��o�a���q�`���}�s�z������d�,Q���c�A�>������5�
���s���_����K.�2����d+9�����IPy����i�~����)��/���q&�����?�w8��e�>e��G�=��+�}�x��_��M�������N�N������j���_I����U�C����o�9U-~%|�i��Z
�Lw9���V���OZ��k����K,V'
W���'&���O��ze��������������Uk���	�"Y_�C��8����s+�N��X�5���Q^}�k�W�z/�m���������������P��^}�k�W�z/�m���������������P��^j�>���i��$,o������H�����V�m|*���E������A���������_����?�m|*���E������A���������_�����i���=��<a� %���[�%z;��j��+���m|*���E������������_�����+���m|*���E�������_>�D���4�
$q�/���G�l��d��J���������_����?�m|*���E������A���������_����?�m|*���E������A���~2|'{�l��4��F�7����7��Gl�Z��k�W�z/�m���z
��������������Q�k�W�z/�m���z
y��_�%^��.�����r�_�Km,�6�#E��jV��2S~O��/����hzc�x8kX�������eWR�`���Z(�i�leLv�)\�Q�`�q�h���gx{U�ukI'����$G
#v�XlV'8���*J�N�QEW�����)�uk��^mj�����JW���]
�?up��r7����O�����-2[�oD�A�&h�	�{2V���,G��(��
���Oe����@n���I�22)!9b1��
]��<R��_�-�n-���\����0Q*���>7c'�����&���M����k9�@��'� ����(��+��W��In5��(��2�oH�f�
���e�c�z�p47���cC��]H}by7���p�8��7`d��2G!�x��Z�������>	
;K�Pn�s�gv�"nr�����Q"��]J��E>\���e�����s�c�m>'�K/�4��� wF��")��V�`0M�O�v�;�+������;�k��{�Dk���Uv?x�[��T���utW�k_<g�j�v:g�n�Kh�;��x��p>���	<	C�62G��@��|A�����������+n�7��2�yUW��p2�_EV?�5�#E���m
���e��w�W���Id�q���[P���O��B��d�	bT�'�5� �1�M�n.7�j�����{�Z�][�����<Ah����	Rpj�QE�x��^$��������@���� ���b��R	��T�8/�~j���o��S�i:��n4�7y��4��f@����V��"�7U�
(������(��(��(��(��(��(��(���Y�:Iss4a�P�C6	du��4PX��P
>�_�'p8�$�5!�t������q����n�NMhQ@b�4�$���?���������5z�(��(��(��(��(��(��(��
endstream
endobj
9 0 obj
15491
endobj
11 0 obj
<< /Length 12 0 R /N 3 /Alternate /DeviceRGB /Filter /FlateDecode >>
stream
x�U[�U��9�
�����-�C�t)�K�����[��k���A���d��$�L�}*�����IA��-��z���R�PVw�"(>�xA(�E��;�d&Yj�e�|����o�����B����%�6s�����c��:��!�Q,�V=���~B+���[?�O0W'�l�Wo�,rK%���V��%�D��j�����O����M$����6�����5G����9,��Bxx|��/��vP�O���TE�"k�J��C{���Gy7��7P����u����u��R,��^Q�9�G��5��L�����cD����|x7p�d���Yi����S��������X���]S�zI;������o�HR4;����Y�	=r�JEO��^�9��������g�T%&����
������r=)��%�[���X��3".b�8��z����J>q�n���^�\��;�O*fJ�b�����(r��FN��X����H�g ��y�O����+�-bU��MR(GI��Z'�i����r0w]�����*x������u���]�Be�]w�*�BQ�*����S������������aa����,����)�)�4;��`g�>�w{��|n J������j��m*`��Y����,�6�<��M����=�����*&�:z�^=��X���p}(���[Go�Zj���eqRN����z]U����%tAC�����^�N��m��{�����%cy�cE���[:3�����W���?�.�-}*}%��>�.�"]�.J_K�JK_�����{�$2s%��������X9*o�����Qy�U)��<%��]�lw���o��r��(�u�s�X�Y�\O8������7��X���i��b�:	m�������Ko��i1�]��D0����	N	�}���`�����
��*�*�6?!�'��O�Z�b+{��'�>}\I���R�u�1Y��-n6yq��wS�#��s���mW<�~�h�_x�}�q�D+���7�w���{Bm���?���#�J{�8���(�_?�Z7�x�h��V���[���������|U
endstream
endobj
12 0 obj
1079
endobj
7 0 obj
[ /ICCBased 11 0 R ]
endobj
13 0 obj
<< /Length 14 0 R /N 3 /Alternate /DeviceRGB /Filter /FlateDecode >>
stream
x��wTS����7��" %�z	 �;HQ�I�P��&vDF)VdT�G�"cE��b�	�P��QDE���k	��5�����Y������g�}��P���tX�4�X���\���X��ffG�D���=���H����.�d��,�P&s���"7C$
E�6<~&��S��2����)2�12�	��"���l���+����&��Y��4���P��%����\�%�g�|e�TI���(����L0�_��&�l�2E�����9�r��9h�x�g���Ib���i���f���S�b1+��M��xL����0��o�E%Ym�h�����Y��h����~S�=�z�U�&���A��Y�l��/��$Z����U�m@���O� ������l^���'���ls�k.+�7���o���9�����V;�?�#I3eE����KD����d�����9i���,������UQ��	��h��<�X�.d
���6'~�khu_}�9P�I�o=C#$n?z}�[1
���h���s�2z���\�n�LA"S���dr%�,���l��t�
4�.0,`
�3p� ��H�.Hi@�A>�
A1�v�jp��z�N�6p\W�
p�G@
��K0��i���A����B�ZyCAP8�C���@��&�*���CP=�#t�]���� 4�}���a
�����;G���Dx����J�>����,�_��@��FX�DB�X$!k�"��E�����H�q���a����Y��bVa�bJ0��c�VL�6f3����b���X'�?v	6��-�V`�`[����a�;���p~�\2n5��������
�&�x�*����s�b|!�
����'�	Zk�!� $l$T����4Q��Ot"�y�\b)���A�I&N�I�$R$)���TIj"]&=&�!��:dGrY@^O�$� _%�?P�(&OJEB�N9J�@y@yC�R
�n�X����ZO�D}J}/G�3���������k���{%O���w�_.�'_!J����Q�@�S���V�F���=�IE���b�b�b�b��5�Q%�����O�@���%�!B��y���M�:�e�0G7����������	e%e[�(�����R�0`�3R��������4������6�i^��)��*n*|�"�f����LUo����m�O�0j&jaj�j��.�����w���_4��������z��j���=����U�4�5�n������4��hZ�Z�Z��^0����Tf%��9�����-�>���=�c��Xg�N��]�.[7A�\�SwBOK/X/_�Q��>Q�����G�[��� �`�A�������a�a��c#����*�Z�;�8c�q��>�[&���I�I��MS���T`����k�h&4�5�����YY�F��9�<�|�y��+=�X���_,�,S-�,Y)YXm��������k]c}��j�c��������-�v��};�]���N����"�&�1=�x����tv(��}���������'{'��I���Y�)�
����-r�q��r�.d.�_xp��U���Z���M���v�m���=����+K�G�������^���W�W����b�j��>:>�>�>�v��}/�a��v���������O8�	�
�FV>2	u�����/�_$\�B�Cv�<	5]�s.,4�&�y�Ux~xw-bEDC��H����G��KwF�G�E�GME{E�EK�X,Y��F�Z� �={$vr����K����
��.3\����r�������_�Yq*������L��_�w���������+���]�e�������D��]�cI�II�OA��u�_��������)3����i�����B%a��+]3='�/�4�0C��i��U�@��L(sYf����L�H�$�%�Y�j��gGe��Q������n�����~5f5wug�v����5�k����\��Nw]�������m mH���F��e�n���Q�Q��`h����B�BQ��-�[l�ll��f��j��"^��b����O%����Y}W�����������w�vw�����X�bY^����]��������W��Va[q`i�d��2���J�jG�����������{���������m���>���Pk�Am�a����������g_D�H���G�G����u�;��7�7�6������q�o���C{��P3���8!9������<�y�}��'�����Z�Z�������6i{L{������-?��|�������gK�����9�w~�B������:Wt>�������������^��r�����U��g�9];}�}���������_�~i���m��p�������}��]�/���}�������.�{�^�=�}����^?�z8�h�c���'
O*��?�����f������`���g���C/����O����+F�F�G�G�����z�����������)�������~w��gb���k���?J���9���m�d���wi�������?�����c�����O�O���?w|	��x&mf������
endstream
endobj
14 0 obj
2612
endobj
10 0 obj
[ /ICCBased 13 0 R ]
endobj
3 0 obj
<< /Type /Pages /MediaBox [0 0 846 594] /Count 1 /Kids [ 2 0 R ] >>
endobj
15 0 obj
<< /Type /Catalog /Pages 3 0 R >>
endobj
16 0 obj
(Mac OS X 10.12.1 Quartz PDFContext)
endobj
17 0 obj
(D:20170319071534Z00'00')
endobj
1 0 obj
<< /Producer 16 0 R /CreationDate 17 0 R /ModDate 17 0 R >>
endobj
xref
0 18
0000000000 65535 f 
0000254106 00000 n 
0000234004 00000 n 
0000253878 00000 n 
0000000022 00000 n 
0000233982 00000 n 
0000234121 00000 n 
0000251069 00000 n 
0000234189 00000 n 
0000249845 00000 n 
0000253841 00000 n 
0000249866 00000 n 
0000251048 00000 n 
0000251105 00000 n 
0000253820 00000 n 
0000253961 00000 n 
0000254011 00000 n 
0000254064 00000 n 
trailer
<< /Size 18 /Root 15 0 R /Info 1 0 R /ID [ <ebddec571cd3e0bece7ab6c2a5e7a4d6>
<ebddec571cd3e0bece7ab6c2a5e7a4d6> ] >>
startxref
254181
%%EOF
[Attachment: Moderate_AV_4Indexes_100FF_SF1200_Duration28800s_Run2.pdf (application/pdf)]
��]��&�F$�pP�x��2)�X���;��b�L�kD^���!�)E�������1+���)E�d,j�Y	���D%p1R���]Xnx��|�u��*�?�+�G���*$��<�����QX.�X>+�8����Gd���
�TzN��T^_��qN��b�K����y�r�����6�h��Q�
�]1�����.[��kH��T�Z��{*��dz;�E]Y-fh��z�6�S��?i������J��o;QK��2��O)�H���>����T?c<� ��t�*�	��+(*~~��S�V��H=����u���f��i�!Wv�M��/���Wr���Ez�j�����s��{-��\im�R{��#;���"JR����C�}����(�R���1�EC�c4�0����O5/E���!I�:��S]��WcF����XC�����e������L��)�����-J���`�i�A+�~�\J��(����<G��c�f-�T���QvZ�,�K��'��O,�wo������dd'Og�����i�j<D�BE�	y�_>=��B�M�����&X�(��j��X�M���G��h'���i/���M)���{SD{7�6^�Zo���R{�D>�����1��q��������o�R��K�������L��������?9�%�����I+���0#��xjJQ��\���}��	��X�uO�%&L�,���4C��	�V��L��!�)����/�wQ�,�R����]��e�<H�<��E#���	�&�F����?��#���_�<�;�p�K������l��R'W�v���M�������4w<7'�|
�������(����s2M�Wv�����obdKx��6F�M��I�iBd�����cS*
�[�?�b����x�)�H��}o��VD�mY���Sq�M~�zV0hc���#��V�[y�u����RyGb����/ORDF{<�X�3�W�����(�=��2R�;c��"����!��C�J�������M-Rs�r��P����c��cT�	<���H��/����PW�������w�Wq��q�!3m'��V�C{����s_K������R|�oc�c���c�L�����q�d�����(1d��'�Q;�=5����[����N�o��G4�l�����A�3����5<M#h����#w{@��'-D��~�/���L��e-����k�?
�\	�����M+�������n�#��q�oo�H���/6�7%!�CI�����-6�,�����\~2� �J�#��vV���T���{��
'�Yo�)��7:5���dsd��K���|#�&{���)#��O��������i���8����3��1�>�|�<���=�)=�7�������V�I��H�����0c����i���>e���W��ugH:�W�M�q#���#���_�<_��C�K��
?	��ArBu�]��IB�������l(�xlJ�j|���G��]d#X�O�����:����q|e[\T���B��l��-�n�������i���g���+��(j:=�L2mh���3��u�==�n��do��>�Y�L��`���t���L��F����!�)�����?�e,P��F�S��UE2:n�|��z6��Bc�_=�~�qi��'�M�I�;G�t�G�������8���X?u9�\5/�M)���������k�c�E$��2x|�{��V���8m\#_�/n%P�5E�?����4�|/l�@��#,o"5V��9��f��1>�"UD���.�0�`�9V?�I�>�J���m1/��S Y�"��3������u�	������(w�����]�����K��a��ko�u�&�	)\Y4K�}Oo���X���$>v^v[��mSqF�*�s���	�7��T��T-F����iF���~6���~9��l��H��S�]�e����a��V�Z���Om�!��1E�LKz�ymzlJ������k6{CT��L�IYy�����V�?��M3�
��H!�-���_�L��ss�d��k)������L�&�6�������8('������k�q{����������J���������7��}K�'�S�� �
�uu#LQ�+����_Y����:N���cT�G����xO������_\nf�=�R�'�J�����b�k	+>%����D/�@�w$Roa�|���
J���j�$�r�x#��A�X��lBK�[��'��x��S���A��v���	�U�:����Q���\���x�u�9�vd����	I_mcbK��|s�r>������j}�C�s	"U�����u���0X��be�&���@������
���	&�<���$���PY%R�#��r��#l���+ ���+!U�dp|'MH�����T<:o�-��RT�����Ao��o�����XJ��V�?MO�#����NjOR���d�~Q]��dK�G*��R���(��{�����-�V�>����oU��*Kw&?I~���16�xs���"���`����Hm�R����"�$i4��/e$����~��SI��T~����T�i��u�H����f��S��x�-
z�
��J~�6��'���)��_��62�=<�4V/m	�����c%�li�6��.���7����g.^{$�s���w������=�j�c�}�9�,�K*B��B#�I^�����$/�p�~}SJFf=i��H�q&�uKRJK���\���z��V���?�Z,���s2�H��ry=o#�� W���i��_zB��bRF������Go�Qz6Z>�x5#��M�����L�8��W�R{S,��qR�KZwVk�2�_���p�t����T����)�c��"�����S���!��'	�H���"s�*��L.���d�&=��R������|����U�>��*�������y��x��#Y86�!�Qp���|��J�Y7��>W��-��/N�*S
*?�P��W��O�h=�o��f��f�6^W1i����R���.
-���$��r&��\��k�I����
$o���+P^��P�FL��:3��M���a�/E4hJ!�JiS}$����B��.�9����6i���!m���u?R��,-[hd��VB�I����Y�h��f��c�iJ�����Fh����B������{��PS,�H|J��V(�����Z2/��K�3��b����m�V%+����m�d���� C��n�It��X�����@
>U��>�p�!�p��9w~z�6]����T����z��>�t�E�e�L)�����w�O�����<�Q_����d�����$0`��t?}j�����a����=r����
�2��(%�i�5�7����r�CGB���X@T����P��?�f�t�{��z�y����I|���U��W]�o^��f�-
C���"n������3�E1���	�V��������?7%!�s�j�~*f����m��B2�0������p ���!L�Wf�8��
�����j���A��������i�������iR��������e��[q�
��N�5��2����3��t��=�w���_{cKU�i��^{�����`+!
�$��������_Bb[	��QU?�)���]��O������<1,.o~;�W�Et�� 7�1?f�O�Cg9��1�=���*o~�X���n�v��w�����������G�&$�Y�$���$o*�h��������c�HK����������+��V�;!�������H.�Kb����o��A��c�9Z�Sa������m����W��-�/%�heZ�~QL������H,�5�Z+5S�S�������h�Yi{]c�V��9�q��l	�r�f1��Xy���(��5�������zI��
��\���SUy~��GL��AE�����1�4����?�Hcv�)��kF�TF	����5��j j��F/%!Y��@����>��#nz��dK:���������[���������|��Tn�+w�9��[��D�i[BB���~��M��G(������}_C]G8�?��Pb!���1�?�c��Q��[;$6<����4
����3��dP�>�2�uH�R��v2r�����p%%��"|���u������i�n��Z9�d���Fja9�\j���p�{:7�l;���5���c��+�{��!d���E��O� �;����^3f���A)��Sy�4��l��L�U$pG7����������9��.y�������%��n�U�Q��|�K*wZo���2����B�v���������."��������l<���E��Q�+���^����Z�s6Ly���q�z&����7�x"���'�O�
h�����C��=D.�U|��;)6��='U�\���97J���,����r4�+IGU$��~�$�J�A���-W���X]���|��;�2����!�l�
�8EHw�k���@������O���@N��Hz��3j�DJI�\�����+*�D������0�s����� ���T��R������F�@}����{��:�Eo�/��lq��t��k��d2��,������!��o����A�Z��&����'T��e�}��t���l7��W��j;�8l��^�������7X%��/�^7*��[�������A�r�BKW�(�R~��?0�?yL�&��WLa�*�����@��%��$-�d�c�WR������M�+���7���1�lP=��f�U��y�GI���ak��	��|�b��`�D�m�����6�����GT�-p�����)b��f��/vD�����6�cXY^��S�-8�����s�WFV�8��i�o��kl/+������u�EI�p���9V�����E2���e��)d_ J�A]�Uj�8.:��;0#��f�+�4Y+�\�0v��t~,��{�,�����;V�������J�D����A6���n,f��}cUk����`����������\H��1����G��n���[��]f���v�o��6����	��@�n[.�
�ph�=�������(0	Q�����SNL��~U�!�.�'F��
��`����$����XH��C�2�o3���n��Hx���#��#�x�'��[��Z�W�p!�h��_0���M�`;�x��[����F�k���xY��y��D
\zt����\8�C��bs�B	�e�����k�`�SW5|*@��o�I*�$fW PH��A���uw%�(� �<l�����^L6Ux`� ���P��ySo��a�`�C��3EV{�1,�cEh����c�0M��L������D�e��u�}f�8�k�B���/�Ji��NH��LEt�w.9z�'62{�u��#|:��6�]��Uuq�jaB����)^|(wlG����u�[(����<"L,������c���E��o�2H��)���E2��������w`���������hv9xAx��?�V��/����E��.s���������A�jl����� �,�1��;|lI�P��s�J���F)�����7�$z�;����3�y��m��
(� Y09Y�2^������`���*�]���$�$6E�XF�T����������#H,���+��=d����E�����L�V�������d�S�.������c��;�u��9�� �N����� [����	k��0I�+N7T>���Pj| �\�?d�qT��,j�b���(V�P��E.�_X$2����L?�	S����7�F	D{�bs�67d����/J��	�z�3E�2qHk����<(����F�a�	��|���<E �1����3#$��-Y'�y���G����n��s�eBgIl2���[z{�-�\6�M��D��,�:.H]�_��\�����o�T
�>���3jY��2�7����#���58�0�y-<�P��$;��zN���	Yl����%�&~?��UB����T�rZ/�Gw��u����t�4���=o�fn�"���G���z��P���LP&����U����>���
r��;���7�>"���];b�����z�%g��c���?/yM�?�L�5�c{��v��m��H�oH��#��P#����$��&�=:&�g������SbQ������m���"��P������CB�� ���8��u"������wF�iF� ��*�� ��)s]	�g^�����s|Pe�/]th2����wO���< �bZ��P-���O�S�~����f��&���s;����Q)��d� ^w\V�����V�TX����~qn�Q�y��o]�r�(>)QK!��c�a"gGC���-$&
$� <���T��X?�0��\��r���UX�����F��9���i��,�7a�?w�7`�y�b���i���?��o�L���E�����20��s��C]P��;�r�3`�'x&��&#tW]^p(��A�a����b�U��`�9�7�$�r�bG�XF�.Cy0q�Z��(z�,m!a>0�S"�op�=�����OX=[s,s��x�d��-����X3����FW<��Or�x%����������������0�?-Kx�s���/Z*���}A[g�������"3�R.�����}dQz������Ix��c���=���#���!?j������������K���� Q�:*=��P6�X��'���-�_t��H�opi}?�n���Z����U�	��,s�I�>��N���
�]7��u�i�U�t*B��/��n-��%(L�W�^SPO�l������5�������*��.+�ve�������}FX��tx3u'V�
��ls+��BJ��0�w��@�)q�����������Wf�a����m%����t�9��������
��|j{�|LM0�@V�.{�S���5b��W�WF��-�G�t��cu)F�������x�PXN��pP��������M��+�T�S�>�	�DA�,��pI�2�@7#li���Z$E���A��&!����@[�6�&D��0�� �����(��(G&8�����m��9���*�O�]:�\ln-�
�C�J�c5���jC}��`���1��-��s�mP%��w=r�2���_������*�e�r���%Q����#��.��ZG���x�\�X�j<�6E��0?V_���^��8y��q��{����i��G.�&�%t	���DJXs������C����`���N;�aJ�#��$2�B��
�
�_����J��Y.�T%����o\<QtD������#�ua�~29|(�C�-'m��1�Gk��i�����Q�AU��=+C��z�2s��Bd��3i�g,%k����RP����Q��2
V���EF{)���Q�m��^-C�����}��������� �r�,��c�ao�5�wS���
�����X5��d����:�-7���z����{��������d��a(�)=��V��:Y��W��dA.\��	N�p��:�]������xK��h~��an��~U���`���;>ww�49)Q������� �	�v��quIO�D�������hN�jrY|�`E��Ts@�I�"��	?N[u��~7�<�3�M>��w1�=��\��_���������2{'��&d�	)e�9��yq,K?]�A������d^$���V��ql����q� 1��6�u'�Eso��nC
�{���<'���wY�gY�D�A��H)*\t���2����ql���F���pV!���sA��2t�8��u�k�3�3��9����;�G�.�B���4�UB���C45e�0Q�����n��}�J$GP2����6��sIZ�P%4#t��N����%4��q����d�����:<����������]WzP���d�r���I �)�h`������D>��U�]@��I�
/A��9�]��o���s�@D��@f�N��+)���2}�`�r_��
F6�.4��;�������D�Gk��N�����sO�k���^�$��A\w��G���c������Z8��4��EX�9����A0%& ������0lP{$J����94Q�
�SJw�2���.�d�{a�~\%Qg��-@�T��;��kM���4
V
OX���=p��@E�2� >T>I����	}#|�������{�b���]����U�_a�������#t j�sh������}g3t���sB*��$�NRL.I#�LLp� ��UV�}��7�
����K����F��r�U�������l����
�������}m@D�U�����.���%����N3+;��9���'���]���J�W�������+<��\\3(��
�E����O1`�����R���^bQ��'��TA�y�U�D@4��q����k����p�������1�J�"����Tp���=��m��2��
*y!���=� ��v��� �q�A�9��&]���O��{�R��9z���l���$�0@��$��)�+�����V���c"���"��^�v��y,U�� �i�D��������p��U|�	����*��wf�����i�������^��c���#(�9���XAp�w'�m�����Y#��%��T�������;i����\<$P�^��(P��p:0/O���������:d����
V�b�2���dk�2-C	B���]���C���������w[�\,�i$�X-"�	Y�8�F�%$C�W2���;y����Fd��
7�Fb���2�w���o����8Ic�:mJX|5���c����+�B`�����d�{,&�t\��<�X�9W�4����2x���66h�#4\����3�v@'s�l���XV������E��Lq��B6������q�������=t�T����T���,%�#��@h<��l03!�
w����0>Q-�����V����:�"�At�Lx,I^�+�e�?Y�\8)��3���%� ��l���C7�K��1���D`�p,�G�ge�����|I��rap
�l�ib�N�P����5bMC`:�d�jlP.�I���38��kY��T�������;�N�Tb��r�9����V	e�c�Qz*��������S����_���6Kl���F��1�qf�+�����9�������]@�R7������xN�
:/~��)�'Mc��0Y"����,�5b����o�i�~
)z�snP���d��5��\�p4?a��n���e�Ad��clZ4�5R�|%M	�x�z<������R����������_��m*M�v�:(��%��w']��K������W-�iL�Z�q,�;,c|���kW�Q�]���\0����ER������"�^������c`�{G�W1	r�_%���Q��������!�J�r�E����o��m��F��'{b��@_��gRx,�`R��I2��W��3/"v����_��N�����Mh�MT��>O|��S��q����\"z�@:�l�tQ�?Y����+#���N6��m�a������'B[g��tG�ge<�{��L�7��;�2_�]�*���2������U�	���^��d]�����s�����
Ad���9���9]kO��mvl i��{�G��8�9maw�_���#]���'�E���eUN���a��z�w�gJvQ�bX��}N��T�E|�dD��*��"�����
�]w/��H�=�+���M����N�Q�'�����>�y]�d2�o���5���y��D�����)^9:Q�&�(	b�����n���O:��N��<�U�\�������}�z�B��1y�o&f"�3��7s�`�"��X������w�����um�f$��.L������!W��uBw��ZS��"�D(��K�F���66Tp�VT�N4Jf '�������\����sE���`���T���
�U}�s��g(�"w
���s�
<Y����^��1��'%s��`2�|Vi2*�2����T��L!�H�$���������~�!e<�7���$*���m���lG�[�����U�����;eo
]��$~G���[G��l�(�do,l������`E������S������u��:kD6�����Ve�}�p�8(Gx�1��"�5MxaA�#k�����	D�������3��!�����os�M�����8;�Q1��������$�}/)�RP�,��J���/�
��zQ��d�y�������[*����ic
��X����I6S�8�19��=����D��\Zg����`�5�.N����>�	��Z����s4���qa�������{W�S�����pU�(Lu������NFl^�������Vc�
�(��q�}�U	c��^\�j�6�1^d��pr���ort�>j|���N��R�X&������f��#����eT�����ud�����G�KMLV���hux�(�}����d�~��$���N1ta0�[}�%V�fZ$�"�k���<�T�������S�S�����g���(�/�c}��e�A>�0l�p����e>�_m�j��.�)���I1�\�0
�K��y�%�������Dr.��9��g�� ��|����M����y�zB8�T�c���QlrI�@�1�q��������
i)�|P	vLO��s�`���&a�mlUypM"��9���2�4��99Y/�\8��~he|����L�^���r���1g�4�Xez���U"�J7tGq|���
R:sI����P��C��C��b��T��"�y,��Z���u)@����+a��'�BN���{���eJR�R|Q�������g����\8��X���3��oL�����Q��;��/D��L��,� ����Kr��;�9xr�J��x<�����A��e�Te���v\]|s������w8�d�������Nf�s��c������sn�����&�����>�G�1����n�G���'�c�'R�$�+��^�lcP�V�=v�do�,"���@f�~�����sZ��%$�,X6x��_`�}�JQ�?���B�d�=�BF�&&�Dx=���E�);��r���3/5F���Q��!�����Z��^��E	�U��-���![OcT�X{-*CG�q��c4�S��XF�����X�p�atxc�������0�.������{�R�hU>��$��:L�r�.�(����d��u/6;�^���b����S��l�i�w^�FBvr]�!��U@����[�f"y���)b���C
���J�<o�k���/�=U�x;y�\N�i���_:�h3cE��^�s��1s84����k<��(Mb��T��A�B�"�M"�V��bu�ol�c�I=^;�v�"�'/.V4\��9�`���}z�\
�+��q��s=lL�������;s�}.;��m�1��{��M�q����F����
}�����T_����
U�yox�g���Q�b�Ez��}5QS�,�;Q����m0���s3b>d�/~��E�1���%�ja� ���@��"��)
i����V x�t��l�z���#*��e��P.Y~84�->���(��y�h�y��?N��d�V��(w�����n����w<2��~o?��$S�l1��%�����E��b%+�D�^��:&t����1a&��������$�������xm��p����$)6<vu/���$f��$�`-���ei.y[j4?�\r����K�1����C�\����+�������$[(�`s3���m�m�	Z���g�D��O)�c��6�qk�>���������c% l�1��hQ?��1S�XF���Q��Lj�y�R$�2���>�fQ���A�2���M2~��wDHw���2����7��f"�K����� �4���A��
f�;(Q�������@������o����`������8��'���Ro����Aw~ga	� 6����������
3��a�6��Nq�������F�c�o�(�������yG����D���E�T(��#C��v%$`���i���(��^��\�����j���}k�A�}�KB�P����+��7��y[`�S��AH~��5�p#�V����[X�n�&���{y�6��;��a!���������+���6_��Av}�2�O"����9b�#��d����3������K���������
�#`��|�T��CK�[2��@|�*�?��oV��|�J,X%��w.����Ug�
V�����uU"Ie�>�"9������k-����?�����i	��O2����
�����g�N<��tL<�����M��1M� -��P���m�"3��[�\����~�>J����%EN�8�*x��K���A�����vca[
����2���r?��(	l�� ������3���7X���pH0���LB��r��1�9�A��*@��e���| ��Rp��b�+���.�E�D��x���X��z�}�K���N��a�}��4~�1o�Y�O�����s��,����i������8�/�tQ0_m@��{�0��l:!!�n,h{��S��2��/N�O Jz����';L���Y�I���[�U*+��v��������%�:?�u�8r�W7�ZSN��p%e>�'����
��~��2(uV�S������?��}�eQSe���a�A:��9(�^�<Z)Q8�����$Td��R�/XF��o������|��Q����V�I|p�H�P�9{3E���OV���K�����k���%�_X-��~v�%*P�g�K��-�)h
:)0Cx���@���*7������o%�o~��%��P\�vpHx\��k���E������w��_�����'��!w_�� 3
��Y'�?B������Io�;V����eP��T�W��L����mJ��{nd�T~��Ef@�G���V�
^��K�%��4r��K.�|]�eUJ��.P�\�Y���ER�����p���4����M"z������f��^����CD{�Q����:�c�c"�����Iv+�Oy���x��>��D5��0�Dij��O) ���W����8,��^��\�:��	�����u^|����/��I��@��K���T�4�>-�}��3�3f�e�9
��������g4���h�~^J��@H��xP�7�
�n����;l0��v�h(��3��J��3��$Ayp������@l�U�����>NN5��w�I�������K�&[����k�7����m���I�P��U���R}L0�����?�_�#����n7�LC�����Fd��d��_��6����d)��y ��A���"�e}����C�p�2�R����|D�{�h������X��s���H��f��f�d������W��/B�x�1��b����H\]�O0�����>��_Y����#���[`~`�@��;�����T ��=&��O�d�g��������A��;����� ���c��xn-�6X5:�^�L�D�-�A
/����$����D���
��<�o��dT�u��3������`�;�$�&d3ny�T�-:4A��Zd���2�Af��jD�����B<��Z���Kx�T"���[7L�.�tA� ^��7.6�s�d'��O��]vDW�`Zo��������I?�.7�����U��������~9anC�0���u����~���k]������UK�\Z4V�K!��[����}���/R������&d��>����\p�������.�/����W���dZ��W����#\��?��&M
��3�"��4f����C]x�i�
�3��s2&`f���I�.�����c��|�����FM��1ux��@����m����hy�����OX�l��k����\��0.�����B
�!�j
�����h/�������%��f�y��V��/j��UQ�W;�CU��yJ��q
z��i���Vb2n�-X�L���AELa,������fx���2+�����-|2�7;�����������;�/S>�)2���q�Ry)�W���/�f��R�����|��n7�B�n�*q�l�E�n&�%3�xt6�|E6n���d�����J�2�U~�a�E�&2���?r��w���9=z���Q�^yw�Z����jx����)�V6:ig���Mp�������/F�=N��At4��0J���{�{Z6A*
h�P��^	S��1�3�
m�x��+�����G�x�U��&f��N�%���[q��{������c�Uk�e����tx�N-�/��(��h�v,�P����.G�
�Z�7������o\y�Jl�D9�^xPEo�b��j6����ZG�>9�^+� �!�`��gO~�e�,J�*��<`�^=�\�v8SG��W����m�����_���CW���D�S�D��P%
���K|x�o����x�����Ek����XE�o�:"FI���;�.Z�S5%�~��>��@^��W�OU�E�lI^�[����LT�9���L�8&8y�S��q0�XAo�d��D*0�X��h51��6�������{a����Y'%��bt%2���:��q�?��uq�o<��>Q�%���GH�1�N7����L��o�^m�����T����]�cU)&���UI����>�RC��~��[�>(is���(5�z%���rfcaK#{e�U�����F�H��L|DkRp5S��V	�w`���-dt�YBAU@�k^7���]d��5+L$��3#�q>g��%��A��m~B;��!a��KS�8C���A�9�E���~l&7�n\�s� @�i{�w�OX5�����%��D�sM*v��ix=�-�.�����4���� ��N������V��_5�hh���D����2U��i�bh�=���_p5q�!u��T�#FSB����^$�����A�Db�+�v�$I�������q?bdBO��z�l��I�
��!u� [f3�X��R|�~u!Pz��j�!mL2�jy�ic
i^�M����H��9G�������/:4m����{F����/\�6�(k��qc/��3IH���=lG+���������?s<YU�e�Z$8��0k�j����Y;��?Y���4F ���\ev|�h������`,,������u��{���g������1�������TCG�^��G��{4?a��.P�/j���M#�(65�?.z`+<�!�J~���/�=�fam>v��i`j�n
���f��F��<oD���l�V����
g���-�;���{�5b���9�9�W����H��;6�~�44`=UZ_�n��5����
j-"L,����T�pk�Y���Jz]8X���F��*�ro�?0�����Z#�>�c���U�I�R��Uz�,mx6�>0N� �\<�Rc��h~����$�w�6�y)�|����0��U���9�������c�k����h|9p�_y��z,c�%W���X@�4��J��������n���0�;�����������������w4�x~�m����w�h]�t�s���x��:�P`��m�{B��=Q�r����z�G|�Qs<��T��w����gW��oe������g��0P�GC:]��(y/\��j��lG�+}{���:������X���X�\�%.
��
�)
��A�
@����r��h�(a�}�^��dYXT��0�'��R��R�c�a.t]8�q����r�UD���F��_�]��w�.����(�R���W�0��b�V�\��&�n���Ie��(K��<B��`��B������Jep,�F�z#�`UYw����"�?�X@�H�����T�vec���b�6�/S�'d�j�
^�(���g��+�(����_e��M��Zj�=�^��R���r�*��Fw�"����`��%��r�%wmt�{,,b�\p���A���u���`��e���Of��&Qf���^�K�:B|����n���e��-���N�n��!ud��M���S��1iE�P���L<��B.Q�	(D�5}��C�
�s��Qt
�60���+��1"����p����S9�{��������;L����w/\��d��A��JlP-��A���w��\�b���|7za�yC9��f;���`Eh\����J\Y3|���"�8��p�'�e��hV�������d]�4����4X���	G&eSE���4o��uP�$ac������&P2�?�T�<��5/����+�R��G�">o�#���3HUY�Q���d@;O��zDI��������Co�:�����J@Hz��%"�?Xr��4Y`��d
��np���N��P���Y��AW$^�x��v��K�����_�jLQ2��C�v�e���BL�Z��5ra�~����wQ�XU�2�<�m2*3YL&�.�'����2t�oE7edV����!�f�����{%S����|�.a����80�=?����]���*��K/>�>%�����wm����h#�������[��1����n�>�?������Vn�L����w�\,H��F��`����O�r������,D�Ut8f�}��E2��QF����g�v���p�������\�����������k�V���;�
��-�(jD_qU��!_���{y�o�����q<Y���S��'7h���\��6������`.�B,��9�����b���l�����Bh�R��U}�{W����-$���-��[���P���t��ph�V��~#� ��n��{�2��/�0����@*�W�hm
��34$�~ -����
;+��N��WFY���dz�'�	��Q����tO����1���>����J*��U�xt�Td�Y3}���6���gnP�����A�z5�C�1SY�_-�p �K����r�~g\0�����&�M(�a����D<�_%�<#���&'6?%����3Tu��}�*z�;k�u���T�@��k���L=������s�A�s>$��[��~dv �9-�w7e�n�,��J2���H��-�/B=�V�`P�����Q�����7��c�;�*^�pnS.�rXdw]��O|��a���K������.5G���X�����H��-��Q9f���cTN�����)\�!�0�VtC��^����tS�7��*����`�yfcC�����1y��s���72D���?������
����u�,a�����\.{y��hdO���e��"C��*������`�ro�<PD�i������H
��3�Y�s���>�A�84k�i���t^�B�O�\�)�=��3s�\U�On�%Y��2e��'^p����2��e�y0��i���4��N�������UFL��u;��9��G��(�����w"=9����2�������z���&E��b�"b������e��X��6`f~���E�]�n���eVFj�s�WC��"n�=&���v/��^xQ����B4�0�����b{<�=�h>��EY���k.	.I��S
�p���)������Te��u�S:��xD�XF��Sr��	"��8�1?Jo@�g/z��LGqp�D�W���3@��<�nV��Q�c��Y���'t�p064:X/��K�{����A�LxgH���|k�
V�"�����?���c�~��D�A�r��EF����6������yVP�������X�O[6����}��qj�sxR�h/�L���
l.��#7�����}�q
�qQCZq�,k���p��Oe�O)�6���};�fP�o(kV�@�
/�\R��6e�T-���43�<���3����d����vF&�����9A�2bR���fe��&ca�j�	���;�1Q�����Ts��$j���%��_��,J�2����Je�X����o�xN�7h��M�w_z�d
�D�����xZ���\���8Xs�ER��/�>n9��y�z<XF��}V����ox��e4�K*��c�2f���"|u_{�E�y��$��LE�t����j��z��1�e�x��K�7��4�,�\���}Rs*u.�A��{�E�������n�N\��R��8�g��`P5��y5������xA�n*�C��8�1m �(}�E@����\z�Zy<B���A)r�r�*:����{�.����S��bXp��}���&�%BP!��iC��v.����1�zP�W����n�X���.�C�1������q��x �X�st�AE�\8���Q6�D�E�U��WF�cnE����������>���S�T&Q�]Hg(���_��:��b7/%{�6I���5m^c��� �D�e|R���8���������$�c=��Qa�=���K%?��H�lP-�q��pI�{uN��
��v!B.����3�����u�� �M���w%1f���+}��Qeq������J����;�
n[z�dl��nn�U������]2L�I�)�Z���p�C�^�e`������O��7R�i����Q�SP��5�������8�q!����/RZ��8	�	$�\lo��h~��A�?/)���42�����!�F>(��a�5i�M��l���{����]
�7�n����M��1�4���<�3C"8A�:A��U�[w�t;����uP�9Dv>���/*�[�fz=�� KJm���E�r�����u�E���`E�p�3�#}
�W���*�����j��R�[{X�$�kdm�S4�D������z�XkV�
*��L�wk��wqBI�=3���f��'����g��M��:f������l��=�}�i��vy����p����s#�a��"��oM'��x�4����.d�����c��`�A�.�N������\t(�`���)`+�k��j�,�3	%�x�n
���v�y�Zt����^� �}N�
��+����pI��u�u�
����������jK�)M^e\8�"��qp'�`�Xw\�?�5�*��}#r!	����9Z�W�X�9"�WRWd���aA�3~�|��rqe9m�-h7p���8�������|�Y������`"��T�/��A�Z��xn&fl~��!|I��c�B�����K�*u?�'1_L�*<���>;q;��Ay����H���p�����0u��p��V�?�;V��*881�9�D�y��W��q}�*gFQ�8��'�%&�~Q���.���>Myg��^*��TT��
��TV�]��G��DG[u/H���n��%����J���+�a:�����~�����D^$��p&D����u����
k�����&�����;����KQ��������_�����?�;�2�������X��X���|��C�9��_fi�X0�.�Cw��Z��6XE��.u/�����E4�/{�A���I	`��_&f��G�G��#O2�Z����������-���Wr��q�����/�Wsv��5�BSc<:�8���������`%�*�'^^;?H�A0{��t���8����?��C��7?�2�r��0"��j0Lm�g*��*���b.�7�0&����	}�J���*���/�f>"R���5N#W�V�2��}�����7cv�?�i�9��aq�Zc�����!�/~h��
Ub@]���dx�V\g��n����r�Jj�)���q�u������J���`��H��7.D�T;)�F�j���������y�i	�/���E|���.I"����Xdd�9I1Y$���:��1��n-�S�}�/����$n��ezs`%���y/��s�o+|�A��|����
l��s�{�2{�7O~z�V%<�UB���}0�'"X�� '&��6 �����DU�}ea[1��
�"����f�y,o�x��^`���_m�j�I������;N���4D���[�6[����3�<�v",���p"u��#�_	x>f�?����|{g�����&��%�o�>=����E�D�]qy��R@D�����3%�?������L�>����" {�Q�h��7mdD2��O���r�[&�pP�Q9����m����dxg�>���)��#��~
$n1~��i<`��K_w	�x.2��j<��E2�����p�W��"�<�K< ��9������p�����d����w��.D�>l�����u+���L�^����f��cX��U���Q�X"�WF�=��v�����,D���K�:n���#�d�i�Z//�\��f
���
���"T�t�vhJ�Y']�c�Q�P�`���U"���L&�*c��/N���+ir
�A)wl�?TNF��GJ�5��6w,�bs�Ht�����Ov��W)>�R���!{P���,j��Y�U�hE��1��w%%Cp:]����h�n��s5����o����0;�����w�e����#��������I�#�n��"$��a��\����kd�>28.%:��1fI��r��L��A�"����s���&#<�*�
g�VWU�$��`��.	���|R?6��7*�4��� ~�j������E��N,��&��_�k��5��������v�u����|`�6?a�`|���r!���,�D-�>����Y0?�T�A�XA[�~�ve-��H��o�'i�9q��y�4��g
�
IP�u�
�1��0�	\���c3�~s����1��-,�W1���}=WM�������&��?���=6�A
Y-R� }�C)t7�L8T��=8�(Y�2U�]��
d���eF
B��F�$��&R��lx".\g$��u�A�AU`�3���v�C�d0G��
��*�8�k�ZAcV
��j������\Rt\���{����G~P5rW�>��1JE
����C�\��^<!���U"/V�.\������k��@��1FL+����5���p��6t�t��J�����b|{�Z#t��]�p����<��x���\��b.�YA_����`����
��L���	��������8��\�T���*�$�{���5�I;�`2�/�����������ot��<�����dg�f2O�����)��:Lo�#�z�V	��v��K�Cl�����U(��7Yx.id~:*�B�������Df�A�V�
~�RK�97kV���Qd�B,�2.8���8;L��r.�m�2�R*s���rRB��WF�j�m�X���������",�w�#�>����`�S$��!3z."�x�Gs�����&jo���=�{�*c��]�C��r/�4'��>^�t���}��{�b{<�:2]j����s�_�$^��Z���I�>�@�+N:�X�Jl��!:.l�3��L�0���:�!�{�	��lv!��:/8�s�jF������b����CT��Q��������x��Q��'(�;6qp��0����A��nP�$��n8,��1��)}?��P.�{Kt���V,���f�����~��e�{ ����i.������#
U����>�~�Z;9k�0Y�p6������'�����f�����s�K�=�:��?RPD���z���b\��r���r��I	;~�Q�����_]Y�3�z��������8Yg1/���5�F����Te������(���c����c�������h��t���]f���(%�`~}�<<�e��-^��0G{8�U��� E�U�#�Y��"^��F�;�UM!K��mv:���y��+����
��e��y9��)8�����g��J�C���L�7x�|�������}0:y�	G?����+����������w����.� �S���I��5o��p�{	�DF����d�%����dz��Z!k�����$/���T�����FY�*z����n$M7b�&<���9��=�����+<���o��s�����K�J��m�~�J1"���lF"�F���B������/��"��w���O�-�gr/��+��8'����{e.\I���x�*�K!h���`�H���#���P�3����(��2f�������Q���T��o�1���$��Jy���������3O��	��Of�|��)o]�
z�^s�F������u2*<��FM���:RF;�bV-���0J����F��o{qa�~R�;��.:6_��w|��F4���"Vr��c���i�|�z������o�����9x ��{>�;\L����S��������M
����������b�'J���L%6xi��Q-����1�=-�SI�����D��IC��iyMm��?���r��b���\H*����@���E���M�I���;)!c*���
<���6�I�!�@�� sW)���4����/"���p��-R����
�n��dk��t�
�%�nx��Z��A�������e�| �{&Vx����(���$��
�GZ���a�K��E�^.N�O/\���E�
a���y��&Q�D����y�5���Z���\��=��4>td���G�h�A�56x6G��e���#�B���DB�x���%($���&xz�����F`u�>�hh;g�Uj�������q����lU��[��qGI�{(����1-2D8����`%Rh������Z�y\����L��a,�m�!\Q��W@��F���l�:pW%V.���=@tS���`^%���g��Q�|N�
V�u��e4�J��:��U���_~<�"\����_���|?-�N��Rh�����w�����nblP�W���Fu����0{|����8�"�|}��>�e����=u��ST�-���A������/_ J*��.���p'%�QY��0�o������/PbL�%�(K�T 5��Uk<����W/�'������P���!��+����4I��N��.��[%��]G�2O�y����k_&=���K��I���������U����������(���.��*a����r~^����o�;�#���u�N �~!�p��h�8{��Y�8��R�,0���t����z>gxe�J[��Sc���)� Wo�Hm1�����(���;)Qi>�Km�A�
����SR����j 0=s�J�t&�C������C���`7J��'�v|�Jd��.��)���$��������)�X�.eRi�W���l��o�QD���������/eR���>k�e��sd�u�1�9���r�nT��6�
V�uM�C���_����(�lQ�;��s��/j$I�)�_��������E#�;�|�B�� �K�����i���8��H��������D�l�K�A{�-����:��i/:4ue����hq�cul2�fX������S��\��Ov����D�xo�Tb��=��5.�u,*T	������Pb������f����x�K������������/,0�`�-�B:t)����-�P^i�iu�^8�_#��z�'�<���?��h������������kRX��i.����S�a���g����rR��������%�Z��;���B�A��Il��A���Y��Z����N��L8;�h�M>����.^Y���������p��.�����?����9�P�WP��������t@���i��}��9���C	����6�����;:<l>�\t��Dk������@;��P5&.�q���!������G�[�+�yn�����{����R�;�g5X%]���,L�yi��`o�q��q�E
�4Y���`�0�u�F^=!18����
�'k�K��O��b�|PbFJ���cV^��:n\:I�\����u�6G�	�Sd)J�?SN��h�t%B�����@�����>�D��Q������-�}�D	'��L9,{�����~<	Qa�*���s��;�����M�
(�@�zqi�3E����`���W+0������2����A4��aj��!9���-T�Al�I�]��s��~d�ldR��C�+2md��O"����EB	?H�`!R� �3��?KD u7�N_������;�l����
��(3���.��	F��Z�I��-8H�9���=b��[������n�Udq��hAN=����k��8uk��,tz���	��J���!8��y�m�����w�����B�������~�PY�#��lu;��9C���Pr��7�
�����pZd�\����L�K���V �5X�u�&,a_J���	=���g��d��[\h������@�W	T������y���|�7V��|H%�)��yb��q�a��}���C��F��qu���|���<9N���������*5f��g�)�<��s�}'����lks������I��Z�p�Q������Ew�=���W��)S������h8||gA[�4R����(	�[�c���<>�(/\���d#��*Xo~��)��x�$�o?!a@y��������"���1�v�V��#�>����wF��t����c�^1~��C��KH�$��LL�9�%�>t�"zt���?�=��9�R���
Y���=vQ�	��aIZ�r��?cn>�!�a��:��g�*8_��yc�����#��JD�����ZW�lq�g�����C���1�]g���������@;�����%S���A�=����
 � ��P|��nq��LK<��$��pW)���h����U_;���\���Q����OK�S���j���c�H�3��JV��/���BJ�K���������.L��D��l_(�����(�4��!#�|1�'�~2��� [O����9m���
��tr�i���U&6�J>|���{�e���?����[�]p�a�o���'�3�45����RP:�1����q��w���l�BK�Sua�E���d�_Xt�w���3e���X���L���nP�G,��u���N�xT��;L��.gr���J�I�0=�����\8���g����������O���h��G��y����/��A����5��NN��p#�kT�v�Nf���##*�����V��u�T�5����-*�����5
�`7T�*���:���TFg
�OX5��L�.U�@���(��l_��7B���q�=�vX�n,(�$l��n�����p��Qzc0#1�y�b��m��WG^t)�.��l�'<
�HI����z�F��A�'{�JP
��1)iKk���;�e��2�q���b�?�G����|�O`��~��_�a�wN\
�Q"D>�	�yzK1��v'�D��A�Q���
W~��z�'��]��N�
��������_d�
lS������O0�H�%5g�U|_219q����+�V��=���<v��A
�c%h��2B
\�&+�#��2�}uyq%�,,����L�q/�iW���ON��W��+���e��V�W6����a�~NorJR `u��������L��p��Xl���Gyp-�l?)1�>��*��yrl��Y�t�}#N���������1�]{�����TY�l?HE�*^X$���w�%��r�N�b�>g�$��+d�~������a���������Am��������;�[@��6H��<u��t�����"�B�u�	)H>z���^���+�O���}�=ti~�U"��h/J�:J4Z���DdcCx�8����0b�kg$l��A��\-B��'T���+���}�)|4�R���#��w��5���n"�Hl�+'WBcV~�,�?�*���&����1?fR� �d6.�cK2cv-���%�NP��%{����!�,f��9fD��\MU|��e-�Yw|�����
V�\{�
~��9���*� ��%G�
����C�"��H`��0	�Jc-����A�i�W}�z<����/Y��$����K�3�� Q��qZ^<f-�q����+)2���j��~��+G���a���a�H1��z��g}�n�����V���#/cn�j��bY���7���\r��<G�H�i��B\������29�������TE��M�iU���*�vq���A�em�$����S�E���Ky}��������2yk������8��U��5��eB���$J%����;�&����)��s�e�U����K�bsXs�i�:�B��|;l��G��D����(��]?(0�}<����
z�V��X��m9=v��$� R��?�OK�<L��D�����m3Gzxa����U�y����X�e�0���\�W��A'��)H�=w���pk�����������m2:�yi�2*F�_e�N���	�t1��J�?�}���4\�N���z����os�0d!�����j�Q�A��e�Qn.?���_	���A�sl��	
�����q4��S�_(�B���cB<���F�j��r/�A*:W�c'_�k_�j��7a���������]Yh����d��?�A�6O�A�I��(��]��-T4�=���I���D��:�g�P~�����t*���2�:8U�u�$Ce[m~I�G����h��Y������D�.��)�eN^���<�1B�_��b������{��% �;������j[.�0���� B��Q| �I��8��d�����;��E����!���%���^`����3����o\����]�#�9}q��_�
.�����l0:x�B*\��Pz����~�I����qh'�����w\f?�"��f�AN��j)��R�<&�
tL�t�����{�N��f�
�������{���1���w��Rt���M�����k������ �S��7����iUI�wS/��0 ���h�\�5`T��#uP�D��Q=���=����A���}?Q�^�,0���Je���S���gc��g�[���=���{gx���V����A�z���I��8��jZ##S_�N��C�AH����l�z�5A��?����Gc�%�}.�U��m	x�)���W���$kt`���`��wI�/�����):k0�3p��{��8���&�����MR`�����c� �u����0�)~��#���`�����0r{1�S�
�N���\�������2�V���Ew�j��!�b{�iX�5�p%�J~#| �@5��+7
��G��wyE������

��Q�`�^�Z#=8T��H��+���'����w\]"Q�>�.aXP������L^Q��3i��(a�z����<���Rp��MK����8�Xu�,4���Ov�x�Y��2XK�6��q�Y��3o��Qj0�,�h�?�	�������8�>
>���j��	s�D��>f�;�H��-z�����
� ���
f�|���d&9����p������d�N���������h~B��./=�v�K��-e�0|��i�v������O�����C	�T`��
��,5'id�l�o������+e��
g���`��A���HTq���
:�������7����(��d��!�-ho&�.���m.��:��R���Q����:V��z\0{<.�z�9�a6T����u�y��G�L������
X5�;�.���`����:OEi`�m�*�{�>�m�e�Bar�o���O���)����=q��;/='���:z�,yW���>�D�����(D���t8sr���G��+�2Y��z�i�!��K�A�-']�{�v}��;z����@����	�:�U��v:zN/��`d�%�<0��p�\<ww~��D��kE��-(�I(8���|���B�����wg9m������sI���'���'O�yQl� U�'��/%��!�C>?3�p�Q�����	�V�\]�t	�$����>�rd���$�i�c�rF����K�a�����opIm>
�q�A��o�K�K������&r��\8�?�
N3�����!��;#��$w���������Wj�n��+����8��F���~���+~Y]����tN�K���st�v ���Ln|�K65�8��Z[_��J��d��`{�b%����{5 0_��(�K���%��=�||y�W/\I)3�����~�:��
�����$����<8���$En�6?��E���`�'>�2C�N��Ri_XE���:������	a����6B6��B-�H���2IE�g������;��L�����0{��6�)�}
����������wD��#���������������;�����6O��9���B��r��\�Z@
8'���D��.@��)��,]�����H�]Y���R}��1V�e�<�cER�c4�1�����Z!��;V��lwFt!1!�������qU�D����$[[PF��#�����A�eJ����R<9U��d��4_�^����b����JD�]�b6��$<�p�!I����*�.���y�AfW��A��GF���9V�����MN����"<dRV~���Ed�T���O��A�_�������*d�8N�,��*>��lG������c{�C;�E�����f��r�����#��U�����t������7�2j�����o_qU�Do��&y,b�)E�����+����?8B������#h}�U�bNz�>��R�,"�^����C{a��yH9�s���.��`q�N{p����\�5s�|���
s����J�����u��p�Lnt�s��%��eT|�U�U�����*o���_f�\�+{�j
����c�U�����kb�f��H,������F�S�[,��umpD�	#�S��SMXE���6��r]����������"��xl*a���2k�0���U"y�����M�pIF�J���5��9��DM�b�>����d���d������A���9	��/�ib�����d@�0sy�����=�%�t&���i�u��n��Y��.���?�9q��V�4�/\�?X��0���"��j���t�,1�K���i��`�Hn+9���d�+35����C]�-�Ub�=p�%le����'*�K!���t|����,g"����{�c�=����~����eD���@��{��Y���2�P���=Fl`����p��Z�w#�`UXw\j�C���*��9�\�D�5*�9?H����Mr��
����D�o��F��Ev�p51���{���`�i�����:���k��i�I%aI;����fheN�B�5����:����� f,�>��n�tL�?w������S�����x�r)dl� ���Y��8�:�x@�L����V���i~s[���zo��J��O{�~����b�n���(������/�=k��9�^�;�Y��\�X��j<���<�!5��F�T�)Jk�}�2�8���-[�e�����.9�L����Z��2�R4c���3]u1"����r���B�e5�I��P>tc&���m�o+�{^� /ui��
V��:���%PcX#�a������)�J��*�����`���}�6><E��������?H��%O�2��&Qt%B�A���� �-�Z����)�Dd�Zn�_�0��3fl������>dG�7�R�a3�NlI�;���B�3B��}�}�#depPl����m�k}���AKF�7\�T}�CL��g�4.T��I?8�\q�������b�\)���Tq/��(i����J�aF������a����,G(��y�b��?tl$b�s�����X]�!����\5Y��l0������P�A�Ev(��]~�\=Z$��s��]��K��"n�c�8����K�u�����z)F��-�w��'e��"���A`S���y�����[�eb�lN���'C�M�>���*?�r��+�
]���bQ�A����E��n�
��7.i.�fz��yFG���	����M��F���6H�Yt���N6*����(qU6Ok|MO7Z@��I�JT�:'	^�z��H�K�������{r��'�I���5U��0Km"M�mV�����4�	6.���(��2T��6x�eJ�d_��e�G��d�{8�q���@�.��G{��o_�'�?!�$>�m�0�n,l+��Q�P�K��\Y�w�RXges�Cx�^��z�wX������(���A���A�����H�HQ��]lN����H��x�	�a�*���\nsI9�.4[F>2K
��]�7Yx.��|���I�Y��\���p��!��j$b����������'�-�K�b-2>[�Vzm��=L�����g���6��*<�b�?p��o4�v"��b���+��eBL�V|�����n���!��%i��[T��V�@vpv�qlB�40Y�z��K���a=�O�HL2�Xw\l�.s��u��*��Y���2��h�'K���r�5�I��@�K��xaa-��<���%�k~���
���_�0�sF�sXK0�3�}8�����:�I��J�]FmE�Z�@����r���Pb���2���D�LRj�kc2���#�����Eqf]i�j�1�3'}��Y��+�E�Z�x(>4?����.o�L2�Xw\�6���B���(�C.��0%��0�S��{rJ���}�A��/%���
.������
��9���g�P:���/_��y�//_������&����";>�|�&a��e�cu)~}�I��
����">����K���3������o����.���R��h~�����6w_������!X9�)�}���`��S�\�L56�� ��%J�$F���wX�(����"�.Rl�����J�]t�����k�eM��7,���`T�����%�0���7u��m�C����?t��RI��.�
�K
O�(!�4H����\(�sJ$���D�!.�wB�
!�T��W.Z�nx�F"q������1��$IO,�`fx\8h^+(�"[@�����}��7�~r��s�~S~ �X��K>N���D�T����!��d���K�Qb�u_��E�����'�(���� #����3|���a�z��4�Y9������*��L�5����r9dbxA-r����ic:��*B�x����$��S��e��C��������&6�'DD��"�1��U�y��"C�;k;�}=�����FN,|�bO*���KV	L�k�o�"*�:�6T����H�-�%A�?(�{��}�*�"��>f�'_��D��
#a����������
�����N��wQ��l�
ij��`���p		�%z���l�6l�$�	;���y���=���Q�5W�,v�v�Z;Z���
���B�\:h^�u�8[O8��E�I�77�����IU������3<��>V&�
?3M�Y�,L2U��*?(+���bJ|��^���Ix���;m�Jr�a��*��>.Rk�b�m����������;	F��:��F`�|5!s����7^��:������3��R��b���`��,�kB��H�D�sZm�Ub�=p�l�y����)�K!]����	��-��09�%�]0t������ 
l��KE���;�F�V
�%�^ea�����M�0k��4��M�8;�q��b��X���4A$�� 
�A�8�7LX��������mB���Z?������+���~X<HE����	)�/N9Z�qi���=�I�A[oh�q�l��5�l���K�
�,,z�]��'*������%�J\����������fr~��3x�;��~�o�0��5*�q�
�n��������x�/fQ$�/��	>��y/��a�H��J�����PY�T���#x����{tFv
u�A��)��������&;;�,��x��#3g�������������7��T$F�����R��.��T�mbV\ �S�������2tas)��������2G9C��]����5p��"\�,pg�(�Me�0��1`�r��
/��H��o�&��#;c<�$����#N��d��#�y�fs&n���������"L��
C��|T=6$N����EG�m*���s�49��N*��`���G�MR/6�y�L��b�m��{�s�=%Y���k��1l��@�����H�.J^��v��{�
Ck�,(�m�������*��	#�x����X�b\�bxW&�=R!��������;�k�����x�|�b\��X����~�T����J!����dC�@�<�@f.����h������!7��/��T������I���������-!F���I��j�������e!����!�>�����m��b	�����'��"H=���0�cE?�uW����|v$A[�ift�<P�'���1��[0y�z�-�S�Z������}#�63���������HAe_�t�A=�`���i�!�n����]���|��N�k���	\E��tcTY�R�k<}��H����CQ1��������8�^5W,
z��M�Qy���������N'aP>g���C��� USp�C��GMH�������l��1Y�?�Ra|���#!��f)!?K���$t��q�x���(��y��A���2l�R	$��������1{A &aR��o��a�����?+}f�m�C��-T?�����5N��H�8�3�����[�����1��l@��[q��	���p=������"���j ��������"o�`r�����"�������y���n�L!�;����	�3�
!�����4�:,�3�J���z��M�/%����y"�c%�;z#����|�GSQd�%Q:FvPO��FNL9wZWXQ����n���O:�p�j�@N���U�`�@�x9��!�6#�.���BP-#�D���<;�mE�:��2�	����M�a\��Y#$`���P�k�>�P��+��tH�:��.Y.���!���6�j�l�P0����
�� �"w�6U���qMb.,Xb����&!�9�]���9��#��#Lzh)%�O=��hF~_T
+v�M�������v��V\a����a����� �P�����k���������e��rje{PIY�T���j���%��t��,�����g�q?�w�Q�>?Y <V���eQ�'*�"���!�h�X��qJlk�J���:�8����]�H��1��\��%]�d�Z���D����iUC
��*y��E{���
C���N��$�0�����[��� ��+����%q��|~��hs���l=1�$^o>���	�~��D�~���QnA+x���g�>Z�
W5��@Q^�v=��$lw�1���L�%6���<�pe�Y��e�������`y�j�Fu���$Y�� (
�A�u�E���B��fs������T����.s�������HsP��V���w\R1l�sK�|�%V����l���?@���A�6��U3��~�5�0�?;.��&�_���I�����N��|��a���
����r���k]�F���_��r���F��S��"�������j���������{�aM����t���vVyi��9D���!�{��lO��=e�e��|5iqW�!D\�H*�s����y��G���KY�����s��3��Y�,�����
�&Q}�T��+��	Z�B����RB�RrO��)�3����5����*���H(���1�,x���L�8s��./�;�c	3����q�w�o�U��l#DM��1��N�n$�"��U(y���sh����G���#���<V)��C���]�-T��Z�C�Q����O
a:����8�x�vUG��V��h�#<�\��~����:;�B��L��0�06*������b6�S��$����dFi�WJ����	�$��ZE��E_��/�}����l��>-���`*���$���z� Q�OG���T2��n����3�I_���F�z+��/������#1��z����.�"�g���@E��t�el�(+�{����I"��\V���&�o�qT" ����@Ep��T\�&���{�����Z!O��~�\+�?���CD��	�W�Q���h�L����/	%��Sn�.�`��20���~A�T�Y(��I)xFg<�(7�o"��_����`"D6���W�r�}����&��#^�K��8#��<�;�&���Yh�K�3)B�D�IT�X>���z`���Z�`m��;�[1Y�f��oT���	�=�M��9!�d� �����������\���~�S�i���K��Uq��@��O7T���c�p��<�By+�x�@%w[H:��)tr�%��M���r�`������L��'0|��*���st��#C�{"D�M�2����K�� O��e�+��+���4M�bv� ��P��z���������Bfb�R���`��+r����P���j���.pFa��Dn �C%�rp�'~0.�b2����68����1�Z��5�q#g���$l�tS{��HA�s��_�Z��p��@Y*���+��WMD�q�X��	-F�S'�t��"��x��(C�B&Q�-�!K���*2���zLG�u�G*6/O���V
o�`�����Y-H��a����s:��Y�����z��������+�P1����U-B� +?)��`� �� �=�=��HQ����HB5P��0�.����	��OF��m�y�xg���`S�VPV�Rw\	��:k��tC�����14rc�Yr�q��I��������^��};{�qW��{�������|�������l}@���<[�3(�k���D��7��
�	�w��|r��tCe���p^��*��'�B��q���}�`Kx�/wc�����l~��O)=Si�?g�{����{�F���>J��r�KT�u��a�����J�o�����r
@+��mx����'��:2Y��!N2�[���i��(�T�Q�����khU���-T�A�};����h::CzO���)X_��-8
lt�B6I��P`yJ����Q;rD\�0h�pJn0�z�C���+vy�P�7�r���'���V\a�Vv������r�JB$��#�7Z�I��R�F���q"�~���;biRn_�>z���^nd�b���>�gIcgh��������g�����<�����4Tv�H�S�'x
/�M<���/�n8���o���]�#WL���]�gC�n�=�
�eH����i7�	sz���tL��������g�<^T�vf�)���t��_aZ����.��#GLs�0�P>��,g#f:VSo�
��M���_�����S���5H�*�N���mve�{�,e��-8�o:}��}B=��}��\
��Okp��b���9~��bd[���n(����[zS#�
SLTn���kC�����~�d�H��H$=[�s�����>i�����rs�4�kP���H�H%y2��T^���2��2�>����Cb�R������D�����
r??N;2WP���W�,'S��Z��"�5Lxx�0)��d�[��W���\���|C�7�=,d�Q}�����|��=�H�!�o��X��;���7��](n"�3	���u�
@Da"xv,��~0i�+)��R���p��^�>'�����������g�i� ��\�>/Ky/�����g�.�����7^�[4�<J����(rPL������x���L���b)���X������WI��Q�������4���f�j���g�|��x����+����!�~�q�N���2c���QI�6]�����P�Y�t��Ff�9�-8��������4��;L�H�m������:��Q���V��(uS9�C�lsg����qJ����������0+�8K�G�E%O�F����<<��Z��[*=j<%�v<j&='���Dt��`�7b��!��
Vd��_qE�9�@y�`7�=��&��-8U����u����!����������7C�da���>���<?B����vh�Z�*~�7��7�!�D�F/-�A[(c�51�M$� ����9^s�����(�5����g�U��Q k���A&����OX)�?�gO��I�V ���0����`0�r.�T\��'��5Y[\�:����y�/�/���&�H�4���[�i�C[.c,5��RE�����H����d!��!�$��>�	��|��FR�S��.]a��������qQ>u/�G�^�6Yp��9���e����17U��5����`�L"��2�.����22���
pH�HEn\�H��?���1���Wd[p�� rW��m��x�?��c;
G`�fM4�x�+���e��3R�������S��j�8b,�7���h>'i��4����O?��`
�����U��R�*�9@�t���
�{�*T�T��?���aBZG��+0d������`������FG(gJ1���7�8��`��"3��*~r�?�K�E2iQ�c�\����#f��4���1F(�7���y��{�eBmb
�_Q!���16]@z=l_���l�L�"����i��U
�]�d�F/�Y���??Y�w������/���
�%B���|����<�%��=��@�o���G*�&�t6��zd������s��9F�,g��������h�\qav>(ou��I^d�v�0��4��)�0F1��G\'���5��y�
�6J&Ho |�E�����:X	`�/�:X��8�m�`�c��������������#���Bu*�
�����A��<�8�ma?.�3��	��mh�I�GCc4Y �+��
�?SI�?���+dRu���c���
��?x�����\��h�����jP���]�������Au��G�9P[aRp�C���=�\b����I���g��Z��z�y��1����p0
<����at�YY.XQ&<��?@����������S��%T���~dG<
���������_�IN��>�������P��v���dM��
/��D�����~�8a�<U��8��9J,���#p�D%����X+���D��I�����8*����DS�g=���t���
��9�u��*L��
�����i��n1�nX"�[���p��aN"h���{����M�B\�B$(8M�(�B�L^t�:,zF�������-t�Y�.X1x��%v�%����@��I�T�^�;�<m��'b$����TL�X�A�E�|z�b{l�b4?a���I�<=Racs�F�|�k�X�aSqF�������;9�)��tC����*���
�m�B�(���7�y/[���b
����p\7�`�j
������I�,�Q+{����h����������6�-uv�M����v[X��XI���|p2U�0�V.'�p�)�9O�Nd��U������G��=��������#{##���l)��|`���@`=3�n��5	�����u�V������JU�Vg6nPG���O������d��)�
�,j�x���V�;:�,�Bd�i�Q��
d(d��AT�o*V�����-����CuC�:�96]�
av��:�������h.�'0�wB�y�\f���}"Ep��g�u�����-$� v��<;\�`��&2|�I
�u6��
�>�S���
����0�?���R	D��\aR�������-bd�<2�| �:$O�?��~{���H����T�+��t�:G��/��t``/{��,z9*\q��c	���E�8�~E�qeI�z�-�X���Nw��&�KY|��^x�KK�({�O�Tl*f�(��B�>L��82Rp��A���i~�����F��*����7:L7i�1�
�JA�_����9xD��X����-bi�7�i����"\O7jh�dz�!�U.��8��G�3h�@n�|`<R�izZ �{��>���#5`���P{Dv'=�ei;Y��#8�4fd�tO(�g[��q��1��|���%\�n�A��O�Tr��av�+E>-���
�F��}�aU����M�Z$��K�k���Z$	�~�v8�����'����d9K��y4-K����z��}W���"��|����������S	���A�%T0�K�p������g2K�F������{�?�s^���VX)��Hz�v�s0�F��S�3����Y���`��
���$y*h���'v�KH�\<?����L�Ll�����9c5�m�5W�Ur3k<����c�D�1V��%���\G��7��IKNM��,�m�+G-��)�+�����;7�j�K�)���:��"�;�K����z9����y��T�6��2�Qt����?�H[T��lE��.�:������dr��$�?��-�H^+�fo�[*�/h�NK��
�i6���C�����+1	��$B;9N��.8�'b��������z/�N��H�MB(WIE&������J�{A>i���g���z��I��c�a�n����/5�5�.�������,�����6��_cI��bn��� �<sweV\a�`OT��eW1��RX1�]���)!�9����
bGt(�4�>�T�QM�Q�qF��4D;m�@O�(]$	�V8�n|���T��D�
�=^8�M����l����k���>y�l�L�MT�EQ�~��H�;-nAF&�+�x�v}����^��`X�A	6��04�T�dZ�R�6�q���r�%��1�{m%�?1�����"/V�YJ��gBWb?T�����H}�^���D_�
HN�2�[��,-8L;0��_�(��4��
����C�+�c�p��Q��A��������!4n��������)t���b�FIn��t�B������G^��#\��`=K�Xy��|����@�o�U���6�q�T�|mO8�r��w��������3�'�j�[�_��M�{��=�O��_\�*�#�}J����I�Oc������6.>�,��}������_`��*�Ap
�����B��8{��D����5�)8:q��}o��	�S��������<��>%����+��\[`�����
Nt�7|5��,�7:B{�;7!q#�/"g���8���0�AM}�!�d2��P�]`�R|z�d�����vc!FVy#�{p�+i��o�{p;����;��41\�B������W�\����+�O�v�l�u��@M��q:��`�#�u����^���W%E���+��H�1���E�D~�q��Jn{���d=���x�>�j��]�FZ���xv��F�R]�F�?��������+(���<P��}s���Cx0<m��"p���i��8�H��b�K�������� @<�����D��������*�VI��L��0��N�S�A�P>���f�����@���O7T:��l���6�	Q$;M��fx�\*T�C��87(�UG�U��H���L�zz���C�*+T<��I�>P�Qc7K';�-U_i��^��.`�l
�Q����Xu�lw��'��kd��A7|3�T�|�H����u!7�N�|v"a[�4c�����Y��8f��^��;lZ���7���Q��gE���o��x\�����+�XQ����P^d��=E�4){��|��0�.����X��1����9R?���J`�&T������9�qh�!c6Ug���Y�p9��4��
��)�I��p2(
� ��t�?���'����}����y!j��I��(�C��=��Qv�"?����Tj�C~�~�N�����cR|��I��H�����k����bw�/W�3W�`�O'��p�c�,PM�����:��.&H_��+��9��U�w!���'*�;��lq������Lu�0�v�x��M��s�p?�K����_ \Y|�-��@�@o� �`�$q"x�AX���j�1���$�����:��e��?�����sLRQ�1��o�605aV�����Y�\�e/��D[��a-���$�D����N�D������I���s��"��'��R��F��|*Y�;���?���x�<��v
C��������-�S^#�Zo��z:,�rtKs��o�����>��QG��������%Y�[��Z0�����7�G$h��C�Y'T��s�<e����WCLya�1b��a����g3�j�Yq�2��f<��:��N��G�X`����d����b�lj�����4E������7:H@[��d	��m�.���$rn����GpT���2=P������#���p�|�3!��������s����q��1�c)��������{�`���:t��{�|��Y=��_c�����P[���������R�Qr�A}/F������o���Q[p�f+�>e�y�����9Cg��k�B|��&���n�L����	L���U[�����6=�V��s�b8n��(~�M�l��\�e�d*���H%'��8N��*�xz8b./��E��)U���Ja��b��f��[1�����'�i2
l��5@������9%?�N���wN����h�G�O���\@K.���f���'=�V��I�z�HT����������O�&���nT���~A�.����H?
bY�0"����a!cG���x����4#�~�����HD�;��LB�q���*��%�Lx�J�S�i�}���������]d��l9P���%z%H�Ga�������f�L��,�����]��+�.]��d0I��;+v50��Y���������Sa�h�yTJ��]S���\U��S��Y��0�da�
��1 Z����@�IV5Rb������Bg������rA9`53�n��,��9��<��,���S`�������C����(�����#���/`��Y��>,�N,7����Ts�=Pit�)u�m�H������&�������?*;�!9����%]-S�
G�)Y[vw�j7�����������zL�f�3�4Gj�HB�R1�'^x/Q2������`S�!^��Vd���$������"����e�6`�>�&��dg�l>�<������z��+d=����N��p��'P�0Cd�\?�8#�9Y�������7��jp?Qmp�]���� ��n�~KQ�������Z��#h5�E�������c�&��=�-���am{�|5o�$��/�$��2�����'{��}��oF����m�F�X����k�����7��k�>���n�K��Y��|O(/���;�R��E�����f	�c}g}?�����������(@n��3�d�����{�x������2��1jSS���������&�<���<����C���l�WG�KM��TA������>DY�9��_y&�|e�:1�^�gwX_2>��|�[����U�8��$�rw#{����������(�%q�Q����3���M��	���L��p=��PBI��-LT��8?���������#�B�w&�����cU�l7+��+
��A%����8�z���^��U����7[��S�O��Q#-��Qs2��s.\$[�!��^~�{L@��=f��b9������$A]K]vB���sU�P�+������7�l2g�C�;�)F���d�9�or�\Y�C���:+��}��KK�j���c���w��^5��dZ��P�����N��.`i�w;�+D���I���/2Z���x�bs����8.$GEX!|�i�v��[q�2���R�=*�+#3J3��D&�<X4����h�\�b����-J� b���#�-�X�f$V���a>�P�z�g�s�d4O���An'7���#2���������,)8W%-e���N�xR�gBM�g�1��������0�����1����NN�|d�_5W\�h�kr�/F��CV�a�I���p73��&M���1���F�FsR�����k����\���!���1�.t0 T�z@���P��,x��p����^13����������I���a"���VO9?7�����Rr���� �.��������I�5�F��*�&i�F{��6I�~Lj�l N����[��WH�������;�'](�����pc9���<�������/��6�qEs�����)*���d�����|�����fr���V���Q���$�,#�C��Z*3T��4#6�%b�`?�d���I��k:��l��gO$l+?��y���?�P�2�����e�ejL<�����:%�~V�}�yT�����E�8C w@���?OGq���k��%�d����('�W=8�[G5Y���	rz�/�
�e����ck����r�4��zX���`��N�b���[p��_J���<H��?p%�#���{���������fYJ�#U���z�rdv,tW�,�g����V�d��g�a���0���e��+9~��,+�D?����5ir����nd��'S�������Oj]�`�2	�A����f���C���yo�H}��_�=���<�j`�[���-��������=�&��D������WM���;g5]�_xg{��&�87���������63��<��N&�l���a������o�e��GB�������.�L~<;;��;9��L������\`�`��HU6b����FM��al2�+o$� ��W�'��6�����%�����[��-�h�-M8A��
���o��?�9fJ,������!�bS�H�[�	A*�`T}��Fk�����t��4�"F�h���b����p#���9�0QA�R�a`����4�bR� ��#��Fh�!��c�bV< �����D\3����!��\1��"rLB���qA1�k�j��E���n/&f�&~���Y\�<
�&+������r==j�u_���x��v��I,�<���������
�s��$H��*��~`�����D�d�t4?a�����uk������������.7q�A]7��n�
��}v6���J��JJ_�E��UI��}���.�L��\1�z��V�X�g*����.��������{%B���
�B�Q�$�86�f���B��y��RG�9�u�7[�hg}e�}��0�aG��a���	�	����o��~����{�=H���[�B���]����$�:^���qo{�b���[e��D�����8[�$�4m�8d�
�H�y� ��<~?�x���l��`/o&�k��b�H���sx�l�����\qEa���[�}9�$E� �t�JH�F���&�_5n��0+�9r�_�O�j�,L�Vz�{Y�����v�gXfr��T��8���7t�� ����������B���9�T��$3O �k��W����������V]*���\����6`���R�bD��j+�07W�Z�������$lu�������K}/G%^�1#K�����7;�R2����/��{7��Jxt!���.��x��� ����J��3��mN��L��*|��}�r\8� ��A(M�z�l�:D������d����W,*a����8&��'���=�A\uV[P�1�m�Ng�-H��JW��$�<T��hO�v�����G���O�'���v�X���L�>�Sm����7�8rj��������������d�t����qB[g�r�]@�=�`2�_��p+�@V-<}��c��w���p��cU�����W<�p?�Ar�����a���b?��(�[�^+lhc�h���m_ �@���@uI��O��G����o���H =~��v*�������,c�����"���UnH��1��h�44{]��&�"� �������Q��pb��Z��9-8VMn�]q�2��e�=wP�C�8�h~��A�2I��/{�`������
c�c/�g�������*��Bmy&�����ne�*�J�7����8��7
���]��#����E�^��1=�o�G{%��2a����xg;��cs��0N��
���-����L����!7|i�a'
f�g-����J�����R!MPW����>&Q�R
�U����4����o�8i�����!��~q�����)��H��[)�7��d�_���4��v�<���8���l!(�!�������U���y���V��a�mg������ot���x��]o���X����_�	����W!�X5���!����W�D����p�n�T6��^
P����C���������Sj�I��{�q�Y�7'9�m�
b83�V��_��YO6HL`^2B$��@6�����U�j�D�#�*��xX�+P[n��{sI��O��\�+�uTP]�Fe�C>���u�����<���i;���8�S���h@���X�4k�So�
'*S{FDh/��������d�h��2�1��s���e��^��~����T�%�����r�
QH��u�O��X0b
9�#��HmA�"�jP���}�Z!w�#��0�Q !�ib��41y.?�.���h>S��c:C�D�zv:F�t&��i��w�O��m�O��D>f�=|	�]�;/|H�;�5'�7*���Iv�����r�>@�-<�H�����g�Nq���k����m�QbR�x��J
	�C(l�P�}��6�W����yY���a��U��Z�M�rk��/X�E��#�&a!kUL��PR�����+��G�����	���<�m�y^�5.XQ��T`�����]^�rx��XsN��J��I|W�( {�-����(9�WES��'td�.u��*��X��=���A�B<��� 
�V
s�"�!$�������g�p�qUS�g*i�j#��P!�:�T����l�s	1X!�7��I��{
�@��E���C���:2��{;�]���u��1{`���!��`#@�����b%�gE(�&���s�4�M���?pM��)�b��O��s~���`���W�e���0�D���]�����T>�F�m��}'�����%���5��9{��������-���m���xz���f~N�j~���S6�������`���HO��N_������[d(�����[;'d�a�#���>�EL��^���0S�����k�����V�	���9����(��~;����Ma���L6��7B� ���U�09��'O�O7T����r,g2}�p8�Bz�q����:c������)��sU���#*^��#���i8�<���*0��<}�Ma?�;F5�����Q�����n��(,X��
�)���g���>uM�$��\0:5U)�7����G_�� ��sc�3�d�����u���@v�y���x��B��8�F��0��w��8A�n�T6�\�
!�����9�_�;'N	�MS/wN���^���b�!�����gn8�[!0�V���b�v���)�����T���[�������,�d�T��G�av���Z�2>�����,�q��C�n���&�1��/�2�'G�%as�\��E�*�}�� �tr�����CS���+���#!L���4�W���}^9 J^q!�y��`#L��)�}1�����M�T��a�g�Iu�
&g��s���#Gl�Z�� ��'9g9l��@5��H�6�l�vM���q��{��s� �����}<��q�BE���g�_k�zm����g��@D�;*��2sd�O��O>A=�i���Y
�����
�U^��q�[`���OT��G}��v�I�KM�Ne
]r�U�.y�P�	n�K��
M�������!��C(;�?� ��HW*v
6�k��0�\���V�G�#�����M����	#E��f9�����!+�Q�-]j������u�/����1�'`n�vV� K�b���X�V�`}�r���CB"wOh�#�{��c�p,��1$.������*�~	�c���~\X)��NDzRz��F��P�����c{�������'BiB��D��L
L�����~ ��'lN4zS3wOZp�}gPg	�M�Q��>sL>����s�Ie��>GwJ�V1��7D���=q��j��������������*K+"�+��=��Y/��dz\ q�'Gk��e������r�D|��������y����m�����}����*��	}��.��`#y�����ht�����
O���	�4-@p�Rq�'+p�f'���k��
|�B���3,�B����Ut����������B(�>��o��^������C)R�a`�1�����B~[�LI�����~��%�����"�s�w�`�����&��.��6A$$�$h+Zk��Q�k��\p�`���d�#�-��>m�����sU�<��C{�w�M<D�!�'��z?F�
d�Eh��V��,���Ht�_�r U�j>��-*��^�HE&BG'fvk.d�v:ql�2eXj��B<j���5��+�P1�n�o8�J�z�P���TpXx:v7F*�8WZ`�qd>}j���u��*�|�=y����cgZ����z�6��t���R���4N$f�~���7�2���=��}�����z�~`z+�����Qo&�_�
a����<W�$L��Bs���8�$�����)Nc|{��G�QzO6�W��M��������"���l�s��)W\Q�<y4�jN������\o	u+�[��M������� ��T�Tz��<��N!���>�J�V2��{��m�]�1s2c�*�00-B��fs�K����7�j�&�����V�,���>��H
Tj�Y�b��S�9�O��j�=�!A��<��W�����4��:�I��w��G_b��S��Z+$W���@m��*�
��2���������<��+�"Tr�������))8�	�0
���\������	]k����u���6�{����R3��-8sY,5W�U2�f�@�L)���-�R��&8y���n��h��8�V��Y�]'��F\wT������K^�b���/��L;l���6�r��.j$,kL">�mz|_S���� q�T��k��*l"��g�
�����sf:!b�R����3���T���$�7�(��`�9:���\ ����}ML���z�!!8w^� ���x�sT����=zKs�8��B��(������}�o��FJ��@)Zp��xC��kOJ�@|����I$a�w�0p�a?���<?"!PW�� �I��R�q$��U
F&�7:��;\�Xk�HAL|lA�0.����$���/���o���4���H\J�(������a�
��l%�A���W���X����b���}-�_����?��`��s��6���?���g�!��P������7>V������0��
�A�3�4�?���R	4���k�������mRxp�}�H���xQ;4G~V����~�1'��G^Y��{�cLYqe����oy��O�t!�����)���a��h�0��h1�����6���<7P�M����Q�^�d���:����K��M�i
R�����1.h�,\��
���/���
����6�/�v",=P�}VI���A���:/���}���Fn>R�U.�(=c�����[��]�fz�/���h��`#���:�]�����~�!��a��&�dab	�������l�;��J���c�&�_�n�~��V�������V�\;�����)���U��xQ���p���v#7�I�c�rB}���������V��$�Px�,_����K5������w�����4W������HA\
��\�>�fO������x���X�m�������g*��R
t��.[|�`_"y�ah!U^!\����|��!�[`��T�����8�:�)W(�>u
<{����K|��:�AZ��{G��q��q���nt�M��-Yd�a�E�A�i]���T����~�YJ�}b��g��+!7�+.������lK]]�%������`K"�cD�0;�+9�T�Qq��z��R=�^�4V�@�A a�6��@�mc/���+�51A�b�����4b�!�/�*�I�d��1����c�"��y���l�E�����=�J�F��"��b��)U��z�+��^�������]���pT�2:8e��NXBe�+��pW��1{�gG�	�T���WO�������@G�b&�-��>C.|Wt ���&�v�;C�������	��E`��rGj�5<����P����L�5���
A(��rm�^�b�[�,>�����R�Z�����a��5��Q���^��a�L���,dY�0}�,7�nv�x1���B�T[����^���'T������1�i�;�I��?������=���O��u$;�x'D�w{��U��*��H"_b��e��3(�a�����k�>f�!���J����?Q��m���6$��bg$sh��J.1�o.���K�����:4h���Zpj��>4^T�+.���c�J�l4%Op�0qm@��R*��h��l��fC�����0j�L�D�v�
���h����n��N���?����]5�o8�Y��gS9qT�x#���r�i<k��<*��}^x'�h2��u�0�������O1*vSyr���,>���I����T%���l�vLk[l�s|n$� VK��^m�P���U!��ilZ�����U��Q����>��Qw���/[��?P�#9��g����Q���*���v��B��������v�q�5zF� W�<O��Cc�k7'�3O�,���a]w����=�QMA��'*i*�4������H��8\sTc�8�A��.������+�w1�jL�S�������V�!(�@�x:���;x/�L���{"F
6z�?�l������q=�Pi���d$!8�(�����g� wV]�t�<�������	�}�j7*���x
�L�� AD�#�(���+����n*������7rW�	�w��z#H�n�����&@�!m�V�����:��|�����tN=,�)86��y��>��!<p`�����B�$�)kkM�����DLzS&�G���Ks��?��(��Q�u7"���v[�����_g�5���U��X	|�����@&��kvDO
r�U]l�7*�0��#�xz������'�S�]�b���Wxz���z���2���
�n+������+<L�l��vj����������}��������*�+>��i�]o����1�d���Kf���mx[�J(dV����.$�*�S���d��7B�M���-�B<Q�m�����CLdx)T�Bf��3���d3�����t����F�2��T+F�26a����
�%�}t�V~%y�@%y��2��	)����X[4�~V���0j�� �!1
A2����M**�"��$X|�J�R��:��\������[`7U��\���?,P
�or����d#C�(��������:��O��d�'�s���_�0y��e=<PzV#+'{{���(	
�sE�NRm��}�=�pW�����7*�<��$O[3���>'#61$��l�7����M�j���C<��~0A�q@��Y�G*�����v*�bJ
�v ��c��@�",��F��&�h��]`���-C�����Z�����k�w�����N
���!��@���
E�e&.e�>��������!����k��j��x���L)K��!d��x-�W����7�*�'S�o����2�
��[}|�~o�����|s;$%��?���������gJ��������e�|�o�����,�gEZ���y�RB�����i������_I-�j����-uH|P����R����+%�oL��W9%%�2�F��#
�)!h���Q}��7��U����x��aC�g�������e����������k�G}�p���R�F[�v#??����Z�{v�i�S��?(a�����iM����>���[|^S���-:U���|~��7��({��CJJ�������`�7z��4����O�XRa���/��[�k�Z�B�������_�.�hY�Z�R��X������|��#Y��p$������)��m�h��������� ���}7����J��|}�	O�5���;��������^��A���?�0�M}��_�Y���-3x���t��>co���t'[��Diz���%�"�{xcw�t��}���M�W?�Z�����,���|M��eC��C���C������}���9���&?O�>��Y��%9������?t���'��Q��M)�r{��~}'~sB�n4����
9(���>B?�H�9V�[G�O��o�ug���Q%��7m�����q?\v��}|��-��CX���*���	N�}0����d����`���g9{���,!-�V�d"h��}|
�~|(#%_������{�~����(�&f_�*_������+=�����>w9���n��x���-�c���> j�$��������D"�%�5�dI��F�VE���Tc�1
��$����J�n�O��e�����S��:�S��{��5��1G�{���4�wr/����?���@��D��g�(8$Y�Z����e�_VQ}�����d,��5K�86����1����+��5�&���D�mo�;(����~}�����A�M� �A�~f�K�L���!�=��`9���+�;J-%��x��W4(�D����1��;��/�+���'|a/
��}��x�a%r��_�kSb5V?����)[?���gm���_�$��gZ���~�a���P9/14T�l�����m�\�>>�//}�#���kk��8�G�t������K�v�����r'%$/~};.0�������0����
Gr\��p��1���&��[�>DU��O�wK��$�kF�r������G,�t}��`��q��-B��w��U1�S%���7��S��3~�
�|l`h���K>]�Db��*�9�e�������j�7G�&&U�o��B	��F��_�����h�;O���1|��	�x��F�	�w�w����a{��������%�����"h���?�c���mo�4'C'�7�,�R-!����GE�������-\�Ug��8+l���W���v�L�hD��`���>�B��Y��K���v�����%����N�!��;��2�I{���"e���e]����e���d)��/:������������9 �)X��w������{������@�m}v�����_��V�3��}����)&?�\�����\�����H1�ghe�LW���X!}��q��1h'2�\���1��,C���B\@�31D[@�7V���+<�~t:0���7�q-9"W
���>[9_��[�/�J@JA���QK��}���N�b����8�I��X�Qg�%�[��~��/�_���hi]����&l�d�%��7�'���3�Y���^"�����Ld|\e1$���cf4����2�����bv����d-3�S���,^�G�-{�2^�ez��2�.5��{�)��smK�`�������k��u���tL��V&��������Y@����Qc���,�g�Vau){�e/�-�~Q�$B�'�RJ�|��>�����a0��k�`:K�O2�*������"Qx�E��-��E�a*	G��G�	i���
q&�s��f��(c�^�������w�rDq�����k,���P{t��`�=�?�(2f�2V?��w�p
f���qE�Ak�����i��$���8�K��)����L>���Y�������35��K��9K��ctW�h�R>�S�~cN-Dn�c6j�b�N)/������W�������,���=r�1�f�/qW����a��6�:���u��V��x��A\����esw��|Y���#�e����(#z��^eX�k�,a�'?'�E����iL���S�B8�>��&m����������E��3f���;!�y�j��7��g��j�&d��5=�7�v������P��Xc���[�b��>���Z��~r�Cg|sCZ����YFMV_���(����Q�m<�����L��(;����(��-��Zz�0H�&�&��N���,g��S���n���S��2fd
��B�L1����uq���T��0A26@�h�:]�%��S�b��(e���da�T�E��)���9�@��f!�7}�xqh��M�����_0�H�$���P4�_�K�0'SL���r}����~7���N7Q]�W�89���u3�pmC�����~&3Y;P��.�0C��#��cu��#����p
������K�?���h�`��F��G&�@_�f�Qt�V�2,�g�g��6�w����|��z??�`-��z�����e'��,gQ�U����>�+1{�
�����W�J����M��x!����CW�u�L�M���@��Cg\YJ~�C�!�~f��C��z[f�!�$��6�5oL!�����X��f�*�
���%d��K������b�n����u����d�a�G�j�/(�-����>3��%s���^����������|sW��chlu��;�2�z���t�)E��6>�o���D&� ������i�=ul����oq���:dj��<���m����d��l�I���nE'�{�q�.^��C�y�@3o�|�K��%7I)�-�;h�1��w6O�W����1M~��u�=���.
���!��3�`�s�a=P�S�7Bf��2r��M�h������������R�;O�(c�����H�I;�J}���%��(�Dk�)���:R����;1j��N}��+nl�Y��5g�������C*�c�:.���cE"���U��'*�A�$����Q}��]��"-eS���-�����S������}���.�� ��/���?_��#��t���p�N�Lf,<�r�[����?���*�!�?8�*�u���Cy������D�=�IS	�T�8�f{�nr� '��g��tW�� ����$6�X�"���&/���O���n���=rL�Z�Z�qG0H�?q�J����EbT$B~����S3MK�?r,�����$�c:	|\(�'%E��Z�|�&�J~ve�?���m>�MV|h��Z���h����od��O�&
�\�����w�I.[_�f�m�zN�	IX��Tgh�b�������h�<�� Y�M�l�x��I��k2~�I��7\De��>��J���I��Z��K��s�(D��(-��Ap2�*�v������������`D�y��`:��
6p*����8�\������B`��S�x�i>�k��G�+����
���5G�?8��'���T�M{�sM���!D�I�B��Q-��TF�P���q�%��F"9��v����c89��	�:�6�o9����
L�����	Nbi�����xH�o���f����HF�1Ka6�(�H(�Mb�Ab�$^�}�i�}���(������m;i�U�v����d���=�6��I���V�{�q�����+2�wxl<����L'������!��-�$D�\H��8�	��������@��]<���{$*�+L�C%���I��^*�����%��,�^��~"����������	�)��&H�/C;@���U?`��
�Q!�
���#=�����a�)���c&��i�y�Hv%��9*!pI���k��y�:�m.L�]�:�=7�aZ�&���_����W*������b�i&�an�7�
�)�jP#�0�I���3��;��}U�+���J*/��1�0�\�8I:5O���M�B�����/��J���HM��&��I�6`��^��&�`0�?����4�|�
��O����?f�7h���V���!��:���7�����rT��$�MD�,q��o 1����t��(�
:0�f�O��	��:��N���x~��w��1�H��@���A�a2���������(=���I6F�F�a&�d1&A�](����M�Z`i����R	��{2w��%�CY�q�pUNr���D�b��l�T�k������?��&Q�G��k���T��^��G}���s�@w�������� Az���}����( T�������M~����H�|����zZ5��,~� ���gK�p�5/��M�����
���|�T��P<>d6_�Y�pV:g6���A��I�/�eB���r���B�H�%��qa�Rh,E�o>c��+9�H��XL|��>����]�+=t���.����Y��������s���/J�=(�
v���������h�L���7��o���1^�s�n���8���{	�/?�����fm��`�����g����A��q��*]��L�	�M�*�bB�
�lu���c�2LBg�|-���0Lx��;������ek��?�}�������*aem|`v�C��{�Q�N���O)������i��2l%}d�)�y�@B�&0�xH��6t��������_H��0���XL���>����~�z����C�n���Q��d������y~`n��Ff���y����"N}�S��m�k�$��9��_J��x��L�	�f��a2q�����������Sb��>��gn�%s��0�Q��X���V��b����B�,�������b��B�G��k!<�Q��	����u�0Li�@�iA� E�,7'�������I�J��9w���">�}B�Y�_L�6,���=����p�"!<3U2=��X@\g�[�t����duO{�<VZ-�aA��&6q�Y�P�;����cu�������4���6T�����7�H5[�����$��(��>u^����[�5:7��dz�����������.*�l����� KL�=���rG�����%���vf�P�7���9�n6��j ����r,�,���J6
����O�]1�F�o?�Y�b��'J��1LX���A�=pAb{�O����=���n���Ve	��L�^=���"����q�!�C�!�H��	y�G��91������}��V���f1�~��Qx�Fdj���t���2���-cf'��(���`���lmv6t�,��e�`cNE��ctH6��
��%R�K��2l����/��i�@S (�|�6�W�->�	�!��r.o<h��_��yk�>v�B+W8��+����cs���B�D{�;uW|��H_�W�M��F�k������kK���q�Q1�#�OkK�5�?g������������9;�I����!�q���=;,��V���,����o^)����?�bra�?#8]1�d�`6qo�2y3��6���N��q���f %������a�������S.�|}�*��)'����|����=r��=dO�$�����|�����O:�./��&����y8m>����'�m\�W�@f�-y�0.5�z������O����������x�E>LF��X��$3�%l#v�P|j����Z*��%Nd�F�����zS	SV�w%"��5���F�a������N�Q���~�&t�qX�a��_LF�o�f�9@r5��!o�6[Ev7V���L����o��?��G�c53����0�D`��h!y��X�L�$$Lr�E^�M5!����?��������#�.��{�n!�i��q�MUM��u�L�2#��2�:���{Q�
v��l����Y��6��>)1�[���I7��%����KGe�vQ�������@.���2�;"�S��3.~������%�
0�za�>udJ�!L�/�<{����U
�
��]����d.������P�/�"��(�4��	,����?E���K��4�{�X����]��>��ye�����u4%�0?@7������#2!s����N.�^�e�;���Sr�u���l���|Q��� ���pEhjo�8�R�Z\�.�s��%^2��������������a���	�����Dn��c��}V�*;J�$J���p|�(l�v���Cgo(�&��6��L�w�}�?�m-A7*��_�t�k'
��=�4�6&�������T��Nd����~ct���o�a�">o"��NI�Li�&�_�W�s��!��C���x��p?�����A`'Q �C���%�-�Q������Q"�N������lz�?�56</%r�����aK;z�aC���	�M�TkQ�� �A�x�;
">}?#~�dK����eav���
�])d���	G��j�iM��J��
��>�<���j�K��|0��������1��x"���
I���/����6����,Y��Q��f6�LQ�
����j����V.�Xw��6���5�}� S��C�iA�"6�6S�}tN��?E��7FrS��a#�NF�YFD����f'e��z�E�����L�v"�����v�rY:<T:��=����Y����x\. no�R�
�WZ�?���	T�\^���D�{r���!�/�#x���fJ����$��Ao9�����Ax�t.�f����vX]����a��lw������Pi���F
pxy�1��I���V��E��Rw���;J�e?���:���]���0�� y��o���y5nT��Y<#>C,&��]_"�D�/�T�d�g����K�-�����r
���O8����V|�?��
���*��"�U���m9�@#��`�%�u*���'��?s����^'<�(q�XK��0�<tM����U7*��3C�b47Pr�!����
�;h��w������qC��	�#���
��=�[��{A:<}��B�Vk2}������������$���D�C	,	�4!��N�(K;z/�b�zFa�m]��	BgB��u�:�9�?TT�+
�X���}A�*��_}@:�s�@# N���y�������_��L�`kP�V��!�^�s����u���d��M������B�fg�����L����[�EI�����>@9]+��='~��;����@�z

������^�B�}���O��bg������?�� ��Cc��7�s�}����_�k���hyl��%�}�>��"��: :���W�x�!g�
.�����[Zi���E�_������r�)#k������xr5��������^�����mT�������(��^k�P�D�Hhg��%�������|5x1�l0:;���vp?���d�w�W);;��*f%��8Cl��u>�w?�.�E��I�����������.�n��Ul/�5	d�r��\i�`���	\;�a��2��K2K������y�*~�3���|��c���nm��1L#�0�M@�-�=d�Q��f�+c����*��5���Q�4{0�I��5�Q7��TK�wj4��A�X�[���{Jpm�����k�0b}���f�0-�tp�C#kw�^���y
|��=����	tx�������y._�E���0�%��r7�G�*�7�1�"?n�Y*b'����o�`*������{1���b�r+��1/�zo��HnS:����CDln��m��3��<�z����q�f�]��j��o��s��G��f
�9�M���sVk���%���������Yl��x���TeY���"���g	Lwp()�	62y �>��`���6V_����F$��t�gK2}�?����%���_�����j��h
��������C������C;7/��Q_<T{�L���=
V����^�t����Z����������<8t��>���5�F���1<�X���yc��1�bL9�Nb��2�+��������6c�j���swL�	H��
r��Z����M�K6�����As=`���P8|i�qMN�O(��l=��P<��EA���\%�	�-/�Bu��[�"��CU~���
�a�ZE�2i�y]��k�v��Ea��oqpy���'����U����o�U��~l�*����z�L*��������XZ�M�,��1�K��������b��^�<-Sy����:�s��G����������@�/G�,����~�������b��m�)qU��W�,���	�%��	��x�(i��*���S������t���*���p�K���������ifQ�0�����^�0_W{b-�
!�U|n�E|���l?��N�Q�:�[���BK�EU|~��_���;iV1LZ�<��)0���ujv�D�����y��35`!i�;����@��f�DZ�
_������A���K��������ev�WV/�g4Q?�A�+*������`i'���M��vx�|��uqe��W�G]��������!t7$��1�����
h�6�*�f�d�'R��Yr�Td(���@�R����`b���G V�*D��utpGc�����L�����|���/��CR�Hc���������B����;���;��/�������S1@���t�/]_���@���o����P::����(�k��2G���)L0�aW����X���w��O-����t�b;�x���A����^E��#1j'{�`���k����������������b������f�@�oy1����n3re6t���H���X_�X���
l�:��]_(����?��t���:H���=��
0��8�;d��{Q�\(��m"5Yb)\5����P|eK��oO�J�!-��f�����o�z��3�5��e$2aaS��>+	l��U}���&���%GIN��S=N��a��jl����'k��������z���������\��$�!�Q5 ����[�U
�4g�
J�8�Q_����U�~�����=���P��6���,m��1�E>�������3�E�A��Sr�Gb?��q��B(/���z�O>��s�������/�e�S�p�S����V��7����|)�w�nd;�S�G�R�W7�.�o�l���� K�K�o d��R6���27V2�#���Y�lt	���sc�W��j�)jc$�P:�!{���Rs �$I���<������z��K��$��}���d3��R/��W���-~��]��(�������O5^�3T�MaJ���2i���&�N>�{�p(>pB����1 ����Mn�s@i
������J�Pv�Z���M�6S�;:� ��\����L�T2�N3���[�2���#�shnA�����&�|U&������%C�2��Q�M��`#U$�w��z2�����,��-��q�F��	Mq��c��)cX������y3S��
�o�+(J�U
N(�����S�A5�v�T����}q������'CU��������(��U�E=�?HQ0����������J2t��;�n\����}���8lb��?���3�����k
��J���������d��H�R��N�aLCE��
(a�c�B�82��v��1B<RW�)Q��U�Q�]���} ��m�B�Ig���a�����R�L�{�����i��j�������N�`�����Jl$���qEa���
��t�}0�?�JR$�+�CV����,�Z?�w_6��@/����)i��@�d��W�s^k���%�RU�����_@I;MQ�a+
��G�W%ru8J�y?$����^�n��CV	I��DL����:,6��@/J�e1��%Z'K�?)�h�u����
���$�/7��f���b����e\���p2���G�
�:p#��Cae����-��(�mP��A��]qz/���(���A�%��;�6�Jx���r_����$���F.���=gq>%�%������+��A����1���1����L�aL���3>@|��Z���;��KJ�������=�R��%��|�/d������2l�[�H�fX��-zm���d!}`�?1���}Q�"!������m�b��Yr�@��R����:���O�w:�%������R|6W��[�li����@�-6!n�lD$P|�U[z���9� bk���KW	iV�|~B
�C����������S1�d~`4�0��"}� �ck�.��-!3�"|���cA^����ZX�N7Y:e���Hz�8��#A�u9�!�[3�7�*�^��}����Q�m������F8�uX���������}�����HW�$����6�ac����1�*���1�:ed�O�����������w��o;Vx%�Nkl����
C�zcF���)'����;�n�����gB�t����T��}1s�@iXl���02\N�	�))[dJp����U��:�*�t?D�q�K���Y���L��?�|�?���@a��E��A�B\Y�X��Y8q6��	e��N�xL\��{P[P���j	�S@��P&��q��@3��<d�����AI�����,6��@/���������#Bd����i2��MU�_��.
�H�*�����,�Pg|��x������^����8��N+A�u*Pw��w�p�����=8Zs�����*NDO!��2Dx��|I�1�d!�t��s����~�
��������|X��X|w��	L��<�O-vn��u�B5!~q�Q�z������"_m�-3W|�h���q\Y�U�����l�2��b��:*��6���l�*�&���,\Xtn�3������c�	�
��;�i�c�|���y8�+����F�@�����`e\!]�V�i+�=v�"��B����,�Z9wOw�2��������9���(�]�ZxC^�|p�^��/g6�tE��n���
oCE�2F��#Lr,(��v�sh[��Uq~�>��c�/OQ���EU
(�?�A�I���2�����~���;��5���r	��@�'93zN��H��T���+	|���v*���i�Wr�C��(������c��T�������q>�~H����~)a�����P$7����;oPE	n\%.u��1�2+�0����-LX��U_��a�f��r����g(���_v�(����	2%�,F�E�s(��o�����f5���^��|`��]]�	�T4.���vy��/o�Mh"}.���&2���v��
���-(h�>�(,H��0m�%Z\j��$G��`+�K��z_����Mm� S!
����+e]P|�T���TL1�z�����Q:I*����x|��i1
���2�|B��J^K@:���,��gV�x5t=�}g�C�s�J�i�=��ef~n& {�v��������������%�}���W=L�y�f�(��� Ef�����#��M��w���;��>���oT��m�?�S �L��@�m< ~�&� c�R�>8h^�{����o�i�l�@Y��\4�
U��AYl��>�,/XR���b�F�c������TW�����N��0,	�
_�aL���JD`�{,�{
���5�yg�������C���I�O?��vU�!_��q���\�k�n��!/������BF�f�A|��������FS^��N56R5�N��MBX�a�=h���>0vf������j�|&��K�u>������� E�S�H�\�G�D|�nL#�F9����!��J�ML�6�"/��Y����9���7>0���5��"�^c,#�1�I��Q��)o��Z��E�S�c^$�`�H�<�H�q0�Y��d�S,r?n	[��2lj��q�7m�%!�o��s�9��:���0��;��H"!dP@�
����1�W���Ux��K'�-������� s��C����v���4 J��1L���w��\�63���rf�y�<�q_"m&�Kg�O.����cV���C��=8���H��9����$OMOep��;�GH�^�m�������zOCN��;�XY+� #+���r�k���!��d�(�������E����>�(���a��c�5����,��L��C��?�+�}d�����_����D����6v$�Dr������N���K�k?3�� ����6��&��A�B��
�lvga���o#8����CV��O
�lWr��b�%>q?>J$����y���k�����r��N+B����+|J�Pv����(��+���Z
A:�����dK�^'���HC-�@6x}��A.(�-(�YVBj���,�81�O��q�����]��b'��"���E������T���j����'��0�������J�i����$&�����q�=�=��!M-2�!�vD�,S�����(GB�~6���
��5�� .cr��;�JGw3r�b�� y����|U&��oS{�Y1������~e|��Nko�0'p��D�2'HwM��3��`��yN��	���Y	�4���I:4��v/������%���B�9A %-�|c��(c�+�#��>T��W6
�z\Gh!�$�]����~���9���2�V��A��m5[�qI����fwZ?L��C��.���x���No@����(���;�	l��e4z�?�0���t����2�8�h��c\;@Y�(�bB�
h�J���%p�)@�,-2�$k��B���<���-rC)�y!����0��������2'E�x��b��_3#LF�"��A	Z�����6���d������k��am��k��l�~�Q�!P:�ac�������������~3��	i��I�v�0=��
7R%��`E2�1�q�F.>��{J�z��Go3����k}����j�+���C���Q*��P��4$�a��fO1�.���_���J��$���*�"37L
�e�Q��AYlz�7f��
�s<�q}-���O��������9�gn����+����n��vZq#_�\����]G����)+��tF�+���}�#R�f�x"_o���cA�#�L6�B�I+py9�W)�����U����}`��@`����gz�d��O98���q��vUacS����,�k%��r IW�E����K�3�6:� TM>��PF��7��.����@T ������2��pD����f���������4��_��W���lmg�]l��>jt|p�5�B
Z�j3���M��W�v��E��7C���J;L6S�B�
_��wHq
-C*n��1����::�����R�ot%�cT���a�����J�@�(��<�+XK�X�W�F����~>���s#�?����I.7pmWll�T2}\=���Q"7��3������u�t�0��(����
Q��kp��`��w~��?+!�
�����dp��*<����]��c�:�>�S���������+}�~�-X�c�v��*iv8�,%�t��FYl*���99��{��<F!}R��{w�[>{(U v+��`����b
��_�+���/�����e?��)<L������n���V1������|���b�>BS]m���KG��u�M r���e��q�w^P��K��|�NK*�#'��[q`��f7������c����?n��z5sx��g!n�W
�@
��CoD9���E8�GV�]T-���jw5��������;y�r�#�<.q�I<���l�~�/����,��b�u��f��c��z�MO�����Wk2�)��y�!�������H�������5������A��������`{^+:��E@����9Li�z����
�LYW��<)��T�3+8(�9�:{�M��|��x��F/J�Ew�������+
��*�����C�����L����o	]�W�^X�vUb��}�?���<+���w$��cx2YwUv�	;6r�.��b��$��������VM�Y|�}�����g��1ly���/���*��*p��;qjA� E�q������s������4sw% �+���k7L����9�W���(ST3@��1S>�s�>������7�Q"�G���Dq�W�lW���(�mP��%�$����{��7��|�:�@��Ei>�?���������Q�B|?���9;���!T1i��1�N��m�%)�5����]f��F���9���Y���p_���S����iH�tn��9�d,�g:���1����-ma7�.�y��0!����Q���I���k���w���/R��[�����L�j�l����nn^fI>��g�S�)Xk�-8D��E:kaR�<�`KRD������
�n�8t����	*s������J���E��$?���������v�����u@�
R������}��n�p���L?k�����A�}c��y��
1L��^f��y���l#��`�A^���D��%T=��{������
�f��d��e
���r6�9�S�9�Oo�3���X����zIC���~��%�6����+`e��a�q��(����dJ�M(�+��wf�Y8��?!UB�F8_�p�~�lg�/r����E�<F����*��(B��HXV��O�����v.���K����������<`c�<���{	���z�+z���7i�`J�'r0C�����m��4LW��E��Y�!AE�|0�?�L���)j�HG�M���� @G�lig�/�AI��������Nq���h��+k~;��xu��;tE������j����f�:J���@��qz���cZ������+=p�jex�*#�/�h�q�@m�
c��
|^�1��Y�x�`W��5��k�E��?���x����q�&l���FYl��^�%���,Y�G��C��2��_�������_������]�x��MWY8����n�}`�X��"����X4 �G������Je�J�8������;�:���������?v�}�L,E�w�-�?6��������2\��K���x�{�0wF�O�=��d�D�="?���]G���/��a���`f�ZU���9������,a���(UH*�GQ���zC��L6������������?�;�q��B��*>�h����.LiV�&-���.��O�[Gz�������2���/�����t��a>�����p%� ���t�J�������;�����j�������E�?Y�����JS��v�Q�a[(��0"��8�����V:�"�����	��Y�2�*4q|L�2����2
�=�_�P\k�����(Z�������	�Gc���]�QI8���P&�=eZ���"a"��'59����U�mg	v�_dL��S�o%����M-�^������
��������Fl�6�d����8|����l��rKQB>(�� �(
�')p�<��F�/+�8��?�%m6E�_��a^�~^��6t<
4������2��2'�����'�����#,����^���d3�����;�a"�� Bm�� :D�D�����Z�5.���">���Q0;Y{��o��_,��?5��N��6#p��S�G��|p���F�n�g���x;I\��H\F$�a�p��$�8� 3W��4eFI]9��Q	����S�����^��X���{P�Ms�z��2�B��LC���v@B�p<�m�
�������e|s�� ��^(e�(����C��8�F����K�Q��l����g�%Y��f+�7��`(����u(\2j��!��?�n���Zo�=/�;�}8��/L���j��I<������I;��3}���7n��}l��_�
���������2lEiL���O\��k
>p2��kG���
s��<t	�2��f�i�Fuy_��<���>$��;�����q�%%�7��4�M���7��o���\j7WVN��F5�Kz�,����s�H|�p>�C�kz�����O��bL;;d��=t�����$M�2B1a��*rR�{�~(��w
��?�����d�������;���(���xIj��������|/8�0U�Q�����MC��f�vc��/4�y��3���B�>X��
8x����V�z��U�SlVIm�Q�J�@/��u�s�����pH�����"���@�������sx_������~�����NL�$���k�$�q�~##�1�=D�M�����>��,����Y���~�5y�HM�N�<���X������Q��f�o��5�-Y{���iY������z��(���\����1����"�Jf�w����(ig��^�b�k������~�D��U"Or����|�T�~��^���cx�<�����2l*��?'6�:P��������J��<��U���p����)g��}���1L��^�����m��0����m���l������� 8L{��2-���M��5|,�v���vu!�&�����N/�XliW+���v�f���(�q"��N������%�<���7Y%$��A9��lp��A6��z����sL��(�>��a�����g"�z�E��%n���H�y3�o(��P�N�
��F��ea&��l�Z��J>TU���*��4'oj �6�7W�g��+��7Z��+�������_�v\��k���*Qh��������������c��"�� ����}��\�W���E��gj��6*�@���xkA�q�L�I����>��b������|��6���e�i�7�bsvY:�h��q���f����!$��sT�	J��X�!���0���XL����s��!�q��D�3��;.�x7A�������6���q��������,m��,�K�5/i�v�+.�M��x��4�G^�>2�Cd#|���9�i}�5�aL}��`H���Dn�?<�lm�EYlh�A�����J�cv��y�}�:
�SF�E��RHr<������� �I���$Gx�;\3=I�8M��x��15�C)>�k���Li�!�)��\R|_�*������=��:�Q�!��w�_���&�o��`�#�b�}���M�{}����un��d���P�pv5�/o�
Q�����u]���Y�V�<�*>w3�����a��6���x��>:4�i�T�7P�����\=v��9�}��V��M�y���o��s�
�SF�7��n7����'�-|��a$0�|p0A���F�K���F���_���h���E��:�4�X��T#6��?����1�aZG�|�����k�$��������B���+���J?����������\�����Iv�f�c�~�j�����h��q�fr��:�������������ji��o��=���
����o������&{��."qn[�:I�!�i���v�����&T�x�V/����X�}��M��R�Y���?������|�C7�p��M�PDz��w|��]M����e���D�DQ#n'Y4j��p����a�R5|aCN��X;��������
J����,��:����������o���c��NBO��xD����w���(}(��FV9)��������i��`Y��2I��U�s�+#�83�M?@�(����i���wQ�����`.�3����j�|�"~&�%jo*^��a��H�����KN/�����re��8F������>�i�u��4�l�!0b�'b�����|�F�=��[���L����W����Yak;���M����VW]���Pfg���2[�"w:yJj���KM�<��vU��2l%}�)~\� tI�?P>8Xq�]����2�����iJ1���s���pd��4�*]�<��";9��\5J�I��7V�Y�w��!j�Bx����Fi�����:�}�P��zP$���s"�bOG����?2��dqy�����Q���H�*�0
D�o��������&�Z��}����(�g/��>�;���@�4�"�@�-�!~
Y�~�����A��?G�>C2�7]	Ma������Ro��y�Il���g�2G��A���2$�.��P�R���7�c9�t�T�������H�� ��E, �E�2�J/�o����G���t���y,c��[�=(��P���#��z����K��Dw���7G����������_����
�4�@��o���
q����g�2�����(>��8]%���S��2�ay3���4�b{��1*t0r	���u����9r�S:�1K�?����C�0��Z��$����D��'�$2(��������}c<v�2J;2����_#e����~���d
��1��Q&������f�Jd_[Vb�&� ����[V�p�����,�AiY:��h\1']���l�����33]J!�&
���2U���xx����d���������Zh�h��c��W�9����!�BWiD���Q�5�O�h1j�7��@�I�U.���Q&7���u8�W����Poe�0~��\��
�6T�p�a\@��
D�K���G6aL]�]��q����CU
k��dS�U��K<�?�E�2�����f�\`g�}�3�&���%�������F\��|�� )���%�ND�	�GM3)���L��H�&��Za&^���Cn�X(�����,.fj�
�{��������#���l���h{�����J��+��>!�,�qEa%2�E��>����r���W���L4�g�oEH:���=y�M?@���XY�Xm�2�6t������&Y�v�]X�l�@�$wk��l[t�i���S���M[�p���.���1+����;M|~��>�[\�o�N����/���t�>�(8��{K::���������t:(%�>�MY���(-_d~(.��8���p�%���E��j�y����Q���,I~��H7c�����P:�!{J���"�dH��u�f��ba�y��J�O��Xi#���SWT�7����Y�9qd�������T�/��;[����JY	 �s��4�-�`rQP�y��g,�,�gjE�F)��-��z&����&�CU~��a��J*D�u$�Y�I���/�����I�a����=��M����,��[��r�h&�N8��(!��qC!.���g�
RN�?�
t������(&c|������[�(6�9/r� 	��]("����Y�JV���3
�}������C��9�,SX)�e��R6��z�	�>LC��L`�c�@����"��f���R�[HF��k�4sj �:(/E�U�4QE�[a��@f����v������W�
�)�n�,6
���gi&����0������g�8���{; o:	orF.��g�*�:�@��
��(�d����"�u8��"o�4��\���2��U2��	o1�!��7���fdN	��4��?��H��8��O�9W�h^6Y	$���_:��aS����(D
D�/b���}��)�
��?�*k�r�U�2���(S ���_"~.�B������{G�\�������CUv������\(�vU����eX4�y}f�*$]`��������cY���j`�;���r�]5��M
��{�c�������M���@&h�Vy({�u���>��ru�)@����� �J�zz�)�	���-��p����!*�|��� �,�����E�7�%q�B�W��s#9�A���g/>Y���P��c�����iaz��%�s���Ym��)��M������B
}��2r+!�������0�|x�>(m��_�B�����q6O����0z�B�_c@��1��8�~���{�|{@Y������/6��zLn�kB���/���w��}����o�'�����~C�������5i�G��$S���Z�P��d�)?)"�^	�_;4��Vz������A�N�X;B�5E-��p�T�w�6���m��#@6�3xW3�t/@�
�;H�v^Z���'sA�$��R�V��~���q
S�E���/6f���EZT�����+�����Z�gF�t�L���������]Ux�����#�TG�G^�!�J�j>����I�	�O�v�u��������l�`��_r��W~�V����_��K_��+3dU��9��,qW�
��Q���1�Z���;=�:o���/|��=!?�)�~yH��g>����oGm��]9��I����y�$����k�������
��F�s�d@*���7J��b��{<4�X*�������;#���6�w���v��0��q0E�0lm�(����>�9
����e�>�F����;y�@'��i�pr���:���6����������l�k�/.P{[�-}�g�0�g��P��������f|S��������q3�U|�����1��&����������.�m� ��b�w�@���'U��������u��W���$v�%5!�����r�m;T���B����������^s/y7��fX n�PU�:-���M_�lV�,��/��#|�+���#���@���F�����nQ��J>��18gvmW^lj�`��.�@K����A*���:&:���E�d���z��orP��=`r��e��y�Y���6tH���P�����M�_`$�L�OC��Mo�vs���E:��
�X�2�"��+k
��/+d%x��T��1��7�4�8�M�n�8�v�$q\ ���4�_�`������Ck����
D��Vb�1����/�d�(s���[�]�>h�6���n���sE�d%��0�o(MT�p�ve�i��e�s������:bl�V���{�	��o����b�ggxrU�������5G���Y]y����J���_��9����F����26d�:
�8�RQ�bwv�Q���1��J���/���M<����JXE����>&U��#�b�E��0�yRq��l�E�o���cK�m�C9��H��_���}��Y��	�eQB>(����5�	�{����$L�d�8�/��{ZG��i��I+l�[��b`��C��(���O�s(_��HU��=g�	����P<��mP<��@��`f�������/}���&�����/���E��y<:�+�-��aS���}����!�,*�W��1������w���~�M�c�\J�=�scS��Tb�q��(�C�|N/�)}���ue4:`���)�R�=�@��
�4s1L�X�������?UN�b���s������������?O��n�eP��^
��ID���K���������:��Z����q��$�i���P��q�$1F8^���� %��S����:�}@�6S�F[��!"�_MV�6|����jG7 �l��b�.�� �����4W_3-�2�g��2����#@�k"�
���Qe����au���t�@�����`�zY0>1�"U%��Z�����
�U+'��K� u��F���!~p��a�����Er�5����o����Q�a
J�P�`�8;R������@�j5pX|8�����)�Fw<��-�9��a}V��~���YL���a�z��tt�>3b��M'�G2.�s��U�3�%�T�!6��>n��S�Jx�})u���D�=JlR#����az"�nW%6�������D����XlF?���O����M�9�"�[)>�!2���������v~�A.����5����$�����
�[�E�7Fx�8���x����'�z^�?8�6�_=�X�EczE0�w�@mh��@�q�����\r��!h�����o��?�T����}��g�a�����~lz�B�*��b�G�A>������|���{��}6��li7(�m�T�z^M!u�����������^dG�o��BrP2fUc��1v����A�(�$�C��0F��^����T��S��#��W���T ��4�?H�\Maw�t'&1=����z�h�y�8�o�W7��������e��T���NB_@<����I�`���\vOd�����vD����x��2>����ed!��������OJ�t��h���Ys��n� l� :�0e"� �IK<�����,�K�n��z��W��rY�����CU|��D����t��6s^<�@,��?�Z����W��Z����,������O��j|@���j�v/��m���p�<J��"�9�c��^B�(;g�o�J�(�^�Yl��-�J�>�a�%����;�
>8��3�
��+��%CW�qQ�?1�U?
��C���2+X�y����9F$�"�Y��l�&f:�������$!.Jl&�=e
R	��S��`g���M�&�	�$�l�O����PL�gmV�&�/���`\�':���$=d^�za��	,��+������L��%:���5���&���@��
����P|J��,��qh�+�j��z7�|��FN��.�5�?��IOc�7
�4&,��qP�~A�?�L�����g�";�D"�E��|���mE(I��+B�8V%:��wV�mg�Y!}m>������
�hw�����L6sV<�!�A������<��I*���k5'+�z����q=l�/s+lV��!���b�qR#� u��~0��S��C
���;��f|�������k@w��X>~���m9��)5��C�J��T��k�����������1?��lX�v��(��P����D�0Z�3��AO����%���	]O�`���f��-mGn�B�U�M�U��5z��������wY�����;��2�
�"������ @��li���BI��nS�������h�i�����������Rc�I}y����(mW5��aS
�k7�N�����?�W����F�x��tS?���������yV��~���Y�$���?A��������{�Ihq�����T�2�w���
UF �d�����X}�p �I#���r�O4i�1��wU�����yP������a>���>.+��[��*2���n��7o�������~	1�1�����A����d6��Zp���*t�T�!t�C��cL�������z�CEjc��P�������,6t%=\��B<����/��a5�����
�������q���7
c����j,�k�'
����'��7�L���
��_�z����lG�.|:��7������!d����Hl^ye�p����a�v�@|�'��zf������e���x\��H�����^f�����������/������6������~�����s w��;�C^m�Mi6
R������[���x��O�w�_�BIw�g������U �A�F��
�Q�.�P��]��K�f����Q��+��i��eX��>>�8����!/��v��fo���?�?����RU���������j6n)��2 �<er/����I���?��:��G�kg���ep���i�}6�����o��, 	B�J�#���<N2�
�[�v��%��8Xk��VV���Qz�~����\��� g:���`�k�����q5�C)x���_=$L>�{=���A����
�6������>0Vk�:�,=�?��&��<�o��s�E���4��2�8<V<��2���Hf��$�����5�d���KJ��4��������wG�%G���lE��Vy�*�J�F��NP�^��1�y��X��Y���p��?��|���������u�K��:��(��;���v����(v#��_��W��|��zL��N�T��a��'-�>F�.������A�%��]���)�����w���mv=����}��kQ��S����-<���^%Vq�?�A�z ��;T�+�H�|"5�U�I�7�W�}g�6�/G^"��O�=P����p����"�*����
�K}F\;����+��������8�g��H�>�����v Q����o�4$B�F�����LRz����\�@�����[��_���a�A���������p1�a!x�
��-3eY]�	�+u���n}V�$��<�.f�[<���5�w�Q�"0\d�U�����������J�EE�|t������!Co�����I��
d�2����/���P[@���u=9�~����f-#��W���&�kz�^"T��i��X�l�v��"-�Y���7��%~�����\	�j��X`a��;h/��/^D�T(
vdyWC66�N]��:��������BU@����y�V�������D����I���/xp\��:�epm���_&��$�q_U������~{��������c�W_�^,��p^PY�������'��!1Hj�������������(|���L��3����kl?�C���P�\5������U|�(���Y�4�w���
���D~������o�R��4�E��N���I���>�o�x��PB���`��a�m
���,4�x�
+1��_J+��
���T35Hj!����i��wx��Q�.����c���=l�]RiO��(�T_m�3HT�����������:��$������?/V��^���G�l��`?x�;^�y�_P�������:��/hH�[aF\{���F�w�|����x�n�dx@�_T����7��6T��J���J�3��Z������� ��X%l��?re���/pQ)�k��$���k�$�9��1�	�����{}�131;����,��gL&r�^����
�7"��h�����|G���sT��h��#k�0hT]B���Mt���'�E�2�lDN*[���l�������}�t����T`����U��Y��=��y����z����I������D� ���y�(�"!�+	l��&�x@|GL��(�u��x4�>��Rv3P��FT
�IB��#�hsk�2�s4��r�������������b��E�7��D��h�\���O�r��2�D)���)�{ND���"���9$�
�[�&����m5��=d���[������A���df`��6lD-��{vr�ve0:�K�����G���#�3mq�Z���#t6�*����jC����n�![��!;<��<y��U���������Cd���C��	6vN��)�p���`���	�I���T
|?����Q�m��hj0�!���������0_}@�� ��^���x�Q
��^D6Z�����+5���q�i�r�������x���z��,7�h

m�"RpQ	��p��y���������x���;V��p2�o��XW>�&T6q�L��Qks�F1�oh��:3a�������0�<

�������'�"ME'�d�|4�I�T��L�.
`E���Jd�i��� HmO�1�N�<�b�sE��L2����@u0�g���7���������e��}���5"j��5����.��PbAe
��r�Q��d4�x�i���i.a0�8�c�7��)����#����eq���3�}mW���'!���h����:g�=U��0'���m�?eP�;�7�����'���`nH	m����u��*9�����g���x��e%u7P�s1���FWV
�	�r��
SDm��.	��^w`j���;b��P�)���Mu��Y���4�C�����{{��B���~t�(���D�����2��%��^����3�&T6�
�]������������:����\�(��?�LX6;�_D���J8i����_��U��m���]�#��B(3 4�!�;vP �%��q��F���{���O>HMT@��x=��3x����\���R��L�)�������H;y��^J�4g���*3�i����MSht\�~�)��p57���K�vn�
�j��Y}���:������cg���/�+	����x�hHH�*���y2�T�T/8h?���
V(���/P�<?�m��Q|�(�&v��U�Y{�IB�<���3k[=�GE��z�sr0[��Y"?�ZUq��8De)�*xgH��w3(u��������X�0�12��Q������s#�����x����,��`�I�
T��\Nx���m���� a���\��}C��a�Q|���o�����TUI���e���TZ���=�8\ X�!�+r�vk
���:,�*M�b@�;�
<H��<�@����?8�^w6��l�T`�l�������d�e{�W6	~Q�{(�w�2��9s3)��9r<��BG�*��N�<��'#�G����oEZ$$�N0��v.�
{���!g����Uxz�N�_!�0�$�����-� m.��Y���a�X����Je���>��
�=���I+��F�Be����B���1����$9(��u/�����@��^�)���
�!�1����(>��|��������2�Pys�59�@bY���;�Jb!i���%�p��t��s�cP������S��)���?��1���a�����������V��]�+��f���43�;����AN��Z^$u|�$�W��"�����n��������&�zC���``iw
���;l�����V\����g+.[��������N��t�<�������$��H��Eb!�D�_I�dRm�T|@�ye����?����������������YDw�V�M�r
�2a�������m�R���c`F��U�� �W����T
n*�q�B�T%Py\`�U������}5��I��g���9$2l����70��#Vl[
��e�3se�?@E'��������������`n���.��5���#6�����'���;gU�6��.\�}�?�E�(m����W�q1�b�Ae�i�������bg;(�[�����l��3�bo���.��:S�i�| w2=6;f��	�M���+< �������q��BS����W��u<�\C�����?}�*��o�d���|%�P�8�x��J����SmF�O��tR�~�3b����i������-��Wrwn��
�,�y�����2�@�vk4e���D�0��@�D����8�H�����?�\nT%�����+��������7uW ������qL1�x#sM� �]�`r	v&��D;3�k�-���M��C	������e����9zR�B�Z?����R�a�"9L�h�PrH�"�e��)[�Tr�mQ����f]]L(O�������wT&�2QSC�|���\Dfg���o]���-r�<�GE����y��q&�;\L�p�llsa����m��=���������S�6�����~�
y�M���R�|n��������xb>V�f����Be
d�;���D�*?��A��/�������.S�?r}c���������EOr���f��x�����oo���i�����i(=V�k�S���v��[��EmW���U�7�jobks�:���o���W���?U�<�p���`���l��	Ph��,�����d%����eE���������*a����F������%:h�R���vL�5�v�~����~(������m���y��2h*�"z	$3$�����q�-�����#����N�&�~����/��o����NUTT'�����E���6V�5�N�@��Z�jo�gTD^��/b�M�5��4��cv�$�*��������_WP���V)\(xhQ��L)��,��a:)W����\�M���
Oz�����}b-�*7�E�x��D�w��
/a#��A�U:�]a��������-�	gf����Cb���"%�7���j5e������d�4��R�
Tt�C`��e�������C�����Wlc����rk2���d�} Eg����O&gIB��(�+	.�!(����j���#y�k�b�P9�[���s
V�$�_�$jB��}�L2�X�3����T��+��t�(H�W�A��Me����P�����
}-=���*��> ��9��A��*!��z�/���Jm(�/�xS)��,����-7g���T���bN��}W�
P�����O���R���0fI����*�@��V�����hk^f�o������������!�)�;n��sRP��������r�J��/�
����e_�,%*���'`6�*Q3x7:�A�e}����|T�B��~Kv{��"�$����<'��N���hcG� |9S�b��(zj���+��z�6�e�Y�q�B���!vc�+i�F'+�>/�����?t_�t\�d�E������L�{��J����|�����[�
�7��8�����S����|��{o������K����}�����L�����m����|��R�*�`���/5�L�xj��-��s9<�������G,��UoE�������G�|Wmf����,�����S�,)���B�y�@\
��4$����jr����

@���J[x��+��7{��evP(�S�7�������,�fc�&��S�����_�[�m�����Y*{���Jx�����hl[��J��27%j}�[�({���I,���f����Ee)q��o���\�b�����`1��-uo�
o���i��Z$J���{�M�HAv2�����/w�;2y�>�N^�X-@�TI��^Q,�I�s�B��]u���|��k��F�\�/��w2"�M"�mm�����Z�����T�>�_��y��(-]��>����M�G$3��� ���������pQ�S���~���i�x����]��������_$�����>4�N���'�9{	����!C���B��r��������R�;���q����������4M�ck�h7��/��|Tt��q��#��8��;��![���D[Y�u
|@a�fI�A��Z��X��w��E�������r��:w��u��Q���F��������
�o��n��pX��p�&5!��]&{&
�Q�Y���������@��H���2�WH8L�%Q�����	.��@��$B��@�o�������{A�����?�I��2l��!�)��}S|n+e��eP�����*�!��
TtN�H�'��6(���+�r��}
\T����$tE�euO<����6��{��.���A�w6���B*���Z�sx�k�vZ�s���v#�1�d��y�U��
���P��������.���P*����<��2m#����w����z@�!|DO���a��4$�R�D��S�
��������/6P�9>��w������~(���h#�}D��Z{%3]P����c������}
���L��E�J��J�`��/ !0h���^C�:���������h`M�<8�A���!t��*0�;I���O��r���"�$��\+��P���Ar��.�"��<_�2����5.�x�z�7-�dk
	�
V�Y^l'Z�HQ�]*��/�* 8/�L���xP�����L�N�
(~�w���mP��60��|
Y2�b�.�$(oMm|{������p\9/��y����O$�������������3t,%1;���]Z�7��P��M����������RpQ��y0e��T%\h�v_���X��
|.[�}�>�}�_j���N�[r�������i�A,'���Y�������d�C�������e�V����i���o?�^�S�,P�l@�>By��
���X~����|S�|�+�T���I��
�-���CX+���0����$��*z7����3ZY{~��o��a{kB�l7X�����Tv�*���Jy\�
%�V�6wuHWU���V����];���%E�����4)B��������N2���9 	��8�h�!�z���7P���o��i�������C�f0�Lu���������T����NG��0�������B9��O�70����?rf[=WT,��/��^]g{���@��?.�)iG���PT{(��C�4L� 3�=�#J�al���7�`�7�{_�:�J%�*7��@���B�q���s��%��D�O��)����=I%�D��Jm(����,��RW*��P��`i	�2�bo6S���PU��r��g��a��BR�M��vG��N=���i�
������2;�$�}���W��::\��;�,&�#�69^��s@��n:�;E<�Q���Z��]�:$��u\�~���������#��1�������c�t8���w/�UBX�
�$2l�����v��� ����#����v9�~<��i�/8\X&W�q��7���=PSr%����������W��%�)4����v>.��V���0���'r�*�[X�6����q���,8�F[PYs?5��$�����yH@�?H:�!x�:uB'KI�_�{J,J�� ��G��g�i��-���P�_�r��Q�	���M�l����&+a��W��6��Me�i[��q��	O�M����qOrE�|T�dq"�f5:�+��c1C� j��fE�a�������M���!��m�0"�.k���aZiCv���6�K�S=+>�)�P*>0z�DT8�M=*��w�+�]M'	��"Qh�(��Z�Ks��h��'��"r�!�H<���1^�u��}SA�7��� ����J��K�
���W�v�Z �����@���P4S&�j��J��^��"��P�,{_�����m���T��}�������g��*U\zSl���<[�<��t�����M��-�W���Ea���2�R>�����;y�/��o��lw�l[g�?PE��^���� e���@����Uq��cCD�(�|��J�{��nx�r�[`�A�f��u�*H����R7*��&�����['{WWT��������
��U�m�yWpM��mj�������u4K&��.'�Z=�����8J����pH�&*�'f���75I�?��F�
��>�����JLGe��?(�����@���	����sC�M�E�@�����o5Hf�<�&�)&�E
��V��j�t��e5�+O��������HZ�T:�*�Zh��E�\���8��di~�qb0�J�-_�J$�r���R
Z����T�cL�������Y�����N����?}�rT��1gV���w���������l�t�Gl��s���-�4
�4z@���x�7=�CYJF�Tb�R|�v'��{�E�~Y{��n�'��{EY�4n�5���H����"������!1/6gU.E5b�H�a9���U�/Q��\-W>���y
dm� ��O`c������3`S{����b��=Y
���7��\�o�H����@Cb���,y(�d��LW���k2��N�����W������Q�- ���h��	�\�g]��>�5��"�X�+0��g����OJ�9�a�����>�����H�N0$oh�||�rr�v��H/0�'���/PI-���i���~�|v�I��{������2���S�T���`����!���� )�"0A=���A�y%Nq�"��g�//��D<��3�>c��A�09o�%hI���
y�_�
���Q�����������^Py%x�t!���m! Qp�����o�{������������
)=��l�^�#���q��Mp1�<���A'������V�%� �V%����H�Q��,R�8$�F�!1/I����[S�w�4
|�i�E=����)1��#4��q��.*y��~$JiL��a(v��5=�-�2e�����a�ZE������}![����8���E��g���UEd�����6k�&1Hh�"x	�NXV�1����B����M��5~8��]}���
� ��3��s�X�;0���*x��Jy��o����H�3���~�@�a.����AC�ly�Q���y�zt����������Ax9=�/h�(�[�"m��,S�<`������y�3&��9��f2�Ng���	���������]m>/ �Tn��Z� ��k�,e�R��d�Z���LV���"-!�"������"=�����g��^�	����@x{-�~z��a�A���(���YT;Y}�`�OK5p�>m���'%�g$XkA@�������=��Z����T��d{���b1�4��k����,o6����|�I_��:�
,�������IAj�'\�">����e=�{i�.T��M�V���P�v��?H:�!x�>�H�������(=�}��LA���6��lm��u�N';����R�0�)�#p� F7t[�l0� ^���]�<P����$"�'g�L�,��/�7�k��d���y�5����9���6X�G��Z���#4x3p������^[���GX24��*<�4�R�tVg��o ����;i>��X(�q�P"�T�� HJ��������_��Vy��:o���7g����&N�%vh�@���7�Hj����eg$��������&��u������>w�����E�,������f�6����m����������E|���|O3����Xx3p���%�c;_t�qMv(z[Mn1���:_*����yQ
���I���t6����^jO� ��X�=H���YL���_6<�Jlr���C�w?��05(�"��OH��g~���\&s:R!����P���
F��~�:���*���J�	h��qU������>�P�O��L�������/+'��*�~��B�Pq\����R���8Qb/M}2���?+�yK���)�F������>���E;�!��2:B��
�����G�+bZ��`��B K����]|F=�AY�l�����{.s%�S��"�uR}`����N"W/H�{g��<�~�m�v��Cb��D�_I�>R|�0�T�P��/��� �j������{��
���/�P�t���
���<l��.Z�>> ./$��~�~��� O������m�=TM%^D/����(�}aW����F��w�qFV�J����"�W�/C���oX��P]%b�
	e�MM�V�����NA�����"h~�I����q�AS��}����]���F�aC5���0�[�<���T�e4��~�B3J�o����EoLp�,��K&�Xw�Nm��z�M�S������}L<8y T:nSYh�M�G5����z���eJ|�h����n)�	x�*���Q���� ���^xx�� ��^��=�_D��*.l���l��t�`!�K����%��3PU��C��P�x=�[/,6��XJL	���D0����T����l�$�VM%Q$-�����@\���V;`;��O�w2y����Hx����$�{W�C����4$��m^"n+D^]�[!��A���k(��gN�a%�D�������/�duH(��*�3!8�:t���H7^x���__�mizJC�W3-} 	;'Z�9#Je�4s�<>�H����d��> ��b�a��mT
��'���0���S�R�E%<�tVpqo*E����P����1�oH?PU�'�u}{���-&/�����uVp���,J�����������k�~�
��` v����oT��)
���/�?�z���uweS�)�W���T4��_�J��3��k��b�zQ	�� <��<l���S2�Q-�`�x����,�I
���slJiu*���t����%�XE�Jp��x�:�:���J��2�J�����3�?������5��?O��{�:��M�3������7;*~#���)��7�"-��Z��L�'���T��������2�U�?��F��w���hH���)���@RJ�8����������L��?Bpvi�X�
�&t[5t&��H[�#��CI�,AD���Q�M�����W�8�E�hj��q��q��������6:�.�����"���C��v�h]K,���T�E�R�B�i�yA6������^���T� �|���HNd��@Cb�lA�C�u"���O+��J�AW�T���y����5�$-��_Ip�<1kU���wn����-u�����	gOv���"r���"��	���![e��o#�x Fa�p�� ~ U �+��P�^��|
�����B*�p��QOa#=A�8���?N���TvK���E	t���!Q�������H�-G%?�����>8�Q���4�]j�;���>/dH~����>URy��������f�*�D9���#�����7�p8hP�qu<T���w�
�|q@]���-g��������������������7�����K�h������~�Hsg������$k�x������e�{)��������A�3��&GC	����B��;�1H(_����3.N�M�|�4��Q@T;���}l`�(|C�)&����Y����bu�O�o����o�����H�g�H1c�<�'x�99�D�@�a~7]�m�7�|�����������</v((��2Q�#F[t���5^�)�w4���j��0=���G����5�Qs������� ,Y?����i?_,�����@c�*��G����5��n;%2��RA�F0~����C�K����k�~�rm��w�C�D�����Z��P���\��� b�0ig#��u

�w8]I�Y�T���� �f4i`����^h���y��Z��^4v�[��������WG�J<�����F�U�`u����E%�Kr�w�����������I������:�������}Qud�^��
W��� Ng�U�9}b��DTI����.$&�{���ym����D��A���p�9��X�Y�H���">��m�~��]���.����_��a�ZJ�FB��K�����l����>�[�>��m��*��n���[*k�5T���{Ly<.Ch{����I������.��z�o�<��3�!�J{?�`�H���Eb��D��c�OWp'X\�}���!��P�e������0���>h��,4u@�����[������= ��y�NT;/�g�i�+P�I�v?��ZfT'��(�&v�|��=�\��=�E�����T�
(6sYWN�������(-�����k6�2��=9:�y<75~���E�)C�	�������X����qU�>�B�/�#2iT����X��ZN|��x/%��2��+6<���e�J���Jx�'~���1W���<Q���
���Q��+{���o�~��
6���i���'��%,��yD����.���P�;�2���"���I�1����O���x��(�_��4�8�Rl�I�D���g�X��&�l�@�I��p_��Q������|���\��[���z���"r���r�]�X�
<��}�-���n*��,��C�U��{c����M]y��/���h:P�`��q5�M?���
-�h��\,oUh�S��Xo>*z��w��P_�����0�w�f#^�����'�%���y��=bCQy����F����X�C��U�V���fY�u�Ya�6%��N����)}d%Q�.K�]�NF��
b����lN	&���z������]�kt)���m!�]X�!%"��������E��X���d��M�����q����q�����.��C�:�=�Q�2�317��I�$���
<Q6��������u��)�����A�k��c���LU�
�o7��������V�7���.�J��.x@[Vs������M�������a�AR���y�"��U�'����_q��u+�q��_�
���,�9���h�P�R)�k������lg
x`^P��<�@U��>= ����(��/�)�Q=�2��.���74�k���������R�j��{(��C�'���Q�����y��-^(���2�ZnY5�[���lc���y��bb=/�"v�w�����*v����7TX������kk>��~��fP��|�HJ�]�����3cg���L)��JP�A���K��n��e�U$v�/�6��?H� ���[x@'��5��_�'7
/�����b��x	���b!q����3��3VX�[�G����O�bnA��>k4����g_�&�4em�A^"����Kd����<�o
Pb���������
D�����3��:��!6(H�(_
��)aB�'��i�x�?�-�?�L~A������h����H��/�g)�*�S����3\X�s����I8�����EP�>�@�a��'����T��,�8�R���&�V����2+;x����9�� {�P"�F)���rv+�����tO}�(�� �bL�sX���tC{|pK�"uX���E��R�|$B��;��0]�*��%�+�m
B���K�?h���g����i�%��#?�9a0����4�e���6�"��*d��6|e,��;�X��0uXE���A>4r���)��,�����sB2�9��������)�<�-�*�V*�q����3p�=�`#�p���j��������[�x��T��	�f_D\T��J���j����@5��N]�����;�����5�j�^n��C��>`�i !xl	h	����@�8$�$om�Q���^D�P	�� u#��?����>iP�����x/�:�>�J��L��t6�j(���w?���?�O�8i��`�A>X9#T�-��h,K \�U��
,~H��������F���n���H
�
F?~��q��F�r�z��m��H��&k��z�m&��}��I|,[���
�B$E��@DM1����;�����������T��k�4���7������m�+�w�
x�V�����&�����;T��u����3��	���1/l���;�I����F���!��)�������%��/���V�4$��eh����$��E�k�������_��QP����Jb,q^;�����OW;�D�����z6\X�v
����#N�OV���Z$e����N�����R�~^�/���,��{��0s��6��,�f�����E����.Y���
�M��>n���/�`+��5>��7�#d"	1����Q��TK�A�����i(���h����E�?����P��48ibE�Z��}4X`�������Y>�!a��k��Bo�>
)�$X��[�X�7_�b	�L�Y�Ls�>�@@_�q����
����C� B���)!��H��!�\������\f�U~&�Z�)���df����������I�����Jf�"�i��b;�.��|����.�zb��%���J8\-���i���Z��E0[,����{Q��C�Z��G|p��#� _4�Z=�L�c�+��C�����(i�)�"{�S�sN�Y@v�N!���� z=�E��2�X�MJ�n�}-9�����0���[�Qf�Z�>!|Pe�`���6X�������G�q�F�Ms��^�py{�`�-os7�|9���.��/��Dw�_ul�s� x3P�0�!��_tH�����Y����>��7������\���]U	b�U�I��3P�&)�82�Zy>x;��F&+d>Wt?���&�"�������"7���'v2�T���*�/�B{����)�D�?��K��V�����
oB�mb��Qv4!�7)�E�R���XLB�l~�PM�]�d:01:oBesMJ�SV-MZ��c ��
S�NWn@�f����*���xt������%��N���#
�����m3�9�7��>C���|H8�!Q$4������������XU��E1��@�����_$�U�Ar�����z��:E�K�l�s!�o)�:��G�~`%�l�x^�T�7�"4����T������,
E�[e�Q+��'�	��*p"�pe;2'i�P*+���<n�@�U	��_�R@���b4N�R�k�U	����4f�"��4����3D���&� �v{�������7D��
l[��j���W��qF6�����P�Ee���P�Z���p"���A��>y���������
�z�\T�<������n;�U���>(QO�"���I��������|��jk�xz��s�mS���������}��3^�NS�_p�d�D���d@tE�@��
��-�2�"-�\�:;X0�����o)�}�lZ3X���-�E�C�,Z�U��z���8;��X�s!�Ek*Z�����"H;���@���^F�q��*��T`CqhI|5�Ie-�����y�����Oy,���D
�K��%�����5��/U�{2?�]C�'}�1_��i���cJ��2�=�!|G>���}�p��beo��������N�y��/0qG[>r�Pq��J���M�l_��T*{��U#6'���`�N,�'��z��t����G��~������z~m���W*���C�x��D�6H.�B�e�G���&��"��^��
_�P)�{d���_W�x�r~]�5���]�*X����4�!dLo:oB���)�>eFdf���#G�L����4���(������ht\m�B����I�xb\�rf�x�V��Qb��)�6��4`+@���F
q��g���,�/�/o002�����^���^3X�r���yL�� ��4$J�V�����CyQr���j�<��������?�9��f��p�(/��:<����Xu�����K�D�cE�vPHv�sC8�Awm��������$�6-}<�j��+������~�r��K�}d�"9��K�H�|��� �^"�\!��"�Y]���?�.�]�Hs�P�!1HC����s����2g��Q�:�2���.�������?�?���C�j}n�5�8���c�,���������Y�37q��{�(��^��uU�On��y�|�C�����1�t����>�����s�V�y�E�����TK���n�]��>����9'�yOyT����3	��q��M<4/&�J����8�Jcca���W�S���������Y|'�}ee��k�>@�K������������C�4n1���x0~A�4�O3.���FC����M�y��`G��F��������n�X�f;~�rG�����FT�]���#��s����c���Xz+~��T���@��������K�Z�������wn��r�7�Y��l�&�,7���|���7����!�w:��'>�7!l�T(�8�I��1���EN���`���=�:lpQ)������5���{� �����������E�����BC�%$��SnjAO�=Y�1�j���i	}�2����E+Y�E��$�h-�Q���u�I-\>��z�XR�~�r�/3D�sJhr�R'�������H����E��T�C����z@A�*�V~<�XJ����B���C����`a`}��x����#�������B������������m"����S�����*����_!�����c1$��Mb�������8sm���M�����i�_L'o����_�*0W�aX�������d^���>hv��Y�r	�{��o9��s��*k ��q��T�'?^D�~xh�(H��p���)�1��N&�D��u�d���W_�5�4�*�p��:�j��D�V���VX�\��(g��=�TQ�tX��4���dC�vc~�����M�]s�^����4���_ c��Mg`M!���8k��q}�Vkx��Q|�^�K?��vur.�����^�FM-�(
�h]��c�����^6��T��8�_���F���-�}�_���7�4P9�*z���B#���5�[c��7���,��r���� a.���H=$$5m��_��MAl7"�_�Q���<X�l�9�
���_��q�A4L��C�u�W/�$� =��X^���_.�
��Y{����L���^T��/���@��S��EO��w�3�hg�`&��h�w�K�F(Ue��L��G�G�:�E)��I`���.�m�=T�
V���|%��6���/�?���A�h�������#FJ���5��O������l<�A
�>o�������O�N0PE"�m2V[
 Bg������*�����
��4������
�;~�{����@_VGg������r�x� M����2h*�����m�L�<< ��)]�8L����q�Z1������(�!��J���Z��.���P*�oh����DRk��7T+$�$�T�&3��4�|S��Me������q�����n�Z��G;�=*X���>�	���vn����I�A��{�w���>����-�f���"r~#�-���z^�0�pXt��CB��������d����@����^��y�m��X����0�2h��
�XH�������&B�
��2����X_R�7����T��>��85��eF~)����=�N�Z��5���$���M��?��]�aA��*���Ic���0U�T��/�X���V�^`��
���1|p��;:U���l�����q��W
6����A�z�
y��;�j���X�@������H�S>wL�a�l��PpS�pO�D~o�H��ya^y�F�|�M1v�9�����T��j�~����������F�����wn�B��s�:N�K2���(T��*s�����K�J&|���'�;�e��j`���?�s 4f�"����C�����_5H�����$��������&�@��B���j9o�t�PXH8�K/C�G�+3����i�!��.���K����hK���}r����W�N��=�!�=q,������ivN��.d����w��_�`C�?�;|����S��{�����oC���"V9�.4����J����9)���(�z$��c��Mb�4�C�>�j�Lg!�������Zk���r����^�]����12���8Wn�����X��h���5n^zh�|�2����B��#$��Q`�4���b��D�a�2/��	��������
���������k��p�V ����"��Ti:K����Ret�o?�������3���B��Ug��,398�o�;��������@��o�gQE���>H�^��R����-v���x����9��)�h$ov9Y�-�k��$�[$B�*>w�t0�}�����+Z�l��K}�@$u����HdX��J�7b��sF�4�I�9��4������4+�$2P-�7I`_��`��
� )�"5!�S��1�v�����F��_���i��R�/���#�� ���
�%9��>A�V !������
T�d�c����QS�6�2���������l<|E4����H������j.����-%��A�r�pa�26O^<��jY����/��.T�Rt��$V�7s����kr�\���@��������D�t_`������+�^���Jx��������Y�
���;�,��qpWRd�$�Cy�R���RzI�G�$HJ��m���^���l�Vu[Py�G>f.$��^$I�����^Sl��J�P���SI�z����[��m�(�w�n���&�(~��l�����06d;k����:+�a��������df`"=�A��l����zx;(z|�O��g��
V@����y��X=�����K�	r��Zo����X������r(�fse��`��3�$��Je������� �7O`x<r���D�S�.x6P��q���PhH���"jv��5T���d_�g{l�d�����6��{d���g��:������ 1"�����)�W�S)�9�����`6H#�#��g�Mg�E�l�I����������Y��~�{\�#�xAe�o���V�"�����p�~���9�F+n`�a)<�_4���,U��JnR���-�L�7��\��<H[��VS���{:�_���E�\�J���H1�"O���������%�p�%_�#J��(�0���������p���������FT��K���K�$1����-�ay:��"���$��p���j�%BR
D*���'P�O*n��Dj�{�S�����,���K����@�3�0���^���\E���E��8/�N$J����{�26�u������.��.3������`��!�*4R���:X���>���-&�#�-!>�����:�n�x��k���P������^�e���<u��<��t����L�����.�����v��$_\Q�`:,&= ���y��5o�S0��T�7������*	V
���d�8�/���	���/���n����p���Mg�M�lF������3&%�	���h%��������x?����9���!�" �����;���M������
�
�a�{\��g��\~u���hH��5���O#�lfe0cKJ����T�UT�����C���U��6xh^�O{m4�wu����G_�/I�8���!��@y����K�x�q��*�y��lBe^����ry���!	�����M��W�F�|��E>���.Q}z���>nJ�Q�4����?50+���>"������B��� A�^����Q��I�wS�4��9�@36[ ~]�����w��c�Ce�B%u	�	������E��]DN��]����a�8	���k�
������0:+�b�0<~�%^�n�L�tc�����j�P��.l{�%.�������������Z4���}�c��
�5c�����h{��*�����A�[b�X�����EX����Ztw�@e�o��Q$�%}o(���a��C	cq�(5�FIB�E���
��=����
V����/O�qo*��/�gO�D�X=_����5};'�����	a�$�"�w~�+��=pg;P����P�	��VS����#��Q��Gd�s]<`�%y_E�G����/4����_��H��bH�N�|`���
�2�_�&L%�H�n�xX�a�� 
��?H�"��#+�9R|.�
_�o��e��*>�����a_�ER�E��4E�>�1�_aC+y/������l��*P�\����>N|������>P��L9y���J����
���]���3H��1��WM����A��P]	qo/EV�Dx�*�+a �p�N�q��/H$��a9�:5$2�*@b��D��O��7-V���Q��WD���S�I�i���Q���_i���A���[5�LZ��Tc)��.���3����IoZ������Y�G����
F��'5*�7������)�3*_|��t��U	��#g��@(����8y�\ZA�<�����c)}�mN�;C����?I�Rq�u�C4|�����-*k� (�z`�#��&P����{���r_PUP2o&����la�`�����}�E��
��#����N)/`�,�l�*���������q�AS��KE�5rxW5��gCr��V* �������S��N�%��j��d�5T��~Q��W�7�`�,���u���X��$�X�6�)�
�2���/4��,5&����B������������{����!��w$uc����E%J��:�ovn��	���uk���4�w�
�9s�#+���k����� b��\�P�-��"�g�^��b�~5��s������#�B��6��*�m_T��Jh��BT���~1j���fb��=���h���5�C������Bet6,���y��u	;����y�e���-�9T$T�Z����U���*��>�:
��T-���5���3`\	�"�Z~��_J�B�]����g�i��cU�#_����������U��?1T��@�LSA�M�<nTH�������+����������������?>6s!�a5$I+l�����^���_���V��y�����z�nx�*1�������N-�1U��pE�2:H�"��z({���������|����3=)�����U*W5^hj���W%�:fy������A���?,
'B�s�`���@e� (�"t.x��v1	�������&�AT��GuL�@E�����vDA��L�����?��I�i�C�T�X^���Z���K��V����2�a���z�$����h������'����8)����[��T�
gL��������-P�]�LzN����~|J_�m���_D���[���L��6��!�w>56��0�;5��!����p��\���	P+��z!>���>]��@�,d����AbX��$�������&���9g6���l�+o��]%���=����B|oN^��$��H��A��L�ms�m�D�I����N&�k������*���Q�/����?H�Pl���U
�b����?�b�Y)�C���a������h{���2�������
[attachment: 1-page PDF, 846x594 pt, created 2017-03-14 with Mac OS X 10.12.1 Quartz PDFContext; binary stream content omitted]
#166Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Amit Kapila (#162)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Mar 23, 2017 at 7:53 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Thu, Mar 23, 2017 at 3:44 PM, Pavan Deolasee

Yes, this is a very fair point. The way I proposed to address this
upthread is by introducing a set of threshold/scale GUCs specific to
WARM. So users can control when to invoke WARM cleanup. Only if the
WARM cleanup is required, we do 2 index scans. Otherwise vacuum will
work the way it works today without any additional overhead.

I am not sure on what basis a user can set such parameters; it will be
quite difficult to tune them. I think the point is that whatever
threshold we keep, once that is crossed, it will perform two scans
for all the indexes.

Well, that applies even to vacuum parameters, no? The general sense I've
got here is that we're ok to push some work into the background if it
helps the real-time queries, and I kinda agree with that. If WARM
improves things in a significant manner even with this additional
maintenance work, it's still worth doing.

Having said that, I see many ways we can improve on this later. For
example, we can track information elsewhere about tuples which may have
received WARM updates (I think it will need to be a per-index bitmap or
so) and use that to do WARM chain conversion in a single index pass. But
this is clearly not PG 10 material.

IIUC, this conversion of WARM chains is
required so that future updates can be WARM, or is there any other
reason? I see this as a big penalty for future updates.

It's also necessary for index-only scans. But I don't see this as a big
penalty for future updates, because if there are indeed significant WARM
updates then not preparing for future updates will result in write
amplification, something we are trying to solve here and something which
seems to be showing good gains.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#167Amit Kapila
amit.kapila16@gmail.com
In reply to: Pavan Deolasee (#166)
Re: Patch: Write Amplification Reduction Method (WARM)

On Fri, Mar 24, 2017 at 12:25 AM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Thu, Mar 23, 2017 at 7:53 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

I am not sure on what basis a user can set such parameters; it will be
quite difficult to tune them. I think the point is that whatever
threshold we keep, once that is crossed, it will perform two scans
for all the indexes.

Well, that applies even to vacuum parameters, no?

I don't know how much we can directly compare the usability of the new
parameters you are proposing here to existing parameters.

The general sense I've got
here is that we're ok to push some work into the background if it helps
the real-time queries, and I kinda agree with that.

I don't think we can define this work as "some" work; it can be a lot
of work depending on the number of indexes. Also, I think in some
cases it will generate maintenance work without generating any benefit.
For example, when there is only one index on a table and the updates
are to that index's column.

Having said that, I see many ways we can improve on this later. Like we can
track somewhere else information about tuples which may have received WARM
updates (I think it will need to be a per-index bitmap or so) and use that
to do WARM chain conversion in a single index pass.

Sure, if we have some way to do it in a single pass, or to do most of
the work in the foreground process (like the dead-marking idea we have
for indexes), then it would be better.

But this is clearly not
PG 10 material.

I don't see much discussion about this aspect of the patch, so I am not
sure if it is acceptable to increase the cost of vacuum. Now, I don't
know if your idea of GUCs makes it such that the additional cost will
occur seldom and the additional pass has a minimal impact, which would
make it acceptable.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#168Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Amit Kapila (#167)
Re: Patch: Write Amplification Reduction Method (WARM)

On Fri, Mar 24, 2017 at 4:04 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Fri, Mar 24, 2017 at 12:25 AM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Thu, Mar 23, 2017 at 7:53 PM, Amit Kapila <amit.kapila16@gmail.com>

The general sense I've got
here is that we're ok to push some work into the background if it helps
the real-time queries, and I kinda agree with that.

I don't think we can define this work as "some" work; it can be a lot
of work depending on the number of indexes. Also, I think in some
cases it will generate maintenance work without generating any benefit.
For example, when there is only one index on a table and the updates
are to that index's column.

That's a fair point. I think we can address this though. At the end of
the first index scan we would know how many WARM pointers the index has
and whether it's worth doing a second scan. For the case you mentioned,
we will do a second scan just on that one index, skip it on all other
indexes, and still achieve the same result. On the other hand, if one
index receives many updates and other indexes are rarely updated, then
we might leave a few WARM chains behind and won't be able to do
index-only scans on those pages. But given the premise that other
indexes are receiving rare updates, it may not be a problem. Note: the
code is not currently written that way, but it should be a fairly small
change.

The other thing that we didn't talk about is that vacuum will need to
track dead tuples and WARM candidate chains separately, which increases
memory overhead. So for very large tables, and for the same amount of
maintenance_work_mem, one round of vacuum will be able to clean fewer
pages. We could work out a more compact representation, but that's not
done currently.

But this is clearly not
PG 10 material.

I don't see much discussion about this aspect of the patch, so I am not
sure if it is acceptable to increase the cost of vacuum. Now, I don't
know if your idea of GUCs makes it such that the additional cost will
occur seldom and the additional pass has a minimal impact, which would
make it acceptable.

Yeah, I agree. I'm trying to schedule some more benchmarks, but any help is
appreciated.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#169Amit Kapila
amit.kapila16@gmail.com
In reply to: Pavan Deolasee (#159)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Mar 23, 2017 at 3:54 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

Thanks Amit. v19 addresses some of the comments below.

On Thu, Mar 23, 2017 at 10:28 AM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Wed, Mar 22, 2017 at 4:06 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Tue, Mar 21, 2017 at 6:47 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

5.
+btrecheck(Relation indexRel, IndexInfo *indexInfo, IndexTuple indexTuple,
+ Relation heapRel, HeapTuple heapTuple)
{
..
+ if (!datumIsEqual(values[i - 1], indxvalue, att->attbyval,
+ att->attlen))
..
}

Will this work if the index is using non-default collation?

Not sure I understand that. Can you please elaborate?

I was worried about the case where the index is created with a
non-default collation: will datumIsEqual() suffice? Now, thinking about
it again, I think it will, because in the index tuple we are storing
the value as it is in the heap tuple. However, today it occurred to me:
how will this work for toasted index values (index value >
TOAST_INDEX_TARGET)? It is mentioned on top of datumIsEqual() that it
probably won't work for toasted values. Have you considered that
point?

6.
+++ b/src/backend/access/nbtree/nbtxlog.c
@@ -390,83 +390,9 @@ btree_xlog_vacuum(XLogReaderState *record)
-#ifdef UNUSED
xl_btree_vacuum *xlrec = (xl_btree_vacuum *) XLogRecGetData(record);

/*
- * This section of code is thought to be no longer needed, after analysis
- * of the calling paths. It is retained to allow the code to be
reinstated
- * if a flaw is revealed in that thinking.
- *
..

Why does this patch need to remove the above code under #ifdef UNUSED?

Yeah, it isn't strictly necessary. But that dead code was getting in the
way, and hence I decided to strip it out. I can put it back if it's an
issue, or remove it as a separate commit first.

I think it is better to keep unrelated changes out of the patch.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#170Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Amit Kapila (#169)
Re: Patch: Write Amplification Reduction Method (WARM)

On Fri, Mar 24, 2017 at 6:46 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

I was worried about the case where the index is created with a
non-default collation: will datumIsEqual() suffice? Now, thinking about
it again, I think it will, because in the index tuple we are storing
the value as it is in the heap tuple. However, today it occurred to me:
how will this work for toasted index values (index value >
TOAST_INDEX_TARGET)? It is mentioned on top of datumIsEqual() that it
probably won't work for toasted values. Have you considered that
point?

No, I haven't, and thanks for bringing that up. And now that I think more
about it and see the code, I think the naive way of just comparing an
index attribute value against the heap value is probably wrong. The
TOAST_INDEX_TARGET example is one such case, but I wonder if there are
other varlena attributes that we might store differently in heap and
index. For example, index_form_tuple() -> heap_fill_tuple seems to do
some churning for varlena, and it's not clear to me if index_getattr
will return values which are binary comparable to the heap values. I
wonder if calling index_form_tuple on the heap values, fetching
attributes via index_getattr on both index tuples and then doing a
binary compare is a more robust idea. Or maybe that's just duplicating
effort.

While looking at this problem, it occurred to me that the assumptions made
for hash indexes are also wrong :-( Hash indexes have the same problem
that expression indexes have: a change in the heap value may not
necessarily cause a change in the hash key. If we don't detect that, we
will end up having two identical hash keys with the same TID pointer.
This will cause the duplicate key scans problem, since hashrecheck will
return true for both hash entries. That's a bummer as far as supporting
WARM for hash indexes is concerned, unless we find a way to avoid
duplicate index entries.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#171Amit Kapila
amit.kapila16@gmail.com
In reply to: Pavan Deolasee (#170)
Re: Patch: Write Amplification Reduction Method (WARM)

On Fri, Mar 24, 2017 at 11:49 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Fri, Mar 24, 2017 at 6:46 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

I was worried about the case where the index is created with a non-default
collation: will datumIsEqual() suffice the need? Now again
thinking about it, I think it will because in the index tuple we are
storing the value as in heap tuple. However today it occurred to me
how will this work for toasted index values (index value >
TOAST_INDEX_TARGET). It is mentioned on top of datumIsEqual() that it
probably won't work for toasted values. Have you considered that
point?

No, I haven't and thanks for bringing that up. And now that I think more
about it and see the code, I think the naive way of just comparing index
attribute value against heap values is probably wrong. The example of
TOAST_INDEX_TARGET is one such case, but I wonder if there are other varlena
attributes that we might store differently in heap and index. Like
index_form_tuple() -> heap_fill_tuple() seems to do some churning for varlenas. It's
not clear to me if index_get_attr will return the values which are binary
comparable to heap values.. I wonder if calling index_form_tuple on the heap
values, fetching attributes via index_get_attr on both index tuples and then
doing a binary compare is a more robust idea.

I am not sure how you want to binary-compare two datums; if you are
thinking of datumIsEqual(), that won't work. I think you need to use a
datatype-specific compare function, something like what we do in
_bt_compare().

Or may be that's just
duplicating efforts.

I think if we do something along the lines mentioned above, we
might not need to duplicate the effort.

While looking at this problem, it occurred to me that the assumptions made
for hash indexes are also wrong :-( Hash index has the same problem as
expression indexes have. A change in heap value may not necessarily cause a
change in the hash key. If we don't detect that, we will end up having two
identical hash keys with the same TID pointer. This will cause the
duplicate key scans problem since hashrecheck will return true for both the
hash entries. That's a bummer as far as supporting WARM for hash indexes is
concerned,

Yeah, I also think so.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#172Peter Geoghegan
pg@bowt.ie
In reply to: Amit Kapila (#171)
Re: Patch: Write Amplification Reduction Method (WARM)

On Sat, Mar 25, 2017 at 12:54 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:

I am not sure how do you want to binary compare two datums, if you are
thinking datumIsEqual(), that won't work. I think you need to use
datatype specific compare function something like what we do in
_bt_compare().

How will that interact with types like numeric, that have display
scale or similar?

--
Peter Geoghegan


#173Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Peter Geoghegan (#172)
Re: Patch: Write Amplification Reduction Method (WARM)

On Sat, 25 Mar 2017 at 11:03 PM, Peter Geoghegan <pg@bowt.ie> wrote:

On Sat, Mar 25, 2017 at 12:54 AM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

I am not sure how do you want to binary compare two datums, if you are
thinking datumIsEqual(), that won't work. I think you need to use
datatype specific compare function something like what we do in
_bt_compare().

How will that interact with types like numeric, that have display
scale or similar?

I wonder why Amit thinks that datumIsEqual won't work once we convert the
heap values to an index tuple and then fetch using index_getattr. After
all, that's how the current index tuple was constructed when it was
inserted. In fact, we must not rely on _bt_compare because that might
return a "false positive" even for two different heap binary values (I
think). To decide whether to do a WARM update or not in heap_update, we
rely only on binary comparison. Could it happen that for two different
binary heap values, we still compute the same index attribute? Even when
expression indexes are not supported?

Thanks,
Pavan


--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#174Amit Kapila
amit.kapila16@gmail.com
In reply to: Pavan Deolasee (#173)
Re: Patch: Write Amplification Reduction Method (WARM)

On Sat, Mar 25, 2017 at 11:24 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Sat, 25 Mar 2017 at 11:03 PM, Peter Geoghegan <pg@bowt.ie> wrote:

On Sat, Mar 25, 2017 at 12:54 AM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

I am not sure how do you want to binary compare two datums, if you are
thinking datumIsEqual(), that won't work. I think you need to use
datatype specific compare function something like what we do in
_bt_compare().

How will that interact with types like numeric, that have display
scale or similar?

I wonder why Amit thinks that datumIsEqual won't work once we convert the
heap values to index tuple and then fetch using index_get_attr. After all
that's how the current index tuple was constructed when it was inserted.

I think for toasted values you need to detoast before comparison, and
it seems datumIsEqual won't do that job. Am I missing something that
makes you think datumIsEqual will work in this context?

In
fact, we must not rely on _bt_compare because that might return "false
positive" even for two different heap binary values (I think).

I am not saying to rely on _bt_compare; what I was trying to hint at
was that I think we might need to use some column-type-specific
information for comparison. I am not sure at this stage what the best
way is to deal with this problem without incurring non-trivial cost.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#175Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Pavan Deolasee (#170)
7 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Fri, Mar 24, 2017 at 11:49 PM, Pavan Deolasee <pavan.deolasee@gmail.com>
wrote:

On Fri, Mar 24, 2017 at 6:46 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

I was worried about the case where the index is created with a non-default
collation: will datumIsEqual() suffice the need? Now again
thinking about it, I think it will because in the index tuple we are
storing the value as in heap tuple. However today it occurred to me
how will this work for toasted index values (index value >
TOAST_INDEX_TARGET). It is mentioned on top of datumIsEqual() that it
probably won't work for toasted values. Have you considered that
point?

No, I haven't and thanks for bringing that up. And now that I think more
about it and see the code, I think the naive way of just comparing index
attribute value against heap values is probably wrong. The example of
TOAST_INDEX_TARGET is one such case, but I wonder if there are other
varlena attributes that we might store differently in heap and index. Like
index_form_tuple() -> heap_fill_tuple() seems to do some churning for varlenas.
It's not clear to me if index_get_attr will return the values which are
binary comparable to heap values.. I wonder if calling index_form_tuple on
the heap values, fetching attributes via index_get_attr on both index
tuples and then doing a binary compare is a more robust idea. Or may be
that's just duplicating efforts.

While looking at this problem, it occurred to me that the assumptions made
for hash indexes are also wrong :-( Hash index has the same problem as
expression indexes have. A change in heap value may not necessarily cause a
change in the hash key. If we don't detect that, we will end up having two
identical hash keys with the same TID pointer. This will cause the
duplicate key scans problem since hashrecheck will return true for both the
hash entries. That's a bummer as far as supporting WARM for hash indexes is
concerned, unless we find a way to avoid duplicate index entries.

Revised patches are attached. I've added a few more regression tests that
demonstrate the problems with compressed and toasted attributes. I've now
implemented the idea of creating an index tuple from the heap values before
doing the binary comparison using datumIsEqual. This seems to work ok and I
see no reason it should not be robust. But if there are things which could
still be problematic, please let me know.

Seeing the problem that hash indexes will have, I've removed support for
them. It's probably a good decision anyway, since hash indexes are being
hacked on very actively and it might take some time for them to settle down
fully. It'll be a good idea to keep WARM away from them to avoid more
complication. I have a few ideas about how to make it work, but we can
address those later.

Other than that, I've now converted the stress-test framework used earlier
to test WARM into TAP tests, and those tests are attached too.

Finally, I've implemented complete pg_stat support for tracking the number
of WARM chains in a table. AV can use that to trigger clean-up only when
the fraction of WARM chains goes beyond a configured scale. Similarly, the
patch also adds an index-level scale factor, and cleanup is triggered on an
index only if the number of WARM pointers in the index is beyond the set
fraction. This should greatly help us avoid second index scans on indexes
which are either not updated at all or updated rarely. In the best case,
where out of N indexes only one receives updates, WARM will avoid updates
to the other N-1 indexes, and those N-1 indexes need not be scanned twice
during WARM cleanup. OTOH, if most indexes on a table receive updates, then
probably neither WARM nor cleanup will be efficient for such workloads. I
wonder if we should provide a table-level knob to turn WARM completely off
for such workloads, however rare they might be. I think this patch requires
some more work, and documentation changes are completely missing.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0005_warm_updates_v21.patch (application/octet-stream)
diff --git a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c
index f2eda67..b356e2b 100644
--- a/contrib/bloom/blutils.c
+++ b/contrib/bloom/blutils.c
@@ -142,6 +142,7 @@ blhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/contrib/bloom/blvacuum.c b/contrib/bloom/blvacuum.c
index 04abd0f..ff50361 100644
--- a/contrib/bloom/blvacuum.c
+++ b/contrib/bloom/blvacuum.c
@@ -88,7 +88,7 @@ blbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 		while (itup < itupEnd)
 		{
 			/* Do we have to delete this tuple? */
-			if (callback(&itup->heapPtr, callback_state))
+			if (callback(&itup->heapPtr, false, callback_state) == IBDCR_DELETE)
 			{
 				/* Yes; adjust count of tuples that will be left on page */
 				BloomPageGetOpaque(page)->maxoff--;
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index b22563b..b4a1465 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -116,6 +116,7 @@ brinhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/gin/ginvacuum.c b/src/backend/access/gin/ginvacuum.c
index 26c077a..46ed4fe 100644
--- a/src/backend/access/gin/ginvacuum.c
+++ b/src/backend/access/gin/ginvacuum.c
@@ -56,7 +56,8 @@ ginVacuumItemPointers(GinVacuumState *gvs, ItemPointerData *items,
 	 */
 	for (i = 0; i < nitem; i++)
 	{
-		if (gvs->callback(items + i, gvs->callback_state))
+		if (gvs->callback(items + i, false, gvs->callback_state) ==
+				IBDCR_DELETE)
 		{
 			gvs->result->tuples_removed += 1;
 			if (!tmpitems)
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c
index 6593771..843389b 100644
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -94,6 +94,7 @@ gisthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/gist/gistvacuum.c b/src/backend/access/gist/gistvacuum.c
index 77d9d12..0955db6 100644
--- a/src/backend/access/gist/gistvacuum.c
+++ b/src/backend/access/gist/gistvacuum.c
@@ -202,7 +202,8 @@ gistbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 				iid = PageGetItemId(page, i);
 				idxtuple = (IndexTuple) PageGetItem(page, iid);
 
-				if (callback(&(idxtuple->t_tid), callback_state))
+				if (callback(&(idxtuple->t_tid), false, callback_state) ==
+						IBDCR_DELETE)
 					todelete[ntodelete++] = i;
 				else
 					stats->num_index_tuples += 1;
diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c
index 34cc08f..ad56d6d 100644
--- a/src/backend/access/hash/hash.c
+++ b/src/backend/access/hash/hash.c
@@ -75,6 +75,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->ambuild = hashbuild;
 	amroutine->ambuildempty = hashbuildempty;
 	amroutine->aminsert = hashinsert;
+	amroutine->amwarminsert = NULL;
 	amroutine->ambulkdelete = hashbulkdelete;
 	amroutine->amvacuumcleanup = hashvacuumcleanup;
 	amroutine->amcanreturn = NULL;
@@ -92,6 +93,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -807,6 +809,7 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 			IndexTuple	itup;
 			Bucket		bucket;
 			bool		kill_tuple = false;
+			IndexBulkDeleteCallbackResult	result;
 
 			itup = (IndexTuple) PageGetItem(page,
 											PageGetItemId(page, offno));
@@ -816,13 +819,18 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 			 * To remove the dead tuples, we strictly want to rely on results
 			 * of callback function.  refer btvacuumpage for detailed reason.
 			 */
-			if (callback && callback(htup, callback_state))
+			if (callback)
 			{
-				kill_tuple = true;
-				if (tuples_removed)
-					*tuples_removed += 1;
+				result = callback(htup, false, callback_state);
+				if (result == IBDCR_DELETE)
+				{
+					kill_tuple = true;
+					if (tuples_removed)
+						*tuples_removed += 1;
+				}
 			}
-			else if (split_cleanup)
+
+			if (!kill_tuple && split_cleanup)
 			{
 				/* delete the tuples that are moved by split. */
 				bucket = _hash_hashkey2bucket(_hash_get_indextuple_hashkey(itup),
diff --git a/src/backend/access/hash/hashsearch.c b/src/backend/access/hash/hashsearch.c
index 2d92049..330ccc5 100644
--- a/src/backend/access/hash/hashsearch.c
+++ b/src/backend/access/hash/hashsearch.c
@@ -59,6 +59,8 @@ _hash_next(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
 	return true;
 }
 
@@ -367,6 +369,9 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
+
 	return true;
 }
 
diff --git a/src/backend/access/heap/README.WARM b/src/backend/access/heap/README.WARM
new file mode 100644
index 0000000..7c93a70
--- /dev/null
+++ b/src/backend/access/heap/README.WARM
@@ -0,0 +1,308 @@
+src/backend/access/heap/README.WARM
+
+Write Amplification Reduction Method (WARM)
+===========================================
+
+The Heap Only Tuple (HOT) feature eliminated many redundant index
+entries and allowed re-use of the dead space occupied by previously
+updated or deleted tuples (see src/backend/access/heap/README.HOT).
+
+One of the necessary conditions for satisfying HOT update is that the
+update must not change a column used in any of the indexes on the table.
+The condition is sometimes hard to meet, especially for complex
+workloads with several indexes on large yet frequently updated tables.
+Worse, sometimes only one or two index columns may be updated, but the
+regular non-HOT update will still insert a new index entry in every
+index on the table, irrespective of whether the key pertaining to the
+index is changed or not.
+
+WARM is a technique devised to address these problems.
+
+
+Update Chains With Multiple Index Entries Pointing to the Root
+--------------------------------------------------------------
+
+When a non-HOT update is caused by an index key change, a new index
+entry must be inserted for the changed index. But if the index key
+hasn't changed for other indexes, we don't really need to insert a new
+entry. Even though the existing index entry is pointing to the old
+tuple, the new tuple is reachable via the t_ctid chain. To keep things
+simple, a WARM update requires that the heap block have enough space to
+store the new version of the tuple; this is the same requirement as for
+HOT updates.
+
+In WARM, we ensure that every index entry always points to the root of
+the WARM chain. In fact, a WARM chain looks exactly like a HOT chain
+except for the fact that there could be multiple index entries pointing
+to the root of the chain. So when a new entry is inserted in an index for
+the updated tuple during a WARM update, the new entry is made to point to
+the root of the WARM chain.
+
+For example, consider a table with two columns and an index on each of
+them. When a tuple is first inserted into the table, each index has
+exactly one entry pointing to the tuple.
+
+	lp [1]
+	[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's entry (aaaa) also points to 1
+
+Now if the tuple's second column is updated and if there is room on the
+page, we perform a WARM update: Index1 does not get any new entry, and
+Index2's new entry still points to the root tuple of the chain.
+
+	lp [1]  [2]
+	[1111, aaaa]->[1111, bbbb]
+
+	Index1's entry (1111) points to 1
+	Index2's old entry (aaaa) points to 1
+	Index2's new entry (bbbb) also points to 1
+
+An update chain that has more than one index entry pointing to its
+root line pointer is called a WARM chain, and the action that creates a
+WARM chain is called a WARM update.
+
+Since all indexes always point to the root of the WARM chain, even when
+there is more than one index entry, WARM chains can be pruned and dead
+tuples can be removed without any need for corresponding index cleanup.
+
+While this solves the problem of pruning dead tuples from a HOT/WARM
+chain, it also opens up a new technical challenge because now we have a
+situation where a heap tuple is reachable from multiple index entries,
+each having a different index key. While MVCC still ensures that only
+valid tuples are returned, a tuple with a wrong index key may be
+returned because of wrong index entries. In the above example, tuple
+[1111, bbbb] is reachable from both keys (aaaa) as well as (bbbb). For
+this reason, tuples returned from a WARM chain must always be rechecked
+for index key-match.
+
+Recheck Index Key Against Heap Tuple
+------------------------------------
+
+Since every Index AM has its own notion of index tuples, each Index AM
+must implement its own method to recheck heap tuples. For example, a
+hash index stores the hash value of the column, and hence the recheck
+routine for the hash AM must first compute the hash value of the heap
+attribute and then compare it against the value stored in the index tuple.
+
+The patch currently implements recheck routines for hash and btree
+indexes. If the table has an index which doesn't provide a recheck
+routine, WARM updates are disabled on that table.
+
+Problem With Duplicate (key, ctid) Index Entries
+------------------------------------------------
+
+The index-key recheck logic works as long as there are no duplicate
+index keys pointing to the same WARM chain. Otherwise, the same valid
+tuple would be reachable via multiple index keys, each satisfying the
+index key check. In the above example, if the tuple [1111, bbbb] is
+again updated to [1111, aaaa] and we insert a new index entry (aaaa)
+pointing to the root line pointer, we end up with the following
+structure:
+
+	lp [1]  [2]  [3]
+	[1111, aaaa]->[1111, bbbb]->[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's oldest entry (aaaa) points to 1
+	Index2's old entry (bbbb) also points to 1
+	Index2's new entry (aaaa) also points to 1
+
+We must solve this problem to ensure that the same tuple is not
+reachable via multiple index pointers. There are a couple of ways to
+address this issue:
+
+1. Do not allow WARM update to a tuple from a WARM chain. This
+guarantees that there can never be duplicate index entries to the same
+root line pointer because we must have checked for old and new index
+keys while doing the first WARM update.
+
+2. Do not allow duplicate (key, ctid) index pointers. In the above
+example, since (aaaa, 1) already exists in the index, we must not insert
+a duplicate index entry.
+
+The patch currently implements option 1, i.e. do not WARM-update a tuple
+from a WARM chain. HOT updates are fine because they do not add a new
+index entry.
+
+Even with this restriction, WARM is a significant improvement, because
+the number of regular updates is cut roughly in half.
+
+Expression and Partial Indexes
+------------------------------
+
+Expressions may evaluate to the same value even if the underlying column
+values have changed. A simple example is an index on "lower(col)", which
+returns the same value if the new heap value differs only in case. So we
+cannot rely solely on the heap column check to decide whether or not to
+insert a new index entry for expression indexes. Similarly, for partial
+indexes, the predicate expression must be evaluated to decide whether or
+not to insert a new index entry when columns referred to in the
+predicate expression change.
+
+(None of this is currently implemented; we simply disallow a WARM
+update if a column used in an expression index or in an index predicate
+has changed.)
+
+
+Efficiently Finding the Root Line Pointer
+-----------------------------------------
+
+During a WARM update, we must be able to find the root line pointer of
+the tuple being updated. The t_ctid field in the heap tuple header is
+usually used to find the next tuple in the update chain, but the tuple
+being updated must be the last tuple in the chain, and in that case
+t_ctid normally points to the tuple itself. So in theory we can use
+t_ctid to store additional information in the last tuple of the update
+chain, provided the fact that the tuple is the last one is recorded
+elsewhere.
+
+We now utilize another bit from t_infomask2 to explicitly identify that
+this is the last tuple in the update chain.
+
+HEAP_LATEST_TUPLE - When this bit is set, the tuple is the last tuple in
+the update chain. The OffsetNumber part of t_ctid points to the root
+line pointer of the chain when HEAP_LATEST_TUPLE flag is set.
+
+If the UPDATE operation is aborted, the last tuple in the update chain
+becomes dead. The root line pointer information stored in the tuple
+which remains the last valid tuple in the chain is then also lost. In
+such rare cases, the root line pointer must be found the hard way, by
+scanning the entire heap page.
+
+Tracking WARM Chains
+--------------------
+
+When a tuple is WARM updated, the old, the new and every subsequent tuple in
+the chain is marked with a special HEAP_WARM_UPDATED flag. We use the last
+remaining bit in t_infomask2 to store this information.
+
+When a tuple is returned from a WARM chain, the caller must do additional
+checks to ensure that the tuple matches the index key. Even if the tuple
+precedes the WARM update in the chain, it must still be rechecked for the
+index key match (the case when the old tuple is returned via the new index
+key). So we must always follow the update chain to the end to check
+whether this is a WARM chain.
+
+Converting WARM chains back to HOT chains (VACUUM ?)
+----------------------------------------------------
+
+The current implementation of WARM allows only one WARM update per
+chain. This simplifies the design and addresses certain issues around
+duplicate key scans. But this also implies that the benefit of WARM will be
+no more than 50%, which is still significant, but if we could return
+WARM chains back to normal status, we could do far more WARM updates.
+
+A distinct property of a WARM chain is that at least one index has more
+than one live index entries pointing to the root of the chain. In other
+words, if we can remove duplicate entry from every index or conclusively
+prove that there are no duplicate index entries for the root line
+pointer, the chain can again be marked as HOT.
+
+Here is one idea:
+
+A WARM chain has two parts, separated by the tuple that caused the WARM
+update. All tuples in each part have matching index keys, but certain
+index keys may not match between the two parts. Let's say we mark heap
+tuples in the second part with a special HEAP_WARM_TUPLE flag. Similarly,
+the new index entries created by the first WARM update are marked with
+the INDEX_WARM_POINTER flag.
+
+There are two distinct parts of the WARM chain: the first, where none of
+the tuples have the HEAP_WARM_TUPLE flag set, and the second, where every
+tuple has the flag set. Each part satisfies the HOT property on its own,
+i.e. all its tuples have the same values for the indexed columns. But the
+two parts are separated by the WARM update, which breaks the HOT property
+for one or more indexes.
+
+Heap chain: [1] [2] [3] [4]
+			[aaaa, 1111] -> [aaaa, 1111] -> [bbbb, 1111]W -> [bbbb, 1111]W
+
+Index1: 	(aaaa) points to 1 (satisfies only tuples without W)
+			(bbbb)W points to 1 (satisfies only tuples marked with W)
+
+Index2:		(1111) points to 1 (satisfies tuples with and without W)
+
+
+It's clear that for indexes with both pointers, a heap tuple without
+HEAP_WARM_TUPLE flag will be reachable from the index pointer cleared of
+INDEX_WARM_POINTER flag and that with HEAP_WARM_TUPLE flag will be reachable
+from the pointer with INDEX_WARM_POINTER. But for indexes which did not create
+a new entry, tuples with and without the HEAP_WARM_TUPLE flag will be reachable
+from the original index pointer which doesn't have the INDEX_WARM_POINTER flag.
+(there is no pointer with INDEX_WARM_POINTER in such indexes).
+
+During first heap scan of VACUUM, we look for tuples with HEAP_WARM_UPDATED
+set.  If all or none of the live tuples in the chain are marked with
+HEAP_WARM_TUPLE flag, then the chain is a candidate for HOT conversion. We
+remember the root line pointer and whether the tuples in the chain had
+HEAP_WARM_TUPLE flags set or not.
+
+If we have a WARM chain with HEAP_WARM_TUPLE set, then our goal is to remove
+the index pointers without INDEX_WARM_POINTER flags and vice versa. But there
+is a catch. For Index2 above, there is only one pointer and it does not have
+the INDEX_WARM_POINTER flag set. Since all heap tuples are reachable only via
+this pointer, it must not be removed. IOW, we should remove an index pointer
+without INDEX_WARM_POINTER iff another index pointer with INDEX_WARM_POINTER
+exists. Since index vacuum may visit these pointers in any order, we will need
+another index pass to detect redundant index pointers, which can safely be
+removed because all live tuples are reachable via the other index pointer. So
+in the first index pass we check which WARM candidates have 2 index pointers.
+In the second pass, we remove the redundant pointer and clear the
+INDEX_WARM_POINTER flag if that's the surviving index pointer. Note that
+all index pointers, either CLEAR or WARM, to dead tuples are removed during the
+first index scan itself.
+
+During the second heap scan, we fix WARM chain by clearing HEAP_WARM_UPDATED
+and HEAP_WARM_TUPLE flags on tuples.
+
+There are some more problems around aborted vacuums. For example, if vacuum
+aborts after clearing INDEX_WARM_POINTER flag but before removing the other
+index pointer, we will end up with two index pointers and none of those will
+have INDEX_WARM_POINTER set.  But since the HEAP_WARM_UPDATED flag on the heap
+tuple is still set, further WARM updates to the chain will be blocked. I guess
+we will need some special handling for case with multiple index pointers where
+none of the index pointers has INDEX_WARM_POINTER flag set. We can either leave
+these WARM chains alone and let them die with a subsequent non-WARM update or
+must apply heap-recheck logic during index vacuum to find the dead pointer.
+Given that vacuum-aborts are not common, I am inclined to leave this case
+unhandled. We must still check for presence of multiple index pointers without
+INDEX_WARM_POINTER flags and ensure that we don't accidentally remove either
+of these pointers, and also must not clear the WARM chains.
+
+CREATE INDEX CONCURRENTLY
+-------------------------
+
+Currently CREATE INDEX CONCURRENTLY (CIC) is implemented as a 3-phase
+process.  In the first phase, we create catalog entry for the new index
+so that the index is visible to all other backends, but still don't use
+it for either read or write.  But we ensure that no new broken HOT
+chains are created by new transactions. In the second phase, we build
+the new index using a MVCC snapshot and then make the index available
+for inserts. We then do another pass over the index and insert any
+missing tuples, everytime indexing only it's root line pointer. See
+README.HOT for details about how HOT impacts CIC and how various
+challenges are tackeled.
+
+WARM poses another challenge because it allows creation of HOT chains
+even when an index key is changed. But since the index is not ready for
+insertion until the second phase is over, we might end up with a
+situation where the HOT chain has tuples with different index columns,
+yet only one of these values is indexed by the new index. Note that
+during the third phase, we only index tuples whose root line pointer is
+missing from the index. But we can't easily check if the existing index
+tuple is actually indexing the heap tuple visible to the new MVCC
+snapshot. Finding that information will require us to query the index
+again for every tuple in the chain, especially if it's a WARM tuple.
+This would require repeated access to the index. Another option would be
+to return index keys along with the heap TIDs when index is scanned for
+collecting all indexed TIDs during third phase. We can then compare the
+heap tuple against the already indexed key and decide whether or not to
+index the new tuple.
+
+We solve this problem more simply by disallowing WARM updates until the
+index is ready for insertion. We don't need to disallow WARM wholesale:
+only those updates that change the columns used by the new index are
+prevented from being WARM updates.
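To make the eligibility rules described above concrete, here is a minimal, self-contained sketch of the decision as I read the patch; the function name `warm_update_allowed` and the plain `uint32_t` masks standing in for Bitmapsets are illustrative only, not the patch's actual API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy stand-in for a Bitmapset of attribute numbers: bit i = column i. */
typedef uint32_t AttrMask;

bool attr_overlap(AttrMask a, AttrMask b)   { return (a & b) != 0; }
bool attr_is_subset(AttrMask a, AttrMask b) { return (a & ~b) == 0; }

/*
 * Sketch of the WARM-eligibility check: an update may be WARM only if the
 * relation supports WARM, the chain has not been WARM-updated before, no
 * expression/predicate index column changed, not *all* index columns
 * changed, and no column of a not-yet-ready (CIC) index changed.
 */
bool
warm_update_allowed(bool rel_supports_warm, bool chain_already_warm,
                    AttrMask modified, AttrMask hot_attrs,
                    AttrMask expr_attrs, AttrMask notready_attrs)
{
    return rel_supports_warm &&
           !chain_already_warm &&
           !attr_overlap(modified, expr_attrs) &&
           !attr_is_subset(hot_attrs, modified) &&
           !attr_overlap(modified, notready_attrs);
}
```

For example, modifying one of two indexed columns qualifies for WARM, but the same update is rejected while that column belongs to an index still being built concurrently.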
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index e573f1a..4b4ccf6 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -1974,6 +1974,206 @@ heap_fetch(Relation relation,
 }
 
 /*
+ * Check status of a (possibly) WARM chain.
+ *
+ * This function looks at a HOT/WARM chain starting at tid and returns a
+ * bitmask of information. We only follow the chain as long as it's known to
+ * be a valid HOT chain. Information returned by the function consists of:
+ *
+ *  HCWC_WARM_UPDATED_TUPLE - a tuple with HEAP_WARM_UPDATED is found somewhere
+ *  						  in the chain. Note that when a tuple is WARM
+ *  						  updated, both old and new versions are marked
+ *  						  with this flag.
+ *
+ *  HCWC_WARM_TUPLE  - a tuple with HEAP_WARM_TUPLE is found somewhere in
+ *					  the chain.
+ *
+ *  HCWC_CLEAR_TUPLE - a tuple without HEAP_WARM_TUPLE is found somewhere in
+ *  					 the chain.
+ *
+ *	If stop_at_warm is true, we stop when the first HEAP_WARM_UPDATED tuple is
+ *	found and return information collected so far.
+ */
+HeapCheckWarmChainStatus
+heap_check_warm_chain(Page dp, ItemPointer tid, bool stop_at_warm)
+{
+	TransactionId				prev_xmax = InvalidTransactionId;
+	OffsetNumber				offnum;
+	HeapTupleData				heapTuple;
+	HeapCheckWarmChainStatus	status = 0;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+		{
+			if (ItemIdIsRedirected(lp))
+			{
+				/* Follow the redirect */
+				offnum = ItemIdGetRedirect(lp);
+				continue;
+			}
+			/* else must be end of chain */
+			break;
+		}
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		if (HeapTupleHeaderIsWarmUpdated(heapTuple.t_data))
+		{
+			/* We found a WARM_UPDATED tuple */
+			status |= HCWC_WARM_UPDATED_TUPLE;
+
+			/*
+			 * If we've been told to stop at the first WARM_UPDATED tuple, just
+			 * return whatever information collected so far.
+			 */
+			if (stop_at_warm)
+				return status;
+
+			/*
+			 * Remember whether it's a CLEAR or a WARM tuple.
+			 */
+			if (HeapTupleHeaderIsWarm(heapTuple.t_data))
+				status |= HCWC_WARM_TUPLE;
+			else
+				status |= HCWC_CLEAR_TUPLE;
+		}
+		else
+			/* Must be a regular, non-WARM tuple */
+			status |= HCWC_CLEAR_TUPLE;
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
+		/*
+		 * The chain can't continue if the tuple contains a root line pointer
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	/* Return whatever information we collected while walking the chain */
+	return status;
+}
+
+/*
+ * Scan through the WARM chain starting at tid and reset all WARM related
+ * flags. At the end, the chain will have all characteristics of a regular HOT
+ * chain.
+ *
+ * Return the number of cleared offnums. Cleared offnums are returned in the
+ * passed-in cleared_offnums array. The caller must ensure that the array is
+ * large enough to hold the maximum number of offnums that can be cleared by
+ * this invocation of heap_clear_warm_chain().
+ */
+int
+heap_clear_warm_chain(Page dp, ItemPointer tid, OffsetNumber *cleared_offnums)
+{
+	TransactionId				prev_xmax = InvalidTransactionId;
+	OffsetNumber				offnum;
+	HeapTupleData				heapTuple;
+	int							num_cleared = 0;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+		{
+			if (ItemIdIsRedirected(lp))
+			{
+				/* Follow the redirect */
+				offnum = ItemIdGetRedirect(lp);
+				continue;
+			}
+			/* else must be end of chain */
+			break;
+		}
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		/*
+		 * Clear WARM_UPDATED and WARM flags.
+		 */
+		if (HeapTupleHeaderIsWarmUpdated(heapTuple.t_data))
+		{
+			HeapTupleHeaderClearWarmUpdated(heapTuple.t_data);
+			HeapTupleHeaderClearWarm(heapTuple.t_data);
+			cleared_offnums[num_cleared++] = offnum;
+		}
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
+		/*
+		 * The chain can't continue if the tuple contains a root line pointer
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	return num_cleared;
+}
+
+/*
  *	heap_hot_search_buffer	- search HOT chain for tuple satisfying snapshot
  *
  * On entry, *tid is the TID of a tuple (either a simple tuple, or the root
@@ -1993,11 +2193,14 @@ heap_fetch(Relation relation,
  * Unlike heap_fetch, the caller must already have pin and (at least) share
  * lock on the buffer; it is still pinned/locked at exit.  Also unlike
  * heap_fetch, we do not report any pgstats count; caller may do so if wanted.
+ *
+ * recheck should be set to false on entry by the caller; it will be set to
+ * true on exit if a WARM tuple is encountered.
  */
 bool
 heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 					   Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call)
+					   bool *all_dead, bool first_call, bool *recheck)
 {
 	Page		dp = (Page) BufferGetPage(buffer);
 	TransactionId prev_xmax = InvalidTransactionId;
@@ -2051,9 +2254,12 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		ItemPointerSetOffsetNumber(&heapTuple->t_self, offnum);
 
 		/*
-		 * Shouldn't see a HEAP_ONLY tuple at chain start.
+		 * Shouldn't see a HEAP_ONLY tuple at chain start, unless we are
+		 * dealing with a WARM updated tuple, in which case deferred triggers
+		 * may request fetching a WARM tuple from the middle of a chain.
 		 */
-		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple))
+		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple) &&
+				!HeapTupleIsWarmUpdated(heapTuple))
 			break;
 
 		/*
@@ -2066,6 +2272,20 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 			break;
 
 		/*
+		 * Check if there exists a WARM tuple somewhere down the chain and set
+		 * recheck to TRUE.
+		 *
+		 * XXX This is not very efficient right now, and we should look for
+		 * possible improvements here.
+		 */
+		if (recheck && *recheck == false)
+		{
+			HeapCheckWarmChainStatus status;
+			status = heap_check_warm_chain(dp, &heapTuple->t_self, true);
+			*recheck = HCWC_IS_WARM_UPDATED(status);
+		}
+
+		/*
 		 * When first_call is true (and thus, skip is initially false) we'll
 		 * return the first tuple we find.  But on later passes, heapTuple
 		 * will initially be pointing to the tuple we returned last time.
@@ -2114,7 +2334,8 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		 * Check to see if HOT chain continues past this tuple; if so fetch
 		 * the next offnum and loop around.
 		 */
-		if (HeapTupleIsHotUpdated(heapTuple))
+		if (HeapTupleIsHotUpdated(heapTuple) &&
+			!HeapTupleHeaderHasRootOffset(heapTuple->t_data))
 		{
 			Assert(ItemPointerGetBlockNumber(&heapTuple->t_data->t_ctid) ==
 				   ItemPointerGetBlockNumber(tid));
@@ -2138,18 +2359,41 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
  */
 bool
 heap_hot_search(ItemPointer tid, Relation relation, Snapshot snapshot,
-				bool *all_dead)
+				bool *all_dead, bool *recheck, Buffer *cbuffer,
+				HeapTuple heapTuple)
 {
 	bool		result;
 	Buffer		buffer;
-	HeapTupleData heapTuple;
+	ItemPointerData ret_tid = *tid;
 
 	buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	LockBuffer(buffer, BUFFER_LOCK_SHARE);
-	result = heap_hot_search_buffer(tid, relation, buffer, snapshot,
-									&heapTuple, all_dead, true);
-	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-	ReleaseBuffer(buffer);
+	result = heap_hot_search_buffer(&ret_tid, relation, buffer, snapshot,
+									heapTuple, all_dead, true, recheck);
+
+	/*
+	 * If we are returning a potential candidate tuple from this chain and the
+	 * caller has requested the "recheck" hint, keep the buffer locked and
+	 * pinned. The caller must release the lock and pin on the buffer in all
+	 * such cases.
+	 */
+	if (!result || !recheck || !(*recheck))
+	{
+		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+		ReleaseBuffer(buffer);
+	}
+
+	/*
+	 * Set the caller-supplied tid to the actual location of the tuple being
+	 * returned.
+	 */
+	if (result)
+	{
+		*tid = ret_tid;
+		if (cbuffer)
+			*cbuffer = buffer;
+	}
+
 	return result;
 }
 
@@ -2792,7 +3036,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		{
 			XLogRecPtr	recptr;
 			xl_heap_multi_insert *xlrec;
-			uint8		info = XLOG_HEAP2_MULTI_INSERT;
+			uint8		info = XLOG_HEAP_MULTI_INSERT;
 			char	   *tupledata;
 			int			totaldatalen;
 			char	   *scratchptr = scratch;
@@ -2889,7 +3133,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			/* filtering by origin on a row level is much more efficient */
 			XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN);
 
-			recptr = XLogInsert(RM_HEAP2_ID, info);
+			recptr = XLogInsert(RM_HEAP_ID, info);
 
 			PageSetLSN(page, recptr);
 		}
@@ -3313,7 +3557,9 @@ l1:
 	}
 
 	/* store transaction information of xact deleting the tuple */
-	tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	tp.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(tp.t_data))
+		tp.t_data->t_infomask &= ~HEAP_MOVED;
 	tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 	tp.t_data->t_infomask |= new_infomask;
 	tp.t_data->t_infomask2 |= new_infomask2;
@@ -3508,15 +3754,18 @@ simple_heap_delete(Relation relation, ItemPointer tid)
 HTSU_Result
 heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode)
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update)
 {
 	HTSU_Result result;
 	TransactionId xid = GetCurrentTransactionId();
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *exprindx_attrs;
 	Bitmapset  *interesting_attrs;
 	Bitmapset  *modified_attrs;
+	Bitmapset  *notready_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3537,6 +3786,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		have_tuple_lock = false;
 	bool		iscombo;
 	bool		use_hot_update = false;
+	bool		use_warm_update = false;
 	bool		hot_attrs_checked = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
@@ -3562,6 +3812,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot update tuples during a parallel operation")));
 
+	/* Assume a non-WARM update */
+	if (warm_update)
+		*warm_update = false;
+
 	/*
 	 * Fetch the list of attributes to be checked for various operations.
 	 *
@@ -3583,6 +3837,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	exprindx_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_EXPR_PREDICATE);
+	notready_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_NOTREADY);
 
 
 	block = ItemPointerGetBlockNumber(otid);
@@ -3606,8 +3864,11 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 		interesting_attrs = bms_add_members(interesting_attrs, hot_attrs);
 		hot_attrs_checked = true;
 	}
+
 	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, exprindx_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, notready_attrs);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -3654,6 +3915,9 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
 												  &oldtup, newtup);
 
+	if (modified_attrsp)
+		*modified_attrsp = bms_copy(modified_attrs);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3909,8 +4173,10 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(exprindx_attrs);
 		bms_free(modified_attrs);
 		bms_free(interesting_attrs);
+		bms_free(notready_attrs);
 		return result;
 	}
 
@@ -4074,7 +4340,9 @@ l2:
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
-		oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		oldtup.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(oldtup.t_data))
+			oldtup.t_data->t_infomask &= ~HEAP_MOVED;
 		oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		HeapTupleClearHotUpdated(&oldtup);
 		/* ... and store info about transaction updating this tuple */
@@ -4228,6 +4496,39 @@ l2:
 		 */
 		if (hot_attrs_checked && !bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
+		else
+		{
+			/*
+			 * If there are no WARM updates on this chain yet, let this update
+			 * be a WARM update. We must not do another WARM update even if the
+			 * previous WARM update at the end of the chain aborted. That's why
+			 * we look at the HEAP_WARM_UPDATED flag.
+			 *
+			 * We don't do WARM updates if one of the columns used in index
+			 * expressions is being modified. Since expressions may evaluate to
+			 * the same value even when heap values change, we don't have a
+			 * good way to deal with duplicate key scans when expressions are
+			 * used in the index.
+			 *
+			 * We check whether the HOT attrs are a subset of the modified
+			 * attributes. Since HOT attrs include all index attributes, this
+			 * allows us to avoid doing a WARM update when all index attributes
+			 * are being updated. Performing a WARM update in that case is not
+			 * a great idea because all indexes will receive a new entry anyway.
+			 *
+			 * We also disable WARM temporarily if we are modifying a column
+			 * which is used by a new index that's being added. We can't insert
+			 * new entries into such indexes, and hence we must not create WARM
+			 * chains that are broken with respect to the new index being
+			 * added.
+			 */
+			if (relation->rd_supportswarm &&
+				!HeapTupleIsWarmUpdated(&oldtup) &&
+				!bms_overlap(modified_attrs, exprindx_attrs) &&
+				!bms_is_subset(hot_attrs, modified_attrs) &&
+				!bms_overlap(notready_attrs, modified_attrs))
+				use_warm_update = true;
+		}
 	}
 	else
 	{
@@ -4274,6 +4575,32 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+
+		/*
+		 * Even if we are doing a HOT update, we must carry forward the WARM
+		 * flag because we may have already inserted another index entry
+		 * pointing to our root and a third entry may create duplicates.
+		 *
+		 * Note: If we ever have a mechanism to avoid duplicate <key, TID> in
+		 * indexes, we could look at relaxing this restriction and allow even
+		 * more WARM updates.
+		 */
+		if (HeapTupleIsWarmUpdated(&oldtup))
+		{
+			HeapTupleSetWarmUpdated(heaptup);
+			HeapTupleSetWarmUpdated(newtup);
+		}
+
+		/*
+		 * If the old tuple is a WARM tuple then mark the new tuple as a WARM
+		 * tuple as well.
+		 */
+		if (HeapTupleIsWarm(&oldtup))
+		{
+			HeapTupleSetWarm(heaptup);
+			HeapTupleSetWarm(newtup);
+		}
+
 		/*
 		 * For HOT (or WARM) updated tuples, we store the offset of the root
 		 * line pointer of this chain in the ip_posid field of the new tuple.
@@ -4286,12 +4613,45 @@ l2:
 		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
 			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
+	else if (use_warm_update)
+	{
+		/* Mark the old tuple as HOT-updated */
+		HeapTupleSetHotUpdated(&oldtup);
+		HeapTupleSetWarmUpdated(&oldtup);
+
+		/* And mark the new tuple as heap-only */
+		HeapTupleSetHeapOnly(heaptup);
+		/* Mark the new tuple as WARM tuple */
+		HeapTupleSetWarmUpdated(heaptup);
+		/* This update also starts the WARM chain */
+		HeapTupleSetWarm(heaptup);
+		Assert(!HeapTupleIsWarm(&oldtup));
+
+		/* Mark the caller's copy too, in case different from heaptup */
+		HeapTupleSetHeapOnly(newtup);
+		HeapTupleSetWarmUpdated(newtup);
+		HeapTupleSetWarm(newtup);
+
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
+		/* Let the caller know we did a WARM update */
+		if (warm_update)
+			*warm_update = true;
+	}
 	else
 	{
 		/* Make sure tuples are correctly marked as not-HOT */
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		HeapTupleClearWarmUpdated(heaptup);
+		HeapTupleClearWarmUpdated(newtup);
+		HeapTupleClearWarm(heaptup);
+		HeapTupleClearWarm(newtup);
 		root_offnum = InvalidOffsetNumber;
 	}
 
@@ -4310,7 +4670,9 @@ l2:
 	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
-	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	oldtup.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(oldtup.t_data))
+		oldtup.t_data->t_infomask &= ~HEAP_MOVED;
 	oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 	/* ... and store info about transaction updating this tuple */
 	Assert(TransactionIdIsValid(xmax_old_tuple));
@@ -4401,7 +4763,10 @@ l2:
 	if (have_tuple_lock)
 		UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode);
 
-	pgstat_count_heap_update(relation, use_hot_update);
+	/*
+	 * Count HOT and WARM updates separately
+	 */
+	pgstat_count_heap_update(relation, use_hot_update, use_warm_update);
 
 	/*
 	 * If heaptup is a private copy, release it.  Don't forget to copy t_self
@@ -4421,6 +4786,8 @@ l2:
 	bms_free(id_attrs);
 	bms_free(modified_attrs);
 	bms_free(interesting_attrs);
+	bms_free(exprindx_attrs);
+	bms_free(notready_attrs);
 
 	return HeapTupleMayBeUpdated;
 }
@@ -4541,7 +4908,8 @@ HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
  * via ereport().
  */
 void
-simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
+simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup,
+		Bitmapset **modified_attrs, bool *warm_update)
 {
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
@@ -4550,7 +4918,7 @@ simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
 	result = heap_update(relation, otid, tup,
 						 GetCurrentCommandId(true), InvalidSnapshot,
 						 true /* wait for commit */ ,
-						 &hufd, &lockmode);
+						 &hufd, &lockmode, modified_attrs, warm_update);
 	switch (result)
 	{
 		case HeapTupleSelfUpdated:
@@ -6227,7 +6595,9 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	PageSetPrunable(page, RecentGlobalXmin);
 
 	/* store transaction information of xact deleting the tuple */
-	tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	tp.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(tp.t_data))
+		tp.t_data->t_infomask &= ~HEAP_MOVED;
 	tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 
 	/*
@@ -6801,7 +7171,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid,
 	 * Old-style VACUUM FULL is gone, but we have to keep this code as long as
 	 * we support having MOVED_OFF/MOVED_IN tuples in the database.
 	 */
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 
@@ -6820,7 +7190,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid,
 			 * have failed; whereas a non-dead MOVED_IN tuple must mean the
 			 * xvac transaction succeeded.
 			 */
-			if (tuple->t_infomask & HEAP_MOVED_OFF)
+			if (HeapTupleHeaderIsMovedOff(tuple))
 				frz->frzflags |= XLH_INVALID_XVAC;
 			else
 				frz->frzflags |= XLH_FREEZE_XVAC;
@@ -7290,7 +7660,7 @@ heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple)
 			return true;
 	}
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 		if (TransactionIdIsNormal(xid))
@@ -7373,7 +7743,7 @@ heap_tuple_needs_freeze(HeapTupleHeader tuple, TransactionId cutoff_xid,
 			return true;
 	}
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 		if (TransactionIdIsNormal(xid) &&
@@ -7399,7 +7769,7 @@ HeapTupleHeaderAdvanceLatestRemovedXid(HeapTupleHeader tuple,
 	TransactionId xmax = HeapTupleHeaderGetUpdateXid(tuple);
 	TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		if (TransactionIdPrecedes(*latestRemovedXid, xvac))
 			*latestRemovedXid = xvac;
@@ -7448,6 +7818,36 @@ log_heap_cleanup_info(RelFileNode rnode, TransactionId latestRemovedXid)
 }
 
 /*
+ * Perform XLogInsert for a heap-warm-clear operation.  Caller must already
+ * have modified the buffer and marked it dirty.
+ */
+XLogRecPtr
+log_heap_warmclear(Relation reln, Buffer buffer,
+			   OffsetNumber *cleared, int ncleared)
+{
+	xl_heap_warmclear	xlrec;
+	XLogRecPtr			recptr;
+
+	/* Caller should not call me on a non-WAL-logged relation */
+	Assert(RelationNeedsWAL(reln));
+
+	xlrec.ncleared = ncleared;
+
+	XLogBeginInsert();
+	XLogRegisterData((char *) &xlrec, SizeOfHeapWarmClear);
+
+	XLogRegisterBuffer(0, buffer, REGBUF_STANDARD);
+
+	if (ncleared > 0)
+		XLogRegisterBufData(0, (char *) cleared,
+							ncleared * sizeof(OffsetNumber));
+
+	recptr = XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_WARMCLEAR);
+
+	return recptr;
+}
+
+/*
  * Perform XLogInsert for a heap-clean operation.  Caller must already
  * have modified the buffer and marked it dirty.
  *
@@ -7602,6 +8002,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	bool		need_tuple_data = RelationIsLogicallyLogged(reln);
 	bool		init;
 	int			bufflags;
+	bool		warm_update = false;
 
 	/* Caller should not call me on a non-WAL-logged relation */
 	Assert(RelationNeedsWAL(reln));
@@ -7613,6 +8014,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	else
 		info = XLOG_HEAP_UPDATE;
 
+	if (HeapTupleIsWarmUpdated(newtup))
+		warm_update = true;
+
 	/*
 	 * If the old and new tuple are on the same page, we only need to log the
 	 * parts of the new tuple that were changed.  That saves on the amount of
@@ -7686,6 +8090,8 @@ log_heap_update(Relation reln, Buffer oldbuf,
 				xlrec.flags |= XLH_UPDATE_CONTAINS_OLD_KEY;
 		}
 	}
+	if (warm_update)
+		xlrec.flags |= XLH_UPDATE_WARM_UPDATE;
 
 	/* If new tuple is the single and first tuple on page... */
 	if (ItemPointerGetOffsetNumber(&(newtup->t_self)) == FirstOffsetNumber &&
@@ -8100,6 +8506,60 @@ heap_xlog_clean(XLogReaderState *record)
 		XLogRecordPageWithFreeSpace(rnode, blkno, freespace);
 }
 
+
+/*
+ * Handles the XLOG_HEAP2_WARMCLEAR record type
+ */
+static void
+heap_xlog_warmclear(XLogReaderState *record)
+{
+	XLogRecPtr	lsn = record->EndRecPtr;
+	xl_heap_warmclear	*xlrec = (xl_heap_warmclear *) XLogRecGetData(record);
+	Buffer		buffer;
+	RelFileNode rnode;
+	BlockNumber blkno;
+	XLogRedoAction action;
+
+	XLogRecGetBlockTag(record, 0, &rnode, NULL, &blkno);
+
+	/*
+	 * If we have a full-page image, restore it (using a cleanup lock) and
+	 * we're done.
+	 */
+	action = XLogReadBufferForRedoExtended(record, 0, RBM_NORMAL, true,
+										   &buffer);
+	if (action == BLK_NEEDS_REDO)
+	{
+		Page		page = (Page) BufferGetPage(buffer);
+		OffsetNumber *cleared;
+		int			ncleared;
+		Size		datalen;
+		int			i;
+
+		cleared = (OffsetNumber *) XLogRecGetBlockData(record, 0, &datalen);
+
+		ncleared = xlrec->ncleared;
+
+		for (i = 0; i < ncleared; i++)
+		{
+			ItemId			lp;
+			OffsetNumber	offnum = cleared[i];
+			HeapTupleData	heapTuple;
+
+			lp = PageGetItemId(page, offnum);
+			heapTuple.t_data = (HeapTupleHeader) PageGetItem(page, lp);
+
+			HeapTupleHeaderClearWarmUpdated(heapTuple.t_data);
+			HeapTupleHeaderClearWarm(heapTuple.t_data);
+		}
+
+		PageSetLSN(page, lsn);
+		MarkBufferDirty(buffer);
+	}
+	if (BufferIsValid(buffer))
+		UnlockReleaseBuffer(buffer);
+}
+
 /*
  * Replay XLOG_HEAP2_VISIBLE record.
  *
@@ -8346,7 +8806,9 @@ heap_xlog_delete(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		HeapTupleHeaderClearHotUpdated(htup);
 		fix_infomask_from_infobits(xlrec->infobits_set,
@@ -8367,7 +8829,7 @@ heap_xlog_delete(XLogReaderState *record)
 		if (!HeapTupleHeaderHasRootOffset(htup))
 		{
 			OffsetNumber	root_offnum;
-			root_offnum = heap_get_root_tuple(page, xlrec->offnum); 
+			root_offnum = heap_get_root_tuple(page, xlrec->offnum);
 			HeapTupleHeaderSetHeapLatest(htup, root_offnum);
 		}
 
@@ -8663,16 +9125,22 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 	Size		freespace = 0;
 	XLogRedoAction oldaction;
 	XLogRedoAction newaction;
+	bool		warm_update = false;
 
 	/* initialize to keep the compiler quiet */
 	oldtup.t_data = NULL;
 	oldtup.t_len = 0;
 
+	if (xlrec->flags & XLH_UPDATE_WARM_UPDATE)
+		warm_update = true;
+
 	XLogRecGetBlockTag(record, 0, &rnode, NULL, &newblk);
 	if (XLogRecGetBlockTag(record, 1, NULL, NULL, &oldblk))
 	{
 		/* HOT updates are never done across pages */
 		Assert(!hot_update);
+		/* WARM updates are never done across pages */
+		Assert(!warm_update);
 	}
 	else
 		oldblk = newblk;
@@ -8732,6 +9200,11 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 								   &htup->t_infomask2);
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
+
+		/* Mark the old tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetWarmUpdated(htup);
+
 		/* Set forward chain link in t_ctid */
 		HeapTupleHeaderSetNextTid(htup, &newtid);
 
@@ -8867,6 +9340,10 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
 
+		/* Mark the new tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetWarmUpdated(htup);
+
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
@@ -8994,7 +9471,9 @@ heap_xlog_lock(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		fix_infomask_from_infobits(xlrec->infobits_set, &htup->t_infomask,
 								   &htup->t_infomask2);
@@ -9073,7 +9552,9 @@ heap_xlog_lock_updated(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		fix_infomask_from_infobits(xlrec->infobits_set, &htup->t_infomask,
 								   &htup->t_infomask2);
@@ -9142,6 +9623,9 @@ heap_redo(XLogReaderState *record)
 		case XLOG_HEAP_INSERT:
 			heap_xlog_insert(record);
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			heap_xlog_multi_insert(record);
+			break;
 		case XLOG_HEAP_DELETE:
 			heap_xlog_delete(record);
 			break;
@@ -9170,7 +9654,7 @@ heap2_redo(XLogReaderState *record)
 {
 	uint8		info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
 
-	switch (info & XLOG_HEAP_OPMASK)
+	switch (info & XLOG_HEAP2_OPMASK)
 	{
 		case XLOG_HEAP2_CLEAN:
 			heap_xlog_clean(record);
@@ -9184,9 +9668,6 @@ heap2_redo(XLogReaderState *record)
 		case XLOG_HEAP2_VISIBLE:
 			heap_xlog_visible(record);
 			break;
-		case XLOG_HEAP2_MULTI_INSERT:
-			heap_xlog_multi_insert(record);
-			break;
 		case XLOG_HEAP2_LOCK_UPDATED:
 			heap_xlog_lock_updated(record);
 			break;
@@ -9200,6 +9681,9 @@ heap2_redo(XLogReaderState *record)
 		case XLOG_HEAP2_REWRITE:
 			heap_xlog_logical_rewrite(record);
 			break;
+		case XLOG_HEAP2_WARMCLEAR:
+			heap_xlog_warmclear(record);
+			break;
 		default:
 			elog(PANIC, "heap2_redo: unknown op code %u", info);
 	}
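For reference, the status aggregation performed by the new heap_check_warm_chain() can be modeled as a tiny stand-alone program. This is only a sketch: the `MockTuple` struct and the enum values below are illustrative stand-ins for the real tuple headers and the patch's HeapCheckWarmChainStatus bits.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative status bits, mirroring HeapCheckWarmChainStatus. */
enum
{
    HCWC_WARM_UPDATED_TUPLE = 1 << 0,
    HCWC_WARM_TUPLE         = 1 << 1,
    HCWC_CLEAR_TUPLE        = 1 << 2
};

/* A mock chain member: just the two flag bits the scan looks at. */
typedef struct
{
    bool warm_updated;          /* HEAP_WARM_UPDATED set? */
    bool warm;                  /* HEAP_WARM_TUPLE set? */
} MockTuple;

/*
 * Toy model of heap_check_warm_chain(): walk the chain in order, OR
 * together what we see, and optionally bail out at the first
 * WARM-updated tuple, returning whatever was collected so far.
 */
int
check_warm_chain(const MockTuple *chain, int len, bool stop_at_warm)
{
    int status = 0;

    for (int i = 0; i < len; i++)
    {
        if (chain[i].warm_updated)
        {
            status |= HCWC_WARM_UPDATED_TUPLE;
            if (stop_at_warm)
                return status;
            status |= chain[i].warm ? HCWC_WARM_TUPLE : HCWC_CLEAR_TUPLE;
        }
        else
            status |= HCWC_CLEAR_TUPLE;
    }
    return status;
}
```

This is how heap_hot_search_buffer() uses the stop_at_warm form: it only needs to know whether any WARM-updated tuple exists in the chain to set the recheck hint, so it can stop at the first hit.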
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index f54337c..4e8ed79 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -834,6 +834,13 @@ heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				continue;
 
+			/*
+			 * If the tuple has a root line pointer, it must be the end of
+			 * the chain.
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			/* Set up to scan the HOT-chain */
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index 2d3ae9b..bd469ee 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -404,6 +404,14 @@ rewrite_heap_tuple(RewriteState state,
 		old_tuple->t_data->t_infomask & HEAP_XACT_MASK;
 
 	/*
+	 * We must clear the HEAP_WARM_TUPLE flag if the HEAP_WARM_UPDATED flag
+	 * was cleared above.
+	 */
+	if (HeapTupleHeaderIsWarmUpdated(old_tuple->t_data))
+		HeapTupleHeaderClearWarm(new_tuple->t_data);
+
+
+	/*
 	 * While we have our hands on the tuple, we may as well freeze any
 	 * eligible xmin or xmax, so that future VACUUM effort can be saved.
 	 */
@@ -428,7 +436,7 @@ rewrite_heap_tuple(RewriteState state,
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
 
-		/* 
+		/*
 		 * We've already checked that this is not the last tuple in the chain,
 		 * so fetch the next TID in the chain.
 		 */
@@ -737,7 +745,7 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		/* 
+		/*
 		 * Set t_ctid just to ensure that block number is copied correctly, but
 		 * then immediately mark the tuple as the latest.
 		 */
diff --git a/src/backend/access/heap/tuptoaster.c b/src/backend/access/heap/tuptoaster.c
index 19e7048..47b01eb 100644
--- a/src/backend/access/heap/tuptoaster.c
+++ b/src/backend/access/heap/tuptoaster.c
@@ -1620,7 +1620,8 @@ toast_save_datum(Relation rel, Datum value,
 							 toastrel,
 							 toastidxs[i]->rd_index->indisunique ?
 							 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-							 NULL);
+							 NULL,
+							 false);
 		}
 
 		/*
diff --git a/src/backend/access/index/genam.c b/src/backend/access/index/genam.c
index a91fda7..d523c8f 100644
--- a/src/backend/access/index/genam.c
+++ b/src/backend/access/index/genam.c
@@ -127,6 +127,8 @@ RelationGetIndexScan(Relation indexRelation, int nkeys, int norderbys)
 	scan->xs_cbuf = InvalidBuffer;
 	scan->xs_continue_hot = false;
 
+	scan->indexInfo = NULL;
+
 	return scan;
 }
 
diff --git a/src/backend/access/index/indexam.c b/src/backend/access/index/indexam.c
index cc5ac8b..d048714 100644
--- a/src/backend/access/index/indexam.c
+++ b/src/backend/access/index/indexam.c
@@ -197,7 +197,8 @@ index_insert(Relation indexRelation,
 			 ItemPointer heap_t_ctid,
 			 Relation heapRelation,
 			 IndexUniqueCheck checkUnique,
-			 IndexInfo *indexInfo)
+			 IndexInfo *indexInfo,
+			 bool warm_update)
 {
 	RELATION_CHECKS;
 	CHECK_REL_PROCEDURE(aminsert);
@@ -207,6 +208,12 @@ index_insert(Relation indexRelation,
 									   (HeapTuple) NULL,
 									   InvalidBuffer);
 
+	if (warm_update)
+	{
+		Assert(indexRelation->rd_amroutine->amwarminsert != NULL);
+		return indexRelation->rd_amroutine->amwarminsert(indexRelation, values,
+				isnull, heap_t_ctid, heapRelation, checkUnique, indexInfo);
+	}
 	return indexRelation->rd_amroutine->aminsert(indexRelation, values, isnull,
 												 heap_t_ctid, heapRelation,
 												 checkUnique, indexInfo);
@@ -291,6 +298,25 @@ index_beginscan_internal(Relation indexRelation,
 	scan->parallel_scan = pscan;
 	scan->xs_temp_snap = temp_snap;
 
+	/*
+	 * If the index supports recheck, make sure that the index tuple is saved
+	 * during index scans. Also build and cache the IndexInfo, which is used
+	 * by the amrecheck routine.
+	 *
+	 * XXX Ideally, we should look at all indexes on the table and check if
+	 * WARM is at all supported on the base table. If WARM is not supported
+	 * then we don't need to do any recheck. RelationGetIndexAttrBitmap() does
+	 * do that and sets rd_supportswarm after looking at all indexes. But we
+	 * don't know if the function was called earlier in the session when we're
+	 * here. We can't call it now because there exists a risk of causing
+	 * deadlock.
+	 */
+	if (indexRelation->rd_amroutine->amrecheck)
+	{
+		scan->xs_want_itup = true;
+		scan->indexInfo = BuildIndexInfo(indexRelation);
+	}
+
 	return scan;
 }
 
@@ -358,6 +384,10 @@ index_endscan(IndexScanDesc scan)
 	if (scan->xs_temp_snap)
 		UnregisterSnapshot(scan->xs_snapshot);
 
+	/* Free cached IndexInfo, if any */
+	if (scan->indexInfo)
+		pfree(scan->indexInfo);
+
 	/* Release the scan data structure itself */
 	IndexScanEnd(scan);
 }
@@ -535,7 +565,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
 	/*
 	 * The AM's amgettuple proc finds the next index entry matching the scan
 	 * keys, and puts the TID into scan->xs_ctup.t_self.  It should also set
-	 * scan->xs_recheck and possibly scan->xs_itup/scan->xs_hitup, though we
+	 * scan->xs_tuple_recheck and possibly scan->xs_itup/scan->xs_hitup, though we
 	 * pay no attention to those fields here.
 	 */
 	found = scan->indexRelation->rd_amroutine->amgettuple(scan, direction);
@@ -574,7 +604,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
  * dropped in a future index_getnext_tid, index_fetch_heap or index_endscan
  * call).
  *
- * Note: caller must check scan->xs_recheck, and perform rechecking of the
+ * Note: caller must check scan->xs_tuple_recheck, and perform rechecking of the
  * scan keys if required.  We do not do that here because we don't have
  * enough information to do it efficiently in the general case.
  * ----------------
@@ -585,6 +615,7 @@ index_fetch_heap(IndexScanDesc scan)
 	ItemPointer tid = &scan->xs_ctup.t_self;
 	bool		all_dead = false;
 	bool		got_heap_tuple;
+	bool		tuple_recheck;
 
 	/* We can skip the buffer-switching logic if we're in mid-HOT chain. */
 	if (!scan->xs_continue_hot)
@@ -603,6 +634,8 @@ index_fetch_heap(IndexScanDesc scan)
 			heap_page_prune_opt(scan->heapRelation, scan->xs_cbuf);
 	}
 
+	tuple_recheck = false;
+
 	/* Obtain share-lock on the buffer so we can examine visibility */
 	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_SHARE);
 	got_heap_tuple = heap_hot_search_buffer(tid, scan->heapRelation,
@@ -610,32 +643,60 @@ index_fetch_heap(IndexScanDesc scan)
 											scan->xs_snapshot,
 											&scan->xs_ctup,
 											&all_dead,
-											!scan->xs_continue_hot);
-	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+											!scan->xs_continue_hot,
+											&tuple_recheck);
 
 	if (got_heap_tuple)
 	{
+		bool res = true;
+
+		/*
+		 * OK, we got a tuple which satisfies the snapshot, but if it's part
+		 * of a WARM chain, we must do additional checks to ensure that we
+		 * are indeed returning a correct tuple. Note that if the index AM
+		 * does not implement the amrecheck method, we skip the additional
+		 * checks, since WARM must have been disabled on such tables.
+		 */
+		if (tuple_recheck && scan->xs_itup &&
+			scan->indexRelation->rd_amroutine->amrecheck)
+		{
+			res = scan->indexRelation->rd_amroutine->amrecheck(
+						scan->indexRelation,
+						scan->indexInfo,
+						scan->xs_itup,
+						scan->heapRelation,
+						&scan->xs_ctup);
+		}
+
+		LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+
 		/*
 		 * Only in a non-MVCC snapshot can more than one member of the HOT
 		 * chain be visible.
 		 */
 		scan->xs_continue_hot = !IsMVCCSnapshot(scan->xs_snapshot);
 		pgstat_count_heap_fetch(scan->indexRelation);
-		return &scan->xs_ctup;
+
+		if (res)
+			return &scan->xs_ctup;
 	}
+	else
+	{
+		LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
 
-	/* We've reached the end of the HOT chain. */
-	scan->xs_continue_hot = false;
+		/* We've reached the end of the HOT chain. */
+		scan->xs_continue_hot = false;
 
-	/*
-	 * If we scanned a whole HOT chain and found only dead tuples, tell index
-	 * AM to kill its entry for that TID (this will take effect in the next
-	 * amgettuple call, in index_getnext_tid).  We do not do this when in
-	 * recovery because it may violate MVCC to do so.  See comments in
-	 * RelationGetIndexScan().
-	 */
-	if (!scan->xactStartedInRecovery)
-		scan->kill_prior_tuple = all_dead;
+		/*
+		 * If we scanned a whole HOT chain and found only dead tuples, tell index
+		 * AM to kill its entry for that TID (this will take effect in the next
+		 * amgettuple call, in index_getnext_tid).  We do not do this when in
+		 * recovery because it may violate MVCC to do so.  See comments in
+		 * RelationGetIndexScan().
+		 */
+		if (!scan->xactStartedInRecovery)
+			scan->kill_prior_tuple = all_dead;
+	}
 
 	return NULL;
 }
diff --git a/src/backend/access/nbtree/nbtinsert.c b/src/backend/access/nbtree/nbtinsert.c
index 6dca810..463d4bf 100644
--- a/src/backend/access/nbtree/nbtinsert.c
+++ b/src/backend/access/nbtree/nbtinsert.c
@@ -20,6 +20,7 @@
 #include "access/nbtxlog.h"
 #include "access/transam.h"
 #include "access/xloginsert.h"
+#include "catalog/index.h"
 #include "miscadmin.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
@@ -250,6 +251,10 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 	BTPageOpaque opaque;
 	Buffer		nbuf = InvalidBuffer;
 	bool		found = false;
+	Buffer		buffer;
+	HeapTupleData	heapTuple;
+	bool		recheck = false;
+	IndexInfo	*indexInfo = BuildIndexInfo(rel);
 
 	/* Assume unique until we find a duplicate */
 	*is_unique = true;
@@ -309,6 +314,8 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				curitup = (IndexTuple) PageGetItem(page, curitemid);
 				htid = curitup->t_tid;
 
+				recheck = false;
+
 				/*
 				 * If we are doing a recheck, we expect to find the tuple we
 				 * are rechecking.  It's not a duplicate, but we have to keep
@@ -326,112 +333,153 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				 * have just a single index entry for the entire chain.
 				 */
 				else if (heap_hot_search(&htid, heapRel, &SnapshotDirty,
-										 &all_dead))
+							&all_dead, &recheck, &buffer,
+							&heapTuple))
 				{
 					TransactionId xwait;
+					bool result = true;
 
 					/*
-					 * It is a duplicate. If we are only doing a partial
-					 * check, then don't bother checking if the tuple is being
-					 * updated in another transaction. Just return the fact
-					 * that it is a potential conflict and leave the full
-					 * check till later.
+					 * If the tuple was WARM updated, we may again see our own
+					 * tuple. Since WARM updates don't create new index
+					 * entries, our own tuple is only reachable via the old
+					 * index pointer.
 					 */
-					if (checkUnique == UNIQUE_CHECK_PARTIAL)
+					if (checkUnique == UNIQUE_CHECK_EXISTING &&
+							ItemPointerCompare(&htid, &itup->t_tid) == 0)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						*is_unique = false;
-						return InvalidTransactionId;
+						found = true;
+						result = false;
+						if (recheck)
+							UnlockReleaseBuffer(buffer);
 					}
-
-					/*
-					 * If this tuple is being updated by other transaction
-					 * then we have to wait for its commit/abort.
-					 */
-					xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
-						SnapshotDirty.xmin : SnapshotDirty.xmax;
-
-					if (TransactionIdIsValid(xwait))
+					else if (recheck)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						/* Tell _bt_doinsert to wait... */
-						*speculativeToken = SnapshotDirty.speculativeToken;
-						return xwait;
+						result = btrecheck(rel, indexInfo, curitup, heapRel, &heapTuple);
+						UnlockReleaseBuffer(buffer);
 					}
 
-					/*
-					 * Otherwise we have a definite conflict.  But before
-					 * complaining, look to see if the tuple we want to insert
-					 * is itself now committed dead --- if so, don't complain.
-					 * This is a waste of time in normal scenarios but we must
-					 * do it to support CREATE INDEX CONCURRENTLY.
-					 *
-					 * We must follow HOT-chains here because during
-					 * concurrent index build, we insert the root TID though
-					 * the actual tuple may be somewhere in the HOT-chain.
-					 * While following the chain we might not stop at the
-					 * exact tuple which triggered the insert, but that's OK
-					 * because if we find a live tuple anywhere in this chain,
-					 * we have a unique key conflict.  The other live tuple is
-					 * not part of this chain because it had a different index
-					 * entry.
-					 */
-					htid = itup->t_tid;
-					if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL))
-					{
-						/* Normal case --- it's still live */
-					}
-					else
+					if (result)
 					{
 						/*
-						 * It's been deleted, so no error, and no need to
-						 * continue searching
+						 * It is a duplicate. If we are only doing a partial
+						 * check, then don't bother checking if the tuple is being
+						 * updated in another transaction. Just return the fact
+						 * that it is a potential conflict and leave the full
+						 * check till later.
 						 */
-						break;
-					}
+						if (checkUnique == UNIQUE_CHECK_PARTIAL)
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							*is_unique = false;
+							return InvalidTransactionId;
+						}
 
-					/*
-					 * Check for a conflict-in as we would if we were going to
-					 * write to this page.  We aren't actually going to write,
-					 * but we want a chance to report SSI conflicts that would
-					 * otherwise be masked by this unique constraint
-					 * violation.
-					 */
-					CheckForSerializableConflictIn(rel, NULL, buf);
+						/*
+						 * If this tuple is being updated by other transaction
+						 * then we have to wait for its commit/abort.
+						 */
+						xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
+							SnapshotDirty.xmin : SnapshotDirty.xmax;
+
+						if (TransactionIdIsValid(xwait))
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							/* Tell _bt_doinsert to wait... */
+							*speculativeToken = SnapshotDirty.speculativeToken;
+							return xwait;
+						}
 
-					/*
-					 * This is a definite conflict.  Break the tuple down into
-					 * datums and report the error.  But first, make sure we
-					 * release the buffer locks we're holding ---
-					 * BuildIndexValueDescription could make catalog accesses,
-					 * which in the worst case might touch this same index and
-					 * cause deadlocks.
-					 */
-					if (nbuf != InvalidBuffer)
-						_bt_relbuf(rel, nbuf);
-					_bt_relbuf(rel, buf);
+						/*
+						 * Otherwise we have a definite conflict.  But before
+						 * complaining, look to see if the tuple we want to insert
+						 * is itself now committed dead --- if so, don't complain.
+						 * This is a waste of time in normal scenarios but we must
+						 * do it to support CREATE INDEX CONCURRENTLY.
+						 *
+						 * We must follow HOT-chains here because during
+						 * concurrent index build, we insert the root TID though
+						 * the actual tuple may be somewhere in the HOT-chain.
+						 * While following the chain we might not stop at the
+						 * exact tuple which triggered the insert, but that's OK
+						 * because if we find a live tuple anywhere in this chain,
+						 * we have a unique key conflict.  The other live tuple is
+						 * not part of this chain because it had a different index
+						 * entry.
+						 */
+						recheck = false;
+						ItemPointerCopy(&itup->t_tid, &htid);
+						if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL,
+									&recheck, &buffer, &heapTuple))
+						{
+							bool result = true;
+							if (recheck)
+							{
+								/*
+								 * Recheck if the tuple actually satisfies the
+								 * index key. Otherwise, we might be following
+								 * a wrong index pointer and mustn't entertain
+								 * this tuple.
+								 */
+								result = btrecheck(rel, indexInfo, itup, heapRel, &heapTuple);
+								UnlockReleaseBuffer(buffer);
+							}
+							if (!result)
+								break;
+							/* Normal case --- it's still live */
+						}
+						else
+						{
+							/*
+							 * It's been deleted, so no error, and no need to
+							 * continue searching.
+							 */
+							break;
+						}
 
-					{
-						Datum		values[INDEX_MAX_KEYS];
-						bool		isnull[INDEX_MAX_KEYS];
-						char	   *key_desc;
-
-						index_deform_tuple(itup, RelationGetDescr(rel),
-										   values, isnull);
-
-						key_desc = BuildIndexValueDescription(rel, values,
-															  isnull);
-
-						ereport(ERROR,
-								(errcode(ERRCODE_UNIQUE_VIOLATION),
-								 errmsg("duplicate key value violates unique constraint \"%s\"",
-										RelationGetRelationName(rel)),
-							   key_desc ? errdetail("Key %s already exists.",
-													key_desc) : 0,
-								 errtableconstraint(heapRel,
-											 RelationGetRelationName(rel))));
+						/*
+						 * Check for a conflict-in as we would if we were going to
+						 * write to this page.  We aren't actually going to write,
+						 * but we want a chance to report SSI conflicts that would
+						 * otherwise be masked by this unique constraint
+						 * violation.
+						 */
+						CheckForSerializableConflictIn(rel, NULL, buf);
+
+						/*
+						 * This is a definite conflict.  Break the tuple down into
+						 * datums and report the error.  But first, make sure we
+						 * release the buffer locks we're holding ---
+						 * BuildIndexValueDescription could make catalog accesses,
+						 * which in the worst case might touch this same index and
+						 * cause deadlocks.
+						 */
+						if (nbuf != InvalidBuffer)
+							_bt_relbuf(rel, nbuf);
+						_bt_relbuf(rel, buf);
+
+						{
+							Datum		values[INDEX_MAX_KEYS];
+							bool		isnull[INDEX_MAX_KEYS];
+							char	   *key_desc;
+
+							index_deform_tuple(itup, RelationGetDescr(rel),
+									values, isnull);
+
+							key_desc = BuildIndexValueDescription(rel, values,
+									isnull);
+
+							ereport(ERROR,
+									(errcode(ERRCODE_UNIQUE_VIOLATION),
+									 errmsg("duplicate key value violates unique constraint \"%s\"",
+										 RelationGetRelationName(rel)),
+									 key_desc ? errdetail("Key %s already exists.",
+										 key_desc) : 0,
+									 errtableconstraint(heapRel,
+										 RelationGetRelationName(rel))));
+						}
 					}
 				}
 				else if (all_dead)
diff --git a/src/backend/access/nbtree/nbtpage.c b/src/backend/access/nbtree/nbtpage.c
index f815fd4..061c8d4 100644
--- a/src/backend/access/nbtree/nbtpage.c
+++ b/src/backend/access/nbtree/nbtpage.c
@@ -766,29 +766,20 @@ _bt_page_recyclable(Page page)
 }
 
 /*
- * Delete item(s) from a btree page during VACUUM.
+ * Delete item(s) and clear WARM item(s) on a btree page during VACUUM.
  *
  * This must only be used for deleting leaf items.  Deleting an item on a
  * non-leaf page has to be done as part of an atomic action that includes
- * deleting the page it points to.
+ * deleting the page it points to. We don't ever clear pointers on a non-leaf
+ * page.
  *
  * This routine assumes that the caller has pinned and locked the buffer.
  * Also, the given itemnos *must* appear in increasing order in the array.
- *
- * We record VACUUMs and b-tree deletes differently in WAL. InHotStandby
- * we need to be able to pin all of the blocks in the btree in physical
- * order when replaying the effects of a VACUUM, just as we do for the
- * original VACUUM itself. lastBlockVacuumed allows us to tell whether an
- * intermediate range of blocks has had no changes at all by VACUUM,
- * and so must be scanned anyway during replay. We always write a WAL record
- * for the last block in the index, whether or not it contained any items
- * to be removed. This allows us to scan right up to end of index to
- * ensure correct locking.
  */
 void
-_bt_delitems_vacuum(Relation rel, Buffer buf,
-					OffsetNumber *itemnos, int nitems,
-					BlockNumber lastBlockVacuumed)
+_bt_handleitems_vacuum(Relation rel, Buffer buf,
+					OffsetNumber *delitemnos, int ndelitems,
+					OffsetNumber *clearitemnos, int nclearitems)
 {
 	Page		page = BufferGetPage(buf);
 	BTPageOpaque opaque;
@@ -796,9 +787,20 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 	/* No ereport(ERROR) until changes are logged */
 	START_CRIT_SECTION();
 
+	/*
+	 * Clear the WARM pointers.
+	 *
+	 * We must do this before dealing with the dead items because
+	 * PageIndexMultiDelete may move items around to compactify the array and
+	 * hence offnums recorded earlier won't make any sense after
+	 * PageIndexMultiDelete is called.
+	 */
+	if (nclearitems > 0)
+		_bt_clear_items(page, clearitemnos, nclearitems);
+
 	/* Fix the page */
-	if (nitems > 0)
-		PageIndexMultiDelete(page, itemnos, nitems);
+	if (ndelitems > 0)
+		PageIndexMultiDelete(page, delitemnos, ndelitems);
 
 	/*
 	 * We can clear the vacuum cycle ID since this page has certainly been
@@ -824,7 +826,8 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 		XLogRecPtr	recptr;
 		xl_btree_vacuum xlrec_vacuum;
 
-		xlrec_vacuum.lastBlockVacuumed = lastBlockVacuumed;
+		xlrec_vacuum.ndelitems = ndelitems;
+		xlrec_vacuum.nclearitems = nclearitems;
 
 		XLogBeginInsert();
 		XLogRegisterBuffer(0, buf, REGBUF_STANDARD);
@@ -835,8 +838,11 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 		 * is.  When XLogInsert stores the whole buffer, the offsets array
 		 * need not be stored too.
 		 */
-		if (nitems > 0)
-			XLogRegisterBufData(0, (char *) itemnos, nitems * sizeof(OffsetNumber));
+		if (ndelitems > 0)
+			XLogRegisterBufData(0, (char *) delitemnos, ndelitems * sizeof(OffsetNumber));
+
+		if (nclearitems > 0)
+			XLogRegisterBufData(0, (char *) clearitemnos, nclearitems * sizeof(OffsetNumber));
 
 		recptr = XLogInsert(RM_BTREE_ID, XLOG_BTREE_VACUUM);
 
@@ -1882,3 +1888,13 @@ _bt_unlink_halfdead_page(Relation rel, Buffer leafbuf, bool *rightsib_empty)
 
 	return true;
 }
+
+/*
+ * Currently just a wrapper around PageIndexClearWarmTuples, but in theory
+ * each index may have its own way to handle WARM tuples.
+ */
+void
+_bt_clear_items(Page page, OffsetNumber *clearitemnos, uint16 nclearitems)
+{
+	PageIndexClearWarmTuples(page, clearitemnos, nclearitems);
+}
diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c
index 775f2ff..6d558af 100644
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -146,6 +146,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->ambuild = btbuild;
 	amroutine->ambuildempty = btbuildempty;
 	amroutine->aminsert = btinsert;
+	amroutine->amwarminsert = btwarminsert;
 	amroutine->ambulkdelete = btbulkdelete;
 	amroutine->amvacuumcleanup = btvacuumcleanup;
 	amroutine->amcanreturn = btcanreturn;
@@ -163,6 +164,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = btestimateparallelscan;
 	amroutine->aminitparallelscan = btinitparallelscan;
 	amroutine->amparallelrescan = btparallelrescan;
+	amroutine->amrecheck = btrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -315,11 +317,12 @@ btbuildempty(Relation index)
  *		Descend the tree recursively, find the appropriate location for our
  *		new tuple, and put it there.
  */
-bool
-btinsert(Relation rel, Datum *values, bool *isnull,
+static bool
+btinsert_internal(Relation rel, Datum *values, bool *isnull,
 		 ItemPointer ht_ctid, Relation heapRel,
 		 IndexUniqueCheck checkUnique,
-		 IndexInfo *indexInfo)
+		 IndexInfo *indexInfo,
+		 bool warm_update)
 {
 	bool		result;
 	IndexTuple	itup;
@@ -328,6 +331,11 @@ btinsert(Relation rel, Datum *values, bool *isnull,
 	itup = index_form_tuple(RelationGetDescr(rel), values, isnull);
 	itup->t_tid = *ht_ctid;
 
+	if (warm_update)
+		ItemPointerSetFlags(&itup->t_tid, BTREE_INDEX_WARM_POINTER);
+	else
+		ItemPointerClearFlags(&itup->t_tid);
+
 	result = _bt_doinsert(rel, itup, checkUnique, heapRel);
 
 	pfree(itup);
@@ -335,6 +343,26 @@ btinsert(Relation rel, Datum *values, bool *isnull,
 	return result;
 }
 
+bool
+btinsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 IndexInfo *indexInfo)
+{
+	return btinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, false);
+}
+
+bool
+btwarminsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 IndexInfo *indexInfo)
+{
+	return btinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, true);
+}
+
 /*
  *	btgettuple() -- Get the next tuple in the scan.
  */
@@ -1103,7 +1131,7 @@ btvacuumscan(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 								 RBM_NORMAL, info->strategy);
 		LockBufferForCleanup(buf);
 		_bt_checkpage(rel, buf);
-		_bt_delitems_vacuum(rel, buf, NULL, 0, vstate.lastBlockVacuumed);
+		_bt_handleitems_vacuum(rel, buf, NULL, 0, NULL, 0);
 		_bt_relbuf(rel, buf);
 	}
 
@@ -1201,6 +1229,8 @@ restart:
 	{
 		OffsetNumber deletable[MaxOffsetNumber];
 		int			ndeletable;
+		OffsetNumber clearwarm[MaxOffsetNumber];
+		int			nclearwarm;
 		OffsetNumber offnum,
 					minoff,
 					maxoff;
@@ -1239,7 +1269,7 @@ restart:
 		 * Scan over all items to see which ones need deleted according to the
 		 * callback function.
 		 */
-		ndeletable = 0;
+		ndeletable = nclearwarm = 0;
 		minoff = P_FIRSTDATAKEY(opaque);
 		maxoff = PageGetMaxOffsetNumber(page);
 		if (callback)
@@ -1250,6 +1280,9 @@ restart:
 			{
 				IndexTuple	itup;
 				ItemPointer htup;
+				int			flags;
+				bool		is_warm = false;
+				IndexBulkDeleteCallbackResult	result;
 
 				itup = (IndexTuple) PageGetItem(page,
 												PageGetItemId(page, offnum));
@@ -1276,16 +1309,36 @@ restart:
 				 * applies to *any* type of index that marks index tuples as
 				 * killed.
 				 */
-				if (callback(htup, callback_state))
+				flags = ItemPointerGetFlags(&itup->t_tid);
+				is_warm = ((flags & BTREE_INDEX_WARM_POINTER) != 0);
+
+				if (is_warm)
+					stats->num_warm_pointers++;
+				else
+					stats->num_clear_pointers++;
+
+				result = callback(htup, is_warm, callback_state);
+				if (result == IBDCR_DELETE)
+				{
+					if (is_warm)
+						stats->warm_pointers_removed++;
+					else
+						stats->clear_pointers_removed++;
 					deletable[ndeletable++] = offnum;
+				}
+				else if (result == IBDCR_CLEAR_WARM)
+				{
+					clearwarm[nclearwarm++] = offnum;
+				}
 			}
 		}
 
 		/*
-		 * Apply any needed deletes.  We issue just one _bt_delitems_vacuum()
-		 * call per page, so as to minimize WAL traffic.
+		 * Apply any needed deletes and clearing.  We issue just one
+		 * _bt_handleitems_vacuum() call per page, so as to minimize WAL
+		 * traffic.
 		 */
-		if (ndeletable > 0)
+		if (ndeletable > 0 || nclearwarm > 0)
 		{
 			/*
 			 * Notice that the issued XLOG_BTREE_VACUUM WAL record includes
@@ -1301,8 +1354,8 @@ restart:
 			 * doesn't seem worth the amount of bookkeeping it'd take to avoid
 			 * that.
 			 */
-			_bt_delitems_vacuum(rel, buf, deletable, ndeletable,
-								vstate->lastBlockVacuumed);
+			_bt_handleitems_vacuum(rel, buf, deletable, ndeletable,
+								clearwarm, nclearwarm);
 
 			/*
 			 * Remember highest leaf page number we've issued a
@@ -1312,6 +1365,7 @@ restart:
 				vstate->lastBlockVacuumed = blkno;
 
 			stats->tuples_removed += ndeletable;
+			stats->pointers_cleared += nclearwarm;
 			/* must recompute maxoff */
 			maxoff = PageGetMaxOffsetNumber(page);
 		}
diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c
index 5b259a3..2765809 100644
--- a/src/backend/access/nbtree/nbtutils.c
+++ b/src/backend/access/nbtree/nbtutils.c
@@ -20,11 +20,14 @@
 #include "access/nbtree.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "access/tuptoaster.h"
+#include "catalog/index.h"
 #include "miscadmin.h"
 #include "utils/array.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 
 typedef struct BTSortArrayContext
@@ -2069,3 +2072,93 @@ btproperty(Oid index_oid, int attno,
 			return false;		/* punt to generic code */
 	}
 }
+
+/*
+ * Check if the index tuple's key matches the one computed from the given heap
+ * tuple's attribute
+ */
+bool
+btrecheck(Relation indexRel, IndexInfo *indexInfo, IndexTuple indexTuple1,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	bool		isavail[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+	IndexTuple	indexTuple2;
+
+	/*
+	 * Get the index values, except for expression attributes. Since WARM is
+	 * not used when a column used by expressions in an index is modified, we
+	 * can safely assume that those index attributes are never changed by a
+	 * WARM update.
+	 *
+	 * We cannot use FormIndexDatum here because that requires access to
+	 * executor state and we don't have that here.
+	 */
+	FormIndexPlainDatum(indexInfo, heapRel, heapTuple, values, isnull, isavail);
+
+	/*
+	 * Form an index tuple using the heap values first. This lets us fetch
+	 * index attributes from both the current index tuple and the one formed
+	 * from the heap values, and then do a binary comparison using
+	 * datumIsEqual().
+	 *
+	 * This takes care of doing the right comparison for compressed index
+	 * attributes (we just compare the compressed versions in both tuples) and
+	 * also ensure that we correctly detoast heap values, if need be.
+	 */
+	indexTuple2 = index_form_tuple(RelationGetDescr(indexRel), values, isnull);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue1;
+		bool	indxisnull1;
+		Datum	indxvalue2;
+		bool	indxisnull2;
+
+		/* No need to compare if the attribute value is not available */
+		if (!isavail[i - 1])
+			continue;
+
+		indxvalue1 = index_getattr(indexTuple1, i, indexRel->rd_att,
+								   &indxisnull1);
+		indxvalue2 = index_getattr(indexTuple2, i, indexRel->rd_att,
+								   &indxisnull2);
+
+		/*
+		 * If both are NULL, then they are equal
+		 */
+		if (indxisnull1 && indxisnull2)
+			continue;
+
+		/*
+		 * If just one is NULL, then they are not equal
+		 */
+		if (indxisnull1 || indxisnull2)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now just do a raw memory comparison. If the index tuple was formed
+		 * using this heap tuple, the computed index values must match.
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(indxvalue1, indxvalue2, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	pfree(indexTuple2);
+
+	return equal;
+}
diff --git a/src/backend/access/nbtree/nbtxlog.c b/src/backend/access/nbtree/nbtxlog.c
index ac60db0..ef24738 100644
--- a/src/backend/access/nbtree/nbtxlog.c
+++ b/src/backend/access/nbtree/nbtxlog.c
@@ -390,8 +390,8 @@ btree_xlog_vacuum(XLogReaderState *record)
 	Buffer		buffer;
 	Page		page;
 	BTPageOpaque opaque;
-#ifdef UNUSED
 	xl_btree_vacuum *xlrec = (xl_btree_vacuum *) XLogRecGetData(record);
+#ifdef UNUSED
 
 	/*
 	 * This section of code is thought to be no longer needed, after analysis
@@ -482,19 +482,30 @@ btree_xlog_vacuum(XLogReaderState *record)
 
 		if (len > 0)
 		{
-			OffsetNumber *unused;
-			OffsetNumber *unend;
+			OffsetNumber *offnums = (OffsetNumber *) ptr;
 
-			unused = (OffsetNumber *) ptr;
-			unend = (OffsetNumber *) ((char *) ptr + len);
+			/*
+			 * Clear the WARM pointers.
+			 *
+			 * We must do this before dealing with the dead items because
+			 * PageIndexMultiDelete may move items around to compactify the
+			 * array and hence offnums recorded earlier won't make any sense
+			 * after PageIndexMultiDelete is called.
+			 */
+			if (xlrec->nclearitems > 0)
+				_bt_clear_items(page, offnums + xlrec->ndelitems,
+						xlrec->nclearitems);
 
-			if ((unend - unused) > 0)
-				PageIndexMultiDelete(page, unused, unend - unused);
+			/*
+			 * And handle the deleted items too
+			 */
+			if (xlrec->ndelitems > 0)
+				PageIndexMultiDelete(page, offnums, xlrec->ndelitems);
 		}
 
 		/*
 		 * Mark the page as not containing any LP_DEAD items --- see comments
-		 * in _bt_delitems_vacuum().
+		 * in _bt_handleitems_vacuum().
 		 */
 		opaque = (BTPageOpaque) PageGetSpecialPointer(page);
 		opaque->btpo_flags &= ~BTP_HAS_GARBAGE;
diff --git a/src/backend/access/rmgrdesc/heapdesc.c b/src/backend/access/rmgrdesc/heapdesc.c
index 44d2d63..d373e61 100644
--- a/src/backend/access/rmgrdesc/heapdesc.c
+++ b/src/backend/access/rmgrdesc/heapdesc.c
@@ -44,6 +44,12 @@ heap_desc(StringInfo buf, XLogReaderState *record)
 
 		appendStringInfo(buf, "off %u", xlrec->offnum);
 	}
+	else if (info == XLOG_HEAP_MULTI_INSERT)
+	{
+		xl_heap_multi_insert *xlrec = (xl_heap_multi_insert *) rec;
+
+		appendStringInfo(buf, "%d tuples", xlrec->ntuples);
+	}
 	else if (info == XLOG_HEAP_DELETE)
 	{
 		xl_heap_delete *xlrec = (xl_heap_delete *) rec;
@@ -102,7 +108,7 @@ heap2_desc(StringInfo buf, XLogReaderState *record)
 	char	   *rec = XLogRecGetData(record);
 	uint8		info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
 
-	info &= XLOG_HEAP_OPMASK;
+	info &= XLOG_HEAP2_OPMASK;
 	if (info == XLOG_HEAP2_CLEAN)
 	{
 		xl_heap_clean *xlrec = (xl_heap_clean *) rec;
@@ -129,12 +135,6 @@ heap2_desc(StringInfo buf, XLogReaderState *record)
 		appendStringInfo(buf, "cutoff xid %u flags %d",
 						 xlrec->cutoff_xid, xlrec->flags);
 	}
-	else if (info == XLOG_HEAP2_MULTI_INSERT)
-	{
-		xl_heap_multi_insert *xlrec = (xl_heap_multi_insert *) rec;
-
-		appendStringInfo(buf, "%d tuples", xlrec->ntuples);
-	}
 	else if (info == XLOG_HEAP2_LOCK_UPDATED)
 	{
 		xl_heap_lock_updated *xlrec = (xl_heap_lock_updated *) rec;
@@ -171,6 +171,12 @@ heap_identify(uint8 info)
 		case XLOG_HEAP_INSERT | XLOG_HEAP_INIT_PAGE:
 			id = "INSERT+INIT";
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			id = "MULTI_INSERT";
+			break;
+		case XLOG_HEAP_MULTI_INSERT | XLOG_HEAP_INIT_PAGE:
+			id = "MULTI_INSERT+INIT";
+			break;
 		case XLOG_HEAP_DELETE:
 			id = "DELETE";
 			break;
@@ -219,12 +225,6 @@ heap2_identify(uint8 info)
 		case XLOG_HEAP2_VISIBLE:
 			id = "VISIBLE";
 			break;
-		case XLOG_HEAP2_MULTI_INSERT:
-			id = "MULTI_INSERT";
-			break;
-		case XLOG_HEAP2_MULTI_INSERT | XLOG_HEAP_INIT_PAGE:
-			id = "MULTI_INSERT+INIT";
-			break;
 		case XLOG_HEAP2_LOCK_UPDATED:
 			id = "LOCK_UPDATED";
 			break;
diff --git a/src/backend/access/rmgrdesc/nbtdesc.c b/src/backend/access/rmgrdesc/nbtdesc.c
index fbde9d6..6b2c5d6 100644
--- a/src/backend/access/rmgrdesc/nbtdesc.c
+++ b/src/backend/access/rmgrdesc/nbtdesc.c
@@ -48,8 +48,8 @@ btree_desc(StringInfo buf, XLogReaderState *record)
 			{
 				xl_btree_vacuum *xlrec = (xl_btree_vacuum *) rec;
 
-				appendStringInfo(buf, "lastBlockVacuumed %u",
-								 xlrec->lastBlockVacuumed);
+				appendStringInfo(buf, "ndelitems %u, nclearitems %u",
+								 xlrec->ndelitems, xlrec->nclearitems);
 				break;
 			}
 		case XLOG_BTREE_DELETE:
diff --git a/src/backend/access/spgist/spgutils.c b/src/backend/access/spgist/spgutils.c
index e57ac49..59ef7f3 100644
--- a/src/backend/access/spgist/spgutils.c
+++ b/src/backend/access/spgist/spgutils.c
@@ -72,6 +72,7 @@ spghandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/spgist/spgvacuum.c b/src/backend/access/spgist/spgvacuum.c
index cce9b3f..711d351 100644
--- a/src/backend/access/spgist/spgvacuum.c
+++ b/src/backend/access/spgist/spgvacuum.c
@@ -155,7 +155,8 @@ vacuumLeafPage(spgBulkDeleteState *bds, Relation index, Buffer buffer,
 		{
 			Assert(ItemPointerIsValid(&lt->heapPtr));
 
-			if (bds->callback(&lt->heapPtr, bds->callback_state))
+			if (bds->callback(&lt->heapPtr, false, bds->callback_state) ==
+					IBDCR_DELETE)
 			{
 				bds->stats->tuples_removed += 1;
 				deletable[i] = true;
@@ -425,7 +426,8 @@ vacuumLeafRoot(spgBulkDeleteState *bds, Relation index, Buffer buffer)
 		{
 			Assert(ItemPointerIsValid(&lt->heapPtr));
 
-			if (bds->callback(&lt->heapPtr, bds->callback_state))
+			if (bds->callback(&lt->heapPtr, false, bds->callback_state) ==
+					IBDCR_DELETE)
 			{
 				bds->stats->tuples_removed += 1;
 				toDelete[xlrec.nDelete] = i;
@@ -902,10 +904,10 @@ spgbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 }
 
 /* Dummy callback to delete no tuples during spgvacuumcleanup */
-static bool
-dummy_callback(ItemPointer itemptr, void *state)
+static IndexBulkDeleteCallbackResult
+dummy_callback(ItemPointer itemptr, bool is_warm, void *state)
 {
-	return false;
+	return IBDCR_KEEP;
 }
 
 /*
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 1eb163f..2c27661 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -54,6 +54,7 @@
 #include "nodes/makefuncs.h"
 #include "nodes/nodeFuncs.h"
 #include "optimizer/clauses.h"
+#include "optimizer/var.h"
 #include "parser/parser.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
@@ -114,7 +115,7 @@ static void IndexCheckExclusion(Relation heapRelation,
 					IndexInfo *indexInfo);
 static inline int64 itemptr_encode(ItemPointer itemptr);
 static inline void itemptr_decode(ItemPointer itemptr, int64 encoded);
-static bool validate_index_callback(ItemPointer itemptr, void *opaque);
+static IndexBulkDeleteCallbackResult validate_index_callback(ItemPointer itemptr, bool is_warm, void *opaque);
 static void validate_index_heapscan(Relation heapRelation,
 						Relation indexRelation,
 						IndexInfo *indexInfo,
@@ -1691,6 +1692,20 @@ BuildIndexInfo(Relation index)
 	ii->ii_AmCache = NULL;
 	ii->ii_Context = CurrentMemoryContext;
 
+	/* build a bitmap of all table attributes referred by this index */
+	for (i = 0; i < ii->ii_NumIndexAttrs; i++)
+	{
+		AttrNumber attr = ii->ii_KeyAttrNumbers[i];
+		ii->ii_indxattrs = bms_add_member(ii->ii_indxattrs, attr -
+				FirstLowInvalidHeapAttributeNumber);
+	}
+
+	/* Collect all attributes used in expressions, too */
+	pull_varattnos((Node *) ii->ii_Expressions, 1, &ii->ii_indxattrs);
+
+	/* Collect all attributes in the index predicate, too */
+	pull_varattnos((Node *) ii->ii_Predicate, 1, &ii->ii_indxattrs);
+
 	return ii;
 }
 
@@ -1815,6 +1830,51 @@ FormIndexDatum(IndexInfo *indexInfo,
 		elog(ERROR, "wrong number of index expressions");
 }
 
+/*
+ * This is the same as FormIndexDatum, except that we don't compute any
+ * expression attributes; hence it can be used when executor interfaces are
+ * not available. If the i'th attribute is available, isavail[i] is set to
+ * true, else to false. The caller must always check whether an attribute
+ * value is available before trying to do anything useful with it.
+ */
+void
+FormIndexPlainDatum(IndexInfo *indexInfo,
+			   Relation heapRel,
+			   HeapTuple heapTup,
+			   Datum *values,
+			   bool *isnull,
+			   bool *isavail)
+{
+	int			i;
+
+	for (i = 0; i < indexInfo->ii_NumIndexAttrs; i++)
+	{
+		int			keycol = indexInfo->ii_KeyAttrNumbers[i];
+		Datum		iDatum;
+		bool		isNull;
+
+		if (keycol != 0)
+		{
+			/*
+			 * Plain index column; get the value we need directly from the
+			 * heap tuple.
+			 */
+			iDatum = heap_getattr(heapTup, keycol, RelationGetDescr(heapRel), &isNull);
+			values[i] = iDatum;
+			isnull[i] = isNull;
+			isavail[i] = true;
+		}
+		else
+		{
+			/*
+			 * This is an expression attribute which we can't compute here,
+			 * so just inform the caller about it.
+			 */
+			isavail[i] = false;
+			isnull[i] = true;
+		}
+	}
+}
 
 /*
  * index_update_stats --- update pg_class entry after CREATE INDEX or REINDEX
@@ -2929,15 +2989,15 @@ itemptr_decode(ItemPointer itemptr, int64 encoded)
 /*
  * validate_index_callback - bulkdelete callback to collect the index TIDs
  */
-static bool
-validate_index_callback(ItemPointer itemptr, void *opaque)
+static IndexBulkDeleteCallbackResult
+validate_index_callback(ItemPointer itemptr, bool is_warm, void *opaque)
 {
 	v_i_state  *state = (v_i_state *) opaque;
 	int64		encoded = itemptr_encode(itemptr);
 
 	tuplesort_putdatum(state->tuplesort, Int64GetDatum(encoded), false);
 	state->itups += 1;
-	return false;				/* never actually delete anything */
+	return IBDCR_KEEP;				/* never actually delete anything */
 }
 
 /*
@@ -3156,7 +3216,8 @@ validate_index_heapscan(Relation heapRelation,
 						 heapRelation,
 						 indexInfo->ii_Unique ?
 						 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-						 indexInfo);
+						 indexInfo,
+						 false);
 
 			state->tups_inserted += 1;
 		}
diff --git a/src/backend/catalog/indexing.c b/src/backend/catalog/indexing.c
index abc344a..6392f33 100644
--- a/src/backend/catalog/indexing.c
+++ b/src/backend/catalog/indexing.c
@@ -66,10 +66,15 @@ CatalogCloseIndexes(CatalogIndexState indstate)
  *
  * This should be called for each inserted or updated catalog tuple.
  *
+ * If the tuple was WARM updated, modified_attrs contains the set of columns
+ * changed by the update. We must not insert new index entries for indexes
+ * which do not refer to any of the modified columns.
+ *
  * This is effectively a cut-down version of ExecInsertIndexTuples.
  */
 static void
-CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
+CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple,
+		Bitmapset *modified_attrs, bool warm_update)
 {
 	int			i;
 	int			numIndexes;
@@ -79,12 +84,28 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 	IndexInfo **indexInfoArray;
 	Datum		values[INDEX_MAX_KEYS];
 	bool		isnull[INDEX_MAX_KEYS];
+	ItemPointerData root_tid;
 
-	/* HOT update does not require index inserts */
-	if (HeapTupleIsHeapOnly(heapTuple))
+	/*
+	 * A HOT update does not require index inserts, but a WARM update may
+	 * require them for some indexes.
+	 */
+	if (HeapTupleIsHeapOnly(heapTuple) && !warm_update)
 		return;
 
 	/*
+	 * If we've done a WARM update, then we must index the TID of the root line
+	 * pointer and not the actual TID of the new tuple.
+	 */
+	if (warm_update)
+		ItemPointerSet(&root_tid,
+				ItemPointerGetBlockNumber(&(heapTuple->t_self)),
+				HeapTupleHeaderGetRootOffset(heapTuple->t_data));
+	else
+		ItemPointerCopy(&heapTuple->t_self, &root_tid);
+
+
+	/*
 	 * Get information from the state structure.  Fall out if nothing to do.
 	 */
 	numIndexes = indstate->ri_NumIndices;
@@ -112,6 +133,17 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 			continue;
 
 		/*
+		 * If we've done a WARM update, then we must not insert a new index
+		 * tuple if none of the index keys have changed. This is not just an
+		 * optimization, but a requirement for WARM to work correctly.
+		 */
+		if (warm_update)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
+		/*
 		 * Expressional and partial indexes on system catalogs are not
 		 * supported, nor exclusion constraints, nor deferred uniqueness
 		 */
@@ -136,11 +168,12 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 		index_insert(relationDescs[i],	/* index relation */
 					 values,	/* array of index Datums */
 					 isnull,	/* is-null flags */
-					 &(heapTuple->t_self),		/* tid of heap tuple */
+					 &root_tid,
 					 heapRelation,
 					 relationDescs[i]->rd_index->indisunique ?
 					 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-					 indexInfo);
+					 indexInfo,
+					 warm_update);
 	}
 
 	ExecDropSingleTupleTableSlot(slot);
@@ -168,7 +201,7 @@ CatalogTupleInsert(Relation heapRel, HeapTuple tup)
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 	CatalogCloseIndexes(indstate);
 
 	return oid;
@@ -190,7 +223,7 @@ CatalogTupleInsertWithInfo(Relation heapRel, HeapTuple tup,
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 
 	return oid;
 }
@@ -210,12 +243,14 @@ void
 CatalogTupleUpdate(Relation heapRel, ItemPointer otid, HeapTuple tup)
 {
 	CatalogIndexState indstate;
+	bool	warm_update;
+	Bitmapset	*modified_attrs;
 
 	indstate = CatalogOpenIndexes(heapRel);
 
-	simple_heap_update(heapRel, otid, tup);
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, modified_attrs, warm_update);
 	CatalogCloseIndexes(indstate);
 }
 
@@ -231,9 +266,12 @@ void
 CatalogTupleUpdateWithInfo(Relation heapRel, ItemPointer otid, HeapTuple tup,
 						   CatalogIndexState indstate)
 {
-	simple_heap_update(heapRel, otid, tup);
+	Bitmapset  *modified_attrs;
+	bool		warm_update;
+
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, modified_attrs, warm_update);
 }
 
 /*
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index d8b762e..ca44e03 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -530,6 +530,7 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_tuples_deleted(C.oid) AS n_tup_del,
             pg_stat_get_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_tuples_warm_updated(C.oid) AS n_tup_warm_upd,
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
@@ -560,7 +561,8 @@ CREATE VIEW pg_stat_xact_all_tables AS
             pg_stat_get_xact_tuples_inserted(C.oid) AS n_tup_ins,
             pg_stat_get_xact_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_xact_tuples_deleted(C.oid) AS n_tup_del,
-            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd
+            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_xact_tuples_warm_updated(C.oid) AS n_tup_warm_upd
     FROM pg_class C LEFT JOIN
          pg_index I ON C.oid = I.indrelid
          LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
diff --git a/src/backend/commands/constraint.c b/src/backend/commands/constraint.c
index e2544e5..330b661 100644
--- a/src/backend/commands/constraint.c
+++ b/src/backend/commands/constraint.c
@@ -40,6 +40,7 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	TriggerData *trigdata = castNode(TriggerData, fcinfo->context);
 	const char *funcname = "unique_key_recheck";
 	HeapTuple	new_row;
+	HeapTupleData heapTuple;
 	ItemPointerData tmptid;
 	Relation	indexRel;
 	IndexInfo  *indexInfo;
@@ -102,7 +103,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	 * removed.
 	 */
 	tmptid = new_row->t_self;
-	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL))
+	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL,
+				NULL, NULL, &heapTuple))
 	{
 		/*
 		 * All rows in the HOT chain are dead, so skip the check.
@@ -166,7 +168,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 		 */
 		index_insert(indexRel, values, isnull, &(new_row->t_self),
 					 trigdata->tg_relation, UNIQUE_CHECK_EXISTING,
-					 indexInfo);
+					 indexInfo,
+					 false);
 	}
 	else
 	{
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index ab59be8..22c272c 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -2688,6 +2688,8 @@ CopyFrom(CopyState cstate)
 					if (resultRelInfo->ri_NumIndices > 0)
 						recheckIndexes = ExecInsertIndexTuples(slot,
 															&(tuple->t_self),
+															&(tuple->t_self),
+															NULL,
 															   estate,
 															   false,
 															   NULL,
@@ -2842,6 +2844,7 @@ CopyFromInsertBatch(CopyState cstate, EState *estate, CommandId mycid,
 			ExecStoreTuple(bufferedTuples[i], myslot, InvalidBuffer, false);
 			recheckIndexes =
 				ExecInsertIndexTuples(myslot, &(bufferedTuples[i]->t_self),
+									  &(bufferedTuples[i]->t_self), NULL,
 									  estate, false, NULL, NIL);
 			ExecARInsertTriggers(estate, resultRelInfo,
 								 bufferedTuples[i],
diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c
index 4861799..b62b0e9 100644
--- a/src/backend/commands/indexcmds.c
+++ b/src/backend/commands/indexcmds.c
@@ -694,7 +694,14 @@ DefineIndex(Oid relationId,
 	 * visible to other transactions before we start to build the index. That
 	 * will prevent them from making incompatible HOT updates.  The new index
 	 * will be marked not indisready and not indisvalid, so that no one else
-	 * tries to either insert into it or use it for queries.
+	 * tries to either insert into it or use it for queries. In addition,
+	 * WARM updates will be disallowed if an update modifies one of the
+	 * columns used by this new index. This is necessary to ensure that we
+	 * don't create WARM tuples which do not have a corresponding entry in
+	 * this index. Note that during the second phase we will index only
+	 * those heap tuples whose root line pointer is not already in the index;
+	 * hence it's important that all tuples in a given chain have the same
+	 * value for any indexed column (including the new index's columns).
 	 *
 	 * We must commit our current transaction so that the index becomes
 	 * visible; then start another.  Note that all the data structures we just
@@ -742,7 +749,10 @@ DefineIndex(Oid relationId,
 	 * marked as "not-ready-for-inserts".  The index is consulted while
 	 * deciding HOT-safety though.  This arrangement ensures that no new HOT
 	 * chains can be created where the new tuple and the old tuple in the
-	 * chain have different index keys.
+	 * chain have different index keys. Also, the new index is consulted when
+	 * deciding whether a WARM update is possible, and a WARM update is not
+	 * done if a column used by this index is being updated. This ensures that
+	 * we don't create WARM tuples which are not indexed by this index.
 	 *
 	 * We now take a new snapshot, and build the index using all tuples that
 	 * are visible in this snapshot.  We can be sure that any HOT updates to
@@ -777,7 +787,8 @@ DefineIndex(Oid relationId,
 	/*
 	 * Update the pg_index row to mark the index as ready for inserts. Once we
 	 * commit this transaction, any new transactions that open the table must
-	 * insert new entries into the index for insertions and non-HOT updates.
+	 * insert new entries into the index for insertions and non-HOT updates,
+	 * or for WARM updates where this index needs a new entry.
 	 */
 	index_set_state_flags(indexRelationId, INDEX_CREATE_SET_READY);
 
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 5b43a66..f52490f 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -104,6 +104,39 @@
  */
 #define PREFETCH_SIZE			((BlockNumber) 32)
 
+/*
+ * Structure to track WARM chains that can be converted into HOT chains during
+ * this run.
+ *
+ * To reduce the space requirement, we're using bitfields. But the way things
+ * are laid out, we're still wasting one byte per candidate chain.
+ */
+typedef struct LVWarmChain
+{
+	ItemPointerData	chain_tid;			/* root of the chain */
+
+	/*
+	 * 1 - if the chain contains only post-warm tuples
+	 * 0 - if the chain contains only pre-warm tuples
+	 */
+	uint8			is_postwarm_chain:2;
+
+	/* 1 - if this chain must remain a WARM chain */
+	uint8			keep_warm_chain:2;
+
+	/*
+	 * Number of CLEAR pointers to this root TID found so far - must never be
+	 * more than 2.
+	 */
+	uint8			num_clear_pointers:2;
+
+	/*
+	 * Number of WARM pointers to this root TID found so far - must never be
+	 * more than 1.
+	 */
+	uint8			num_warm_pointers:2;
+} LVWarmChain;
+
 typedef struct LVRelStats
 {
 	/* hasindex = true means two-pass strategy; false means one-pass */
@@ -122,6 +155,14 @@ typedef struct LVRelStats
 	BlockNumber pages_removed;
 	double		tuples_deleted;
 	BlockNumber nonempty_pages; /* actually, last nonempty page + 1 */
+
+	/* List of candidate WARM chains that can be converted into HOT chains */
+	/* NB: this list is ordered by TID of the root pointers */
+	int				num_warm_chains;	/* current # of entries */
+	int				max_warm_chains;	/* # slots allocated in array */
+	LVWarmChain 	*warm_chains;		/* array of LVWarmChain */
+	double			num_non_convertible_warm_chains;
+
 	/* List of TIDs of tuples we intend to delete */
 	/* NB: this list is ordered by TID address */
 	int			num_dead_tuples;	/* current # of entries */
@@ -150,6 +191,7 @@ static void lazy_scan_heap(Relation onerel, int options,
 static void lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats);
 static bool lazy_check_needs_freeze(Buffer buf, bool *hastup);
 static void lazy_vacuum_index(Relation indrel,
+				  bool clear_warm,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats);
 static void lazy_cleanup_index(Relation indrel,
@@ -157,6 +199,10 @@ static void lazy_cleanup_index(Relation indrel,
 				   LVRelStats *vacrelstats);
 static int lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 				 int tupindex, LVRelStats *vacrelstats, Buffer *vmbuffer);
+static int lazy_warmclear_page(Relation onerel, BlockNumber blkno,
+				 Buffer buffer, int chainindex, LVRelStats *vacrelstats,
+				 Buffer *vmbuffer, bool check_all_visible);
+static void lazy_reset_warm_pointer_count(LVRelStats *vacrelstats);
 static bool should_attempt_truncation(LVRelStats *vacrelstats);
 static void lazy_truncate_heap(Relation onerel, LVRelStats *vacrelstats);
 static BlockNumber count_nondeletable_pages(Relation onerel,
@@ -164,8 +210,15 @@ static BlockNumber count_nondeletable_pages(Relation onerel,
 static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks);
 static void lazy_record_dead_tuple(LVRelStats *vacrelstats,
 					   ItemPointer itemptr);
-static bool lazy_tid_reaped(ItemPointer itemptr, void *state);
+static void lazy_record_warm_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr);
+static void lazy_record_clear_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr);
+static IndexBulkDeleteCallbackResult lazy_tid_reaped(ItemPointer itemptr, bool is_warm, void *state);
+static IndexBulkDeleteCallbackResult lazy_indexvac_phase1(ItemPointer itemptr, bool is_warm, void *state);
+static IndexBulkDeleteCallbackResult lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state);
 static int	vac_cmp_itemptr(const void *left, const void *right);
+static int vac_cmp_warm_chain(const void *left, const void *right);
 static bool heap_page_is_all_visible(Relation rel, Buffer buf,
 					 TransactionId *visibility_cutoff_xid, bool *all_frozen);
 
@@ -690,8 +743,10 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		 * If we are close to overrunning the available space for dead-tuple
 		 * TIDs, pause and do a cycle of vacuuming before we tackle this page.
 		 */
-		if ((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
-			vacrelstats->num_dead_tuples > 0)
+		if (((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
+			vacrelstats->num_dead_tuples > 0) ||
+			((vacrelstats->max_warm_chains - vacrelstats->num_warm_chains) < MaxHeapTuplesPerPage &&
+			 vacrelstats->num_warm_chains > 0))
 		{
 			const int	hvp_index[] = {
 				PROGRESS_VACUUM_PHASE,
@@ -721,6 +776,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			/* Remove index entries */
 			for (i = 0; i < nindexes; i++)
 				lazy_vacuum_index(Irel[i],
+								  (vacrelstats->num_warm_chains > 0),
 								  &indstats[i],
 								  vacrelstats);
 
@@ -743,6 +799,9 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			 * valid.
 			 */
 			vacrelstats->num_dead_tuples = 0;
+			vacrelstats->num_warm_chains = 0;
+			memset(vacrelstats->warm_chains, 0,
+					vacrelstats->max_warm_chains * sizeof (LVWarmChain));
 			vacrelstats->num_index_scans++;
 
 			/* Report that we are once again scanning the heap */
@@ -947,15 +1006,31 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 				continue;
 			}
 
+			ItemPointerSet(&(tuple.t_self), blkno, offnum);
+
 			/* Redirect items mustn't be touched */
 			if (ItemIdIsRedirected(itemid))
 			{
+				HeapCheckWarmChainStatus status = heap_check_warm_chain(page,
+						&tuple.t_self, false);
+				if (HCWC_IS_WARM_UPDATED(status))
+				{
+					/*
+					 * A chain which is either complete WARM or CLEAR is a
+					 * candidate for chain conversion. Remember the chain and
+					 * whether the chain has all WARM tuples or not.
+					 */
+					if (HCWC_IS_ALL_WARM(status))
+						lazy_record_warm_chain(vacrelstats, &tuple.t_self);
+					else if (HCWC_IS_ALL_CLEAR(status))
+						lazy_record_clear_chain(vacrelstats, &tuple.t_self);
+					else
+						vacrelstats->num_non_convertible_warm_chains++;
+				}
 				hastup = true;	/* this page won't be truncatable */
 				continue;
 			}
 
-			ItemPointerSet(&(tuple.t_self), blkno, offnum);
-
 			/*
 			 * DEAD item pointers are to be vacuumed normally; but we don't
 			 * count them in tups_vacuumed, else we'd be double-counting (at
@@ -975,6 +1050,26 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			tuple.t_len = ItemIdGetLength(itemid);
 			tuple.t_tableOid = RelationGetRelid(onerel);
 
+			if (!HeapTupleIsHeapOnly(&tuple))
+			{
+				HeapCheckWarmChainStatus status = heap_check_warm_chain(page,
+						&tuple.t_self, false);
+				if (HCWC_IS_WARM_UPDATED(status))
+				{
+					/*
+					 * A chain which is either complete WARM or CLEAR is a
+					 * candidate for chain conversion. Remember the chain and
+					 * its color.
+					 */
+					if (HCWC_IS_ALL_WARM(status))
+						lazy_record_warm_chain(vacrelstats, &tuple.t_self);
+					else if (HCWC_IS_ALL_CLEAR(status))
+						lazy_record_clear_chain(vacrelstats, &tuple.t_self);
+					else
+						vacrelstats->num_non_convertible_warm_chains++;
+				}
+			}
+
 			tupgone = false;
 
 			switch (HeapTupleSatisfiesVacuum(&tuple, OldestXmin, buf))
@@ -1040,6 +1135,19 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 							break;
 						}
 
+						/*
+						 * If this tuple was ever WARM updated or is a WARM
+						 * tuple, there could be multiple index entries
+						 * pointing to the root of this chain. We can't do
+						 * index-only scans for such tuples without rechecking
+						 * the index keys. So mark the page as !all_visible.
+						 */
+						if (HeapTupleHeaderIsWarmUpdated(tuple.t_data))
+						{
+							all_visible = false;
+							break;
+						}
+
 						/* Track newest xmin on page. */
 						if (TransactionIdFollows(xmin, visibility_cutoff_xid))
 							visibility_cutoff_xid = xmin;
@@ -1282,7 +1390,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 
 	/* If any tuples need to be deleted, perform final vacuum cycle */
 	/* XXX put a threshold on min number of tuples here? */
-	if (vacrelstats->num_dead_tuples > 0)
+	if (vacrelstats->num_dead_tuples > 0 || vacrelstats->num_warm_chains > 0)
 	{
 		const int	hvp_index[] = {
 			PROGRESS_VACUUM_PHASE,
@@ -1300,6 +1408,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		/* Remove index entries */
 		for (i = 0; i < nindexes; i++)
 			lazy_vacuum_index(Irel[i],
+							  (vacrelstats->num_warm_chains > 0),
 							  &indstats[i],
 							  vacrelstats);
 
@@ -1371,7 +1480,10 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
  *
  *		This routine marks dead tuples as unused and compacts out free
  *		space on their pages.  Pages not having dead tuples recorded from
- *		lazy_scan_heap are not visited at all.
+ *		lazy_scan_heap are not visited at all. This routine also converts
+ *		candidate WARM chains to HOT chains by clearing WARM-related flags. The
+ *		candidate chains are determined by the preceding index scans after
+ *		looking at the data collected by the first heap scan.
  *
  * Note: the reason for doing this as a second pass is we cannot remove
  * the tuples until we've removed their index entries, and we want to
@@ -1380,7 +1492,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 static void
 lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 {
-	int			tupindex;
+	int			tupindex, chainindex;
 	int			npages;
 	PGRUsage	ru0;
 	Buffer		vmbuffer = InvalidBuffer;
@@ -1389,33 +1501,69 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 	npages = 0;
 
 	tupindex = 0;
-	while (tupindex < vacrelstats->num_dead_tuples)
+	chainindex = 0;
+	while (tupindex < vacrelstats->num_dead_tuples ||
+		   chainindex < vacrelstats->num_warm_chains)
 	{
-		BlockNumber tblk;
+		BlockNumber tblk, chainblk, vacblk;
 		Buffer		buf;
 		Page		page;
 		Size		freespace;
 
 		vacuum_delay_point();
 
-		tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
-		buf = ReadBufferExtended(onerel, MAIN_FORKNUM, tblk, RBM_NORMAL,
+		tblk = chainblk = InvalidBlockNumber;
+		if (chainindex < vacrelstats->num_warm_chains)
+			chainblk =
+				ItemPointerGetBlockNumber(&(vacrelstats->warm_chains[chainindex].chain_tid));
+
+		if (tupindex < vacrelstats->num_dead_tuples)
+			tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
+
+		if (tblk == InvalidBlockNumber)
+			vacblk = chainblk;
+		else if (chainblk == InvalidBlockNumber)
+			vacblk = tblk;
+		else
+			vacblk = Min(chainblk, tblk);
+
+		Assert(vacblk != InvalidBlockNumber);
+
+		buf = ReadBufferExtended(onerel, MAIN_FORKNUM, vacblk, RBM_NORMAL,
 								 vac_strategy);
-		if (!ConditionalLockBufferForCleanup(buf))
+
+
+		if (vacblk == chainblk)
+			LockBufferForCleanup(buf);
+		else if (!ConditionalLockBufferForCleanup(buf))
 		{
 			ReleaseBuffer(buf);
 			++tupindex;
 			continue;
 		}
-		tupindex = lazy_vacuum_page(onerel, tblk, buf, tupindex, vacrelstats,
-									&vmbuffer);
+
+		/*
+		 * Convert WARM chains on this page. This should be done before
+		 * vacuuming the page to ensure that we can correctly set visibility
+		 * bits after clearing WARM chains.
+		 *
+		 * If we are going to vacuum this page then don't check for
+		 * all-visibility just yet.
+		 */
+		if (vacblk == chainblk)
+			chainindex = lazy_warmclear_page(onerel, chainblk, buf, chainindex,
+					vacrelstats, &vmbuffer, chainblk != tblk);
+
+		if (vacblk == tblk)
+			tupindex = lazy_vacuum_page(onerel, tblk, buf, tupindex, vacrelstats,
+					&vmbuffer);
 
 		/* Now that we've compacted the page, record its available space */
 		page = BufferGetPage(buf);
 		freespace = PageGetHeapFreeSpace(page);
 
 		UnlockReleaseBuffer(buf);
-		RecordPageWithFreeSpace(onerel, tblk, freespace);
+		RecordPageWithFreeSpace(onerel, vacblk, freespace);
 		npages++;
 	}
 
@@ -1434,6 +1582,107 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 }
 
 /*
+ *	lazy_warmclear_page() -- clear various WARM bits on the tuples.
+ *
+ * Caller must hold pin and buffer cleanup lock on the buffer.
+ *
+ * chainindex is the index in vacrelstats->warm_chains of the first candidate
+ * chain for this page.  We assume the rest follow sequentially.
+ * The return value is the first chainindex after the chains of this page.
+ *
+ * If check_all_visible is set then we also check if the page has now become
+ * all visible and update visibility map.
+ */
+static int
+lazy_warmclear_page(Relation onerel, BlockNumber blkno, Buffer buffer,
+				 int chainindex, LVRelStats *vacrelstats, Buffer *vmbuffer,
+				 bool check_all_visible)
+{
+	Page			page = BufferGetPage(buffer);
+	OffsetNumber	cleared_offnums[MaxHeapTuplesPerPage];
+	int				num_cleared = 0;
+	TransactionId	visibility_cutoff_xid;
+	bool			all_frozen;
+
+	pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_WARMCLEARED, blkno);
+
+	START_CRIT_SECTION();
+
+	for (; chainindex < vacrelstats->num_warm_chains ; chainindex++)
+	{
+		BlockNumber tblk;
+		LVWarmChain	*chain;
+
+		chain = &vacrelstats->warm_chains[chainindex];
+
+		tblk = ItemPointerGetBlockNumber(&chain->chain_tid);
+		if (tblk != blkno)
+			break;				/* past end of tuples for this block */
+
+		/*
+		 * Since a heap page can have no more than MaxHeapTuplesPerPage
+		 * offnums and we process each offnum only once, an array of
+		 * MaxHeapTuplesPerPage entries can hold all tuples cleared on this page.
+		 */
+		if (!chain->keep_warm_chain)
+			num_cleared += heap_clear_warm_chain(page, &chain->chain_tid,
+					cleared_offnums + num_cleared);
+	}
+
+	/*
+	 * Mark buffer dirty before we write WAL.
+	 */
+	MarkBufferDirty(buffer);
+
+	/* XLOG stuff */
+	if (RelationNeedsWAL(onerel))
+	{
+		XLogRecPtr	recptr;
+
+		recptr = log_heap_warmclear(onerel, buffer,
+								cleared_offnums, num_cleared);
+		PageSetLSN(page, recptr);
+	}
+
+	END_CRIT_SECTION();
+
+	/* If not checking for all-visibility then we're done */
+	if (!check_all_visible)
+		return chainindex;
+
+	/*
+	 * The following code should match the corresponding code in
+	 * lazy_vacuum_page().
+	 */
+	if (heap_page_is_all_visible(onerel, buffer, &visibility_cutoff_xid,
+								 &all_frozen))
+		PageSetAllVisible(page);
+
+	/*
+	 * All the changes to the heap page have been done. If the all-visible
+	 * flag is now set, also set the VM all-visible bit (and, if possible, the
+	 * all-frozen bit) unless this has already been done previously.
+	 */
+	if (PageIsAllVisible(page))
+	{
+		uint8		vm_status = visibilitymap_get_status(onerel, blkno, vmbuffer);
+		uint8		flags = 0;
+
+		/* Set the VM all-visible and all-frozen bits, if not already set */
+		if ((vm_status & VISIBILITYMAP_ALL_VISIBLE) == 0)
+			flags |= VISIBILITYMAP_ALL_VISIBLE;
+		if ((vm_status & VISIBILITYMAP_ALL_FROZEN) == 0 && all_frozen)
+			flags |= VISIBILITYMAP_ALL_FROZEN;
+
+		Assert(BufferIsValid(*vmbuffer));
+		if (flags != 0)
+			visibilitymap_set(onerel, blkno, buffer, InvalidXLogRecPtr,
+							  *vmbuffer, visibility_cutoff_xid, flags);
+	}
+	return chainindex;
+}
+
+/*
  *	lazy_vacuum_page() -- free dead tuples on a page
  *					 and repair its fragmentation.
  *
@@ -1586,6 +1835,24 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
 	return false;
 }
 
+/*
+ * Reset the counters tracking the number of WARM and CLEAR pointers per
+ * candidate chain.
+ * These counters are maintained per index and cleared when the next index is
+ * picked up for cleanup.
+ *
+ * We don't touch the keep_warm_chain since once a chain is known to be
+ * non-convertible, we must remember that across all indexes.
+ */
+static void
+lazy_reset_warm_pointer_count(LVRelStats *vacrelstats)
+{
+	int i;
+	for (i = 0; i < vacrelstats->num_warm_chains; i++)
+	{
+		LVWarmChain *chain = &vacrelstats->warm_chains[i];
+		chain->num_clear_pointers = chain->num_warm_pointers = 0;
+	}
+}
 
 /*
  *	lazy_vacuum_index() -- vacuum one index relation.
@@ -1595,6 +1862,7 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
  */
 static void
 lazy_vacuum_index(Relation indrel,
+				  bool clear_warm,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats)
 {
@@ -1610,15 +1878,87 @@ lazy_vacuum_index(Relation indrel,
 	ivinfo.num_heap_tuples = vacrelstats->old_rel_tuples;
 	ivinfo.strategy = vac_strategy;
 
-	/* Do bulk deletion */
-	*stats = index_bulk_delete(&ivinfo, *stats,
-							   lazy_tid_reaped, (void *) vacrelstats);
+	/*
+	 * If told, convert WARM chains into HOT chains.
+	 *
+	 * We must have already collected candidate WARM chains, i.e. chains in
+	 * which either every tuple has the HEAP_WARM_TUPLE flag set or none does.
+	 *
+	 * This works in two phases. In the first phase, we do a complete index
+	 * scan and collect information about index pointers to the candidate
+	 * chains, but we don't do conversion. To be precise, we count the number
+	 * of WARM and CLEAR index pointers to each candidate chain and use that
+	 * knowledge to decide what to do; the actual conversion happens during
+	 * the second phase (though we do remove known-dead pointers in this one).
+	 *
+	 * In the second phase, for each candidate chain we check if we have seen a
+	 * WARM index pointer. For such chains, we kill the CLEAR pointer and
+	 * convert the WARM pointer into a CLEAR pointer. The heap tuples are
+	 * cleared of WARM flags in the second heap scan. If we did not find any
+	 * WARM pointer to a WARM chain, the chain must be reachable via the
+	 * CLEAR pointer (say, because the WARM update did not add a new entry to
+	 * this index). In that case, we do nothing.  There is a third case
+	 * where we find two CLEAR pointers to a candidate chain. This can happen
+	 * because of aborted vacuums. We don't handle that case yet, but it should
+	 * be possible to apply the same recheck logic and find which of the clear
+	 * pointers is redundant and should be removed.
+	 *
+	 * For CLEAR chains, we just kill the WARM pointer, if it exists, and keep
+	 * the CLEAR pointer.
+	 */
+	if (clear_warm)
+	{
+		/*
+		 * Before starting the index scan, reset the counters of WARM and CLEAR
+		 * pointers, which may have been carried over from the previous index.
+		 */
+		lazy_reset_warm_pointer_count(vacrelstats);
+
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_indexvac_phase1, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to remove %d row versions, found "
+						"%.0f WARM pointers, %.0f CLEAR pointers, removed "
+						"%.0f WARM pointers, removed %.0f CLEAR pointers",
+						RelationGetRelationName(indrel),
+						vacrelstats->num_dead_tuples,
+						(*stats)->num_warm_pointers,
+						(*stats)->num_clear_pointers,
+						(*stats)->warm_pointers_removed,
+						(*stats)->clear_pointers_removed)));
+
+		(*stats)->num_warm_pointers = 0;
+		(*stats)->num_clear_pointers = 0;
+		(*stats)->warm_pointers_removed = 0;
+		(*stats)->clear_pointers_removed = 0;
+		(*stats)->pointers_cleared = 0;
+
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_indexvac_phase2, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to convert WARM pointers, found "
+						"%.0f WARM pointers, %.0f CLEAR pointers, removed "
+						"%.0f WARM pointers, removed %.0f CLEAR pointers, "
+						"cleared %.0f WARM pointers",
+						RelationGetRelationName(indrel),
+						(*stats)->num_warm_pointers,
+						(*stats)->num_clear_pointers,
+						(*stats)->warm_pointers_removed,
+						(*stats)->clear_pointers_removed,
+						(*stats)->pointers_cleared)));
+	}
+	else
+	{
+		/* Do bulk deletion */
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_tid_reaped, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to remove %d row versions",
+						RelationGetRelationName(indrel),
+						vacrelstats->num_dead_tuples),
+				 errdetail("%s.", pg_rusage_show(&ru0))));
+	}
 
-	ereport(elevel,
-			(errmsg("scanned index \"%s\" to remove %d row versions",
-					RelationGetRelationName(indrel),
-					vacrelstats->num_dead_tuples),
-			 errdetail("%s.", pg_rusage_show(&ru0))));
 }
 
 /*
@@ -1992,9 +2332,11 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 
 	if (vacrelstats->hasindex)
 	{
-		maxtuples = (vac_work_mem * 1024L) / sizeof(ItemPointerData);
+		maxtuples = (vac_work_mem * 1024L) / (sizeof(ItemPointerData) +
+				sizeof(LVWarmChain));
 		maxtuples = Min(maxtuples, INT_MAX);
-		maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));
+		maxtuples = Min(maxtuples, MaxAllocSize / (sizeof(ItemPointerData) +
+					sizeof(LVWarmChain)));
 
 		/* curious coding here to ensure the multiplication can't overflow */
 		if ((BlockNumber) (maxtuples / LAZY_ALLOC_TUPLES) > relblocks)
@@ -2012,6 +2354,57 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 	vacrelstats->max_dead_tuples = (int) maxtuples;
 	vacrelstats->dead_tuples = (ItemPointer)
 		palloc(maxtuples * sizeof(ItemPointerData));
+
+	/*
+	 * XXX Cheat for now and allocate an array of the same size for tracking
+	 * WARM chains. maxtuples must already have been adjusted above to ensure
+	 * we don't exceed vac_work_mem.
+	 */
+	vacrelstats->num_warm_chains = 0;
+	vacrelstats->max_warm_chains = (int) maxtuples;
+	vacrelstats->warm_chains = (LVWarmChain *)
+		palloc0(maxtuples * sizeof(LVWarmChain));
+
+}
+
+/*
+ * lazy_record_clear_chain - remember one CLEAR chain
+ */
+static void
+lazy_record_clear_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr)
+{
+	/*
+	 * The array shouldn't overflow under normal behavior, but perhaps it
+	 * could if we are given a really small maintenance_work_mem. In that
+	 * case, just forget the last few tuples (we'll get 'em next time).
+	 */
+	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	{
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 0;
+		vacrelstats->num_warm_chains++;
+	}
+}
+
+/*
+ * lazy_record_warm_chain - remember one WARM chain
+ */
+static void
+lazy_record_warm_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr)
+{
+	/*
+	 * The array shouldn't overflow under normal behavior, but perhaps it
+	 * could if we are given a really small maintenance_work_mem. In that
+	 * case, just forget the last few tuples (we'll get 'em next time).
+	 */
+	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	{
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 1;
+		vacrelstats->num_warm_chains++;
+	}
 }
 
 /*
@@ -2042,8 +2435,8 @@ lazy_record_dead_tuple(LVRelStats *vacrelstats,
  *
  *		Assumes dead_tuples array is in sorted order.
  */
-static bool
-lazy_tid_reaped(ItemPointer itemptr, void *state)
+static IndexBulkDeleteCallbackResult
+lazy_tid_reaped(ItemPointer itemptr, bool is_warm, void *state)
 {
 	LVRelStats *vacrelstats = (LVRelStats *) state;
 	ItemPointer res;
@@ -2054,7 +2447,193 @@ lazy_tid_reaped(ItemPointer itemptr, void *state)
 								sizeof(ItemPointerData),
 								vac_cmp_itemptr);
 
-	return (res != NULL);
+	return (res != NULL) ? IBDCR_DELETE : IBDCR_KEEP;
+}
+
+/*
+ *	lazy_indexvac_phase1() -- run first pass of index vacuum
+ *
+ *		This has the right signature to be an IndexBulkDeleteCallback.
+ */
+static IndexBulkDeleteCallbackResult
+lazy_indexvac_phase1(ItemPointer itemptr, bool is_warm, void *state)
+{
+	LVRelStats		*vacrelstats = (LVRelStats *) state;
+	ItemPointer		res;
+	LVWarmChain	*chain;
+
+	res = (ItemPointer) bsearch((void *) itemptr,
+								(void *) vacrelstats->dead_tuples,
+								vacrelstats->num_dead_tuples,
+								sizeof(ItemPointerData),
+								vac_cmp_itemptr);
+
+	if (res != NULL)
+		return IBDCR_DELETE;
+
+	chain = (LVWarmChain *) bsearch((void *) itemptr,
+								(void *) vacrelstats->warm_chains,
+								vacrelstats->num_warm_chains,
+								sizeof(LVWarmChain),
+								vac_cmp_warm_chain);
+	if (chain != NULL)
+	{
+		if (is_warm)
+			chain->num_warm_pointers++;
+		else
+			chain->num_clear_pointers++;
+	}
+	return IBDCR_KEEP;
+}
+
+/*
+ *	lazy_indexvac_phase2() -- run second pass of index vacuum
+ *
+ *		This has the right signature to be an IndexBulkDeleteCallback.
+ */
+static IndexBulkDeleteCallbackResult
+lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state)
+{
+	LVRelStats		*vacrelstats = (LVRelStats *) state;
+	LVWarmChain	*chain;
+
+	chain = (LVWarmChain *) bsearch((void *) itemptr,
+								(void *) vacrelstats->warm_chains,
+								vacrelstats->num_warm_chains,
+								sizeof(LVWarmChain),
+								vac_cmp_warm_chain);
+
+	if (chain != NULL && (chain->keep_warm_chain != 1))
+	{
+		/*
+		 * At no point can we have more than one WARM pointer to any chain,
+		 * nor more than two CLEAR pointers.
+		 */
+		Assert(chain->num_warm_pointers <= 1);
+		Assert(chain->num_clear_pointers <= 2);
+
+		if (chain->is_postwarm_chain == 1)
+		{
+			if (is_warm)
+			{
+				/*
+				 * A WARM pointer, pointing to a WARM chain.
+				 *
+				 * Clear the WARM pointer (the CLEAR pointer is deleted
+				 * separately). We may have already seen the CLEAR pointer in
+				 * the scan and deleted it, or we may see it later. It doesn't
+				 * matter if we fail at any point because we won't clear up
+				 * WARM bits on the heap tuples until we have dealt with the
+				 * index pointers cleanly.
+				 */
+				return IBDCR_CLEAR_WARM;
+			}
+			else
+			{
+				/*
+				 * CLEAR pointer to a WARM chain.
+				 */
+				if (chain->num_warm_pointers > 0)
+				{
+					/*
+					 * If there exists a WARM pointer to the chain, we can
+					 * delete the CLEAR pointer and clear the WARM bits on the
+					 * heap tuples.
+					 */
+					return IBDCR_DELETE;
+				}
+				else if (chain->num_clear_pointers == 1)
+				{
+					/*
+					 * If this is the only pointer to a WARM chain, we must
+					 * keep the CLEAR pointer.
+					 *
+					 * The presence of a WARM chain indicates that the WARM
+					 * update must have committed. But this index was probably
+					 * not touched by that update and hence contains just the
+					 * one original CLEAR pointer to the chain.
+					 * We should be able to clear the WARM bits on heap tuples
+					 * unless we later find another index which prevents the
+					 * cleanup.
+					 */
+					return IBDCR_KEEP;
+				}
+			}
+		}
+		else
+		{
+			/*
+			 * This is a CLEAR chain.
+			 */
+			if (is_warm)
+			{
+				/*
+				 * A WARM pointer to a CLEAR chain.
+				 *
+				 * This can happen when a WARM update is aborted. Later the HOT
+				 * chain is pruned leaving behind only CLEAR tuples in the
+				 * chain. But the WARM index pointer inserted in the index
+				 * remains and it must now be deleted before we clear WARM bits
+				 * from the heap tuple.
+				 */
+				return IBDCR_DELETE;
+			}
+
+			/*
+			 * CLEAR pointer to a CLEAR chain.
+			 *
+			 * If this is the only surviving CLEAR pointer, keep it and clear
+			 * the WARM bits from the heap tuples.
+			 */
+			if (chain->num_clear_pointers == 1)
+				return IBDCR_KEEP;
+
+			/*
+			 * If there is more than one CLEAR pointer to this chain, we could
+			 * apply the recheck logic, kill the redundant CLEAR pointer and
+			 * convert the chain. But that's not yet done.
+			 */
+		}
+
+		/*
+		 * For everything else, we must keep the WARM bits and also keep the
+		 * index pointers.
+		 */
+		chain->keep_warm_chain = 1;
+		return IBDCR_KEEP;
+	}
+	return IBDCR_KEEP;
+}
+
+/*
+ * Comparator routine for use with qsort() and bsearch(). Similar to
+ * vac_cmp_itemptr, but the right-hand argument is an LVWarmChain pointer.
+ */
+static int
+vac_cmp_warm_chain(const void *left, const void *right)
+{
+	BlockNumber lblk,
+				rblk;
+	OffsetNumber loff,
+				roff;
+
+	lblk = ItemPointerGetBlockNumber((ItemPointer) left);
+	rblk = ItemPointerGetBlockNumber(&((LVWarmChain *) right)->chain_tid);
+
+	if (lblk < rblk)
+		return -1;
+	if (lblk > rblk)
+		return 1;
+
+	loff = ItemPointerGetOffsetNumber((ItemPointer) left);
+	roff = ItemPointerGetOffsetNumber(&((LVWarmChain *) right)->chain_tid);
+
+	if (loff < roff)
+		return -1;
+	if (loff > roff)
+		return 1;
+
+	return 0;
 }
 
 /*
@@ -2170,6 +2749,18 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
 						break;
 					}
 
+					/*
+					 * If this or any other tuple in the chain was ever WARM
+					 * updated, there could be multiple index entries pointing
+					 * to the root of this chain. We can't do index-only scans
+					 * for such tuples without rechecking the index keys, so
+					 * mark the page as !all_visible.
+					 */
+					if (HeapTupleHeaderIsWarmUpdated(tuple.t_data))
+					{
+						all_visible = false;
+					}
+
 					/* Track newest xmin on page. */
 					if (TransactionIdFollows(xmin, *visibility_cutoff_xid))
 						*visibility_cutoff_xid = xmin;
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index c3f1873..2143978 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -270,6 +270,8 @@ ExecCloseIndices(ResultRelInfo *resultRelInfo)
 List *
 ExecInsertIndexTuples(TupleTableSlot *slot,
 					  ItemPointer tupleid,
+					  ItemPointer root_tid,
+					  Bitmapset *modified_attrs,
 					  EState *estate,
 					  bool noDupErr,
 					  bool *specConflict,
@@ -324,6 +326,17 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 		if (!indexInfo->ii_ReadyForInserts)
 			continue;
 
+		/*
+		 * If modified_attrs is set, we only insert index entries into those
+		 * indexes whose key columns have changed. All other indexes can use
+		 * their existing index pointers to look up the new tuple.
+		 */
+		if (modified_attrs)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
 		/* Check for partial index */
 		if (indexInfo->ii_Predicate != NIL)
 		{
@@ -387,10 +400,11 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 			index_insert(indexRelation, /* index relation */
 						 values,	/* array of index Datums */
 						 isnull,	/* null flags */
-						 tupleid,		/* tid of heap tuple */
+						 root_tid,		/* tid of heap or root tuple */
 						 heapRelation,	/* heap relation */
 						 checkUnique,	/* type of uniqueness check to do */
-						 indexInfo);	/* index AM may need this */
+						 indexInfo,	/* index AM may need this */
+						 (modified_attrs != NULL));	/* is it a WARM update? */
 
 		/*
 		 * If the index has an associated exclusion constraint, check that.
@@ -787,6 +801,9 @@ retry:
 		{
 			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
 				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
+			else
+				ItemPointerCopy(&tup->t_self, &ctid_wait);
+
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index f20d728..747e4ce 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -399,6 +399,8 @@ ExecSimpleRelationInsert(EState *estate, TupleTableSlot *slot)
 
 		if (resultRelInfo->ri_NumIndices > 0)
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &(tuple->t_self),
+												   NULL,
 												   estate, false, NULL,
 												   NIL);
 
@@ -445,6 +447,8 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 	if (!skip_tuple)
 	{
 		List	   *recheckIndexes = NIL;
+		bool		warm_update;
+		Bitmapset  *modified_attrs;
 
 		/* Check the constraints of the tuple */
 		if (rel->rd_att->constr)
@@ -455,13 +459,35 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 
 		/* OK, update the tuple and index entries for it */
 		simple_heap_update(rel, &searchslot->tts_tuple->t_self,
-						   slot->tts_tuple);
+						   slot->tts_tuple, &modified_attrs, &warm_update);
 
 		if (resultRelInfo->ri_NumIndices > 0 &&
-			!HeapTupleIsHeapOnly(slot->tts_tuple))
+			(!HeapTupleIsHeapOnly(slot->tts_tuple) || warm_update))
+		{
+			ItemPointerData root_tid;
+
+			/*
+			 * If we did a WARM update then we must index the tuple using its
+			 * root line pointer and not the tuple TID itself.
+			 */
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self,
+						&root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
+
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL,
 												   NIL);
+		}
 
 		/* AFTER ROW UPDATE Triggers */
 		ExecARUpdateTriggers(estate, resultRelInfo,
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index 19eb175..ef3653c 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -39,6 +39,7 @@
 
 #include "access/relscan.h"
 #include "access/transam.h"
+#include "access/valid.h"
 #include "executor/execdebug.h"
 #include "executor/nodeBitmapHeapscan.h"
 #include "pgstat.h"
@@ -395,11 +396,27 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 			OffsetNumber offnum = tbmres->offsets[curslot];
 			ItemPointerData tid;
 			HeapTupleData heapTuple;
+			bool recheck = false;
 
 			ItemPointerSet(&tid, page, offnum);
 			if (heap_hot_search_buffer(&tid, scan->rs_rd, buffer, snapshot,
-									   &heapTuple, NULL, true))
-				scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+									   &heapTuple, NULL, true, &recheck))
+			{
+				bool valid = true;
+
+				if (scan->rs_key)
+					HeapKeyTest(&heapTuple, RelationGetDescr(scan->rs_rd),
+							scan->rs_nkeys, scan->rs_key, valid);
+				if (valid)
+					scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+
+				/*
+				 * If the heap tuple needs a recheck because of a WARM update,
+				 * it's a lossy case.
+				 */
+				if (recheck)
+					tbmres->recheck = true;
+			}
 		}
 	}
 	else
diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c
index 5afd02e..6e48c2e 100644
--- a/src/backend/executor/nodeIndexscan.c
+++ b/src/backend/executor/nodeIndexscan.c
@@ -142,8 +142,8 @@ IndexNext(IndexScanState *node)
 					   false);	/* don't pfree */
 
 		/*
-		 * If the index was lossy, we have to recheck the index quals using
-		 * the fetched tuple.
+		 * If the index was lossy or the tuple was WARM, we have to recheck
+		 * the index quals using the fetched tuple.
 		 */
 		if (scandesc->xs_recheck)
 		{
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index 0b524e0..2ad4a2c 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -513,6 +513,7 @@ ExecInsert(ModifyTableState *mtstate,
 
 			/* insert index entries for tuple */
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												 &(tuple->t_self), NULL,
 												 estate, true, &specConflict,
 												   arbiterIndexes);
 
@@ -559,6 +560,7 @@ ExecInsert(ModifyTableState *mtstate,
 			/* insert index entries for tuple */
 			if (resultRelInfo->ri_NumIndices > 0)
 				recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+													   &(tuple->t_self), NULL,
 													   estate, false, NULL,
 													   arbiterIndexes);
 		}
@@ -892,6 +894,9 @@ ExecUpdate(ItemPointer tupleid,
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
 	List	   *recheckIndexes = NIL;
+	Bitmapset  *modified_attrs = NULL;
+	ItemPointerData	root_tid;
+	bool		warm_update;
 
 	/*
 	 * abort the operation if not running transactions
@@ -1008,7 +1013,7 @@ lreplace:;
 							 estate->es_output_cid,
 							 estate->es_crosscheck_snapshot,
 							 true /* wait for commit */ ,
-							 &hufd, &lockmode);
+							 &hufd, &lockmode, &modified_attrs, &warm_update);
 		switch (result)
 		{
 			case HeapTupleSelfUpdated:
@@ -1095,10 +1100,28 @@ lreplace:;
 		 * the t_self field.
 		 *
 		 * If it's a HOT update, we mustn't insert new index entries.
+		 *
+		 * If it's a WARM update, then we must insert new entries with TID
+		 * pointing to the root of the WARM chain.
 		 */
-		if (resultRelInfo->ri_NumIndices > 0 && !HeapTupleIsHeapOnly(tuple))
+		if (resultRelInfo->ri_NumIndices > 0 &&
+			(!HeapTupleIsHeapOnly(tuple) || warm_update))
+		{
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self, &root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL, NIL);
+		}
 	}
 
 	if (canSetTag)
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index b704788..cdfd76e 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -1824,7 +1824,7 @@ pgstat_count_heap_insert(Relation rel, PgStat_Counter n)
  * pgstat_count_heap_update - count a tuple update
  */
 void
-pgstat_count_heap_update(Relation rel, bool hot)
+pgstat_count_heap_update(Relation rel, bool hot, bool warm)
 {
 	PgStat_TableStatus *pgstat_info = rel->pgstat_info;
 
@@ -1842,6 +1842,8 @@ pgstat_count_heap_update(Relation rel, bool hot)
 		/* t_tuples_hot_updated is nontransactional, so just advance it */
 		if (hot)
 			pgstat_info->t_counts.t_tuples_hot_updated++;
+		else if (warm)
+			pgstat_info->t_counts.t_tuples_warm_updated++;
 	}
 }
 
@@ -4330,6 +4332,7 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->tuples_updated = 0;
 		result->tuples_deleted = 0;
 		result->tuples_hot_updated = 0;
+		result->tuples_warm_updated = 0;
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
 		result->changes_since_analyze = 0;
@@ -5439,6 +5442,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated = tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted = tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated = tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated = tabmsg->t_counts.t_tuples_warm_updated;
 			tabentry->n_live_tuples = tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples = tabmsg->t_counts.t_delta_dead_tuples;
 			tabentry->changes_since_analyze = tabmsg->t_counts.t_changed_tuples;
@@ -5466,6 +5470,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated += tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted += tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated += tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated += tabmsg->t_counts.t_tuples_warm_updated;
 			/* If table was truncated, first reset the live/dead counters */
 			if (tabmsg->t_counts.t_truncated)
 			{
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index 5c13d26..7a9b48a 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -347,7 +347,7 @@ DecodeStandbyOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 static void
 DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 {
-	uint8		info = XLogRecGetInfo(buf->record) & XLOG_HEAP_OPMASK;
+	uint8		info = XLogRecGetInfo(buf->record) & XLOG_HEAP2_OPMASK;
 	TransactionId xid = XLogRecGetXid(buf->record);
 	SnapBuild  *builder = ctx->snapshot_builder;
 
@@ -359,10 +359,6 @@ DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 
 	switch (info)
 	{
-		case XLOG_HEAP2_MULTI_INSERT:
-			if (SnapBuildProcessChange(builder, xid, buf->origptr))
-				DecodeMultiInsert(ctx, buf);
-			break;
 		case XLOG_HEAP2_NEW_CID:
 			{
 				xl_heap_new_cid *xlrec;
@@ -390,6 +386,7 @@ DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 		case XLOG_HEAP2_CLEANUP_INFO:
 		case XLOG_HEAP2_VISIBLE:
 		case XLOG_HEAP2_LOCK_UPDATED:
+		case XLOG_HEAP2_WARMCLEAR:
 			break;
 		default:
 			elog(ERROR, "unexpected RM_HEAP2_ID record type: %u", info);
@@ -418,6 +415,10 @@ DecodeHeapOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 			if (SnapBuildProcessChange(builder, xid, buf->origptr))
 				DecodeInsert(ctx, buf);
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			if (SnapBuildProcessChange(builder, xid, buf->origptr))
+				DecodeMultiInsert(ctx, buf);
+			break;
 
 			/*
 			 * Treat HOT update as normal updates. There is no useful
@@ -809,7 +810,7 @@ DecodeDelete(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 }
 
 /*
- * Decode XLOG_HEAP2_MULTI_INSERT_insert record into multiple tuplebufs.
+ * Decode XLOG_HEAP_MULTI_INSERT record into multiple tuplebufs.
  *
  * Currently MULTI_INSERT will always contain the full tuples.
  */
diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index fdf045a..8d23e92 100644
--- a/src/backend/storage/page/bufpage.c
+++ b/src/backend/storage/page/bufpage.c
@@ -1151,6 +1151,29 @@ PageIndexTupleOverwrite(Page page, OffsetNumber offnum,
 	return true;
 }
 
+/*
+ * PageIndexClearWarmTuples
+ *
+ * Clear the given WARM pointers by resetting the flags stored in the TID
+ * field. We assume the TID flags carry nothing other than the WARM
+ * information, so clearing all flag bits is safe. If that changes, this
+ * routine must change as well.
+ */
+void
+PageIndexClearWarmTuples(Page page, OffsetNumber *clearitemnos,
+						 uint16 nclearitems)
+{
+	int			i;
+	ItemId		itemid;
+	IndexTuple	itup;
+
+	for (i = 0; i < nclearitems; i++)
+	{
+		itemid = PageGetItemId(page, clearitemnos[i]);
+		itup = (IndexTuple) PageGetItem(page, itemid);
+		ItemPointerClearFlags(&itup->t_tid);
+	}
+}
 
 /*
  * Set checksum for a page in shared buffers.
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index a987d0d..b8677f3 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -145,6 +145,22 @@ pg_stat_get_tuples_hot_updated(PG_FUNCTION_ARGS)
 
 
 Datum
+pg_stat_get_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_StatTabEntry *tabentry;
+
+	if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+
+Datum
 pg_stat_get_live_tuples(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
@@ -1644,6 +1660,21 @@ pg_stat_get_xact_tuples_hot_updated(PG_FUNCTION_ARGS)
 }
 
 Datum
+pg_stat_get_xact_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_TableStatus *tabentry;
+
+	if ((tabentry = find_tabstat_entry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->t_counts.t_tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+Datum
 pg_stat_get_xact_blocks_fetched(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index a6b60c6..285e07c 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -2339,6 +2339,7 @@ RelationDestroyRelation(Relation relation, bool remember_tupdesc)
 	list_free_deep(relation->rd_fkeylist);
 	list_free(relation->rd_indexlist);
 	bms_free(relation->rd_indexattr);
+	bms_free(relation->rd_exprindexattr);
 	bms_free(relation->rd_keyattr);
 	bms_free(relation->rd_pkattr);
 	bms_free(relation->rd_idattr);
@@ -4353,6 +4354,13 @@ RelationGetIndexList(Relation relation)
 		return list_copy(relation->rd_indexlist);
 
 	/*
+	 * If the index list was invalidated, we had better also invalidate the
+	 * index attribute list (which should automatically invalidate other
+	 * attribute sets such as the primary key and replica identity).
+	 */
+	relation->rd_indexattr = NULL;
+
+	/*
 	 * We build the list we intend to return (in the caller's context) while
 	 * doing the scan.  After successfully completing the scan, we copy that
 	 * list into the relcache entry.  This avoids cache-context memory leakage
@@ -4836,15 +4844,19 @@ Bitmapset *
 RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 {
 	Bitmapset  *indexattrs;		/* indexed columns */
+	Bitmapset  *exprindexattrs;	/* indexed columns in expression/predicate
+									 indexes */
 	Bitmapset  *uindexattrs;	/* columns in unique indexes */
 	Bitmapset  *pkindexattrs;	/* columns in the primary index */
 	Bitmapset  *idindexattrs;	/* columns in the replica identity */
+	Bitmapset  *indxnotreadyattrs;	/* columns in not ready indexes */
 	List	   *indexoidlist;
 	List	   *newindexoidlist;
 	Oid			relpkindex;
 	Oid			relreplindex;
 	ListCell   *l;
 	MemoryContext oldcxt;
+	bool		supportswarm = true;	/* true if the table can be WARM updated */
 
 	/* Quick exit if we already computed the result. */
 	if (relation->rd_indexattr != NULL)
@@ -4859,6 +4871,10 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				return bms_copy(relation->rd_pkattr);
 			case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 				return bms_copy(relation->rd_idattr);
+			case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+				return bms_copy(relation->rd_exprindexattr);
+			case INDEX_ATTR_BITMAP_NOTREADY:
+				return bms_copy(relation->rd_indxnotreadyattr);
 			default:
 				elog(ERROR, "unknown attrKind %u", attrKind);
 		}
@@ -4899,9 +4915,11 @@ restart:
 	 * won't be returned at all by RelationGetIndexList.
 	 */
 	indexattrs = NULL;
+	exprindexattrs = NULL;
 	uindexattrs = NULL;
 	pkindexattrs = NULL;
 	idindexattrs = NULL;
+	indxnotreadyattrs = NULL;
 	foreach(l, indexoidlist)
 	{
 		Oid			indexOid = lfirst_oid(l);
@@ -4938,6 +4956,10 @@ restart:
 				indexattrs = bms_add_member(indexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
 
+				if (!indexInfo->ii_ReadyForInserts)
+					indxnotreadyattrs = bms_add_member(indxnotreadyattrs,
+							   attrnum - FirstLowInvalidHeapAttributeNumber);
+
 				if (isKey)
 					uindexattrs = bms_add_member(uindexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
@@ -4953,10 +4975,29 @@ restart:
 		}
 
 		/* Collect all attributes used in expressions, too */
-		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &exprindexattrs);
 
 		/* Collect all attributes in the index predicate, too */
-		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &exprindexattrs);
+
+		/*
+		 * indexattrs should include attributes referenced in index expressions
+		 * and predicates too.
+		 */
+		indexattrs = bms_add_members(indexattrs, exprindexattrs);
+
+		if (!indexInfo->ii_ReadyForInserts)
+			indxnotreadyattrs = bms_add_members(indxnotreadyattrs,
+					exprindexattrs);
+
+		/*
+		 * Check if the index has the amrecheck method defined. If it does
+		 * not, the index does not support WARM updates, so completely
+		 * disable WARM updates on such tables.
+		 */
+		if (!indexDesc->rd_amroutine->amrecheck)
+			supportswarm = false;
+
 
 		index_close(indexDesc, AccessShareLock);
 	}
@@ -4989,15 +5030,22 @@ restart:
 		goto restart;
 	}
 
+	/* Remember if the table can do WARM updates */
+	relation->rd_supportswarm = supportswarm;
+
 	/* Don't leak the old values of these bitmaps, if any */
 	bms_free(relation->rd_indexattr);
 	relation->rd_indexattr = NULL;
+	bms_free(relation->rd_exprindexattr);
+	relation->rd_exprindexattr = NULL;
 	bms_free(relation->rd_keyattr);
 	relation->rd_keyattr = NULL;
 	bms_free(relation->rd_pkattr);
 	relation->rd_pkattr = NULL;
 	bms_free(relation->rd_idattr);
 	relation->rd_idattr = NULL;
+	bms_free(relation->rd_indxnotreadyattr);
+	relation->rd_indxnotreadyattr = NULL;
 
 	/*
 	 * Now save copies of the bitmaps in the relcache entry.  We intentionally
@@ -5010,7 +5058,9 @@ restart:
 	relation->rd_keyattr = bms_copy(uindexattrs);
 	relation->rd_pkattr = bms_copy(pkindexattrs);
 	relation->rd_idattr = bms_copy(idindexattrs);
-	relation->rd_indexattr = bms_copy(indexattrs);
+	relation->rd_exprindexattr = bms_copy(exprindexattrs);
+	relation->rd_indexattr = bms_copy(bms_union(indexattrs, exprindexattrs));
+	relation->rd_indxnotreadyattr = bms_copy(indxnotreadyattrs);
 	MemoryContextSwitchTo(oldcxt);
 
 	/* We return our original working copy for caller to play with */
@@ -5024,6 +5074,10 @@ restart:
 			return bms_copy(relation->rd_pkattr);
 		case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 			return idindexattrs;
+		case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+			return exprindexattrs;
+		case INDEX_ATTR_BITMAP_NOTREADY:
+			return indxnotreadyattrs;
 		default:
 			elog(ERROR, "unknown attrKind %u", attrKind);
 			return NULL;
@@ -5636,6 +5690,7 @@ load_relcache_init_file(bool shared)
 		rel->rd_keyattr = NULL;
 		rel->rd_pkattr = NULL;
 		rel->rd_idattr = NULL;
+		rel->rd_indxnotreadyattr = NULL;
 		rel->rd_pubactions = NULL;
 		rel->rd_statvalid = false;
 		rel->rd_statlist = NIL;
diff --git a/src/backend/utils/time/combocid.c b/src/backend/utils/time/combocid.c
index baff998..6a2e2f2 100644
--- a/src/backend/utils/time/combocid.c
+++ b/src/backend/utils/time/combocid.c
@@ -106,7 +106,7 @@ HeapTupleHeaderGetCmin(HeapTupleHeader tup)
 {
 	CommandId	cid = HeapTupleHeaderGetRawCommandId(tup);
 
-	Assert(!(tup->t_infomask & HEAP_MOVED));
+	Assert(!(HeapTupleHeaderIsMoved(tup)));
 	Assert(TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXmin(tup)));
 
 	if (tup->t_infomask & HEAP_COMBOCID)
@@ -120,7 +120,7 @@ HeapTupleHeaderGetCmax(HeapTupleHeader tup)
 {
 	CommandId	cid = HeapTupleHeaderGetRawCommandId(tup);
 
-	Assert(!(tup->t_infomask & HEAP_MOVED));
+	Assert(!(HeapTupleHeaderIsMoved(tup)));
 
 	/*
 	 * Because GetUpdateXid() performs memory allocations if xmax is a
diff --git a/src/backend/utils/time/tqual.c b/src/backend/utils/time/tqual.c
index 519f3b6..e54d0df 100644
--- a/src/backend/utils/time/tqual.c
+++ b/src/backend/utils/time/tqual.c
@@ -186,7 +186,7 @@ HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer)
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -205,7 +205,7 @@ HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer)
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -377,7 +377,7 @@ HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -396,7 +396,7 @@ HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -471,7 +471,7 @@ HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid,
 			return HeapTupleInvisible;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -490,7 +490,7 @@ HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -753,7 +753,7 @@ HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -772,7 +772,7 @@ HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -974,7 +974,7 @@ HeapTupleSatisfiesMVCC(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -993,7 +993,7 @@ HeapTupleSatisfiesMVCC(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -1180,7 +1180,7 @@ HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin,
 		if (HeapTupleHeaderXminInvalid(tuple))
 			return HEAPTUPLE_DEAD;
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_OFF)
+		else if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -1198,7 +1198,7 @@ HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin,
 						InvalidTransactionId);
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
diff --git a/src/include/access/amapi.h b/src/include/access/amapi.h
index f919cf8..8b7af1e 100644
--- a/src/include/access/amapi.h
+++ b/src/include/access/amapi.h
@@ -13,6 +13,7 @@
 #define AMAPI_H
 
 #include "access/genam.h"
+#include "access/itup.h"
 
 /*
  * We don't wish to include planner header files here, since most of an index
@@ -74,6 +75,14 @@ typedef bool (*aminsert_function) (Relation indexRelation,
 											   Relation heapRelation,
 											   IndexUniqueCheck checkUnique,
 											   struct IndexInfo *indexInfo);
+/* insert this WARM tuple */
+typedef bool (*amwarminsert_function) (Relation indexRelation,
+											   Datum *values,
+											   bool *isnull,
+											   ItemPointer heap_tid,
+											   Relation heapRelation,
+											   IndexUniqueCheck checkUnique,
+											   struct IndexInfo *indexInfo);
 
 /* bulk delete */
 typedef IndexBulkDeleteResult *(*ambulkdelete_function) (IndexVacuumInfo *info,
@@ -152,6 +161,11 @@ typedef void (*aminitparallelscan_function) (void *target);
 /* (re)start parallel index scan */
 typedef void (*amparallelrescan_function) (IndexScanDesc scan);
 
+/* recheck index tuple and heap tuple match */
+typedef bool (*amrecheck_function) (Relation indexRel,
+		struct IndexInfo *indexInfo, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+
 /*
  * API struct for an index AM.  Note this must be stored in a single palloc'd
  * chunk of memory.
@@ -198,6 +212,7 @@ typedef struct IndexAmRoutine
 	ambuild_function ambuild;
 	ambuildempty_function ambuildempty;
 	aminsert_function aminsert;
+	amwarminsert_function amwarminsert;
 	ambulkdelete_function ambulkdelete;
 	amvacuumcleanup_function amvacuumcleanup;
 	amcanreturn_function amcanreturn;	/* can be NULL */
@@ -217,6 +232,9 @@ typedef struct IndexAmRoutine
 	amestimateparallelscan_function amestimateparallelscan;		/* can be NULL */
 	aminitparallelscan_function aminitparallelscan;		/* can be NULL */
 	amparallelrescan_function amparallelrescan; /* can be NULL */
+
+	/* interface function to support WARM */
+	amrecheck_function amrecheck;		/* can be NULL */
 } IndexAmRoutine;
 
 
diff --git a/src/include/access/genam.h b/src/include/access/genam.h
index f467b18..965be45 100644
--- a/src/include/access/genam.h
+++ b/src/include/access/genam.h
@@ -75,12 +75,29 @@ typedef struct IndexBulkDeleteResult
 	bool		estimated_count;	/* num_index_tuples is an estimate */
 	double		num_index_tuples;		/* tuples remaining */
 	double		tuples_removed; /* # removed during vacuum operation */
+	double		num_warm_pointers;	/* # WARM pointers found */
+	double		num_clear_pointers;	/* # CLEAR pointers found */
+	double		pointers_cleared;	/* # WARM pointers cleared */
+	double		warm_pointers_removed;	/* # WARM pointers removed */
+	double		clear_pointers_removed;	/* # CLEAR pointers removed */
 	BlockNumber pages_deleted;	/* # unused pages in index */
 	BlockNumber pages_free;		/* # pages available for reuse */
 } IndexBulkDeleteResult;
 
+/*
+ * IndexBulkDeleteCallback should return one of the following
+ */
+typedef enum IndexBulkDeleteCallbackResult
+{
+	IBDCR_KEEP,			/* index tuple should be preserved */
+	IBDCR_DELETE,		/* index tuple should be deleted */
+	IBDCR_CLEAR_WARM	/* index tuple should be cleared of WARM bit */
+} IndexBulkDeleteCallbackResult;
+
 /* Typedef for callback function to determine if a tuple is bulk-deletable */
-typedef bool (*IndexBulkDeleteCallback) (ItemPointer itemptr, void *state);
+typedef IndexBulkDeleteCallbackResult (*IndexBulkDeleteCallback) (
+										 ItemPointer itemptr,
+										 bool is_warm, void *state);
 
 /* struct definitions appear in relscan.h */
 typedef struct IndexScanDescData *IndexScanDesc;
@@ -135,7 +152,8 @@ extern bool index_insert(Relation indexRelation,
 			 ItemPointer heap_t_ctid,
 			 Relation heapRelation,
 			 IndexUniqueCheck checkUnique,
-			 struct IndexInfo *indexInfo);
+			 struct IndexInfo *indexInfo,
+			 bool warm_update);
 
 extern IndexScanDesc index_beginscan(Relation heapRelation,
 				Relation indexRelation,
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 5540e12..2217af9 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -72,6 +72,20 @@ typedef struct HeapUpdateFailureData
 	CommandId	cmax;
 } HeapUpdateFailureData;
 
+typedef int HeapCheckWarmChainStatus;
+
+#define HCWC_CLEAR_TUPLE		0x0001
+#define	HCWC_WARM_TUPLE			0x0002
+#define HCWC_WARM_UPDATED_TUPLE	0x0004
+
+#define HCWC_IS_MIXED(status) \
+	(((status) & (HCWC_CLEAR_TUPLE | HCWC_WARM_TUPLE)) == (HCWC_CLEAR_TUPLE | HCWC_WARM_TUPLE))
+#define HCWC_IS_ALL_WARM(status) \
+	(((status) & HCWC_CLEAR_TUPLE) == 0)
+#define HCWC_IS_ALL_CLEAR(status) \
+	(((status) & HCWC_WARM_TUPLE) == 0)
+#define HCWC_IS_WARM_UPDATED(status) \
+	(((status) & HCWC_WARM_UPDATED_TUPLE) != 0)
 
 /* ----------------
  *		function prototypes for heap access method
@@ -137,9 +151,10 @@ extern bool heap_fetch(Relation relation, Snapshot snapshot,
 		   Relation stats_relation);
 extern bool heap_hot_search_buffer(ItemPointer tid, Relation relation,
 					   Buffer buffer, Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call);
+					   bool *all_dead, bool first_call, bool *recheck);
 extern bool heap_hot_search(ItemPointer tid, Relation relation,
-				Snapshot snapshot, bool *all_dead);
+				Snapshot snapshot, bool *all_dead,
+				bool *recheck, Buffer *buffer, HeapTuple heapTuple);
 
 extern void heap_get_latest_tid(Relation relation, Snapshot snapshot,
 					ItemPointer tid);
@@ -161,7 +176,8 @@ extern void heap_abort_speculative(Relation relation, HeapTuple tuple);
 extern HTSU_Result heap_update(Relation relation, ItemPointer otid,
 			HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode);
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update);
 extern HTSU_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
 				CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
 				bool follow_update,
@@ -176,10 +192,16 @@ extern bool heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple);
 extern Oid	simple_heap_insert(Relation relation, HeapTuple tup);
 extern void simple_heap_delete(Relation relation, ItemPointer tid);
 extern void simple_heap_update(Relation relation, ItemPointer otid,
-				   HeapTuple tup);
+				   HeapTuple tup,
+				   Bitmapset **modified_attrs,
+				   bool *warm_update);
 
 extern void heap_sync(Relation relation);
 extern void heap_update_snapshot(HeapScanDesc scan, Snapshot snapshot);
+extern HeapCheckWarmChainStatus heap_check_warm_chain(Page dp,
+				   ItemPointer tid, bool stop_at_warm);
+extern int heap_clear_warm_chain(Page dp, ItemPointer tid,
+				   OffsetNumber *cleared_offnums);
 
 /* in heap/pruneheap.c */
 extern void heap_page_prune_opt(Relation relation, Buffer buffer);
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index e6019d5..66fd0ea 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -32,7 +32,7 @@
 #define XLOG_HEAP_INSERT		0x00
 #define XLOG_HEAP_DELETE		0x10
 #define XLOG_HEAP_UPDATE		0x20
-/* 0x030 is free, was XLOG_HEAP_MOVE */
+#define XLOG_HEAP_MULTI_INSERT	0x30
 #define XLOG_HEAP_HOT_UPDATE	0x40
 #define XLOG_HEAP_CONFIRM		0x50
 #define XLOG_HEAP_LOCK			0x60
@@ -47,18 +47,23 @@
 /*
  * We ran out of opcodes, so heapam.c now has a second RmgrId.  These opcodes
  * are associated with RM_HEAP2_ID, but are not logically different from
- * the ones above associated with RM_HEAP_ID.  XLOG_HEAP_OPMASK applies to
- * these, too.
+ * the ones above associated with RM_HEAP_ID.
+ *
+ * In PG 10, we moved XLOG_HEAP2_MULTI_INSERT to RM_HEAP_ID. That frees up
+ * the 0x80 bit in RM_HEAP2_ID, potentially making room for another 8
+ * opcodes in RM_HEAP2_ID.
  */
 #define XLOG_HEAP2_REWRITE		0x00
 #define XLOG_HEAP2_CLEAN		0x10
 #define XLOG_HEAP2_FREEZE_PAGE	0x20
 #define XLOG_HEAP2_CLEANUP_INFO 0x30
 #define XLOG_HEAP2_VISIBLE		0x40
-#define XLOG_HEAP2_MULTI_INSERT 0x50
+#define XLOG_HEAP2_WARMCLEAR	0x50
 #define XLOG_HEAP2_LOCK_UPDATED 0x60
 #define XLOG_HEAP2_NEW_CID		0x70
 
+#define XLOG_HEAP2_OPMASK		0x70
+
 /*
  * xl_heap_insert/xl_heap_multi_insert flag values, 8 bits are available.
  */
@@ -80,6 +85,7 @@
 #define XLH_UPDATE_CONTAINS_NEW_TUPLE			(1<<4)
 #define XLH_UPDATE_PREFIX_FROM_OLD				(1<<5)
 #define XLH_UPDATE_SUFFIX_FROM_OLD				(1<<6)
+#define XLH_UPDATE_WARM_UPDATE					(1<<7)
 
 /* convenience macro for checking whether any form of old tuple was logged */
 #define XLH_UPDATE_CONTAINS_OLD						\
@@ -225,6 +231,14 @@ typedef struct xl_heap_clean
 
 #define SizeOfHeapClean (offsetof(xl_heap_clean, ndead) + sizeof(uint16))
 
+typedef struct xl_heap_warmclear
+{
+	uint16		ncleared;
+	/* OFFSET NUMBERS are in the block reference 0 */
+} xl_heap_warmclear;
+
+#define SizeOfHeapWarmClear (offsetof(xl_heap_warmclear, ncleared) + sizeof(uint16))
+
 /*
  * Cleanup_info is required in some cases during a lazy VACUUM.
  * Used for reporting the results of HeapTupleHeaderAdvanceLatestRemovedXid()
@@ -388,6 +402,8 @@ extern XLogRecPtr log_heap_clean(Relation reln, Buffer buffer,
 			   OffsetNumber *nowdead, int ndead,
 			   OffsetNumber *nowunused, int nunused,
 			   TransactionId latestRemovedXid);
+extern XLogRecPtr log_heap_warmclear(Relation reln, Buffer buffer,
+			   OffsetNumber *cleared, int ncleared);
 extern XLogRecPtr log_heap_freeze(Relation reln, Buffer buffer,
 				TransactionId cutoff_xid, xl_heap_freeze_tuple *tuples,
 				int ntuples);
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 4d614b7..bcefba6 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -201,6 +201,21 @@ struct HeapTupleHeaderData
 										 * upgrade support */
 #define HEAP_MOVED (HEAP_MOVED_OFF | HEAP_MOVED_IN)
 
+/*
+ * A WARM chain usually consists of two parts. Each part is a HOT chain in
+ * itself, i.e. all indexed columns have the same value within it, but a
+ * WARM update separates the two parts. We need a mechanism to identify
+ * which part a tuple belongs to. We can't just check
+ * HeapTupleHeaderIsWarmUpdated() because during a WARM update, both the
+ * old and the new tuple are marked as WARM-updated.
+ *
+ * We need another infomask bit for this, so we reuse the infomask bit that
+ * was earlier used by old-style VACUUM FULL. This is safe because the
+ * HEAP_WARM_TUPLE flag is always set along with HEAP_WARM_UPDATED. So if
+ * both HEAP_WARM_TUPLE and HEAP_WARM_UPDATED are set, we know the tuple
+ * belongs to the second part of the WARM chain.
+ */
+#define HEAP_WARM_TUPLE			0x4000
 #define HEAP_XACT_MASK			0xFFF0	/* visibility-related bits */
 
 /*
@@ -260,7 +275,11 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x0800 are available */
+#define HEAP_WARM_UPDATED		0x0800	/*
+										 * This or a prior version of this
+										 * tuple in the current HOT chain was
+										 * once WARM updated
+										 */
 #define HEAP_LATEST_TUPLE		0x1000	/*
 										 * This is the last tuple in chain and
 										 * ip_posid points to the root line
@@ -271,7 +290,7 @@ struct HeapTupleHeaderData
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF800	/* visibility-related bits */
 
 
 /*
@@ -396,7 +415,7 @@ struct HeapTupleHeaderData
 /* SetCmin is reasonably simple since we never need a combo CID */
 #define HeapTupleHeaderSetCmin(tup, cid) \
 do { \
-	Assert(!((tup)->t_infomask & HEAP_MOVED)); \
+	Assert(!HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_cid = (cid); \
 	(tup)->t_infomask &= ~HEAP_COMBOCID; \
 } while (0)
@@ -404,7 +423,7 @@ do { \
 /* SetCmax must be used after HeapTupleHeaderAdjustCmax; see combocid.c */
 #define HeapTupleHeaderSetCmax(tup, cid, iscombo) \
 do { \
-	Assert(!((tup)->t_infomask & HEAP_MOVED)); \
+	Assert(!HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_cid = (cid); \
 	if (iscombo) \
 		(tup)->t_infomask |= HEAP_COMBOCID; \
@@ -414,7 +433,7 @@ do { \
 
 #define HeapTupleHeaderGetXvac(tup) \
 ( \
-	((tup)->t_infomask & HEAP_MOVED) ? \
+	HeapTupleHeaderIsMoved(tup) ? \
 		(tup)->t_choice.t_heap.t_field3.t_xvac \
 	: \
 		InvalidTransactionId \
@@ -422,7 +441,7 @@ do { \
 
 #define HeapTupleHeaderSetXvac(tup, xid) \
 do { \
-	Assert((tup)->t_infomask & HEAP_MOVED); \
+	Assert(HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_xvac = (xid); \
 } while (0)
 
@@ -510,6 +529,21 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+#define HeapTupleHeaderSetWarmUpdated(tup) \
+do { \
+	(tup)->t_infomask2 |= HEAP_WARM_UPDATED; \
+} while (0)
+
+#define HeapTupleHeaderClearWarmUpdated(tup) \
+do { \
+	(tup)->t_infomask2 &= ~HEAP_WARM_UPDATED; \
+} while (0)
+
+#define HeapTupleHeaderIsWarmUpdated(tup) \
+( \
+  ((tup)->t_infomask2 & HEAP_WARM_UPDATED) != 0 \
+)
+
 /*
  * Mark this as the last tuple in the HOT chain. Before PG v10 we used to store
  * the TID of the tuple itself in t_ctid field to mark the end of the chain.
@@ -635,6 +669,58 @@ do { \
 )
 
 /*
+ * Macros to check if a tuple was moved off/in by old-style VACUUM FULL from
+ * the pre-9.0 era. Such tuples must not have the HEAP_WARM_TUPLE flag set.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderIsMovedOff(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED_OFF) \
+)
+
+#define HeapTupleHeaderIsMovedIn(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED_IN) \
+)
+
+#define HeapTupleHeaderIsMoved(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED) \
+)
+
+/*
+ * Check if a tuple belongs to the second part of the WARM chain.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderIsWarm(tuple) \
+( \
+	HeapTupleHeaderIsWarmUpdated(tuple) && \
+	(((tuple)->t_infomask & HEAP_WARM_TUPLE) != 0) \
+)
+
+/*
+ * Mark a tuple as a member of the second part of the chain. This must only be
+ * done on a tuple that is already marked as WARM-updated.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderSetWarm(tuple) \
+( \
+	AssertMacro(HeapTupleHeaderIsWarmUpdated(tuple)), \
+	(tuple)->t_infomask |= HEAP_WARM_TUPLE \
+)
+
+#define HeapTupleHeaderClearWarm(tuple) \
+( \
+	(tuple)->t_infomask &= ~HEAP_WARM_TUPLE \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
@@ -785,6 +871,24 @@ struct MinimalTupleData
 #define HeapTupleClearHeapOnly(tuple) \
 		HeapTupleHeaderClearHeapOnly((tuple)->t_data)
 
+#define HeapTupleIsWarmUpdated(tuple) \
+		HeapTupleHeaderIsWarmUpdated((tuple)->t_data)
+
+#define HeapTupleSetWarmUpdated(tuple) \
+		HeapTupleHeaderSetWarmUpdated((tuple)->t_data)
+
+#define HeapTupleClearWarmUpdated(tuple) \
+		HeapTupleHeaderClearWarmUpdated((tuple)->t_data)
+
+#define HeapTupleIsWarm(tuple) \
+		HeapTupleHeaderIsWarm((tuple)->t_data)
+
+#define HeapTupleSetWarm(tuple) \
+		HeapTupleHeaderSetWarm((tuple)->t_data)
+
+#define HeapTupleClearWarm(tuple) \
+		HeapTupleHeaderClearWarm((tuple)->t_data)
+
 #define HeapTupleGetOid(tuple) \
 		HeapTupleHeaderGetOid((tuple)->t_data)
 
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index f9304db..163180d 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -427,6 +427,12 @@ typedef BTScanOpaqueData *BTScanOpaque;
 #define SK_BT_NULLS_FIRST	(INDOPTION_NULLS_FIRST << SK_BT_INDOPTION_SHIFT)
 
 /*
+ * Flags overloaded on the t_tid.ip_posid field. They are managed by
+ * ItemPointerSetFlags and corresponding routines.
+ */
+#define BTREE_INDEX_WARM_POINTER	0x01
+
+/*
  * external entry points for btree, in nbtree.c
  */
 extern IndexBuildResult *btbuild(Relation heap, Relation index,
@@ -436,6 +442,10 @@ extern bool btinsert(Relation rel, Datum *values, bool *isnull,
 		 ItemPointer ht_ctid, Relation heapRel,
 		 IndexUniqueCheck checkUnique,
 		 struct IndexInfo *indexInfo);
+extern bool btwarminsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 struct IndexInfo *indexInfo);
 extern IndexScanDesc btbeginscan(Relation rel, int nkeys, int norderbys);
 extern Size btestimateparallelscan(void);
 extern void btinitparallelscan(void *target);
@@ -487,10 +497,12 @@ extern void _bt_pageinit(Page page, Size size);
 extern bool _bt_page_recyclable(Page page);
 extern void _bt_delitems_delete(Relation rel, Buffer buf,
 					OffsetNumber *itemnos, int nitems, Relation heapRel);
-extern void _bt_delitems_vacuum(Relation rel, Buffer buf,
-					OffsetNumber *itemnos, int nitems,
-					BlockNumber lastBlockVacuumed);
+extern void _bt_handleitems_vacuum(Relation rel, Buffer buf,
+					OffsetNumber *delitemnos, int ndelitems,
+					OffsetNumber *clearitemnos, int nclearitems);
 extern int	_bt_pagedel(Relation rel, Buffer buf);
+extern void	_bt_clear_items(Page page, OffsetNumber *clearitemnos,
+					uint16 nclearitems);
 
 /*
  * prototypes for functions in nbtsearch.c
@@ -537,6 +549,9 @@ extern bytea *btoptions(Datum reloptions, bool validate);
 extern bool btproperty(Oid index_oid, int attno,
 		   IndexAMProperty prop, const char *propname,
 		   bool *res, bool *isnull);
+extern bool btrecheck(Relation indexRel, struct IndexInfo *indexInfo,
+		IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * prototypes for functions in nbtvalidate.c
diff --git a/src/include/access/nbtxlog.h b/src/include/access/nbtxlog.h
index d6a3085..6a86628 100644
--- a/src/include/access/nbtxlog.h
+++ b/src/include/access/nbtxlog.h
@@ -142,7 +142,8 @@ typedef struct xl_btree_reuse_page
 /*
  * This is what we need to know about vacuum of individual leaf index tuples.
  * The WAL record can represent deletion of any number of index tuples on a
- * single index page when executed by VACUUM.
+ * single index page when executed by VACUUM. It also includes tuples which
+ * are cleared of WARM bits by VACUUM.
  *
  * For MVCC scans, lastBlockVacuumed will be set to InvalidBlockNumber.
  * For a non-MVCC index scans there is an additional correctness requirement
@@ -165,11 +166,12 @@ typedef struct xl_btree_reuse_page
 typedef struct xl_btree_vacuum
 {
 	BlockNumber lastBlockVacuumed;
-
-	/* TARGET OFFSET NUMBERS FOLLOW */
+	uint16		ndelitems;
+	uint16		nclearitems;
+	/* ndelitems + nclearitems TARGET OFFSET NUMBERS FOLLOW */
 } xl_btree_vacuum;
 
-#define SizeOfBtreeVacuum	(offsetof(xl_btree_vacuum, lastBlockVacuumed) + sizeof(BlockNumber))
+#define SizeOfBtreeVacuum	(offsetof(xl_btree_vacuum, nclearitems) + sizeof(uint16))
 
 /*
  * This is what we need to know about marking an empty branch for deletion.
diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h
index 3fc726d..fa178d3 100644
--- a/src/include/access/relscan.h
+++ b/src/include/access/relscan.h
@@ -104,6 +104,9 @@ typedef struct IndexScanDescData
 	/* index access method's private state */
 	void	   *opaque;			/* access-method-specific info */
 
+	/* IndexInfo structure for this index */
+	struct IndexInfo  *indexInfo;
+
 	/*
 	 * In an index-only scan, a successful amgettuple call must fill either
 	 * xs_itup (and xs_itupdesc) or xs_hitup (and xs_hitupdesc) to provide the
@@ -119,7 +122,7 @@ typedef struct IndexScanDescData
 	HeapTupleData xs_ctup;		/* current heap tuple, if any */
 	Buffer		xs_cbuf;		/* current heap buffer in scan, if any */
 	/* NB: if xs_cbuf is not InvalidBuffer, we hold a pin on that buffer */
-	bool		xs_recheck;		/* T means scan keys must be rechecked */
+	bool		xs_recheck;		/* T means scan keys must be rechecked for each tuple */
 
 	/*
 	 * When fetching with an ordering operator, the values of the ORDER BY
diff --git a/src/include/catalog/index.h b/src/include/catalog/index.h
index 20bec90..f92ec29 100644
--- a/src/include/catalog/index.h
+++ b/src/include/catalog/index.h
@@ -89,6 +89,13 @@ extern void FormIndexDatum(IndexInfo *indexInfo,
 			   Datum *values,
 			   bool *isnull);
 
+extern void FormIndexPlainDatum(IndexInfo *indexInfo,
+			   Relation heapRel,
+			   HeapTuple heapTup,
+			   Datum *values,
+			   bool *isnull,
+			   bool *isavail);
+
 extern void index_build(Relation heapRelation,
 			Relation indexRelation,
 			IndexInfo *indexInfo,
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index ee67459..509adda 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -2783,6 +2783,8 @@ DATA(insert OID = 1933 (  pg_stat_get_tuples_deleted	PGNSP PGUID 12 1 0 0 0 f f
 DESCR("statistics: number of tuples deleted");
 DATA(insert OID = 1972 (  pg_stat_get_tuples_hot_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated");
+DATA(insert OID = 3373 (  pg_stat_get_tuples_warm_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated");
 DATA(insert OID = 2878 (  pg_stat_get_live_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_live_tuples _null_ _null_ _null_ ));
 DESCR("statistics: number of live tuples");
 DATA(insert OID = 2879 (  pg_stat_get_dead_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_dead_tuples _null_ _null_ _null_ ));
@@ -2935,6 +2937,8 @@ DATA(insert OID = 3042 (  pg_stat_get_xact_tuples_deleted		PGNSP PGUID 12 1 0 0
 DESCR("statistics: number of tuples deleted in current transaction");
 DATA(insert OID = 3043 (  pg_stat_get_xact_tuples_hot_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated in current transaction");
+DATA(insert OID = 3359 (  pg_stat_get_xact_tuples_warm_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated in current transaction");
 DATA(insert OID = 3044 (  pg_stat_get_xact_blocks_fetched		PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_fetched _null_ _null_ _null_ ));
 DESCR("statistics: number of blocks fetched in current transaction");
 DATA(insert OID = 3045 (  pg_stat_get_xact_blocks_hit			PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_hit _null_ _null_ _null_ ));
diff --git a/src/include/commands/progress.h b/src/include/commands/progress.h
index 9472ecc..b355b61 100644
--- a/src/include/commands/progress.h
+++ b/src/include/commands/progress.h
@@ -25,6 +25,7 @@
 #define PROGRESS_VACUUM_NUM_INDEX_VACUUMS		4
 #define PROGRESS_VACUUM_MAX_DEAD_TUPLES			5
 #define PROGRESS_VACUUM_NUM_DEAD_TUPLES			6
+#define PROGRESS_VACUUM_HEAP_BLKS_WARMCLEARED	7
 
 /* Phases of vacuum (as advertised via PROGRESS_VACUUM_PHASE) */
 #define PROGRESS_VACUUM_PHASE_SCAN_HEAP			1
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index d3849b9..7e1ec56 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -506,6 +506,7 @@ extern int	ExecCleanTargetListLength(List *targetlist);
 extern void ExecOpenIndices(ResultRelInfo *resultRelInfo, bool speculative);
 extern void ExecCloseIndices(ResultRelInfo *resultRelInfo);
 extern List *ExecInsertIndexTuples(TupleTableSlot *slot, ItemPointer tupleid,
+					  ItemPointer root_tid, Bitmapset *modified_attrs,
 					  EState *estate, bool noDupErr, bool *specConflict,
 					  List *arbiterIndexes);
 extern bool ExecCheckIndexConstraints(TupleTableSlot *slot, EState *estate,
diff --git a/src/include/executor/nodeIndexscan.h b/src/include/executor/nodeIndexscan.h
index ea3f3a5..ebeec74 100644
--- a/src/include/executor/nodeIndexscan.h
+++ b/src/include/executor/nodeIndexscan.h
@@ -41,5 +41,4 @@ extern void ExecIndexEvalRuntimeKeys(ExprContext *econtext,
 extern bool ExecIndexEvalArrayKeys(ExprContext *econtext,
 					   IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 extern bool ExecIndexAdvanceArrayKeys(IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
-
 #endif   /* NODEINDEXSCAN_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index ff42895..042003a 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -132,6 +132,7 @@ typedef struct IndexInfo
 	NodeTag		type;
 	int			ii_NumIndexAttrs;
 	AttrNumber	ii_KeyAttrNumbers[INDEX_MAX_KEYS];
+	Bitmapset  *ii_indxattrs;	/* bitmap of all columns used in this index */
 	List	   *ii_Expressions; /* list of Expr */
 	List	   *ii_ExpressionsState;	/* list of ExprState */
 	List	   *ii_Predicate;	/* list of Expr */
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 2015625..4b7d671 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -105,6 +105,7 @@ typedef struct PgStat_TableCounts
 	PgStat_Counter t_tuples_updated;
 	PgStat_Counter t_tuples_deleted;
 	PgStat_Counter t_tuples_hot_updated;
+	PgStat_Counter t_tuples_warm_updated;
 	bool		t_truncated;
 
 	PgStat_Counter t_delta_live_tuples;
@@ -625,6 +626,7 @@ typedef struct PgStat_StatTabEntry
 	PgStat_Counter tuples_updated;
 	PgStat_Counter tuples_deleted;
 	PgStat_Counter tuples_hot_updated;
+	PgStat_Counter tuples_warm_updated;
 
 	PgStat_Counter n_live_tuples;
 	PgStat_Counter n_dead_tuples;
@@ -1259,7 +1261,7 @@ pgstat_report_wait_end(void)
 	(pgStatBlockWriteTime += (n))
 
 extern void pgstat_count_heap_insert(Relation rel, PgStat_Counter n);
-extern void pgstat_count_heap_update(Relation rel, bool hot);
+extern void pgstat_count_heap_update(Relation rel, bool hot, bool warm);
 extern void pgstat_count_heap_delete(Relation rel);
 extern void pgstat_count_truncate(Relation rel);
 extern void pgstat_update_heap_dead_tuples(Relation rel, int delta);
diff --git a/src/include/storage/bufpage.h b/src/include/storage/bufpage.h
index e956dc3..1852195 100644
--- a/src/include/storage/bufpage.h
+++ b/src/include/storage/bufpage.h
@@ -433,6 +433,8 @@ extern void PageIndexMultiDelete(Page page, OffsetNumber *itemnos, int nitems);
 extern void PageIndexTupleDeleteNoCompact(Page page, OffsetNumber offset);
 extern bool PageIndexTupleOverwrite(Page page, OffsetNumber offnum,
 						Item newtup, Size newsize);
+extern void PageIndexClearWarmTuples(Page page, OffsetNumber *clearitemnos,
+						uint16 nclearitems);
 extern char *PageSetChecksumCopy(Page page, BlockNumber blkno);
 extern void PageSetChecksumInplace(Page page, BlockNumber blkno);
 
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index ab875bb..cd1976a 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -142,9 +142,14 @@ typedef struct RelationData
 
 	/* data managed by RelationGetIndexAttrBitmap: */
 	Bitmapset  *rd_indexattr;	/* identifies columns used in indexes */
+	Bitmapset  *rd_exprindexattr; /* identifies columns used in expression or
+									 predicate indexes */
+	Bitmapset  *rd_indxnotreadyattr;	/* columns used by indexes not yet
+										   ready */
 	Bitmapset  *rd_keyattr;		/* cols that can be ref'd by foreign keys */
 	Bitmapset  *rd_pkattr;		/* cols included in primary key */
 	Bitmapset  *rd_idattr;		/* included in replica identity index */
+	bool		rd_supportswarm;	/* true if the table can be WARM updated */
 
 	PublicationActions  *rd_pubactions;	/* publication actions */
 
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index 81af3ae..d5b3072 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -51,7 +51,9 @@ typedef enum IndexAttrBitmapKind
 	INDEX_ATTR_BITMAP_ALL,
 	INDEX_ATTR_BITMAP_KEY,
 	INDEX_ATTR_BITMAP_PRIMARY_KEY,
-	INDEX_ATTR_BITMAP_IDENTITY_KEY
+	INDEX_ATTR_BITMAP_IDENTITY_KEY,
+	INDEX_ATTR_BITMAP_EXPR_PREDICATE,
+	INDEX_ATTR_BITMAP_NOTREADY
 } IndexAttrBitmapKind;
 
 extern Bitmapset *RelationGetIndexAttrBitmap(Relation relation,
diff --git a/src/test/regress/expected/alter_generic.out b/src/test/regress/expected/alter_generic.out
index ce581bb..85e4c70 100644
--- a/src/test/regress/expected/alter_generic.out
+++ b/src/test/regress/expected/alter_generic.out
@@ -161,15 +161,15 @@ ALTER SERVER alt_fserv1 RENAME TO alt_fserv3;   -- OK
 SELECT fdwname FROM pg_foreign_data_wrapper WHERE fdwname like 'alt_fdw%';
  fdwname  
 ----------
- alt_fdw2
  alt_fdw3
+ alt_fdw2
 (2 rows)
 
 SELECT srvname FROM pg_foreign_server WHERE srvname like 'alt_fserv%';
   srvname   
 ------------
- alt_fserv2
  alt_fserv3
+ alt_fserv2
 (2 rows)
 
 --
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index e8f8726..a37d443 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1755,6 +1755,7 @@ pg_stat_all_tables| SELECT c.oid AS relid,
     pg_stat_get_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_tuples_deleted(c.oid) AS n_tup_del,
     pg_stat_get_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_tuples_warm_updated(c.oid) AS n_tup_warm_upd,
     pg_stat_get_live_tuples(c.oid) AS n_live_tup,
     pg_stat_get_dead_tuples(c.oid) AS n_dead_tup,
     pg_stat_get_mod_since_analyze(c.oid) AS n_mod_since_analyze,
@@ -1902,6 +1903,7 @@ pg_stat_sys_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1945,6 +1947,7 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1982,7 +1985,8 @@ pg_stat_xact_all_tables| SELECT c.oid AS relid,
     pg_stat_get_xact_tuples_inserted(c.oid) AS n_tup_ins,
     pg_stat_get_xact_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_xact_tuples_deleted(c.oid) AS n_tup_del,
-    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd
+    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_xact_tuples_warm_updated(c.oid) AS n_tup_warm_upd
    FROM ((pg_class c
      LEFT JOIN pg_index i ON ((c.oid = i.indrelid)))
      LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
@@ -1998,7 +2002,8 @@ pg_stat_xact_sys_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname = ANY (ARRAY['pg_catalog'::name, 'information_schema'::name])) OR (pg_stat_xact_all_tables.schemaname ~ '^pg_toast'::text));
 pg_stat_xact_user_functions| SELECT p.oid AS funcid,
@@ -2020,7 +2025,8 @@ pg_stat_xact_user_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname <> ALL (ARRAY['pg_catalog'::name, 'information_schema'::name])) AND (pg_stat_xact_all_tables.schemaname !~ '^pg_toast'::text));
 pg_statio_all_indexes| SELECT c.oid AS relid,
diff --git a/src/test/regress/expected/warm.out b/src/test/regress/expected/warm.out
new file mode 100644
index 0000000..8aa1505
--- /dev/null
+++ b/src/test/regress/expected/warm.out
@@ -0,0 +1,747 @@
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+-- This should be a HOT update since only a non-indexed column is updated,
+-- but the page won't have any free space, so it's probably a non-HOT update
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab1
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Check if index only scan works correctly
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab1
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx1 on updtst_tab1
+   Index Cond: (b = 140001)
+(2 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab1;
+------------------
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab2
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE a = 1;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab2
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE b = 701;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+VACUUM updtst_tab2;
+EXPLAIN (costs off) SELECT b FROM updtst_tab2 WHERE b = 701;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx2 on updtst_tab2
+   Index Cond: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab2 WHERE b = 701;
+  b  
+-----
+ 701
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab2;
+------------------
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 1;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 701;
+       QUERY PLAN        
+-------------------------
+ Seq Scan on updtst_tab3
+   Filter: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 701;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+  b   
+------
+ 1421
+(1 row)
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+SET enable_seqscan = false;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    98
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 2;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 702;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx3 on updtst_tab3
+   Index Cond: (b = 702)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 702;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+  b   
+------
+ 1422
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab3;
+------------------
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+explain select * from test_warm where lower(a) = 'test';
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on test_warm  (cost=4.18..12.65 rows=4 width=64)
+   Recheck Cond: (lower(a) = 'test'::text)
+   ->  Bitmap Index Scan on test_warmindx  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (lower(a) = 'test'::text)
+(4 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+select *, ctid from test_warm where a = 'test';
+ a | b | ctid 
+---+---+------
+(0 rows)
+
+select *, ctid from test_warm where a = 'TEST';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+                                   QUERY PLAN                                    
+---------------------------------------------------------------------------------
+ Index Scan using test_warmindx on test_warm  (cost=0.15..20.22 rows=4 width=64)
+   Index Cond: (lower(a) = 'test'::text)
+(2 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+DROP TABLE test_warm;
+--- Test with toast data types
+CREATE TABLE test_toast_warm (a int unique, b text, c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+-- insert a large enough value to cause index datum compression
+INSERT INTO test_toast_warm VALUES (1, repeat('a', 600), 100);
+INSERT INTO test_toast_warm VALUES (2, repeat('b', 2), 100);
+INSERT INTO test_toast_warm VALUES (3, repeat('c', 4), 100);
+INSERT INTO test_toast_warm VALUES (4, repeat('d', 63), 100);
+INSERT INTO test_toast_warm VALUES (5, repeat('e', 126), 100);
+INSERT INTO test_toast_warm VALUES (6, repeat('f', 127), 100);
+INSERT INTO test_toast_warm VALUES (7, repeat('g', 128), 100);
+INSERT INTO test_toast_warm VALUES (8, repeat('h', 3200), 100);
+UPDATE test_toast_warm SET b = repeat('q', 600) WHERE a = 1;
+UPDATE test_toast_warm SET b = repeat('r', 2) WHERE a = 2;
+UPDATE test_toast_warm SET b = repeat('s', 4) WHERE a = 3;
+UPDATE test_toast_warm SET b = repeat('t', 63) WHERE a = 4;
+UPDATE test_toast_warm SET b = repeat('u', 126) WHERE a = 5;
+UPDATE test_toast_warm SET b = repeat('v', 127) WHERE a = 6;
+UPDATE test_toast_warm SET b = repeat('w', 128) WHERE a = 7;
+UPDATE test_toast_warm SET b = repeat('x', 3200) WHERE a = 8;
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+                        QUERY PLAN                         
+-----------------------------------------------------------
+ Index Scan using test_toast_warm_a_key on test_toast_warm
+   Index Cond: (a = 1)
+(2 rows)
+
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+                                                                                                                                                                                                                                                                                                                      QUERY PLAN                                                                                                                                                                                                                                                                                                                      
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Index Scan using test_toast_warm_index on test_toast_warm
+   Index Cond: (b = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'::text)
+(2 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+                                                                                                                                                                                                                                                                                                                      QUERY PLAN                                                                                                                                                                                                                                                                                                                      
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Index Only Scan using test_toast_warm_index on test_toast_warm
+   Index Cond: (b = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'::text)
+(2 rows)
+
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+ a |                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+---+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 1 | qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+ a | b 
+---+---
+(0 rows)
+
+SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+ b 
+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+ a |                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+---+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 1 | qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+ a | b  
+---+----
+ 2 | rr
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+ a |  b   
+---+------
+ 3 | ssss
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+ a |                                b                                
+---+-----------------------------------------------------------------
+ 4 | ttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttt
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+ a |                                                               b                                                                
+---+--------------------------------------------------------------------------------------------------------------------------------
+ 5 | uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+ a |                                                                b                                                                
+---+---------------------------------------------------------------------------------------------------------------------------------
+ 6 | vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+ a |                                                                b                                                                 
+---+----------------------------------------------------------------------------------------------------------------------------------
+ 7 | wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+ a |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                b                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              
+---+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 8 | xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+(1 row)
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                                    QUERY PLAN                                                                                                                                                                                                                                                                                                                    
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 'qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq'::text)
+(2 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                                    QUERY PLAN                                                                                                                                                                                                                                                                                                                    
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 'qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq'::text)
+(2 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+ a |                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+---+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 1 | qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+ a | b  
+---+----
+ 2 | rr
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+ a |  b   
+---+------
+ 3 | ssss
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+ a |                                b                                
+---+-----------------------------------------------------------------
+ 4 | ttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttt
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+ a |                                                               b                                                                
+---+--------------------------------------------------------------------------------------------------------------------------------
+ 5 | uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+ a |                                                                b                                                                
+---+---------------------------------------------------------------------------------------------------------------------------------
+ 6 | vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+ a |                                                                b                                                                 
+---+----------------------------------------------------------------------------------------------------------------------------------
+ 7 | wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+ a |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                b                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              
+---+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 8 | xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+(1 row)
+
+DROP TABLE test_toast_warm;
+-- Test with numeric data type
+CREATE TABLE test_toast_warm (a int unique, b numeric(10,2), c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+INSERT INTO test_toast_warm VALUES (1, 10.2, 100);
+INSERT INTO test_toast_warm VALUES (2, 11.22, 100);
+INSERT INTO test_toast_warm VALUES (3, 12.222, 100);
+INSERT INTO test_toast_warm VALUES (4, 13.20, 100);
+INSERT INTO test_toast_warm VALUES (5, 14.201, 100);
+UPDATE test_toast_warm SET b = 100.2 WHERE a = 1;
+UPDATE test_toast_warm SET b = 101.22 WHERE a = 2;
+UPDATE test_toast_warm SET b = 102.222 WHERE a = 3;
+UPDATE test_toast_warm SET b = 103.20 WHERE a = 4;
+UPDATE test_toast_warm SET b = 104.201 WHERE a = 5;
+SELECT * FROM test_toast_warm;
+ a |   b    |  c  
+---+--------+-----
+ 1 | 100.20 | 100
+ 2 | 101.22 | 100
+ 3 | 102.22 | 100
+ 4 | 103.20 | 100
+ 5 | 104.20 | 100
+(5 rows)
+
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+                        QUERY PLAN                         
+-----------------------------------------------------------
+ Index Scan using test_toast_warm_a_key on test_toast_warm
+   Index Cond: (a = 1)
+(2 rows)
+
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+                    QUERY PLAN                    
+--------------------------------------------------
+ Bitmap Heap Scan on test_toast_warm
+   Recheck Cond: (b = 10.2)
+   ->  Bitmap Index Scan on test_toast_warm_index
+         Index Cond: (b = 10.2)
+(4 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+                    QUERY PLAN                    
+--------------------------------------------------
+ Bitmap Heap Scan on test_toast_warm
+   Recheck Cond: (b = 100.2)
+   ->  Bitmap Index Scan on test_toast_warm_index
+         Index Cond: (b = 100.2)
+(4 rows)
+
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+ a | b 
+---+---
+(0 rows)
+
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+ b 
+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+   b    
+--------
+ 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+ a |   b    
+---+--------
+ 2 | 101.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+ a |   b    
+---+--------
+ 3 | 102.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+ a |   b    
+---+--------
+ 4 | 103.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+ a |   b    
+---+--------
+ 5 | 104.20
+(1 row)
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+         QUERY PLAN          
+-----------------------------
+ Seq Scan on test_toast_warm
+   Filter: (a = 1)
+(2 rows)
+
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+         QUERY PLAN          
+-----------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 10.2)
+(2 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+         QUERY PLAN          
+-----------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 100.2)
+(2 rows)
+
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+ a | b 
+---+---
+(0 rows)
+
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+ b 
+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+   b    
+--------
+ 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+ a |   b    
+---+--------
+ 2 | 101.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+ a |   b    
+---+--------
+ 3 | 102.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+ a |   b    
+---+--------
+ 4 | 103.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+ a |   b    
+---+--------
+ 5 | 104.20
+(1 row)
+
+DROP TABLE test_toast_warm;
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index 9f95b01..cd99f88 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -42,6 +42,8 @@ test: create_type
 test: create_table
 test: create_function_2
 
+test: warm
+
 # ----------
 # Load huge amounts of data
 # We should split the data files into single files and then
diff --git a/src/test/regress/sql/warm.sql b/src/test/regress/sql/warm.sql
new file mode 100644
index 0000000..ab61cfb
--- /dev/null
+++ b/src/test/regress/sql/warm.sql
@@ -0,0 +1,286 @@
+
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+
+-- Only a non-index column is updated, so this could be a HOT update, but
+-- the page won't have any free space, so it is probably a non-HOT update
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Check if index only scan works correctly
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab1;
+
+------------------
+
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE a = 1;
+
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE b = 701;
+
+VACUUM updtst_tab2;
+EXPLAIN (costs off) SELECT b FROM updtst_tab2 WHERE b = 701;
+SELECT b FROM updtst_tab2 WHERE b = 701;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab2;
+------------------
+
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+SELECT * FROM updtst_tab3 WHERE a = 1;
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+SET enable_seqscan = false;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+SELECT * FROM updtst_tab3 WHERE a = 2;
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab3;
+------------------
+
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where a = 'test';
+select *, ctid from test_warm where a = 'TEST';
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+DROP TABLE test_warm;
+
+-- Test with TOAST-able data types
+
+CREATE TABLE test_toast_warm (a int unique, b text, c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+
+-- insert a large enough value to cause index datum compression
+INSERT INTO test_toast_warm VALUES (1, repeat('a', 600), 100);
+INSERT INTO test_toast_warm VALUES (2, repeat('b', 2), 100);
+INSERT INTO test_toast_warm VALUES (3, repeat('c', 4), 100);
+INSERT INTO test_toast_warm VALUES (4, repeat('d', 63), 100);
+INSERT INTO test_toast_warm VALUES (5, repeat('e', 126), 100);
+INSERT INTO test_toast_warm VALUES (6, repeat('f', 127), 100);
+INSERT INTO test_toast_warm VALUES (7, repeat('g', 128), 100);
+INSERT INTO test_toast_warm VALUES (8, repeat('h', 3200), 100);
+
+UPDATE test_toast_warm SET b = repeat('q', 600) WHERE a = 1;
+UPDATE test_toast_warm SET b = repeat('r', 2) WHERE a = 2;
+UPDATE test_toast_warm SET b = repeat('s', 4) WHERE a = 3;
+UPDATE test_toast_warm SET b = repeat('t', 63) WHERE a = 4;
+UPDATE test_toast_warm SET b = repeat('u', 126) WHERE a = 5;
+UPDATE test_toast_warm SET b = repeat('v', 127) WHERE a = 6;
+UPDATE test_toast_warm SET b = repeat('w', 128) WHERE a = 7;
+UPDATE test_toast_warm SET b = repeat('x', 3200) WHERE a = 8;
+
+
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+
+DROP TABLE test_toast_warm;
+
+-- Test with numeric data type
+
+CREATE TABLE test_toast_warm (a int unique, b numeric(10,2), c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+
+INSERT INTO test_toast_warm VALUES (1, 10.2, 100);
+INSERT INTO test_toast_warm VALUES (2, 11.22, 100);
+INSERT INTO test_toast_warm VALUES (3, 12.222, 100);
+INSERT INTO test_toast_warm VALUES (4, 13.20, 100);
+INSERT INTO test_toast_warm VALUES (5, 14.201, 100);
+
+UPDATE test_toast_warm SET b = 100.2 WHERE a = 1;
+UPDATE test_toast_warm SET b = 101.22 WHERE a = 2;
+UPDATE test_toast_warm SET b = 102.222 WHERE a = 3;
+UPDATE test_toast_warm SET b = 103.20 WHERE a = 4;
+UPDATE test_toast_warm SET b = 104.201 WHERE a = 5;
+
+SELECT * FROM test_toast_warm;
+
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+
+DROP TABLE test_toast_warm;
Attachment: 0002_track_root_lp_v21.patch (application/octet-stream)
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 51c773f..e573f1a 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -94,7 +94,8 @@ static HeapTuple heap_prepare_insert(Relation relation, HeapTuple tup,
 					TransactionId xid, CommandId cid, int options);
 static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
-				HeapTuple newtup, HeapTuple old_key_tup,
+				HeapTuple newtup, OffsetNumber root_offnum,
+				HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
 static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
 							 Bitmapset *interesting_cols,
@@ -2264,13 +2265,13 @@ heap_get_latest_tid(Relation relation,
 		 */
 		if ((tp.t_data->t_infomask & HEAP_XMAX_INVALID) ||
 			HeapTupleHeaderIsOnlyLocked(tp.t_data) ||
-			ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid))
+			HeapTupleHeaderIsHeapLatest(tp.t_data, &ctid))
 		{
 			UnlockReleaseBuffer(buffer);
 			break;
 		}
 
-		ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tp.t_data, &ctid);
 		priorXmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		UnlockReleaseBuffer(buffer);
 	}							/* end of loop */
@@ -2401,6 +2402,7 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	bool		all_visible_cleared = false;
+	OffsetNumber	root_offnum;
 
 	/*
 	 * Fill in tuple header fields, assign an OID, and toast the tuple if
@@ -2439,8 +2441,13 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
-	RelationPutHeapTuple(relation, buffer, heaptup,
-						 (options & HEAP_INSERT_SPECULATIVE) != 0);
+	root_offnum = RelationPutHeapTuple(relation, buffer, heaptup,
+						 (options & HEAP_INSERT_SPECULATIVE) != 0,
+						 InvalidOffsetNumber);
+
+	/* We must not overwrite the speculative insertion token. */
+	if ((options & HEAP_INSERT_SPECULATIVE) == 0)
+		HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 	if (PageIsAllVisible(BufferGetPage(buffer)))
 	{
@@ -2668,6 +2675,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 	Size		saveFreeSpace;
 	bool		need_tuple_data = RelationIsLogicallyLogged(relation);
 	bool		need_cids = RelationIsAccessibleInLogicalDecoding(relation);
+	OffsetNumber	root_offnum;
 
 	needwal = !(options & HEAP_INSERT_SKIP_WAL) && RelationNeedsWAL(relation);
 	saveFreeSpace = RelationGetTargetPageFreeSpace(relation,
@@ -2738,7 +2746,12 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		 * RelationGetBufferForTuple has ensured that the first tuple fits.
 		 * Put that on the page, and then as many other tuples as fit.
 		 */
-		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false);
+		root_offnum = RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false,
+				InvalidOffsetNumber);
+
+		/* Mark this tuple as the latest and also set root offset. */
+		HeapTupleHeaderSetHeapLatest(heaptuples[ndone]->t_data, root_offnum);
+
 		for (nthispage = 1; ndone + nthispage < ntuples; nthispage++)
 		{
 			HeapTuple	heaptup = heaptuples[ndone + nthispage];
@@ -2746,7 +2759,10 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			if (PageGetHeapFreeSpace(page) < MAXALIGN(heaptup->t_len) + saveFreeSpace)
 				break;
 
-			RelationPutHeapTuple(relation, buffer, heaptup, false);
+			root_offnum = RelationPutHeapTuple(relation, buffer, heaptup, false,
+					InvalidOffsetNumber);
+			/* Mark each tuple as the latest and also set root offset. */
+			HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 			/*
 			 * We don't use heap_multi_insert for catalog tuples yet, but
@@ -3018,6 +3034,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	HeapTupleData tp;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	TransactionId new_xmax;
@@ -3028,6 +3045,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	bool		all_visible_cleared = false;
 	HeapTuple	old_key_tuple = NULL;	/* replica identity of the tuple */
 	bool		old_key_copied = false;
+	OffsetNumber	root_offnum;
 
 	Assert(ItemPointerIsValid(tid));
 
@@ -3069,7 +3087,8 @@ heap_delete(Relation relation, ItemPointer tid,
 		LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 	}
 
-	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid));
+	offnum = ItemPointerGetOffsetNumber(tid);
+	lp = PageGetItemId(page, offnum);
 	Assert(ItemIdIsNormal(lp));
 
 	tp.t_tableOid = RelationGetRelid(relation);
@@ -3199,7 +3218,17 @@ l1:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tp.t_data->t_ctid;
+
+		/*
+		 * If we're at the end of the chain, return the tuple's own TID back
+		 * to the caller, which uses that as a hint that it has hit the end
+		 * of the chain.
+		 */
+		if (!HeapTupleHeaderIsHeapLatest(tp.t_data, &tp.t_self))
+			HeapTupleHeaderGetNextTid(tp.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&tp.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tp.t_data);
@@ -3248,6 +3277,22 @@ l1:
 							  xid, LockTupleExclusive, true,
 							  &new_xmax, &new_infomask, &new_infomask2);
 
+	/*
+	 * heap_get_root_tuple() may call palloc, which is disallowed once we
+	 * enter the critical section. So check whether the root offset is cached
+	 * in the tuple and, if not, fetch that information the hard way before
+	 * entering the critical section.
+	 *
+	 * Unless we are dealing with a pg-upgraded cluster, the root offset
+	 * information should usually be cached, so there is not much overhead
+	 * in fetching it. Also, once a tuple is updated, the information is
+	 * copied to the new version, so we are not going to pay this price
+	 * forever.
+	 */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&tp.t_self));
+
 	START_CRIT_SECTION();
 
 	/*
@@ -3275,8 +3320,10 @@ l1:
 	HeapTupleHeaderClearHotUpdated(tp.t_data);
 	HeapTupleHeaderSetXmax(tp.t_data, new_xmax);
 	HeapTupleHeaderSetCmax(tp.t_data, cid, iscombo);
-	/* Make sure there is no forward chain link in t_ctid */
-	tp.t_data->t_ctid = tp.t_self;
+
+	/* Mark this tuple as the latest tuple in the update chain. */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		HeapTupleHeaderSetHeapLatest(tp.t_data, root_offnum);
 
 	MarkBufferDirty(buffer);
 
@@ -3477,6 +3524,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		old_key_copied = false;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
+	OffsetNumber	root_offnum;
 	MultiXactStatus mxact_status;
 	Buffer		buffer,
 				newbuf,
@@ -3537,6 +3586,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 
 
 	block = ItemPointerGetBlockNumber(otid);
+	offnum = ItemPointerGetOffsetNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
@@ -3840,7 +3890,12 @@ l2:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(oldtup.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = oldtup.t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(oldtup.t_data, &oldtup.t_self))
+			HeapTupleHeaderGetNextTid(oldtup.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&oldtup.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(oldtup.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(oldtup.t_data);
@@ -3980,6 +4035,7 @@ l2:
 		uint16		infomask_lock_old_tuple,
 					infomask2_lock_old_tuple;
 		bool		cleared_all_frozen = false;
+		OffsetNumber	root_offnum;
 
 		/*
 		 * To prevent concurrent sessions from updating the tuple, we have to
@@ -4007,6 +4063,14 @@ l2:
 
 		Assert(HEAP_XMAX_IS_LOCKED_ONLY(infomask_lock_old_tuple));
 
+		/*
+		 * Fetch root offset before entering the critical section. We do this
+		 * only if the information is not already available.
+		 */
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&oldtup.t_self));
+
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
@@ -4021,7 +4085,8 @@ l2:
 		HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 		/* temporarily make it look not-updated, but locked */
-		oldtup.t_data->t_ctid = oldtup.t_self;
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			HeapTupleHeaderSetHeapLatest(oldtup.t_data, root_offnum);
 
 		/*
 		 * Clear all-frozen bit on visibility map if needed. We could
@@ -4180,6 +4245,10 @@ l2:
 										   bms_overlap(modified_attrs, id_attrs),
 										   &old_key_copied);
 
+	if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
@@ -4205,6 +4274,17 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+		/*
+		 * For HOT (or WARM) updated tuples, we store the offset of the root
+		 * line pointer of this chain in the ip_posid field of the new tuple.
+		 * Usually this information will be available in the corresponding
+		 * field of the old tuple. But for aborted updates or pg_upgraded
+		 * databases, we might be seeing the old-style CTID chains and hence
+		 * the information must be obtained by hard way (we should have done
+		 * that before entering the critical section above).
+		 */
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
 	else
 	{
@@ -4212,10 +4292,22 @@ l2:
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		root_offnum = InvalidOffsetNumber;
 	}
 
-	RelationPutHeapTuple(relation, newbuf, heaptup, false);		/* insert new tuple */
-
+	/* insert new tuple */
+	root_offnum = RelationPutHeapTuple(relation, newbuf, heaptup, false,
+									   root_offnum);
+	/*
+	 * Also mark both copies as latest and set the root offset information.
+	 * For a HOT/WARM update, we simply copy the information from the old
+	 * tuple, either taken from its header or computed above. For a regular
+	 * update, RelationPutHeapTuple has returned the actual offset number
+	 * where the new version was inserted, and we store that value since the
+	 * update starts a new HOT chain.
+	 */
+	HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
+	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
 	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
@@ -4228,7 +4320,7 @@ l2:
 	HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 	/* record address of new tuple in t_ctid of old one */
-	oldtup.t_data->t_ctid = heaptup->t_self;
+	HeapTupleHeaderSetNextTid(oldtup.t_data, &(heaptup->t_self));
 
 	/* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */
 	if (PageIsAllVisible(BufferGetPage(buffer)))
@@ -4267,6 +4359,7 @@ l2:
 
 		recptr = log_heap_update(relation, buffer,
 								 newbuf, &oldtup, heaptup,
+								 root_offnum,
 								 old_key_tuple,
 								 all_visible_cleared,
 								 all_visible_cleared_new);
@@ -4547,7 +4640,8 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	ItemId		lp;
 	Page		page;
 	Buffer		vmbuffer = InvalidBuffer;
-	BlockNumber block;
+	BlockNumber	block;
+	OffsetNumber	offnum;
 	TransactionId xid,
 				xmax;
 	uint16		old_infomask,
@@ -4556,9 +4650,11 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	bool		first_time = true;
 	bool		have_tuple_lock = false;
 	bool		cleared_all_frozen = false;
+	OffsetNumber	root_offnum;
 
 	*buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	block = ItemPointerGetBlockNumber(tid);
+	offnum = ItemPointerGetOffsetNumber(tid);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -4578,6 +4674,7 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	tuple->t_data = (HeapTupleHeader) PageGetItem(page, lp);
 	tuple->t_len = ItemIdGetLength(lp);
 	tuple->t_tableOid = RelationGetRelid(relation);
+	tuple->t_self = *tid;
 
 l3:
 	result = HeapTupleSatisfiesUpdate(tuple, cid, *buffer);
@@ -4605,7 +4702,11 @@ l3:
 		xwait = HeapTupleHeaderGetRawXmax(tuple->t_data);
 		infomask = tuple->t_data->t_infomask;
 		infomask2 = tuple->t_data->t_infomask2;
-		ItemPointerCopy(&tuple->t_data->t_ctid, &t_ctid);
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &t_ctid);
+		else
+			ItemPointerCopy(tid, &t_ctid);
 
 		LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
 
@@ -5043,7 +5144,12 @@ failed:
 		Assert(result == HeapTupleSelfUpdated || result == HeapTupleUpdated ||
 			   result == HeapTupleWouldBlock);
 		Assert(!(tuple->t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tuple->t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(tid, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tuple->t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tuple->t_data);
@@ -5091,6 +5197,10 @@ failed:
 							  GetCurrentTransactionId(), mode, false,
 							  &xid, &new_infomask, &new_infomask2);
 
+	if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&tuple->t_self));
+
 	START_CRIT_SECTION();
 
 	/*
@@ -5119,7 +5229,10 @@ failed:
 	 * the tuple as well.
 	 */
 	if (HEAP_XMAX_IS_LOCKED_ONLY(new_infomask))
-		tuple->t_data->t_ctid = *tid;
+	{
+		if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+			HeapTupleHeaderSetHeapLatest(tuple->t_data, root_offnum);
+	}
 
 	/* Clear only the all-frozen bit on visibility map if needed */
 	if (PageIsAllVisible(page) &&
@@ -5633,6 +5746,7 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 	bool		cleared_all_frozen = false;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber block;
+	OffsetNumber offnum;
 
 	ItemPointerCopy(tid, &tupid);
 
@@ -5641,6 +5755,8 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 		new_infomask = 0;
 		new_xmax = InvalidTransactionId;
 		block = ItemPointerGetBlockNumber(&tupid);
+		offnum = ItemPointerGetOffsetNumber(&tupid);
+
 		ItemPointerCopy(&tupid, &(mytup.t_self));
 
 		if (!heap_fetch(rel, SnapshotAny, &mytup, &buf, false, NULL))
@@ -5870,7 +5986,7 @@ l4:
 
 		/* if we find the end of update chain, we're done. */
 		if (mytup.t_data->t_infomask & HEAP_XMAX_INVALID ||
-			ItemPointerEquals(&mytup.t_self, &mytup.t_data->t_ctid) ||
+			HeapTupleHeaderIsHeapLatest(mytup.t_data, &mytup.t_self) ||
 			HeapTupleHeaderIsOnlyLocked(mytup.t_data))
 		{
 			result = HeapTupleMayBeUpdated;
@@ -5879,7 +5995,7 @@ l4:
 
 		/* tail recursion */
 		priorXmax = HeapTupleHeaderGetUpdateXid(mytup.t_data);
-		ItemPointerCopy(&(mytup.t_data->t_ctid), &tupid);
+		HeapTupleHeaderGetNextTid(mytup.t_data, &tupid);
 		UnlockReleaseBuffer(buf);
 		if (vmbuffer != InvalidBuffer)
 			ReleaseBuffer(vmbuffer);
@@ -5996,7 +6112,7 @@ heap_finish_speculative(Relation relation, HeapTuple tuple)
 	 * Replace the speculative insertion token with a real t_ctid, pointing to
 	 * itself like it does on regular tuples.
 	 */
-	htup->t_ctid = tuple->t_self;
+	HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 	/* XLOG stuff */
 	if (RelationNeedsWAL(relation))
@@ -6122,8 +6238,7 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	HeapTupleHeaderSetXmin(tp.t_data, InvalidTransactionId);
 
 	/* Clear the speculative insertion token too */
-	tp.t_data->t_ctid = tp.t_self;
-
+	HeapTupleHeaderSetHeapLatest(tp.t_data, ItemPointerGetOffsetNumber(tid));
 	MarkBufferDirty(buffer);
 
 	/*
@@ -7471,6 +7586,7 @@ log_heap_visible(RelFileNode rnode, Buffer heap_buffer, Buffer vm_buffer,
 static XLogRecPtr
 log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup, HeapTuple newtup,
+				OffsetNumber root_offnum,
 				HeapTuple old_key_tuple,
 				bool all_visible_cleared, bool new_all_visible_cleared)
 {
@@ -7591,6 +7707,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	xlrec.new_offnum = ItemPointerGetOffsetNumber(&newtup->t_self);
 	xlrec.new_xmax = HeapTupleHeaderGetRawXmax(newtup->t_data);
 
+	Assert(OffsetNumberIsValid(root_offnum));
+	xlrec.root_offnum = root_offnum;
+
 	bufflags = REGBUF_STANDARD;
 	if (init)
 		bufflags |= REGBUF_WILL_INIT;
@@ -8245,7 +8364,13 @@ heap_xlog_delete(XLogReaderState *record)
 			PageClearAllVisible(page);
 
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = target_tid;
+		if (!HeapTupleHeaderHasRootOffset(htup))
+		{
+			OffsetNumber	root_offnum;
+			root_offnum = heap_get_root_tuple(page, xlrec->offnum);
+			HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+		}
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8335,7 +8460,8 @@ heap_xlog_insert(XLogReaderState *record)
 		htup->t_hoff = xlhdr.t_hoff;
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
-		htup->t_ctid = target_tid;
+
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->offnum);
 
 		if (PageAddItem(page, (Item) htup, newlen, xlrec->offnum,
 						true, true) == InvalidOffsetNumber)
@@ -8470,8 +8596,8 @@ heap_xlog_multi_insert(XLogReaderState *record)
 			htup->t_hoff = xlhdr->t_hoff;
 			HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 			HeapTupleHeaderSetCmin(htup, FirstCommandId);
-			ItemPointerSetBlockNumber(&htup->t_ctid, blkno);
-			ItemPointerSetOffsetNumber(&htup->t_ctid, offnum);
+
+			HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 			offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 			if (offnum == InvalidOffsetNumber)
@@ -8607,7 +8733,7 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
 		/* Set forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetNextTid(htup, &newtid);
 
 		/* Mark the page as a candidate for pruning */
 		PageSetPrunable(page, XLogRecGetXid(record));
@@ -8740,13 +8866,17 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
-		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = newtid;
 
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
 
+		/*
+		 * Make sure the tuple is marked as the latest and root offset
+		 * information is restored.
+		 */
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->root_offnum);
+
 		if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED)
 			PageClearAllVisible(page);
 
@@ -8809,6 +8939,9 @@ heap_xlog_confirm(XLogReaderState *record)
 		 */
 		ItemPointerSet(&htup->t_ctid, BufferGetBlockNumber(buffer), offnum);
 
+		/* For a newly inserted tuple, set the root offset to itself. */
+		HeapTupleHeaderSetHeapLatest(htup, offnum);
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8872,11 +9005,17 @@ heap_xlog_lock(XLogReaderState *record)
 		 */
 		if (HEAP_XMAX_IS_LOCKED_ONLY(htup->t_infomask))
 		{
+			ItemPointerData	target_tid;
+
+			ItemPointerSet(&target_tid, BufferGetBlockNumber(buffer), offnum);
 			HeapTupleHeaderClearHotUpdated(htup);
 			/* Make sure there is no forward chain link in t_ctid */
-			ItemPointerSet(&htup->t_ctid,
-						   BufferGetBlockNumber(buffer),
-						   offnum);
+			if (!HeapTupleHeaderHasRootOffset(htup))
+			{
+				OffsetNumber	root_offnum;
+				root_offnum = heap_get_root_tuple(page, offnum);
+				HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+			}
 		}
 		HeapTupleHeaderSetXmax(htup, xlrec->locking_xid);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index 6529fe3..8052519 100644
--- a/src/backend/access/heap/hio.c
+++ b/src/backend/access/heap/hio.c
@@ -31,12 +31,20 @@
  * !!! EREPORT(ERROR) IS DISALLOWED HERE !!!  Must PANIC on failure!!!
  *
  * Note - caller must hold BUFFER_LOCK_EXCLUSIVE on the buffer.
+ *
+ * The caller can optionally tell us to set the root offset to the given value.
+ * Otherwise, the root offset is set to the offset of the new location once it
+ * is known. The former is used while updating an existing tuple, where the
+ * caller tells us the root line pointer of the chain.  The latter is used
+ * during insertion of a new row, in which case the root line pointer is set
+ * to the offset at which the tuple is inserted.
  */
-void
+OffsetNumber
 RelationPutHeapTuple(Relation relation,
 					 Buffer buffer,
 					 HeapTuple tuple,
-					 bool token)
+					 bool token,
+					 OffsetNumber root_offnum)
 {
 	Page		pageHeader;
 	OffsetNumber offnum;
@@ -60,17 +68,24 @@ RelationPutHeapTuple(Relation relation,
 	ItemPointerSet(&(tuple->t_self), BufferGetBlockNumber(buffer), offnum);
 
 	/*
-	 * Insert the correct position into CTID of the stored tuple, too (unless
-	 * this is a speculative insertion, in which case the token is held in
-	 * CTID field instead)
+	 * Set block number and the root offset into CTID of the stored tuple, too
+	 * (unless this is a speculative insertion, in which case the token is held
+	 * in CTID field instead).
 	 */
 	if (!token)
 	{
 		ItemId		itemId = PageGetItemId(pageHeader, offnum);
 		Item		item = PageGetItem(pageHeader, itemId);
 
+		/* Copy t_ctid to set the correct block number. */
 		((HeapTupleHeader) item)->t_ctid = tuple->t_self;
+
+		if (!OffsetNumberIsValid(root_offnum))
+			root_offnum = offnum;
+		HeapTupleHeaderSetHeapLatest((HeapTupleHeader) item, root_offnum);
 	}
+
+	return root_offnum;
 }
 
 /*
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index d69a266..f54337c 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -55,6 +55,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
 
+static void heap_get_root_tuples_internal(Page page,
+				OffsetNumber target_offnum, OffsetNumber *root_offsets);
 
 /*
  * Optionally prune and repair fragmentation in the specified page.
@@ -553,6 +555,17 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
 		if (!HeapTupleHeaderIsHotUpdated(htup))
 			break;
 
+
+		/*
+		 * If the tuple was HOT-updated and the update was later
+		 * aborted, someone could mark this tuple as the last tuple in
+		 * the chain without clearing the HOT-updated flag. So we must
+		 * check whether this is the last tuple in the chain and stop
+		 * following the CTID, else we risk infinite recursion (though
+		 * prstate->marked[] currently protects against that).
+		 */
+		if (HeapTupleHeaderHasRootOffset(htup))
+			break;
 		/*
 		 * Advance to next chain member.
 		 */
@@ -726,27 +739,47 @@ heap_page_prune_execute(Buffer buffer,
 
 
 /*
- * For all items in this page, find their respective root line pointers.
- * If item k is part of a HOT-chain with root at item j, then we set
- * root_offsets[k - 1] = j.
+ * Find the root line pointers, either for all items on this page or for the
+ * given item alone.
+ *
+ * When target_offnum is a valid offset number, the caller is interested in
+ * just one item. In that case, the root line pointer is returned in
+ * root_offsets.
  *
- * The passed-in root_offsets array must have MaxHeapTuplesPerPage entries.
- * We zero out all unused entries.
+ * When target_offnum is InvalidOffsetNumber, the caller wants to know
+ * the root line pointers of all the items in this page. The root_offsets array
+ * must have MaxHeapTuplesPerPage entries in that case. If item k is part of a
+ * HOT-chain with root at item j, then we set root_offsets[k - 1] = j. We zero
+ * out all unused entries.
  *
  * The function must be called with at least share lock on the buffer, to
  * prevent concurrent prune operations.
  *
+ * This is not a cheap function since it must scan through all line pointers
+ * and tuples on the page in order to find the root line pointers. To minimize
+ * the cost, when target_offnum is specified we return as soon as the root
+ * line pointer for target_offnum is found.
+ *
  * Note: The information collected here is valid only as long as the caller
  * holds a pin on the buffer. Once pin is released, a tuple might be pruned
  * and reused by a completely unrelated tuple.
+ *
+ * Note: This function must not be called inside a critical section because it
+ * internally calls HeapTupleHeaderGetUpdateXid which somewhere down the stack
+ * may try to allocate heap memory. Memory allocation is disallowed in a
+ * critical section.
  */
-void
-heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+static void
+heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offsets)
 {
 	OffsetNumber offnum,
 				maxoff;
 
-	MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
+	if (OffsetNumberIsValid(target_offnum))
+		*root_offsets = InvalidOffsetNumber;
+	else
+		MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
 
 	maxoff = PageGetMaxOffsetNumber(page);
 	for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum = OffsetNumberNext(offnum))
@@ -774,9 +807,28 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 
 			/*
 			 * This is either a plain tuple or the root of a HOT-chain.
-			 * Remember it in the mapping.
+			 *
+			 * If target_offnum is specified and we found its mapping,
+			 * return.
 			 */
-			root_offsets[offnum - 1] = offnum;
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (target_offnum == offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember the mapping for any other item. The
+				 * root_offsets array may not even have space for them, so be
+				 * careful not to write past the end of the array.
+				 */
+			}
+			else
+			{
+				/* Remember it in the mapping. */
+				root_offsets[offnum - 1] = offnum;
+			}
 
 			/* If it's not the start of a HOT-chain, we're done with it */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
@@ -817,15 +869,65 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 				!TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(htup)))
 				break;
 
-			/* Remember the root line pointer for this item */
-			root_offsets[nextoffnum - 1] = offnum;
+			/*
+			 * If target_offnum is specified and we found its mapping, return.
+			 */
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (nextoffnum == target_offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember the mapping for any other item. The
+				 * root_offsets array may not even have space for them, so be
+				 * careful not to write past the end of the array.
+				 */
+			}
+			else
+			{
+				/* Remember the root line pointer for this item. */
+				root_offsets[nextoffnum - 1] = offnum;
+			}
 
 			/* Advance to next chain member, if any */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				break;
 
+			/*
+			 * If the tuple was HOT-updated and the update was later aborted,
+			 * someone could mark this tuple to be the last tuple in the chain
+			 * and store root offset in CTID, without clearing the HOT-updated
+			 * flag. So we must check if CTID is actually root offset and break
+			 * to avoid infinite recursion.
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
 		}
 	}
 }
+
+/*
+ * Get the root line pointer for the given tuple.
+ */
+OffsetNumber
+heap_get_root_tuple(Page page, OffsetNumber target_offnum)
+{
+	OffsetNumber offnum = InvalidOffsetNumber;
+	heap_get_root_tuples_internal(page, target_offnum, &offnum);
+	return offnum;
+}
+
+/*
+ * Get the root line pointers for all tuples on the page.
+ */
+void
+heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+{
+	heap_get_root_tuples_internal(page, InvalidOffsetNumber,
+								  root_offsets);
+}
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index d7f65a5..2d3ae9b 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -421,14 +421,18 @@ rewrite_heap_tuple(RewriteState state,
 	 */
 	if (!((old_tuple->t_data->t_infomask & HEAP_XMAX_INVALID) ||
 		  HeapTupleHeaderIsOnlyLocked(old_tuple->t_data)) &&
-		!(ItemPointerEquals(&(old_tuple->t_self),
-							&(old_tuple->t_data->t_ctid))))
+		!(HeapTupleHeaderIsHeapLatest(old_tuple->t_data, &old_tuple->t_self)))
 	{
 		OldToNewMapping mapping;
 
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
-		hashkey.tid = old_tuple->t_data->t_ctid;
+
+		/*
+		 * We've already checked that this is not the last tuple in the chain,
+		 * so fetch the next TID in the chain.
+		 */
+		HeapTupleHeaderGetNextTid(old_tuple->t_data, &hashkey.tid);
 
 		mapping = (OldToNewMapping)
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -441,7 +445,7 @@ rewrite_heap_tuple(RewriteState state,
 			 * set the ctid of this tuple to point to the new location, and
 			 * insert it right away.
 			 */
-			new_tuple->t_data->t_ctid = mapping->new_tid;
+			HeapTupleHeaderSetNextTid(new_tuple->t_data, &mapping->new_tid);
 
 			/* We don't need the mapping entry anymore */
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -527,7 +531,7 @@ rewrite_heap_tuple(RewriteState state,
 				new_tuple = unresolved->tuple;
 				free_new = true;
 				old_tid = unresolved->old_tid;
-				new_tuple->t_data->t_ctid = new_tid;
+				HeapTupleHeaderSetNextTid(new_tuple->t_data, &new_tid);
 
 				/*
 				 * We don't need the hash entry anymore, but don't free its
@@ -733,7 +737,12 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		onpage_tup->t_ctid = tup->t_self;
+		/*
+		 * Set t_ctid just to ensure that block number is copied correctly, but
+		 * then immediately mark the tuple as the latest.
+		 */
+		HeapTupleHeaderSetNextTid(onpage_tup, &tup->t_self);
+		HeapTupleHeaderSetHeapLatest(onpage_tup, newoff);
 	}
 
 	/* If heaptup is a private copy, release it. */
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 108060a..c3f1873 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -785,7 +785,8 @@ retry:
 			  DirtySnapshot.speculativeToken &&
 			  TransactionIdPrecedes(GetCurrentTransactionId(), xwait))))
 		{
-			ctid_wait = tup->t_data->t_ctid;
+			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
+				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index f2995f2..73e9c4a 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -2623,7 +2623,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		 * As above, it should be safe to examine xmax and t_ctid without the
 		 * buffer content lock, because they can't be changing.
 		 */
-		if (ItemPointerEquals(&tuple.t_self, &tuple.t_data->t_ctid))
+		if (HeapTupleHeaderIsHeapLatest(tuple.t_data, &tuple.t_self))
 		{
 			/* deleted, so forget about it */
 			ReleaseBuffer(buffer);
@@ -2631,7 +2631,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		}
 
 		/* updated, so look at the updated row */
-		tuple.t_self = tuple.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tuple.t_data, &tuple.t_self);
 		/* updated row should have xmin matching this xmax */
 		priorXmax = HeapTupleHeaderGetUpdateXid(tuple.t_data);
 		ReleaseBuffer(buffer);
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 7e85510..5540e12 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -190,6 +190,7 @@ extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
+extern OffsetNumber heap_get_root_tuple(Page page, OffsetNumber target_offnum);
 extern void heap_get_root_tuples(Page page, OffsetNumber *root_offsets);
 
 /* in heap/syncscan.c */
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index b285f17..e6019d5 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -193,6 +193,8 @@ typedef struct xl_heap_update
 	uint8		flags;
 	TransactionId new_xmax;		/* xmax of the new tuple */
 	OffsetNumber new_offnum;	/* new tuple's offset */
+	OffsetNumber root_offnum;	/* offset of the root line pointer in case of
+								   HOT or WARM update */
 
 	/*
 	 * If XLOG_HEAP_CONTAINS_OLD_TUPLE or XLOG_HEAP_CONTAINS_OLD_KEY flags are
@@ -200,7 +202,7 @@ typedef struct xl_heap_update
 	 */
 } xl_heap_update;
 
-#define SizeOfHeapUpdate	(offsetof(xl_heap_update, new_offnum) + sizeof(OffsetNumber))
+#define SizeOfHeapUpdate	(offsetof(xl_heap_update, root_offnum) + sizeof(OffsetNumber))
 
 /*
  * This is what we need to know about vacuum page cleanup/redirect
diff --git a/src/include/access/hio.h b/src/include/access/hio.h
index 2824f23..921cb37 100644
--- a/src/include/access/hio.h
+++ b/src/include/access/hio.h
@@ -35,8 +35,8 @@ typedef struct BulkInsertStateData
 }	BulkInsertStateData;
 
 
-extern void RelationPutHeapTuple(Relation relation, Buffer buffer,
-					 HeapTuple tuple, bool token);
+extern OffsetNumber RelationPutHeapTuple(Relation relation, Buffer buffer,
+					 HeapTuple tuple, bool token, OffsetNumber root_offnum);
 extern Buffer RelationGetBufferForTuple(Relation relation, Size len,
 						  Buffer otherBuffer, int options,
 						  BulkInsertState bistate,
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index a6c7e31..7552186 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,13 +260,19 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x1800 are available */
+/* bit 0x0800 is available */
+#define HEAP_LATEST_TUPLE		0x1000	/*
+										 * This is the last tuple in chain and
+										 * ip_posid points to the root line
+										 * pointer
+										 */
 #define HEAP_KEYS_UPDATED		0x2000	/* tuple was updated and key cols
 										 * modified, or tuple deleted */
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xE000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+
 
 /*
  * HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins.  It is
@@ -504,6 +510,43 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+/*
+ * Mark this as the last tuple in the HOT chain. Before PG v10 we used to store
+ * the TID of the tuple itself in t_ctid field to mark the end of the chain.
+ * But starting PG v10, we use a special flag HEAP_LATEST_TUPLE to identify the
+ * last tuple and store the root line pointer of the HOT chain in t_ctid field
+ * instead.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderSetHeapLatest(tup, offnum) \
+do { \
+	AssertMacro(OffsetNumberIsValid(offnum)); \
+	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE; \
+	ItemPointerSetOffsetNumber(&(tup)->t_ctid, (offnum)); \
+} while (0)
+
+#define HeapTupleHeaderClearHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 &= ~HEAP_LATEST_TUPLE \
+)
+
+/*
+ * Starting from PostgreSQL 10, the latest tuple in an update chain has
+ * HEAP_LATEST_TUPLE set; but tuples upgraded from earlier versions do not.
+ * For those, we determine whether a tuple is latest by testing that its t_ctid
+ * points to itself.
+ *
+ * Note: beware of multiple evaluations of "tup" and "tid" arguments.
+ */
+#define HeapTupleHeaderIsHeapLatest(tup, tid) \
+( \
+  (((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0) || \
+  ((ItemPointerGetBlockNumber(&(tup)->t_ctid) == ItemPointerGetBlockNumber(tid)) && \
+   (ItemPointerGetOffsetNumber(&(tup)->t_ctid) == ItemPointerGetOffsetNumber(tid))) \
+)
+
+
 #define HeapTupleHeaderSetHeapOnly(tup) \
 ( \
   (tup)->t_infomask2 |= HEAP_ONLY_TUPLE \
@@ -542,6 +585,56 @@ do { \
 
 
 /*
+ * Set the t_ctid chain and also clear the HEAP_LATEST_TUPLE flag since we
+ * now have a new tuple in the chain and this is no longer the last tuple of
+ * the chain.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderSetNextTid(tup, tid) \
+do { \
+	ItemPointerCopy((tid), &((tup)->t_ctid)); \
+	HeapTupleHeaderClearHeapLatest((tup)); \
+} while (0)
+
+/*
+ * Get TID of next tuple in the update chain. Caller must have checked that
+ * we are not already at the end of the chain because in that case t_ctid may
+ * actually store the root line pointer of the HOT chain.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderGetNextTid(tup, next_ctid) \
+do { \
+	AssertMacro(!((tup)->t_infomask2 & HEAP_LATEST_TUPLE)); \
+	ItemPointerCopy(&(tup)->t_ctid, (next_ctid)); \
+} while (0)
+
+/*
+ * Get the root line pointer of the HOT chain. The caller should have confirmed
+ * that the root offset is cached before calling this macro.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderGetRootOffset(tup) \
+( \
+	AssertMacro(((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0), \
+	ItemPointerGetOffsetNumber(&(tup)->t_ctid) \
+)
+
+/*
+ * Return whether the tuple has a cached root offset.  We don't use
+ * HeapTupleHeaderIsHeapLatest because that one also considers the case of
+ * t_ctid pointing to itself, for tuples migrated from pre v10 clusters. Here
+ * we are only interested in tuples that are marked with the HEAP_LATEST_TUPLE
+ * flag.
+ */
+#define HeapTupleHeaderHasRootOffset(tup) \
+( \
+	((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0 \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
Attachment: 0003_clear_ip_posid_blkid_refs_v21.patch (application/octet-stream)
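(Aside for reviewers, not part of the patch.) The dual use of t_ctid introduced above — a forward chain link while HEAP_LATEST_TUPLE is clear, a cached root line pointer offset once it is set — can be modelled in a few lines of standalone C. The struct and helper names below are simplified stand-ins, not the actual PostgreSQL definitions:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define HEAP_LATEST_TUPLE     0x1000
#define InvalidOffsetNumber   0

typedef uint32_t BlockNumber;
typedef uint16_t OffsetNumber;

/* Simplified stand-ins for ItemPointerData and HeapTupleHeaderData */
typedef struct
{
	BlockNumber  blkno;		/* models ip_blkid */
	OffsetNumber posid;		/* models ip_posid */
} TidModel;

typedef struct
{
	TidModel t_ctid;
	uint16_t t_infomask2;
} TupleHeaderModel;

/*
 * Mark the tuple as the end of its HOT chain and cache the root line
 * pointer's offset in t_ctid, as HeapTupleHeaderSetHeapLatest does.
 * Only the offset part of t_ctid is overwritten; the block number stays.
 */
static void
set_heap_latest(TupleHeaderModel *tup, OffsetNumber root_offnum)
{
	assert(root_offnum != InvalidOffsetNumber);
	tup->t_infomask2 |= HEAP_LATEST_TUPLE;
	tup->t_ctid.posid = root_offnum;
}

/* t_ctid is a forward chain link only while the flag is clear */
static int
has_root_offset(const TupleHeaderModel *tup)
{
	return (tup->t_infomask2 & HEAP_LATEST_TUPLE) != 0;
}

static OffsetNumber
get_root_offset(const TupleHeaderModel *tup)
{
	assert(has_root_offset(tup));
	return tup->t_ctid.posid;
}
```

The payoff is that for tuples written with the flag, finding the root line pointer is a single field read; the page-scanning heap_get_root_tuple() path remains only as a fallback for tuples carried over from pre-v10 clusters.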
diff --git a/contrib/pageinspect/btreefuncs.c b/contrib/pageinspect/btreefuncs.c
index 6f35e28..07496db 100644
--- a/contrib/pageinspect/btreefuncs.c
+++ b/contrib/pageinspect/btreefuncs.c
@@ -363,8 +363,8 @@ bt_page_items(PG_FUNCTION_ARGS)
 		j = 0;
 		values[j++] = psprintf("%d", uargs->offset);
 		values[j++] = psprintf("(%u,%u)",
-							   BlockIdGetBlockNumber(&(itup->t_tid.ip_blkid)),
-							   itup->t_tid.ip_posid);
+							   ItemPointerGetBlockNumberNoCheck(&itup->t_tid),
+							   ItemPointerGetOffsetNumberNoCheck(&itup->t_tid));
 		values[j++] = psprintf("%d", (int) IndexTupleSize(itup));
 		values[j++] = psprintf("%c", IndexTupleHasNulls(itup) ? 't' : 'f');
 		values[j++] = psprintf("%c", IndexTupleHasVarwidths(itup) ? 't' : 'f');
diff --git a/contrib/pgstattuple/pgstattuple.c b/contrib/pgstattuple/pgstattuple.c
index 1e0de5d..44f90cd 100644
--- a/contrib/pgstattuple/pgstattuple.c
+++ b/contrib/pgstattuple/pgstattuple.c
@@ -356,7 +356,7 @@ pgstat_heap(Relation rel, FunctionCallInfo fcinfo)
 		 * heap_getnext may find no tuples on a given page, so we cannot
 		 * simply examine the pages returned by the heap scan.
 		 */
-		tupblock = BlockIdGetBlockNumber(&tuple->t_self.ip_blkid);
+		tupblock = ItemPointerGetBlockNumber(&tuple->t_self);
 
 		while (block <= tupblock)
 		{
diff --git a/src/backend/access/gin/ginget.c b/src/backend/access/gin/ginget.c
index 87cd9ea..aa0b02f 100644
--- a/src/backend/access/gin/ginget.c
+++ b/src/backend/access/gin/ginget.c
@@ -626,8 +626,9 @@ entryLoadMoreItems(GinState *ginstate, GinScanEntry entry,
 		}
 		else
 		{
-			entry->btree.itemptr = advancePast;
-			entry->btree.itemptr.ip_posid++;
+			ItemPointerSet(&entry->btree.itemptr,
+					GinItemPointerGetBlockNumber(&advancePast),
+					OffsetNumberNext(GinItemPointerGetOffsetNumber(&advancePast)));
 		}
 		entry->btree.fullScan = false;
 		stack = ginFindLeafPage(&entry->btree, true, snapshot);
@@ -979,15 +980,17 @@ keyGetItem(GinState *ginstate, MemoryContext tempCtx, GinScanKey key,
 		if (GinItemPointerGetBlockNumber(&advancePast) <
 			GinItemPointerGetBlockNumber(&minItem))
 		{
-			advancePast.ip_blkid = minItem.ip_blkid;
-			advancePast.ip_posid = 0;
+			ItemPointerSet(&advancePast,
+					GinItemPointerGetBlockNumber(&minItem),
+					InvalidOffsetNumber);
 		}
 	}
 	else
 	{
-		Assert(minItem.ip_posid > 0);
-		advancePast = minItem;
-		advancePast.ip_posid--;
+		Assert(GinItemPointerGetOffsetNumber(&minItem) > 0);
+		ItemPointerSet(&advancePast,
+				GinItemPointerGetBlockNumber(&minItem),
+				OffsetNumberPrev(GinItemPointerGetOffsetNumber(&minItem)));
 	}
 
 	/*
@@ -1245,15 +1248,17 @@ scanGetItem(IndexScanDesc scan, ItemPointerData advancePast,
 				if (GinItemPointerGetBlockNumber(&advancePast) <
 					GinItemPointerGetBlockNumber(&key->curItem))
 				{
-					advancePast.ip_blkid = key->curItem.ip_blkid;
-					advancePast.ip_posid = 0;
+					ItemPointerSet(&advancePast,
+						GinItemPointerGetBlockNumber(&key->curItem),
+						InvalidOffsetNumber);
 				}
 			}
 			else
 			{
-				Assert(key->curItem.ip_posid > 0);
-				advancePast = key->curItem;
-				advancePast.ip_posid--;
+				Assert(GinItemPointerGetOffsetNumber(&key->curItem) > 0);
+				ItemPointerSet(&advancePast,
+						GinItemPointerGetBlockNumber(&key->curItem),
+						OffsetNumberPrev(GinItemPointerGetOffsetNumber(&key->curItem)));
 			}
 
 			/*
diff --git a/src/backend/access/gin/ginpostinglist.c b/src/backend/access/gin/ginpostinglist.c
index 598069d..8d2d31a 100644
--- a/src/backend/access/gin/ginpostinglist.c
+++ b/src/backend/access/gin/ginpostinglist.c
@@ -79,13 +79,11 @@ itemptr_to_uint64(const ItemPointer iptr)
 	uint64		val;
 
 	Assert(ItemPointerIsValid(iptr));
-	Assert(iptr->ip_posid < (1 << MaxHeapTuplesPerPageBits));
+	Assert(GinItemPointerGetOffsetNumber(iptr) < (1 << MaxHeapTuplesPerPageBits));
 
-	val = iptr->ip_blkid.bi_hi;
-	val <<= 16;
-	val |= iptr->ip_blkid.bi_lo;
+	val = GinItemPointerGetBlockNumber(iptr);
 	val <<= MaxHeapTuplesPerPageBits;
-	val |= iptr->ip_posid;
+	val |= GinItemPointerGetOffsetNumber(iptr);
 
 	return val;
 }
@@ -93,11 +91,9 @@ itemptr_to_uint64(const ItemPointer iptr)
 static inline void
 uint64_to_itemptr(uint64 val, ItemPointer iptr)
 {
-	iptr->ip_posid = val & ((1 << MaxHeapTuplesPerPageBits) - 1);
+	GinItemPointerSetOffsetNumber(iptr, val & ((1 << MaxHeapTuplesPerPageBits) - 1));
 	val = val >> MaxHeapTuplesPerPageBits;
-	iptr->ip_blkid.bi_lo = val & 0xFFFF;
-	val = val >> 16;
-	iptr->ip_blkid.bi_hi = val & 0xFFFF;
+	GinItemPointerSetBlockNumber(iptr, val);
 
 	Assert(ItemPointerIsValid(iptr));
 }
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index b437799..12ebadc 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -3013,8 +3013,8 @@ DisplayMapping(HTAB *tuplecid_data)
 			 ent->key.relnode.dbNode,
 			 ent->key.relnode.spcNode,
 			 ent->key.relnode.relNode,
-			 BlockIdGetBlockNumber(&ent->key.tid.ip_blkid),
-			 ent->key.tid.ip_posid,
+			 ItemPointerGetBlockNumber(&ent->key.tid),
+			 ItemPointerGetOffsetNumber(&ent->key.tid),
 			 ent->cmin,
 			 ent->cmax
 			);
diff --git a/src/backend/storage/page/itemptr.c b/src/backend/storage/page/itemptr.c
index 703cbb9..28ac885 100644
--- a/src/backend/storage/page/itemptr.c
+++ b/src/backend/storage/page/itemptr.c
@@ -54,18 +54,21 @@ ItemPointerCompare(ItemPointer arg1, ItemPointer arg2)
 	/*
 	 * Don't use ItemPointerGetBlockNumber or ItemPointerGetOffsetNumber here,
 	 * because they assert ip_posid != 0 which might not be true for a
-	 * user-supplied TID.
+	 * user-supplied TID. Instead we use ItemPointerGetBlockNumberNoCheck and
+	 * ItemPointerGetOffsetNumberNoCheck which do not do any validation.
 	 */
-	BlockNumber b1 = BlockIdGetBlockNumber(&(arg1->ip_blkid));
-	BlockNumber b2 = BlockIdGetBlockNumber(&(arg2->ip_blkid));
+	BlockNumber b1 = ItemPointerGetBlockNumberNoCheck(arg1);
+	BlockNumber b2 = ItemPointerGetBlockNumberNoCheck(arg2);
 
 	if (b1 < b2)
 		return -1;
 	else if (b1 > b2)
 		return 1;
-	else if (arg1->ip_posid < arg2->ip_posid)
+	else if (ItemPointerGetOffsetNumberNoCheck(arg1) <
+			ItemPointerGetOffsetNumberNoCheck(arg2))
 		return -1;
-	else if (arg1->ip_posid > arg2->ip_posid)
+	else if (ItemPointerGetOffsetNumberNoCheck(arg1) >
+			ItemPointerGetOffsetNumberNoCheck(arg2))
 		return 1;
 	else
 		return 0;
diff --git a/src/backend/utils/adt/tid.c b/src/backend/utils/adt/tid.c
index 49a5a15..7f3a692 100644
--- a/src/backend/utils/adt/tid.c
+++ b/src/backend/utils/adt/tid.c
@@ -109,8 +109,8 @@ tidout(PG_FUNCTION_ARGS)
 	OffsetNumber offsetNumber;
 	char		buf[32];
 
-	blockNumber = BlockIdGetBlockNumber(&(itemPtr->ip_blkid));
-	offsetNumber = itemPtr->ip_posid;
+	blockNumber = ItemPointerGetBlockNumberNoCheck(itemPtr);
+	offsetNumber = ItemPointerGetOffsetNumberNoCheck(itemPtr);
 
 	/* Perhaps someday we should output this as a record. */
 	snprintf(buf, sizeof(buf), "(%u,%u)", blockNumber, offsetNumber);
@@ -146,14 +146,12 @@ Datum
 tidsend(PG_FUNCTION_ARGS)
 {
 	ItemPointer itemPtr = PG_GETARG_ITEMPOINTER(0);
-	BlockId		blockId;
 	BlockNumber blockNumber;
 	OffsetNumber offsetNumber;
 	StringInfoData buf;
 
-	blockId = &(itemPtr->ip_blkid);
-	blockNumber = BlockIdGetBlockNumber(blockId);
-	offsetNumber = itemPtr->ip_posid;
+	blockNumber = ItemPointerGetBlockNumberNoCheck(itemPtr);
+	offsetNumber = ItemPointerGetOffsetNumberNoCheck(itemPtr);
 
 	pq_begintypsend(&buf);
 	pq_sendint(&buf, blockNumber, sizeof(blockNumber));
diff --git a/src/include/access/gin_private.h b/src/include/access/gin_private.h
index 824cc1c..6192b54 100644
--- a/src/include/access/gin_private.h
+++ b/src/include/access/gin_private.h
@@ -460,8 +460,8 @@ extern ItemPointer ginMergeItemPointers(ItemPointerData *a, uint32 na,
 static inline int
 ginCompareItemPointers(ItemPointer a, ItemPointer b)
 {
-	uint64		ia = (uint64) a->ip_blkid.bi_hi << 32 | (uint64) a->ip_blkid.bi_lo << 16 | a->ip_posid;
-	uint64		ib = (uint64) b->ip_blkid.bi_hi << 32 | (uint64) b->ip_blkid.bi_lo << 16 | b->ip_posid;
+	uint64		ia = (uint64) GinItemPointerGetBlockNumber(a) << 32 | GinItemPointerGetOffsetNumber(a);
+	uint64		ib = (uint64) GinItemPointerGetBlockNumber(b) << 32 | GinItemPointerGetOffsetNumber(b);
 
 	if (ia == ib)
 		return 0;
diff --git a/src/include/access/ginblock.h b/src/include/access/ginblock.h
index a3fb056..438912c 100644
--- a/src/include/access/ginblock.h
+++ b/src/include/access/ginblock.h
@@ -132,10 +132,17 @@ typedef struct GinMetaPageData
  * to avoid Asserts, since sometimes the ip_posid isn't "valid"
  */
 #define GinItemPointerGetBlockNumber(pointer) \
-	BlockIdGetBlockNumber(&(pointer)->ip_blkid)
+	(ItemPointerGetBlockNumberNoCheck(pointer))
 
 #define GinItemPointerGetOffsetNumber(pointer) \
-	((pointer)->ip_posid)
+	(ItemPointerGetOffsetNumberNoCheck(pointer))
+
+#define GinItemPointerSetBlockNumber(pointer, blkno) \
+	(ItemPointerSetBlockNumber((pointer), (blkno)))
+
+#define GinItemPointerSetOffsetNumber(pointer, offnum) \
+	(ItemPointerSetOffsetNumber((pointer), (offnum)))
+
 
 /*
  * Special-case item pointer values needed by the GIN search logic.
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 7552186..24433c7 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -428,7 +428,7 @@ do { \
 
 #define HeapTupleHeaderIsSpeculative(tup) \
 ( \
-	(tup)->t_ctid.ip_posid == SpecTokenOffsetNumber \
+	(ItemPointerGetOffsetNumberNoCheck(&(tup)->t_ctid) == SpecTokenOffsetNumber) \
 )
 
 #define HeapTupleHeaderGetSpeculativeToken(tup) \
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index 6289ffa..f9304db 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -151,9 +151,8 @@ typedef struct BTMetaPageData
  *	within a level). - vadim 04/09/97
  */
 #define BTTidSame(i1, i2)	\
-	( (i1).ip_blkid.bi_hi == (i2).ip_blkid.bi_hi && \
-	  (i1).ip_blkid.bi_lo == (i2).ip_blkid.bi_lo && \
-	  (i1).ip_posid == (i2).ip_posid )
+	((ItemPointerGetBlockNumber(&(i1)) == ItemPointerGetBlockNumber(&(i2))) && \
+	 (ItemPointerGetOffsetNumber(&(i1)) == ItemPointerGetOffsetNumber(&(i2))))
 #define BTEntrySame(i1, i2) \
 	BTTidSame((i1)->t_tid, (i2)->t_tid)
 
diff --git a/src/include/storage/itemptr.h b/src/include/storage/itemptr.h
index 576aaa8..60d0070 100644
--- a/src/include/storage/itemptr.h
+++ b/src/include/storage/itemptr.h
@@ -69,6 +69,12 @@ typedef ItemPointerData *ItemPointer;
 	BlockIdGetBlockNumber(&(pointer)->ip_blkid) \
 )
 
+/* Same as ItemPointerGetBlockNumber but without any assert-checks */
+#define ItemPointerGetBlockNumberNoCheck(pointer) \
+( \
+	BlockIdGetBlockNumber(&(pointer)->ip_blkid) \
+)
+
 /*
  * ItemPointerGetOffsetNumber
  *		Returns the offset number of a disk item pointer.
@@ -79,6 +85,12 @@ typedef ItemPointerData *ItemPointer;
 	(pointer)->ip_posid \
 )
 
+/* Same as ItemPointerGetOffsetNumber but without any assert-checks */
+#define ItemPointerGetOffsetNumberNoCheck(pointer) \
+( \
+	(pointer)->ip_posid \
+)
+
 /*
  * ItemPointerSet
  *		Sets a disk item pointer to the specified block and offset.
0004_freeup_3bits_ip_posid_v21.patchapplication/octet-stream; name=0004_freeup_3bits_ip_posid_v21.patchDownload
diff --git a/src/backend/access/gin/ginget.c b/src/backend/access/gin/ginget.c
index aa0b02f..1e1c978 100644
--- a/src/backend/access/gin/ginget.c
+++ b/src/backend/access/gin/ginget.c
@@ -928,7 +928,7 @@ keyGetItem(GinState *ginstate, MemoryContext tempCtx, GinScanKey key,
 	 * Find the minimum item > advancePast among the active entry streams.
 	 *
 	 * Note: a lossy-page entry is encoded by a ItemPointer with max value for
-	 * offset (0xffff), so that it will sort after any exact entries for the
+	 * offset (0x1fff), so that it will sort after any exact entries for the
 	 * same page.  So we'll prefer to return exact pointers not lossy
 	 * pointers, which is good.
 	 */
diff --git a/src/backend/access/gin/ginpostinglist.c b/src/backend/access/gin/ginpostinglist.c
index 8d2d31a..b22b9f5 100644
--- a/src/backend/access/gin/ginpostinglist.c
+++ b/src/backend/access/gin/ginpostinglist.c
@@ -253,7 +253,7 @@ ginCompressPostingList(const ItemPointer ipd, int nipd, int maxsize,
 
 		Assert(ndecoded == totalpacked);
 		for (i = 0; i < ndecoded; i++)
-			Assert(memcmp(&tmp[i], &ipd[i], sizeof(ItemPointerData)) == 0);
+			Assert(ItemPointerEquals(&tmp[i], &ipd[i]));
 		pfree(tmp);
 	}
 #endif
diff --git a/src/include/access/ginblock.h b/src/include/access/ginblock.h
index 438912c..3f7a3f0 100644
--- a/src/include/access/ginblock.h
+++ b/src/include/access/ginblock.h
@@ -160,14 +160,14 @@ typedef struct GinMetaPageData
 	(GinItemPointerGetOffsetNumber(p) == (OffsetNumber)0 && \
 	 GinItemPointerGetBlockNumber(p) == (BlockNumber)0)
 #define ItemPointerSetMax(p)  \
-	ItemPointerSet((p), InvalidBlockNumber, (OffsetNumber)0xffff)
+	ItemPointerSet((p), InvalidBlockNumber, (OffsetNumber)OffsetNumberMask)
 #define ItemPointerIsMax(p)  \
-	(GinItemPointerGetOffsetNumber(p) == (OffsetNumber)0xffff && \
+	(GinItemPointerGetOffsetNumber(p) == (OffsetNumber)OffsetNumberMask && \
 	 GinItemPointerGetBlockNumber(p) == InvalidBlockNumber)
 #define ItemPointerSetLossyPage(p, b)  \
-	ItemPointerSet((p), (b), (OffsetNumber)0xffff)
+	ItemPointerSet((p), (b), (OffsetNumber)OffsetNumberMask)
 #define ItemPointerIsLossyPage(p)  \
-	(GinItemPointerGetOffsetNumber(p) == (OffsetNumber)0xffff && \
+	(GinItemPointerGetOffsetNumber(p) == (OffsetNumber)OffsetNumberMask && \
 	 GinItemPointerGetBlockNumber(p) != InvalidBlockNumber)
 
 /*
@@ -218,7 +218,7 @@ typedef signed char GinNullCategory;
  */
 #define GinGetNPosting(itup)	GinItemPointerGetOffsetNumber(&(itup)->t_tid)
 #define GinSetNPosting(itup,n)	ItemPointerSetOffsetNumber(&(itup)->t_tid,n)
-#define GIN_TREE_POSTING		((OffsetNumber)0xffff)
+#define GIN_TREE_POSTING		((OffsetNumber)OffsetNumberMask)
 #define GinIsPostingTree(itup)	(GinGetNPosting(itup) == GIN_TREE_POSTING)
 #define GinSetPostingTree(itup, blkno)	( GinSetNPosting((itup),GIN_TREE_POSTING), ItemPointerSetBlockNumber(&(itup)->t_tid, blkno) )
 #define GinGetPostingTree(itup) GinItemPointerGetBlockNumber(&(itup)->t_tid)
diff --git a/src/include/access/gist_private.h b/src/include/access/gist_private.h
index 1ad4ed6..0ad11f1 100644
--- a/src/include/access/gist_private.h
+++ b/src/include/access/gist_private.h
@@ -269,8 +269,8 @@ typedef struct
  * invalid tuples in an index, so throwing an error is as far as we go with
  * supporting that.
  */
-#define TUPLE_IS_VALID		0xffff
-#define TUPLE_IS_INVALID	0xfffe
+#define TUPLE_IS_VALID		OffsetNumberMask
+#define TUPLE_IS_INVALID	OffsetNumberPrev(OffsetNumberMask)
 
 #define  GistTupleIsInvalid(itup)	( ItemPointerGetOffsetNumber( &((itup)->t_tid) ) == TUPLE_IS_INVALID )
 #define  GistTupleSetValid(itup)	ItemPointerSetOffsetNumber( &((itup)->t_tid), TUPLE_IS_VALID )
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 24433c7..4d614b7 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -288,7 +288,7 @@ struct HeapTupleHeaderData
  * than MaxOffsetNumber, so that it can be distinguished from a valid
  * offset number in a regular item pointer.
  */
-#define SpecTokenOffsetNumber		0xfffe
+#define SpecTokenOffsetNumber		OffsetNumberPrev(OffsetNumberMask)
 
 /*
  * HeapTupleHeader accessor macros
diff --git a/src/include/storage/itemptr.h b/src/include/storage/itemptr.h
index 60d0070..3144bdd 100644
--- a/src/include/storage/itemptr.h
+++ b/src/include/storage/itemptr.h
@@ -57,7 +57,7 @@ typedef ItemPointerData *ItemPointer;
  *		True iff the disk item pointer is not NULL.
  */
 #define ItemPointerIsValid(pointer) \
-	((bool) (PointerIsValid(pointer) && ((pointer)->ip_posid != 0)))
+	((bool) (PointerIsValid(pointer) && (((pointer)->ip_posid & OffsetNumberMask) != 0)))
 
 /*
  * ItemPointerGetBlockNumber
@@ -82,13 +82,37 @@ typedef ItemPointerData *ItemPointer;
 #define ItemPointerGetOffsetNumber(pointer) \
 ( \
 	AssertMacro(ItemPointerIsValid(pointer)), \
-	(pointer)->ip_posid \
+	((pointer)->ip_posid & OffsetNumberMask) \
 )
 
 /* Same as ItemPointerGetOffsetNumber but without any assert-checks */
 #define ItemPointerGetOffsetNumberNoCheck(pointer) \
 ( \
-	(pointer)->ip_posid \
+	((pointer)->ip_posid & OffsetNumberMask) \
+)
+
+/*
+ * Get the flags stored in high order bits in the OffsetNumber.
+ */
+#define ItemPointerGetFlags(pointer) \
+( \
+	((pointer)->ip_posid & ~OffsetNumberMask) >> OffsetNumberBits \
+)
+
+/*
+ * Set the flag bits. We first left-shift since flags are defined starting 0x01
+ */
+#define ItemPointerSetFlags(pointer, flags) \
+( \
+	((pointer)->ip_posid |= ((flags) << OffsetNumberBits)) \
+)
+
+/*
+ * Clear all flags.
+ */
+#define ItemPointerClearFlags(pointer) \
+( \
+	((pointer)->ip_posid &= OffsetNumberMask) \
 )
 
 /*
@@ -99,7 +123,7 @@ typedef ItemPointerData *ItemPointer;
 ( \
 	AssertMacro(PointerIsValid(pointer)), \
 	BlockIdSet(&((pointer)->ip_blkid), blockNumber), \
-	(pointer)->ip_posid = offNum \
+	(pointer)->ip_posid = (offNum) \
 )
 
 /*
diff --git a/src/include/storage/off.h b/src/include/storage/off.h
index fe8638f..fe1834c 100644
--- a/src/include/storage/off.h
+++ b/src/include/storage/off.h
@@ -26,8 +26,15 @@ typedef uint16 OffsetNumber;
 #define InvalidOffsetNumber		((OffsetNumber) 0)
 #define FirstOffsetNumber		((OffsetNumber) 1)
 #define MaxOffsetNumber			((OffsetNumber) (BLCKSZ / sizeof(ItemIdData)))
-#define OffsetNumberMask		(0xffff)		/* valid uint16 bits */
 
+/*
+ * Currently we support a maximum block size of 32kB, and each ItemId takes 6
+ * bytes. That limits the number of line pointers to 32kB/6 = 5461, so 13 bits
+ * are enough to represent any line pointer. Hence we can reuse the high order
+ * bits in OffsetNumber for other purposes.
+ */
+#define OffsetNumberMask		(0x1fff)		/* valid offset number bits */
+#define OffsetNumberBits		13	/* number of valid bits in OffsetNumber */
 /* ----------------
  *		support macros
  * ----------------
0001_interesting_attrs_v21.patchapplication/octet-stream; name=0001_interesting_attrs_v21.patchDownload
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index b147f64..51c773f 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -96,11 +96,8 @@ static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
 				HeapTuple newtup, HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
-static void HeapSatisfiesHOTandKeyUpdate(Relation relation,
-							 Bitmapset *hot_attrs,
-							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
-							 bool *satisfies_id,
+static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
+							 Bitmapset *interesting_cols,
 							 HeapTuple oldtup, HeapTuple newtup);
 static bool heap_acquire_tuplock(Relation relation, ItemPointer tid,
 					 LockTupleMode mode, LockWaitPolicy wait_policy,
@@ -3471,6 +3468,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *interesting_attrs;
+	Bitmapset  *modified_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3488,10 +3487,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				pagefree;
 	bool		have_tuple_lock = false;
 	bool		iscombo;
-	bool		satisfies_hot;
-	bool		satisfies_key;
-	bool		satisfies_id;
 	bool		use_hot_update = false;
+	bool		hot_attrs_checked = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
 	bool		all_visible_cleared_new = false;
@@ -3517,26 +3514,51 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				 errmsg("cannot update tuples during a parallel operation")));
 
 	/*
-	 * Fetch the list of attributes to be checked for HOT update.  This is
-	 * wasted effort if we fail to update or have to put the new tuple on a
-	 * different page.  But we must compute the list before obtaining buffer
-	 * lock --- in the worst case, if we are doing an update on one of the
-	 * relevant system catalogs, we could deadlock if we try to fetch the list
-	 * later.  In any case, the relcache caches the data so this is usually
-	 * pretty cheap.
+	 * Fetch the list of attributes to be checked for various operations.
 	 *
-	 * Note that we get a copy here, so we need not worry about relcache flush
-	 * happening midway through.
+	 * For HOT considerations, this is wasted effort if we fail to update or
+	 * have to put the new tuple on a different page.  But we must compute the
+	 * list before obtaining buffer lock --- in the worst case, if we are doing
+	 * an update on one of the relevant system catalogs, we could deadlock if
+	 * we try to fetch the list later.  In any case, the relcache caches the
+	 * data so this is usually pretty cheap.
+	 *
+	 * We also need columns used by the replica identity, the columns that
+	 * are considered the "key" of rows in the table, and columns that are
+	 * part of indirect indexes.
+	 *
+	 * Note that we get copies of each bitmap, so we need not worry about
+	 * relcache flush happening midway through.
 	 */
 	hot_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_ALL);
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
 
+
 	block = ItemPointerGetBlockNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
+	interesting_attrs = NULL;
+	/*
+	 * If the page is already full, there is hardly any chance of doing a HOT
+	 * update on this page. It would be wasted effort to look for index
+	 * column updates only to later reject the HOT update for lack of space
+	 * on the same page. So be conservative and only fetch hot_attrs if the
+	 * page is not already full. Since we are already holding a pin on the
+	 * buffer, it cannot be cleaned up concurrently; and even if that were
+	 * possible, in the worst case we would only lose a chance to do a
+	 * HOT update.
+	 */
+	if (!PageIsFull(page))
+	{
+		interesting_attrs = bms_add_members(interesting_attrs, hot_attrs);
+		hot_attrs_checked = true;
+	}
+	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
+
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
 	 * be necessary.  Since we haven't got the lock yet, someone else might be
@@ -3552,7 +3574,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Assert(ItemIdIsNormal(lp));
 
 	/*
-	 * Fill in enough data in oldtup for HeapSatisfiesHOTandKeyUpdate to work
+	 * Fill in enough data in oldtup for HeapDetermineModifiedColumns to work
 	 * properly.
 	 */
 	oldtup.t_tableOid = RelationGetRelid(relation);
@@ -3578,6 +3600,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 		Assert(!(newtup->t_data->t_infomask & HEAP_HASOID));
 	}
 
+	/* Determine columns modified by the update. */
+	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
+												  &oldtup, newtup);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3589,10 +3615,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	 * is updates that don't manipulate key columns, not those that
 	 * serendipitiously arrive at the same key values.
 	 */
-	HeapSatisfiesHOTandKeyUpdate(relation, hot_attrs, key_attrs, id_attrs,
-								 &satisfies_hot, &satisfies_key,
-								 &satisfies_id, &oldtup, newtup);
-	if (satisfies_key)
+	if (!bms_overlap(modified_attrs, key_attrs))
 	{
 		*lockmode = LockTupleNoKeyExclusive;
 		mxact_status = MultiXactStatusNoKeyUpdate;
@@ -3831,6 +3854,8 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(modified_attrs);
+		bms_free(interesting_attrs);
 		return result;
 	}
 
@@ -4133,9 +4158,10 @@ l2:
 		/*
 		 * Since the new tuple is going into the same page, we might be able
 		 * to do a HOT update.  Check if any of the index columns have been
-		 * changed.  If not, then HOT update is possible.
+		 * changed. If none have, a HOT update is possible, unless the page
+		 * was already full, in which case we skipped the check entirely.
 		 */
-		if (satisfies_hot)
+		if (hot_attrs_checked && !bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
 	}
 	else
@@ -4150,7 +4176,9 @@ l2:
 	 * ExtractReplicaIdentity() will return NULL if nothing needs to be
 	 * logged.
 	 */
-	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup, !satisfies_id, &old_key_copied);
+	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup,
+										   bms_overlap(modified_attrs, id_attrs),
+										   &old_key_copied);
 
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
@@ -4298,13 +4326,15 @@ l2:
 	bms_free(hot_attrs);
 	bms_free(key_attrs);
 	bms_free(id_attrs);
+	bms_free(modified_attrs);
+	bms_free(interesting_attrs);
 
 	return HeapTupleMayBeUpdated;
 }
 
 /*
  * Check if the specified attribute's value is same in both given tuples.
- * Subroutine for HeapSatisfiesHOTandKeyUpdate.
+ * Subroutine for HeapDetermineModifiedColumns.
  */
 static bool
 heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
@@ -4338,7 +4368,7 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 
 	/*
 	 * Extract the corresponding values.  XXX this is pretty inefficient if
-	 * there are many indexed columns.  Should HeapSatisfiesHOTandKeyUpdate do
+	 * there are many indexed columns.  Should HeapDetermineModifiedColumns do
 	 * a single heap_deform_tuple call on each tuple, instead?	But that
 	 * doesn't work for system columns ...
 	 */
@@ -4383,114 +4413,30 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 /*
  * Check which columns are being updated.
  *
- * This simultaneously checks conditions for HOT updates, for FOR KEY
- * SHARE updates, and REPLICA IDENTITY concerns.  Since much of the time they
- * will be checking very similar sets of columns, and doing the same tests on
- * them, it makes sense to optimize and do them together.
- *
- * We receive three bitmapsets comprising the three sets of columns we're
- * interested in.  Note these are destructively modified; that is OK since
- * this is invoked at most once in heap_update.
+ * Given an updated tuple, determine which of the interesting columns have
+ * changed, and return the set of changed columns as a bitmapset.
  *
- * hot_result is set to TRUE if it's okay to do a HOT update (i.e. it does not
- * modified indexed columns); key_result is set to TRUE if the update does not
- * modify columns used in the key; id_result is set to TRUE if the update does
- * not modify columns in any index marked as the REPLICA IDENTITY.
+ * The input bitmapset is destructively modified; that is OK since this is
+ * invoked at most once in heap_update.
  */
-static void
-HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
-							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
-							 bool *satisfies_id,
+static Bitmapset *
+HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
 							 HeapTuple oldtup, HeapTuple newtup)
 {
-	int			next_hot_attnum;
-	int			next_key_attnum;
-	int			next_id_attnum;
-	bool		hot_result = true;
-	bool		key_result = true;
-	bool		id_result = true;
-
-	/* If REPLICA IDENTITY is set to FULL, id_attrs will be empty. */
-	Assert(bms_is_subset(id_attrs, key_attrs));
-	Assert(bms_is_subset(key_attrs, hot_attrs));
-
-	/*
-	 * If one of these sets contains no remaining bits, bms_first_member will
-	 * return -1, and after adding FirstLowInvalidHeapAttributeNumber (which
-	 * is negative!)  we'll get an attribute number that can't possibly be
-	 * real, and thus won't match any actual attribute number.
-	 */
-	next_hot_attnum = bms_first_member(hot_attrs);
-	next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
-	next_key_attnum = bms_first_member(key_attrs);
-	next_key_attnum += FirstLowInvalidHeapAttributeNumber;
-	next_id_attnum = bms_first_member(id_attrs);
-	next_id_attnum += FirstLowInvalidHeapAttributeNumber;
+	int		attnum;
+	Bitmapset *modified = NULL;
 
-	for (;;)
+	while ((attnum = bms_first_member(interesting_cols)) >= 0)
 	{
-		bool		changed;
-		int			check_now;
+		attnum += FirstLowInvalidHeapAttributeNumber;
 
-		/*
-		 * Since the HOT attributes are a superset of the key attributes and
-		 * the key attributes are a superset of the id attributes, this logic
-		 * is guaranteed to identify the next column that needs to be checked.
-		 */
-		if (hot_result && next_hot_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_hot_attnum;
-		else if (key_result && next_key_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_key_attnum;
-		else if (id_result && next_id_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_id_attnum;
-		else
-			break;
-
-		/* See whether it changed. */
-		changed = !heap_tuple_attr_equals(RelationGetDescr(relation),
-										  check_now, oldtup, newtup);
-		if (changed)
-		{
-			if (check_now == next_hot_attnum)
-				hot_result = false;
-			if (check_now == next_key_attnum)
-				key_result = false;
-			if (check_now == next_id_attnum)
-				id_result = false;
-
-			/* if all are false now, we can stop checking */
-			if (!hot_result && !key_result && !id_result)
-				break;
-		}
-
-		/*
-		 * Advance the next attribute numbers for the sets that contain the
-		 * attribute we just checked.  As we work our way through the columns,
-		 * the next_attnum values will rise; but when each set becomes empty,
-		 * bms_first_member() will return -1 and the attribute number will end
-		 * up with a value less than FirstLowInvalidHeapAttributeNumber.
-		 */
-		if (hot_result && check_now == next_hot_attnum)
-		{
-			next_hot_attnum = bms_first_member(hot_attrs);
-			next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
-		if (key_result && check_now == next_key_attnum)
-		{
-			next_key_attnum = bms_first_member(key_attrs);
-			next_key_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
-		if (id_result && check_now == next_id_attnum)
-		{
-			next_id_attnum = bms_first_member(id_attrs);
-			next_id_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
+		if (!heap_tuple_attr_equals(RelationGetDescr(relation),
+								   attnum, oldtup, newtup))
+			modified = bms_add_member(modified,
+									  attnum - FirstLowInvalidHeapAttributeNumber);
 	}
 
-	*satisfies_hot = hot_result;
-	*satisfies_key = key_result;
-	*satisfies_id = id_result;
+	return modified;
 }
 
 /*
0007_vacuum_enhancements_v21.patchapplication/octet-stream; name=0007_vacuum_enhancements_v21.patchDownload
diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index 72e1253..b856503 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -338,6 +338,24 @@ static relopt_real realRelOpts[] =
 	},
 	{
 		{
+			"autovacuum_warmcleanup_scale_factor",
+			"Number of WARM chains prior to WARM cleanup as a fraction of reltuples",
+			RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
+			ShareUpdateExclusiveLock
+		},
+		-1, 0.0, 100.0
+	},
+	{
+		{
+			"autovacuum_warmcleanup_index_scale_factor",
+			"Number of WARM pointers in an index prior to WARM cleanup as a fraction of total WARM chains",
+			RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
+			ShareUpdateExclusiveLock
+		},
+		-1, 0.0, 100.0
+	},
+	{
+		{
 			"autovacuum_analyze_scale_factor",
 			"Number of tuple inserts, updates or deletes prior to analyze as a fraction of reltuples",
 			RELOPT_KIND_HEAP,
@@ -1341,6 +1359,10 @@ default_reloptions(Datum reloptions, bool validate, relopt_kind kind)
 		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, vacuum_scale_factor)},
 		{"autovacuum_analyze_scale_factor", RELOPT_TYPE_REAL,
 		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, analyze_scale_factor)},
+		{"autovacuum_warmcleanup_scale_factor", RELOPT_TYPE_REAL,
+		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, warmcleanup_scale_factor)},
+		{"autovacuum_warmcleanup_index_scale_factor", RELOPT_TYPE_REAL,
+		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, warmcleanup_index_scale)},
 		{"user_catalog_table", RELOPT_TYPE_BOOL,
 		offsetof(StdRdOptions, user_catalog_table)},
 		{"parallel_workers", RELOPT_TYPE_INT,
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index ca44e03..8d06c93 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -533,6 +533,7 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_tuples_warm_updated(C.oid) AS n_tup_warm_upd,
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
+            pg_stat_get_warm_chains(C.oid) AS n_warm_chains,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
             pg_stat_get_last_vacuum_time(C.oid) as last_vacuum,
             pg_stat_get_last_autovacuum_time(C.oid) as last_autovacuum,
diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index 404acb2..6c4fc4e 100644
--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -93,7 +93,8 @@ static VacAttrStats *examine_attribute(Relation onerel, int attnum,
 				  Node *index_expr);
 static int acquire_sample_rows(Relation onerel, int elevel,
 					HeapTuple *rows, int targrows,
-					double *totalrows, double *totaldeadrows);
+					double *totalrows, double *totaldeadrows,
+					double *totalwarmchains);
 static int	compare_rows(const void *a, const void *b);
 static int acquire_inherited_sample_rows(Relation onerel, int elevel,
 							  HeapTuple *rows, int targrows,
@@ -320,7 +321,8 @@ do_analyze_rel(Relation onerel, int options, VacuumParams *params,
 	int			targrows,
 				numrows;
 	double		totalrows,
-				totaldeadrows;
+				totaldeadrows,
+				totalwarmchains;
 	HeapTuple  *rows;
 	PGRUsage	ru0;
 	TimestampTz starttime = 0;
@@ -501,7 +503,8 @@ do_analyze_rel(Relation onerel, int options, VacuumParams *params,
 	else
 		numrows = (*acquirefunc) (onerel, elevel,
 								  rows, targrows,
-								  &totalrows, &totaldeadrows);
+								  &totalrows, &totaldeadrows,
+								  &totalwarmchains);
 
 	/*
 	 * Compute the statistics.  Temporary results during the calculations for
@@ -631,7 +634,7 @@ do_analyze_rel(Relation onerel, int options, VacuumParams *params,
 	 */
 	if (!inh)
 		pgstat_report_analyze(onerel, totalrows, totaldeadrows,
-							  (va_cols == NIL));
+							  totalwarmchains, (va_cols == NIL));
 
 	/* If this isn't part of VACUUM ANALYZE, let index AMs do cleanup */
 	if (!(options & VACOPT_VACUUM))
@@ -991,12 +994,14 @@ examine_attribute(Relation onerel, int attnum, Node *index_expr)
 static int
 acquire_sample_rows(Relation onerel, int elevel,
 					HeapTuple *rows, int targrows,
-					double *totalrows, double *totaldeadrows)
+					double *totalrows, double *totaldeadrows,
+					double *totalwarmchains)
 {
 	int			numrows = 0;	/* # rows now in reservoir */
 	double		samplerows = 0; /* total # rows collected */
 	double		liverows = 0;	/* # live rows seen */
 	double		deadrows = 0;	/* # dead rows seen */
+	double		warmchains = 0;
 	double		rowstoskip = -1;	/* -1 means not set yet */
 	BlockNumber totalblocks;
 	TransactionId OldestXmin;
@@ -1023,9 +1028,14 @@ acquire_sample_rows(Relation onerel, int elevel,
 		Page		targpage;
 		OffsetNumber targoffset,
 					maxoffset;
+		bool		marked[MaxHeapTuplesPerPage];
+		OffsetNumber root_offsets[MaxHeapTuplesPerPage];
 
 		vacuum_delay_point();
 
+		/* Track which root line pointers are already counted. */
+		memset(marked, 0, sizeof (marked));
+
 		/*
 		 * We must maintain a pin on the target page's buffer to ensure that
 		 * the maxoffset value stays good (else concurrent VACUUM might delete
@@ -1041,6 +1051,9 @@ acquire_sample_rows(Relation onerel, int elevel,
 		targpage = BufferGetPage(targbuffer);
 		maxoffset = PageGetMaxOffsetNumber(targpage);
 
+		/* Get all root line pointers first */
+		heap_get_root_tuples(targpage, root_offsets);
+
 		/* Inner loop over all tuples on the selected page */
 		for (targoffset = FirstOffsetNumber; targoffset <= maxoffset; targoffset++)
 		{
@@ -1069,6 +1082,22 @@ acquire_sample_rows(Relation onerel, int elevel,
 			targtuple.t_data = (HeapTupleHeader) PageGetItem(targpage, itemid);
 			targtuple.t_len = ItemIdGetLength(itemid);
 
+			/*
+			 * If this is a WARM-updated tuple, check if we have already seen
+			 * the root line pointer. If not, count this as a WARM chain. This
+			 * ensures that we count every WARM chain just once, irrespective
+			 * of how many tuples exist in the chain.
+			 */
+			if (HeapTupleHeaderIsWarmUpdated(targtuple.t_data))
+			{
+				OffsetNumber root_offnum = root_offsets[targoffset];
+				if (!marked[root_offnum])
+				{
+					warmchains += 1;
+					marked[root_offnum] = true;
+				}
+			}
+
 			switch (HeapTupleSatisfiesVacuum(&targtuple,
 											 OldestXmin,
 											 targbuffer))
@@ -1200,18 +1229,24 @@ acquire_sample_rows(Relation onerel, int elevel,
 
 	/*
 	 * Estimate total numbers of rows in relation.  For live rows, use
-	 * vac_estimate_reltuples; for dead rows, we have no source of old
-	 * information, so we have to assume the density is the same in unseen
-	 * pages as in the pages we scanned.
+	 * vac_estimate_reltuples; for dead rows and WARM chains, we have no source
+	 * of old information, so we have to assume the density is the same in
+	 * unseen pages as in the pages we scanned.
 	 */
 	*totalrows = vac_estimate_reltuples(onerel, true,
 										totalblocks,
 										bs.m,
 										liverows);
 	if (bs.m > 0)
+	{
 		*totaldeadrows = floor((deadrows / bs.m) * totalblocks + 0.5);
+		*totalwarmchains = floor((warmchains / bs.m) * totalblocks + 0.5);
+	}
 	else
+	{
 		*totaldeadrows = 0.0;
+		*totalwarmchains = 0.0;
+	}
 
 	/*
 	 * Emit some interesting relation info
@@ -1219,11 +1254,13 @@ acquire_sample_rows(Relation onerel, int elevel,
 	ereport(elevel,
 			(errmsg("\"%s\": scanned %d of %u pages, "
 					"containing %.0f live rows and %.0f dead rows; "
-					"%d rows in sample, %.0f estimated total rows",
+					"%d rows in sample, %.0f estimated total rows; "
+					"%.0f warm chains",
 					RelationGetRelationName(onerel),
 					bs.m, totalblocks,
 					liverows, deadrows,
-					numrows, *totalrows)));
+					numrows, *totalrows,
+					*totalwarmchains)));
 
 	return numrows;
 }
@@ -1428,11 +1465,12 @@ acquire_inherited_sample_rows(Relation onerel, int elevel,
 				int			childrows;
 				double		trows,
 							tdrows;
+				double		twarmchains;
 
 				/* Fetch a random sample of the child's rows */
 				childrows = (*acquirefunc) (childrel, elevel,
 											rows + numrows, childtargrows,
-											&trows, &tdrows);
+											&trows, &tdrows, &twarmchains);
 
 				/* We may need to convert from child's rowtype to parent's */
 				if (childrows > 0 &&
diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index 9fbb0eb..52a7838 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -103,6 +103,7 @@ ExecVacuum(VacuumStmt *vacstmt, bool isTopLevel)
 		params.freeze_table_age = 0;
 		params.multixact_freeze_min_age = 0;
 		params.multixact_freeze_table_age = 0;
+		params.warmcleanup_index_scale = -1;
 	}
 	else
 	{
@@ -110,6 +111,7 @@ ExecVacuum(VacuumStmt *vacstmt, bool isTopLevel)
 		params.freeze_table_age = -1;
 		params.multixact_freeze_min_age = -1;
 		params.multixact_freeze_table_age = -1;
+		params.warmcleanup_index_scale = -1;
 	}
 
 	/* user-invoked vacuum is never "for wraparound" */
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index f52490f..87510ea 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -187,11 +187,12 @@ static BufferAccessStrategy vac_strategy;
 /* non-export function prototypes */
 static void lazy_scan_heap(Relation onerel, int options,
 			   LVRelStats *vacrelstats, Relation *Irel, int nindexes,
-			   bool aggressive);
+			   bool aggressive, double warmcleanup_index_scale);
 static void lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats);
 static bool lazy_check_needs_freeze(Buffer buf, bool *hastup);
 static void lazy_vacuum_index(Relation indrel,
 				  bool clear_warm,
+				  double warmcleanup_index_scale,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats);
 static void lazy_cleanup_index(Relation indrel,
@@ -207,7 +208,8 @@ static bool should_attempt_truncation(LVRelStats *vacrelstats);
 static void lazy_truncate_heap(Relation onerel, LVRelStats *vacrelstats);
 static BlockNumber count_nondeletable_pages(Relation onerel,
 						 LVRelStats *vacrelstats);
-static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks);
+static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks,
+					   bool dowarmcleanup);
 static void lazy_record_dead_tuple(LVRelStats *vacrelstats,
 					   ItemPointer itemptr);
 static void lazy_record_warm_chain(LVRelStats *vacrelstats,
@@ -283,6 +285,9 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 						  &OldestXmin, &FreezeLimit, &xidFullScanLimit,
 						  &MultiXactCutoff, &mxactFullScanLimit);
 
+	/* Use default if the caller hasn't specified any value */
+	if (params->warmcleanup_index_scale == -1)
+		params->warmcleanup_index_scale = VacuumWarmCleanupIndexScale;
 	/*
 	 * We request an aggressive scan if the table's frozen Xid is now older
 	 * than or equal to the requested Xid full-table scan limit; or if the
@@ -309,7 +314,8 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 	vacrelstats->hasindex = (nindexes > 0);
 
 	/* Do the vacuuming */
-	lazy_scan_heap(onerel, options, vacrelstats, Irel, nindexes, aggressive);
+	lazy_scan_heap(onerel, options, vacrelstats, Irel, nindexes, aggressive,
+			params->warmcleanup_index_scale);
 
 	/* Done with indexes */
 	vac_close_indexes(nindexes, Irel, NoLock);
@@ -396,7 +402,8 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 	pgstat_report_vacuum(RelationGetRelid(onerel),
 						 onerel->rd_rel->relisshared,
 						 new_live_tuples,
-						 vacrelstats->new_dead_tuples);
+						 vacrelstats->new_dead_tuples,
+						 vacrelstats->num_non_convertible_warm_chains);
 	pgstat_progress_end_command();
 
 	/* and log the action if appropriate */
@@ -507,10 +514,19 @@ vacuum_log_cleanup_info(Relation rel, LVRelStats *vacrelstats)
  *		If there are no indexes then we can reclaim line pointers on the fly;
  *		dead line pointers need only be retained until all index pointers that
  *		reference them have been killed.
+ *
+ *		warmcleanup_index_scale specifies a threshold on the number of WARM
+ *		pointers in an index, expressed as a fraction of the total candidate
+ *		WARM chains. If we find fewer WARM pointers in an index than this
+ *		fraction, we don't invoke cleanup for that index. If WARM cleanup is
+ *		skipped for any one index, the affected WARM chains can't be cleared
+ *		in the heap, no further WARM updates are possible to them, and they
+ *		are not considered for WARM cleanup in other indexes either.
  */
 static void
 lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
-			   Relation *Irel, int nindexes, bool aggressive)
+			   Relation *Irel, int nindexes, bool aggressive,
+			   double warmcleanup_index_scale)
 {
 	BlockNumber nblocks,
 				blkno;
@@ -536,6 +552,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		PROGRESS_VACUUM_MAX_DEAD_TUPLES
 	};
 	int64		initprog_val[3];
+	bool		dowarmcleanup = ((options & VACOPT_WARM_CLEANUP) != 0);
 
 	pg_rusage_init(&ru0);
 
@@ -558,7 +575,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 	vacrelstats->nonempty_pages = 0;
 	vacrelstats->latestRemovedXid = InvalidTransactionId;
 
-	lazy_space_alloc(vacrelstats, nblocks);
+	lazy_space_alloc(vacrelstats, nblocks, dowarmcleanup);
 	frozen = palloc(sizeof(xl_heap_freeze_tuple) * MaxHeapTuplesPerPage);
 
 	/* Report that we're scanning the heap, advertising total # of blocks */
@@ -776,7 +793,8 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			/* Remove index entries */
 			for (i = 0; i < nindexes; i++)
 				lazy_vacuum_index(Irel[i],
-								  (vacrelstats->num_warm_chains > 0),
+								  dowarmcleanup && (vacrelstats->num_warm_chains > 0),
+								  warmcleanup_index_scale,
 								  &indstats[i],
 								  vacrelstats);
 
@@ -1408,7 +1426,8 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		/* Remove index entries */
 		for (i = 0; i < nindexes; i++)
 			lazy_vacuum_index(Irel[i],
-							  (vacrelstats->num_warm_chains > 0),
+							  dowarmcleanup && (vacrelstats->num_warm_chains > 0),
+							  warmcleanup_index_scale,
 							  &indstats[i],
 							  vacrelstats);
 
@@ -1863,6 +1882,7 @@ lazy_reset_warm_pointer_count(LVRelStats *vacrelstats)
 static void
 lazy_vacuum_index(Relation indrel,
 				  bool clear_warm,
+				  double warmcleanup_index_scale,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats)
 {
@@ -1927,25 +1947,55 @@ lazy_vacuum_index(Relation indrel,
 						(*stats)->warm_pointers_removed,
 						(*stats)->clear_pointers_removed)));
 
-		(*stats)->num_warm_pointers = 0;
-		(*stats)->num_clear_pointers = 0;
-		(*stats)->warm_pointers_removed = 0;
-		(*stats)->clear_pointers_removed = 0;
-		(*stats)->pointers_cleared = 0;
+		/*
+		 * If the number of WARM pointers found in the index is more than the
+		 * configured fraction of total candidate WARM chains, then do the
+		 * second index scan to clean up WARM chains.
+		 *
+		 * Otherwise we must mark these WARM chains as non-convertible.
+		 */
+		if ((*stats)->num_warm_pointers >
+				((double)vacrelstats->num_warm_chains * warmcleanup_index_scale))
+		{
+			(*stats)->num_warm_pointers = 0;
+			(*stats)->num_clear_pointers = 0;
+			(*stats)->warm_pointers_removed = 0;
+			(*stats)->clear_pointers_removed = 0;
+			(*stats)->pointers_cleared = 0;
+
+			*stats = index_bulk_delete(&ivinfo, *stats,
+					lazy_indexvac_phase2, (void *) vacrelstats);
+			ereport(elevel,
+					(errmsg("scanned index \"%s\" to convert WARM pointers, found "
+							"%0.f WARM pointers, %0.f CLEAR pointers, removed "
+							"%0.f WARM pointers, removed %0.f CLEAR pointers, "
+							"cleared %0.f WARM pointers",
+							RelationGetRelationName(indrel),
+							(*stats)->num_warm_pointers,
+							(*stats)->num_clear_pointers,
+							(*stats)->warm_pointers_removed,
+							(*stats)->clear_pointers_removed,
+							(*stats)->pointers_cleared)));
+		}
+		else
+		{
+			int ii;
 
-		*stats = index_bulk_delete(&ivinfo, *stats,
-				lazy_indexvac_phase2, (void *) vacrelstats);
-		ereport(elevel,
-				(errmsg("scanned index \"%s\" to convert WARM pointers, found "
-						"%0.f WARM pointers, %0.f CLEAR pointers, removed "
-						"%0.f WARM pointers, removed %0.f CLEAR pointers, "
-						"cleared %0.f WARM pointers",
-						RelationGetRelationName(indrel),
-						(*stats)->num_warm_pointers,
-						(*stats)->num_clear_pointers,
-						(*stats)->warm_pointers_removed,
-						(*stats)->clear_pointers_removed,
-						(*stats)->pointers_cleared)));
+			/*
+			 * All chains skipped by this index are marked non-convertible.
+			 */
+			for (ii = 0; ii < vacrelstats->num_warm_chains; ii++)
+			{
+				LVWarmChain *chain = &vacrelstats->warm_chains[ii];
+				if (chain->num_warm_pointers > 0 ||
+					chain->num_clear_pointers > 1)
+				{
+					chain->keep_warm_chain = 1;
+					vacrelstats->num_non_convertible_warm_chains++;
+				}
+			}
+
+		}
 	}
 	else
 	{
@@ -2323,7 +2373,8 @@ count_nondeletable_pages(Relation onerel, LVRelStats *vacrelstats)
  * See the comments at the head of this file for rationale.
  */
 static void
-lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
+lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks,
+				 bool dowarmcleanup)
 {
 	long		maxtuples;
 	int			vac_work_mem = IsAutoVacuumWorkerProcess() &&
@@ -2332,8 +2383,13 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 
 	if (vacrelstats->hasindex)
 	{
+		/*
+		 * If we're not doing WARM cleanup then the entire memory is available
+		 * for tracking dead tuples. Otherwise it gets split between tracking
+		 * dead tuples and tracking WARM chains.
+		 */
 		maxtuples = (vac_work_mem * 1024L) / (sizeof(ItemPointerData) +
-				sizeof(LVWarmChain));
+				(dowarmcleanup ? sizeof(LVWarmChain) : 0));
 		maxtuples = Min(maxtuples, INT_MAX);
 		maxtuples = Min(maxtuples, MaxAllocSize / (sizeof(ItemPointerData) +
 					sizeof(LVWarmChain)));
@@ -2359,11 +2415,18 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 	 * XXX Cheat for now and allocate the same size array for tracking warm
 	 * chains. maxtuples must have been already adjusted above to ensure we
 	 * don't cross vac_work_mem.
+	 *
+	 * XXX A better strategy might be to consume the available memory from
+	 * both ends and do a round of index cleanup whenever all available
+	 * memory is exhausted.
 	 */
-	vacrelstats->num_warm_chains = 0;
-	vacrelstats->max_warm_chains = (int) maxtuples;
-	vacrelstats->warm_chains = (LVWarmChain *)
-		palloc0(maxtuples * sizeof(LVWarmChain));
+	if (dowarmcleanup)
+	{
+		vacrelstats->num_warm_chains = 0;
+		vacrelstats->max_warm_chains = (int) maxtuples;
+		vacrelstats->warm_chains = (LVWarmChain *)
+			palloc0(maxtuples * sizeof(LVWarmChain));
+	}
 
 }
 
@@ -2385,6 +2448,8 @@ lazy_record_clear_chain(LVRelStats *vacrelstats,
 		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 0;
 		vacrelstats->num_warm_chains++;
 	}
+	else
+		vacrelstats->num_non_convertible_warm_chains++;
 }
 
 /*
@@ -2405,6 +2470,8 @@ lazy_record_warm_chain(LVRelStats *vacrelstats,
 		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 1;
 		vacrelstats->num_warm_chains++;
 	}
+	else
+		vacrelstats->num_non_convertible_warm_chains++;
 }
 
 /*
@@ -2600,6 +2667,7 @@ lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state)
 		 * index pointers.
 		 */
 		chain->keep_warm_chain = 1;
+		vacrelstats->num_non_convertible_warm_chains++;
 		return IBDCR_KEEP;
 	}
 	return IBDCR_KEEP;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 19dd77d..69a81ab 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -433,7 +433,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	overlay_placing substr_from substr_for
 
 %type <boolean> opt_instead
-%type <boolean> opt_unique opt_concurrently opt_verbose opt_full
+%type <boolean> opt_unique opt_concurrently opt_verbose opt_full opt_warmclean
 %type <boolean> opt_freeze opt_default opt_recheck
 %type <defelt>	opt_binary opt_oids copy_delimiter
 
@@ -684,7 +684,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	VACUUM VALID VALIDATE VALIDATOR VALUE_P VALUES VARCHAR VARIADIC VARYING
 	VERBOSE VERSION_P VIEW VIEWS VOLATILE
 
-	WHEN WHERE WHITESPACE_P WINDOW WITH WITHIN WITHOUT WORK WRAPPER WRITE
+	WARMCLEAN WHEN WHERE WHITESPACE_P WINDOW WITH WITHIN WITHOUT WORK WRAPPER WRITE
 
 	XML_P XMLATTRIBUTES XMLCONCAT XMLELEMENT XMLEXISTS XMLFOREST XMLNAMESPACES
 	XMLPARSE XMLPI XMLROOT XMLSERIALIZE XMLTABLE
@@ -10058,7 +10058,7 @@ cluster_index_specification:
  *
  *****************************************************************************/
 
-VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
+VacuumStmt: VACUUM opt_full opt_freeze opt_verbose opt_warmclean
 				{
 					VacuumStmt *n = makeNode(VacuumStmt);
 					n->options = VACOPT_VACUUM;
@@ -10068,11 +10068,13 @@ VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
 						n->options |= VACOPT_FREEZE;
 					if ($4)
 						n->options |= VACOPT_VERBOSE;
+					if ($5)
+						n->options |= VACOPT_WARM_CLEANUP;
 					n->relation = NULL;
 					n->va_cols = NIL;
 					$$ = (Node *)n;
 				}
-			| VACUUM opt_full opt_freeze opt_verbose qualified_name
+			| VACUUM opt_full opt_freeze opt_verbose opt_warmclean qualified_name
 				{
 					VacuumStmt *n = makeNode(VacuumStmt);
 					n->options = VACOPT_VACUUM;
@@ -10082,13 +10084,15 @@ VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
 						n->options |= VACOPT_FREEZE;
 					if ($4)
 						n->options |= VACOPT_VERBOSE;
-					n->relation = $5;
+					if ($5)
+						n->options |= VACOPT_WARM_CLEANUP;
+					n->relation = $6;
 					n->va_cols = NIL;
 					$$ = (Node *)n;
 				}
-			| VACUUM opt_full opt_freeze opt_verbose AnalyzeStmt
+			| VACUUM opt_full opt_freeze opt_verbose opt_warmclean AnalyzeStmt
 				{
-					VacuumStmt *n = (VacuumStmt *) $5;
+					VacuumStmt *n = (VacuumStmt *) $6;
 					n->options |= VACOPT_VACUUM;
 					if ($2)
 						n->options |= VACOPT_FULL;
@@ -10096,6 +10100,8 @@ VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
 						n->options |= VACOPT_FREEZE;
 					if ($4)
 						n->options |= VACOPT_VERBOSE;
+					if ($5)
+						n->options |= VACOPT_WARM_CLEANUP;
 					$$ = (Node *)n;
 				}
 			| VACUUM '(' vacuum_option_list ')'
@@ -10128,6 +10134,7 @@ vacuum_option_elem:
 			| VERBOSE			{ $$ = VACOPT_VERBOSE; }
 			| FREEZE			{ $$ = VACOPT_FREEZE; }
 			| FULL				{ $$ = VACOPT_FULL; }
+			| WARMCLEAN			{ $$ = VACOPT_WARM_CLEANUP; }
 			| IDENT
 				{
 					if (strcmp($1, "disable_page_skipping") == 0)
@@ -10181,6 +10188,10 @@ opt_freeze: FREEZE									{ $$ = TRUE; }
 			| /*EMPTY*/								{ $$ = FALSE; }
 		;
 
+opt_warmclean: WARMCLEAN							{ $$ = TRUE; }
+			| /*EMPTY*/								{ $$ = FALSE; }
+		;
+
 opt_name_list:
 			'(' name_list ')'						{ $$ = $2; }
 			| /*EMPTY*/								{ $$ = NIL; }
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 33ca749..91793e4 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -115,6 +115,8 @@ int			autovacuum_vac_thresh;
 double		autovacuum_vac_scale;
 int			autovacuum_anl_thresh;
 double		autovacuum_anl_scale;
+double		autovacuum_warmcleanup_scale;
+double		autovacuum_warmcleanup_index_scale;
 int			autovacuum_freeze_max_age;
 int			autovacuum_multixact_freeze_max_age;
 
@@ -307,7 +309,8 @@ static void relation_needs_vacanalyze(Oid relid, AutoVacOpts *relopts,
 						  Form_pg_class classForm,
 						  PgStat_StatTabEntry *tabentry,
 						  int effective_multixact_freeze_max_age,
-						  bool *dovacuum, bool *doanalyze, bool *wraparound);
+						  bool *dovacuum, bool *doanalyze, bool *wraparound,
+						  bool *dowarmcleanup);
 
 static void autovacuum_do_vac_analyze(autovac_table *tab,
 						  BufferAccessStrategy bstrategy);
@@ -2010,6 +2013,7 @@ do_autovacuum(void)
 		bool		dovacuum;
 		bool		doanalyze;
 		bool		wraparound;
+		bool		dowarmcleanup;
 
 		if (classForm->relkind != RELKIND_RELATION &&
 			classForm->relkind != RELKIND_MATVIEW)
@@ -2049,10 +2053,14 @@ do_autovacuum(void)
 		tabentry = get_pgstat_tabentry_relid(relid, classForm->relisshared,
 											 shared, dbentry);
 
-		/* Check if it needs vacuum or analyze */
+		/*
+		 * Check if it needs vacuum or analyze. For vacuum, also check if it
+		 * needs WARM cleanup.
+		 */
 		relation_needs_vacanalyze(relid, relopts, classForm, tabentry,
 								  effective_multixact_freeze_max_age,
-								  &dovacuum, &doanalyze, &wraparound);
+								  &dovacuum, &doanalyze, &wraparound,
+								  &dowarmcleanup);
 
 		/* Relations that need work are added to table_oids */
 		if (dovacuum || doanalyze)
@@ -2105,6 +2113,7 @@ do_autovacuum(void)
 		bool		dovacuum;
 		bool		doanalyze;
 		bool		wraparound;
+		bool		dowarmcleanup;
 
 		/*
 		 * We cannot safely process other backends' temp tables, so skip 'em.
@@ -2135,7 +2144,8 @@ do_autovacuum(void)
 
 		relation_needs_vacanalyze(relid, relopts, classForm, tabentry,
 								  effective_multixact_freeze_max_age,
-								  &dovacuum, &doanalyze, &wraparound);
+								  &dovacuum, &doanalyze, &wraparound,
+								  &dowarmcleanup);
 
 		/* ignore analyze for toast tables */
 		if (dovacuum)
@@ -2566,6 +2576,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 	HeapTuple	classTup;
 	bool		dovacuum;
 	bool		doanalyze;
+	bool		dowarmcleanup;
 	autovac_table *tab = NULL;
 	PgStat_StatTabEntry *tabentry;
 	PgStat_StatDBEntry *shared;
@@ -2607,7 +2618,8 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 
 	relation_needs_vacanalyze(relid, avopts, classForm, tabentry,
 							  effective_multixact_freeze_max_age,
-							  &dovacuum, &doanalyze, &wraparound);
+							  &dovacuum, &doanalyze, &wraparound,
+							  &dowarmcleanup);
 
 	/* ignore ANALYZE for toast tables */
 	if (classForm->relkind == RELKIND_TOASTVALUE)
@@ -2623,6 +2635,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		int			vac_cost_limit;
 		int			vac_cost_delay;
 		int			log_min_duration;
+		double		warmcleanup_index_scale;
 
 		/*
 		 * Calculate the vacuum cost parameters and the freeze ages.  If there
@@ -2669,19 +2682,26 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 			? avopts->multixact_freeze_table_age
 			: default_multixact_freeze_table_age;
 
+		warmcleanup_index_scale = (avopts &&
+								   avopts->warmcleanup_index_scale >= 0)
+			? avopts->warmcleanup_index_scale
+			: autovacuum_warmcleanup_index_scale;
+
 		tab = palloc(sizeof(autovac_table));
 		tab->at_relid = relid;
 		tab->at_sharedrel = classForm->relisshared;
 		tab->at_vacoptions = VACOPT_SKIPTOAST |
 			(dovacuum ? VACOPT_VACUUM : 0) |
 			(doanalyze ? VACOPT_ANALYZE : 0) |
-			(!wraparound ? VACOPT_NOWAIT : 0);
+			(!wraparound ? VACOPT_NOWAIT : 0) |
+			(dowarmcleanup ? VACOPT_WARM_CLEANUP : 0);
 		tab->at_params.freeze_min_age = freeze_min_age;
 		tab->at_params.freeze_table_age = freeze_table_age;
 		tab->at_params.multixact_freeze_min_age = multixact_freeze_min_age;
 		tab->at_params.multixact_freeze_table_age = multixact_freeze_table_age;
 		tab->at_params.is_wraparound = wraparound;
 		tab->at_params.log_min_duration = log_min_duration;
+		tab->at_params.warmcleanup_index_scale = warmcleanup_index_scale;
 		tab->at_vacuum_cost_limit = vac_cost_limit;
 		tab->at_vacuum_cost_delay = vac_cost_delay;
 		tab->at_relname = NULL;
@@ -2748,7 +2768,8 @@ relation_needs_vacanalyze(Oid relid,
  /* output params below */
 						  bool *dovacuum,
 						  bool *doanalyze,
-						  bool *wraparound)
+						  bool *wraparound,
+						  bool *dowarmcleanup)
 {
 	bool		force_vacuum;
 	bool		av_enabled;
@@ -2760,6 +2781,9 @@ relation_needs_vacanalyze(Oid relid,
 	float4		vac_scale_factor,
 				anl_scale_factor;
 
+	/* constant from reloptions or GUC variable */
+	float4		warmcleanup_scale_factor;
+
 	/* thresholds calculated from above constants */
 	float4		vacthresh,
 				anlthresh;
@@ -2768,6 +2792,9 @@ relation_needs_vacanalyze(Oid relid,
 	float4		vactuples,
 				anltuples;
 
+	/* number of WARM chains in the table */
+	float4		warmchains;
+
 	/* freeze parameters */
 	int			freeze_max_age;
 	int			multixact_freeze_max_age;
@@ -2800,6 +2827,11 @@ relation_needs_vacanalyze(Oid relid,
 		? relopts->analyze_threshold
 		: autovacuum_anl_thresh;
 
+	/* Use table specific value or the GUC value */
+	warmcleanup_scale_factor = (relopts && relopts->warmcleanup_scale_factor >= 0)
+		? relopts->warmcleanup_scale_factor
+		: autovacuum_warmcleanup_scale;
+
 	freeze_max_age = (relopts && relopts->freeze_max_age >= 0)
 		? Min(relopts->freeze_max_age, autovacuum_freeze_max_age)
 		: autovacuum_freeze_max_age;
@@ -2847,6 +2879,7 @@ relation_needs_vacanalyze(Oid relid,
 		reltuples = classForm->reltuples;
 		vactuples = tabentry->n_dead_tuples;
 		anltuples = tabentry->changes_since_analyze;
+		warmchains = tabentry->n_warm_chains;
 
 		vacthresh = (float4) vac_base_thresh + vac_scale_factor * reltuples;
 		anlthresh = (float4) anl_base_thresh + anl_scale_factor * reltuples;
@@ -2863,6 +2896,17 @@ relation_needs_vacanalyze(Oid relid,
 		/* Determine if this table needs vacuum or analyze. */
 		*dovacuum = force_vacuum || (vactuples > vacthresh);
 		*doanalyze = (anltuples > anlthresh);
+
+		/*
+		 * If the number of WARM chains in the table is more than the
+		 * configured fraction, then we also do a WARM cleanup. This decision
+		 * is made only at the table level; each index is then cleaned up
+		 * only if the number of WARM pointers it contains exceeds the
+		 * configured index-level scale factor. lazy_vacuum_index() later
+		 * deals with that.
+		 */
+		*dowarmcleanup = (*dovacuum &&
+						  (warmcleanup_scale_factor * reltuples < warmchains));
 	}
 	else
 	{
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index cdfd76e..017ac20 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -199,9 +199,11 @@ typedef struct TwoPhasePgStatRecord
 	PgStat_Counter tuples_inserted;		/* tuples inserted in xact */
 	PgStat_Counter tuples_updated;		/* tuples updated in xact */
 	PgStat_Counter tuples_deleted;		/* tuples deleted in xact */
+	PgStat_Counter tuples_warm_updated;	/* tuples warm updated in xact */
 	PgStat_Counter inserted_pre_trunc;	/* tuples inserted prior to truncate */
 	PgStat_Counter updated_pre_trunc;	/* tuples updated prior to truncate */
 	PgStat_Counter deleted_pre_trunc;	/* tuples deleted prior to truncate */
+	PgStat_Counter warm_updated_pre_trunc;	/* tuples warm updated prior to truncate */
 	Oid			t_id;			/* table's OID */
 	bool		t_shared;		/* is it a shared catalog? */
 	bool		t_truncated;	/* was the relation truncated? */
@@ -1328,7 +1330,8 @@ pgstat_report_autovac(Oid dboid)
  */
 void
 pgstat_report_vacuum(Oid tableoid, bool shared,
-					 PgStat_Counter livetuples, PgStat_Counter deadtuples)
+					 PgStat_Counter livetuples, PgStat_Counter deadtuples,
+					 PgStat_Counter warmchains)
 {
 	PgStat_MsgVacuum msg;
 
@@ -1342,6 +1345,7 @@ pgstat_report_vacuum(Oid tableoid, bool shared,
 	msg.m_vacuumtime = GetCurrentTimestamp();
 	msg.m_live_tuples = livetuples;
 	msg.m_dead_tuples = deadtuples;
+	msg.m_warm_chains = warmchains;
 	pgstat_send(&msg, sizeof(msg));
 }
 
@@ -1357,7 +1361,7 @@ pgstat_report_vacuum(Oid tableoid, bool shared,
 void
 pgstat_report_analyze(Relation rel,
 					  PgStat_Counter livetuples, PgStat_Counter deadtuples,
-					  bool resetcounter)
+					  PgStat_Counter warmchains, bool resetcounter)
 {
 	PgStat_MsgAnalyze msg;
 
@@ -1382,12 +1386,14 @@ pgstat_report_analyze(Relation rel,
 		{
 			livetuples -= trans->tuples_inserted - trans->tuples_deleted;
 			deadtuples -= trans->tuples_updated + trans->tuples_deleted;
+			warmchains -= trans->tuples_warm_updated;
 		}
 		/* count stuff inserted by already-aborted subxacts, too */
 		deadtuples -= rel->pgstat_info->t_counts.t_delta_dead_tuples;
 		/* Since ANALYZE's counts are estimates, we could have underflowed */
 		livetuples = Max(livetuples, 0);
 		deadtuples = Max(deadtuples, 0);
+		warmchains = Max(warmchains, 0);
 	}
 
 	pgstat_setheader(&msg.m_hdr, PGSTAT_MTYPE_ANALYZE);
@@ -1398,6 +1404,7 @@ pgstat_report_analyze(Relation rel,
 	msg.m_analyzetime = GetCurrentTimestamp();
 	msg.m_live_tuples = livetuples;
 	msg.m_dead_tuples = deadtuples;
+	msg.m_warm_chains = warmchains;
 	pgstat_send(&msg, sizeof(msg));
 }
 
@@ -1843,7 +1850,10 @@ pgstat_count_heap_update(Relation rel, bool hot, bool warm)
 		if (hot)
 			pgstat_info->t_counts.t_tuples_hot_updated++;
 		else if (warm)
+		{
+			pgstat_info->trans->tuples_warm_updated++;
 			pgstat_info->t_counts.t_tuples_warm_updated++;
+		}
 	}
 }
 
@@ -2006,6 +2016,12 @@ AtEOXact_PgStat(bool isCommit)
 				/* update and delete each create a dead tuple */
 				tabstat->t_counts.t_delta_dead_tuples +=
 					trans->tuples_updated + trans->tuples_deleted;
+				/*
+				 * Whether it commits or aborts, a WARM update generates a
+				 * WARM chain which needs cleanup.
+				 */
+				tabstat->t_counts.t_delta_warm_chains +=
+					trans->tuples_warm_updated;
 				/* insert, update, delete each count as one change event */
 				tabstat->t_counts.t_changed_tuples +=
 					trans->tuples_inserted + trans->tuples_updated +
@@ -2016,6 +2032,12 @@ AtEOXact_PgStat(bool isCommit)
 				/* inserted tuples are dead, deleted tuples are unaffected */
 				tabstat->t_counts.t_delta_dead_tuples +=
 					trans->tuples_inserted + trans->tuples_updated;
+				/*
+				 * Whether it commits or aborts, a WARM update generates a
+				 * WARM chain which needs cleanup.
+				 */
+				tabstat->t_counts.t_delta_warm_chains +=
+					trans->tuples_warm_updated;
 				/* an aborted xact generates no changed_tuple events */
 			}
 			tabstat->trans = NULL;
@@ -2072,12 +2094,16 @@ AtEOSubXact_PgStat(bool isCommit, int nestDepth)
 						trans->upper->tuples_inserted = trans->tuples_inserted;
 						trans->upper->tuples_updated = trans->tuples_updated;
 						trans->upper->tuples_deleted = trans->tuples_deleted;
+						trans->upper->tuples_warm_updated =
+							trans->tuples_warm_updated;
 					}
 					else
 					{
 						trans->upper->tuples_inserted += trans->tuples_inserted;
 						trans->upper->tuples_updated += trans->tuples_updated;
 						trans->upper->tuples_deleted += trans->tuples_deleted;
+						trans->upper->tuples_warm_updated +=
+							trans->tuples_warm_updated;
 					}
 					tabstat->trans = trans->upper;
 					pfree(trans);
@@ -2113,9 +2139,13 @@ AtEOSubXact_PgStat(bool isCommit, int nestDepth)
 				tabstat->t_counts.t_tuples_inserted += trans->tuples_inserted;
 				tabstat->t_counts.t_tuples_updated += trans->tuples_updated;
 				tabstat->t_counts.t_tuples_deleted += trans->tuples_deleted;
+				tabstat->t_counts.t_tuples_warm_updated +=
+					trans->tuples_warm_updated;
 				/* inserted tuples are dead, deleted tuples are unaffected */
 				tabstat->t_counts.t_delta_dead_tuples +=
 					trans->tuples_inserted + trans->tuples_updated;
+				tabstat->t_counts.t_delta_warm_chains +=
+					trans->tuples_warm_updated;
 				tabstat->trans = trans->upper;
 				pfree(trans);
 			}
@@ -2157,9 +2187,11 @@ AtPrepare_PgStat(void)
 			record.tuples_inserted = trans->tuples_inserted;
 			record.tuples_updated = trans->tuples_updated;
 			record.tuples_deleted = trans->tuples_deleted;
+			record.tuples_warm_updated = trans->tuples_warm_updated;
 			record.inserted_pre_trunc = trans->inserted_pre_trunc;
 			record.updated_pre_trunc = trans->updated_pre_trunc;
 			record.deleted_pre_trunc = trans->deleted_pre_trunc;
+			record.warm_updated_pre_trunc = trans->warm_updated_pre_trunc;
 			record.t_id = tabstat->t_id;
 			record.t_shared = tabstat->t_shared;
 			record.t_truncated = trans->truncated;
@@ -2234,11 +2266,14 @@ pgstat_twophase_postcommit(TransactionId xid, uint16 info,
 		/* forget live/dead stats seen by backend thus far */
 		pgstat_info->t_counts.t_delta_live_tuples = 0;
 		pgstat_info->t_counts.t_delta_dead_tuples = 0;
+		pgstat_info->t_counts.t_delta_warm_chains = 0;
 	}
 	pgstat_info->t_counts.t_delta_live_tuples +=
 		rec->tuples_inserted - rec->tuples_deleted;
 	pgstat_info->t_counts.t_delta_dead_tuples +=
 		rec->tuples_updated + rec->tuples_deleted;
+	pgstat_info->t_counts.t_delta_warm_chains +=
+		rec->tuples_warm_updated;
 	pgstat_info->t_counts.t_changed_tuples +=
 		rec->tuples_inserted + rec->tuples_updated +
 		rec->tuples_deleted;
@@ -2266,12 +2301,16 @@ pgstat_twophase_postabort(TransactionId xid, uint16 info,
 		rec->tuples_inserted = rec->inserted_pre_trunc;
 		rec->tuples_updated = rec->updated_pre_trunc;
 		rec->tuples_deleted = rec->deleted_pre_trunc;
+		rec->tuples_warm_updated = rec->warm_updated_pre_trunc;
 	}
 	pgstat_info->t_counts.t_tuples_inserted += rec->tuples_inserted;
 	pgstat_info->t_counts.t_tuples_updated += rec->tuples_updated;
 	pgstat_info->t_counts.t_tuples_deleted += rec->tuples_deleted;
+	pgstat_info->t_counts.t_tuples_warm_updated += rec->tuples_warm_updated;
 	pgstat_info->t_counts.t_delta_dead_tuples +=
 		rec->tuples_inserted + rec->tuples_updated;
+	pgstat_info->t_counts.t_delta_warm_chains +=
+		rec->tuples_warm_updated;
 }
 
 
@@ -4335,6 +4374,7 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->tuples_warm_updated = 0;
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
+		result->n_warm_chains = 0;
 		result->changes_since_analyze = 0;
 		result->blocks_fetched = 0;
 		result->blocks_hit = 0;
@@ -5445,6 +5485,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_warm_updated = tabmsg->t_counts.t_tuples_warm_updated;
 			tabentry->n_live_tuples = tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples = tabmsg->t_counts.t_delta_dead_tuples;
+			tabentry->n_warm_chains = tabmsg->t_counts.t_delta_warm_chains;
 			tabentry->changes_since_analyze = tabmsg->t_counts.t_changed_tuples;
 			tabentry->blocks_fetched = tabmsg->t_counts.t_blocks_fetched;
 			tabentry->blocks_hit = tabmsg->t_counts.t_blocks_hit;
@@ -5476,9 +5517,11 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			{
 				tabentry->n_live_tuples = 0;
 				tabentry->n_dead_tuples = 0;
+				tabentry->n_warm_chains = 0;
 			}
 			tabentry->n_live_tuples += tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples += tabmsg->t_counts.t_delta_dead_tuples;
+			tabentry->n_warm_chains += tabmsg->t_counts.t_delta_warm_chains;
 			tabentry->changes_since_analyze += tabmsg->t_counts.t_changed_tuples;
 			tabentry->blocks_fetched += tabmsg->t_counts.t_blocks_fetched;
 			tabentry->blocks_hit += tabmsg->t_counts.t_blocks_hit;
@@ -5488,6 +5531,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 		tabentry->n_live_tuples = Max(tabentry->n_live_tuples, 0);
 		/* Likewise for n_dead_tuples */
 		tabentry->n_dead_tuples = Max(tabentry->n_dead_tuples, 0);
+		tabentry->n_warm_chains = Max(tabentry->n_warm_chains, 0);
 
 		/*
 		 * Add per-table stats to the per-database entry, too.
@@ -5713,6 +5757,7 @@ pgstat_recv_vacuum(PgStat_MsgVacuum *msg, int len)
 
 	tabentry->n_live_tuples = msg->m_live_tuples;
 	tabentry->n_dead_tuples = msg->m_dead_tuples;
+	tabentry->n_warm_chains = msg->m_warm_chains;
 
 	if (msg->m_autovacuum)
 	{
@@ -5747,6 +5792,7 @@ pgstat_recv_analyze(PgStat_MsgAnalyze *msg, int len)
 
 	tabentry->n_live_tuples = msg->m_live_tuples;
 	tabentry->n_dead_tuples = msg->m_dead_tuples;
+	tabentry->n_warm_chains = msg->m_warm_chains;
 
 	/*
 	 * If commanded, reset changes_since_analyze to zero.  This forgets any
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index b8677f3..814d071 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -191,6 +191,21 @@ pg_stat_get_dead_tuples(PG_FUNCTION_ARGS)
 	PG_RETURN_INT64(result);
 }
 
+Datum
+pg_stat_get_warm_chains(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_StatTabEntry *tabentry;
+
+	if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->n_warm_chains);
+
+	PG_RETURN_INT64(result);
+}
+
 
 Datum
 pg_stat_get_mod_since_analyze(PG_FUNCTION_ARGS)
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 08b6030..81fec03 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,6 +130,7 @@ int			VacuumCostPageMiss = 10;
 int			VacuumCostPageDirty = 20;
 int			VacuumCostLimit = 200;
 int			VacuumCostDelay = 0;
+double		VacuumWarmCleanupScale;
 
 int			VacuumPageHit = 0;
 int			VacuumPageMiss = 0;
@@ -137,3 +138,5 @@ int			VacuumPageDirty = 0;
 
 int			VacuumCostBalance = 0;		/* working state for vacuum */
 bool		VacuumCostActive = false;
+
+double		VacuumWarmCleanupIndexScale = 0.2;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 291bf76..b4daa2c 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -3016,6 +3016,36 @@ static struct config_real ConfigureNamesReal[] =
 	},
 
 	{
+		{"autovacuum_warmcleanup_scale_factor", PGC_SIGHUP, AUTOVACUUM,
+			gettext_noop("Number of WARM chains prior to cleanup as a fraction of reltuples."),
+			NULL
+		},
+		&autovacuum_warmcleanup_scale,
+		0.1, 0.0, 100.0,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"autovacuum_warmcleanup_index_scale_factor", PGC_SIGHUP, AUTOVACUUM,
+			gettext_noop("Number of WARM pointers prior to cleanup as a fraction of total WARM chains."),
+			NULL
+		},
+		&autovacuum_warmcleanup_index_scale,
+		0.2, 0.0, 100.0,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"vacuum_warmcleanup_index_scale_factor", PGC_USERSET, WARM_CLEANUP,
+			gettext_noop("Number of WARM pointers in the index prior to cleanup as a fraction of total WARM chains."),
+			NULL
+		},
+		&VacuumWarmCleanupIndexScale,
+		0.2, 0.0, 100.0,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"checkpoint_completion_target", PGC_SIGHUP, WAL_CHECKPOINTS,
 			gettext_noop("Time spent flushing dirty buffers during checkpoint, as fraction of checkpoint interval."),
 			NULL
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index 509adda..70025ba 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -2789,6 +2789,8 @@ DATA(insert OID = 2878 (  pg_stat_get_live_tuples	PGNSP PGUID 12 1 0 0 0 f f f f
 DESCR("statistics: number of live tuples");
 DATA(insert OID = 2879 (  pg_stat_get_dead_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_dead_tuples _null_ _null_ _null_ ));
 DESCR("statistics: number of dead tuples");
+DATA(insert OID = 3374 (  pg_stat_get_warm_chains	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_warm_chains _null_ _null_ _null_ ));
+DESCR("statistics: number of warm chains");
 DATA(insert OID = 3177 (  pg_stat_get_mod_since_analyze PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_mod_since_analyze _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples changed since last analyze");
 DATA(insert OID = 1934 (  pg_stat_get_blocks_fetched	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_blocks_fetched _null_ _null_ _null_ ));
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index 541c2fa..9914143 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -145,6 +145,8 @@ typedef struct VacuumParams
 	int			log_min_duration;		/* minimum execution threshold in ms
 										 * at which  verbose logs are
 										 * activated, -1 to use default */
+	double		warmcleanup_index_scale; /* Fraction of WARM pointers to cause
+										  * index WARM cleanup */
 } VacuumParams;
 
 /* GUC parameters */
diff --git a/src/include/foreign/fdwapi.h b/src/include/foreign/fdwapi.h
index 6ca44f7..2993b1a 100644
--- a/src/include/foreign/fdwapi.h
+++ b/src/include/foreign/fdwapi.h
@@ -134,7 +134,8 @@ typedef void (*ExplainDirectModify_function) (ForeignScanState *node,
 typedef int (*AcquireSampleRowsFunc) (Relation relation, int elevel,
 											   HeapTuple *rows, int targrows,
 												  double *totalrows,
-												  double *totaldeadrows);
+												  double *totaldeadrows,
+												  double *totalwarmchains);
 
 typedef bool (*AnalyzeForeignTable_function) (Relation relation,
 												 AcquireSampleRowsFunc *func,
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 4c607b2..901960a 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -255,6 +255,7 @@ extern int	VacuumPageDirty;
 extern int	VacuumCostBalance;
 extern bool VacuumCostActive;
 
+extern double VacuumWarmCleanupIndexScale;
 
 /* in tcop/postgres.c */
 
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3a71dd5..f842374 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3035,7 +3035,8 @@ typedef enum VacuumOption
 	VACOPT_FULL = 1 << 4,		/* FULL (non-concurrent) vacuum */
 	VACOPT_NOWAIT = 1 << 5,		/* don't wait to get lock (autovacuum only) */
 	VACOPT_SKIPTOAST = 1 << 6,	/* don't process the TOAST table, if any */
-	VACOPT_DISABLE_PAGE_SKIPPING = 1 << 7		/* don't skip any pages */
+	VACOPT_DISABLE_PAGE_SKIPPING = 1 << 7,		/* don't skip any pages */
+	VACOPT_WARM_CLEANUP = 1 << 8	/* do WARM cleanup */
 } VacuumOption;
 
 typedef struct VacuumStmt
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index 6cd36c7..632283b 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -432,6 +432,7 @@ PG_KEYWORD("version", VERSION_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("view", VIEW, UNRESERVED_KEYWORD)
 PG_KEYWORD("views", VIEWS, UNRESERVED_KEYWORD)
 PG_KEYWORD("volatile", VOLATILE, UNRESERVED_KEYWORD)
+PG_KEYWORD("warmclean", WARMCLEAN, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("when", WHEN, RESERVED_KEYWORD)
 PG_KEYWORD("where", WHERE, RESERVED_KEYWORD)
 PG_KEYWORD("whitespace", WHITESPACE_P, UNRESERVED_KEYWORD)
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 4b7d671..8901c9d 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -110,6 +110,7 @@ typedef struct PgStat_TableCounts
 
 	PgStat_Counter t_delta_live_tuples;
 	PgStat_Counter t_delta_dead_tuples;
+	PgStat_Counter t_delta_warm_chains;
 	PgStat_Counter t_changed_tuples;
 
 	PgStat_Counter t_blocks_fetched;
@@ -167,11 +168,13 @@ typedef struct PgStat_TableXactStatus
 {
 	PgStat_Counter tuples_inserted;		/* tuples inserted in (sub)xact */
 	PgStat_Counter tuples_updated;		/* tuples updated in (sub)xact */
+	PgStat_Counter tuples_warm_updated;	/* tuples warm-updated in (sub)xact */
 	PgStat_Counter tuples_deleted;		/* tuples deleted in (sub)xact */
 	bool		truncated;		/* relation truncated in this (sub)xact */
 	PgStat_Counter inserted_pre_trunc;	/* tuples inserted prior to truncate */
 	PgStat_Counter updated_pre_trunc;	/* tuples updated prior to truncate */
 	PgStat_Counter deleted_pre_trunc;	/* tuples deleted prior to truncate */
+	PgStat_Counter warm_updated_pre_trunc;	/* tuples warm updated prior to truncate */
 	int			nest_level;		/* subtransaction nest level */
 	/* links to other structs for same relation: */
 	struct PgStat_TableXactStatus *upper;		/* next higher subxact if any */
@@ -370,6 +373,7 @@ typedef struct PgStat_MsgVacuum
 	TimestampTz m_vacuumtime;
 	PgStat_Counter m_live_tuples;
 	PgStat_Counter m_dead_tuples;
+	PgStat_Counter m_warm_chains;
 } PgStat_MsgVacuum;
 
 
@@ -388,6 +392,7 @@ typedef struct PgStat_MsgAnalyze
 	TimestampTz m_analyzetime;
 	PgStat_Counter m_live_tuples;
 	PgStat_Counter m_dead_tuples;
+	PgStat_Counter m_warm_chains;
 } PgStat_MsgAnalyze;
 
 
@@ -630,6 +635,7 @@ typedef struct PgStat_StatTabEntry
 
 	PgStat_Counter n_live_tuples;
 	PgStat_Counter n_dead_tuples;
+	PgStat_Counter n_warm_chains;
 	PgStat_Counter changes_since_analyze;
 
 	PgStat_Counter blocks_fetched;
@@ -1131,10 +1137,11 @@ extern void pgstat_reset_single_counter(Oid objectid, PgStat_Single_Reset_Type t
 
 extern void pgstat_report_autovac(Oid dboid);
 extern void pgstat_report_vacuum(Oid tableoid, bool shared,
-					 PgStat_Counter livetuples, PgStat_Counter deadtuples);
+					 PgStat_Counter livetuples, PgStat_Counter deadtuples,
+					 PgStat_Counter warmchains);
 extern void pgstat_report_analyze(Relation rel,
 					  PgStat_Counter livetuples, PgStat_Counter deadtuples,
-					  bool resetcounter);
+					  PgStat_Counter warmchains, bool resetcounter);
 
 extern void pgstat_report_recovery_conflict(int reason);
 extern void pgstat_report_deadlock(void);
diff --git a/src/include/postmaster/autovacuum.h b/src/include/postmaster/autovacuum.h
index 99d7f09..5ac9c8f 100644
--- a/src/include/postmaster/autovacuum.h
+++ b/src/include/postmaster/autovacuum.h
@@ -28,6 +28,8 @@ extern int	autovacuum_freeze_max_age;
 extern int	autovacuum_multixact_freeze_max_age;
 extern int	autovacuum_vac_cost_delay;
 extern int	autovacuum_vac_cost_limit;
+extern double autovacuum_warmcleanup_scale;
+extern double autovacuum_warmcleanup_index_scale;
 
 /* autovacuum launcher PID, only valid when worker is shutting down */
 extern int	AutovacuumLauncherPid;
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index 2da9115..cd4532b 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -68,6 +68,7 @@ enum config_group
 	WAL_SETTINGS,
 	WAL_CHECKPOINTS,
 	WAL_ARCHIVING,
+	WARM_CLEANUP,
 	REPLICATION,
 	REPLICATION_SENDING,
 	REPLICATION_MASTER,
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index cd1976a..9164f60 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -276,6 +276,8 @@ typedef struct AutoVacOpts
 	int			log_min_duration;
 	float8		vacuum_scale_factor;
 	float8		analyze_scale_factor;
+	float8		warmcleanup_scale_factor;
+	float8		warmcleanup_index_scale;
 } AutoVacOpts;
 
 typedef struct StdRdOptions
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index a37d443..1b2a69a 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1758,6 +1758,7 @@ pg_stat_all_tables| SELECT c.oid AS relid,
     pg_stat_get_tuples_warm_updated(c.oid) AS n_tup_warm_upd,
     pg_stat_get_live_tuples(c.oid) AS n_live_tup,
     pg_stat_get_dead_tuples(c.oid) AS n_dead_tup,
+    pg_stat_get_warm_chains(c.oid) AS n_warm_chains,
     pg_stat_get_mod_since_analyze(c.oid) AS n_mod_since_analyze,
     pg_stat_get_last_vacuum_time(c.oid) AS last_vacuum,
     pg_stat_get_last_autovacuum_time(c.oid) AS last_autovacuum,
@@ -1906,6 +1907,7 @@ pg_stat_sys_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
+    pg_stat_all_tables.n_warm_chains,
     pg_stat_all_tables.n_mod_since_analyze,
     pg_stat_all_tables.last_vacuum,
     pg_stat_all_tables.last_autovacuum,
@@ -1950,6 +1952,7 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
+    pg_stat_all_tables.n_warm_chains,
     pg_stat_all_tables.n_mod_since_analyze,
     pg_stat_all_tables.last_vacuum,
     pg_stat_all_tables.last_autovacuum,
diff --git a/src/test/regress/expected/warm.out b/src/test/regress/expected/warm.out
index 8aa1505..1346bb1 100644
--- a/src/test/regress/expected/warm.out
+++ b/src/test/regress/expected/warm.out
@@ -745,3 +745,61 @@ SELECT a, b FROM test_toast_warm WHERE b = 104.20;
 (1 row)
 
 DROP TABLE test_toast_warm;
+-- Test VACUUM
+CREATE TABLE test_vacuum_warm (a int unique, b text, c int, d int);
+CREATE INDEX test_vacuum_warm_index1 ON test_vacuum_warm(b);
+CREATE INDEX test_vacuum_warm_index2 ON test_vacuum_warm(c);
+INSERT INTO test_vacuum_warm VALUES (1, 'a', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (2, 'b', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (3, 'c', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (4, 'd', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (5, 'e', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (6, 'f', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (7, 'g', 100, 200);
+UPDATE test_vacuum_warm SET b = 'u', c = 300 WHERE a = 1;
+UPDATE test_vacuum_warm SET b = 'v', c = 300 WHERE a = 2;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 3;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 4;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 5;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 6;
+-- a plain vacuum cannot clear WARM chains.
+SET enable_seqscan = false;
+SET enable_bitmapscan = false;
+SET seq_page_cost = 10000;
+VACUUM test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+                                        QUERY PLAN                                         
+-------------------------------------------------------------------------------------------
+ Index Only Scan using test_vacuum_warm_index1 on test_vacuum_warm (actual rows=1 loops=1)
+   Index Cond: (b = 'u'::text)
+   Heap Fetches: 1
+(3 rows)
+
+-- Now set vacuum_warmcleanup_index_scale_factor such that only
+-- test_vacuum_warm_index2 can be cleaned up.
+SET vacuum_warmcleanup_index_scale_factor=0.5;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+                                        QUERY PLAN                                         
+-------------------------------------------------------------------------------------------
+ Index Only Scan using test_vacuum_warm_index1 on test_vacuum_warm (actual rows=1 loops=1)
+   Index Cond: (b = 'u'::text)
+   Heap Fetches: 1
+(3 rows)
+
+-- All WARM chains cleaned up, so index-only scan should be used now without
+-- any heap fetches
+SET vacuum_warmcleanup_index_scale_factor=0;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect zero heap-fetches now
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+                                        QUERY PLAN                                         
+-------------------------------------------------------------------------------------------
+ Index Only Scan using test_vacuum_warm_index1 on test_vacuum_warm (actual rows=1 loops=1)
+   Index Cond: (b = 'u'::text)
+   Heap Fetches: 0
+(3 rows)
+
+DROP TABLE test_vacuum_warm;
diff --git a/src/test/regress/sql/warm.sql b/src/test/regress/sql/warm.sql
index ab61cfb..0d751e2 100644
--- a/src/test/regress/sql/warm.sql
+++ b/src/test/regress/sql/warm.sql
@@ -284,3 +284,50 @@ SELECT a, b FROM test_toast_warm WHERE b = 104.201;
 SELECT a, b FROM test_toast_warm WHERE b = 104.20;
 
 DROP TABLE test_toast_warm;
+
+
+-- Test VACUUM
+
+CREATE TABLE test_vacuum_warm (a int unique, b text, c int, d int);
+CREATE INDEX test_vacuum_warm_index1 ON test_vacuum_warm(b);
+CREATE INDEX test_vacuum_warm_index2 ON test_vacuum_warm(c);
+
+INSERT INTO test_vacuum_warm VALUES (1, 'a', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (2, 'b', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (3, 'c', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (4, 'd', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (5, 'e', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (6, 'f', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (7, 'g', 100, 200);
+
+UPDATE test_vacuum_warm SET b = 'u', c = 300 WHERE a = 1;
+UPDATE test_vacuum_warm SET b = 'v', c = 300 WHERE a = 2;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 3;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 4;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 5;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 6;
+
+-- a plain vacuum cannot clear WARM chains.
+SET enable_seqscan = false;
+SET enable_bitmapscan = false;
+SET seq_page_cost = 10000;
+VACUUM test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+
+-- Now set vacuum_warmcleanup_index_scale_factor such that only
+-- test_vacuum_warm_index2 can be cleaned up.
+SET vacuum_warmcleanup_index_scale_factor=0.5;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+
+
+-- All WARM chains cleaned up, so index-only scan should be used now without
+-- any heap fetches
+SET vacuum_warmcleanup_index_scale_factor=0;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect zero heap-fetches now
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+
+DROP TABLE test_vacuum_warm;
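
For reference, the knobs added by this patch fit together roughly as follows. This is a hedged sketch, not part of the patch: the `n_warm_chains` statistics column, the `vacuum_warmcleanup_index_scale_factor` GUC, and the `VACUUM WARMCLEAN` syntax are taken from the diffs above and exist only with the patch applied.

```sql
-- Watch WARM chains accumulate via the new statistics column
SELECT relname, n_tup_warm_upd, n_warm_chains
FROM pg_stat_user_tables
WHERE relname = 'test_vacuum_warm';

-- Clean up WARM pointers in every index, regardless of how few there are
SET vacuum_warmcleanup_index_scale_factor = 0;
VACUUM WARMCLEAN test_vacuum_warm;
```

With a non-zero scale factor, only indexes whose WARM pointer count exceeds the given fraction of total WARM chains are cleaned up, which is what the regression test above exercises with the 0.5 setting.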
Attachment: 0006_warm_taptests_v21.patch (application/octet-stream)
diff --git b/src/test/modules/warm/t/001_recovery.pl a/src/test/modules/warm/t/001_recovery.pl
new file mode 100644
index 0000000..2a76830
--- /dev/null
+++ a/src/test/modules/warm/t/001_recovery.pl
@@ -0,0 +1,50 @@
+# Single-node test: run workload, crash, recover and run sanity check
+
+use strict;
+use warnings;
+
+use TestLib;
+use Test::More tests => 2;
+use PostgresNode;
+
+my $node = get_new_node('main');
+$node->init;
+$node->start;
+
+# Create a table, do some WARM updates and then restart
+$node->safe_psql('postgres',
+	'create table accounts (aid int unique, branch int, balance bigint) with (fillfactor=98)');
+$node->safe_psql('postgres',
+	'create table history (aid int, delta int)');
+$node->safe_psql('postgres',
+	'insert into accounts select generate_series(1,10000), (random()*1000)::int % 10, 0');
+$node->safe_psql('postgres',
+	'create index accounts_bal_indx on accounts(balance)');
+
+for( $a = 1; $a <= 1000; $a = $a + 1 ) {
+	my $aid1 = int(rand(10000)) + 1;
+	my $aid2 = int(rand(10000)) + 1;
+	my $balance = int(rand(99999));
+	$node->safe_psql('postgres',
+		"begin;
+		 update accounts set balance = balance + $balance where aid = $aid1;
+		 update accounts set balance = balance - $balance where aid = $aid2;
+		 insert into history values ($aid1, $balance);
+		 insert into history values ($aid2, 0 - $balance);
+		 end;");
+}
+
+# Verify that we read consistent totals after crash recovery
+$node->stop('immediate');
+$node->start;
+
+my $recovered_balance = $node->safe_psql('postgres', 'select sum(balance) from accounts');
+my $total_delta = $node->safe_psql('postgres', 'select sum(delta) from history');
+
+# since delta is credited to one account and debited from the other, we expect
+# the sum(balance) to stay zero.
+is($recovered_balance, 0, 'balance matches after recovery');
+
+# A positive and a negative value are inserted into the history table. Hence the
+# sum(delta) should remain zero.
+is($total_delta, 0, 'sum(delta) matches after recovery');
diff --git b/src/test/modules/warm/t/002_warm_stress.pl a/src/test/modules/warm/t/002_warm_stress.pl
new file mode 100644
index 0000000..a1a2371
--- /dev/null
+++ a/src/test/modules/warm/t/002_warm_stress.pl
@@ -0,0 +1,289 @@
+# Run a variety of tests to check the consistency of index access.
+#
+# These tests are primarily designed to test if WARM updates cause any
+# inconsistency in the indexes. We use a pgbench-like setup with an "accounts"
+# table and a "branches" table. But instead of a single "aid" column the
+# pgbench_warm_accounts table has four additional columns. These columns have
+# initial values of "aid * 10", "aid * 20", "aid * 30" and "aid * 40". And
+# unlike the aid column, values in these columns do not remain static. The
+# values are changed within a narrow range around the original value, such that
+# they still remain distinct, even after updates. We also build indexes on
+# these additional columns.
+#
+# This allows us to force WARM updates to the table, while accessing individual
+# rows using these auxiliary columns. If things are solid, we must not miss any
+# row irrespective of which column we use to fetch the row. Also, the sum of
+# balances in two tables should match at the end.
+#
+# We drop and recreate indexes concurrently, run VACUUM, and run consistency
+# checks to ensure nothing breaks. The tests also abort transactions and
+# acquire share/update locks etc. to check for any negative effects of those
+# operations.
+
+use strict;
+use warnings;
+
+use TestLib;
+use Test::More tests => 10;
+use PostgresNode;
+
+# Different kinds of queries, some committing, some aborting. Also include FOR
+# SHARE, FOR UPDATE which may have implications on the visibility bits etc.
+my @query_set1 = (
+
+	"begin;
+	update pgbench_warm_accounts set abalance = abalance + :delta where aid = :aid;
+	select abalance from pgbench_warm_accounts where aid = :aid;
+	update pgbench_warm_branches set bbalance = bbalance + :delta where bid = :bid;
+	end;",
+
+	"begin;
+	update pgbench_warm_accounts set abalance = abalance + :delta where aid = :aid;
+	select abalance from pgbench_warm_accounts where aid = :aid;
+	update pgbench_warm_branches set bbalance = bbalance + :delta where bid = :bid;
+	rollback;",
+
+	"begin;
+	select abalance from pgbench_warm_accounts where aid = :aid for update;
+	update pgbench_warm_accounts set abalance = abalance + :delta where aid = :aid;
+	select bbalance from pgbench_warm_branches where bid = :bid for update;
+	update pgbench_warm_branches set bbalance = bbalance + :delta where bid = :bid;
+	commit;",
+
+	"begin;
+	select abalance from pgbench_warm_accounts where aid = :aid for update;
+	update pgbench_warm_accounts set abalance = abalance + :delta where aid = :aid;
+	select bbalance from pgbench_warm_branches where bid = :bid for update;
+	update pgbench_warm_branches set bbalance = bbalance + :delta where bid = :bid;
+	rollback;",
+
+	"begin;
+	select abalance from pgbench_warm_accounts where aid = :aid for share;
+	update pgbench_warm_accounts set abalance = abalance + :delta where aid = :aid;
+	select bbalance from pgbench_warm_branches where bid = :bid for update;
+	update pgbench_warm_branches set bbalance = bbalance + :delta where bid = :bid;
+	commit;",
+
+	"begin;
+	select abalance from pgbench_warm_accounts where aid = :aid for update;
+	update pgbench_warm_accounts set abalance = abalance + :delta where aid = :aid;
+	select bbalance from pgbench_warm_branches where bid = :bid for update;
+	update pgbench_warm_branches set bbalance = bbalance + :delta where bid = :bid;
+	rollback;"
+);
+
+# The following queries use user-defined functions to update rows in
+# pgbench_warm_accounts table by using auxiliary columns. This allows us to
+# test if the updates are working fine in various scenarios.
+my @query_set2 = (
+
+	"begin;
+	set enable_seqscan to false;
+	select pgbench_warm_update_using_aid1(:chg1, :aid, :bid, :delta);
+	commit;",
+
+	"begin;
+	set enable_seqscan to false;
+	select pgbench_warm_update_using_aid2(:chg2, :aid, :bid, :delta);
+	commit;",
+
+	"begin;
+	set enable_seqscan to false;
+	select pgbench_warm_update_using_aid3(:chg3, :aid, :bid, :delta);
+	commit;",
+
+	"begin;
+	set enable_seqscan to false;
+	select pgbench_warm_update_using_aid4(:chg4, :aid, :bid, :delta);
+	commit;",
+
+	"begin;
+	set enable_seqscan to false;
+	select pgbench_warm_update_using_aid1(:chg1, :aid, :bid, :delta);
+	rollback;",
+
+	"begin;
+	set enable_seqscan to false;
+	select pgbench_warm_update_using_aid2(:chg2, :aid, :bid, :delta);
+	rollback;",
+
+	"begin;
+	set enable_seqscan to false;
+	select pgbench_warm_update_using_aid3(:chg3, :aid, :bid, :delta);
+	rollback;",
+
+	"begin;
+	set enable_seqscan to false;
+	select pgbench_warm_update_using_aid4(:chg4, :aid, :bid, :delta);
+	rollback;"
+);
+
+# Specify concurrent DDLs that you may want to execute with the tests.
+my @ddl_queries = (
+	"drop index pgb_a_aid1;
+	 create index pgb_a_aid1 on pgbench_warm_accounts(aid1);",
+	"drop index pgb_a_aid2;
+	 create index pgb_a_aid2 on pgbench_warm_accounts(aid2);",
+	"drop index pgb_a_aid3;
+	 create index pgb_a_aid3 on pgbench_warm_accounts using hash (aid3);",
+	"drop index pgb_a_aid4;
+	 create index pgb_a_aid4 on pgbench_warm_accounts(aid4);",
+	"drop index pgb_a_aid1;
+	 create index concurrently pgb_a_aid1 on pgbench_warm_accounts(aid1);",
+	"drop index pgb_a_aid2;
+	 create index concurrently pgb_a_aid2 on pgbench_warm_accounts(aid2);",
+	"drop index pgb_a_aid3;
+	 create index concurrently pgb_a_aid3 on pgbench_warm_accounts using hash (aid3);",
+	"drop index pgb_a_aid4;
+	 create index concurrently pgb_a_aid4 on pgbench_warm_accounts(aid4);",
+	"vacuum pgbench_warm_accounts",
+	"vacuum pgbench_warm_branches",
+	"vacuum full pgbench_warm_accounts",
+	"vacuum full pgbench_warm_branches"
+);
+
+# Consistency check queries.
+my @check_queries = (
+	"set enable_seqscan to false; select pgbench_warm_check_consistency();",
+	"set enable_seqscan to false; select pgbench_warm_check_row(:aid);"
+);
+
+my $node = get_new_node('main');
+$node->init;
+$node->start;
+
+# prepare the test for execution
+$node->run_log([ 'psql', '-X', $node->connstr(), '-f', 't/warm_stress_init.sql']);
+
+my $res = $node->safe_psql('postgres', "select proname from pg_proc where proname = 'pgbench_warm_update_using_aid1'");
+is($res, 'pgbench_warm_update_using_aid1', 'init script installed functions');
+
+$res = $node->safe_psql('postgres', "select count(*) from pgbench_warm_accounts");
+is($res, 10000, 'row count matches');
+
+# Start as many connections as we need
+sub create_connections {
+	my $count = shift;
+	my @handles;
+	my ($stdin, $stdout, $stderr) = ('','','');
+	for (my $proc = 0; $proc < $count; $proc = $proc + 1) {
+		my $handle = IPC::Run::start(
+			[
+				'psql', '-X', '-f', '-', $node->connstr(),
+			],
+			\$stdin, \$stdout, \$stderr);
+		push @handles, [$handle,\$stdin,\$stdout,\$stderr];
+	}
+	return \@handles;
+}
+
+sub check_connections {
+	my @handles = @_;
+	my $failures = 0;
+	foreach my $elem (@handles) {
+		my ($handle, $stdin, $stdout, $stderr) = @$elem;
+		# Wait for all queries to complete and psql sessions to exit, checking
+		# exit codes. We don't need to do the fancy interpretation safe_psql
+		# does.
+		$handle->finish;
+		if (!is($handle->full_result(0), 0, "psql exited normally"))
+		{
+			$failures ++;
+			diag "psql exit code: " . ($handle->result(0)) . " or signal: " . ($handle->full_result(0) & 127);
+			diag "Stdout:\n---\n$$stdout\n---\nStderr:\n----\n$$stderr\n---";
+		}
+	}
+	return $failures;
+}
+
+my $set1_handles = create_connections(3);
+my $set2_handles = create_connections(3);
+my $aux_handles = create_connections(1);
+
+# Run a few thousand transactions, using various kinds of queries
+my $scale = 1;
+for (my $txn = 0; $txn < 10000; $txn = $txn + 1) {
+	# Run a randomly chosen query from set1
+	my $aid = int(rand($scale*10000)) + 1;
+	my $bid = int(rand(100)) + 1;
+	my $delta = int(rand(1000)) - 500;
+
+	my $connindx = rand(@$set1_handles);
+	my $elem = @$set1_handles[$connindx];
+	my ($handle, $stdin, $stdout, $stderr) = @$elem;
+
+	my $queryindx = rand(@query_set1);
+	my $query = $query_set1[$queryindx];
+
+	$query =~ s/\:aid/$aid/g;
+	$query =~ s/\:bid/$bid/g;
+	$query =~ s/\:delta/$delta/g;
+
+	$$stdin .= $query . "\n";
+	pump $handle while length $$stdin;
+
+	# Run a randomly chosen query from set2
+	my $chg1 = int(rand(4)) - 2;
+	my $chg2 = int(rand(6)) - 3;
+	my $chg3 = int(rand(8)) - 4;
+	my $chg4 = int(rand(10)) - 5;
+
+	$connindx = rand(@$set2_handles);
+	$elem = @$set2_handles[$connindx];
+	($handle, $stdin, $stdout, $stderr) = @$elem;
+
+	$queryindx = rand(@query_set2);
+	$query = $query_set2[$queryindx];
+
+	$query =~ s/\:aid/$aid/g;
+	$query =~ s/\:bid/$bid/g;
+	$query =~ s/\:delta/$delta/g;
+	$query =~ s/\:chg1/$chg1/g;
+	$query =~ s/\:chg2/$chg2/g;
+	$query =~ s/\:chg3/$chg3/g;
+	$query =~ s/\:chg4/$chg4/g;
+
+	$$stdin .= $query . "\n";
+	pump $handle while length $$stdin;
+
+	# Some randomly picked numbers to run DDLs and consistency checks
+	my $random = int(rand(100));
+
+	# Consistency checks, roughly every 5 transactions
+	if ($random % 5 == 0)
+	{
+		$connindx = rand(@$aux_handles);
+		$elem = @$aux_handles[$connindx];
+		($handle, $stdin, $stdout, $stderr) = @$elem;
+
+		$queryindx = rand(@check_queries);
+		$query = $check_queries[$queryindx];
+
+		$$stdin .= $query . "\n";
+		pump $handle while length $$stdin;
+	}
+
+	# 1% DDLs
+	if ($random == 17)
+	{
+		$connindx = rand(@$aux_handles);
+		$elem = @$aux_handles[$connindx];
+		($handle, $stdin, $stdout, $stderr) = @$elem;
+
+		$queryindx = rand(@ddl_queries);
+		$query = $ddl_queries[$queryindx];
+
+		$$stdin .= $query . "\n";
+		pump $handle while length $$stdin;
+	}
+}
+
+check_connections(@$set1_handles);
+check_connections(@$set2_handles);
+check_connections(@$aux_handles);
+
+# Run final consistency checks
+my $res1 = $node->safe_psql('postgres', "select sum(abalance) from pgbench_warm_accounts");
+my $res2 = $node->safe_psql('postgres', "select sum(bbalance) from pgbench_warm_branches");
+is($res1, $res2, 'account and branch balance sums match');
diff --git b/src/test/modules/warm/t/warm_stress_init.sql a/src/test/modules/warm/t/warm_stress_init.sql
new file mode 100644
index 0000000..4697480
--- /dev/null
+++ a/src/test/modules/warm/t/warm_stress_init.sql
@@ -0,0 +1,209 @@
+
+drop table if exists pgbench_warm_branches;
+drop table if exists pgbench_warm_accounts;
+
+create table pgbench_warm_branches (
+	bid bigint,
+	bbalance bigint);
+
+create table pgbench_warm_accounts (
+	aid bigint,
+	bid bigint,
+	abalance bigint,
+	aid1 bigint ,
+	aid2 bigint ,
+	aid3 bigint ,
+	aid4 bigint ,
+	aid5 text default md5(random()::text),
+	aid6 text default md5(random()::text),
+	aid7 text default md5(random()::text),
+	aid8 text default md5(random()::text),
+	aid9 text default md5(random()::text),
+	aid10 text default md5(random()::text),
+	gistcol	polygon default null
+);
+
+-- update using aid1. aid1 should stay within the range (aid * 10 - 2 <= aid1 <= aid * 10 + 2) 
+create or replace function pgbench_warm_update_using_aid1(chg integer, v_aid bigint, v_bid bigint, delta bigint)
+returns void as $$
+declare
+	qry varchar;
+	lower varchar;
+	upper varchar;
+	range integer;
+	aid_updated bigint;
+begin
+	range := 2;
+	update pgbench_warm_accounts p set aid1 = aid1 +  chg,  abalance = abalance +
+delta  where aid1 >= v_aid * 10 - range - chg and aid1 <= v_aid * 10 + range - chg
+returning p.aid into aid_updated;
+	if aid_updated is not null then
+		update pgbench_warm_branches p set bbalance = bbalance + delta where p.bid = v_bid;
+	else
+		select aid into aid_updated from pgbench_warm_accounts p where aid1 >=
+v_aid * 10 - range and aid1 <= v_aid * 10 + range;
+		if aid_updated is null then
+			raise exception 'pgbench_warm_accounts row not found';
+		end if;
+	end if;
+end
+$$ language plpgsql;
+
+-- update using aid2. aid2 should stay within the range (aid * 20 - 4 <= aid2 <= aid * 20 + 4) 
+create or replace function pgbench_warm_update_using_aid2(chg integer, v_aid bigint, v_bid bigint, delta bigint)
+returns void as $$
+declare
+	qry varchar;
+	lower varchar;
+	upper varchar;
+	range integer;
+	aid_updated bigint;
+begin
+	range := 4;
+	update pgbench_warm_accounts p set aid2 = aid2 +  chg,  abalance = abalance +
+delta  where aid2 >= v_aid * 20 - range - chg and aid2 <= v_aid * 20 + range - chg
+returning p.aid into aid_updated;
+	if aid_updated is not null then
+		update pgbench_warm_branches p set bbalance = bbalance + delta where p.bid = v_bid;
+	else
+		select aid into aid_updated from pgbench_warm_accounts p where aid2 >= v_aid * 20 - range and aid2 <= v_aid * 20 + range;
+		if aid_updated is null then
+			raise exception 'pgbench_warm_accounts row not found';
+		end if;
+	end if;
+end
+$$ language plpgsql;
+
+-- update using aid3. aid3 should stay within the range (aid * 30 - 6 <= aid3 <= aid * 30 + 6) 
+create or replace function pgbench_warm_update_using_aid3(chg integer, v_aid bigint, v_bid bigint, delta bigint)
+returns void as $$
+declare
+	qry varchar;
+	lower varchar;
+	upper varchar;
+	range integer;
+	aid_updated bigint;
+begin
+	range := 6;
+	update pgbench_warm_accounts p set aid3 = aid3 +  chg,  abalance = abalance +
+delta  where aid3 >= v_aid * 30 - range - chg and aid3 <= v_aid * 30 + range - chg
+returning p.aid into aid_updated;
+	if aid_updated is not null then
+		update pgbench_warm_branches p set bbalance = bbalance + delta where p.bid = v_bid;
+	else
+		select aid into aid_updated from pgbench_warm_accounts p where aid3 >= v_aid * 30 - range and aid3 <= v_aid * 30 + range;
+		if aid_updated is null then
+			raise exception 'pgbench_warm_accounts row not found';
+		end if;
+	end if;
+end
+$$ language plpgsql;
+
+-- update using aid4. aid4 should stay within the range (aid * 40 - 8 <= aid4 <= aid * 40 + 8) 
+create or replace function pgbench_warm_update_using_aid4(chg integer, v_aid bigint, v_bid bigint, delta bigint)
+returns void as $$
+declare
+	qry varchar;
+	lower varchar;
+	upper varchar;
+	range integer;
+	aid_updated bigint;
+begin
+	range := 8;
+	update pgbench_warm_accounts p set aid4 = aid4 +  chg,  abalance = abalance +
+delta  where aid4 >= v_aid * 40 - range - chg and aid4 <= v_aid * 40 + range - chg
+returning p.aid into aid_updated;
+	if aid_updated is not null then
+		update pgbench_warm_branches p set bbalance = bbalance + delta where p.bid = v_bid;
+	else
+		select aid into aid_updated from pgbench_warm_accounts p where aid4 >= v_aid * 40 - range and aid4 <= v_aid * 40 + range;
+		if aid_updated is null then
+			raise exception 'pgbench_warm_accounts row not found';
+		end if;
+	end if;
+end
+$$ language plpgsql;
+
+-- ensure that exactly one row exists within a given range. use different
+-- indexes to fetch the row
+create or replace function pgbench_warm_check_row(v_aid bigint)
+returns void as $$
+declare
+	range integer;
+	factor integer;
+	ret_aid1 bigint;
+	ret_aid2 bigint;
+	ret_aid3 bigint;
+	ret_aid4 bigint;
+begin
+	range := 2;
+	factor := 10;
+	select aid into ret_aid1 from pgbench_warm_accounts p where aid1 >= v_aid *
+		factor - range and aid1 <= v_aid * factor + range;
+
+	range := 4;
+	factor := 20;
+	select aid into ret_aid2 from pgbench_warm_accounts p where aid2 >= v_aid *
+		factor - range and aid2 <= v_aid * factor + range;
+
+	range := 6;
+	factor := 30;
+	select aid into ret_aid3 from pgbench_warm_accounts p where aid3 >= v_aid *
+		factor - range and aid3 <= v_aid * factor + range;
+
+	range := 8;
+	factor := 40;
+	select aid into ret_aid4 from pgbench_warm_accounts p where aid4 >= v_aid *
+		factor - range and aid4 <= v_aid * factor + range;
+
+	if ret_aid1 is null or ret_aid1 != v_aid then
+		raise exception 'pgbench_warm_accounts row (%) not found via aid1', v_aid;
+	end if;
+
+	if ret_aid2 is null or ret_aid2 != v_aid then
+		raise exception 'pgbench_warm_accounts row (%) not found via aid2', v_aid;
+	end if;
+
+	if ret_aid3 is null or ret_aid3 != v_aid then
+		raise exception 'pgbench_warm_accounts row (%) not found via aid3', v_aid;
+	end if;
+
+	if ret_aid4 is null or ret_aid4 != v_aid then
+		raise exception 'pgbench_warm_accounts row (%) not found via aid4', v_aid;
+	end if;
+end
+$$ language plpgsql;
+
+create or replace function pgbench_warm_check_consistency()
+returns void as $$
+declare
+	sum_abalance bigint;
+	sum_bbalance bigint;
+begin
+	select sum(abalance) into sum_abalance from pgbench_warm_accounts;
+	select sum(bbalance) into sum_bbalance from pgbench_warm_branches;
+	if sum_abalance != sum_bbalance then
+		raise exception 'found inconsistency in sum (%, %)', sum_abalance, sum_bbalance;
+	end if;
+end
+$$ language plpgsql;
+
+\set end 10000
+insert into pgbench_warm_branches select generate_series(1, 100), 0;
+insert into pgbench_warm_accounts select generate_series(1, :end),
+				(random() * 100)::int, 0,
+				generate_series(1, :end) * 10,
+				generate_series(1, :end) * 20,
+				generate_series(1, :end) * 30,
+				generate_series(1, :end) * 40;
+
+create unique index pgb_a_aid on pgbench_warm_accounts(aid);
+create index pgb_a_aid1 on pgbench_warm_accounts(aid1);
+create index pgb_a_aid2 on pgbench_warm_accounts(aid2);
+create index pgb_a_aid3 on pgbench_warm_accounts using hash(aid3);
+create index pgb_a_aid4 on pgbench_warm_accounts(aid4);
+
+create unique index pgb_b_bid on pgbench_warm_branches(bid);
+create index pgb_b_bbalance on pgbench_warm_branches(bbalance);
+
+vacuum analyze;
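To make the intent of the check functions above concrete, here is a minimal Python sketch (hypothetical in-memory data, not the real pgbench schema) of the invariant that pgbench_warm_check_consistency() verifies: every delta applied to an account balance is mirrored into the owning branch balance, so the two sums must remain equal after any number of updates.

```python
import random

# Hypothetical stand-ins for pgbench_warm_accounts.abalance and
# pgbench_warm_branches.bbalance; branch_of maps each account to a branch.
abalance = {aid: 0 for aid in range(1, 11)}
bbalance = {bid: 0 for bid in range(1, 3)}
branch_of = {aid: 1 if aid <= 5 else 2 for aid in abalance}

for _ in range(100):
    aid = random.choice(sorted(abalance))
    delta = random.randint(-100, 100)
    abalance[aid] += delta             # account-side update
    bbalance[branch_of[aid]] += delta  # mirrored branch-side update

# The invariant the plpgsql check raises an exception on if violated:
assert sum(abalance.values()) == sum(bbalance.values())
print("consistent")
```

Any code path that updates an account without the paired branch update (or vice versa) breaks the assertion, which is exactly the condition the test harness uses to detect lost or duplicated index-driven updates.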
#176Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Pavan Deolasee (#175)
7 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Mon, Mar 27, 2017 at 2:19 PM, Pavan Deolasee <pavan.deolasee@gmail.com>
wrote:

Revised patches are attached.

Hmm... for some reason check_keywords.pl wasn't failing in my development
environment. Or to be precise, it failed once and then almost magically got
fixed... still a mystery to me. Anyway, I think a change in gram.y will be
necessary to make 0007 compile. Attaching the entire set again, with just
0007 fixed.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0007_vacuum_enhancements_v21.patch (application/octet-stream)
diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index 72e1253..b856503 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -338,6 +338,24 @@ static relopt_real realRelOpts[] =
 	},
 	{
 		{
+			"autovacuum_warmcleanup_scale_factor",
+			"Number of WARM chains prior to WARM cleanup as a fraction of reltuples",
+			RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
+			ShareUpdateExclusiveLock
+		},
+		-1, 0.0, 100.0
+	},
+	{
+		{
+			"autovacuum_warmcleanup_index_scale_factor",
+			"Number of WARM pointers in an index prior to WARM cleanup as a fraction of total WARM chains",
+			RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
+			ShareUpdateExclusiveLock
+		},
+		-1, 0.0, 100.0
+	},
+	{
+		{
 			"autovacuum_analyze_scale_factor",
 			"Number of tuple inserts, updates or deletes prior to analyze as a fraction of reltuples",
 			RELOPT_KIND_HEAP,
@@ -1341,6 +1359,10 @@ default_reloptions(Datum reloptions, bool validate, relopt_kind kind)
 		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, vacuum_scale_factor)},
 		{"autovacuum_analyze_scale_factor", RELOPT_TYPE_REAL,
 		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, analyze_scale_factor)},
+		{"autovacuum_warmcleanup_scale_factor", RELOPT_TYPE_REAL,
+		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, warmcleanup_scale_factor)},
+		{"autovacuum_warmcleanup_index_scale_factor", RELOPT_TYPE_REAL,
+		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, warmcleanup_index_scale)},
 		{"user_catalog_table", RELOPT_TYPE_BOOL,
 		offsetof(StdRdOptions, user_catalog_table)},
 		{"parallel_workers", RELOPT_TYPE_INT,
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index ca44e03..8d06c93 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -533,6 +533,7 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_tuples_warm_updated(C.oid) AS n_tup_warm_upd,
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
+            pg_stat_get_warm_chains(C.oid) AS n_warm_chains,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
             pg_stat_get_last_vacuum_time(C.oid) as last_vacuum,
             pg_stat_get_last_autovacuum_time(C.oid) as last_autovacuum,
diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index 404acb2..6c4fc4e 100644
--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -93,7 +93,8 @@ static VacAttrStats *examine_attribute(Relation onerel, int attnum,
 				  Node *index_expr);
 static int acquire_sample_rows(Relation onerel, int elevel,
 					HeapTuple *rows, int targrows,
-					double *totalrows, double *totaldeadrows);
+					double *totalrows, double *totaldeadrows,
+					double *totalwarmchains);
 static int	compare_rows(const void *a, const void *b);
 static int acquire_inherited_sample_rows(Relation onerel, int elevel,
 							  HeapTuple *rows, int targrows,
@@ -320,7 +321,8 @@ do_analyze_rel(Relation onerel, int options, VacuumParams *params,
 	int			targrows,
 				numrows;
 	double		totalrows,
-				totaldeadrows;
+				totaldeadrows,
+				totalwarmchains;
 	HeapTuple  *rows;
 	PGRUsage	ru0;
 	TimestampTz starttime = 0;
@@ -501,7 +503,8 @@ do_analyze_rel(Relation onerel, int options, VacuumParams *params,
 	else
 		numrows = (*acquirefunc) (onerel, elevel,
 								  rows, targrows,
-								  &totalrows, &totaldeadrows);
+								  &totalrows, &totaldeadrows,
+								  &totalwarmchains);
 
 	/*
 	 * Compute the statistics.  Temporary results during the calculations for
@@ -631,7 +634,7 @@ do_analyze_rel(Relation onerel, int options, VacuumParams *params,
 	 */
 	if (!inh)
 		pgstat_report_analyze(onerel, totalrows, totaldeadrows,
-							  (va_cols == NIL));
+							  totalwarmchains, (va_cols == NIL));
 
 	/* If this isn't part of VACUUM ANALYZE, let index AMs do cleanup */
 	if (!(options & VACOPT_VACUUM))
@@ -991,12 +994,14 @@ examine_attribute(Relation onerel, int attnum, Node *index_expr)
 static int
 acquire_sample_rows(Relation onerel, int elevel,
 					HeapTuple *rows, int targrows,
-					double *totalrows, double *totaldeadrows)
+					double *totalrows, double *totaldeadrows,
+					double *totalwarmchains)
 {
 	int			numrows = 0;	/* # rows now in reservoir */
 	double		samplerows = 0; /* total # rows collected */
 	double		liverows = 0;	/* # live rows seen */
 	double		deadrows = 0;	/* # dead rows seen */
+	double		warmchains = 0;
 	double		rowstoskip = -1;	/* -1 means not set yet */
 	BlockNumber totalblocks;
 	TransactionId OldestXmin;
@@ -1023,9 +1028,14 @@ acquire_sample_rows(Relation onerel, int elevel,
 		Page		targpage;
 		OffsetNumber targoffset,
 					maxoffset;
+		bool		marked[MaxHeapTuplesPerPage];
+		OffsetNumber root_offsets[MaxHeapTuplesPerPage];
 
 		vacuum_delay_point();
 
+		/* Track which root line pointers are already counted. */
+		memset(marked, 0, sizeof (marked));
+
 		/*
 		 * We must maintain a pin on the target page's buffer to ensure that
 		 * the maxoffset value stays good (else concurrent VACUUM might delete
@@ -1041,6 +1051,9 @@ acquire_sample_rows(Relation onerel, int elevel,
 		targpage = BufferGetPage(targbuffer);
 		maxoffset = PageGetMaxOffsetNumber(targpage);
 
+		/* Get all root line pointers first */
+		heap_get_root_tuples(targpage, root_offsets);
+
 		/* Inner loop over all tuples on the selected page */
 		for (targoffset = FirstOffsetNumber; targoffset <= maxoffset; targoffset++)
 		{
@@ -1069,6 +1082,22 @@ acquire_sample_rows(Relation onerel, int elevel,
 			targtuple.t_data = (HeapTupleHeader) PageGetItem(targpage, itemid);
 			targtuple.t_len = ItemIdGetLength(itemid);
 
+			/*
+			 * If this is a WARM-updated tuple, check if we have already seen
+			 * the root line pointer. If not, count this as a WARM chain. This
+			 * ensures that we count every WARM-chain just once, irrespective
+			 * of how many tuples exist in the chain.
+			 */
+			if (HeapTupleHeaderIsWarmUpdated(targtuple.t_data))
+			{
+				OffsetNumber root_offnum = root_offsets[targoffset];
+				if (!marked[root_offnum])
+				{
+					warmchains += 1;
+					marked[root_offnum] = true;
+				}
+			}
+
 			switch (HeapTupleSatisfiesVacuum(&targtuple,
 											 OldestXmin,
 											 targbuffer))
@@ -1200,18 +1229,24 @@ acquire_sample_rows(Relation onerel, int elevel,
 
 	/*
 	 * Estimate total numbers of rows in relation.  For live rows, use
-	 * vac_estimate_reltuples; for dead rows, we have no source of old
-	 * information, so we have to assume the density is the same in unseen
-	 * pages as in the pages we scanned.
+	 * vac_estimate_reltuples; for dead rows and WARM chains, we have no source
+	 * of old information, so we have to assume the density is the same in
+	 * unseen pages as in the pages we scanned.
 	 */
 	*totalrows = vac_estimate_reltuples(onerel, true,
 										totalblocks,
 										bs.m,
 										liverows);
 	if (bs.m > 0)
+	{
 		*totaldeadrows = floor((deadrows / bs.m) * totalblocks + 0.5);
+		*totalwarmchains = floor((warmchains / bs.m) * totalblocks + 0.5);
+	}
 	else
+	{
 		*totaldeadrows = 0.0;
+		*totalwarmchains = 0.0;
+	}
 
 	/*
 	 * Emit some interesting relation info
@@ -1219,11 +1254,13 @@ acquire_sample_rows(Relation onerel, int elevel,
 	ereport(elevel,
 			(errmsg("\"%s\": scanned %d of %u pages, "
 					"containing %.0f live rows and %.0f dead rows; "
-					"%d rows in sample, %.0f estimated total rows",
+					"%d rows in sample, %.0f estimated total rows; "
+					"%.0f warm chains",
 					RelationGetRelationName(onerel),
 					bs.m, totalblocks,
 					liverows, deadrows,
-					numrows, *totalrows)));
+					numrows, *totalrows,
+					*totalwarmchains)));
 
 	return numrows;
 }
@@ -1428,11 +1465,12 @@ acquire_inherited_sample_rows(Relation onerel, int elevel,
 				int			childrows;
 				double		trows,
 							tdrows;
+				double		twarmchains;
 
 				/* Fetch a random sample of the child's rows */
 				childrows = (*acquirefunc) (childrel, elevel,
 											rows + numrows, childtargrows,
-											&trows, &tdrows);
+											&trows, &tdrows, &twarmchains);
 
 				/* We may need to convert from child's rowtype to parent's */
 				if (childrows > 0 &&
diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index 9fbb0eb..52a7838 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -103,6 +103,7 @@ ExecVacuum(VacuumStmt *vacstmt, bool isTopLevel)
 		params.freeze_table_age = 0;
 		params.multixact_freeze_min_age = 0;
 		params.multixact_freeze_table_age = 0;
+		params.warmcleanup_index_scale = -1;
 	}
 	else
 	{
@@ -110,6 +111,7 @@ ExecVacuum(VacuumStmt *vacstmt, bool isTopLevel)
 		params.freeze_table_age = -1;
 		params.multixact_freeze_min_age = -1;
 		params.multixact_freeze_table_age = -1;
+		params.warmcleanup_index_scale = -1;
 	}
 
 	/* user-invoked vacuum is never "for wraparound" */
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index f52490f..87510ea 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -187,11 +187,12 @@ static BufferAccessStrategy vac_strategy;
 /* non-export function prototypes */
 static void lazy_scan_heap(Relation onerel, int options,
 			   LVRelStats *vacrelstats, Relation *Irel, int nindexes,
-			   bool aggressive);
+			   bool aggressive, double warmcleanup_index_scale);
 static void lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats);
 static bool lazy_check_needs_freeze(Buffer buf, bool *hastup);
 static void lazy_vacuum_index(Relation indrel,
 				  bool clear_warm,
+				  double warmcleanup_index_scale,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats);
 static void lazy_cleanup_index(Relation indrel,
@@ -207,7 +208,8 @@ static bool should_attempt_truncation(LVRelStats *vacrelstats);
 static void lazy_truncate_heap(Relation onerel, LVRelStats *vacrelstats);
 static BlockNumber count_nondeletable_pages(Relation onerel,
 						 LVRelStats *vacrelstats);
-static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks);
+static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks,
+					   bool dowarmcleanup);
 static void lazy_record_dead_tuple(LVRelStats *vacrelstats,
 					   ItemPointer itemptr);
 static void lazy_record_warm_chain(LVRelStats *vacrelstats,
@@ -283,6 +285,9 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 						  &OldestXmin, &FreezeLimit, &xidFullScanLimit,
 						  &MultiXactCutoff, &mxactFullScanLimit);
 
+	/* Use default if the caller hasn't specified any value */
+	if (params->warmcleanup_index_scale == -1)
+		params->warmcleanup_index_scale = VacuumWarmCleanupIndexScale;
 	/*
 	 * We request an aggressive scan if the table's frozen Xid is now older
 	 * than or equal to the requested Xid full-table scan limit; or if the
@@ -309,7 +314,8 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 	vacrelstats->hasindex = (nindexes > 0);
 
 	/* Do the vacuuming */
-	lazy_scan_heap(onerel, options, vacrelstats, Irel, nindexes, aggressive);
+	lazy_scan_heap(onerel, options, vacrelstats, Irel, nindexes, aggressive,
+			params->warmcleanup_index_scale);
 
 	/* Done with indexes */
 	vac_close_indexes(nindexes, Irel, NoLock);
@@ -396,7 +402,8 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 	pgstat_report_vacuum(RelationGetRelid(onerel),
 						 onerel->rd_rel->relisshared,
 						 new_live_tuples,
-						 vacrelstats->new_dead_tuples);
+						 vacrelstats->new_dead_tuples,
+						 vacrelstats->num_non_convertible_warm_chains);
 	pgstat_progress_end_command();
 
 	/* and log the action if appropriate */
@@ -507,10 +514,19 @@ vacuum_log_cleanup_info(Relation rel, LVRelStats *vacrelstats)
  *		If there are no indexes then we can reclaim line pointers on the fly;
  *		dead line pointers need only be retained until all index pointers that
  *		reference them have been killed.
+ *
+ *		warmcleanup_index_scale specifies the number of WARM pointers in an
+ *		index as a fraction of total candidate WARM chains. If we find fewer
+ *		WARM pointers in an index than the specified fraction, we skip WARM
+ *		cleanup for that index. If WARM cleanup is skipped for any one
+ *		index, the WARM chains can't be cleared in the heap and no further
+ *		WARM updates are possible on such chains. Such chains are also not
+ *		considered for WARM cleanup in other indexes.
  */
 static void
 lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
-			   Relation *Irel, int nindexes, bool aggressive)
+			   Relation *Irel, int nindexes, bool aggressive,
+			   double warmcleanup_index_scale)
 {
 	BlockNumber nblocks,
 				blkno;
@@ -536,6 +552,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		PROGRESS_VACUUM_MAX_DEAD_TUPLES
 	};
 	int64		initprog_val[3];
+	bool		dowarmcleanup = ((options & VACOPT_WARM_CLEANUP) != 0);
 
 	pg_rusage_init(&ru0);
 
@@ -558,7 +575,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 	vacrelstats->nonempty_pages = 0;
 	vacrelstats->latestRemovedXid = InvalidTransactionId;
 
-	lazy_space_alloc(vacrelstats, nblocks);
+	lazy_space_alloc(vacrelstats, nblocks, dowarmcleanup);
 	frozen = palloc(sizeof(xl_heap_freeze_tuple) * MaxHeapTuplesPerPage);
 
 	/* Report that we're scanning the heap, advertising total # of blocks */
@@ -776,7 +793,8 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			/* Remove index entries */
 			for (i = 0; i < nindexes; i++)
 				lazy_vacuum_index(Irel[i],
-								  (vacrelstats->num_warm_chains > 0),
+								  dowarmcleanup && (vacrelstats->num_warm_chains > 0),
+								  warmcleanup_index_scale,
 								  &indstats[i],
 								  vacrelstats);
 
@@ -1408,7 +1426,8 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		/* Remove index entries */
 		for (i = 0; i < nindexes; i++)
 			lazy_vacuum_index(Irel[i],
-							  (vacrelstats->num_warm_chains > 0),
+							  dowarmcleanup && (vacrelstats->num_warm_chains > 0),
+							  warmcleanup_index_scale,
 							  &indstats[i],
 							  vacrelstats);
 
@@ -1863,6 +1882,7 @@ lazy_reset_warm_pointer_count(LVRelStats *vacrelstats)
 static void
 lazy_vacuum_index(Relation indrel,
 				  bool clear_warm,
+				  double warmcleanup_index_scale,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats)
 {
@@ -1927,25 +1947,55 @@ lazy_vacuum_index(Relation indrel,
 						(*stats)->warm_pointers_removed,
 						(*stats)->clear_pointers_removed)));
 
-		(*stats)->num_warm_pointers = 0;
-		(*stats)->num_clear_pointers = 0;
-		(*stats)->warm_pointers_removed = 0;
-		(*stats)->clear_pointers_removed = 0;
-		(*stats)->pointers_cleared = 0;
+		/*
+		 * If the number of WARM pointers found in the index are more than the
+		 * configured fraction of total candidate WARM chains, then do the
+		 * second index scan to clean up WARM chains.
+		 *
+		 * Otherwise we must mark these WARM chains as non-convertible.
+		 */
+		if ((*stats)->num_warm_pointers >
+				((double)vacrelstats->num_warm_chains * warmcleanup_index_scale))
+		{
+			(*stats)->num_warm_pointers = 0;
+			(*stats)->num_clear_pointers = 0;
+			(*stats)->warm_pointers_removed = 0;
+			(*stats)->clear_pointers_removed = 0;
+			(*stats)->pointers_cleared = 0;
+
+			*stats = index_bulk_delete(&ivinfo, *stats,
+					lazy_indexvac_phase2, (void *) vacrelstats);
+			ereport(elevel,
+					(errmsg("scanned index \"%s\" to convert WARM pointers, found "
+							"%0.f WARM pointers, %0.f CLEAR pointers, removed "
+							"%0.f WARM pointers, removed %0.f CLEAR pointers, "
+							"cleared %0.f WARM pointers",
+							RelationGetRelationName(indrel),
+							(*stats)->num_warm_pointers,
+							(*stats)->num_clear_pointers,
+							(*stats)->warm_pointers_removed,
+							(*stats)->clear_pointers_removed,
+							(*stats)->pointers_cleared)));
+		}
+		else
+		{
+			int ii;
 
-		*stats = index_bulk_delete(&ivinfo, *stats,
-				lazy_indexvac_phase2, (void *) vacrelstats);
-		ereport(elevel,
-				(errmsg("scanned index \"%s\" to convert WARM pointers, found "
-						"%0.f WARM pointers, %0.f CLEAR pointers, removed "
-						"%0.f WARM pointers, removed %0.f CLEAR pointers, "
-						"cleared %0.f WARM pointers",
-						RelationGetRelationName(indrel),
-						(*stats)->num_warm_pointers,
-						(*stats)->num_clear_pointers,
-						(*stats)->warm_pointers_removed,
-						(*stats)->clear_pointers_removed,
-						(*stats)->pointers_cleared)));
+			/*
+			 * All chains skipped by this index are marked non-convertible.
+			 */
+			for (ii = 0; ii < vacrelstats->num_warm_chains; ii++)
+			{
+				LVWarmChain *chain = &vacrelstats->warm_chains[ii];
+				if (chain->num_warm_pointers > 0 ||
+					chain->num_clear_pointers > 1)
+				{
+					chain->keep_warm_chain = 1;
+					vacrelstats->num_non_convertible_warm_chains++;
+				}
+			}
+
+		}
 	}
 	else
 	{
@@ -2323,7 +2373,8 @@ count_nondeletable_pages(Relation onerel, LVRelStats *vacrelstats)
  * See the comments at the head of this file for rationale.
  */
 static void
-lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
+lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks,
+				 bool dowarmcleanup)
 {
 	long		maxtuples;
 	int			vac_work_mem = IsAutoVacuumWorkerProcess() &&
@@ -2332,8 +2383,13 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 
 	if (vacrelstats->hasindex)
 	{
+		/*
+		 * If we're not doing WARM cleanup then the entire memory is available
+		 * for tracking dead tuples. Otherwise it gets split between tracking
+		 * dead tuples and tracking WARM chains.
+		 */
 		maxtuples = (vac_work_mem * 1024L) / (sizeof(ItemPointerData) +
-				sizeof(LVWarmChain));
+				(dowarmcleanup ? sizeof(LVWarmChain) : 0));
 		maxtuples = Min(maxtuples, INT_MAX);
 		maxtuples = Min(maxtuples, MaxAllocSize / (sizeof(ItemPointerData) +
 					sizeof(LVWarmChain)));
@@ -2359,11 +2415,18 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 	 * XXX Cheat for now and allocate the same size array for tracking warm
 	 * chains. maxtuples must have been already adjusted above to ensure we
 	 * don't cross vac_work_mem.
+	 *
+	 * XXX A better strategy would be to consume the available memory from two
+	 * ends and do a round of index cleanup if all available memory is
+	 * exhausted.
 	 */
-	vacrelstats->num_warm_chains = 0;
-	vacrelstats->max_warm_chains = (int) maxtuples;
-	vacrelstats->warm_chains = (LVWarmChain *)
-		palloc0(maxtuples * sizeof(LVWarmChain));
+	if (dowarmcleanup)
+	{
+		vacrelstats->num_warm_chains = 0;
+		vacrelstats->max_warm_chains = (int) maxtuples;
+		vacrelstats->warm_chains = (LVWarmChain *)
+			palloc0(maxtuples * sizeof(LVWarmChain));
+	}
 
 }
 
@@ -2385,6 +2448,8 @@ lazy_record_clear_chain(LVRelStats *vacrelstats,
 		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 0;
 		vacrelstats->num_warm_chains++;
 	}
+	else
+		vacrelstats->num_non_convertible_warm_chains++;
 }
 
 /*
@@ -2405,6 +2470,8 @@ lazy_record_warm_chain(LVRelStats *vacrelstats,
 		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 1;
 		vacrelstats->num_warm_chains++;
 	}
+	else
+		vacrelstats->num_non_convertible_warm_chains++;
 }
 
 /*
@@ -2600,6 +2667,7 @@ lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state)
 		 * index pointers.
 		 */
 		chain->keep_warm_chain = 1;
+		vacrelstats->num_non_convertible_warm_chains++;
 		return IBDCR_KEEP;
 	}
 	return IBDCR_KEEP;
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 19dd77d..25b2d69 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -433,7 +433,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	overlay_placing substr_from substr_for
 
 %type <boolean> opt_instead
-%type <boolean> opt_unique opt_concurrently opt_verbose opt_full
+%type <boolean> opt_unique opt_concurrently opt_verbose opt_full opt_warmclean
 %type <boolean> opt_freeze opt_default opt_recheck
 %type <defelt>	opt_binary opt_oids copy_delimiter
 
@@ -684,7 +684,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	VACUUM VALID VALIDATE VALIDATOR VALUE_P VALUES VARCHAR VARIADIC VARYING
 	VERBOSE VERSION_P VIEW VIEWS VOLATILE
 
-	WHEN WHERE WHITESPACE_P WINDOW WITH WITHIN WITHOUT WORK WRAPPER WRITE
+	WARMCLEAN WHEN WHERE WHITESPACE_P WINDOW WITH WITHIN WITHOUT WORK WRAPPER WRITE
 
 	XML_P XMLATTRIBUTES XMLCONCAT XMLELEMENT XMLEXISTS XMLFOREST XMLNAMESPACES
 	XMLPARSE XMLPI XMLROOT XMLSERIALIZE XMLTABLE
@@ -10058,7 +10058,7 @@ cluster_index_specification:
  *
  *****************************************************************************/
 
-VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
+VacuumStmt: VACUUM opt_full opt_freeze opt_verbose opt_warmclean
 				{
 					VacuumStmt *n = makeNode(VacuumStmt);
 					n->options = VACOPT_VACUUM;
@@ -10068,11 +10068,13 @@ VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
 						n->options |= VACOPT_FREEZE;
 					if ($4)
 						n->options |= VACOPT_VERBOSE;
+					if ($5)
+						n->options |= VACOPT_WARM_CLEANUP;
 					n->relation = NULL;
 					n->va_cols = NIL;
 					$$ = (Node *)n;
 				}
-			| VACUUM opt_full opt_freeze opt_verbose qualified_name
+			| VACUUM opt_full opt_freeze opt_verbose opt_warmclean qualified_name
 				{
 					VacuumStmt *n = makeNode(VacuumStmt);
 					n->options = VACOPT_VACUUM;
@@ -10082,13 +10084,15 @@ VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
 						n->options |= VACOPT_FREEZE;
 					if ($4)
 						n->options |= VACOPT_VERBOSE;
-					n->relation = $5;
+					if ($5)
+						n->options |= VACOPT_WARM_CLEANUP;
+					n->relation = $6;
 					n->va_cols = NIL;
 					$$ = (Node *)n;
 				}
-			| VACUUM opt_full opt_freeze opt_verbose AnalyzeStmt
+			| VACUUM opt_full opt_freeze opt_verbose opt_warmclean AnalyzeStmt
 				{
-					VacuumStmt *n = (VacuumStmt *) $5;
+					VacuumStmt *n = (VacuumStmt *) $6;
 					n->options |= VACOPT_VACUUM;
 					if ($2)
 						n->options |= VACOPT_FULL;
@@ -10096,6 +10100,8 @@ VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
 						n->options |= VACOPT_FREEZE;
 					if ($4)
 						n->options |= VACOPT_VERBOSE;
+					if ($5)
+						n->options |= VACOPT_WARM_CLEANUP;
 					$$ = (Node *)n;
 				}
 			| VACUUM '(' vacuum_option_list ')'
@@ -10128,6 +10134,7 @@ vacuum_option_elem:
 			| VERBOSE			{ $$ = VACOPT_VERBOSE; }
 			| FREEZE			{ $$ = VACOPT_FREEZE; }
 			| FULL				{ $$ = VACOPT_FULL; }
+			| WARMCLEAN			{ $$ = VACOPT_WARM_CLEANUP; }
 			| IDENT
 				{
 					if (strcmp($1, "disable_page_skipping") == 0)
@@ -10181,6 +10188,10 @@ opt_freeze: FREEZE									{ $$ = TRUE; }
 			| /*EMPTY*/								{ $$ = FALSE; }
 		;
 
+opt_warmclean: WARMCLEAN							{ $$ = TRUE; }
+			| /*EMPTY*/								{ $$ = FALSE; }
+		;
+
 opt_name_list:
 			'(' name_list ')'						{ $$ = $2; }
 			| /*EMPTY*/								{ $$ = NIL; }
@@ -14884,6 +14895,7 @@ type_func_name_keyword:
 			| SIMILAR
 			| TABLESAMPLE
 			| VERBOSE
+			| WARMCLEAN
 		;
 
 /* Reserved keyword --- these keywords are usable only as a ColLabel.
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 33ca749..91793e4 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -115,6 +115,8 @@ int			autovacuum_vac_thresh;
 double		autovacuum_vac_scale;
 int			autovacuum_anl_thresh;
 double		autovacuum_anl_scale;
+double		autovacuum_warmcleanup_scale;
+double		autovacuum_warmcleanup_index_scale;
 int			autovacuum_freeze_max_age;
 int			autovacuum_multixact_freeze_max_age;
 
@@ -307,7 +309,8 @@ static void relation_needs_vacanalyze(Oid relid, AutoVacOpts *relopts,
 						  Form_pg_class classForm,
 						  PgStat_StatTabEntry *tabentry,
 						  int effective_multixact_freeze_max_age,
-						  bool *dovacuum, bool *doanalyze, bool *wraparound);
+						  bool *dovacuum, bool *doanalyze, bool *wraparound,
+						  bool *dowarmcleanup);
 
 static void autovacuum_do_vac_analyze(autovac_table *tab,
 						  BufferAccessStrategy bstrategy);
@@ -2010,6 +2013,7 @@ do_autovacuum(void)
 		bool		dovacuum;
 		bool		doanalyze;
 		bool		wraparound;
+		bool		dowarmcleanup;
 
 		if (classForm->relkind != RELKIND_RELATION &&
 			classForm->relkind != RELKIND_MATVIEW)
@@ -2049,10 +2053,14 @@ do_autovacuum(void)
 		tabentry = get_pgstat_tabentry_relid(relid, classForm->relisshared,
 											 shared, dbentry);
 
-		/* Check if it needs vacuum or analyze */
+		/*
+		 * Check if it needs vacuum or analyze. For vacuum, also check if it
+		 * needs WARM cleanup.
+		 */
 		relation_needs_vacanalyze(relid, relopts, classForm, tabentry,
 								  effective_multixact_freeze_max_age,
-								  &dovacuum, &doanalyze, &wraparound);
+								  &dovacuum, &doanalyze, &wraparound,
+								  &dowarmcleanup);
 
 		/* Relations that need work are added to table_oids */
 		if (dovacuum || doanalyze)
@@ -2105,6 +2113,7 @@ do_autovacuum(void)
 		bool		dovacuum;
 		bool		doanalyze;
 		bool		wraparound;
+		bool		dowarmcleanup;
 
 		/*
 		 * We cannot safely process other backends' temp tables, so skip 'em.
@@ -2135,7 +2144,8 @@ do_autovacuum(void)
 
 		relation_needs_vacanalyze(relid, relopts, classForm, tabentry,
 								  effective_multixact_freeze_max_age,
-								  &dovacuum, &doanalyze, &wraparound);
+								  &dovacuum, &doanalyze, &wraparound,
+								  &dowarmcleanup);
 
 		/* ignore analyze for toast tables */
 		if (dovacuum)
@@ -2566,6 +2576,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 	HeapTuple	classTup;
 	bool		dovacuum;
 	bool		doanalyze;
+	bool		dowarmcleanup;
 	autovac_table *tab = NULL;
 	PgStat_StatTabEntry *tabentry;
 	PgStat_StatDBEntry *shared;
@@ -2607,7 +2618,8 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 
 	relation_needs_vacanalyze(relid, avopts, classForm, tabentry,
 							  effective_multixact_freeze_max_age,
-							  &dovacuum, &doanalyze, &wraparound);
+							  &dovacuum, &doanalyze, &wraparound,
+							  &dowarmcleanup);
 
 	/* ignore ANALYZE for toast tables */
 	if (classForm->relkind == RELKIND_TOASTVALUE)
@@ -2623,6 +2635,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		int			vac_cost_limit;
 		int			vac_cost_delay;
 		int			log_min_duration;
+		double		warmcleanup_index_scale;
 
 		/*
 		 * Calculate the vacuum cost parameters and the freeze ages.  If there
@@ -2669,19 +2682,26 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 			? avopts->multixact_freeze_table_age
 			: default_multixact_freeze_table_age;
 
+		warmcleanup_index_scale = (avopts &&
+								   avopts->warmcleanup_index_scale >= 0)
+			? avopts->warmcleanup_index_scale
+			: autovacuum_warmcleanup_index_scale;
+
 		tab = palloc(sizeof(autovac_table));
 		tab->at_relid = relid;
 		tab->at_sharedrel = classForm->relisshared;
 		tab->at_vacoptions = VACOPT_SKIPTOAST |
 			(dovacuum ? VACOPT_VACUUM : 0) |
 			(doanalyze ? VACOPT_ANALYZE : 0) |
-			(!wraparound ? VACOPT_NOWAIT : 0);
+			(!wraparound ? VACOPT_NOWAIT : 0) |
+			(dowarmcleanup ? VACOPT_WARM_CLEANUP : 0);
 		tab->at_params.freeze_min_age = freeze_min_age;
 		tab->at_params.freeze_table_age = freeze_table_age;
 		tab->at_params.multixact_freeze_min_age = multixact_freeze_min_age;
 		tab->at_params.multixact_freeze_table_age = multixact_freeze_table_age;
 		tab->at_params.is_wraparound = wraparound;
 		tab->at_params.log_min_duration = log_min_duration;
+		tab->at_params.warmcleanup_index_scale = warmcleanup_index_scale;
 		tab->at_vacuum_cost_limit = vac_cost_limit;
 		tab->at_vacuum_cost_delay = vac_cost_delay;
 		tab->at_relname = NULL;
@@ -2748,7 +2768,8 @@ relation_needs_vacanalyze(Oid relid,
  /* output params below */
 						  bool *dovacuum,
 						  bool *doanalyze,
-						  bool *wraparound)
+						  bool *wraparound,
+						  bool *dowarmcleanup)
 {
 	bool		force_vacuum;
 	bool		av_enabled;
@@ -2760,6 +2781,9 @@ relation_needs_vacanalyze(Oid relid,
 	float4		vac_scale_factor,
 				anl_scale_factor;
 
+	/* constant from reloptions or GUC variable */
+	float4		warmcleanup_scale_factor;
+
 	/* thresholds calculated from above constants */
 	float4		vacthresh,
 				anlthresh;
@@ -2768,6 +2792,9 @@ relation_needs_vacanalyze(Oid relid,
 	float4		vactuples,
 				anltuples;
 
+	/* number of WARM chains in the table */
+	float4		warmchains;
+
 	/* freeze parameters */
 	int			freeze_max_age;
 	int			multixact_freeze_max_age;
@@ -2800,6 +2827,11 @@ relation_needs_vacanalyze(Oid relid,
 		? relopts->analyze_threshold
 		: autovacuum_anl_thresh;
 
+	/* Use table specific value or the GUC value */
+	warmcleanup_scale_factor = (relopts && relopts->warmcleanup_scale_factor >= 0)
+		? relopts->warmcleanup_scale_factor
+		: autovacuum_warmcleanup_scale;
+
 	freeze_max_age = (relopts && relopts->freeze_max_age >= 0)
 		? Min(relopts->freeze_max_age, autovacuum_freeze_max_age)
 		: autovacuum_freeze_max_age;
@@ -2847,6 +2879,7 @@ relation_needs_vacanalyze(Oid relid,
 		reltuples = classForm->reltuples;
 		vactuples = tabentry->n_dead_tuples;
 		anltuples = tabentry->changes_since_analyze;
+		warmchains = tabentry->n_warm_chains;
 
 		vacthresh = (float4) vac_base_thresh + vac_scale_factor * reltuples;
 		anlthresh = (float4) anl_base_thresh + anl_scale_factor * reltuples;
@@ -2863,6 +2896,17 @@ relation_needs_vacanalyze(Oid relid,
 		/* Determine if this table needs vacuum or analyze. */
 		*dovacuum = force_vacuum || (vactuples > vacthresh);
 		*doanalyze = (anltuples > anlthresh);
+
+		/*
+		 * If the number of WARM chains in the table exceeds the configured
+		 * fraction of reltuples, then we also do a WARM cleanup. This only
+		 * triggers at the table level; we then look at each index and do
+		 * cleanup for an index only if the WARM pointers in that index
+		 * exceed the configured index-level scale factor.
+		 * lazy_vacuum_index() later deals with that.
+		 */
+		if (*dovacuum && (warmcleanup_scale_factor * reltuples < warmchains))
+			*dowarmcleanup = true;
 	}
 	else
 	{
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index cdfd76e..017ac20 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -199,9 +199,11 @@ typedef struct TwoPhasePgStatRecord
 	PgStat_Counter tuples_inserted;		/* tuples inserted in xact */
 	PgStat_Counter tuples_updated;		/* tuples updated in xact */
 	PgStat_Counter tuples_deleted;		/* tuples deleted in xact */
+	PgStat_Counter tuples_warm_updated;	/* tuples warm updated in xact */
 	PgStat_Counter inserted_pre_trunc;	/* tuples inserted prior to truncate */
 	PgStat_Counter updated_pre_trunc;	/* tuples updated prior to truncate */
 	PgStat_Counter deleted_pre_trunc;	/* tuples deleted prior to truncate */
+	PgStat_Counter warm_updated_pre_trunc;	/* tuples warm updated prior to truncate */
 	Oid			t_id;			/* table's OID */
 	bool		t_shared;		/* is it a shared catalog? */
 	bool		t_truncated;	/* was the relation truncated? */
@@ -1328,7 +1330,8 @@ pgstat_report_autovac(Oid dboid)
  */
 void
 pgstat_report_vacuum(Oid tableoid, bool shared,
-					 PgStat_Counter livetuples, PgStat_Counter deadtuples)
+					 PgStat_Counter livetuples, PgStat_Counter deadtuples,
+					 PgStat_Counter warmchains)
 {
 	PgStat_MsgVacuum msg;
 
@@ -1342,6 +1345,7 @@ pgstat_report_vacuum(Oid tableoid, bool shared,
 	msg.m_vacuumtime = GetCurrentTimestamp();
 	msg.m_live_tuples = livetuples;
 	msg.m_dead_tuples = deadtuples;
+	msg.m_warm_chains = warmchains;
 	pgstat_send(&msg, sizeof(msg));
 }
 
@@ -1357,7 +1361,7 @@ pgstat_report_vacuum(Oid tableoid, bool shared,
 void
 pgstat_report_analyze(Relation rel,
 					  PgStat_Counter livetuples, PgStat_Counter deadtuples,
-					  bool resetcounter)
+					  PgStat_Counter warmchains, bool resetcounter)
 {
 	PgStat_MsgAnalyze msg;
 
@@ -1382,12 +1386,14 @@ pgstat_report_analyze(Relation rel,
 		{
 			livetuples -= trans->tuples_inserted - trans->tuples_deleted;
 			deadtuples -= trans->tuples_updated + trans->tuples_deleted;
+			warmchains -= trans->tuples_warm_updated;
 		}
 		/* count stuff inserted by already-aborted subxacts, too */
 		deadtuples -= rel->pgstat_info->t_counts.t_delta_dead_tuples;
 		/* Since ANALYZE's counts are estimates, we could have underflowed */
 		livetuples = Max(livetuples, 0);
 		deadtuples = Max(deadtuples, 0);
+		warmchains = Max(warmchains, 0);
 	}
 
 	pgstat_setheader(&msg.m_hdr, PGSTAT_MTYPE_ANALYZE);
@@ -1398,6 +1404,7 @@ pgstat_report_analyze(Relation rel,
 	msg.m_analyzetime = GetCurrentTimestamp();
 	msg.m_live_tuples = livetuples;
 	msg.m_dead_tuples = deadtuples;
+	msg.m_warm_chains = warmchains;
 	pgstat_send(&msg, sizeof(msg));
 }
 
@@ -1843,7 +1850,10 @@ pgstat_count_heap_update(Relation rel, bool hot, bool warm)
 		if (hot)
 			pgstat_info->t_counts.t_tuples_hot_updated++;
 		else if (warm)
+		{
+			pgstat_info->trans->tuples_warm_updated++;
 			pgstat_info->t_counts.t_tuples_warm_updated++;
+		}
 	}
 }
 
@@ -2006,6 +2016,12 @@ AtEOXact_PgStat(bool isCommit)
 				/* update and delete each create a dead tuple */
 				tabstat->t_counts.t_delta_dead_tuples +=
 					trans->tuples_updated + trans->tuples_deleted;
+				/*
+				 * Whether the xact commits or aborts, a WARM update generates
+				 * a WARM chain which needs cleanup.
+				 */
+				tabstat->t_counts.t_delta_warm_chains +=
+					trans->tuples_warm_updated;
 				/* insert, update, delete each count as one change event */
 				tabstat->t_counts.t_changed_tuples +=
 					trans->tuples_inserted + trans->tuples_updated +
@@ -2016,6 +2032,12 @@ AtEOXact_PgStat(bool isCommit)
 				/* inserted tuples are dead, deleted tuples are unaffected */
 				tabstat->t_counts.t_delta_dead_tuples +=
 					trans->tuples_inserted + trans->tuples_updated;
+				/*
+				 * Whether the xact commits or aborts, a WARM update generates
+				 * a WARM chain which needs cleanup.
+				 */
+				tabstat->t_counts.t_delta_warm_chains +=
+					trans->tuples_warm_updated;
 				/* an aborted xact generates no changed_tuple events */
 			}
 			tabstat->trans = NULL;
@@ -2072,12 +2094,16 @@ AtEOSubXact_PgStat(bool isCommit, int nestDepth)
 						trans->upper->tuples_inserted = trans->tuples_inserted;
 						trans->upper->tuples_updated = trans->tuples_updated;
 						trans->upper->tuples_deleted = trans->tuples_deleted;
+						trans->upper->tuples_warm_updated =
+							trans->tuples_warm_updated;
 					}
 					else
 					{
 						trans->upper->tuples_inserted += trans->tuples_inserted;
 						trans->upper->tuples_updated += trans->tuples_updated;
 						trans->upper->tuples_deleted += trans->tuples_deleted;
+						trans->upper->tuples_warm_updated +=
+							trans->tuples_warm_updated;
 					}
 					tabstat->trans = trans->upper;
 					pfree(trans);
@@ -2113,9 +2139,13 @@ AtEOSubXact_PgStat(bool isCommit, int nestDepth)
 				tabstat->t_counts.t_tuples_inserted += trans->tuples_inserted;
 				tabstat->t_counts.t_tuples_updated += trans->tuples_updated;
 				tabstat->t_counts.t_tuples_deleted += trans->tuples_deleted;
+				tabstat->t_counts.t_tuples_warm_updated +=
+					trans->tuples_warm_updated;
 				/* inserted tuples are dead, deleted tuples are unaffected */
 				tabstat->t_counts.t_delta_dead_tuples +=
 					trans->tuples_inserted + trans->tuples_updated;
+				tabstat->t_counts.t_delta_warm_chains +=
+					trans->tuples_warm_updated;
 				tabstat->trans = trans->upper;
 				pfree(trans);
 			}
@@ -2157,9 +2187,11 @@ AtPrepare_PgStat(void)
 			record.tuples_inserted = trans->tuples_inserted;
 			record.tuples_updated = trans->tuples_updated;
 			record.tuples_deleted = trans->tuples_deleted;
+			record.tuples_warm_updated = trans->tuples_warm_updated;
 			record.inserted_pre_trunc = trans->inserted_pre_trunc;
 			record.updated_pre_trunc = trans->updated_pre_trunc;
 			record.deleted_pre_trunc = trans->deleted_pre_trunc;
+			record.warm_updated_pre_trunc = trans->warm_updated_pre_trunc;
 			record.t_id = tabstat->t_id;
 			record.t_shared = tabstat->t_shared;
 			record.t_truncated = trans->truncated;
@@ -2234,11 +2266,14 @@ pgstat_twophase_postcommit(TransactionId xid, uint16 info,
 		/* forget live/dead stats seen by backend thus far */
 		pgstat_info->t_counts.t_delta_live_tuples = 0;
 		pgstat_info->t_counts.t_delta_dead_tuples = 0;
+		pgstat_info->t_counts.t_delta_warm_chains = 0;
 	}
 	pgstat_info->t_counts.t_delta_live_tuples +=
 		rec->tuples_inserted - rec->tuples_deleted;
 	pgstat_info->t_counts.t_delta_dead_tuples +=
 		rec->tuples_updated + rec->tuples_deleted;
+	pgstat_info->t_counts.t_delta_warm_chains +=
+		rec->tuples_warm_updated;
 	pgstat_info->t_counts.t_changed_tuples +=
 		rec->tuples_inserted + rec->tuples_updated +
 		rec->tuples_deleted;
@@ -2266,12 +2301,16 @@ pgstat_twophase_postabort(TransactionId xid, uint16 info,
 		rec->tuples_inserted = rec->inserted_pre_trunc;
 		rec->tuples_updated = rec->updated_pre_trunc;
 		rec->tuples_deleted = rec->deleted_pre_trunc;
+		rec->tuples_warm_updated = rec->warm_updated_pre_trunc;
 	}
 	pgstat_info->t_counts.t_tuples_inserted += rec->tuples_inserted;
 	pgstat_info->t_counts.t_tuples_updated += rec->tuples_updated;
 	pgstat_info->t_counts.t_tuples_deleted += rec->tuples_deleted;
+	pgstat_info->t_counts.t_tuples_warm_updated += rec->tuples_warm_updated;
 	pgstat_info->t_counts.t_delta_dead_tuples +=
 		rec->tuples_inserted + rec->tuples_updated;
+	pgstat_info->t_counts.t_delta_warm_chains +=
+		rec->tuples_warm_updated;
 }
 
 
@@ -4335,6 +4374,7 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->tuples_warm_updated = 0;
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
+		result->n_warm_chains = 0;
 		result->changes_since_analyze = 0;
 		result->blocks_fetched = 0;
 		result->blocks_hit = 0;
@@ -5445,6 +5485,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_warm_updated = tabmsg->t_counts.t_tuples_warm_updated;
 			tabentry->n_live_tuples = tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples = tabmsg->t_counts.t_delta_dead_tuples;
+			tabentry->n_warm_chains = tabmsg->t_counts.t_delta_warm_chains;
 			tabentry->changes_since_analyze = tabmsg->t_counts.t_changed_tuples;
 			tabentry->blocks_fetched = tabmsg->t_counts.t_blocks_fetched;
 			tabentry->blocks_hit = tabmsg->t_counts.t_blocks_hit;
@@ -5476,9 +5517,11 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			{
 				tabentry->n_live_tuples = 0;
 				tabentry->n_dead_tuples = 0;
+				tabentry->n_warm_chains = 0;
 			}
 			tabentry->n_live_tuples += tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples += tabmsg->t_counts.t_delta_dead_tuples;
+			tabentry->n_warm_chains += tabmsg->t_counts.t_delta_warm_chains;
 			tabentry->changes_since_analyze += tabmsg->t_counts.t_changed_tuples;
 			tabentry->blocks_fetched += tabmsg->t_counts.t_blocks_fetched;
 			tabentry->blocks_hit += tabmsg->t_counts.t_blocks_hit;
@@ -5488,6 +5531,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 		tabentry->n_live_tuples = Max(tabentry->n_live_tuples, 0);
 		/* Likewise for n_dead_tuples */
 		tabentry->n_dead_tuples = Max(tabentry->n_dead_tuples, 0);
+		tabentry->n_warm_chains = Max(tabentry->n_warm_chains, 0);
 
 		/*
 		 * Add per-table stats to the per-database entry, too.
@@ -5713,6 +5757,7 @@ pgstat_recv_vacuum(PgStat_MsgVacuum *msg, int len)
 
 	tabentry->n_live_tuples = msg->m_live_tuples;
 	tabentry->n_dead_tuples = msg->m_dead_tuples;
+	tabentry->n_warm_chains = msg->m_warm_chains;
 
 	if (msg->m_autovacuum)
 	{
@@ -5747,6 +5792,7 @@ pgstat_recv_analyze(PgStat_MsgAnalyze *msg, int len)
 
 	tabentry->n_live_tuples = msg->m_live_tuples;
 	tabentry->n_dead_tuples = msg->m_dead_tuples;
+	tabentry->n_warm_chains = msg->m_warm_chains;
 
 	/*
 	 * If commanded, reset changes_since_analyze to zero.  This forgets any
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index b8677f3..814d071 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -191,6 +191,21 @@ pg_stat_get_dead_tuples(PG_FUNCTION_ARGS)
 	PG_RETURN_INT64(result);
 }
 
+Datum
+pg_stat_get_warm_chains(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_StatTabEntry *tabentry;
+
+	if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->n_warm_chains);
+
+	PG_RETURN_INT64(result);
+}
+
 
 Datum
 pg_stat_get_mod_since_analyze(PG_FUNCTION_ARGS)
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 08b6030..81fec03 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,6 +130,7 @@ int			VacuumCostPageMiss = 10;
 int			VacuumCostPageDirty = 20;
 int			VacuumCostLimit = 200;
 int			VacuumCostDelay = 0;
+double		VacuumWarmCleanupScale;
 
 int			VacuumPageHit = 0;
 int			VacuumPageMiss = 0;
@@ -137,3 +138,5 @@ int			VacuumPageDirty = 0;
 
 int			VacuumCostBalance = 0;		/* working state for vacuum */
 bool		VacuumCostActive = false;
+
+double		VacuumWarmCleanupIndexScale = 1;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 291bf76..b4daa2c 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -3016,6 +3016,36 @@ static struct config_real ConfigureNamesReal[] =
 	},
 
 	{
+		{"autovacuum_warmcleanup_scale_factor", PGC_SIGHUP, AUTOVACUUM,
+			gettext_noop("Number of WARM chains prior to cleanup as a fraction of reltuples."),
+			NULL
+		},
+		&autovacuum_warmcleanup_scale,
+		0.1, 0.0, 100.0,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"autovacuum_warmcleanup_index_scale_factor", PGC_SIGHUP, AUTOVACUUM,
+			gettext_noop("Number of WARM pointers prior to cleanup as a fraction of total WARM chains."),
+			NULL
+		},
+		&autovacuum_warmcleanup_index_scale,
+		0.2, 0.0, 100.0,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"vacuum_warmcleanup_index_scale_factor", PGC_USERSET, WARM_CLEANUP,
+			gettext_noop("Number of WARM pointers in the index prior to cleanup as a fraction of total WARM chains."),
+			NULL
+		},
+		&VacuumWarmCleanupIndexScale,
+		0.2, 0.0, 100.0,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"checkpoint_completion_target", PGC_SIGHUP, WAL_CHECKPOINTS,
 			gettext_noop("Time spent flushing dirty buffers during checkpoint, as fraction of checkpoint interval."),
 			NULL
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index 509adda..70025ba 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -2789,6 +2789,8 @@ DATA(insert OID = 2878 (  pg_stat_get_live_tuples	PGNSP PGUID 12 1 0 0 0 f f f f
 DESCR("statistics: number of live tuples");
 DATA(insert OID = 2879 (  pg_stat_get_dead_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_dead_tuples _null_ _null_ _null_ ));
 DESCR("statistics: number of dead tuples");
+DATA(insert OID = 3374 (  pg_stat_get_warm_chains	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_warm_chains _null_ _null_ _null_ ));
+DESCR("statistics: number of warm chains");
 DATA(insert OID = 3177 (  pg_stat_get_mod_since_analyze PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_mod_since_analyze _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples changed since last analyze");
 DATA(insert OID = 1934 (  pg_stat_get_blocks_fetched	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_blocks_fetched _null_ _null_ _null_ ));
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index 541c2fa..9914143 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -145,6 +145,8 @@ typedef struct VacuumParams
 	int			log_min_duration;		/* minimum execution threshold in ms
 										 * at which  verbose logs are
 										 * activated, -1 to use default */
+	double		warmcleanup_index_scale; /* Fraction of WARM pointers to cause
+										  * index WARM cleanup */
 } VacuumParams;
 
 /* GUC parameters */
diff --git a/src/include/foreign/fdwapi.h b/src/include/foreign/fdwapi.h
index 6ca44f7..2993b1a 100644
--- a/src/include/foreign/fdwapi.h
+++ b/src/include/foreign/fdwapi.h
@@ -134,7 +134,8 @@ typedef void (*ExplainDirectModify_function) (ForeignScanState *node,
 typedef int (*AcquireSampleRowsFunc) (Relation relation, int elevel,
 											   HeapTuple *rows, int targrows,
 												  double *totalrows,
-												  double *totaldeadrows);
+												  double *totaldeadrows,
+												  double *totalwarmchains);
 
 typedef bool (*AnalyzeForeignTable_function) (Relation relation,
 												 AcquireSampleRowsFunc *func,
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 4c607b2..901960a 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -255,6 +255,7 @@ extern int	VacuumPageDirty;
 extern int	VacuumCostBalance;
 extern bool VacuumCostActive;
 
+extern double VacuumWarmCleanupIndexScale;
 
 /* in tcop/postgres.c */
 
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3a71dd5..f842374 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3035,7 +3035,8 @@ typedef enum VacuumOption
 	VACOPT_FULL = 1 << 4,		/* FULL (non-concurrent) vacuum */
 	VACOPT_NOWAIT = 1 << 5,		/* don't wait to get lock (autovacuum only) */
 	VACOPT_SKIPTOAST = 1 << 6,	/* don't process the TOAST table, if any */
-	VACOPT_DISABLE_PAGE_SKIPPING = 1 << 7		/* don't skip any pages */
+	VACOPT_DISABLE_PAGE_SKIPPING = 1 << 7,		/* don't skip any pages */
+	VACOPT_WARM_CLEANUP = 1 << 8	/* do WARM cleanup */
 } VacuumOption;
 
 typedef struct VacuumStmt
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index 6cd36c7..632283b 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -432,6 +432,7 @@ PG_KEYWORD("version", VERSION_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("view", VIEW, UNRESERVED_KEYWORD)
 PG_KEYWORD("views", VIEWS, UNRESERVED_KEYWORD)
 PG_KEYWORD("volatile", VOLATILE, UNRESERVED_KEYWORD)
+PG_KEYWORD("warmclean", WARMCLEAN, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("when", WHEN, RESERVED_KEYWORD)
 PG_KEYWORD("where", WHERE, RESERVED_KEYWORD)
 PG_KEYWORD("whitespace", WHITESPACE_P, UNRESERVED_KEYWORD)
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 4b7d671..8901c9d 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -110,6 +110,7 @@ typedef struct PgStat_TableCounts
 
 	PgStat_Counter t_delta_live_tuples;
 	PgStat_Counter t_delta_dead_tuples;
+	PgStat_Counter t_delta_warm_chains;
 	PgStat_Counter t_changed_tuples;
 
 	PgStat_Counter t_blocks_fetched;
@@ -167,11 +168,13 @@ typedef struct PgStat_TableXactStatus
 {
 	PgStat_Counter tuples_inserted;		/* tuples inserted in (sub)xact */
 	PgStat_Counter tuples_updated;		/* tuples updated in (sub)xact */
+	PgStat_Counter tuples_warm_updated;	/* tuples warm-updated in (sub)xact */
 	PgStat_Counter tuples_deleted;		/* tuples deleted in (sub)xact */
 	bool		truncated;		/* relation truncated in this (sub)xact */
 	PgStat_Counter inserted_pre_trunc;	/* tuples inserted prior to truncate */
 	PgStat_Counter updated_pre_trunc;	/* tuples updated prior to truncate */
 	PgStat_Counter deleted_pre_trunc;	/* tuples deleted prior to truncate */
+	PgStat_Counter warm_updated_pre_trunc;	/* tuples warm updated prior to truncate */
 	int			nest_level;		/* subtransaction nest level */
 	/* links to other structs for same relation: */
 	struct PgStat_TableXactStatus *upper;		/* next higher subxact if any */
@@ -370,6 +373,7 @@ typedef struct PgStat_MsgVacuum
 	TimestampTz m_vacuumtime;
 	PgStat_Counter m_live_tuples;
 	PgStat_Counter m_dead_tuples;
+	PgStat_Counter m_warm_chains;
 } PgStat_MsgVacuum;
 
 
@@ -388,6 +392,7 @@ typedef struct PgStat_MsgAnalyze
 	TimestampTz m_analyzetime;
 	PgStat_Counter m_live_tuples;
 	PgStat_Counter m_dead_tuples;
+	PgStat_Counter m_warm_chains;
 } PgStat_MsgAnalyze;
 
 
@@ -630,6 +635,7 @@ typedef struct PgStat_StatTabEntry
 
 	PgStat_Counter n_live_tuples;
 	PgStat_Counter n_dead_tuples;
+	PgStat_Counter n_warm_chains;
 	PgStat_Counter changes_since_analyze;
 
 	PgStat_Counter blocks_fetched;
@@ -1131,10 +1137,11 @@ extern void pgstat_reset_single_counter(Oid objectid, PgStat_Single_Reset_Type t
 
 extern void pgstat_report_autovac(Oid dboid);
 extern void pgstat_report_vacuum(Oid tableoid, bool shared,
-					 PgStat_Counter livetuples, PgStat_Counter deadtuples);
+					 PgStat_Counter livetuples, PgStat_Counter deadtuples,
+					 PgStat_Counter warmchains);
 extern void pgstat_report_analyze(Relation rel,
 					  PgStat_Counter livetuples, PgStat_Counter deadtuples,
-					  bool resetcounter);
+					  PgStat_Counter warmchains, bool resetcounter);
 
 extern void pgstat_report_recovery_conflict(int reason);
 extern void pgstat_report_deadlock(void);
diff --git a/src/include/postmaster/autovacuum.h b/src/include/postmaster/autovacuum.h
index 99d7f09..5ac9c8f 100644
--- a/src/include/postmaster/autovacuum.h
+++ b/src/include/postmaster/autovacuum.h
@@ -28,6 +28,8 @@ extern int	autovacuum_freeze_max_age;
 extern int	autovacuum_multixact_freeze_max_age;
 extern int	autovacuum_vac_cost_delay;
 extern int	autovacuum_vac_cost_limit;
+extern double autovacuum_warmcleanup_scale;
+extern double autovacuum_warmcleanup_index_scale;
 
 /* autovacuum launcher PID, only valid when worker is shutting down */
 extern int	AutovacuumLauncherPid;
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index 2da9115..cd4532b 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -68,6 +68,7 @@ enum config_group
 	WAL_SETTINGS,
 	WAL_CHECKPOINTS,
 	WAL_ARCHIVING,
+	WARM_CLEANUP,
 	REPLICATION,
 	REPLICATION_SENDING,
 	REPLICATION_MASTER,
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index cd1976a..9164f60 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -276,6 +276,8 @@ typedef struct AutoVacOpts
 	int			log_min_duration;
 	float8		vacuum_scale_factor;
 	float8		analyze_scale_factor;
+	float8		warmcleanup_scale_factor;
+	float8		warmcleanup_index_scale;
 } AutoVacOpts;
 
 typedef struct StdRdOptions
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index a37d443..1b2a69a 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1758,6 +1758,7 @@ pg_stat_all_tables| SELECT c.oid AS relid,
     pg_stat_get_tuples_warm_updated(c.oid) AS n_tup_warm_upd,
     pg_stat_get_live_tuples(c.oid) AS n_live_tup,
     pg_stat_get_dead_tuples(c.oid) AS n_dead_tup,
+    pg_stat_get_warm_chains(c.oid) AS n_warm_chains,
     pg_stat_get_mod_since_analyze(c.oid) AS n_mod_since_analyze,
     pg_stat_get_last_vacuum_time(c.oid) AS last_vacuum,
     pg_stat_get_last_autovacuum_time(c.oid) AS last_autovacuum,
@@ -1906,6 +1907,7 @@ pg_stat_sys_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
+    pg_stat_all_tables.n_warm_chains,
     pg_stat_all_tables.n_mod_since_analyze,
     pg_stat_all_tables.last_vacuum,
     pg_stat_all_tables.last_autovacuum,
@@ -1950,6 +1952,7 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
+    pg_stat_all_tables.n_warm_chains,
     pg_stat_all_tables.n_mod_since_analyze,
     pg_stat_all_tables.last_vacuum,
     pg_stat_all_tables.last_autovacuum,
diff --git a/src/test/regress/expected/warm.out b/src/test/regress/expected/warm.out
index 8aa1505..1346bb1 100644
--- a/src/test/regress/expected/warm.out
+++ b/src/test/regress/expected/warm.out
@@ -745,3 +745,61 @@ SELECT a, b FROM test_toast_warm WHERE b = 104.20;
 (1 row)
 
 DROP TABLE test_toast_warm;
+-- Test VACUUM
+CREATE TABLE test_vacuum_warm (a int unique, b text, c int, d int);
+CREATE INDEX test_vacuum_warm_index1 ON test_vacuum_warm(b);
+CREATE INDEX test_vacuum_warm_index2 ON test_vacuum_warm(c);
+INSERT INTO test_vacuum_warm VALUES (1, 'a', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (2, 'b', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (3, 'c', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (4, 'd', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (5, 'e', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (6, 'f', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (7, 'g', 100, 200);
+UPDATE test_vacuum_warm SET b = 'u', c = 300 WHERE a = 1;
+UPDATE test_vacuum_warm SET b = 'v', c = 300 WHERE a = 2;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 3;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 4;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 5;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 6;
+-- a plain vacuum cannot clear WARM chains.
+SET enable_seqscan = false;
+SET enable_bitmapscan = false;
+SET seq_page_cost = 10000;
+VACUUM test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+                                        QUERY PLAN                                         
+-------------------------------------------------------------------------------------------
+ Index Only Scan using test_vacuum_warm_index1 on test_vacuum_warm (actual rows=1 loops=1)
+   Index Cond: (b = 'u'::text)
+   Heap Fetches: 1
+(3 rows)
+
+-- Now set vacuum_warmcleanup_index_scale_factor such that only
+-- test_vacuum_warm_index2 can be cleaned up.
+SET vacuum_warmcleanup_index_scale_factor=0.5;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+                                        QUERY PLAN                                         
+-------------------------------------------------------------------------------------------
+ Index Only Scan using test_vacuum_warm_index1 on test_vacuum_warm (actual rows=1 loops=1)
+   Index Cond: (b = 'u'::text)
+   Heap Fetches: 1
+(3 rows)
+
+-- All WARM chains cleaned up, so index-only scan should be used now without
+-- any heap fetches
+SET vacuum_warmcleanup_index_scale_factor=0;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect zero heap-fetches now
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+                                        QUERY PLAN                                         
+-------------------------------------------------------------------------------------------
+ Index Only Scan using test_vacuum_warm_index1 on test_vacuum_warm (actual rows=1 loops=1)
+   Index Cond: (b = 'u'::text)
+   Heap Fetches: 0
+(3 rows)
+
+DROP TABLE test_vacuum_warm;
diff --git a/src/test/regress/sql/warm.sql b/src/test/regress/sql/warm.sql
index ab61cfb..0d751e2 100644
--- a/src/test/regress/sql/warm.sql
+++ b/src/test/regress/sql/warm.sql
@@ -284,3 +284,50 @@ SELECT a, b FROM test_toast_warm WHERE b = 104.201;
 SELECT a, b FROM test_toast_warm WHERE b = 104.20;
 
 DROP TABLE test_toast_warm;
+
+
+-- Test VACUUM
+
+CREATE TABLE test_vacuum_warm (a int unique, b text, c int, d int);
+CREATE INDEX test_vacuum_warm_index1 ON test_vacuum_warm(b);
+CREATE INDEX test_vacuum_warm_index2 ON test_vacuum_warm(c);
+
+INSERT INTO test_vacuum_warm VALUES (1, 'a', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (2, 'b', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (3, 'c', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (4, 'd', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (5, 'e', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (6, 'f', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (7, 'g', 100, 200);
+
+UPDATE test_vacuum_warm SET b = 'u', c = 300 WHERE a = 1;
+UPDATE test_vacuum_warm SET b = 'v', c = 300 WHERE a = 2;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 3;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 4;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 5;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 6;
+
+-- a plain vacuum cannot clear WARM chains.
+SET enable_seqscan = false;
+SET enable_bitmapscan = false;
+SET seq_page_cost = 10000;
+VACUUM test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+
+-- Now set vacuum_warmcleanup_index_scale_factor such that only
+-- test_vacuum_warm_index2 can be cleaned up.
+SET vacuum_warmcleanup_index_scale_factor=0.5;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+
+
+-- All WARM chains cleaned up, so index-only scan should be used now without
+-- any heap fetches
+SET vacuum_warmcleanup_index_scale_factor=0;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect zero heap-fetches now
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+
+DROP TABLE test_vacuum_warm;
Attachment: 0005_warm_updates_v21.patch (application/octet-stream)
diff --git a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c
index f2eda67..b356e2b 100644
--- a/contrib/bloom/blutils.c
+++ b/contrib/bloom/blutils.c
@@ -142,6 +142,7 @@ blhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/contrib/bloom/blvacuum.c b/contrib/bloom/blvacuum.c
index 04abd0f..ff50361 100644
--- a/contrib/bloom/blvacuum.c
+++ b/contrib/bloom/blvacuum.c
@@ -88,7 +88,7 @@ blbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 		while (itup < itupEnd)
 		{
 			/* Do we have to delete this tuple? */
-			if (callback(&itup->heapPtr, callback_state))
+			if (callback(&itup->heapPtr, false, callback_state) == IBDCR_DELETE)
 			{
 				/* Yes; adjust count of tuples that will be left on page */
 				BloomPageGetOpaque(page)->maxoff--;
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index b22563b..b4a1465 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -116,6 +116,7 @@ brinhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/gin/ginvacuum.c b/src/backend/access/gin/ginvacuum.c
index 26c077a..46ed4fe 100644
--- a/src/backend/access/gin/ginvacuum.c
+++ b/src/backend/access/gin/ginvacuum.c
@@ -56,7 +56,8 @@ ginVacuumItemPointers(GinVacuumState *gvs, ItemPointerData *items,
 	 */
 	for (i = 0; i < nitem; i++)
 	{
-		if (gvs->callback(items + i, gvs->callback_state))
+		if (gvs->callback(items + i, false, gvs->callback_state) ==
+				IBDCR_DELETE)
 		{
 			gvs->result->tuples_removed += 1;
 			if (!tmpitems)
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c
index 6593771..843389b 100644
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -94,6 +94,7 @@ gisthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/gist/gistvacuum.c b/src/backend/access/gist/gistvacuum.c
index 77d9d12..0955db6 100644
--- a/src/backend/access/gist/gistvacuum.c
+++ b/src/backend/access/gist/gistvacuum.c
@@ -202,7 +202,8 @@ gistbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 				iid = PageGetItemId(page, i);
 				idxtuple = (IndexTuple) PageGetItem(page, iid);
 
-				if (callback(&(idxtuple->t_tid), callback_state))
+				if (callback(&(idxtuple->t_tid), false, callback_state) ==
+						IBDCR_DELETE)
 					todelete[ntodelete++] = i;
 				else
 					stats->num_index_tuples += 1;
diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c
index 34cc08f..ad56d6d 100644
--- a/src/backend/access/hash/hash.c
+++ b/src/backend/access/hash/hash.c
@@ -75,6 +75,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->ambuild = hashbuild;
 	amroutine->ambuildempty = hashbuildempty;
 	amroutine->aminsert = hashinsert;
+	amroutine->amwarminsert = NULL;
 	amroutine->ambulkdelete = hashbulkdelete;
 	amroutine->amvacuumcleanup = hashvacuumcleanup;
 	amroutine->amcanreturn = NULL;
@@ -92,6 +93,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -807,6 +809,7 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 			IndexTuple	itup;
 			Bucket		bucket;
 			bool		kill_tuple = false;
+			IndexBulkDeleteCallbackResult	result;
 
 			itup = (IndexTuple) PageGetItem(page,
 											PageGetItemId(page, offno));
@@ -816,13 +819,18 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 			 * To remove the dead tuples, we strictly want to rely on results
 			 * of callback function.  refer btvacuumpage for detailed reason.
 			 */
-			if (callback && callback(htup, callback_state))
+			if (callback)
 			{
-				kill_tuple = true;
-				if (tuples_removed)
-					*tuples_removed += 1;
+				result = callback(htup, false, callback_state);
+				if (result == IBDCR_DELETE)
+				{
+					kill_tuple = true;
+					if (tuples_removed)
+						*tuples_removed += 1;
+				}
 			}
-			else if (split_cleanup)
+
+			if (!kill_tuple && split_cleanup)
 			{
 				/* delete the tuples that are moved by split. */
 				bucket = _hash_hashkey2bucket(_hash_get_indextuple_hashkey(itup),
diff --git a/src/backend/access/hash/hashsearch.c b/src/backend/access/hash/hashsearch.c
index 2d92049..330ccc5 100644
--- a/src/backend/access/hash/hashsearch.c
+++ b/src/backend/access/hash/hashsearch.c
@@ -59,6 +59,8 @@ _hash_next(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
 	return true;
 }
 
@@ -367,6 +369,9 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
+
 	return true;
 }
 
diff --git a/src/backend/access/heap/README.WARM b/src/backend/access/heap/README.WARM
new file mode 100644
index 0000000..7c93a70
--- /dev/null
+++ b/src/backend/access/heap/README.WARM
@@ -0,0 +1,308 @@
+src/backend/access/heap/README.WARM
+
+Write Amplification Reduction Method (WARM)
+===========================================
+
+The Heap Only Tuple (HOT) feature greatly reduced redundant index
+entries and allowed re-use of the dead space occupied by previously
+updated or deleted tuples (see src/backend/access/heap/README.HOT).
+
+One of the necessary conditions for satisfying HOT update is that the
+update must not change a column used in any of the indexes on the table.
+The condition is sometimes hard to meet, especially for complex
+workloads with several indexes on large yet frequently updated tables.
+Worse, sometimes only one or two index columns may be updated, but the
+regular non-HOT update will still insert a new index entry in every
+index on the table, irrespective of whether the key pertaining to the
+index is changed or not.
+
+WARM is a technique devised to address these problems.
+
+
+Update Chains With Multiple Index Entries Pointing to the Root
+--------------------------------------------------------------
+
+When a non-HOT update is caused by an index key change, a new index
+entry must be inserted for the changed index. But if the index key
+hasn't changed for other indexes, we don't really need to insert a new
+entry. Even though the existing index entry is pointing to the old
+tuple, the new tuple is reachable via the t_ctid chain. To keep things
+simple, a WARM update requires that the heap block have enough space
+to store the new version of the tuple; this is the same requirement as
+for HOT updates.
+
+In WARM, we ensure that every index entry always points to the root of
+the WARM chain. In fact, a WARM chain looks exactly like a HOT chain
+except for the fact that there could be multiple index entries pointing
+to the root of the chain. So when a new entry is inserted in an index
+for the updated tuple as part of a WARM update, the new entry is made
+to point to the root of the WARM chain.
+
+For example, consider a table with two columns and an index on each
+column. When a tuple is first inserted into the table, each index has
+exactly one entry pointing to the tuple.
+
+	lp [1]
+	[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's entry (aaaa) also points to 1
+
+Now if the tuple's second column is updated and if there is room on the
+page, we perform a WARM update. In that case, Index1 does not get any
+new entry, and Index2's new entry still points to the root of the
+chain.
+
+	lp [1]  [2]
+	[1111, aaaa]->[1111, bbbb]
+
+	Index1's entry (1111) points to 1
+	Index2's old entry (aaaa) points to 1
+	Index2's new entry (bbbb) also points to 1
+
+"An update chain which has more than one index entry pointing to its
+root line pointer is called a WARM chain, and the action that creates
+a WARM chain is called a WARM update."
+
+Since all indexes always point to the root of the WARM chain, even when
+there is more than one index entry, WARM chains can be pruned and
+dead tuples can be removed without a need to do corresponding index
+cleanup.
+
+While this solves the problem of pruning dead tuples from a HOT/WARM
+chain, it also opens up a new technical challenge because now we have a
+situation where a heap tuple is reachable from multiple index entries,
+each having a different index key. While MVCC still ensures that only
+valid tuples are returned, a tuple with a wrong index key may be
+returned because of wrong index entries. In the above example, tuple
+[1111, bbbb] is reachable from both keys (aaaa) as well as (bbbb). For
+this reason, tuples returned from a WARM chain must always be rechecked
+for index key-match.
+
+Recheck Index Key Against Heap Tuple
+------------------------------------
+
+Since every Index AM has its own notion of index tuples, each Index AM
+must implement its own method to recheck heap tuples. For example, a
+hash index stores the hash value of the column and hence recheck routine
+for hash AM must first compute the hash value of the heap attribute and
+then compare it against the value stored in the index tuple.
+
+The patch currently implements recheck routines for hash and btree
+indexes. If the table has an index whose AM doesn't provide a recheck
+routine, WARM updates are disabled on that table.
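As a sketch of the hash-AM flavour of this (the hash function below is a simple stand-in, not PostgreSQL's hash_any(), and all names are invented): the index tuple stores only a hash, so the recheck recomputes the hash of the heap value and compares it with the stored one.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Stand-in 32-bit hash (FNV-1a); the real AM would use its own hash. */
uint32_t
toy_hash_any(const unsigned char *k, size_t len)
{
	uint32_t	h = 2166136261u;

	for (size_t i = 0; i < len; i++)
		h = (h ^ k[i]) * 16777619u;
	return h;
}

/*
 * A hash index tuple stores only the hash of the column, so the
 * recheck must hash the heap value and compare it against the value
 * stored in the index tuple.
 */
int
toy_hash_recheck(uint32_t stored_hash, const char *heapval)
{
	return stored_hash ==
		toy_hash_any((const unsigned char *) heapval, strlen(heapval));
}
```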
+
+Problem With Duplicate (key, ctid) Index Entries
+------------------------------------------------
+
+The index-key recheck logic works as long as no duplicate index keys
+point to the same WARM chain. Otherwise, the same valid tuple would be
+reachable via multiple index entries, each of which satisfies the
+index key check.
+again updated to [1111, aaaa] and if we insert a new index entry (aaaa)
+pointing to the root line pointer, we will end up with the following
+structure:
+
+	lp [1]  [2]  [3]
+	[1111, aaaa]->[1111, bbbb]->[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's oldest entry (aaaa) points to 1
+	Index2's old entry (bbbb) also points to 1
+	Index2's new entry (aaaa) also points to 1
+
+We must solve this problem to ensure that the same tuple is not
+reachable via multiple index pointers. There are a couple of ways to
+address this issue:
+
+1. Do not allow WARM update to a tuple from a WARM chain. This
+guarantees that there can never be duplicate index entries to the same
+root line pointer because we must have checked for old and new index
+keys while doing the first WARM update.
+
+2. Do not allow duplicate (key, ctid) index pointers. In the above
+example, since (aaaa, 1) already exists in the index, we must not insert
+a duplicate index entry.
+
+The patch currently implements option 1, i.e. it does not perform a
+WARM update on a tuple that already belongs to a WARM chain. HOT
+updates are fine because they do not add a new index entry.
+
+Even with this restriction, this is a significant improvement because
+the number of regular (non-HOT, non-WARM) updates can be cut in half.
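The resulting decision rule can be sketched as follows (a simplification of the logic in heap_update(); names here are invented):

```c
#include <assert.h>

typedef enum
{
	TOY_HOT,					/* no indexed column changed */
	TOY_WARM,					/* first index-key change on this chain */
	TOY_REGULAR					/* must start a fresh chain */
} ToyUpdateMode;

/*
 * HOT if no indexed column changed; WARM only for the first WARM
 * update of a chain; otherwise fall back to a regular non-HOT update.
 */
ToyUpdateMode
toy_choose_update(int index_cols_changed, int chain_already_warm)
{
	if (!index_cols_changed)
		return TOY_HOT;			/* HOT is fine even on a WARM chain */
	if (!chain_already_warm)
		return TOY_WARM;
	return TOY_REGULAR;
}
```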
+
+Expression and Partial Indexes
+------------------------------
+
+Expressions may evaluate to the same value even if the underlying column
+values have changed. A simple example is an index on "lower(col)" which
+will return the same value if the new heap value differs only in
+letter case. So we cannot rely solely on the heap column check to
+decide whether or not to insert a new index entry for expression
+indexes. Similarly, for partial indexes, the predicate expression must
+be evaluated to decide whether or not to create a new index entry when
+columns referred in the predicate expressions change.
+
+(None of this is currently implemented; we simply disallow a WARM
+update if any column used in an expression index or an index predicate
+has changed.)
+
+
+Efficiently Finding the Root Line Pointer
+-----------------------------------------
+
+During WARM update, we must be able to find the root line pointer of the
+tuple being updated. It must be noted that the t_ctid field in the heap
+tuple header is usually used to find the next tuple in the update chain.
+But the tuple that we are updating must be the last tuple in the update
+chain, and in that case the t_ctid field points to the tuple itself.
+So in theory, we could use t_ctid to store additional information in
+the last tuple of the update chain, if the information about the tuple
+being the last tuple is stored elsewhere.
+
+We now utilize another bit from t_infomask2 to explicitly identify that
+this is the last tuple in the update chain.
+
+HEAP_LATEST_TUPLE - When this bit is set, the tuple is the last tuple in
+the update chain. The OffsetNumber part of t_ctid points to the root
+line pointer of the chain when HEAP_LATEST_TUPLE flag is set.
+
+If an UPDATE operation aborts, the last tuple in the update chain
+becomes dead, and the tuple which is then left as the last valid tuple
+in the chain does not carry the root line pointer information. In such
+rare cases, the root line pointer must be found the hard way, by
+scanning the entire heap page.
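A toy version of this encoding (the field layout and names are invented; the patch reuses the OffsetNumber part of the real t_ctid):

```c
#include <assert.h>
#include <stdint.h>

#define TOY_LATEST_TUPLE 0x0001	/* stand-in for HEAP_LATEST_TUPLE */

typedef struct ToyHeader
{
	uint16_t	infomask2;
	uint16_t	ctid_offnum;	/* next tuple, or root if LATEST is set */
} ToyHeader;

/* Mark the tuple as last in its chain and stash the root offset. */
void
toy_set_root(ToyHeader *h, uint16_t root_offnum)
{
	h->infomask2 |= TOY_LATEST_TUPLE;
	h->ctid_offnum = root_offnum;
}

/*
 * Returns 1 and sets *root_offnum if the root is recorded; returns 0
 * if the caller must scan the whole page the hard way.
 */
int
toy_get_root(const ToyHeader *h, uint16_t *root_offnum)
{
	if (!(h->infomask2 & TOY_LATEST_TUPLE))
		return 0;
	*root_offnum = h->ctid_offnum;
	return 1;
}
```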
+
+Tracking WARM Chains
+--------------------
+
+When a tuple is WARM updated, the old tuple, the new tuple and every subsequent
+tuple in the chain are marked with a special HEAP_WARM_UPDATED flag. We use the
+last remaining bit in t_infomask2 to store this information.
+
+When a tuple is returned from a WARM chain, the caller must do additional
+checks to ensure that the tuple matches the index key. Even if the tuple
+precedes the WARM update in the chain, it must still be rechecked for the index
+key match (case when old tuple is returned by the new index key). So we must
+follow the update chain everytime to the end to check if this is a WARM
+chain.
+
+Converting WARM chains back to HOT chains (VACUUM ?)
+----------------------------------------------------
+
+The current implementation of WARM allows only one WARM update per
+chain. This simplifies the design and addresses certain issues around
+duplicate key scans. But it also implies that the benefit of WARM is capped
+at 50%, which is still significant; if we could convert WARM chains back to
+normal HOT status, we could do far more WARM updates.
+
+A distinct property of a WARM chain is that at least one index has more
+than one live index entry pointing to the root of the chain. In other
+words, if we can remove the duplicate entry from every index or conclusively
+prove that there are no duplicate index entries for the root line
+pointer, the chain can again be marked as HOT.
+
+Here is one idea:
+
+A WARM chain has two parts, separated by the tuple that caused the WARM
+update. All tuples in each part have matching index keys, but certain
+index keys may not match between the two parts. Let's say we mark heap
+tuples in the second part with a special HEAP_WARM_TUPLE flag. Similarly, the
+new index entries caused by the first WARM update are also marked with
+INDEX_WARM_POINTER flags.
+
+So there are two distinct parts of the WARM chain: the first, where none of
+the tuples have the HEAP_WARM_TUPLE flag set, and the second, where every
+tuple has the flag set. Each part satisfies the HOT property on its own, i.e.
+all of its tuples have the same values for the indexed columns. But the two
+parts are separated by the WARM update, which breaks the HOT property for one
+or more indexes.
+
+Heap chain: [1] [2] [3] [4]
+			[aaaa, 1111] -> [aaaa, 1111] -> [bbbb, 1111]W -> [bbbb, 1111]W
+
+Index1: 	(aaaa) points to 1 (satisfies only tuples without W)
+			(bbbb)W points to 1 (satisfies only tuples marked with W)
+
+Index2:		(1111) points to 1 (satisfies tuples with and without W)
+
+
+It's clear that for indexes with both pointers, a heap tuple without
+HEAP_WARM_TUPLE flag will be reachable from the index pointer cleared of
+INDEX_WARM_POINTER flag and that with HEAP_WARM_TUPLE flag will be reachable
+from the pointer with INDEX_WARM_POINTER. But for indexes which did not create
+a new entry, tuples with and without the HEAP_WARM_TUPLE flag will be reachable
+from the original index pointer which doesn't have the INDEX_WARM_POINTER flag.
+(there is no pointer with INDEX_WARM_POINTER in such indexes).
+
+During first heap scan of VACUUM, we look for tuples with HEAP_WARM_UPDATED
+set.  If all or none of the live tuples in the chain are marked with
+HEAP_WARM_TUPLE flag, then the chain is a candidate for HOT conversion. We
+remember the root line pointer and whether the tuples in the chain had
+HEAP_WARM_TUPLE flags set or not.
+
+If we have a WARM chain with HEAP_WARM_TUPLE set, then our goal is to remove
+the index pointers without INDEX_WARM_POINTER flags and vice versa. But there
+is a catch. For Index2 above, there is only one pointer and it does not have
+the INDEX_WARM_POINTER flag set. Since all heap tuples are reachable only via
+this pointer, it must not be removed. In other words, we should remove an
+index pointer without INDEX_WARM_POINTER iff another index pointer with
+INDEX_WARM_POINTER exists. Since index vacuum may visit these pointers in any
+order, we will need
+another index pass to detect redundant index pointers, which can safely be
+removed because all live tuples are reachable via the other index pointer. So
+in the first index pass we check which WARM candidates have 2 index pointers.
+In the second pass, we remove the redundant pointer and clear the
+INDEX_WARM_POINTER flag if that's the surviving index pointer. Note that
+all index pointers, either CLEAR or WARM, to dead tuples are removed during the
+first index scan itself.
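The second-pass rule can be condensed into a small decision function (a sketch with invented names, assuming the candidate chain's flag state was remembered from the heap scan):

```c
#include <assert.h>

typedef enum
{
	TOY_KEEP_BOTH,				/* lone pointer: never remove it */
	TOY_DROP_CLEAR,				/* remove the non-WARM pointer */
	TOY_DROP_WARM				/* remove the WARM pointer */
} ToyVacAction;

/*
 * Only when both a CLEAR and a WARM pointer to the same root exist in
 * an index is one of them redundant; which one to drop depends on
 * whether the chain's live tuples carry HEAP_WARM_TUPLE.
 */
ToyVacAction
toy_second_pass(int chain_tuples_are_warm, int has_clear_ptr,
				int has_warm_ptr)
{
	if (!(has_clear_ptr && has_warm_ptr))
		return TOY_KEEP_BOTH;
	return chain_tuples_are_warm ? TOY_DROP_CLEAR : TOY_DROP_WARM;
}
```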
+
+During the second heap scan, we fix WARM chain by clearing HEAP_WARM_UPDATED
+and HEAP_WARM_TUPLE flags on tuples.
+
+There are some more problems around aborted vacuums. For example, if vacuum
+aborts after clearing INDEX_WARM_POINTER flag but before removing the other
+index pointer, we will end up with two index pointers and none of those will
+have INDEX_WARM_POINTER set.  But since the HEAP_WARM_UPDATED flag on the heap
+tuple is still set, further WARM updates to the chain will be blocked. I guess
+we will need some special handling for case with multiple index pointers where
+none of the index pointers has INDEX_WARM_POINTER flag set. We can either leave
+these WARM chains alone and let them die with a subsequent non-WARM update or
+must apply heap-recheck logic during index vacuum to find the dead pointer.
+Given that vacuum-aborts are not common, I am inclined to leave this case
+unhandled. We must still check for presence of multiple index pointers without
+INDEX_WARM_POINTER flags and ensure that we don't accidentally remove either of
+these pointers and also must not clear WARM chains.
+
+CREATE INDEX CONCURRENTLY
+-------------------------
+
+Currently CREATE INDEX CONCURRENTLY (CIC) is implemented as a 3-phase
+process.  In the first phase, we create catalog entry for the new index
+so that the index is visible to all other backends, but still don't use
+it for either read or write.  But we ensure that no new broken HOT
+chains are created by new transactions. In the second phase, we build
+the new index using a MVCC snapshot and then make the index available
+for inserts. We then do another pass over the index and insert any
+missing tuples, each time indexing only the tuple's root line pointer. See
+README.HOT for details about how HOT impacts CIC and how the various
+challenges are tackled.
+
+WARM poses another challenge because it allows creation of HOT chains
+even when an index key is changed. But since the index is not ready for
+insertion until the second phase is over, we might end up with a
+situation where the HOT chain has tuples with different index columns,
+yet only one of these values is indexed by the new index. Note that
+during the third phase, we only index tuples whose root line pointer is
+missing from the index. But we can't easily check if the existing index
+tuple is actually indexing the heap tuple visible to the new MVCC
+snapshot. Finding that information will require us to query the index
+again for every tuple in the chain, especially if it's a WARM tuple.
+This would require repeated access to the index. Another option would be
+to return index keys along with the heap TIDs when index is scanned for
+collecting all indexed TIDs during third phase. We can then compare the
+heap tuple against the already indexed key and decide whether or not to
+index the new tuple.
+
+We solve this problem more simply by disallowing WARM updates until the
+index is ready for insertion. We need not disallow WARM wholesale;
+only updates that change the columns covered by the new index are
+prevented from being WARM updates.
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index e573f1a..4b4ccf6 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -1974,6 +1974,206 @@ heap_fetch(Relation relation,
 }
 
 /*
+ * Check status of a (possibly) WARM chain.
+ *
+ * This function looks at a HOT/WARM chain starting at tid and returns a bitmask
+ * of information. We only follow the chain as long as it's known to be valid
+ * HOT chain. Information returned by the function consists of:
+ *
+ *  HCWC_WARM_UPDATED_TUPLE - a tuple with HEAP_WARM_UPDATED is found somewhere
+ *  						  in the chain. Note that when a tuple is WARM
+ *  						  updated, both old and new versions are marked
+ *  						  with this flag.
+ *
+ *  HCWC_WARM_TUPLE  - a tuple with HEAP_WARM_TUPLE is found somewhere in
+ *					  the chain.
+ *
+ *  HCWC_CLEAR_TUPLE - a tuple without HEAP_WARM_TUPLE is found somewhere in
+ *  					 the chain.
+ *
+ *	If stop_at_warm is true, we stop when the first HEAP_WARM_UPDATED tuple is
+ *	found and return information collected so far.
+ */
+HeapCheckWarmChainStatus
+heap_check_warm_chain(Page dp, ItemPointer tid, bool stop_at_warm)
+{
+	TransactionId				prev_xmax = InvalidTransactionId;
+	OffsetNumber				offnum;
+	HeapTupleData				heapTuple;
+	HeapCheckWarmChainStatus	status = 0;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+		{
+			if (ItemIdIsRedirected(lp))
+			{
+				/* Follow the redirect */
+				offnum = ItemIdGetRedirect(lp);
+				continue;
+			}
+			/* else must be end of chain */
+			break;
+		}
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		if (HeapTupleHeaderIsWarmUpdated(heapTuple.t_data))
+		{
+			/* We found a WARM_UPDATED tuple */
+			status |= HCWC_WARM_UPDATED_TUPLE;
+
+			/*
+			 * If we've been told to stop at the first WARM_UPDATED tuple, just
+			 * return whatever information collected so far.
+			 */
+			if (stop_at_warm)
+				return status;
+
+			/*
+			 * Remember whether it's a CLEAR or a WARM tuple.
+			 */
+			if (HeapTupleHeaderIsWarm(heapTuple.t_data))
+				status |= HCWC_WARM_TUPLE;
+			else
+				status |= HCWC_CLEAR_TUPLE;
+		}
+		else
+			/* Must be a regular, non-WARM tuple */
+			status |= HCWC_CLEAR_TUPLE;
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
+		/*
+		 * It can't be a HOT chain if the tuple contains the root line pointer
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	/* Return whatever status bits we collected for the chain */
+	return status;
+}
+
+/*
+ * Scan through the WARM chain starting at tid and reset all WARM related
+ * flags. At the end, the chain will have all characteristics of a regular HOT
+ * chain.
+ *
+ * Return the number of cleared offnums. Cleared offnums are returned in the
+ * passed-in cleared_offnums array. The caller must ensure that the array is
+ * large enough to hold the maximum number of offnums that can be cleared by
+ * this invocation of heap_clear_warm_chain().
+ */
+int
+heap_clear_warm_chain(Page dp, ItemPointer tid, OffsetNumber *cleared_offnums)
+{
+	TransactionId				prev_xmax = InvalidTransactionId;
+	OffsetNumber				offnum;
+	HeapTupleData				heapTuple;
+	int							num_cleared = 0;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+		{
+			if (ItemIdIsRedirected(lp))
+			{
+				/* Follow the redirect */
+				offnum = ItemIdGetRedirect(lp);
+				continue;
+			}
+			/* else must be end of chain */
+			break;
+		}
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		/*
+		 * Clear WARM_UPDATED and WARM flags.
+		 */
+		if (HeapTupleHeaderIsWarmUpdated(heapTuple.t_data))
+		{
+			HeapTupleHeaderClearWarmUpdated(heapTuple.t_data);
+			HeapTupleHeaderClearWarm(heapTuple.t_data);
+			cleared_offnums[num_cleared++] = offnum;
+		}
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
+		/*
+		 * It can't be a HOT chain if the tuple contains root line pointer
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	return num_cleared;
+}
+
+/*
  *	heap_hot_search_buffer	- search HOT chain for tuple satisfying snapshot
  *
  * On entry, *tid is the TID of a tuple (either a simple tuple, or the root
@@ -1993,11 +2193,14 @@ heap_fetch(Relation relation,
  * Unlike heap_fetch, the caller must already have pin and (at least) share
  * lock on the buffer; it is still pinned/locked at exit.  Also unlike
  * heap_fetch, we do not report any pgstats count; caller may do so if wanted.
+ *
+ * recheck should be set false on entry by caller, will be set true on exit
+ * if a WARM tuple is encountered.
  */
 bool
 heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 					   Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call)
+					   bool *all_dead, bool first_call, bool *recheck)
 {
 	Page		dp = (Page) BufferGetPage(buffer);
 	TransactionId prev_xmax = InvalidTransactionId;
@@ -2051,9 +2254,12 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		ItemPointerSetOffsetNumber(&heapTuple->t_self, offnum);
 
 		/*
-		 * Shouldn't see a HEAP_ONLY tuple at chain start.
+		 * Shouldn't see a HEAP_ONLY tuple at chain start, unless we are
+		 * dealing with a WARM updated tuple in which case deferred triggers
+		 * may request to fetch a WARM tuple from the middle of a chain.
 		 */
-		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple))
+		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple) &&
+				!HeapTupleIsWarmUpdated(heapTuple))
 			break;
 
 		/*
@@ -2066,6 +2272,20 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 			break;
 
 		/*
+		 * Check if there exists a WARM tuple somewhere down the chain and set
+		 * recheck to TRUE.
+		 *
+		 * XXX This is not very efficient right now, and we should look for
+		 * possible improvements here.
+		 */
+		if (recheck && *recheck == false)
+		{
+			HeapCheckWarmChainStatus status;
+			status = heap_check_warm_chain(dp, &heapTuple->t_self, true);
+			*recheck = HCWC_IS_WARM_UPDATED(status);
+		}
+
+		/*
 		 * When first_call is true (and thus, skip is initially false) we'll
 		 * return the first tuple we find.  But on later passes, heapTuple
 		 * will initially be pointing to the tuple we returned last time.
@@ -2114,7 +2334,8 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		 * Check to see if HOT chain continues past this tuple; if so fetch
 		 * the next offnum and loop around.
 		 */
-		if (HeapTupleIsHotUpdated(heapTuple))
+		if (HeapTupleIsHotUpdated(heapTuple) &&
+			!HeapTupleHeaderHasRootOffset(heapTuple->t_data))
 		{
 			Assert(ItemPointerGetBlockNumber(&heapTuple->t_data->t_ctid) ==
 				   ItemPointerGetBlockNumber(tid));
@@ -2138,18 +2359,41 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
  */
 bool
 heap_hot_search(ItemPointer tid, Relation relation, Snapshot snapshot,
-				bool *all_dead)
+				bool *all_dead, bool *recheck, Buffer *cbuffer,
+				HeapTuple heapTuple)
 {
 	bool		result;
 	Buffer		buffer;
-	HeapTupleData heapTuple;
+	ItemPointerData ret_tid = *tid;
 
 	buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	LockBuffer(buffer, BUFFER_LOCK_SHARE);
-	result = heap_hot_search_buffer(tid, relation, buffer, snapshot,
-									&heapTuple, all_dead, true);
-	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-	ReleaseBuffer(buffer);
+	result = heap_hot_search_buffer(&ret_tid, relation, buffer, snapshot,
+									heapTuple, all_dead, true, recheck);
+
+	/*
+	 * If we are returning a potential candidate tuple from this chain and the
+	 * caller has requested the "recheck" hint, keep the buffer locked and
+	 * pinned. The caller must release the lock and pin on the buffer in all
+	 * such cases.
+	 */
+	if (!result || !recheck || !(*recheck))
+	{
+		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+		ReleaseBuffer(buffer);
+	}
+
+	/*
+	 * Set the caller supplied tid with the actual location of the tuple being
+	 * returned.
+	 */
+	if (result)
+	{
+		*tid = ret_tid;
+		if (cbuffer)
+			*cbuffer = buffer;
+	}
+
 	return result;
 }
 
@@ -2792,7 +3036,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		{
 			XLogRecPtr	recptr;
 			xl_heap_multi_insert *xlrec;
-			uint8		info = XLOG_HEAP2_MULTI_INSERT;
+			uint8		info = XLOG_HEAP_MULTI_INSERT;
 			char	   *tupledata;
 			int			totaldatalen;
 			char	   *scratchptr = scratch;
@@ -2889,7 +3133,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			/* filtering by origin on a row level is much more efficient */
 			XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN);
 
-			recptr = XLogInsert(RM_HEAP2_ID, info);
+			recptr = XLogInsert(RM_HEAP_ID, info);
 
 			PageSetLSN(page, recptr);
 		}
@@ -3313,7 +3557,9 @@ l1:
 	}
 
 	/* store transaction information of xact deleting the tuple */
-	tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	tp.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(tp.t_data))
+		tp.t_data->t_infomask &= ~HEAP_MOVED;
 	tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 	tp.t_data->t_infomask |= new_infomask;
 	tp.t_data->t_infomask2 |= new_infomask2;
@@ -3508,15 +3754,18 @@ simple_heap_delete(Relation relation, ItemPointer tid)
 HTSU_Result
 heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode)
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update)
 {
 	HTSU_Result result;
 	TransactionId xid = GetCurrentTransactionId();
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *exprindx_attrs;
 	Bitmapset  *interesting_attrs;
 	Bitmapset  *modified_attrs;
+	Bitmapset  *notready_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3537,6 +3786,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		have_tuple_lock = false;
 	bool		iscombo;
 	bool		use_hot_update = false;
+	bool		use_warm_update = false;
 	bool		hot_attrs_checked = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
@@ -3562,6 +3812,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot update tuples during a parallel operation")));
 
+	/* Assume no-warm update */
+	if (warm_update)
+		*warm_update = false;
+
 	/*
 	 * Fetch the list of attributes to be checked for various operations.
 	 *
@@ -3583,6 +3837,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	exprindx_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_EXPR_PREDICATE);
+	notready_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_NOTREADY);
 
 
 	block = ItemPointerGetBlockNumber(otid);
@@ -3606,8 +3864,11 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 		interesting_attrs = bms_add_members(interesting_attrs, hot_attrs);
 		hot_attrs_checked = true;
 	}
+
 	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, exprindx_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, notready_attrs);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -3654,6 +3915,9 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
 												  &oldtup, newtup);
 
+	if (modified_attrsp)
+		*modified_attrsp = bms_copy(modified_attrs);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3909,8 +4173,10 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(exprindx_attrs);
 		bms_free(modified_attrs);
 		bms_free(interesting_attrs);
+		bms_free(notready_attrs);
 		return result;
 	}
 
@@ -4074,7 +4340,9 @@ l2:
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
-		oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		oldtup.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(oldtup.t_data))
+			oldtup.t_data->t_infomask &= ~HEAP_MOVED;
 		oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		HeapTupleClearHotUpdated(&oldtup);
 		/* ... and store info about transaction updating this tuple */
@@ -4228,6 +4496,39 @@ l2:
 		 */
 		if (hot_attrs_checked && !bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
+		else
+		{
+			/*
+			 * If no WARM updates yet on this chain, let this update be a WARM
+			 * update. We must not do another WARM update even if the previous
+			 * WARM update at the end of the chain aborted. That's why we look
+			 * at the HEAP_WARM_UPDATED flag.
+			 *
+			 * We don't do WARM updates if one of the columns used in index
+			 * expressions is being modified. Since expressions may evaluate to
+			 * the same value, even when heap values change, we don't have a
+			 * good way to deal with duplicate key scans when expressions are
+			 * used in the index.
+			 *
+			 * We check whether the HOT attrs are a subset of the modified
+			 * attributes. Since the HOT attrs include all index attributes,
+			 * this allows us to avoid doing a WARM update when all index
+			 * attributes are being updated. A WARM update would gain nothing
+			 * in that case, because every index receives a new entry anyway.
+			 *
+			 * We also disable WARM temporarily if we are modifying a column
+			 * used by a new index that is still being added. We can't insert
+			 * new entries into such indexes yet, so we must not create WARM
+			 * chains that would be broken with respect to the new index
+			 * being added.
+			 */
+			if (relation->rd_supportswarm &&
+				!HeapTupleIsWarmUpdated(&oldtup) &&
+				!bms_overlap(modified_attrs, exprindx_attrs) &&
+				!bms_is_subset(hot_attrs, modified_attrs) &&
+				!bms_overlap(notready_attrs, modified_attrs))
+				use_warm_update = true;
+		}
 	}
 	else
 	{
@@ -4274,6 +4575,32 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+
+		/*
+		 * Even if we are doing a HOT update, we must carry forward the WARM
+		 * flag because we may have already inserted another index entry
+		 * pointing to our root and a third entry may create duplicates.
+		 *
+		 * Note: If we ever have a mechanism to avoid duplicate <key, TID>
+		 * entries in indexes, we could look at relaxing this restriction
+		 * and allow even more WARM updates.
+		 */
+		if (HeapTupleIsWarmUpdated(&oldtup))
+		{
+			HeapTupleSetWarmUpdated(heaptup);
+			HeapTupleSetWarmUpdated(newtup);
+		}
+
+		/*
+		 * If the old tuple is a WARM tuple then mark the new tuple as a WARM
+		 * tuple as well.
+		 */
+		if (HeapTupleIsWarm(&oldtup))
+		{
+			HeapTupleSetWarm(heaptup);
+			HeapTupleSetWarm(newtup);
+		}
+
 		/*
 		 * For HOT (or WARM) updated tuples, we store the offset of the root
 		 * line pointer of this chain in the ip_posid field of the new tuple.
@@ -4286,12 +4613,45 @@ l2:
 		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
 			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
+	else if (use_warm_update)
+	{
+		/* Mark the old tuple as HOT-updated */
+		HeapTupleSetHotUpdated(&oldtup);
+		HeapTupleSetWarmUpdated(&oldtup);
+
+		/* And mark the new tuple as heap-only */
+		HeapTupleSetHeapOnly(heaptup);
+		/* Mark the new tuple as WARM tuple */
+		HeapTupleSetWarmUpdated(heaptup);
+		/* This update also starts the WARM chain */
+		HeapTupleSetWarm(heaptup);
+		Assert(!HeapTupleIsWarm(&oldtup));
+
+		/* Mark the caller's copy too, in case different from heaptup */
+		HeapTupleSetHeapOnly(newtup);
+		HeapTupleSetWarmUpdated(newtup);
+		HeapTupleSetWarm(newtup);
+
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
+		/* Let the caller know we did a WARM update */
+		if (warm_update)
+			*warm_update = true;
+	}
 	else
 	{
 		/* Make sure tuples are correctly marked as not-HOT */
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		HeapTupleClearWarmUpdated(heaptup);
+		HeapTupleClearWarmUpdated(newtup);
+		HeapTupleClearWarm(heaptup);
+		HeapTupleClearWarm(newtup);
 		root_offnum = InvalidOffsetNumber;
 	}
 
@@ -4310,7 +4670,9 @@ l2:
 	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
-	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	oldtup.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(oldtup.t_data))
+		oldtup.t_data->t_infomask &= ~HEAP_MOVED;
 	oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 	/* ... and store info about transaction updating this tuple */
 	Assert(TransactionIdIsValid(xmax_old_tuple));
@@ -4401,7 +4763,10 @@ l2:
 	if (have_tuple_lock)
 		UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode);
 
-	pgstat_count_heap_update(relation, use_hot_update);
+	/*
+	 * Count HOT and WARM updates separately
+	 */
+	pgstat_count_heap_update(relation, use_hot_update, use_warm_update);
 
 	/*
 	 * If heaptup is a private copy, release it.  Don't forget to copy t_self
@@ -4421,6 +4786,8 @@ l2:
 	bms_free(id_attrs);
 	bms_free(modified_attrs);
 	bms_free(interesting_attrs);
+	bms_free(exprindx_attrs);
+	bms_free(notready_attrs);
 
 	return HeapTupleMayBeUpdated;
 }
@@ -4541,7 +4908,8 @@ HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
  * via ereport().
  */
 void
-simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
+simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup,
+		Bitmapset **modified_attrs, bool *warm_update)
 {
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
@@ -4550,7 +4918,7 @@ simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
 	result = heap_update(relation, otid, tup,
 						 GetCurrentCommandId(true), InvalidSnapshot,
 						 true /* wait for commit */ ,
-						 &hufd, &lockmode);
+						 &hufd, &lockmode, modified_attrs, warm_update);
 	switch (result)
 	{
 		case HeapTupleSelfUpdated:
@@ -6227,7 +6595,9 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	PageSetPrunable(page, RecentGlobalXmin);
 
 	/* store transaction information of xact deleting the tuple */
-	tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	tp.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(tp.t_data))
+		tp.t_data->t_infomask &= ~HEAP_MOVED;
 	tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 
 	/*
@@ -6801,7 +7171,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid,
 	 * Old-style VACUUM FULL is gone, but we have to keep this code as long as
 	 * we support having MOVED_OFF/MOVED_IN tuples in the database.
 	 */
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 
@@ -6820,7 +7190,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid,
 			 * have failed; whereas a non-dead MOVED_IN tuple must mean the
 			 * xvac transaction succeeded.
 			 */
-			if (tuple->t_infomask & HEAP_MOVED_OFF)
+			if (HeapTupleHeaderIsMovedOff(tuple))
 				frz->frzflags |= XLH_INVALID_XVAC;
 			else
 				frz->frzflags |= XLH_FREEZE_XVAC;
@@ -7290,7 +7660,7 @@ heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple)
 			return true;
 	}
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 		if (TransactionIdIsNormal(xid))
@@ -7373,7 +7743,7 @@ heap_tuple_needs_freeze(HeapTupleHeader tuple, TransactionId cutoff_xid,
 			return true;
 	}
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 		if (TransactionIdIsNormal(xid) &&
@@ -7399,7 +7769,7 @@ HeapTupleHeaderAdvanceLatestRemovedXid(HeapTupleHeader tuple,
 	TransactionId xmax = HeapTupleHeaderGetUpdateXid(tuple);
 	TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		if (TransactionIdPrecedes(*latestRemovedXid, xvac))
 			*latestRemovedXid = xvac;
@@ -7448,6 +7818,36 @@ log_heap_cleanup_info(RelFileNode rnode, TransactionId latestRemovedXid)
 }
 
 /*
+ * Perform XLogInsert for a heap-warm-clear operation.  Caller must already
+ * have modified the buffer and marked it dirty.
+ */
+XLogRecPtr
+log_heap_warmclear(Relation reln, Buffer buffer,
+			   OffsetNumber *cleared, int ncleared)
+{
+	xl_heap_warmclear	xlrec;
+	XLogRecPtr			recptr;
+
+	/* Caller should not call me on a non-WAL-logged relation */
+	Assert(RelationNeedsWAL(reln));
+
+	xlrec.ncleared = ncleared;
+
+	XLogBeginInsert();
+	XLogRegisterData((char *) &xlrec, SizeOfHeapWarmClear);
+
+	XLogRegisterBuffer(0, buffer, REGBUF_STANDARD);
+
+	if (ncleared > 0)
+		XLogRegisterBufData(0, (char *) cleared,
+							ncleared * sizeof(OffsetNumber));
+
+	recptr = XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_WARMCLEAR);
+
+	return recptr;
+}
+
+/*
  * Perform XLogInsert for a heap-clean operation.  Caller must already
  * have modified the buffer and marked it dirty.
  *
@@ -7602,6 +8002,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	bool		need_tuple_data = RelationIsLogicallyLogged(reln);
 	bool		init;
 	int			bufflags;
+	bool		warm_update = false;
 
 	/* Caller should not call me on a non-WAL-logged relation */
 	Assert(RelationNeedsWAL(reln));
@@ -7613,6 +8014,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	else
 		info = XLOG_HEAP_UPDATE;
 
+	if (HeapTupleIsWarmUpdated(newtup))
+		warm_update = true;
+
 	/*
 	 * If the old and new tuple are on the same page, we only need to log the
 	 * parts of the new tuple that were changed.  That saves on the amount of
@@ -7686,6 +8090,8 @@ log_heap_update(Relation reln, Buffer oldbuf,
 				xlrec.flags |= XLH_UPDATE_CONTAINS_OLD_KEY;
 		}
 	}
+	if (warm_update)
+		xlrec.flags |= XLH_UPDATE_WARM_UPDATE;
 
 	/* If new tuple is the single and first tuple on page... */
 	if (ItemPointerGetOffsetNumber(&(newtup->t_self)) == FirstOffsetNumber &&
@@ -8100,6 +8506,60 @@ heap_xlog_clean(XLogReaderState *record)
 		XLogRecordPageWithFreeSpace(rnode, blkno, freespace);
 }
 
+
+/*
+ * Handles HEAP2_WARMCLEAR record type
+ */
+static void
+heap_xlog_warmclear(XLogReaderState *record)
+{
+	XLogRecPtr	lsn = record->EndRecPtr;
+	xl_heap_warmclear	*xlrec = (xl_heap_warmclear *) XLogRecGetData(record);
+	Buffer		buffer;
+	RelFileNode rnode;
+	BlockNumber blkno;
+	XLogRedoAction action;
+
+	XLogRecGetBlockTag(record, 0, &rnode, NULL, &blkno);
+
+	/*
+	 * If we have a full-page image, restore it (using a cleanup lock) and
+	 * we're done.
+	 */
+	action = XLogReadBufferForRedoExtended(record, 0, RBM_NORMAL, true,
+										   &buffer);
+	if (action == BLK_NEEDS_REDO)
+	{
+		Page		page = (Page) BufferGetPage(buffer);
+		OffsetNumber *cleared;
+		int			ncleared;
+		Size		datalen;
+		int			i;
+
+		cleared = (OffsetNumber *) XLogRecGetBlockData(record, 0, &datalen);
+
+		ncleared = xlrec->ncleared;
+
+		for (i = 0; i < ncleared; i++)
+		{
+			ItemId			lp;
+			OffsetNumber	offnum = cleared[i];
+			HeapTupleData	heapTuple;
+
+			lp = PageGetItemId(page, offnum);
+			heapTuple.t_data = (HeapTupleHeader) PageGetItem(page, lp);
+
+			HeapTupleHeaderClearWarmUpdated(heapTuple.t_data);
+			HeapTupleHeaderClearWarm(heapTuple.t_data);
+		}
+
+		PageSetLSN(page, lsn);
+		MarkBufferDirty(buffer);
+	}
+	if (BufferIsValid(buffer))
+		UnlockReleaseBuffer(buffer);
+}
+
 /*
  * Replay XLOG_HEAP2_VISIBLE record.
  *
@@ -8346,7 +8806,9 @@ heap_xlog_delete(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		HeapTupleHeaderClearHotUpdated(htup);
 		fix_infomask_from_infobits(xlrec->infobits_set,
@@ -8367,7 +8829,7 @@ heap_xlog_delete(XLogReaderState *record)
 		if (!HeapTupleHeaderHasRootOffset(htup))
 		{
 			OffsetNumber	root_offnum;
-			root_offnum = heap_get_root_tuple(page, xlrec->offnum); 
+			root_offnum = heap_get_root_tuple(page, xlrec->offnum);
 			HeapTupleHeaderSetHeapLatest(htup, root_offnum);
 		}
 
@@ -8663,16 +9125,22 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 	Size		freespace = 0;
 	XLogRedoAction oldaction;
 	XLogRedoAction newaction;
+	bool		warm_update = false;
 
 	/* initialize to keep the compiler quiet */
 	oldtup.t_data = NULL;
 	oldtup.t_len = 0;
 
+	if (xlrec->flags & XLH_UPDATE_WARM_UPDATE)
+		warm_update = true;
+
 	XLogRecGetBlockTag(record, 0, &rnode, NULL, &newblk);
 	if (XLogRecGetBlockTag(record, 1, NULL, NULL, &oldblk))
 	{
 		/* HOT updates are never done across pages */
 		Assert(!hot_update);
+		/* WARM updates are never done across pages */
+		Assert(!warm_update);
 	}
 	else
 		oldblk = newblk;
@@ -8732,6 +9200,11 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 								   &htup->t_infomask2);
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
+
+		/* Mark the old tuple as WARM-updated */
+		if (warm_update)
+			HeapTupleHeaderSetWarmUpdated(htup);
+
 		/* Set forward chain link in t_ctid */
 		HeapTupleHeaderSetNextTid(htup, &newtid);
 
@@ -8867,6 +9340,10 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
 
+		/* Mark the new tuple as WARM-updated */
+		if (warm_update)
+			HeapTupleHeaderSetWarmUpdated(htup);
+
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
@@ -8994,7 +9471,9 @@ heap_xlog_lock(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		fix_infomask_from_infobits(xlrec->infobits_set, &htup->t_infomask,
 								   &htup->t_infomask2);
@@ -9073,7 +9552,9 @@ heap_xlog_lock_updated(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		fix_infomask_from_infobits(xlrec->infobits_set, &htup->t_infomask,
 								   &htup->t_infomask2);
@@ -9142,6 +9623,9 @@ heap_redo(XLogReaderState *record)
 		case XLOG_HEAP_INSERT:
 			heap_xlog_insert(record);
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			heap_xlog_multi_insert(record);
+			break;
 		case XLOG_HEAP_DELETE:
 			heap_xlog_delete(record);
 			break;
@@ -9170,7 +9654,7 @@ heap2_redo(XLogReaderState *record)
 {
 	uint8		info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
 
-	switch (info & XLOG_HEAP_OPMASK)
+	switch (info & XLOG_HEAP2_OPMASK)
 	{
 		case XLOG_HEAP2_CLEAN:
 			heap_xlog_clean(record);
@@ -9184,9 +9668,6 @@ heap2_redo(XLogReaderState *record)
 		case XLOG_HEAP2_VISIBLE:
 			heap_xlog_visible(record);
 			break;
-		case XLOG_HEAP2_MULTI_INSERT:
-			heap_xlog_multi_insert(record);
-			break;
 		case XLOG_HEAP2_LOCK_UPDATED:
 			heap_xlog_lock_updated(record);
 			break;
@@ -9200,6 +9681,9 @@ heap2_redo(XLogReaderState *record)
 		case XLOG_HEAP2_REWRITE:
 			heap_xlog_logical_rewrite(record);
 			break;
+		case XLOG_HEAP2_WARMCLEAR:
+			heap_xlog_warmclear(record);
+			break;
 		default:
 			elog(PANIC, "heap2_redo: unknown op code %u", info);
 	}
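The WARM-eligibility test added to heap_update() above combines several bitmapset conditions. As a rough, self-contained model of that rule — using plain bitmasks instead of PostgreSQL's Bitmapset API, with illustrative names rather than the real ones:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical simplification: attribute sets as 32-bit masks. */
typedef uint32_t AttrMask;

static bool overlaps(AttrMask a, AttrMask b)  { return (a & b) != 0; }
static bool is_subset(AttrMask a, AttrMask b) { return (a & ~b) == 0; }

/*
 * Sketch of the WARM-eligibility rule from heap_update(): allow a WARM
 * update only if the table supports WARM, the chain has never been
 * WARM-updated, no expression-index or not-ready-index column changed,
 * and not every index column changed.
 */
static bool
warm_update_allowed(bool supports_warm, bool already_warm,
                    AttrMask modified, AttrMask hot_attrs,
                    AttrMask expr_attrs, AttrMask notready_attrs)
{
    return supports_warm &&
           !already_warm &&
           !overlaps(modified, expr_attrs) &&
           !is_subset(hot_attrs, modified) &&
           !overlaps(modified, notready_attrs);
}
```

The interesting design choice mirrored here is the `!is_subset(hot_attrs, modified)` term: when every index column changes, every index gets a new entry anyway, so WARM buys nothing.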
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index f54337c..4e8ed79 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -834,6 +834,13 @@ heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				continue;
 
+			/*
+			 * If the tuple has root line pointer, it must be the end of the
+			 * If the tuple has a root line pointer, it must be the end of
+			 * the chain.
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			/* Set up to scan the HOT-chain */
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
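The pruneheap change above exploits the fact that a tuple carrying a cached root offset marks the end of its chain, addressing the "Root Pointer Search" problem: finding the root becomes O(1) instead of a whole-page scan. A minimal sketch of that fast path, under the assumption of a toy tuple header (the real flag and accessor names differ):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint16_t OffsetNumber;

/* Hypothetical miniature tuple header: latest tuples cache their root. */
typedef struct MiniTuple
{
    bool         has_root_offset; /* analogous to HeapTupleHeaderHasRootOffset() */
    OffsetNumber root_offnum;     /* cached root line pointer offset */
} MiniTuple;

/*
 * Fast path: if the tuple carries its root offset, return it directly;
 * otherwise fall back to the O(page) heap_get_root_tuple()-style scan,
 * represented here by a precomputed fallback value.
 */
static OffsetNumber
get_root_offset(const MiniTuple *tup, OffsetNumber fallback_scan_result)
{
    if (tup->has_root_offset)
        return tup->root_offnum;
    return fallback_scan_result;
}
```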
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index 2d3ae9b..bd469ee 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -404,6 +404,14 @@ rewrite_heap_tuple(RewriteState state,
 		old_tuple->t_data->t_infomask & HEAP_XACT_MASK;
 
 	/*
+	 * We must clear the HEAP_WARM_TUPLE flag if the HEAP_WARM_UPDATED flag
+	 * was cleared above.
+	 */
+	if (HeapTupleHeaderIsWarmUpdated(old_tuple->t_data))
+		HeapTupleHeaderClearWarm(new_tuple->t_data);
+
+
+	/*
 	 * While we have our hands on the tuple, we may as well freeze any
 	 * eligible xmin or xmax, so that future VACUUM effort can be saved.
 	 */
@@ -428,7 +436,7 @@ rewrite_heap_tuple(RewriteState state,
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
 
-		/* 
+		/*
 		 * We've already checked that this is not the last tuple in the chain,
 		 * so fetch the next TID in the chain.
 		 */
@@ -737,7 +745,7 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		/* 
+		/*
 		 * Set t_ctid just to ensure that block number is copied correctly, but
 		 * then immediately mark the tuple as the latest.
 		 */
diff --git a/src/backend/access/heap/tuptoaster.c b/src/backend/access/heap/tuptoaster.c
index 19e7048..47b01eb 100644
--- a/src/backend/access/heap/tuptoaster.c
+++ b/src/backend/access/heap/tuptoaster.c
@@ -1620,7 +1620,8 @@ toast_save_datum(Relation rel, Datum value,
 							 toastrel,
 							 toastidxs[i]->rd_index->indisunique ?
 							 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-							 NULL);
+							 NULL,
+							 false);
 		}
 
 		/*
diff --git a/src/backend/access/index/genam.c b/src/backend/access/index/genam.c
index a91fda7..d523c8f 100644
--- a/src/backend/access/index/genam.c
+++ b/src/backend/access/index/genam.c
@@ -127,6 +127,8 @@ RelationGetIndexScan(Relation indexRelation, int nkeys, int norderbys)
 	scan->xs_cbuf = InvalidBuffer;
 	scan->xs_continue_hot = false;
 
+	scan->indexInfo = NULL;
+
 	return scan;
 }
 
diff --git a/src/backend/access/index/indexam.c b/src/backend/access/index/indexam.c
index cc5ac8b..d048714 100644
--- a/src/backend/access/index/indexam.c
+++ b/src/backend/access/index/indexam.c
@@ -197,7 +197,8 @@ index_insert(Relation indexRelation,
 			 ItemPointer heap_t_ctid,
 			 Relation heapRelation,
 			 IndexUniqueCheck checkUnique,
-			 IndexInfo *indexInfo)
+			 IndexInfo *indexInfo,
+			 bool warm_update)
 {
 	RELATION_CHECKS;
 	CHECK_REL_PROCEDURE(aminsert);
@@ -207,6 +208,12 @@ index_insert(Relation indexRelation,
 									   (HeapTuple) NULL,
 									   InvalidBuffer);
 
+	if (warm_update)
+	{
+		Assert(indexRelation->rd_amroutine->amwarminsert != NULL);
+		return indexRelation->rd_amroutine->amwarminsert(indexRelation, values,
+				isnull, heap_t_ctid, heapRelation, checkUnique, indexInfo);
+	}
 	return indexRelation->rd_amroutine->aminsert(indexRelation, values, isnull,
 												 heap_t_ctid, heapRelation,
 												 checkUnique, indexInfo);
@@ -291,6 +298,25 @@ index_beginscan_internal(Relation indexRelation,
 	scan->parallel_scan = pscan;
 	scan->xs_temp_snap = temp_snap;
 
+	/*
+	 * If the index supports recheck, make sure that index tuple is saved
+	 * during index scans. Also build and cache IndexInfo which is used by
+	 * amrecheck routine.
+	 *
+	 * XXX Ideally, we should look at all indexes on the table and check if
+	 * WARM is at all supported on the base table. If WARM is not supported
+	 * then we don't need to do any recheck. RelationGetIndexAttrBitmap() does
+	 * do that and sets rd_supportswarm after looking at all indexes. But we
+	 * don't know if the function was called earlier in the session when we're
+	 * here. We can't call it now because there exists a risk of causing
+	 * deadlock.
+	 */
+	if (indexRelation->rd_amroutine->amrecheck)
+	{
+		scan->xs_want_itup = true;
+		scan->indexInfo = BuildIndexInfo(indexRelation);
+	}
+
 	return scan;
 }
 
@@ -358,6 +384,10 @@ index_endscan(IndexScanDesc scan)
 	if (scan->xs_temp_snap)
 		UnregisterSnapshot(scan->xs_snapshot);
 
+	/* Free cached IndexInfo, if any */
+	if (scan->indexInfo)
+		pfree(scan->indexInfo);
+
 	/* Release the scan data structure itself */
 	IndexScanEnd(scan);
 }
@@ -535,7 +565,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
 	/*
 	 * The AM's amgettuple proc finds the next index entry matching the scan
 	 * keys, and puts the TID into scan->xs_ctup.t_self.  It should also set
-	 * scan->xs_recheck and possibly scan->xs_itup/scan->xs_hitup, though we
+	 * scan->xs_tuple_recheck and possibly scan->xs_itup/scan->xs_hitup, though we
 	 * pay no attention to those fields here.
 	 */
 	found = scan->indexRelation->rd_amroutine->amgettuple(scan, direction);
@@ -574,7 +604,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
  * dropped in a future index_getnext_tid, index_fetch_heap or index_endscan
  * call).
  *
- * Note: caller must check scan->xs_recheck, and perform rechecking of the
+ * Note: caller must check scan->xs_tuple_recheck, and perform rechecking of the
  * scan keys if required.  We do not do that here because we don't have
  * enough information to do it efficiently in the general case.
  * ----------------
@@ -585,6 +615,7 @@ index_fetch_heap(IndexScanDesc scan)
 	ItemPointer tid = &scan->xs_ctup.t_self;
 	bool		all_dead = false;
 	bool		got_heap_tuple;
+	bool		tuple_recheck;
 
 	/* We can skip the buffer-switching logic if we're in mid-HOT chain. */
 	if (!scan->xs_continue_hot)
@@ -603,6 +634,8 @@ index_fetch_heap(IndexScanDesc scan)
 			heap_page_prune_opt(scan->heapRelation, scan->xs_cbuf);
 	}
 
+	tuple_recheck = false;
+
 	/* Obtain share-lock on the buffer so we can examine visibility */
 	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_SHARE);
 	got_heap_tuple = heap_hot_search_buffer(tid, scan->heapRelation,
@@ -610,32 +643,60 @@ index_fetch_heap(IndexScanDesc scan)
 											scan->xs_snapshot,
 											&scan->xs_ctup,
 											&all_dead,
-											!scan->xs_continue_hot);
-	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+											!scan->xs_continue_hot,
+											&tuple_recheck);
 
 	if (got_heap_tuple)
 	{
+		bool res = true;
+
+		/*
+		 * OK, we got a tuple which satisfies the snapshot, but if it's part
+		 * of a WARM chain, we must do additional checks to ensure that we
+		 * are indeed returning a correct tuple. Note that if the index AM
+		 * does not implement the amrecheck method, we skip the additional
+		 * checks, since WARM must have been disabled on such tables.
+		 */
+		if (tuple_recheck && scan->xs_itup &&
+			scan->indexRelation->rd_amroutine->amrecheck)
+		{
+			res = scan->indexRelation->rd_amroutine->amrecheck(
+						scan->indexRelation,
+						scan->indexInfo,
+						scan->xs_itup,
+						scan->heapRelation,
+						&scan->xs_ctup);
+		}
+
+		LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+
 		/*
 		 * Only in a non-MVCC snapshot can more than one member of the HOT
 		 * chain be visible.
 		 */
 		scan->xs_continue_hot = !IsMVCCSnapshot(scan->xs_snapshot);
 		pgstat_count_heap_fetch(scan->indexRelation);
-		return &scan->xs_ctup;
+
+		if (res)
+			return &scan->xs_ctup;
 	}
+	else
+	{
+		LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
 
-	/* We've reached the end of the HOT chain. */
-	scan->xs_continue_hot = false;
+		/* We've reached the end of the HOT chain. */
+		scan->xs_continue_hot = false;
 
-	/*
-	 * If we scanned a whole HOT chain and found only dead tuples, tell index
-	 * AM to kill its entry for that TID (this will take effect in the next
-	 * amgettuple call, in index_getnext_tid).  We do not do this when in
-	 * recovery because it may violate MVCC to do so.  See comments in
-	 * RelationGetIndexScan().
-	 */
-	if (!scan->xactStartedInRecovery)
-		scan->kill_prior_tuple = all_dead;
+		/*
+		 * If we scanned a whole HOT chain and found only dead tuples, tell index
+		 * AM to kill its entry for that TID (this will take effect in the next
+		 * amgettuple call, in index_getnext_tid).  We do not do this when in
+		 * recovery because it may violate MVCC to do so.  See comments in
+		 * RelationGetIndexScan().
+		 */
+		if (!scan->xactStartedInRecovery)
+			scan->kill_prior_tuple = all_dead;
+	}
 
 	return NULL;
 }
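The index_insert() change above routes WARM updates to a new amwarminsert callback. A stripped-down model of that dispatch — the struct and function names below are illustrative stand-ins, not the real IndexAmRoutine API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical miniature of an index AM routine table. */
typedef bool (*InsertFn)(void);

typedef struct MiniAmRoutine
{
    InsertFn aminsert;     /* normal index insert */
    InsertFn amwarminsert; /* WARM-aware insert; NULL if unsupported */
} MiniAmRoutine;

/* Stub callbacks standing in for a real AM's insert routines. */
static bool called_plain;
static bool called_warm;
static bool plain_insert(void) { called_plain = true; return true; }
static bool warm_insert(void)  { called_warm = true;  return true; }

/*
 * Sketch of the dispatch added to index_insert(): a WARM update is
 * routed to amwarminsert, anything else to aminsert.
 */
static bool
mini_index_insert(const MiniAmRoutine *am, bool warm_update)
{
    if (warm_update)
    {
        assert(am->amwarminsert != NULL);
        return am->amwarminsert();
    }
    return am->aminsert();
}
```

The asserted precondition matches the patch: heap_update() only reports warm_update = true when every index on the table supports WARM, so amwarminsert must be non-NULL here.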
diff --git a/src/backend/access/nbtree/nbtinsert.c b/src/backend/access/nbtree/nbtinsert.c
index 6dca810..463d4bf 100644
--- a/src/backend/access/nbtree/nbtinsert.c
+++ b/src/backend/access/nbtree/nbtinsert.c
@@ -20,6 +20,7 @@
 #include "access/nbtxlog.h"
 #include "access/transam.h"
 #include "access/xloginsert.h"
+#include "catalog/index.h"
 #include "miscadmin.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
@@ -250,6 +251,10 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 	BTPageOpaque opaque;
 	Buffer		nbuf = InvalidBuffer;
 	bool		found = false;
+	Buffer		buffer;
+	HeapTupleData	heapTuple;
+	bool		recheck = false;
+	IndexInfo	*indexInfo = BuildIndexInfo(rel);
 
 	/* Assume unique until we find a duplicate */
 	*is_unique = true;
@@ -309,6 +314,8 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				curitup = (IndexTuple) PageGetItem(page, curitemid);
 				htid = curitup->t_tid;
 
+				recheck = false;
+
 				/*
 				 * If we are doing a recheck, we expect to find the tuple we
 				 * are rechecking.  It's not a duplicate, but we have to keep
@@ -326,112 +333,153 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				 * have just a single index entry for the entire chain.
 				 */
 				else if (heap_hot_search(&htid, heapRel, &SnapshotDirty,
-										 &all_dead))
+							&all_dead, &recheck, &buffer,
+							&heapTuple))
 				{
 					TransactionId xwait;
+					bool result = true;
 
 					/*
-					 * It is a duplicate. If we are only doing a partial
-					 * check, then don't bother checking if the tuple is being
-					 * updated in another transaction. Just return the fact
-					 * that it is a potential conflict and leave the full
-					 * check till later.
+					 * If the tuple was WARM-updated, we may see our own
+					 * tuple again. Since WARM updates don't create new index
+					 * entries, our own tuple is only reachable via the old
+					 * index pointer.
 					 */
-					if (checkUnique == UNIQUE_CHECK_PARTIAL)
+					if (checkUnique == UNIQUE_CHECK_EXISTING &&
+							ItemPointerCompare(&htid, &itup->t_tid) == 0)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						*is_unique = false;
-						return InvalidTransactionId;
+						found = true;
+						result = false;
+						if (recheck)
+							UnlockReleaseBuffer(buffer);
 					}
-
-					/*
-					 * If this tuple is being updated by other transaction
-					 * then we have to wait for its commit/abort.
-					 */
-					xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
-						SnapshotDirty.xmin : SnapshotDirty.xmax;
-
-					if (TransactionIdIsValid(xwait))
+					else if (recheck)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						/* Tell _bt_doinsert to wait... */
-						*speculativeToken = SnapshotDirty.speculativeToken;
-						return xwait;
+						result = btrecheck(rel, indexInfo, curitup, heapRel, &heapTuple);
+						UnlockReleaseBuffer(buffer);
 					}
 
-					/*
-					 * Otherwise we have a definite conflict.  But before
-					 * complaining, look to see if the tuple we want to insert
-					 * is itself now committed dead --- if so, don't complain.
-					 * This is a waste of time in normal scenarios but we must
-					 * do it to support CREATE INDEX CONCURRENTLY.
-					 *
-					 * We must follow HOT-chains here because during
-					 * concurrent index build, we insert the root TID though
-					 * the actual tuple may be somewhere in the HOT-chain.
-					 * While following the chain we might not stop at the
-					 * exact tuple which triggered the insert, but that's OK
-					 * because if we find a live tuple anywhere in this chain,
-					 * we have a unique key conflict.  The other live tuple is
-					 * not part of this chain because it had a different index
-					 * entry.
-					 */
-					htid = itup->t_tid;
-					if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL))
-					{
-						/* Normal case --- it's still live */
-					}
-					else
+					if (result)
 					{
 						/*
-						 * It's been deleted, so no error, and no need to
-						 * continue searching
+						 * It is a duplicate. If we are only doing a partial
+						 * check, then don't bother checking if the tuple is being
+						 * updated in another transaction. Just return the fact
+						 * that it is a potential conflict and leave the full
+						 * check till later.
 						 */
-						break;
-					}
+						if (checkUnique == UNIQUE_CHECK_PARTIAL)
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							*is_unique = false;
+							return InvalidTransactionId;
+						}
 
-					/*
-					 * Check for a conflict-in as we would if we were going to
-					 * write to this page.  We aren't actually going to write,
-					 * but we want a chance to report SSI conflicts that would
-					 * otherwise be masked by this unique constraint
-					 * violation.
-					 */
-					CheckForSerializableConflictIn(rel, NULL, buf);
+						/*
+						 * If this tuple is being updated by other transaction
+						 * then we have to wait for its commit/abort.
+						 */
+						xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
+							SnapshotDirty.xmin : SnapshotDirty.xmax;
+
+						if (TransactionIdIsValid(xwait))
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							/* Tell _bt_doinsert to wait... */
+							*speculativeToken = SnapshotDirty.speculativeToken;
+							return xwait;
+						}
 
-					/*
-					 * This is a definite conflict.  Break the tuple down into
-					 * datums and report the error.  But first, make sure we
-					 * release the buffer locks we're holding ---
-					 * BuildIndexValueDescription could make catalog accesses,
-					 * which in the worst case might touch this same index and
-					 * cause deadlocks.
-					 */
-					if (nbuf != InvalidBuffer)
-						_bt_relbuf(rel, nbuf);
-					_bt_relbuf(rel, buf);
+						/*
+						 * Otherwise we have a definite conflict.  But before
+						 * complaining, look to see if the tuple we want to insert
+						 * is itself now committed dead --- if so, don't complain.
+						 * This is a waste of time in normal scenarios but we must
+						 * do it to support CREATE INDEX CONCURRENTLY.
+						 *
+						 * We must follow HOT-chains here because during
+						 * concurrent index build, we insert the root TID though
+						 * the actual tuple may be somewhere in the HOT-chain.
+						 * While following the chain we might not stop at the
+						 * exact tuple which triggered the insert, but that's OK
+						 * because if we find a live tuple anywhere in this chain,
+						 * we have a unique key conflict.  The other live tuple is
+						 * not part of this chain because it had a different index
+						 * entry.
+						 */
+						recheck = false;
+						ItemPointerCopy(&itup->t_tid, &htid);
+						if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL,
+									&recheck, &buffer, &heapTuple))
+						{
+							bool result = true;
+							if (recheck)
+							{
+								/*
+								 * Recheck if the tuple actually satisfies the
+								 * index key. Otherwise, we might be following
+								 * a wrong index pointer and mustn't entertain
+								 * this tuple.
+								 */
+								result = btrecheck(rel, indexInfo, itup, heapRel, &heapTuple);
+								UnlockReleaseBuffer(buffer);
+							}
+							if (!result)
+								break;
+							/* Normal case --- it's still live */
+						}
+						else
+						{
+							/*
+							 * It's been deleted, so no error, and no need to
+							 * continue searching.
+							 */
+							break;
+						}
 
-					{
-						Datum		values[INDEX_MAX_KEYS];
-						bool		isnull[INDEX_MAX_KEYS];
-						char	   *key_desc;
-
-						index_deform_tuple(itup, RelationGetDescr(rel),
-										   values, isnull);
-
-						key_desc = BuildIndexValueDescription(rel, values,
-															  isnull);
-
-						ereport(ERROR,
-								(errcode(ERRCODE_UNIQUE_VIOLATION),
-								 errmsg("duplicate key value violates unique constraint \"%s\"",
-										RelationGetRelationName(rel)),
-							   key_desc ? errdetail("Key %s already exists.",
-													key_desc) : 0,
-								 errtableconstraint(heapRel,
-											 RelationGetRelationName(rel))));
+						/*
+						 * Check for a conflict-in as we would if we were going to
+						 * write to this page.  We aren't actually going to write,
+						 * but we want a chance to report SSI conflicts that would
+						 * otherwise be masked by this unique constraint
+						 * violation.
+						 */
+						CheckForSerializableConflictIn(rel, NULL, buf);
+
+						/*
+						 * This is a definite conflict.  Break the tuple down into
+						 * datums and report the error.  But first, make sure we
+						 * release the buffer locks we're holding ---
+						 * BuildIndexValueDescription could make catalog accesses,
+						 * which in the worst case might touch this same index and
+						 * cause deadlocks.
+						 */
+						if (nbuf != InvalidBuffer)
+							_bt_relbuf(rel, nbuf);
+						_bt_relbuf(rel, buf);
+
+						{
+							Datum		values[INDEX_MAX_KEYS];
+							bool		isnull[INDEX_MAX_KEYS];
+							char	   *key_desc;
+
+							index_deform_tuple(itup, RelationGetDescr(rel),
+									values, isnull);
+
+							key_desc = BuildIndexValueDescription(rel, values,
+									isnull);
+
+							ereport(ERROR,
+									(errcode(ERRCODE_UNIQUE_VIOLATION),
+									 errmsg("duplicate key value violates unique constraint \"%s\"",
+										 RelationGetRelationName(rel)),
+									 key_desc ? errdetail("Key %s already exists.",
+										 key_desc) : 0,
+									 errtableconstraint(heapRel,
+										 RelationGetRelationName(rel))));
+						}
 					}
 				}
 				else if (all_dead)
diff --git a/src/backend/access/nbtree/nbtpage.c b/src/backend/access/nbtree/nbtpage.c
index f815fd4..061c8d4 100644
--- a/src/backend/access/nbtree/nbtpage.c
+++ b/src/backend/access/nbtree/nbtpage.c
@@ -766,29 +766,20 @@ _bt_page_recyclable(Page page)
 }
 
 /*
- * Delete item(s) from a btree page during VACUUM.
+ * Delete item(s) and clear WARM item(s) on a btree page during VACUUM.
  *
  * This must only be used for deleting leaf items.  Deleting an item on a
  * non-leaf page has to be done as part of an atomic action that includes
- * deleting the page it points to.
+ * deleting the page it points to. We don't ever clear pointers on a non-leaf
+ * page.
  *
  * This routine assumes that the caller has pinned and locked the buffer.
  * Also, the given itemnos *must* appear in increasing order in the array.
- *
- * We record VACUUMs and b-tree deletes differently in WAL. InHotStandby
- * we need to be able to pin all of the blocks in the btree in physical
- * order when replaying the effects of a VACUUM, just as we do for the
- * original VACUUM itself. lastBlockVacuumed allows us to tell whether an
- * intermediate range of blocks has had no changes at all by VACUUM,
- * and so must be scanned anyway during replay. We always write a WAL record
- * for the last block in the index, whether or not it contained any items
- * to be removed. This allows us to scan right up to end of index to
- * ensure correct locking.
  */
 void
-_bt_delitems_vacuum(Relation rel, Buffer buf,
-					OffsetNumber *itemnos, int nitems,
-					BlockNumber lastBlockVacuumed)
+_bt_handleitems_vacuum(Relation rel, Buffer buf,
+					OffsetNumber *delitemnos, int ndelitems,
+					OffsetNumber *clearitemnos, int nclearitems)
 {
 	Page		page = BufferGetPage(buf);
 	BTPageOpaque opaque;
@@ -796,9 +787,20 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 	/* No ereport(ERROR) until changes are logged */
 	START_CRIT_SECTION();
 
+	/*
+	 * Clear the WARM pointers.
+	 *
+	 * We must do this before dealing with the dead items because
+	 * PageIndexMultiDelete may move items around to compactify the array and
+	 * hence offnums recorded earlier won't make any sense after
+	 * PageIndexMultiDelete is called.
+	 */
+	if (nclearitems > 0)
+		_bt_clear_items(page, clearitemnos, nclearitems);
+
 	/* Fix the page */
-	if (nitems > 0)
-		PageIndexMultiDelete(page, itemnos, nitems);
+	if (ndelitems > 0)
+		PageIndexMultiDelete(page, delitemnos, ndelitems);
 
 	/*
 	 * We can clear the vacuum cycle ID since this page has certainly been
@@ -824,7 +826,8 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 		XLogRecPtr	recptr;
 		xl_btree_vacuum xlrec_vacuum;
 
-		xlrec_vacuum.lastBlockVacuumed = lastBlockVacuumed;
+		xlrec_vacuum.ndelitems = ndelitems;
+		xlrec_vacuum.nclearitems = nclearitems;
 
 		XLogBeginInsert();
 		XLogRegisterBuffer(0, buf, REGBUF_STANDARD);
@@ -835,8 +838,11 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 		 * is.  When XLogInsert stores the whole buffer, the offsets array
 		 * need not be stored too.
 		 */
-		if (nitems > 0)
-			XLogRegisterBufData(0, (char *) itemnos, nitems * sizeof(OffsetNumber));
+		if (ndelitems > 0)
+			XLogRegisterBufData(0, (char *) delitemnos, ndelitems * sizeof(OffsetNumber));
+
+		if (nclearitems > 0)
+			XLogRegisterBufData(0, (char *) clearitemnos, nclearitems * sizeof(OffsetNumber));
 
 		recptr = XLogInsert(RM_BTREE_ID, XLOG_BTREE_VACUUM);
 
@@ -1882,3 +1888,13 @@ _bt_unlink_halfdead_page(Relation rel, Buffer leafbuf, bool *rightsib_empty)
 
 	return true;
 }
+
+/*
+ * Currently just a wrapper around PageIndexClearWarmTuples, but in theory each
+ * index may have its own way to handle WARM tuples.
+ */
+void
+_bt_clear_items(Page page, OffsetNumber *clearitemnos, uint16 nclearitems)
+{
+	PageIndexClearWarmTuples(page, clearitemnos, nclearitems);
+}
diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c
index 775f2ff..6d558af 100644
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -146,6 +146,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->ambuild = btbuild;
 	amroutine->ambuildempty = btbuildempty;
 	amroutine->aminsert = btinsert;
+	amroutine->amwarminsert = btwarminsert;
 	amroutine->ambulkdelete = btbulkdelete;
 	amroutine->amvacuumcleanup = btvacuumcleanup;
 	amroutine->amcanreturn = btcanreturn;
@@ -163,6 +164,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = btestimateparallelscan;
 	amroutine->aminitparallelscan = btinitparallelscan;
 	amroutine->amparallelrescan = btparallelrescan;
+	amroutine->amrecheck = btrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -315,11 +317,12 @@ btbuildempty(Relation index)
  *		Descend the tree recursively, find the appropriate location for our
  *		new tuple, and put it there.
  */
-bool
-btinsert(Relation rel, Datum *values, bool *isnull,
+static bool
+btinsert_internal(Relation rel, Datum *values, bool *isnull,
 		 ItemPointer ht_ctid, Relation heapRel,
 		 IndexUniqueCheck checkUnique,
-		 IndexInfo *indexInfo)
+		 IndexInfo *indexInfo,
+		 bool warm_update)
 {
 	bool		result;
 	IndexTuple	itup;
@@ -328,6 +331,11 @@ btinsert(Relation rel, Datum *values, bool *isnull,
 	itup = index_form_tuple(RelationGetDescr(rel), values, isnull);
 	itup->t_tid = *ht_ctid;
 
+	if (warm_update)
+		ItemPointerSetFlags(&itup->t_tid, BTREE_INDEX_WARM_POINTER);
+	else
+		ItemPointerClearFlags(&itup->t_tid);
+
 	result = _bt_doinsert(rel, itup, checkUnique, heapRel);
 
 	pfree(itup);
@@ -335,6 +343,26 @@ btinsert(Relation rel, Datum *values, bool *isnull,
 	return result;
 }
 
+bool
+btinsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 IndexInfo *indexInfo)
+{
+	return btinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, false);
+}
+
+bool
+btwarminsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 IndexInfo *indexInfo)
+{
+	return btinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, true);
+}
+
 /*
  *	btgettuple() -- Get the next tuple in the scan.
  */
@@ -1103,7 +1131,7 @@ btvacuumscan(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 								 RBM_NORMAL, info->strategy);
 		LockBufferForCleanup(buf);
 		_bt_checkpage(rel, buf);
-		_bt_delitems_vacuum(rel, buf, NULL, 0, vstate.lastBlockVacuumed);
+		_bt_handleitems_vacuum(rel, buf, NULL, 0, NULL, 0);
 		_bt_relbuf(rel, buf);
 	}
 
@@ -1201,6 +1229,8 @@ restart:
 	{
 		OffsetNumber deletable[MaxOffsetNumber];
 		int			ndeletable;
+		OffsetNumber clearwarm[MaxOffsetNumber];
+		int			nclearwarm;
 		OffsetNumber offnum,
 					minoff,
 					maxoff;
@@ -1239,7 +1269,7 @@ restart:
 		 * Scan over all items to see which ones need deleted according to the
 		 * callback function.
 		 */
-		ndeletable = 0;
+		ndeletable = nclearwarm = 0;
 		minoff = P_FIRSTDATAKEY(opaque);
 		maxoff = PageGetMaxOffsetNumber(page);
 		if (callback)
@@ -1250,6 +1280,9 @@ restart:
 			{
 				IndexTuple	itup;
 				ItemPointer htup;
+				int			flags;
+				bool		is_warm = false;
+				IndexBulkDeleteCallbackResult	result;
 
 				itup = (IndexTuple) PageGetItem(page,
 												PageGetItemId(page, offnum));
@@ -1276,16 +1309,36 @@ restart:
 				 * applies to *any* type of index that marks index tuples as
 				 * killed.
 				 */
-				if (callback(htup, callback_state))
+				flags = ItemPointerGetFlags(&itup->t_tid);
+				is_warm = ((flags & BTREE_INDEX_WARM_POINTER) != 0);
+
+				if (is_warm)
+					stats->num_warm_pointers++;
+				else
+					stats->num_clear_pointers++;
+
+				result = callback(htup, is_warm, callback_state);
+				if (result == IBDCR_DELETE)
+				{
+					if (is_warm)
+						stats->warm_pointers_removed++;
+					else
+						stats->clear_pointers_removed++;
 					deletable[ndeletable++] = offnum;
+				}
+				else if (result == IBDCR_CLEAR_WARM)
+				{
+					clearwarm[nclearwarm++] = offnum;
+				}
 			}
 		}
 
 		/*
-		 * Apply any needed deletes.  We issue just one _bt_delitems_vacuum()
-		 * call per page, so as to minimize WAL traffic.
+		 * Apply any needed deletes and clearing.  We issue just one
+		 * _bt_handleitems_vacuum() call per page, so as to minimize WAL
+		 * traffic.
 		 */
-		if (ndeletable > 0)
+		if (ndeletable > 0 || nclearwarm > 0)
 		{
 			/*
 			 * Notice that the issued XLOG_BTREE_VACUUM WAL record includes
@@ -1301,8 +1354,8 @@ restart:
 			 * doesn't seem worth the amount of bookkeeping it'd take to avoid
 			 * that.
 			 */
-			_bt_delitems_vacuum(rel, buf, deletable, ndeletable,
-								vstate->lastBlockVacuumed);
+			_bt_handleitems_vacuum(rel, buf, deletable, ndeletable,
+								clearwarm, nclearwarm);
 
 			/*
 			 * Remember highest leaf page number we've issued a
@@ -1312,6 +1365,7 @@ restart:
 				vstate->lastBlockVacuumed = blkno;
 
 			stats->tuples_removed += ndeletable;
+			stats->pointers_cleared += nclearwarm;
 			/* must recompute maxoff */
 			maxoff = PageGetMaxOffsetNumber(page);
 		}
diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c
index 5b259a3..2765809 100644
--- a/src/backend/access/nbtree/nbtutils.c
+++ b/src/backend/access/nbtree/nbtutils.c
@@ -20,11 +20,14 @@
 #include "access/nbtree.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "access/tuptoaster.h"
+#include "catalog/index.h"
 #include "miscadmin.h"
 #include "utils/array.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 
 typedef struct BTSortArrayContext
@@ -2069,3 +2072,93 @@ btproperty(Oid index_oid, int attno,
 			return false;		/* punt to generic code */
 	}
 }
+
+/*
+ * Check if the index tuple's key matches the one computed from the given heap
+ * tuple's attributes
+ */
+bool
+btrecheck(Relation indexRel, IndexInfo *indexInfo, IndexTuple indexTuple1,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	bool		isavail[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+	IndexTuple	indexTuple2;
+
+	/*
+	 * Get the index values, except for expression attributes. Since WARM is
+	 * not used when a column used by expressions in an index is modified, we
+	 * can safely assume that those index attributes are never changed by a
+	 * WARM update.
+	 *
+	 * We cannot use FormIndexDatum here because that requires access to
+	 * executor state and we don't have that here.
+	 */
+	FormIndexPlainDatum(indexInfo, heapRel, heapTuple, values, isnull, isavail);
+
+	/*
+	 * Form an index tuple using the heap values first. This allows us to then
+	 * fetch index attributes from the current index tuple and the one that is
+	 * formed from the heap values and then do a binary comparison using
+	 * datumIsEqual().
+	 *
+	 * This takes care of doing the right comparison for compressed index
+	 * attributes (we just compare the compressed versions in both tuples) and
+	 * also ensures that we correctly detoast heap values, if need be.
+	 */
+	indexTuple2 = index_form_tuple(RelationGetDescr(indexRel), values, isnull);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue1;
+		bool	indxisnull1;
+		Datum	indxvalue2;
+		bool	indxisnull2;
+
+		/* No need to compare if the attribute value is not available */
+		if (!isavail[i - 1])
+			continue;
+
+		indxvalue1 = index_getattr(indexTuple1, i, indexRel->rd_att,
+								   &indxisnull1);
+		indxvalue2 = index_getattr(indexTuple2, i, indexRel->rd_att,
+								   &indxisnull2);
+
+		/*
+		 * If both are NULL, then they are equal
+		 */
+		if (indxisnull1 && indxisnull2)
+			continue;
+
+		/*
+		 * If just one is NULL, then they are not equal
+		 */
+		if (indxisnull1 || indxisnull2)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now just do a raw memory comparison. If the index tuple was formed
+		 * using this heap tuple, the computed index values must match
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(indxvalue1, indxvalue2, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	pfree(indexTuple2);
+
+	return equal;
+}
diff --git a/src/backend/access/nbtree/nbtxlog.c b/src/backend/access/nbtree/nbtxlog.c
index ac60db0..ef24738 100644
--- a/src/backend/access/nbtree/nbtxlog.c
+++ b/src/backend/access/nbtree/nbtxlog.c
@@ -390,8 +390,8 @@ btree_xlog_vacuum(XLogReaderState *record)
 	Buffer		buffer;
 	Page		page;
 	BTPageOpaque opaque;
-#ifdef UNUSED
 	xl_btree_vacuum *xlrec = (xl_btree_vacuum *) XLogRecGetData(record);
+#ifdef UNUSED
 
 	/*
 	 * This section of code is thought to be no longer needed, after analysis
@@ -482,19 +482,30 @@ btree_xlog_vacuum(XLogReaderState *record)
 
 		if (len > 0)
 		{
-			OffsetNumber *unused;
-			OffsetNumber *unend;
+			OffsetNumber *offnums = (OffsetNumber *) ptr;
 
-			unused = (OffsetNumber *) ptr;
-			unend = (OffsetNumber *) ((char *) ptr + len);
+			/*
+			 * Clear the WARM pointers.
+			 *
+			 * We must do this before dealing with the dead items because
+			 * PageIndexMultiDelete may move items around to compactify the
+			 * array and hence offnums recorded earlier won't make any sense
+			 * after PageIndexMultiDelete is called.
+			 */
+			if (xlrec->nclearitems > 0)
+				_bt_clear_items(page, offnums + xlrec->ndelitems,
+						xlrec->nclearitems);
 
-			if ((unend - unused) > 0)
-				PageIndexMultiDelete(page, unused, unend - unused);
+			/*
+			 * And handle the deleted items too
+			 */
+			if (xlrec->ndelitems > 0)
+				PageIndexMultiDelete(page, offnums, xlrec->ndelitems);
 		}
 
 		/*
 		 * Mark the page as not containing any LP_DEAD items --- see comments
-		 * in _bt_delitems_vacuum().
+		 * in _bt_handleitems_vacuum().
 		 */
 		opaque = (BTPageOpaque) PageGetSpecialPointer(page);
 		opaque->btpo_flags &= ~BTP_HAS_GARBAGE;
diff --git a/src/backend/access/rmgrdesc/heapdesc.c b/src/backend/access/rmgrdesc/heapdesc.c
index 44d2d63..d373e61 100644
--- a/src/backend/access/rmgrdesc/heapdesc.c
+++ b/src/backend/access/rmgrdesc/heapdesc.c
@@ -44,6 +44,12 @@ heap_desc(StringInfo buf, XLogReaderState *record)
 
 		appendStringInfo(buf, "off %u", xlrec->offnum);
 	}
+	else if (info == XLOG_HEAP_MULTI_INSERT)
+	{
+		xl_heap_multi_insert *xlrec = (xl_heap_multi_insert *) rec;
+
+		appendStringInfo(buf, "%d tuples", xlrec->ntuples);
+	}
 	else if (info == XLOG_HEAP_DELETE)
 	{
 		xl_heap_delete *xlrec = (xl_heap_delete *) rec;
@@ -102,7 +108,7 @@ heap2_desc(StringInfo buf, XLogReaderState *record)
 	char	   *rec = XLogRecGetData(record);
 	uint8		info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
 
-	info &= XLOG_HEAP_OPMASK;
+	info &= XLOG_HEAP2_OPMASK;
 	if (info == XLOG_HEAP2_CLEAN)
 	{
 		xl_heap_clean *xlrec = (xl_heap_clean *) rec;
@@ -129,12 +135,6 @@ heap2_desc(StringInfo buf, XLogReaderState *record)
 		appendStringInfo(buf, "cutoff xid %u flags %d",
 						 xlrec->cutoff_xid, xlrec->flags);
 	}
-	else if (info == XLOG_HEAP2_MULTI_INSERT)
-	{
-		xl_heap_multi_insert *xlrec = (xl_heap_multi_insert *) rec;
-
-		appendStringInfo(buf, "%d tuples", xlrec->ntuples);
-	}
 	else if (info == XLOG_HEAP2_LOCK_UPDATED)
 	{
 		xl_heap_lock_updated *xlrec = (xl_heap_lock_updated *) rec;
@@ -171,6 +171,12 @@ heap_identify(uint8 info)
 		case XLOG_HEAP_INSERT | XLOG_HEAP_INIT_PAGE:
 			id = "INSERT+INIT";
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			id = "MULTI_INSERT";
+			break;
+		case XLOG_HEAP_MULTI_INSERT | XLOG_HEAP_INIT_PAGE:
+			id = "MULTI_INSERT+INIT";
+			break;
 		case XLOG_HEAP_DELETE:
 			id = "DELETE";
 			break;
@@ -219,12 +225,6 @@ heap2_identify(uint8 info)
 		case XLOG_HEAP2_VISIBLE:
 			id = "VISIBLE";
 			break;
-		case XLOG_HEAP2_MULTI_INSERT:
-			id = "MULTI_INSERT";
-			break;
-		case XLOG_HEAP2_MULTI_INSERT | XLOG_HEAP_INIT_PAGE:
-			id = "MULTI_INSERT+INIT";
-			break;
 		case XLOG_HEAP2_LOCK_UPDATED:
 			id = "LOCK_UPDATED";
 			break;
diff --git a/src/backend/access/rmgrdesc/nbtdesc.c b/src/backend/access/rmgrdesc/nbtdesc.c
index fbde9d6..6b2c5d6 100644
--- a/src/backend/access/rmgrdesc/nbtdesc.c
+++ b/src/backend/access/rmgrdesc/nbtdesc.c
@@ -48,8 +48,8 @@ btree_desc(StringInfo buf, XLogReaderState *record)
 			{
 				xl_btree_vacuum *xlrec = (xl_btree_vacuum *) rec;
 
-				appendStringInfo(buf, "lastBlockVacuumed %u",
-								 xlrec->lastBlockVacuumed);
+				appendStringInfo(buf, "ndelitems %u, nclearitems %u",
+								 xlrec->ndelitems, xlrec->nclearitems);
 				break;
 			}
 		case XLOG_BTREE_DELETE:
diff --git a/src/backend/access/spgist/spgutils.c b/src/backend/access/spgist/spgutils.c
index e57ac49..59ef7f3 100644
--- a/src/backend/access/spgist/spgutils.c
+++ b/src/backend/access/spgist/spgutils.c
@@ -72,6 +72,7 @@ spghandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/spgist/spgvacuum.c b/src/backend/access/spgist/spgvacuum.c
index cce9b3f..711d351 100644
--- a/src/backend/access/spgist/spgvacuum.c
+++ b/src/backend/access/spgist/spgvacuum.c
@@ -155,7 +155,8 @@ vacuumLeafPage(spgBulkDeleteState *bds, Relation index, Buffer buffer,
 		{
 			Assert(ItemPointerIsValid(&lt->heapPtr));
 
-			if (bds->callback(&lt->heapPtr, bds->callback_state))
+			if (bds->callback(&lt->heapPtr, false, bds->callback_state) ==
+					IBDCR_DELETE)
 			{
 				bds->stats->tuples_removed += 1;
 				deletable[i] = true;
@@ -425,7 +426,8 @@ vacuumLeafRoot(spgBulkDeleteState *bds, Relation index, Buffer buffer)
 		{
 			Assert(ItemPointerIsValid(&lt->heapPtr));
 
-			if (bds->callback(&lt->heapPtr, bds->callback_state))
+			if (bds->callback(&lt->heapPtr, false, bds->callback_state) ==
+					IBDCR_DELETE)
 			{
 				bds->stats->tuples_removed += 1;
 				toDelete[xlrec.nDelete] = i;
@@ -902,10 +904,10 @@ spgbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 }
 
 /* Dummy callback to delete no tuples during spgvacuumcleanup */
-static bool
-dummy_callback(ItemPointer itemptr, void *state)
+static IndexBulkDeleteCallbackResult
+dummy_callback(ItemPointer itemptr, bool is_warm, void *state)
 {
-	return false;
+	return IBDCR_KEEP;
 }
 
 /*
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 1eb163f..2c27661 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -54,6 +54,7 @@
 #include "nodes/makefuncs.h"
 #include "nodes/nodeFuncs.h"
 #include "optimizer/clauses.h"
+#include "optimizer/var.h"
 #include "parser/parser.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
@@ -114,7 +115,7 @@ static void IndexCheckExclusion(Relation heapRelation,
 					IndexInfo *indexInfo);
 static inline int64 itemptr_encode(ItemPointer itemptr);
 static inline void itemptr_decode(ItemPointer itemptr, int64 encoded);
-static bool validate_index_callback(ItemPointer itemptr, void *opaque);
+static IndexBulkDeleteCallbackResult validate_index_callback(ItemPointer itemptr, bool is_warm, void *opaque);
 static void validate_index_heapscan(Relation heapRelation,
 						Relation indexRelation,
 						IndexInfo *indexInfo,
@@ -1691,6 +1692,20 @@ BuildIndexInfo(Relation index)
 	ii->ii_AmCache = NULL;
 	ii->ii_Context = CurrentMemoryContext;
 
+	/* build a bitmap of all table attributes referenced by this index */
+	for (i = 0; i < ii->ii_NumIndexAttrs; i++)
+	{
+		AttrNumber attr = ii->ii_KeyAttrNumbers[i];
+		ii->ii_indxattrs = bms_add_member(ii->ii_indxattrs, attr -
+				FirstLowInvalidHeapAttributeNumber);
+	}
+
+	/* Collect all attributes used in expressions, too */
+	pull_varattnos((Node *) ii->ii_Expressions, 1, &ii->ii_indxattrs);
+
+	/* Collect all attributes in the index predicate, too */
+	pull_varattnos((Node *) ii->ii_Predicate, 1, &ii->ii_indxattrs);
+
 	return ii;
 }
 
@@ -1815,6 +1830,51 @@ FormIndexDatum(IndexInfo *indexInfo,
 		elog(ERROR, "wrong number of index expressions");
 }
 
+/*
+ * This is same as FormIndexDatum but we don't compute any expression
+ * attributes and hence can be used when executor interfaces are not available.
+ * If i'th attribute is available then isavail[i] is set to true, else set to
+ * false. The caller must always check if an attribute value is available
+ * before trying to do anything useful with that.
+ */
+void
+FormIndexPlainDatum(IndexInfo *indexInfo,
+			   Relation heapRel,
+			   HeapTuple heapTup,
+			   Datum *values,
+			   bool *isnull,
+			   bool *isavail)
+{
+	int			i;
+
+	for (i = 0; i < indexInfo->ii_NumIndexAttrs; i++)
+	{
+		int			keycol = indexInfo->ii_KeyAttrNumbers[i];
+		Datum		iDatum;
+		bool		isNull;
+
+		if (keycol != 0)
+		{
+			/*
+			 * Plain index column; get the value we need directly from the
+			 * heap tuple.
+			 */
+			iDatum = heap_getattr(heapTup, keycol, RelationGetDescr(heapRel), &isNull);
+			values[i] = iDatum;
+			isnull[i] = isNull;
+			isavail[i] = true;
+		}
+		else
+		{
+			/*
+			 * This is an expression attribute which can't be computed by us.
+			 * So just inform the caller about it.
+			 */
+			isavail[i] = false;
+			isnull[i] = true;
+		}
+	}
+}
 
 /*
  * index_update_stats --- update pg_class entry after CREATE INDEX or REINDEX
@@ -2929,15 +2989,15 @@ itemptr_decode(ItemPointer itemptr, int64 encoded)
 /*
  * validate_index_callback - bulkdelete callback to collect the index TIDs
  */
-static bool
-validate_index_callback(ItemPointer itemptr, void *opaque)
+static IndexBulkDeleteCallbackResult
+validate_index_callback(ItemPointer itemptr, bool is_warm, void *opaque)
 {
 	v_i_state  *state = (v_i_state *) opaque;
 	int64		encoded = itemptr_encode(itemptr);
 
 	tuplesort_putdatum(state->tuplesort, Int64GetDatum(encoded), false);
 	state->itups += 1;
-	return false;				/* never actually delete anything */
+	return IBDCR_KEEP;				/* never actually delete anything */
 }
 
 /*
@@ -3156,7 +3216,8 @@ validate_index_heapscan(Relation heapRelation,
 						 heapRelation,
 						 indexInfo->ii_Unique ?
 						 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-						 indexInfo);
+						 indexInfo,
+						 false);
 
 			state->tups_inserted += 1;
 		}
diff --git a/src/backend/catalog/indexing.c b/src/backend/catalog/indexing.c
index abc344a..6392f33 100644
--- a/src/backend/catalog/indexing.c
+++ b/src/backend/catalog/indexing.c
@@ -66,10 +66,15 @@ CatalogCloseIndexes(CatalogIndexState indstate)
  *
  * This should be called for each inserted or updated catalog tuple.
  *
+ * If the tuple was WARM updated, modified_attrs contains the set of columns
+ * changed by the update.  We must not insert new index entries for indexes
+ * which do not refer to any of the modified columns.
+ *
  * This is effectively a cut-down version of ExecInsertIndexTuples.
  */
 static void
-CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
+CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple,
+		Bitmapset *modified_attrs, bool warm_update)
 {
 	int			i;
 	int			numIndexes;
@@ -79,12 +84,28 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 	IndexInfo **indexInfoArray;
 	Datum		values[INDEX_MAX_KEYS];
 	bool		isnull[INDEX_MAX_KEYS];
+	ItemPointerData root_tid;
 
-	/* HOT update does not require index inserts */
-	if (HeapTupleIsHeapOnly(heapTuple))
+	/*
+	 * A HOT update does not require index inserts, but a WARM update may
+	 * require them for some indexes.
+	 */
+	if (HeapTupleIsHeapOnly(heapTuple) && !warm_update)
 		return;
 
 	/*
+	 * If we've done a WARM update, then we must index the TID of the root line
+	 * pointer and not the actual TID of the new tuple.
+	 */
+	if (warm_update)
+		ItemPointerSet(&root_tid,
+				ItemPointerGetBlockNumber(&(heapTuple->t_self)),
+				HeapTupleHeaderGetRootOffset(heapTuple->t_data));
+	else
+		ItemPointerCopy(&heapTuple->t_self, &root_tid);
+
+
+	/*
 	 * Get information from the state structure.  Fall out if nothing to do.
 	 */
 	numIndexes = indstate->ri_NumIndices;
@@ -112,6 +133,17 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 			continue;
 
 		/*
+		 * If we've done WARM update, then we must not insert a new index tuple
+		 * if none of the index keys have changed. This is not just an
+		 * optimization, but a requirement for WARM to work correctly.
+		 */
+		if (warm_update)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
+		/*
 		 * Expressional and partial indexes on system catalogs are not
 		 * supported, nor exclusion constraints, nor deferred uniqueness
 		 */
@@ -136,11 +168,12 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 		index_insert(relationDescs[i],	/* index relation */
 					 values,	/* array of index Datums */
 					 isnull,	/* is-null flags */
-					 &(heapTuple->t_self),		/* tid of heap tuple */
+					 &root_tid,
 					 heapRelation,
 					 relationDescs[i]->rd_index->indisunique ?
 					 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-					 indexInfo);
+					 indexInfo,
+					 warm_update);
 	}
 
 	ExecDropSingleTupleTableSlot(slot);
@@ -168,7 +201,7 @@ CatalogTupleInsert(Relation heapRel, HeapTuple tup)
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 	CatalogCloseIndexes(indstate);
 
 	return oid;
@@ -190,7 +223,7 @@ CatalogTupleInsertWithInfo(Relation heapRel, HeapTuple tup,
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 
 	return oid;
 }
@@ -210,12 +243,14 @@ void
 CatalogTupleUpdate(Relation heapRel, ItemPointer otid, HeapTuple tup)
 {
 	CatalogIndexState indstate;
+	bool	warm_update;
+	Bitmapset	*modified_attrs;
 
 	indstate = CatalogOpenIndexes(heapRel);
 
-	simple_heap_update(heapRel, otid, tup);
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, modified_attrs, warm_update);
 	CatalogCloseIndexes(indstate);
 }
 
@@ -231,9 +266,12 @@ void
 CatalogTupleUpdateWithInfo(Relation heapRel, ItemPointer otid, HeapTuple tup,
 						   CatalogIndexState indstate)
 {
-	simple_heap_update(heapRel, otid, tup);
+	Bitmapset  *modified_attrs;
+	bool		warm_update;
+
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, modified_attrs, warm_update);
 }
 
 /*
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index d8b762e..ca44e03 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -530,6 +530,7 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_tuples_deleted(C.oid) AS n_tup_del,
             pg_stat_get_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_tuples_warm_updated(C.oid) AS n_tup_warm_upd,
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
@@ -560,7 +561,8 @@ CREATE VIEW pg_stat_xact_all_tables AS
             pg_stat_get_xact_tuples_inserted(C.oid) AS n_tup_ins,
             pg_stat_get_xact_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_xact_tuples_deleted(C.oid) AS n_tup_del,
-            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd
+            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_xact_tuples_warm_updated(C.oid) AS n_tup_warm_upd
     FROM pg_class C LEFT JOIN
          pg_index I ON C.oid = I.indrelid
          LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
diff --git a/src/backend/commands/constraint.c b/src/backend/commands/constraint.c
index e2544e5..330b661 100644
--- a/src/backend/commands/constraint.c
+++ b/src/backend/commands/constraint.c
@@ -40,6 +40,7 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	TriggerData *trigdata = castNode(TriggerData, fcinfo->context);
 	const char *funcname = "unique_key_recheck";
 	HeapTuple	new_row;
+	HeapTupleData heapTuple;
 	ItemPointerData tmptid;
 	Relation	indexRel;
 	IndexInfo  *indexInfo;
@@ -102,7 +103,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	 * removed.
 	 */
 	tmptid = new_row->t_self;
-	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL))
+	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL,
+				NULL, NULL, &heapTuple))
 	{
 		/*
 		 * All rows in the HOT chain are dead, so skip the check.
@@ -166,7 +168,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 		 */
 		index_insert(indexRel, values, isnull, &(new_row->t_self),
 					 trigdata->tg_relation, UNIQUE_CHECK_EXISTING,
-					 indexInfo);
+					 indexInfo,
+					 false);
 	}
 	else
 	{
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index ab59be8..22c272c 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -2688,6 +2688,8 @@ CopyFrom(CopyState cstate)
 					if (resultRelInfo->ri_NumIndices > 0)
 						recheckIndexes = ExecInsertIndexTuples(slot,
 															&(tuple->t_self),
+															&(tuple->t_self),
+															NULL,
 															   estate,
 															   false,
 															   NULL,
@@ -2842,6 +2844,7 @@ CopyFromInsertBatch(CopyState cstate, EState *estate, CommandId mycid,
 			ExecStoreTuple(bufferedTuples[i], myslot, InvalidBuffer, false);
 			recheckIndexes =
 				ExecInsertIndexTuples(myslot, &(bufferedTuples[i]->t_self),
+									  &(bufferedTuples[i]->t_self), NULL,
 									  estate, false, NULL, NIL);
 			ExecARInsertTriggers(estate, resultRelInfo,
 								 bufferedTuples[i],
diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c
index 4861799..b62b0e9 100644
--- a/src/backend/commands/indexcmds.c
+++ b/src/backend/commands/indexcmds.c
@@ -694,7 +694,14 @@ DefineIndex(Oid relationId,
 	 * visible to other transactions before we start to build the index. That
 	 * will prevent them from making incompatible HOT updates.  The new index
 	 * will be marked not indisready and not indisvalid, so that no one else
-	 * tries to either insert into it or use it for queries.
+	 * tries to either insert into it or use it for queries. In addition to
+	 * that, WARM updates will be disallowed if an update is modifying one of
+	 * the columns used by this new index. This is necessary to ensure that we
+	 * don't create WARM tuples which do not have a corresponding entry in this
+	 * index. It must be noted that during the second phase, we will index only
+	 * those heap tuples whose root line pointer is not already in the index,
+	 * hence it's important that all tuples in a given chain have the same
+	 * value for any indexed column (including this new index).
 	 *
 	 * We must commit our current transaction so that the index becomes
 	 * visible; then start another.  Note that all the data structures we just
@@ -742,7 +749,10 @@ DefineIndex(Oid relationId,
 	 * marked as "not-ready-for-inserts".  The index is consulted while
 	 * deciding HOT-safety though.  This arrangement ensures that no new HOT
 	 * chains can be created where the new tuple and the old tuple in the
-	 * chain have different index keys.
+	 * chain have different index keys. Also, the new index is consulted for
+	 * deciding whether a WARM update is possible; a WARM update is not done
+	 * if a column used by this index is being updated. This ensures that we
+	 * don't create WARM tuples which are not indexed by this index.
 	 *
 	 * We now take a new snapshot, and build the index using all tuples that
 	 * are visible in this snapshot.  We can be sure that any HOT updates to
@@ -777,7 +787,8 @@ DefineIndex(Oid relationId,
 	/*
 	 * Update the pg_index row to mark the index as ready for inserts. Once we
 	 * commit this transaction, any new transactions that open the table must
-	 * insert new entries into the index for insertions and non-HOT updates.
+	 * insert new entries into the index for insertions and non-HOT updates or
+	 * WARM updates where this index needs a new entry.
 	 */
 	index_set_state_flags(indexRelationId, INDEX_CREATE_SET_READY);
 
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 5b43a66..f52490f 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -104,6 +104,39 @@
  */
 #define PREFETCH_SIZE			((BlockNumber) 32)
 
+/*
+ * Structure to track WARM chains that can be converted into HOT chains during
+ * this run.
+ *
+ * To reduce the space requirement, we're using bitfields. But the way things
+ * are laid out, we're still wasting one byte per candidate chain.
+ */
+typedef struct LVWarmChain
+{
+	ItemPointerData	chain_tid;			/* root of the chain */
+
+	/*
+	 * 1 - if the chain contains only post-warm tuples
+	 * 0 - if the chain contains only pre-warm tuples
+	 */
+	uint8			is_postwarm_chain:2;
+
+	/* 1 - if this chain must remain a WARM chain */
+	uint8			keep_warm_chain:2;
+
+	/*
+	 * Number of CLEAR pointers to this root TID found so far - must never be
+	 * more than 2.
+	 */
+	uint8			num_clear_pointers:2;
+
+	/*
+	 * Number of WARM pointers to this root TID found so far - must never be
+	 * more than 1.
+	 */
+	uint8			num_warm_pointers:2;
+} LVWarmChain;
+
 typedef struct LVRelStats
 {
 	/* hasindex = true means two-pass strategy; false means one-pass */
@@ -122,6 +155,14 @@ typedef struct LVRelStats
 	BlockNumber pages_removed;
 	double		tuples_deleted;
 	BlockNumber nonempty_pages; /* actually, last nonempty page + 1 */
+
+	/* List of candidate WARM chains that can be converted into HOT chains */
+	/* NB: this list is ordered by TID of the root pointers */
+	int				num_warm_chains;	/* current # of entries */
+	int				max_warm_chains;	/* # slots allocated in array */
+	LVWarmChain 	*warm_chains;		/* array of LVWarmChain */
+	double			num_non_convertible_warm_chains;
+
 	/* List of TIDs of tuples we intend to delete */
 	/* NB: this list is ordered by TID address */
 	int			num_dead_tuples;	/* current # of entries */
@@ -150,6 +191,7 @@ static void lazy_scan_heap(Relation onerel, int options,
 static void lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats);
 static bool lazy_check_needs_freeze(Buffer buf, bool *hastup);
 static void lazy_vacuum_index(Relation indrel,
+				  bool clear_warm,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats);
 static void lazy_cleanup_index(Relation indrel,
@@ -157,6 +199,10 @@ static void lazy_cleanup_index(Relation indrel,
 				   LVRelStats *vacrelstats);
 static int lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 				 int tupindex, LVRelStats *vacrelstats, Buffer *vmbuffer);
+static int lazy_warmclear_page(Relation onerel, BlockNumber blkno,
+				 Buffer buffer, int chainindex, LVRelStats *vacrelstats,
+				 Buffer *vmbuffer, bool check_all_visible);
+static void lazy_reset_warm_pointer_count(LVRelStats *vacrelstats);
 static bool should_attempt_truncation(LVRelStats *vacrelstats);
 static void lazy_truncate_heap(Relation onerel, LVRelStats *vacrelstats);
 static BlockNumber count_nondeletable_pages(Relation onerel,
@@ -164,8 +210,15 @@ static BlockNumber count_nondeletable_pages(Relation onerel,
 static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks);
 static void lazy_record_dead_tuple(LVRelStats *vacrelstats,
 					   ItemPointer itemptr);
-static bool lazy_tid_reaped(ItemPointer itemptr, void *state);
+static void lazy_record_warm_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr);
+static void lazy_record_clear_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr);
+static IndexBulkDeleteCallbackResult lazy_tid_reaped(ItemPointer itemptr, bool is_warm, void *state);
+static IndexBulkDeleteCallbackResult lazy_indexvac_phase1(ItemPointer itemptr, bool is_warm, void *state);
+static IndexBulkDeleteCallbackResult lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state);
 static int	vac_cmp_itemptr(const void *left, const void *right);
+static int vac_cmp_warm_chain(const void *left, const void *right);
 static bool heap_page_is_all_visible(Relation rel, Buffer buf,
 					 TransactionId *visibility_cutoff_xid, bool *all_frozen);
 
@@ -690,8 +743,10 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		 * If we are close to overrunning the available space for dead-tuple
 		 * TIDs, pause and do a cycle of vacuuming before we tackle this page.
 		 */
-		if ((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
-			vacrelstats->num_dead_tuples > 0)
+		if (((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
+			vacrelstats->num_dead_tuples > 0) ||
+			((vacrelstats->max_warm_chains - vacrelstats->num_warm_chains) < MaxHeapTuplesPerPage &&
+			 vacrelstats->num_warm_chains > 0))
 		{
 			const int	hvp_index[] = {
 				PROGRESS_VACUUM_PHASE,
@@ -721,6 +776,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			/* Remove index entries */
 			for (i = 0; i < nindexes; i++)
 				lazy_vacuum_index(Irel[i],
+								  (vacrelstats->num_warm_chains > 0),
 								  &indstats[i],
 								  vacrelstats);
 
@@ -743,6 +799,9 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			 * valid.
 			 */
 			vacrelstats->num_dead_tuples = 0;
+			vacrelstats->num_warm_chains = 0;
+			memset(vacrelstats->warm_chains, 0,
+					vacrelstats->max_warm_chains * sizeof (LVWarmChain));
 			vacrelstats->num_index_scans++;
 
 			/* Report that we are once again scanning the heap */
@@ -947,15 +1006,31 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 				continue;
 			}
 
+			ItemPointerSet(&(tuple.t_self), blkno, offnum);
+
 			/* Redirect items mustn't be touched */
 			if (ItemIdIsRedirected(itemid))
 			{
+				HeapCheckWarmChainStatus status = heap_check_warm_chain(page,
+						&tuple.t_self, false);
+				if (HCWC_IS_WARM_UPDATED(status))
+				{
+					/*
+					 * A chain which is either completely WARM or completely
+					 * CLEAR is a candidate for chain conversion. Remember the
+					 * chain and whether it contains all WARM tuples or not.
+					 */
+					if (HCWC_IS_ALL_WARM(status))
+						lazy_record_warm_chain(vacrelstats, &tuple.t_self);
+					else if (HCWC_IS_ALL_CLEAR(status))
+						lazy_record_clear_chain(vacrelstats, &tuple.t_self);
+					else
+						vacrelstats->num_non_convertible_warm_chains++;
+				}
 				hastup = true;	/* this page won't be truncatable */
 				continue;
 			}
 
-			ItemPointerSet(&(tuple.t_self), blkno, offnum);
-
 			/*
 			 * DEAD item pointers are to be vacuumed normally; but we don't
 			 * count them in tups_vacuumed, else we'd be double-counting (at
@@ -975,6 +1050,26 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			tuple.t_len = ItemIdGetLength(itemid);
 			tuple.t_tableOid = RelationGetRelid(onerel);
 
+			if (!HeapTupleIsHeapOnly(&tuple))
+			{
+				HeapCheckWarmChainStatus status = heap_check_warm_chain(page,
+						&tuple.t_self, false);
+				if (HCWC_IS_WARM_UPDATED(status))
+				{
+					/*
+					 * A chain which is either completely WARM or completely
+					 * CLEAR is a candidate for chain conversion. Remember the
+					 * chain and whether it contains all WARM tuples or not.
+					 */
+					if (HCWC_IS_ALL_WARM(status))
+						lazy_record_warm_chain(vacrelstats, &tuple.t_self);
+					else if (HCWC_IS_ALL_CLEAR(status))
+						lazy_record_clear_chain(vacrelstats, &tuple.t_self);
+					else
+						vacrelstats->num_non_convertible_warm_chains++;
+				}
+			}
+
 			tupgone = false;
 
 			switch (HeapTupleSatisfiesVacuum(&tuple, OldestXmin, buf))
@@ -1040,6 +1135,19 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 							break;
 						}
 
+						/*
+						 * If this tuple was ever WARM updated or is a WARM
+						 * tuple, there could be multiple index entries
+						 * pointing to the root of this chain. We can't do
+						 * index-only scans for such tuples without rechecking
+						 * the index key. So mark the page as !all_visible
+						 */
+						if (HeapTupleHeaderIsWarmUpdated(tuple.t_data))
+						{
+							all_visible = false;
+							break;
+						}
+
 						/* Track newest xmin on page. */
 						if (TransactionIdFollows(xmin, visibility_cutoff_xid))
 							visibility_cutoff_xid = xmin;
@@ -1282,7 +1390,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 
 	/* If any tuples need to be deleted, perform final vacuum cycle */
 	/* XXX put a threshold on min number of tuples here? */
-	if (vacrelstats->num_dead_tuples > 0)
+	if (vacrelstats->num_dead_tuples > 0 || vacrelstats->num_warm_chains > 0)
 	{
 		const int	hvp_index[] = {
 			PROGRESS_VACUUM_PHASE,
@@ -1300,6 +1408,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		/* Remove index entries */
 		for (i = 0; i < nindexes; i++)
 			lazy_vacuum_index(Irel[i],
+							  (vacrelstats->num_warm_chains > 0),
 							  &indstats[i],
 							  vacrelstats);
 
@@ -1371,7 +1480,10 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
  *
  *		This routine marks dead tuples as unused and compacts out free
  *		space on their pages.  Pages not having dead tuples recorded from
- *		lazy_scan_heap are not visited at all.
+ *		lazy_scan_heap are not visited at all. This routine also converts
+ *		candidate WARM chains to HOT chains by clearing WARM-related flags. The
+ *		candidate chains are determined by the preceding index scans after
+ *		looking at the data collected by the first heap scan.
  *
  * Note: the reason for doing this as a second pass is we cannot remove
  * the tuples until we've removed their index entries, and we want to
@@ -1380,7 +1492,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 static void
 lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 {
-	int			tupindex;
+	int			tupindex, chainindex;
 	int			npages;
 	PGRUsage	ru0;
 	Buffer		vmbuffer = InvalidBuffer;
@@ -1389,33 +1501,69 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 	npages = 0;
 
 	tupindex = 0;
-	while (tupindex < vacrelstats->num_dead_tuples)
+	chainindex = 0;
+	while (tupindex < vacrelstats->num_dead_tuples ||
+		   chainindex < vacrelstats->num_warm_chains)
 	{
-		BlockNumber tblk;
+		BlockNumber tblk, chainblk, vacblk;
 		Buffer		buf;
 		Page		page;
 		Size		freespace;
 
 		vacuum_delay_point();
 
-		tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
-		buf = ReadBufferExtended(onerel, MAIN_FORKNUM, tblk, RBM_NORMAL,
+		tblk = chainblk = InvalidBlockNumber;
+		if (chainindex < vacrelstats->num_warm_chains)
+			chainblk =
+				ItemPointerGetBlockNumber(&(vacrelstats->warm_chains[chainindex].chain_tid));
+
+		if (tupindex < vacrelstats->num_dead_tuples)
+			tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
+
+		if (tblk == InvalidBlockNumber)
+			vacblk = chainblk;
+		else if (chainblk == InvalidBlockNumber)
+			vacblk = tblk;
+		else
+			vacblk = Min(chainblk, tblk);
+
+		Assert(vacblk != InvalidBlockNumber);
+
+		buf = ReadBufferExtended(onerel, MAIN_FORKNUM, vacblk, RBM_NORMAL,
 								 vac_strategy);
-		if (!ConditionalLockBufferForCleanup(buf))
+
+
+		if (vacblk == chainblk)
+			LockBufferForCleanup(buf);
+		else if (!ConditionalLockBufferForCleanup(buf))
 		{
 			ReleaseBuffer(buf);
 			++tupindex;
 			continue;
 		}
-		tupindex = lazy_vacuum_page(onerel, tblk, buf, tupindex, vacrelstats,
-									&vmbuffer);
+
+		/*
+		 * Convert WARM chains on this page. This should be done before
+		 * vacuuming the page to ensure that we can correctly set visibility
+		 * bits after clearing WARM chains.
+		 *
+		 * If we are going to vacuum this page then don't check for
+		 * all-visibility just yet.
+		 */
+		if (vacblk == chainblk)
+			chainindex = lazy_warmclear_page(onerel, chainblk, buf, chainindex,
+					vacrelstats, &vmbuffer, chainblk != tblk);
+
+		if (vacblk == tblk)
+			tupindex = lazy_vacuum_page(onerel, tblk, buf, tupindex, vacrelstats,
+					&vmbuffer);
 
 		/* Now that we've compacted the page, record its available space */
 		page = BufferGetPage(buf);
 		freespace = PageGetHeapFreeSpace(page);
 
 		UnlockReleaseBuffer(buf);
-		RecordPageWithFreeSpace(onerel, tblk, freespace);
+		RecordPageWithFreeSpace(onerel, vacblk, freespace);
 		npages++;
 	}
 
@@ -1434,6 +1582,107 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 }
 
 /*
+ *	lazy_warmclear_page() -- clear various WARM bits on the tuples.
+ *
+ * Caller must hold pin and buffer cleanup lock on the buffer.
+ *
+ * chainindex is the index in vacrelstats->warm_chains of the first candidate
+ * WARM chain for this page.  We assume the rest follow sequentially.
+ * The return value is the first chainindex after the chains of this page.
+ *
+ * If check_all_visible is set then we also check if the page has now become
+ * all visible and update visibility map.
+ */
+static int
+lazy_warmclear_page(Relation onerel, BlockNumber blkno, Buffer buffer,
+				 int chainindex, LVRelStats *vacrelstats, Buffer *vmbuffer,
+				 bool check_all_visible)
+{
+	Page			page = BufferGetPage(buffer);
+	OffsetNumber	cleared_offnums[MaxHeapTuplesPerPage];
+	int				num_cleared = 0;
+	TransactionId	visibility_cutoff_xid;
+	bool			all_frozen;
+
+	pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_WARMCLEARED, blkno);
+
+	START_CRIT_SECTION();
+
+	for (; chainindex < vacrelstats->num_warm_chains ; chainindex++)
+	{
+		BlockNumber tblk;
+		LVWarmChain	*chain;
+
+		chain = &vacrelstats->warm_chains[chainindex];
+
+		tblk = ItemPointerGetBlockNumber(&chain->chain_tid);
+		if (tblk != blkno)
+			break;				/* past end of tuples for this block */
+
+		/*
+		 * Since a heap page can have no more than MaxHeapTuplesPerPage
+		 * offnums and we process each offnum only once, MaxHeapTuplesPerPage
+		 * size array should be enough to hold all cleared tuples in this page.
+		 */
+		if (!chain->keep_warm_chain)
+			num_cleared += heap_clear_warm_chain(page, &chain->chain_tid,
+					cleared_offnums + num_cleared);
+	}
+
+	/*
+	 * Mark buffer dirty before we write WAL.
+	 */
+	MarkBufferDirty(buffer);
+
+	/* XLOG stuff */
+	if (RelationNeedsWAL(onerel))
+	{
+		XLogRecPtr	recptr;
+
+		recptr = log_heap_warmclear(onerel, buffer,
+								cleared_offnums, num_cleared);
+		PageSetLSN(page, recptr);
+	}
+
+	END_CRIT_SECTION();
+
+	/* If not checking for all-visibility then we're done */
+	if (!check_all_visible)
+		return chainindex;
+
+	/*
+	 * The following code should match the corresponding code in
+	 * lazy_vacuum_page.
+	 */
+	if (heap_page_is_all_visible(onerel, buffer, &visibility_cutoff_xid,
+								 &all_frozen))
+		PageSetAllVisible(page);
+
+	/*
+	 * All the changes to the heap page have been done. If the all-visible
+	 * flag is now set, also set the VM all-visible bit (and, if possible, the
+	 * all-frozen bit) unless this has already been done previously.
+	 */
+	if (PageIsAllVisible(page))
+	{
+		uint8		vm_status = visibilitymap_get_status(onerel, blkno, vmbuffer);
+		uint8		flags = 0;
+
+		/* Set the VM all-frozen bit to flag, if needed */
+		if ((vm_status & VISIBILITYMAP_ALL_VISIBLE) == 0)
+			flags |= VISIBILITYMAP_ALL_VISIBLE;
+		if ((vm_status & VISIBILITYMAP_ALL_FROZEN) == 0 && all_frozen)
+			flags |= VISIBILITYMAP_ALL_FROZEN;
+
+		Assert(BufferIsValid(*vmbuffer));
+		if (flags != 0)
+			visibilitymap_set(onerel, blkno, buffer, InvalidXLogRecPtr,
+							  *vmbuffer, visibility_cutoff_xid, flags);
+	}
+	return chainindex;
+}
+
+/*
  *	lazy_vacuum_page() -- free dead tuples on a page
  *					 and repair its fragmentation.
  *
@@ -1586,6 +1835,24 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
 	return false;
 }
 
+/*
+ * Reset counters tracking number of WARM and CLEAR pointers per candidate TID.
+ * These counters are maintained per index and cleared when the next index is
+ * picked up for cleanup.
+ *
+ * We don't touch the keep_warm_chain flag since, once a chain is known to be
+ * non-convertible, we must remember that across all indexes.
+ */
+static void
+lazy_reset_warm_pointer_count(LVRelStats *vacrelstats)
+{
+	int i;
+	for (i = 0; i < vacrelstats->num_warm_chains; i++)
+	{
+		LVWarmChain *chain = &vacrelstats->warm_chains[i];
+		chain->num_clear_pointers = chain->num_warm_pointers = 0;
+	}
+}
 
 /*
  *	lazy_vacuum_index() -- vacuum one index relation.
@@ -1595,6 +1862,7 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
  */
 static void
 lazy_vacuum_index(Relation indrel,
+				  bool clear_warm,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats)
 {
@@ -1610,15 +1878,87 @@ lazy_vacuum_index(Relation indrel,
 	ivinfo.num_heap_tuples = vacrelstats->old_rel_tuples;
 	ivinfo.strategy = vac_strategy;
 
-	/* Do bulk deletion */
-	*stats = index_bulk_delete(&ivinfo, *stats,
-							   lazy_tid_reaped, (void *) vacrelstats);
+	/*
+	 * If told, convert WARM chains into HOT chains.
+	 *
+	 * We must have already collected candidate WARM chains i.e. chains that
+	 * have either all tuples with HEAP_WARM_TUPLE flag set or none.
+	 *
+	 * This works in two phases. In the first phase, we do a complete index
+	 * scan and collect information about index pointers to the candidate
+	 * chains, but we don't do conversion. To be precise, we count the number
+	 * of WARM and CLEAR index pointers to each candidate chain and use that
+	 * knowledge to arrive at a decision and do the actual conversion during
+	 * the second phase (we kill known dead pointers though in this phase).
+	 *
+	 * In the second phase, for each candidate chain we check if we have seen a
+	 * WARM index pointer. For such chains, we kill the CLEAR pointer and
+	 * convert the WARM pointer into a CLEAR pointer. The heap tuples are
+	 * cleared of WARM flags in the second heap scan. If we did not find any
+	 * WARM pointer to a WARM chain, that means that the chain is reachable
+	 * from the CLEAR pointer (because say WARM update did not add a new entry
+	 * for this index). In that case, we do nothing.  There is a third case
+	 * where we find two CLEAR pointers to a candidate chain. This can happen
+	 * because of aborted vacuums. We don't handle that case yet, but it should
+	 * be possible to apply the same recheck logic and find which of the clear
+	 * pointers is redundant and should be removed.
+	 *
+	 * For CLEAR chains, we just kill the WARM pointer, if it exists, and keep
+	 * the CLEAR pointer.
+	 */
+	if (clear_warm)
+	{
+		/*
+		 * Before starting the index scan, reset the counters of WARM and CLEAR
+		 * pointers, probably carried forward from the previous index.
+		 */
+		lazy_reset_warm_pointer_count(vacrelstats);
+
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_indexvac_phase1, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to remove %d row versions, found "
+						"%.0f warm pointers, %.0f clear pointers, removed "
+						"%.0f warm pointers, removed %.0f clear pointers",
+						RelationGetRelationName(indrel),
+						vacrelstats->num_dead_tuples,
+						(*stats)->num_warm_pointers,
+						(*stats)->num_clear_pointers,
+						(*stats)->warm_pointers_removed,
+						(*stats)->clear_pointers_removed)));
+
+		(*stats)->num_warm_pointers = 0;
+		(*stats)->num_clear_pointers = 0;
+		(*stats)->warm_pointers_removed = 0;
+		(*stats)->clear_pointers_removed = 0;
+		(*stats)->pointers_cleared = 0;
+
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_indexvac_phase2, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to convert WARM pointers, found "
+						"%.0f WARM pointers, %.0f CLEAR pointers, removed "
+						"%.0f WARM pointers, removed %.0f CLEAR pointers, "
+						"cleared %.0f WARM pointers",
+						RelationGetRelationName(indrel),
+						(*stats)->num_warm_pointers,
+						(*stats)->num_clear_pointers,
+						(*stats)->warm_pointers_removed,
+						(*stats)->clear_pointers_removed,
+						(*stats)->pointers_cleared)));
+	}
+	else
+	{
+		/* Do bulk deletion */
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_tid_reaped, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to remove %d row versions",
+						RelationGetRelationName(indrel),
+						vacrelstats->num_dead_tuples),
+				 errdetail("%s.", pg_rusage_show(&ru0))));
+	}
 
-	ereport(elevel,
-			(errmsg("scanned index \"%s\" to remove %d row versions",
-					RelationGetRelationName(indrel),
-					vacrelstats->num_dead_tuples),
-			 errdetail("%s.", pg_rusage_show(&ru0))));
 }
 
 /*
@@ -1992,9 +2332,11 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 
 	if (vacrelstats->hasindex)
 	{
-		maxtuples = (vac_work_mem * 1024L) / sizeof(ItemPointerData);
+		maxtuples = (vac_work_mem * 1024L) / (sizeof(ItemPointerData) +
+				sizeof(LVWarmChain));
 		maxtuples = Min(maxtuples, INT_MAX);
-		maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));
+		maxtuples = Min(maxtuples, MaxAllocSize / (sizeof(ItemPointerData) +
+					sizeof(LVWarmChain)));
 
 		/* curious coding here to ensure the multiplication can't overflow */
 		if ((BlockNumber) (maxtuples / LAZY_ALLOC_TUPLES) > relblocks)
@@ -2012,6 +2354,57 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 	vacrelstats->max_dead_tuples = (int) maxtuples;
 	vacrelstats->dead_tuples = (ItemPointer)
 		palloc(maxtuples * sizeof(ItemPointerData));
+
+	/*
+	 * XXX Cheat for now and allocate the same size array for tracking warm
+	 * chains. maxtuples must already have been adjusted above to ensure we
+	 * don't exceed vac_work_mem.
+	 */
+	vacrelstats->num_warm_chains = 0;
+	vacrelstats->max_warm_chains = (int) maxtuples;
+	vacrelstats->warm_chains = (LVWarmChain *)
+		palloc0(maxtuples * sizeof(LVWarmChain));
+
+}
+
+/*
+ * lazy_record_clear_chain - remember one CLEAR chain
+ */
+static void
+lazy_record_clear_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr)
+{
+	/*
+	 * The array shouldn't overflow under normal behavior, but perhaps it
+	 * could if we are given a really small maintenance_work_mem. In that
+	 * case, just forget the last few tuples (we'll get 'em next time).
+	 */
+	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	{
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 0;
+		vacrelstats->num_warm_chains++;
+	}
+}
+
+/*
+ * lazy_record_warm_chain - remember one WARM chain
+ */
+static void
+lazy_record_warm_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr)
+{
+	/*
+	 * The array shouldn't overflow under normal behavior, but perhaps it
+	 * could if we are given a really small maintenance_work_mem. In that
+	 * case, just forget the last few tuples (we'll get 'em next time).
+	 */
+	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	{
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 1;
+		vacrelstats->num_warm_chains++;
+	}
 }
 
 /*
@@ -2042,8 +2435,8 @@ lazy_record_dead_tuple(LVRelStats *vacrelstats,
  *
  *		Assumes dead_tuples array is in sorted order.
  */
-static bool
-lazy_tid_reaped(ItemPointer itemptr, void *state)
+static IndexBulkDeleteCallbackResult
+lazy_tid_reaped(ItemPointer itemptr, bool is_warm, void *state)
 {
 	LVRelStats *vacrelstats = (LVRelStats *) state;
 	ItemPointer res;
@@ -2054,7 +2447,193 @@ lazy_tid_reaped(ItemPointer itemptr, void *state)
 								sizeof(ItemPointerData),
 								vac_cmp_itemptr);
 
-	return (res != NULL);
+	return (res != NULL) ? IBDCR_DELETE : IBDCR_KEEP;
+}
+
+/*
+ *	lazy_indexvac_phase1() -- run first pass of index vacuum
+ *
+ *		This has the right signature to be an IndexBulkDeleteCallback.
+ */
+static IndexBulkDeleteCallbackResult
+lazy_indexvac_phase1(ItemPointer itemptr, bool is_warm, void *state)
+{
+	LVRelStats		*vacrelstats = (LVRelStats *) state;
+	ItemPointer		res;
+	LVWarmChain	*chain;
+
+	res = (ItemPointer) bsearch((void *) itemptr,
+								(void *) vacrelstats->dead_tuples,
+								vacrelstats->num_dead_tuples,
+								sizeof(ItemPointerData),
+								vac_cmp_itemptr);
+
+	if (res != NULL)
+		return IBDCR_DELETE;
+
+	chain = (LVWarmChain *) bsearch((void *) itemptr,
+								(void *) vacrelstats->warm_chains,
+								vacrelstats->num_warm_chains,
+								sizeof(LVWarmChain),
+								vac_cmp_warm_chain);
+	if (chain != NULL)
+	{
+		if (is_warm)
+			chain->num_warm_pointers++;
+		else
+			chain->num_clear_pointers++;
+	}
+	return IBDCR_KEEP;
+}
+
+/*
+ *	lazy_indexvac_phase2() -- run second pass of index vacuum
+ *
+ *		This has the right signature to be an IndexBulkDeleteCallback.
+ */
+static IndexBulkDeleteCallbackResult
+lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state)
+{
+	LVRelStats		*vacrelstats = (LVRelStats *) state;
+	LVWarmChain	*chain;
+
+	chain = (LVWarmChain *) bsearch((void *) itemptr,
+								(void *) vacrelstats->warm_chains,
+								vacrelstats->num_warm_chains,
+								sizeof(LVWarmChain),
+								vac_cmp_warm_chain);
+
+	if (chain != NULL && (chain->keep_warm_chain != 1))
+	{
+		/*
+		 * At no point can we have more than one WARM pointer to any chain,
+		 * nor more than two CLEAR pointers.
+		 */
+		Assert(chain->num_warm_pointers <= 1);
+		Assert(chain->num_clear_pointers <= 2);
+
+		if (chain->is_postwarm_chain == 1)
+		{
+			if (is_warm)
+			{
+				/*
+				 * A WARM pointer, pointing to a WARM chain.
+				 *
+				 * Clear the warm pointer (and delete the CLEAR pointer). We
+				 * may have already seen the CLEAR pointer in the scan and
+				 * deleted that or we may see it later in the scan. It doesn't
+				 * matter if we fail at any point because we won't clear up
+				 * WARM bits on the heap tuples until we have dealt with the
+				 * index pointers cleanly.
+				 */
+				return IBDCR_CLEAR_WARM;
+			}
+			else
+			{
+				/*
+				 * CLEAR pointer to a WARM chain.
+				 */
+				if (chain->num_warm_pointers > 0)
+				{
+					/*
+					 * If there exists a WARM pointer to the chain, we can
+					 * delete the CLEAR pointer and clear the WARM bits on the
+					 * heap tuples.
+					 */
+					return IBDCR_DELETE;
+				}
+				else if (chain->num_clear_pointers == 1)
+				{
+					/*
+					 * If this is the only pointer to a WARM chain, we must
+					 * keep the CLEAR pointer.
+					 *
+					 * The presence of a WARM chain indicates that the WARM
+					 * update must have committed. But during that update this
+					 * index was probably not updated, so it contains just the
+					 * one original CLEAR pointer to the chain.
+					 * We should be able to clear the WARM bits on heap tuples
+					 * unless we later find another index which prevents the
+					 * cleanup.
+					 */
+					return IBDCR_KEEP;
+				}
+			}
+		}
+		else
+		{
+			/*
+			 * This is a CLEAR chain.
+			 */
+			if (is_warm)
+			{
+				/*
+				 * A WARM pointer to a CLEAR chain.
+				 *
+				 * This can happen when a WARM update is aborted. Later the HOT
+				 * chain is pruned leaving behind only CLEAR tuples in the
+				 * chain. But the WARM index pointer inserted in the index
+				 * remains and it must now be deleted before we clear WARM bits
+				 * from the heap tuple.
+				 */
+				return IBDCR_DELETE;
+			}
+
+			/*
+			 * CLEAR pointer to a CLEAR chain.
+			 *
+			 * If this is the only surviving CLEAR pointer, keep it and clear
+			 * the WARM bits from the heap tuples.
+			 */
+			if (chain->num_clear_pointers == 1)
+				return IBDCR_KEEP;
+
+			/*
+			 * If there is more than one CLEAR pointer to this chain, we could
+			 * apply the recheck logic, kill the redundant CLEAR pointer and
+			 * convert the chain. But that's not yet done.
+			 */
+		}
+
+		/*
+		 * For everything else, we must keep the WARM bits and also keep the
+		 * index pointers.
+		 */
+		chain->keep_warm_chain = 1;
+		return IBDCR_KEEP;
+	}
+	return IBDCR_KEEP;
+}
+
+/*
+ * Comparator routines for use with qsort() and bsearch(). Similar to
+ * vac_cmp_itemptr, but right hand argument is LVWarmChain struct pointer.
+ */
+static int
+vac_cmp_warm_chain(const void *left, const void *right)
+{
+	BlockNumber lblk,
+				rblk;
+	OffsetNumber loff,
+				roff;
+
+	lblk = ItemPointerGetBlockNumber((ItemPointer) left);
+	rblk = ItemPointerGetBlockNumber(&((LVWarmChain *) right)->chain_tid);
+
+	if (lblk < rblk)
+		return -1;
+	if (lblk > rblk)
+		return 1;
+
+	loff = ItemPointerGetOffsetNumber((ItemPointer) left);
+	roff = ItemPointerGetOffsetNumber(&((LVWarmChain *) right)->chain_tid);
+
+	if (loff < roff)
+		return -1;
+	if (loff > roff)
+		return 1;
+
+	return 0;
 }
 
 /*
@@ -2170,6 +2749,18 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
 						break;
 					}
 
+					/*
+					 * If this or any other tuple in the chain ever WARM
+					 * updated, there could be multiple index entries pointing
+					 * to the root of this chain. We can't do index-only scans
+					 * for such tuples without rechecking the index key. So
+					 * mark the page as !all_visible
+					 */
+					if (HeapTupleHeaderIsWarmUpdated(tuple.t_data))
+					{
+						all_visible = false;
+					}
+
 					/* Track newest xmin on page. */
 					if (TransactionIdFollows(xmin, *visibility_cutoff_xid))
 						*visibility_cutoff_xid = xmin;
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index c3f1873..2143978 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -270,6 +270,8 @@ ExecCloseIndices(ResultRelInfo *resultRelInfo)
 List *
 ExecInsertIndexTuples(TupleTableSlot *slot,
 					  ItemPointer tupleid,
+					  ItemPointer root_tid,
+					  Bitmapset *modified_attrs,
 					  EState *estate,
 					  bool noDupErr,
 					  bool *specConflict,
@@ -324,6 +326,17 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 		if (!indexInfo->ii_ReadyForInserts)
 			continue;
 
+		/*
+		 * If modified_attrs is set, we only insert index entries into those
+		 * indexes whose columns have changed. All other indexes can reach the
+		 * new tuple through their existing index pointers.
+		 */
+		if (modified_attrs)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
 		/* Check for partial index */
 		if (indexInfo->ii_Predicate != NIL)
 		{
@@ -387,10 +400,11 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 			index_insert(indexRelation, /* index relation */
 						 values,	/* array of index Datums */
 						 isnull,	/* null flags */
-						 tupleid,		/* tid of heap tuple */
+						 root_tid,		/* tid of heap or root tuple */
 						 heapRelation,	/* heap relation */
 						 checkUnique,	/* type of uniqueness check to do */
-						 indexInfo);	/* index AM may need this */
+						 indexInfo,	/* index AM may need this */
+						 (modified_attrs != NULL));	/* is it a WARM update? */
 
 		/*
 		 * If the index has an associated exclusion constraint, check that.
@@ -787,6 +801,9 @@ retry:
 		{
 			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
 				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
+			else
+				ItemPointerCopy(&tup->t_self, &ctid_wait);
+
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index f20d728..747e4ce 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -399,6 +399,8 @@ ExecSimpleRelationInsert(EState *estate, TupleTableSlot *slot)
 
 		if (resultRelInfo->ri_NumIndices > 0)
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &(tuple->t_self),
+												   NULL,
 												   estate, false, NULL,
 												   NIL);
 
@@ -445,6 +447,8 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 	if (!skip_tuple)
 	{
 		List	   *recheckIndexes = NIL;
+		bool		warm_update;
+		Bitmapset  *modified_attrs;
 
 		/* Check the constraints of the tuple */
 		if (rel->rd_att->constr)
@@ -455,13 +459,35 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 
 		/* OK, update the tuple and index entries for it */
 		simple_heap_update(rel, &searchslot->tts_tuple->t_self,
-						   slot->tts_tuple);
+						   slot->tts_tuple, &modified_attrs, &warm_update);
 
 		if (resultRelInfo->ri_NumIndices > 0 &&
-			!HeapTupleIsHeapOnly(slot->tts_tuple))
+			(!HeapTupleIsHeapOnly(slot->tts_tuple) || warm_update))
+		{
+			ItemPointerData root_tid;
+
+			/*
+			 * If we did a WARM update then we must index the tuple using its
+			 * root line pointer and not the tuple TID itself.
+			 */
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self, &root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
+
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL,
 												   NIL);
+		}
 
 		/* AFTER ROW UPDATE Triggers */
 		ExecARUpdateTriggers(estate, resultRelInfo,
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index 19eb175..ef3653c 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -39,6 +39,7 @@
 
 #include "access/relscan.h"
 #include "access/transam.h"
+#include "access/valid.h"
 #include "executor/execdebug.h"
 #include "executor/nodeBitmapHeapscan.h"
 #include "pgstat.h"
@@ -395,11 +396,27 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 			OffsetNumber offnum = tbmres->offsets[curslot];
 			ItemPointerData tid;
 			HeapTupleData heapTuple;
+			bool		recheck = false;
 
 			ItemPointerSet(&tid, page, offnum);
 			if (heap_hot_search_buffer(&tid, scan->rs_rd, buffer, snapshot,
-									   &heapTuple, NULL, true))
-				scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+									   &heapTuple, NULL, true, &recheck))
+			{
+				bool		valid = true;
+
+				if (scan->rs_key)
+					HeapKeyTest(&heapTuple, RelationGetDescr(scan->rs_rd),
+							scan->rs_nkeys, scan->rs_key, valid);
+				if (valid)
+					scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+
+				/*
+				 * If the heap tuple needs a recheck because of a WARM update,
+				 * treat the page as lossy so the caller re-evaluates the
+				 * quals against the fetched tuple.
+				 */
+				if (recheck)
+					tbmres->recheck = true;
+			}
 		}
 	}
 	else
diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c
index 5afd02e..6e48c2e 100644
--- a/src/backend/executor/nodeIndexscan.c
+++ b/src/backend/executor/nodeIndexscan.c
@@ -142,8 +142,8 @@ IndexNext(IndexScanState *node)
 					   false);	/* don't pfree */
 
 		/*
-		 * If the index was lossy, we have to recheck the index quals using
-		 * the fetched tuple.
+		 * If the index was lossy or the tuple was WARM, we have to recheck
+		 * the index quals using the fetched tuple.
 		 */
 		if (scandesc->xs_recheck)
 		{
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index 0b524e0..2ad4a2c 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -513,6 +513,7 @@ ExecInsert(ModifyTableState *mtstate,
 
 			/* insert index entries for tuple */
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												 &(tuple->t_self), NULL,
 												 estate, true, &specConflict,
 												   arbiterIndexes);
 
@@ -559,6 +560,7 @@ ExecInsert(ModifyTableState *mtstate,
 			/* insert index entries for tuple */
 			if (resultRelInfo->ri_NumIndices > 0)
 				recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+													   &(tuple->t_self), NULL,
 													   estate, false, NULL,
 													   arbiterIndexes);
 		}
@@ -892,6 +894,9 @@ ExecUpdate(ItemPointer tupleid,
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
 	List	   *recheckIndexes = NIL;
+	Bitmapset  *modified_attrs = NULL;
+	ItemPointerData	root_tid;
+	bool		warm_update;
 
 	/*
 	 * abort the operation if not running transactions
@@ -1008,7 +1013,7 @@ lreplace:;
 							 estate->es_output_cid,
 							 estate->es_crosscheck_snapshot,
 							 true /* wait for commit */ ,
-							 &hufd, &lockmode);
+							 &hufd, &lockmode, &modified_attrs, &warm_update);
 		switch (result)
 		{
 			case HeapTupleSelfUpdated:
@@ -1095,10 +1100,28 @@ lreplace:;
 		 * the t_self field.
 		 *
 		 * If it's a HOT update, we mustn't insert new index entries.
+		 *
+		 * If it's a WARM update, then we must insert new entries with TID
+		 * pointing to the root of the WARM chain.
 		 */
-		if (resultRelInfo->ri_NumIndices > 0 && !HeapTupleIsHeapOnly(tuple))
+		if (resultRelInfo->ri_NumIndices > 0 &&
+			(!HeapTupleIsHeapOnly(tuple) || warm_update))
+		{
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self, &root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL, NIL);
+		}
 	}
 
 	if (canSetTag)
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index b704788..cdfd76e 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -1824,7 +1824,7 @@ pgstat_count_heap_insert(Relation rel, PgStat_Counter n)
  * pgstat_count_heap_update - count a tuple update
  */
 void
-pgstat_count_heap_update(Relation rel, bool hot)
+pgstat_count_heap_update(Relation rel, bool hot, bool warm)
 {
 	PgStat_TableStatus *pgstat_info = rel->pgstat_info;
 
@@ -1842,6 +1842,8 @@ pgstat_count_heap_update(Relation rel, bool hot)
 		/* t_tuples_hot_updated is nontransactional, so just advance it */
 		if (hot)
 			pgstat_info->t_counts.t_tuples_hot_updated++;
+		else if (warm)
+			pgstat_info->t_counts.t_tuples_warm_updated++;
 	}
 }
 
@@ -4330,6 +4332,7 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->tuples_updated = 0;
 		result->tuples_deleted = 0;
 		result->tuples_hot_updated = 0;
+		result->tuples_warm_updated = 0;
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
 		result->changes_since_analyze = 0;
@@ -5439,6 +5442,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated = tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted = tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated = tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated = tabmsg->t_counts.t_tuples_warm_updated;
 			tabentry->n_live_tuples = tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples = tabmsg->t_counts.t_delta_dead_tuples;
 			tabentry->changes_since_analyze = tabmsg->t_counts.t_changed_tuples;
@@ -5466,6 +5470,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated += tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted += tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated += tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated += tabmsg->t_counts.t_tuples_warm_updated;
 			/* If table was truncated, first reset the live/dead counters */
 			if (tabmsg->t_counts.t_truncated)
 			{
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index 5c13d26..7a9b48a 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -347,7 +347,7 @@ DecodeStandbyOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 static void
 DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 {
-	uint8		info = XLogRecGetInfo(buf->record) & XLOG_HEAP_OPMASK;
+	uint8		info = XLogRecGetInfo(buf->record) & XLOG_HEAP2_OPMASK;
 	TransactionId xid = XLogRecGetXid(buf->record);
 	SnapBuild  *builder = ctx->snapshot_builder;
 
@@ -359,10 +359,6 @@ DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 
 	switch (info)
 	{
-		case XLOG_HEAP2_MULTI_INSERT:
-			if (SnapBuildProcessChange(builder, xid, buf->origptr))
-				DecodeMultiInsert(ctx, buf);
-			break;
 		case XLOG_HEAP2_NEW_CID:
 			{
 				xl_heap_new_cid *xlrec;
@@ -390,6 +386,7 @@ DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 		case XLOG_HEAP2_CLEANUP_INFO:
 		case XLOG_HEAP2_VISIBLE:
 		case XLOG_HEAP2_LOCK_UPDATED:
+		case XLOG_HEAP2_WARMCLEAR:
 			break;
 		default:
 			elog(ERROR, "unexpected RM_HEAP2_ID record type: %u", info);
@@ -418,6 +415,10 @@ DecodeHeapOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 			if (SnapBuildProcessChange(builder, xid, buf->origptr))
 				DecodeInsert(ctx, buf);
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			if (SnapBuildProcessChange(builder, xid, buf->origptr))
+				DecodeMultiInsert(ctx, buf);
+			break;
 
 			/*
 			 * Treat HOT update as normal updates. There is no useful
@@ -809,7 +810,7 @@ DecodeDelete(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 }
 
 /*
- * Decode XLOG_HEAP2_MULTI_INSERT_insert record into multiple tuplebufs.
+ * Decode XLOG_HEAP_MULTI_INSERT record into multiple tuplebufs.
  *
  * Currently MULTI_INSERT will always contain the full tuples.
  */
diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index fdf045a..8d23e92 100644
--- a/src/backend/storage/page/bufpage.c
+++ b/src/backend/storage/page/bufpage.c
@@ -1151,6 +1151,29 @@ PageIndexTupleOverwrite(Page page, OffsetNumber offnum,
 	return true;
 }
 
+/*
+ * PageIndexClearWarmTuples
+ *
+ * Clear the given WARM pointers by resetting the flags stored in the TID
+ * field. We assume there is nothing else in the TID flags other than the WARM
+ * information and clearing all flag bits is safe. If that changes, we must
+ * change this routine as well.
+ */
+void
+PageIndexClearWarmTuples(Page page, OffsetNumber *clearitemnos,
+						 uint16 nclearitems)
+{
+	int			i;
+	ItemId		itemid;
+	IndexTuple	itup;
+
+	for (i = 0; i < nclearitems; i++)
+	{
+		itemid = PageGetItemId(page, clearitemnos[i]);
+		itup = (IndexTuple) PageGetItem(page, itemid);
+		ItemPointerClearFlags(&itup->t_tid);
+	}
+}
 
 /*
  * Set checksum for a page in shared buffers.
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index a987d0d..b8677f3 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -145,6 +145,22 @@ pg_stat_get_tuples_hot_updated(PG_FUNCTION_ARGS)
 
 
 Datum
+pg_stat_get_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_StatTabEntry *tabentry;
+
+	if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+
+Datum
 pg_stat_get_live_tuples(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
@@ -1644,6 +1660,21 @@ pg_stat_get_xact_tuples_hot_updated(PG_FUNCTION_ARGS)
 }
 
 Datum
+pg_stat_get_xact_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_TableStatus *tabentry;
+
+	if ((tabentry = find_tabstat_entry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->t_counts.t_tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+Datum
 pg_stat_get_xact_blocks_fetched(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index a6b60c6..285e07c 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -2339,6 +2339,7 @@ RelationDestroyRelation(Relation relation, bool remember_tupdesc)
 	list_free_deep(relation->rd_fkeylist);
 	list_free(relation->rd_indexlist);
 	bms_free(relation->rd_indexattr);
+	bms_free(relation->rd_exprindexattr);
 	bms_free(relation->rd_keyattr);
 	bms_free(relation->rd_pkattr);
 	bms_free(relation->rd_idattr);
@@ -4353,6 +4354,13 @@ RelationGetIndexList(Relation relation)
 		return list_copy(relation->rd_indexlist);
 
 	/*
+	 * If the index list was invalidated, we had better also invalidate the
+	 * index attribute list (which automatically invalidates the derived
+	 * bitmaps such as the primary key and replica identity attributes).
+	 */
+	relation->rd_indexattr = NULL;
+
+	/*
 	 * We build the list we intend to return (in the caller's context) while
 	 * doing the scan.  After successfully completing the scan, we copy that
 	 * list into the relcache entry.  This avoids cache-context memory leakage
@@ -4836,15 +4844,19 @@ Bitmapset *
 RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 {
 	Bitmapset  *indexattrs;		/* indexed columns */
+	Bitmapset  *exprindexattrs;	/* indexed columns in expression/predicate
+									 indexes */
 	Bitmapset  *uindexattrs;	/* columns in unique indexes */
 	Bitmapset  *pkindexattrs;	/* columns in the primary index */
 	Bitmapset  *idindexattrs;	/* columns in the replica identity */
+	Bitmapset  *indxnotreadyattrs;	/* columns in not ready indexes */
 	List	   *indexoidlist;
 	List	   *newindexoidlist;
 	Oid			relpkindex;
 	Oid			relreplindex;
 	ListCell   *l;
 	MemoryContext oldcxt;
+	bool		supportswarm = true;	/* true if the table can be WARM updated */
 
 	/* Quick exit if we already computed the result. */
 	if (relation->rd_indexattr != NULL)
@@ -4859,6 +4871,10 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				return bms_copy(relation->rd_pkattr);
 			case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 				return bms_copy(relation->rd_idattr);
+			case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+				return bms_copy(relation->rd_exprindexattr);
+			case INDEX_ATTR_BITMAP_NOTREADY:
+				return bms_copy(relation->rd_indxnotreadyattr);
 			default:
 				elog(ERROR, "unknown attrKind %u", attrKind);
 		}
@@ -4899,9 +4915,11 @@ restart:
 	 * won't be returned at all by RelationGetIndexList.
 	 */
 	indexattrs = NULL;
+	exprindexattrs = NULL;
 	uindexattrs = NULL;
 	pkindexattrs = NULL;
 	idindexattrs = NULL;
+	indxnotreadyattrs = NULL;
 	foreach(l, indexoidlist)
 	{
 		Oid			indexOid = lfirst_oid(l);
@@ -4938,6 +4956,10 @@ restart:
 				indexattrs = bms_add_member(indexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
 
+				if (!indexInfo->ii_ReadyForInserts)
+					indxnotreadyattrs = bms_add_member(indxnotreadyattrs,
+							   attrnum - FirstLowInvalidHeapAttributeNumber);
+
 				if (isKey)
 					uindexattrs = bms_add_member(uindexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
@@ -4953,10 +4975,29 @@ restart:
 		}
 
 		/* Collect all attributes used in expressions, too */
-		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &exprindexattrs);
 
 		/* Collect all attributes in the index predicate, too */
-		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &exprindexattrs);
+
+		/*
+		 * indexattrs should include attributes referenced in index expressions
+		 * and predicates too.
+		 */
+		indexattrs = bms_add_members(indexattrs, exprindexattrs);
+
+		if (!indexInfo->ii_ReadyForInserts)
+			indxnotreadyattrs = bms_add_members(indxnotreadyattrs,
+					exprindexattrs);
+
+		/*
+		 * Check whether the index AM provides an amrecheck method. If it
+		 * does not, the index cannot recheck WARM tuples, so WARM updates
+		 * must be disabled for the entire table.
+		 */
+		if (!indexDesc->rd_amroutine->amrecheck)
+			supportswarm = false;
 
 		index_close(indexDesc, AccessShareLock);
 	}
@@ -4989,15 +5030,22 @@ restart:
 		goto restart;
 	}
 
+	/* Remember if the table can do WARM updates */
+	relation->rd_supportswarm = supportswarm;
+
 	/* Don't leak the old values of these bitmaps, if any */
 	bms_free(relation->rd_indexattr);
 	relation->rd_indexattr = NULL;
+	bms_free(relation->rd_exprindexattr);
+	relation->rd_exprindexattr = NULL;
 	bms_free(relation->rd_keyattr);
 	relation->rd_keyattr = NULL;
 	bms_free(relation->rd_pkattr);
 	relation->rd_pkattr = NULL;
 	bms_free(relation->rd_idattr);
 	relation->rd_idattr = NULL;
+	bms_free(relation->rd_indxnotreadyattr);
+	relation->rd_indxnotreadyattr = NULL;
 
 	/*
 	 * Now save copies of the bitmaps in the relcache entry.  We intentionally
@@ -5010,7 +5058,9 @@ restart:
 	relation->rd_keyattr = bms_copy(uindexattrs);
 	relation->rd_pkattr = bms_copy(pkindexattrs);
 	relation->rd_idattr = bms_copy(idindexattrs);
-	relation->rd_indexattr = bms_copy(indexattrs);
+	relation->rd_exprindexattr = bms_copy(exprindexattrs);
+	relation->rd_indexattr = bms_copy(bms_union(indexattrs, exprindexattrs));
+	relation->rd_indxnotreadyattr = bms_copy(indxnotreadyattrs);
 	MemoryContextSwitchTo(oldcxt);
 
 	/* We return our original working copy for caller to play with */
@@ -5024,6 +5074,10 @@ restart:
 			return bms_copy(relation->rd_pkattr);
 		case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 			return idindexattrs;
+		case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+			return exprindexattrs;
+		case INDEX_ATTR_BITMAP_NOTREADY:
+			return indxnotreadyattrs;
 		default:
 			elog(ERROR, "unknown attrKind %u", attrKind);
 			return NULL;
@@ -5636,6 +5690,7 @@ load_relcache_init_file(bool shared)
 		rel->rd_keyattr = NULL;
 		rel->rd_pkattr = NULL;
 		rel->rd_idattr = NULL;
+		rel->rd_indxnotreadyattr = NULL;
 		rel->rd_pubactions = NULL;
 		rel->rd_statvalid = false;
 		rel->rd_statlist = NIL;
diff --git a/src/backend/utils/time/combocid.c b/src/backend/utils/time/combocid.c
index baff998..6a2e2f2 100644
--- a/src/backend/utils/time/combocid.c
+++ b/src/backend/utils/time/combocid.c
@@ -106,7 +106,7 @@ HeapTupleHeaderGetCmin(HeapTupleHeader tup)
 {
 	CommandId	cid = HeapTupleHeaderGetRawCommandId(tup);
 
-	Assert(!(tup->t_infomask & HEAP_MOVED));
+	Assert(!(HeapTupleHeaderIsMoved(tup)));
 	Assert(TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXmin(tup)));
 
 	if (tup->t_infomask & HEAP_COMBOCID)
@@ -120,7 +120,7 @@ HeapTupleHeaderGetCmax(HeapTupleHeader tup)
 {
 	CommandId	cid = HeapTupleHeaderGetRawCommandId(tup);
 
-	Assert(!(tup->t_infomask & HEAP_MOVED));
+	Assert(!(HeapTupleHeaderIsMoved(tup)));
 
 	/*
 	 * Because GetUpdateXid() performs memory allocations if xmax is a
diff --git a/src/backend/utils/time/tqual.c b/src/backend/utils/time/tqual.c
index 519f3b6..e54d0df 100644
--- a/src/backend/utils/time/tqual.c
+++ b/src/backend/utils/time/tqual.c
@@ -186,7 +186,7 @@ HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer)
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -205,7 +205,7 @@ HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer)
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -377,7 +377,7 @@ HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -396,7 +396,7 @@ HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -471,7 +471,7 @@ HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid,
 			return HeapTupleInvisible;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -490,7 +490,7 @@ HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -753,7 +753,7 @@ HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -772,7 +772,7 @@ HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -974,7 +974,7 @@ HeapTupleSatisfiesMVCC(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -993,7 +993,7 @@ HeapTupleSatisfiesMVCC(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -1180,7 +1180,7 @@ HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin,
 		if (HeapTupleHeaderXminInvalid(tuple))
 			return HEAPTUPLE_DEAD;
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_OFF)
+		else if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -1198,7 +1198,7 @@ HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin,
 						InvalidTransactionId);
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
diff --git a/src/include/access/amapi.h b/src/include/access/amapi.h
index f919cf8..8b7af1e 100644
--- a/src/include/access/amapi.h
+++ b/src/include/access/amapi.h
@@ -13,6 +13,7 @@
 #define AMAPI_H
 
 #include "access/genam.h"
+#include "access/itup.h"
 
 /*
  * We don't wish to include planner header files here, since most of an index
@@ -74,6 +75,14 @@ typedef bool (*aminsert_function) (Relation indexRelation,
 											   Relation heapRelation,
 											   IndexUniqueCheck checkUnique,
 											   struct IndexInfo *indexInfo);
+/* insert this WARM tuple */
+typedef bool (*amwarminsert_function) (Relation indexRelation,
+											   Datum *values,
+											   bool *isnull,
+											   ItemPointer heap_tid,
+											   Relation heapRelation,
+											   IndexUniqueCheck checkUnique,
+											   struct IndexInfo *indexInfo);
 
 /* bulk delete */
 typedef IndexBulkDeleteResult *(*ambulkdelete_function) (IndexVacuumInfo *info,
@@ -152,6 +161,11 @@ typedef void (*aminitparallelscan_function) (void *target);
 /* (re)start parallel index scan */
 typedef void (*amparallelrescan_function) (IndexScanDesc scan);
 
+/* recheck index tuple and heap tuple match */
+typedef bool (*amrecheck_function) (Relation indexRel,
+		struct IndexInfo *indexInfo, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+
 /*
  * API struct for an index AM.  Note this must be stored in a single palloc'd
  * chunk of memory.
@@ -198,6 +212,7 @@ typedef struct IndexAmRoutine
 	ambuild_function ambuild;
 	ambuildempty_function ambuildempty;
 	aminsert_function aminsert;
+	amwarminsert_function amwarminsert;
 	ambulkdelete_function ambulkdelete;
 	amvacuumcleanup_function amvacuumcleanup;
 	amcanreturn_function amcanreturn;	/* can be NULL */
@@ -217,6 +232,9 @@ typedef struct IndexAmRoutine
 	amestimateparallelscan_function amestimateparallelscan;		/* can be NULL */
 	aminitparallelscan_function aminitparallelscan;		/* can be NULL */
 	amparallelrescan_function amparallelrescan; /* can be NULL */
+
+	/* interface function to support WARM */
+	amrecheck_function amrecheck;		/* can be NULL */
 } IndexAmRoutine;
 
 
diff --git a/src/include/access/genam.h b/src/include/access/genam.h
index f467b18..965be45 100644
--- a/src/include/access/genam.h
+++ b/src/include/access/genam.h
@@ -75,12 +75,29 @@ typedef struct IndexBulkDeleteResult
 	bool		estimated_count;	/* num_index_tuples is an estimate */
 	double		num_index_tuples;		/* tuples remaining */
 	double		tuples_removed; /* # removed during vacuum operation */
+	double		num_warm_pointers;	/* # WARM pointers found */
+	double		num_clear_pointers;	/* # CLEAR pointers found */
+	double		pointers_cleared;	/* # WARM pointers cleared */
+	double		warm_pointers_removed;	/* # WARM pointers removed */
+	double		clear_pointers_removed;	/* # CLEAR pointers removed */
 	BlockNumber pages_deleted;	/* # unused pages in index */
 	BlockNumber pages_free;		/* # pages available for reuse */
 } IndexBulkDeleteResult;
 
+/*
+ * IndexBulkDeleteCallback should return one of the following
+ */
+typedef enum IndexBulkDeleteCallbackResult
+{
+	IBDCR_KEEP,			/* index tuple should be preserved */
+	IBDCR_DELETE,		/* index tuple should be deleted */
+	IBDCR_CLEAR_WARM	/* index tuple should be cleared of WARM bit */
+} IndexBulkDeleteCallbackResult;
+
 /* Typedef for callback function to determine if a tuple is bulk-deletable */
-typedef bool (*IndexBulkDeleteCallback) (ItemPointer itemptr, void *state);
+typedef IndexBulkDeleteCallbackResult (*IndexBulkDeleteCallback) (
+										 ItemPointer itemptr,
+										 bool is_warm, void *state);
 
 /* struct definitions appear in relscan.h */
 typedef struct IndexScanDescData *IndexScanDesc;
@@ -135,7 +152,8 @@ extern bool index_insert(Relation indexRelation,
 			 ItemPointer heap_t_ctid,
 			 Relation heapRelation,
 			 IndexUniqueCheck checkUnique,
-			 struct IndexInfo *indexInfo);
+			 struct IndexInfo *indexInfo,
+			 bool warm_update);
 
 extern IndexScanDesc index_beginscan(Relation heapRelation,
 				Relation indexRelation,
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 5540e12..2217af9 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -72,6 +72,20 @@ typedef struct HeapUpdateFailureData
 	CommandId	cmax;
 } HeapUpdateFailureData;
 
+typedef int HeapCheckWarmChainStatus;
+
+#define HCWC_CLEAR_TUPLE		0x0001
+#define	HCWC_WARM_TUPLE			0x0002
+#define HCWC_WARM_UPDATED_TUPLE	0x0004
+
+#define HCWC_IS_MIXED(status) \
+	(((status) & (HCWC_CLEAR_TUPLE | HCWC_WARM_TUPLE)) == \
+	 (HCWC_CLEAR_TUPLE | HCWC_WARM_TUPLE))
+#define HCWC_IS_ALL_WARM(status) \
+	(((status) & HCWC_CLEAR_TUPLE) == 0)
+#define HCWC_IS_ALL_CLEAR(status) \
+	(((status) & HCWC_WARM_TUPLE) == 0)
+#define HCWC_IS_WARM_UPDATED(status) \
+	(((status) & HCWC_WARM_UPDATED_TUPLE) != 0)
 
 /* ----------------
  *		function prototypes for heap access method
@@ -137,9 +151,10 @@ extern bool heap_fetch(Relation relation, Snapshot snapshot,
 		   Relation stats_relation);
 extern bool heap_hot_search_buffer(ItemPointer tid, Relation relation,
 					   Buffer buffer, Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call);
+					   bool *all_dead, bool first_call, bool *recheck);
 extern bool heap_hot_search(ItemPointer tid, Relation relation,
-				Snapshot snapshot, bool *all_dead);
+				Snapshot snapshot, bool *all_dead,
+				bool *recheck, Buffer *buffer, HeapTuple heapTuple);
 
 extern void heap_get_latest_tid(Relation relation, Snapshot snapshot,
 					ItemPointer tid);
@@ -161,7 +176,8 @@ extern void heap_abort_speculative(Relation relation, HeapTuple tuple);
 extern HTSU_Result heap_update(Relation relation, ItemPointer otid,
 			HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode);
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update);
 extern HTSU_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
 				CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
 				bool follow_update,
@@ -176,10 +192,16 @@ extern bool heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple);
 extern Oid	simple_heap_insert(Relation relation, HeapTuple tup);
 extern void simple_heap_delete(Relation relation, ItemPointer tid);
 extern void simple_heap_update(Relation relation, ItemPointer otid,
-				   HeapTuple tup);
+				   HeapTuple tup,
+				   Bitmapset **modified_attrs,
+				   bool *warm_update);
 
 extern void heap_sync(Relation relation);
 extern void heap_update_snapshot(HeapScanDesc scan, Snapshot snapshot);
+extern HeapCheckWarmChainStatus heap_check_warm_chain(Page dp,
+				   ItemPointer tid, bool stop_at_warm);
+extern int heap_clear_warm_chain(Page dp, ItemPointer tid,
+				   OffsetNumber *cleared_offnums);
 
 /* in heap/pruneheap.c */
 extern void heap_page_prune_opt(Relation relation, Buffer buffer);
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index e6019d5..66fd0ea 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -32,7 +32,7 @@
 #define XLOG_HEAP_INSERT		0x00
 #define XLOG_HEAP_DELETE		0x10
 #define XLOG_HEAP_UPDATE		0x20
-/* 0x030 is free, was XLOG_HEAP_MOVE */
+#define XLOG_HEAP_MULTI_INSERT	0x30
 #define XLOG_HEAP_HOT_UPDATE	0x40
 #define XLOG_HEAP_CONFIRM		0x50
 #define XLOG_HEAP_LOCK			0x60
@@ -47,18 +47,23 @@
 /*
  * We ran out of opcodes, so heapam.c now has a second RmgrId.  These opcodes
  * are associated with RM_HEAP2_ID, but are not logically different from
- * the ones above associated with RM_HEAP_ID.  XLOG_HEAP_OPMASK applies to
- * these, too.
+ * the ones above associated with RM_HEAP_ID.
+ *
+ * In PG 10, we moved XLOG_HEAP2_MULTI_INSERT to RM_HEAP_ID. That frees up
+ * the 0x80 bit in RM_HEAP2_ID, thus potentially creating another 8 possible
+ * opcodes in RM_HEAP2_ID.
  */
 #define XLOG_HEAP2_REWRITE		0x00
 #define XLOG_HEAP2_CLEAN		0x10
 #define XLOG_HEAP2_FREEZE_PAGE	0x20
 #define XLOG_HEAP2_CLEANUP_INFO 0x30
 #define XLOG_HEAP2_VISIBLE		0x40
-#define XLOG_HEAP2_MULTI_INSERT 0x50
+#define XLOG_HEAP2_WARMCLEAR	0x50
 #define XLOG_HEAP2_LOCK_UPDATED 0x60
 #define XLOG_HEAP2_NEW_CID		0x70
 
+#define XLOG_HEAP2_OPMASK		0x70
+
 /*
  * xl_heap_insert/xl_heap_multi_insert flag values, 8 bits are available.
  */
@@ -80,6 +85,7 @@
 #define XLH_UPDATE_CONTAINS_NEW_TUPLE			(1<<4)
 #define XLH_UPDATE_PREFIX_FROM_OLD				(1<<5)
 #define XLH_UPDATE_SUFFIX_FROM_OLD				(1<<6)
+#define XLH_UPDATE_WARM_UPDATE					(1<<7)
 
 /* convenience macro for checking whether any form of old tuple was logged */
 #define XLH_UPDATE_CONTAINS_OLD						\
@@ -225,6 +231,14 @@ typedef struct xl_heap_clean
 
 #define SizeOfHeapClean (offsetof(xl_heap_clean, ndead) + sizeof(uint16))
 
+typedef struct xl_heap_warmclear
+{
+	uint16		ncleared;
+	/* OFFSET NUMBERS are in the block reference 0 */
+} xl_heap_warmclear;
+
+#define SizeOfHeapWarmClear (offsetof(xl_heap_warmclear, ncleared) + sizeof(uint16))
+
 /*
  * Cleanup_info is required in some cases during a lazy VACUUM.
  * Used for reporting the results of HeapTupleHeaderAdvanceLatestRemovedXid()
@@ -388,6 +402,8 @@ extern XLogRecPtr log_heap_clean(Relation reln, Buffer buffer,
 			   OffsetNumber *nowdead, int ndead,
 			   OffsetNumber *nowunused, int nunused,
 			   TransactionId latestRemovedXid);
+extern XLogRecPtr log_heap_warmclear(Relation reln, Buffer buffer,
+			   OffsetNumber *cleared, int ncleared);
 extern XLogRecPtr log_heap_freeze(Relation reln, Buffer buffer,
 				TransactionId cutoff_xid, xl_heap_freeze_tuple *tuples,
 				int ntuples);
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 4d614b7..bcefba6 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -201,6 +201,21 @@ struct HeapTupleHeaderData
 										 * upgrade support */
 #define HEAP_MOVED (HEAP_MOVED_OFF | HEAP_MOVED_IN)
 
+/*
+ * A WARM chain usually consists of two parts, separated by a WARM update.
+ * Each part is a HOT chain in itself, i.e. all indexed columns have the same
+ * value within it. We need a mechanism to identify which part a tuple
+ * belongs to. HeapTupleHeaderIsWarmUpdated() alone is not enough because
+ * during a WARM update both the old and the new tuples are marked as WARM
+ * tuples.
+ *
+ * We need another infomask bit for this, so we reuse the infomask bit that
+ * was earlier used by old-style VACUUM FULL. This is safe because the
+ * HEAP_WARM_TUPLE flag is always set along with HEAP_WARM_UPDATED. So if
+ * both HEAP_WARM_TUPLE and HEAP_WARM_UPDATED are set, we know the tuple
+ * belongs to the second part of the WARM chain.
+ */
+#define HEAP_WARM_TUPLE			0x4000
 #define HEAP_XACT_MASK			0xFFF0	/* visibility-related bits */
 
 /*
@@ -260,7 +275,11 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x0800 are available */
+#define HEAP_WARM_UPDATED		0x0800	/*
+										 * This or a prior version of this
+										 * tuple in the current HOT chain was
+										 * once WARM updated
+										 */
 #define HEAP_LATEST_TUPLE		0x1000	/*
 										 * This is the last tuple in chain and
 										 * ip_posid points to the root line
@@ -271,7 +290,7 @@ struct HeapTupleHeaderData
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF800	/* visibility-related bits */
 
 
 /*
@@ -396,7 +415,7 @@ struct HeapTupleHeaderData
 /* SetCmin is reasonably simple since we never need a combo CID */
 #define HeapTupleHeaderSetCmin(tup, cid) \
 do { \
-	Assert(!((tup)->t_infomask & HEAP_MOVED)); \
+	Assert(!HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_cid = (cid); \
 	(tup)->t_infomask &= ~HEAP_COMBOCID; \
 } while (0)
@@ -404,7 +423,7 @@ do { \
 /* SetCmax must be used after HeapTupleHeaderAdjustCmax; see combocid.c */
 #define HeapTupleHeaderSetCmax(tup, cid, iscombo) \
 do { \
-	Assert(!((tup)->t_infomask & HEAP_MOVED)); \
+	Assert(!HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_cid = (cid); \
 	if (iscombo) \
 		(tup)->t_infomask |= HEAP_COMBOCID; \
@@ -414,7 +433,7 @@ do { \
 
 #define HeapTupleHeaderGetXvac(tup) \
 ( \
-	((tup)->t_infomask & HEAP_MOVED) ? \
+	HeapTupleHeaderIsMoved(tup) ? \
 		(tup)->t_choice.t_heap.t_field3.t_xvac \
 	: \
 		InvalidTransactionId \
@@ -422,7 +441,7 @@ do { \
 
 #define HeapTupleHeaderSetXvac(tup, xid) \
 do { \
-	Assert((tup)->t_infomask & HEAP_MOVED); \
+	Assert(HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_xvac = (xid); \
 } while (0)
 
@@ -510,6 +529,21 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+#define HeapTupleHeaderSetWarmUpdated(tup) \
+do { \
+	(tup)->t_infomask2 |= HEAP_WARM_UPDATED; \
+} while (0)
+
+#define HeapTupleHeaderClearWarmUpdated(tup) \
+do { \
+	(tup)->t_infomask2 &= ~HEAP_WARM_UPDATED; \
+} while (0)
+
+#define HeapTupleHeaderIsWarmUpdated(tup) \
+( \
+  ((tup)->t_infomask2 & HEAP_WARM_UPDATED) != 0 \
+)
+
 /*
  * Mark this as the last tuple in the HOT chain. Before PG v10 we used to store
  * the TID of the tuple itself in t_ctid field to mark the end of the chain.
@@ -635,6 +669,58 @@ do { \
 )
 
 /*
+ * Macros to check if a tuple was moved off/in by old-style VACUUM FULL from
+ * the pre-9.0 era. Such tuples must not have the HEAP_WARM_TUPLE flag set.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderIsMovedOff(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED_OFF) \
+)
+
+#define HeapTupleHeaderIsMovedIn(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED_IN) \
+)
+
+#define HeapTupleHeaderIsMoved(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED) \
+)
+
+/*
+ * Check if tuple belongs to the second part of the WARM chain.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderIsWarm(tuple) \
+( \
+	HeapTupleHeaderIsWarmUpdated(tuple) && \
+	(((tuple)->t_infomask & HEAP_WARM_TUPLE) != 0) \
+)
+
+/*
+ * Mark tuple as a member of the second part of the chain. Must only be done on
+ * a tuple which is already marked a WARM-tuple.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderSetWarm(tuple) \
+( \
+	AssertMacro(HeapTupleHeaderIsWarmUpdated(tuple)), \
+	(tuple)->t_infomask |= HEAP_WARM_TUPLE \
+)
+
+#define HeapTupleHeaderClearWarm(tuple) \
+( \
+	(tuple)->t_infomask &= ~HEAP_WARM_TUPLE \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
@@ -785,6 +871,24 @@ struct MinimalTupleData
 #define HeapTupleClearHeapOnly(tuple) \
 		HeapTupleHeaderClearHeapOnly((tuple)->t_data)
 
+#define HeapTupleIsWarmUpdated(tuple) \
+		HeapTupleHeaderIsWarmUpdated((tuple)->t_data)
+
+#define HeapTupleSetWarmUpdated(tuple) \
+		HeapTupleHeaderSetWarmUpdated((tuple)->t_data)
+
+#define HeapTupleClearWarmUpdated(tuple) \
+		HeapTupleHeaderClearWarmUpdated((tuple)->t_data)
+
+#define HeapTupleIsWarm(tuple) \
+		HeapTupleHeaderIsWarm((tuple)->t_data)
+
+#define HeapTupleSetWarm(tuple) \
+		HeapTupleHeaderSetWarm((tuple)->t_data)
+
+#define HeapTupleClearWarm(tuple) \
+		HeapTupleHeaderClearWarm((tuple)->t_data)
+
 #define HeapTupleGetOid(tuple) \
 		HeapTupleHeaderGetOid((tuple)->t_data)
 
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index f9304db..163180d 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -427,6 +427,12 @@ typedef BTScanOpaqueData *BTScanOpaque;
 #define SK_BT_NULLS_FIRST	(INDOPTION_NULLS_FIRST << SK_BT_INDOPTION_SHIFT)
 
 /*
+ * Flags overloaded on the t_tid.ip_posid field. They are managed by
+ * ItemPointerSetFlags and corresponding routines.
+ */
+#define BTREE_INDEX_WARM_POINTER	0x01
+
+/*
  * external entry points for btree, in nbtree.c
  */
 extern IndexBuildResult *btbuild(Relation heap, Relation index,
@@ -436,6 +442,10 @@ extern bool btinsert(Relation rel, Datum *values, bool *isnull,
 		 ItemPointer ht_ctid, Relation heapRel,
 		 IndexUniqueCheck checkUnique,
 		 struct IndexInfo *indexInfo);
+extern bool btwarminsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 struct IndexInfo *indexInfo);
 extern IndexScanDesc btbeginscan(Relation rel, int nkeys, int norderbys);
 extern Size btestimateparallelscan(void);
 extern void btinitparallelscan(void *target);
@@ -487,10 +497,12 @@ extern void _bt_pageinit(Page page, Size size);
 extern bool _bt_page_recyclable(Page page);
 extern void _bt_delitems_delete(Relation rel, Buffer buf,
 					OffsetNumber *itemnos, int nitems, Relation heapRel);
-extern void _bt_delitems_vacuum(Relation rel, Buffer buf,
-					OffsetNumber *itemnos, int nitems,
-					BlockNumber lastBlockVacuumed);
+extern void _bt_handleitems_vacuum(Relation rel, Buffer buf,
+					OffsetNumber *delitemnos, int ndelitems,
+					OffsetNumber *clearitemnos, int nclearitems);
 extern int	_bt_pagedel(Relation rel, Buffer buf);
+extern void	_bt_clear_items(Page page, OffsetNumber *clearitemnos,
+					uint16 nclearitems);
 
 /*
  * prototypes for functions in nbtsearch.c
@@ -537,6 +549,9 @@ extern bytea *btoptions(Datum reloptions, bool validate);
 extern bool btproperty(Oid index_oid, int attno,
 		   IndexAMProperty prop, const char *propname,
 		   bool *res, bool *isnull);
+extern bool btrecheck(Relation indexRel, struct IndexInfo *indexInfo,
+		IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * prototypes for functions in nbtvalidate.c
diff --git a/src/include/access/nbtxlog.h b/src/include/access/nbtxlog.h
index d6a3085..6a86628 100644
--- a/src/include/access/nbtxlog.h
+++ b/src/include/access/nbtxlog.h
@@ -142,7 +142,8 @@ typedef struct xl_btree_reuse_page
 /*
  * This is what we need to know about vacuum of individual leaf index tuples.
  * The WAL record can represent deletion of any number of index tuples on a
- * single index page when executed by VACUUM.
+ * single index page when executed by VACUUM. It also covers tuples whose
+ * WARM bits are cleared by VACUUM.
  *
  * For MVCC scans, lastBlockVacuumed will be set to InvalidBlockNumber.
  * For a non-MVCC index scans there is an additional correctness requirement
@@ -165,11 +166,12 @@ typedef struct xl_btree_reuse_page
 typedef struct xl_btree_vacuum
 {
 	BlockNumber lastBlockVacuumed;
-
-	/* TARGET OFFSET NUMBERS FOLLOW */
+	uint16		ndelitems;
+	uint16		nclearitems;
+	/* ndelitems + nclearitems TARGET OFFSET NUMBERS FOLLOW */
 } xl_btree_vacuum;
 
-#define SizeOfBtreeVacuum	(offsetof(xl_btree_vacuum, lastBlockVacuumed) + sizeof(BlockNumber))
+#define SizeOfBtreeVacuum	(offsetof(xl_btree_vacuum, nclearitems) + sizeof(uint16))
 
 /*
  * This is what we need to know about marking an empty branch for deletion.
diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h
index 3fc726d..fa178d3 100644
--- a/src/include/access/relscan.h
+++ b/src/include/access/relscan.h
@@ -104,6 +104,9 @@ typedef struct IndexScanDescData
 	/* index access method's private state */
 	void	   *opaque;			/* access-method-specific info */
 
+	/* IndexInfo structure for this index */
+	struct IndexInfo  *indexInfo;
+
 	/*
 	 * In an index-only scan, a successful amgettuple call must fill either
 	 * xs_itup (and xs_itupdesc) or xs_hitup (and xs_hitupdesc) to provide the
@@ -119,7 +122,7 @@ typedef struct IndexScanDescData
 	HeapTupleData xs_ctup;		/* current heap tuple, if any */
 	Buffer		xs_cbuf;		/* current heap buffer in scan, if any */
 	/* NB: if xs_cbuf is not InvalidBuffer, we hold a pin on that buffer */
-	bool		xs_recheck;		/* T means scan keys must be rechecked */
+	bool		xs_recheck;		/* T means scan keys must be rechecked for each tuple */
 
 	/*
 	 * When fetching with an ordering operator, the values of the ORDER BY
diff --git a/src/include/catalog/index.h b/src/include/catalog/index.h
index 20bec90..f92ec29 100644
--- a/src/include/catalog/index.h
+++ b/src/include/catalog/index.h
@@ -89,6 +89,13 @@ extern void FormIndexDatum(IndexInfo *indexInfo,
 			   Datum *values,
 			   bool *isnull);
 
+extern void FormIndexPlainDatum(IndexInfo *indexInfo,
+			   Relation heapRel,
+			   HeapTuple heapTup,
+			   Datum *values,
+			   bool *isnull,
+			   bool *isavail);
+
 extern void index_build(Relation heapRelation,
 			Relation indexRelation,
 			IndexInfo *indexInfo,
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index ee67459..509adda 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -2783,6 +2783,8 @@ DATA(insert OID = 1933 (  pg_stat_get_tuples_deleted	PGNSP PGUID 12 1 0 0 0 f f
 DESCR("statistics: number of tuples deleted");
 DATA(insert OID = 1972 (  pg_stat_get_tuples_hot_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated");
+DATA(insert OID = 3373 (  pg_stat_get_tuples_warm_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated");
 DATA(insert OID = 2878 (  pg_stat_get_live_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_live_tuples _null_ _null_ _null_ ));
 DESCR("statistics: number of live tuples");
 DATA(insert OID = 2879 (  pg_stat_get_dead_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_dead_tuples _null_ _null_ _null_ ));
@@ -2935,6 +2937,8 @@ DATA(insert OID = 3042 (  pg_stat_get_xact_tuples_deleted		PGNSP PGUID 12 1 0 0
 DESCR("statistics: number of tuples deleted in current transaction");
 DATA(insert OID = 3043 (  pg_stat_get_xact_tuples_hot_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated in current transaction");
+DATA(insert OID = 3359 (  pg_stat_get_xact_tuples_warm_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated in current transaction");
 DATA(insert OID = 3044 (  pg_stat_get_xact_blocks_fetched		PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_fetched _null_ _null_ _null_ ));
 DESCR("statistics: number of blocks fetched in current transaction");
 DATA(insert OID = 3045 (  pg_stat_get_xact_blocks_hit			PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_hit _null_ _null_ _null_ ));
diff --git a/src/include/commands/progress.h b/src/include/commands/progress.h
index 9472ecc..b355b61 100644
--- a/src/include/commands/progress.h
+++ b/src/include/commands/progress.h
@@ -25,6 +25,7 @@
 #define PROGRESS_VACUUM_NUM_INDEX_VACUUMS		4
 #define PROGRESS_VACUUM_MAX_DEAD_TUPLES			5
 #define PROGRESS_VACUUM_NUM_DEAD_TUPLES			6
+#define PROGRESS_VACUUM_HEAP_BLKS_WARMCLEARED	7
 
 /* Phases of vacuum (as advertised via PROGRESS_VACUUM_PHASE) */
 #define PROGRESS_VACUUM_PHASE_SCAN_HEAP			1
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index d3849b9..7e1ec56 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -506,6 +506,7 @@ extern int	ExecCleanTargetListLength(List *targetlist);
 extern void ExecOpenIndices(ResultRelInfo *resultRelInfo, bool speculative);
 extern void ExecCloseIndices(ResultRelInfo *resultRelInfo);
 extern List *ExecInsertIndexTuples(TupleTableSlot *slot, ItemPointer tupleid,
+					  ItemPointer root_tid, Bitmapset *modified_attrs,
 					  EState *estate, bool noDupErr, bool *specConflict,
 					  List *arbiterIndexes);
 extern bool ExecCheckIndexConstraints(TupleTableSlot *slot, EState *estate,
diff --git a/src/include/executor/nodeIndexscan.h b/src/include/executor/nodeIndexscan.h
index ea3f3a5..ebeec74 100644
--- a/src/include/executor/nodeIndexscan.h
+++ b/src/include/executor/nodeIndexscan.h
@@ -41,5 +41,4 @@ extern void ExecIndexEvalRuntimeKeys(ExprContext *econtext,
 extern bool ExecIndexEvalArrayKeys(ExprContext *econtext,
 					   IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 extern bool ExecIndexAdvanceArrayKeys(IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
-
 #endif   /* NODEINDEXSCAN_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index ff42895..042003a 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -132,6 +132,7 @@ typedef struct IndexInfo
 	NodeTag		type;
 	int			ii_NumIndexAttrs;
 	AttrNumber	ii_KeyAttrNumbers[INDEX_MAX_KEYS];
+	Bitmapset  *ii_indxattrs;	/* bitmap of all columns used in this index */
 	List	   *ii_Expressions; /* list of Expr */
 	List	   *ii_ExpressionsState;	/* list of ExprState */
 	List	   *ii_Predicate;	/* list of Expr */
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 2015625..4b7d671 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -105,6 +105,7 @@ typedef struct PgStat_TableCounts
 	PgStat_Counter t_tuples_updated;
 	PgStat_Counter t_tuples_deleted;
 	PgStat_Counter t_tuples_hot_updated;
+	PgStat_Counter t_tuples_warm_updated;
 	bool		t_truncated;
 
 	PgStat_Counter t_delta_live_tuples;
@@ -625,6 +626,7 @@ typedef struct PgStat_StatTabEntry
 	PgStat_Counter tuples_updated;
 	PgStat_Counter tuples_deleted;
 	PgStat_Counter tuples_hot_updated;
+	PgStat_Counter tuples_warm_updated;
 
 	PgStat_Counter n_live_tuples;
 	PgStat_Counter n_dead_tuples;
@@ -1259,7 +1261,7 @@ pgstat_report_wait_end(void)
 	(pgStatBlockWriteTime += (n))
 
 extern void pgstat_count_heap_insert(Relation rel, PgStat_Counter n);
-extern void pgstat_count_heap_update(Relation rel, bool hot);
+extern void pgstat_count_heap_update(Relation rel, bool hot, bool warm);
 extern void pgstat_count_heap_delete(Relation rel);
 extern void pgstat_count_truncate(Relation rel);
 extern void pgstat_update_heap_dead_tuples(Relation rel, int delta);
diff --git a/src/include/storage/bufpage.h b/src/include/storage/bufpage.h
index e956dc3..1852195 100644
--- a/src/include/storage/bufpage.h
+++ b/src/include/storage/bufpage.h
@@ -433,6 +433,8 @@ extern void PageIndexMultiDelete(Page page, OffsetNumber *itemnos, int nitems);
 extern void PageIndexTupleDeleteNoCompact(Page page, OffsetNumber offset);
 extern bool PageIndexTupleOverwrite(Page page, OffsetNumber offnum,
 						Item newtup, Size newsize);
+extern void PageIndexClearWarmTuples(Page page, OffsetNumber *clearitemnos,
+						uint16 nclearitems);
 extern char *PageSetChecksumCopy(Page page, BlockNumber blkno);
 extern void PageSetChecksumInplace(Page page, BlockNumber blkno);
 
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index ab875bb..cd1976a 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -142,9 +142,14 @@ typedef struct RelationData
 
 	/* data managed by RelationGetIndexAttrBitmap: */
 	Bitmapset  *rd_indexattr;	/* identifies columns used in indexes */
+	Bitmapset  *rd_exprindexattr; /* identifies columns used in expression or
+									 predicate indexes */
+	Bitmapset  *rd_indxnotreadyattr;	/* columns used by indexes not yet
+										   ready */
 	Bitmapset  *rd_keyattr;		/* cols that can be ref'd by foreign keys */
 	Bitmapset  *rd_pkattr;		/* cols included in primary key */
 	Bitmapset  *rd_idattr;		/* included in replica identity index */
+	bool		rd_supportswarm;/* True if the table can be WARM updated */
 
 	PublicationActions  *rd_pubactions;	/* publication actions */
 
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index 81af3ae..d5b3072 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -51,7 +51,9 @@ typedef enum IndexAttrBitmapKind
 	INDEX_ATTR_BITMAP_ALL,
 	INDEX_ATTR_BITMAP_KEY,
 	INDEX_ATTR_BITMAP_PRIMARY_KEY,
-	INDEX_ATTR_BITMAP_IDENTITY_KEY
+	INDEX_ATTR_BITMAP_IDENTITY_KEY,
+	INDEX_ATTR_BITMAP_EXPR_PREDICATE,
+	INDEX_ATTR_BITMAP_NOTREADY
 } IndexAttrBitmapKind;
 
 extern Bitmapset *RelationGetIndexAttrBitmap(Relation relation,
diff --git a/src/test/regress/expected/alter_generic.out b/src/test/regress/expected/alter_generic.out
index ce581bb..85e4c70 100644
--- a/src/test/regress/expected/alter_generic.out
+++ b/src/test/regress/expected/alter_generic.out
@@ -161,15 +161,15 @@ ALTER SERVER alt_fserv1 RENAME TO alt_fserv3;   -- OK
 SELECT fdwname FROM pg_foreign_data_wrapper WHERE fdwname like 'alt_fdw%';
  fdwname  
 ----------
- alt_fdw2
  alt_fdw3
+ alt_fdw2
 (2 rows)
 
 SELECT srvname FROM pg_foreign_server WHERE srvname like 'alt_fserv%';
   srvname   
 ------------
- alt_fserv2
  alt_fserv3
+ alt_fserv2
 (2 rows)
 
 --
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index e8f8726..a37d443 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1755,6 +1755,7 @@ pg_stat_all_tables| SELECT c.oid AS relid,
     pg_stat_get_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_tuples_deleted(c.oid) AS n_tup_del,
     pg_stat_get_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_tuples_warm_updated(c.oid) AS n_tup_warm_upd,
     pg_stat_get_live_tuples(c.oid) AS n_live_tup,
     pg_stat_get_dead_tuples(c.oid) AS n_dead_tup,
     pg_stat_get_mod_since_analyze(c.oid) AS n_mod_since_analyze,
@@ -1902,6 +1903,7 @@ pg_stat_sys_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1945,6 +1947,7 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1982,7 +1985,8 @@ pg_stat_xact_all_tables| SELECT c.oid AS relid,
     pg_stat_get_xact_tuples_inserted(c.oid) AS n_tup_ins,
     pg_stat_get_xact_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_xact_tuples_deleted(c.oid) AS n_tup_del,
-    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd
+    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_xact_tuples_warm_updated(c.oid) AS n_tup_warm_upd
    FROM ((pg_class c
      LEFT JOIN pg_index i ON ((c.oid = i.indrelid)))
      LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
@@ -1998,7 +2002,8 @@ pg_stat_xact_sys_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname = ANY (ARRAY['pg_catalog'::name, 'information_schema'::name])) OR (pg_stat_xact_all_tables.schemaname ~ '^pg_toast'::text));
 pg_stat_xact_user_functions| SELECT p.oid AS funcid,
@@ -2020,7 +2025,8 @@ pg_stat_xact_user_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname <> ALL (ARRAY['pg_catalog'::name, 'information_schema'::name])) AND (pg_stat_xact_all_tables.schemaname !~ '^pg_toast'::text));
 pg_statio_all_indexes| SELECT c.oid AS relid,
diff --git a/src/test/regress/expected/warm.out b/src/test/regress/expected/warm.out
new file mode 100644
index 0000000..8aa1505
--- /dev/null
+++ b/src/test/regress/expected/warm.out
@@ -0,0 +1,747 @@
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+-- This should be a HOT update as non-index key is updated, but the
+-- page won't have any free space, so probably a non-HOT update
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab1
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Check if index only scan works correctly
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab1
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx1 on updtst_tab1
+   Index Cond: (b = 140001)
+(2 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab1;
+------------------
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab2
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE a = 1;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab2
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE b = 701;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+VACUUM updtst_tab2;
+EXPLAIN (costs off) SELECT b FROM updtst_tab2 WHERE b = 701;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx2 on updtst_tab2
+   Index Cond: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab2 WHERE b = 701;
+  b  
+-----
+ 701
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab2;
+------------------
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 1;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 701;
+       QUERY PLAN        
+-------------------------
+ Seq Scan on updtst_tab3
+   Filter: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 701;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+  b   
+------
+ 1421
+(1 row)
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+SET enable_seqscan = false;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    98
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 2;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+-- Try fetching both old and new values using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 702;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx3 on updtst_tab3
+   Index Cond: (b = 702)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 702;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+  b   
+------
+ 1422
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab3;
+------------------
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+explain select * from test_warm where lower(a) = 'test';
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on test_warm  (cost=4.18..12.65 rows=4 width=64)
+   Recheck Cond: (lower(a) = 'test'::text)
+   ->  Bitmap Index Scan on test_warmindx  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (lower(a) = 'test'::text)
+(4 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+select *, ctid from test_warm where a = 'test';
+ a | b | ctid 
+---+---+------
+(0 rows)
+
+select *, ctid from test_warm where a = 'TEST';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+                                   QUERY PLAN                                    
+---------------------------------------------------------------------------------
+ Index Scan using test_warmindx on test_warm  (cost=0.15..20.22 rows=4 width=64)
+   Index Cond: (lower(a) = 'test'::text)
+(2 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+DROP TABLE test_warm;
+--- Test with toast data types
+CREATE TABLE test_toast_warm (a int unique, b text, c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+-- insert a large enough value to cause index datum compression
+INSERT INTO test_toast_warm VALUES (1, repeat('a', 600), 100);
+INSERT INTO test_toast_warm VALUES (2, repeat('b', 2), 100);
+INSERT INTO test_toast_warm VALUES (3, repeat('c', 4), 100);
+INSERT INTO test_toast_warm VALUES (4, repeat('d', 63), 100);
+INSERT INTO test_toast_warm VALUES (5, repeat('e', 126), 100);
+INSERT INTO test_toast_warm VALUES (6, repeat('f', 127), 100);
+INSERT INTO test_toast_warm VALUES (7, repeat('g', 128), 100);
+INSERT INTO test_toast_warm VALUES (8, repeat('h', 3200), 100);
+UPDATE test_toast_warm SET b = repeat('q', 600) WHERE a = 1;
+UPDATE test_toast_warm SET b = repeat('r', 2) WHERE a = 2;
+UPDATE test_toast_warm SET b = repeat('s', 4) WHERE a = 3;
+UPDATE test_toast_warm SET b = repeat('t', 63) WHERE a = 4;
+UPDATE test_toast_warm SET b = repeat('u', 126) WHERE a = 5;
+UPDATE test_toast_warm SET b = repeat('v', 127) WHERE a = 6;
+UPDATE test_toast_warm SET b = repeat('w', 128) WHERE a = 7;
+UPDATE test_toast_warm SET b = repeat('x', 3200) WHERE a = 8;
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+                        QUERY PLAN                         
+-----------------------------------------------------------
+ Index Scan using test_toast_warm_a_key on test_toast_warm
+   Index Cond: (a = 1)
+(2 rows)
+
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+                                                                                                                                                                                                                                                                                                                      QUERY PLAN                                                                                                                                                                                                                                                                                                                      
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Index Scan using test_toast_warm_index on test_toast_warm
+   Index Cond: (b = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'::text)
+(2 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+                                                                                                                                                                                                                                                                                                                      QUERY PLAN                                                                                                                                                                                                                                                                                                                      
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Index Only Scan using test_toast_warm_index on test_toast_warm
+   Index Cond: (b = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'::text)
+(2 rows)
+
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+ a |                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+---+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 1 | qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+ a | b 
+---+---
+(0 rows)
+
+SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+ b 
+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+ a |                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+---+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 1 | qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+ a | b  
+---+----
+ 2 | rr
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+ a |  b   
+---+------
+ 3 | ssss
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+ a |                                b                                
+---+-----------------------------------------------------------------
+ 4 | ttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttt
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+ a |                                                               b                                                                
+---+--------------------------------------------------------------------------------------------------------------------------------
+ 5 | uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+ a |                                                                b                                                                
+---+---------------------------------------------------------------------------------------------------------------------------------
+ 6 | vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+ a |                                                                b                                                                 
+---+----------------------------------------------------------------------------------------------------------------------------------
+ 7 | wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+ a |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                b
+---+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ -----------------------------------------------------------------------------
xxxxxxxxxxxxxxxxxxxxxxxxxxxx... (truncated — 3200 chars)
+(1 row)
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                                    QUERY PLAN                                                                                                                                                                                                                                                                                                                    
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 'qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq'::text)
+(2 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                                    QUERY PLAN                                                                                                                                                                                                                                                                                                                    
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 'qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq'::text)
+(2 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+ a |                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+---+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 1 | qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+ a | b  
+---+----
+ 2 | rr
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+ a |  b   
+---+------
+ 3 | ssss
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+ a |                                b                                
+---+-----------------------------------------------------------------
+ 4 | ttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttt
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+ a |                                                               b                                                                
+---+--------------------------------------------------------------------------------------------------------------------------------
+ 5 | uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+ a |                                                                b                                                                
+---+---------------------------------------------------------------------------------------------------------------------------------
+ 6 | vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+ a |                                                                b                                                                 
+---+----------------------------------------------------------------------------------------------------------------------------------
+ 7 | wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+ a |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                b
+---+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 8 | xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+(1 row)
+
+DROP TABLE test_toast_warm;
+-- Test with numeric data type
+CREATE TABLE test_toast_warm (a int unique, b numeric(10,2), c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+INSERT INTO test_toast_warm VALUES (1, 10.2, 100);
+INSERT INTO test_toast_warm VALUES (2, 11.22, 100);
+INSERT INTO test_toast_warm VALUES (3, 12.222, 100);
+INSERT INTO test_toast_warm VALUES (4, 13.20, 100);
+INSERT INTO test_toast_warm VALUES (5, 14.201, 100);
+UPDATE test_toast_warm SET b = 100.2 WHERE a = 1;
+UPDATE test_toast_warm SET b = 101.22 WHERE a = 2;
+UPDATE test_toast_warm SET b = 102.222 WHERE a = 3;
+UPDATE test_toast_warm SET b = 103.20 WHERE a = 4;
+UPDATE test_toast_warm SET b = 104.201 WHERE a = 5;
+SELECT * FROM test_toast_warm;
+ a |   b    |  c  
+---+--------+-----
+ 1 | 100.20 | 100
+ 2 | 101.22 | 100
+ 3 | 102.22 | 100
+ 4 | 103.20 | 100
+ 5 | 104.20 | 100
+(5 rows)
+
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+                        QUERY PLAN                         
+-----------------------------------------------------------
+ Index Scan using test_toast_warm_a_key on test_toast_warm
+   Index Cond: (a = 1)
+(2 rows)
+
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+                    QUERY PLAN                    
+--------------------------------------------------
+ Bitmap Heap Scan on test_toast_warm
+   Recheck Cond: (b = 10.2)
+   ->  Bitmap Index Scan on test_toast_warm_index
+         Index Cond: (b = 10.2)
+(4 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+                    QUERY PLAN                    
+--------------------------------------------------
+ Bitmap Heap Scan on test_toast_warm
+   Recheck Cond: (b = 100.2)
+   ->  Bitmap Index Scan on test_toast_warm_index
+         Index Cond: (b = 100.2)
+(4 rows)
+
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+ a | b 
+---+---
+(0 rows)
+
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+ b 
+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+   b    
+--------
+ 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+ a |   b    
+---+--------
+ 2 | 101.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+ a |   b    
+---+--------
+ 3 | 102.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+ a |   b    
+---+--------
+ 4 | 103.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+ a |   b    
+---+--------
+ 5 | 104.20
+(1 row)
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+         QUERY PLAN          
+-----------------------------
+ Seq Scan on test_toast_warm
+   Filter: (a = 1)
+(2 rows)
+
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+         QUERY PLAN          
+-----------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 10.2)
+(2 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+         QUERY PLAN          
+-----------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 100.2)
+(2 rows)
+
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+ a | b 
+---+---
+(0 rows)
+
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+ b 
+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+   b    
+--------
+ 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+ a |   b    
+---+--------
+ 2 | 101.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+ a |   b    
+---+--------
+ 3 | 102.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+ a |   b    
+---+--------
+ 4 | 103.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+ a |   b    
+---+--------
+ 5 | 104.20
+(1 row)
+
+DROP TABLE test_toast_warm;
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index 9f95b01..cd99f88 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -42,6 +42,8 @@ test: create_type
 test: create_table
 test: create_function_2
 
+test: warm
+
 # ----------
 # Load huge amounts of data
 # We should split the data files into single files and then
diff --git a/src/test/regress/sql/warm.sql b/src/test/regress/sql/warm.sql
new file mode 100644
index 0000000..ab61cfb
--- /dev/null
+++ b/src/test/regress/sql/warm.sql
@@ -0,0 +1,286 @@
+
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+
+-- Only a non-index column is updated, so this could be a HOT update; but the
+-- page has no free space left, so it will most likely be a non-HOT update
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Check if index only scan works correctly
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab1;
+
+------------------
+
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE a = 1;
+
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE b = 701;
+
+VACUUM updtst_tab2;
+EXPLAIN (costs off) SELECT b FROM updtst_tab2 WHERE b = 701;
+SELECT b FROM updtst_tab2 WHERE b = 701;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab2;
+------------------
+
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+SELECT * FROM updtst_tab3 WHERE a = 1;
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+SET enable_seqscan = false;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+SELECT * FROM updtst_tab3 WHERE a = 2;
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab3;
+------------------
+
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where a = 'test';
+select *, ctid from test_warm where a = 'TEST';
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+DROP TABLE test_warm;
+
+--- Test with toast data types
+
+CREATE TABLE test_toast_warm (a int unique, b text, c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+
+-- insert a large enough value to cause index datum compression
+INSERT INTO test_toast_warm VALUES (1, repeat('a', 600), 100);
+INSERT INTO test_toast_warm VALUES (2, repeat('b', 2), 100);
+INSERT INTO test_toast_warm VALUES (3, repeat('c', 4), 100);
+INSERT INTO test_toast_warm VALUES (4, repeat('d', 63), 100);
+INSERT INTO test_toast_warm VALUES (5, repeat('e', 126), 100);
+INSERT INTO test_toast_warm VALUES (6, repeat('f', 127), 100);
+INSERT INTO test_toast_warm VALUES (7, repeat('g', 128), 100);
+INSERT INTO test_toast_warm VALUES (8, repeat('h', 3200), 100);
+
+UPDATE test_toast_warm SET b = repeat('q', 600) WHERE a = 1;
+UPDATE test_toast_warm SET b = repeat('r', 2) WHERE a = 2;
+UPDATE test_toast_warm SET b = repeat('s', 4) WHERE a = 3;
+UPDATE test_toast_warm SET b = repeat('t', 63) WHERE a = 4;
+UPDATE test_toast_warm SET b = repeat('u', 126) WHERE a = 5;
+UPDATE test_toast_warm SET b = repeat('v', 127) WHERE a = 6;
+UPDATE test_toast_warm SET b = repeat('w', 128) WHERE a = 7;
+UPDATE test_toast_warm SET b = repeat('x', 3200) WHERE a = 8;
+
+
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+
+DROP TABLE test_toast_warm;
+
+-- Test with numeric data type
+
+CREATE TABLE test_toast_warm (a int unique, b numeric(10,2), c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+
+INSERT INTO test_toast_warm VALUES (1, 10.2, 100);
+INSERT INTO test_toast_warm VALUES (2, 11.22, 100);
+INSERT INTO test_toast_warm VALUES (3, 12.222, 100);
+INSERT INTO test_toast_warm VALUES (4, 13.20, 100);
+INSERT INTO test_toast_warm VALUES (5, 14.201, 100);
+
+UPDATE test_toast_warm SET b = 100.2 WHERE a = 1;
+UPDATE test_toast_warm SET b = 101.22 WHERE a = 2;
+UPDATE test_toast_warm SET b = 102.222 WHERE a = 3;
+UPDATE test_toast_warm SET b = 103.20 WHERE a = 4;
+UPDATE test_toast_warm SET b = 104.201 WHERE a = 5;
+
+SELECT * FROM test_toast_warm;
+
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+
+DROP TABLE test_toast_warm;
Attachment: 0006_warm_taptests_v21.patch (application/octet-stream)
diff --git b/src/test/modules/warm/t/001_recovery.pl a/src/test/modules/warm/t/001_recovery.pl
new file mode 100644
index 0000000..2a76830
--- /dev/null
+++ a/src/test/modules/warm/t/001_recovery.pl
@@ -0,0 +1,50 @@
+# Single-node test: run workload, crash, recover and run sanity check
+
+use strict;
+use warnings;
+
+use TestLib;
+use Test::More tests => 2;
+use PostgresNode;
+
+my $node = get_new_node('master');
+$node->init;
+$node->start;
+
+# Create a table, do some WARM updates and then restart
+$node->safe_psql('postgres',
+	'create table accounts (aid int unique, branch int, balance bigint) with (fillfactor=98)');
+$node->safe_psql('postgres',
+	'create table history (aid int, delta int)');
+$node->safe_psql('postgres',
+	'insert into accounts select generate_series(1,10000), (random()*1000)::int % 10, 0');
+$node->safe_psql('postgres',
+	'create index accounts_bal_indx on accounts(balance)');
+
+foreach my $i (1 .. 1000) {
+	my $aid1 = int(rand(10000)) + 1;
+	my $aid2 = int(rand(10000)) + 1;
+	my $balance = int(rand(99999));
+	$node->safe_psql('postgres',
+		"begin;
+		 update accounts set balance = balance + $balance where aid = $aid1;
+		 update accounts set balance = balance - $balance where aid = $aid2;
+		 insert into history values ($aid1, $balance);
+		 insert into history values ($aid2, 0 - $balance);
+		 end;");
+}
+
+# Crash the node with an immediate shutdown, then restart to run crash recovery
+$node->stop('immediate');
+$node->start;
+
+my $recovered_balance = $node->safe_psql('postgres', 'select sum(balance) from accounts');
+my $total_delta = $node->safe_psql('postgres', 'select sum(delta) from history');
+
+# Since delta is credited to one account and debited from the other, we expect
+# the sum(balance) to stay zero.
+is($recovered_balance, 0, 'balance matches after recovery');
+
+# A positive and a negative value are inserted into the history table for each
+# transaction. Hence the sum(delta) should remain zero.
+is($total_delta, 0, 'sum(delta) matches after recovery');
diff --git b/src/test/modules/warm/t/002_warm_stress.pl a/src/test/modules/warm/t/002_warm_stress.pl
new file mode 100644
index 0000000..a1a2371
--- /dev/null
+++ a/src/test/modules/warm/t/002_warm_stress.pl
@@ -0,0 +1,289 @@
+# Run a variety of tests to check consistency of index access.
+#
+# These tests are primarily designed to test if WARM updates cause any
+# inconsistency in the indexes. We use a pgbench-like setup with an "accounts"
+# table and a "branches" table. But in addition to the "aid" column, the
+# pgbench_warm_accounts table has four extra columns, with initial values
+# "aid * 10", "aid * 20", "aid * 30" and "aid * 40". And unlike the aid
+# column, the values in these columns do not remain static: they are changed
+# within a narrow band around the original value, such that they still
+# remain distinct, even after updates. We also build indexes on these
+# additional columns.
+#
+# This allows us to force WARM updates to the table, while accessing individual
+# rows using these auxiliary columns. If things are solid, we must not miss any
+# row irrespective of which column we use to fetch the row. Also, the sum of
+# balances in two tables should match at the end.
+#
+# We drop and recreate indexes (including concurrently), run VACUUM, and run
+# consistency checks to ensure nothing breaks. The tests also abort
+# transactions, acquire share/update locks, etc., to check for any negative
+# effects of those things.
+
+use strict;
+use warnings;
+
+use TestLib;
+use Test::More tests => 10;
+use PostgresNode;
+use IPC::Run qw(pump);
+
+# Different kinds of queries, some committing, some aborting. Also include FOR
+# SHARE, FOR UPDATE which may have implications on the visibility bits etc.
+my @query_set1 = (
+
+	"begin;
+	update pgbench_warm_accounts set abalance = abalance + :delta where aid = :aid;
+	select abalance from pgbench_warm_accounts where aid = :aid;
+	update pgbench_warm_branches set bbalance = bbalance + :delta where bid = :bid;
+	end;",
+
+	"begin;
+	update pgbench_warm_accounts set abalance = abalance + :delta where aid = :aid;
+	select abalance from pgbench_warm_accounts where aid = :aid;
+	update pgbench_warm_branches set bbalance = bbalance + :delta where bid = :bid;
+	rollback;",
+
+	"begin;
+	select abalance from pgbench_warm_accounts where aid = :aid for update;
+	update pgbench_warm_accounts set abalance = abalance + :delta where aid = :aid;
+	select bbalance from pgbench_warm_branches where bid = :bid for update;
+	update pgbench_warm_branches set bbalance = bbalance + :delta where bid = :bid;
+	commit;",
+
+	"begin;
+	select abalance from pgbench_warm_accounts where aid = :aid for update;
+	update pgbench_warm_accounts set abalance = abalance + :delta where aid = :aid;
+	select bbalance from pgbench_warm_branches where bid = :bid for update;
+	update pgbench_warm_branches set bbalance = bbalance + :delta where bid = :bid;
+	rollback;",
+
+	"begin;
+	select abalance from pgbench_warm_accounts where aid = :aid for share;
+	update pgbench_warm_accounts set abalance = abalance + :delta where aid = :aid;
+	select bbalance from pgbench_warm_branches where bid = :bid for update;
+	update pgbench_warm_branches set bbalance = bbalance + :delta where bid = :bid;
+	commit;",
+
+	"begin;
+	select abalance from pgbench_warm_accounts where aid = :aid for update;
+	update pgbench_warm_accounts set abalance = abalance + :delta where aid = :aid;
+	select bbalance from pgbench_warm_branches where bid = :bid for update;
+	update pgbench_warm_branches set bbalance = bbalance + :delta where bid = :bid;
+	rollback;"
+);
+
+# The following queries use user-defined functions to update rows in
+# pgbench_warm_accounts table by using auxiliary columns. This allows us to
+# test if the updates are working fine in various scenarios.
+my @query_set2 = (
+
+	"begin;
+	set enable_seqscan to false;
+	select pgbench_warm_update_using_aid1(:chg1, :aid, :bid, :delta);
+	commit;",
+
+	"begin;
+	set enable_seqscan to false;
+	select pgbench_warm_update_using_aid2(:chg2, :aid, :bid, :delta);
+	commit;",
+
+	"begin;
+	set enable_seqscan to false;
+	select pgbench_warm_update_using_aid3(:chg3, :aid, :bid, :delta);
+	commit;",
+
+	"begin;
+	set enable_seqscan to false;
+	select pgbench_warm_update_using_aid4(:chg4, :aid, :bid, :delta);
+	commit;",
+
+	"begin;
+	set enable_seqscan to false;
+	select pgbench_warm_update_using_aid1(:chg1, :aid, :bid, :delta);
+	rollback;",
+
+	"begin;
+	set enable_seqscan to false;
+	select pgbench_warm_update_using_aid2(:chg2, :aid, :bid, :delta);
+	rollback;",
+
+	"begin;
+	set enable_seqscan to false;
+	select pgbench_warm_update_using_aid3(:chg3, :aid, :bid, :delta);
+	rollback;",
+
+	"begin;
+	set enable_seqscan to false;
+	select pgbench_warm_update_using_aid4(:chg4, :aid, :bid, :delta);
+	rollback;"
+);
+
+# Specify concurrent DDLs that you may want to execute with the tests.
+my @ddl_queries = (
+	"drop index pgb_a_aid1;
+	 create index pgb_a_aid1 on pgbench_warm_accounts(aid1);",
+	"drop index pgb_a_aid2;
+	 create index pgb_a_aid2 on pgbench_warm_accounts(aid2);",
+	"drop index pgb_a_aid3;
+	 create index pgb_a_aid3 on pgbench_warm_accounts using hash (aid3);",
+	"drop index pgb_a_aid4;
+	 create index pgb_a_aid4 on pgbench_warm_accounts(aid4);",
+	"drop index pgb_a_aid1;
+	 create index concurrently pgb_a_aid1 on pgbench_warm_accounts(aid1);",
+	"drop index pgb_a_aid2;
+	 create index concurrently pgb_a_aid2 on pgbench_warm_accounts(aid2);",
+	"drop index pgb_a_aid3;
+	 create index concurrently pgb_a_aid3 on pgbench_warm_accounts using hash (aid3);",
+	"drop index pgb_a_aid4;
+	 create index concurrently pgb_a_aid4 on pgbench_warm_accounts(aid4);",
+	"vacuum pgbench_warm_accounts",
+	"vacuum pgbench_warm_branches",
+	"vacuum full pgbench_warm_accounts",
+	"vacuum full pgbench_warm_branches"
+);
+
+# Consistency check queries.
+my @check_queries = (
+	"set enable_seqscan to false; select pgbench_warm_check_consistency();",
+	"set enable_seqscan to false; select pgbench_warm_check_row(:aid);"
+);
+
+my $node = get_new_node('main');
+$node->init;
+$node->start;
+
+# prepare the test for execution
+$node->run_log([ 'psql', '-X', $node->connstr(), '-f', 't/warm_stress_init.sql']);
+
+my $res = $node->safe_psql('postgres', "select proname from pg_proc where proname = 'pgbench_warm_update_using_aid1'");
+is($res, 'pgbench_warm_update_using_aid1', 'setup functions created');
+
+$res = $node->safe_psql('postgres', "select count(*) from pgbench_warm_accounts");
+is($res, 10000, 'accounts table populated');
+
+# Start as many connections as we need
+sub create_connections {
+	my $count = shift;
+	my @handles;
+	for (my $proc = 0; $proc < $count; $proc = $proc + 1) {
+		my ($stdin, $stdout, $stderr) = ('', '', '');
+		my $handle = IPC::Run::start( 
+			[
+				'psql', '-X', '-f', '-', $node->connstr(),
+			],
+			\$stdin, \$stdout, \$stderr);
+		push @handles, [$handle,\$stdin,\$stdout,\$stderr];
+	}
+	return \@handles;
+}
+
+sub check_connections {
+	my @handles = @_;
+	my $failures = 0;
+	foreach my $elem (@handles) {
+		my ($handle, $stdin, $stdout, $stderr) = @$elem;
+		# Wait for all queries to complete and psql sessions to exit, checking
+		# exit codes. We don't need to do the fancy interpretation safe_psql
+		# does.
+		$handle->finish;
+		if (!is($handle->full_result(0), 0, "psql exited normally"))
+		{
+			$failures ++;
+			diag "psql exit code: " . ($handle->result(0)) . " or signal: " . ($handle->full_result(0) & 127);
+			diag "Stdout:\n---\n$$stdout\n---\nStderr:\n----\n$$stderr\n---";
+		}
+	}
+	return $failures;
+}
+
+my $set1_handles = create_connections(3);
+my $set2_handles = create_connections(3);
+my $aux_handles = create_connections(1);
+
+# Run a few thousand transactions, using various kinds of queries
+my $scale = 1;
+for (my $txn = 0; $txn < 10000; $txn = $txn + 1) {
+	# Run a randomly chosen query from set1
+	my $aid = int(rand($scale*10000)) + 1;
+	my $bid = int(rand(100)) + 1;
+	my $delta = int(rand(1000)) - 500;
+
+	my $connindx = rand(@$set1_handles);
+	my $elem = @$set1_handles[$connindx];
+	my ($handle, $stdin, $stdout, $stderr) = @$elem;
+
+	my $queryindx = rand(@query_set1);
+	my $query = $query_set1[$queryindx];
+
+	$query =~ s/\:aid/$aid/g;
+	$query =~ s/\:bid/$bid/g;
+	$query =~ s/\:delta/$delta/g;
+
+	$$stdin .= $query . "\n";
+	pump $handle while length $$stdin;
+
+	# Run a randomly chosen query from set2
+	my $chg1 = int(rand(4)) - 2;
+	my $chg2 = int(rand(6)) - 3;
+	my $chg3 = int(rand(8)) - 4;
+	my $chg4 = int(rand(10)) - 5;
+
+	$connindx = rand(@$set2_handles);
+	$elem = @$set2_handles[$connindx];
+	($handle, $stdin, $stdout, $stderr) = @$elem;
+
+	$queryindx = rand(@query_set2);
+	$query = $query_set2[$queryindx];
+
+	$query =~ s/\:aid/$aid/g;
+	$query =~ s/\:bid/$bid/g;
+	$query =~ s/\:delta/$delta/g;
+	$query =~ s/\:chg1/$chg1/g;
+	$query =~ s/\:chg2/$chg2/g;
+	$query =~ s/\:chg3/$chg3/g;
+	$query =~ s/\:chg4/$chg4/g;
+
+	$$stdin .= $query . "\n";
+	pump $handle while length $$stdin;
+
+	# Some randomly picked numbers to run DDLs and consistency checks
+	my $random = int(rand(100));
+
+	# Consistency checks, roughly every 5 transactions
+	if ($random % 5 == 0)
+	{
+		$connindx = rand(@$aux_handles);
+		$elem = @$aux_handles[$connindx];
+		($handle, $stdin, $stdout, $stderr) = @$elem;
+
+		$queryindx = rand(@check_queries);
+		$query = $check_queries[$queryindx];
+		$query =~ s/\:aid/$aid/g;
+		$$stdin .= $query . "\n";
+		pump $handle while length $$stdin;
+	}
+
+	# 1% DDLs
+	if ($random == 17)
+	{
+		$connindx = rand(@$aux_handles);
+		$elem = @$aux_handles[$connindx];
+		($handle, $stdin, $stdout, $stderr) = @$elem;
+
+		$queryindx = rand(@ddl_queries);
+		$query = $ddl_queries[$queryindx];
+
+		$$stdin .= $query . "\n";
+		pump $handle while length $$stdin;
+	}
+}
+
+check_connections(@$set1_handles);
+check_connections(@$set2_handles);
+check_connections(@$aux_handles);
+
+# Run final consistency checks
+my $res1 = $node->safe_psql('postgres', "select sum(abalance) from pgbench_warm_accounts");
+my $res2 = $node->safe_psql('postgres', "select sum(bbalance) from pgbench_warm_branches");
+is($res1, $res2, 'sum(abalance) matches sum(bbalance)');
diff --git b/src/test/modules/warm/t/warm_stress_init.sql a/src/test/modules/warm/t/warm_stress_init.sql
new file mode 100644
index 0000000..4697480
--- /dev/null
+++ a/src/test/modules/warm/t/warm_stress_init.sql
@@ -0,0 +1,209 @@
+
+drop table if exists pgbench_warm_branches;
+drop table if exists pgbench_warm_accounts;
+
+create table pgbench_warm_branches (
+	bid bigint,
+	bbalance bigint);
+
+create table pgbench_warm_accounts (
+	aid bigint,
+	bid bigint,
+	abalance bigint,
+	aid1 bigint ,
+	aid2 bigint ,
+	aid3 bigint ,
+	aid4 bigint ,
+	aid5 text default md5(random()::text),
+	aid6 text default md5(random()::text),
+	aid7 text default md5(random()::text),
+	aid8 text default md5(random()::text),
+	aid9 text default md5(random()::text),
+	aid10 text default md5(random()::text),
+	gistcol	polygon default null
+);
+
+-- update using aid1. aid1 should stay within the range (aid * 10 - 2 <= aid1 <= aid * 10 + 2) 
+create or replace function pgbench_warm_update_using_aid1(chg integer, v_aid bigint, v_bid bigint, delta bigint)
+returns void as $$
+declare
+	qry varchar;
+	lower varchar;
+	upper varchar;
+	range integer;
+	aid_updated bigint;
+begin
+	range := 2;
+	update pgbench_warm_accounts p set aid1 = aid1 + chg, abalance = abalance + delta
+		where aid1 >= v_aid * 10 - range - chg and aid1 <= v_aid * 10 + range - chg
+		returning p.aid into aid_updated;
+	if aid_updated is not null then
+		update pgbench_warm_branches p set bbalance = bbalance + delta where p.bid = v_bid;
+	else
+		select aid into aid_updated from pgbench_warm_accounts p
+			where aid1 >= v_aid * 10 - range and aid1 <= v_aid * 10 + range;
+		if aid_updated is null then
+			raise exception 'pgbench_warm_accounts row not found';
+		end if;
+	end if;
+end
+$$ language plpgsql;
+
+-- update using aid2. aid2 should stay within the range (aid * 20 - 4 <= aid2 <= aid * 20 + 4) 
+create or replace function pgbench_warm_update_using_aid2(chg integer, v_aid bigint, v_bid bigint, delta bigint)
+returns void as $$
+declare
+	qry varchar;
+	lower varchar;
+	upper varchar;
+	range integer;
+	aid_updated bigint;
+begin
+	range := 4;
+	update pgbench_warm_accounts p set aid2 = aid2 + chg, abalance = abalance + delta
+		where aid2 >= v_aid * 20 - range - chg and aid2 <= v_aid * 20 + range - chg
+		returning p.aid into aid_updated;
+	if aid_updated is not null then
+		update pgbench_warm_branches p set bbalance = bbalance + delta where p.bid = v_bid;
+	else
+		select aid into aid_updated from pgbench_warm_accounts p where aid2 >= v_aid * 20 - range and aid2 <= v_aid * 20 + range;
+		if aid_updated is null then
+			raise exception 'pgbench_warm_accounts row not found';
+		end if;
+	end if;
+end
+$$ language plpgsql;
+
+-- update using aid3. aid3 should stay within the range (aid * 30 - 6 <= aid3 <= aid * 30 + 6) 
+create or replace function pgbench_warm_update_using_aid3(chg integer, v_aid bigint, v_bid bigint, delta bigint)
+returns void as $$
+declare
+	qry varchar;
+	lower varchar;
+	upper varchar;
+	range integer;
+	aid_updated bigint;
+begin
+	range := 6;
+	update pgbench_warm_accounts p set aid3 = aid3 + chg, abalance = abalance + delta
+		where aid3 >= v_aid * 30 - range - chg and aid3 <= v_aid * 30 + range - chg
+		returning p.aid into aid_updated;
+	if aid_updated is not null then
+		update pgbench_warm_branches p set bbalance = bbalance + delta where p.bid = v_bid;
+	else
+		select aid into aid_updated from pgbench_warm_accounts p where aid3 >= v_aid * 30 - range and aid3 <= v_aid * 30 + range;
+		if aid_updated is null then
+			raise exception 'pgbench_warm_accounts row not found';
+		end if;
+	end if;
+end
+$$ language plpgsql;
+
+-- update using aid4. aid4 should stay within the range (aid * 40 - 8 <= aid4 <= aid * 40 + 8) 
+create or replace function pgbench_warm_update_using_aid4(chg integer, v_aid bigint, v_bid bigint, delta bigint)
+returns void as $$
+declare
+	qry varchar;
+	lower varchar;
+	upper varchar;
+	range integer;
+	aid_updated bigint;
+begin
+	range := 8;
+	update pgbench_warm_accounts p set aid4 = aid4 + chg, abalance = abalance + delta
+		where aid4 >= v_aid * 40 - range - chg and aid4 <= v_aid * 40 + range - chg
+		returning p.aid into aid_updated;
+	if aid_updated is not null then
+		update pgbench_warm_branches p set bbalance = bbalance + delta where p.bid = v_bid;
+	else
+		select aid into aid_updated from pgbench_warm_accounts p where aid4 >= v_aid * 40 - range and aid4 <= v_aid * 40 + range;
+		if aid_updated is null then
+			raise exception 'pgbench_warm_accounts row not found';
+		end if;
+	end if;
+end
+$$ language plpgsql;
+
+-- ensure that exactly one row exists within a given range. use different
+-- indexes to fetch the row
+create or replace function pgbench_warm_check_row(v_aid bigint)
+returns void as $$
+declare
+	range integer;
+	factor integer;
+	ret_aid1 bigint;
+	ret_aid2 bigint;
+	ret_aid3 bigint;
+	ret_aid4 bigint;
+begin
+	range := 2;
+	factor := 10;
+	select aid into ret_aid1 from pgbench_warm_accounts p where aid1 >= v_aid *
+		factor - range and aid1 <= v_aid * factor + range;
+
+	range := 4;
+	factor := 20;
+	select aid into ret_aid2 from pgbench_warm_accounts p where aid2 >= v_aid *
+		factor - range and aid2 <= v_aid * factor + range;
+
+	range := 6;
+	factor := 30;
+	select aid into ret_aid3 from pgbench_warm_accounts p where aid3 >= v_aid *
+		factor - range and aid3 <= v_aid * factor + range;
+
+	range := 8;
+	factor := 40;
+	select aid into ret_aid4 from pgbench_warm_accounts p where aid4 >= v_aid *
+		factor - range and aid4 <= v_aid * factor + range;
+
+	if ret_aid1 is null or ret_aid1 != v_aid then
+		raise exception 'pgbench_warm_accounts row (%) not found via aid1', v_aid;
+	end if;
+
+	if ret_aid2 is null or ret_aid2 != v_aid then
+		raise exception 'pgbench_warm_accounts row (%) not found via aid2', v_aid;
+	end if;
+
+	if ret_aid3 is null or ret_aid3 != v_aid then
+		raise exception 'pgbench_warm_accounts row (%) not found via aid3', v_aid;
+	end if;
+
+	if ret_aid4 is null or ret_aid4 != v_aid then
+		raise exception 'pgbench_warm_accounts row (%) not found via aid4', v_aid;
+	end if;
+end
+$$ language plpgsql;
+
+create or replace function pgbench_warm_check_consistency()
+returns void as $$
+declare
+	sum_abalance bigint;
+	sum_bbalance bigint;
+begin
+	select sum(abalance) into sum_abalance from pgbench_warm_accounts;
+	select sum(bbalance) into sum_bbalance from pgbench_warm_branches;
+	if sum_abalance != sum_bbalance then
+		raise exception 'found inconsistency in sum (%, %)', sum_abalance, sum_bbalance;
+	end if;
+end
+$$ language plpgsql;
+
+\set end 10000
+insert into pgbench_warm_branches select generate_series(1, 100), 0 ;
+insert into pgbench_warm_accounts select generate_series(1, :end),
+				(random() * 100)::int, 0,
+				generate_series(1, :end) * 10,
+				generate_series(1, :end) * 20,
+				generate_series(1, :end) * 30,
+				generate_series(1, :end) * 40;
+
+create unique index pgb_a_aid on pgbench_warm_accounts(aid);
+create index pgb_a_aid1 on pgbench_warm_accounts(aid1);
+create index pgb_a_aid2 on pgbench_warm_accounts(aid2);
+create index pgb_a_aid3 on pgbench_warm_accounts using hash (aid3);
+create index pgb_a_aid4 on pgbench_warm_accounts(aid4);
+
+create unique index pgb_b_bid on pgbench_warm_branches(bid);
+create index pgb_b_bbalance on pgbench_warm_branches(bbalance);
+
+vacuum analyze;
0004_freeup_3bits_ip_posid_v21.patchapplication/octet-stream; name=0004_freeup_3bits_ip_posid_v21.patchDownload
diff --git a/src/backend/access/gin/ginget.c b/src/backend/access/gin/ginget.c
index aa0b02f..1e1c978 100644
--- a/src/backend/access/gin/ginget.c
+++ b/src/backend/access/gin/ginget.c
@@ -928,7 +928,7 @@ keyGetItem(GinState *ginstate, MemoryContext tempCtx, GinScanKey key,
 	 * Find the minimum item > advancePast among the active entry streams.
 	 *
 	 * Note: a lossy-page entry is encoded by a ItemPointer with max value for
-	 * offset (0xffff), so that it will sort after any exact entries for the
+	 * offset (0x1fff), so that it will sort after any exact entries for the
 	 * same page.  So we'll prefer to return exact pointers not lossy
 	 * pointers, which is good.
 	 */
diff --git a/src/backend/access/gin/ginpostinglist.c b/src/backend/access/gin/ginpostinglist.c
index 8d2d31a..b22b9f5 100644
--- a/src/backend/access/gin/ginpostinglist.c
+++ b/src/backend/access/gin/ginpostinglist.c
@@ -253,7 +253,7 @@ ginCompressPostingList(const ItemPointer ipd, int nipd, int maxsize,
 
 		Assert(ndecoded == totalpacked);
 		for (i = 0; i < ndecoded; i++)
-			Assert(memcmp(&tmp[i], &ipd[i], sizeof(ItemPointerData)) == 0);
+			Assert(ItemPointerEquals(&tmp[i], &ipd[i]));
 		pfree(tmp);
 	}
 #endif
diff --git a/src/include/access/ginblock.h b/src/include/access/ginblock.h
index 438912c..3f7a3f0 100644
--- a/src/include/access/ginblock.h
+++ b/src/include/access/ginblock.h
@@ -160,14 +160,14 @@ typedef struct GinMetaPageData
 	(GinItemPointerGetOffsetNumber(p) == (OffsetNumber)0 && \
 	 GinItemPointerGetBlockNumber(p) == (BlockNumber)0)
 #define ItemPointerSetMax(p)  \
-	ItemPointerSet((p), InvalidBlockNumber, (OffsetNumber)0xffff)
+	ItemPointerSet((p), InvalidBlockNumber, (OffsetNumber)OffsetNumberMask)
 #define ItemPointerIsMax(p)  \
-	(GinItemPointerGetOffsetNumber(p) == (OffsetNumber)0xffff && \
+	(GinItemPointerGetOffsetNumber(p) == (OffsetNumber)OffsetNumberMask && \
 	 GinItemPointerGetBlockNumber(p) == InvalidBlockNumber)
 #define ItemPointerSetLossyPage(p, b)  \
-	ItemPointerSet((p), (b), (OffsetNumber)0xffff)
+	ItemPointerSet((p), (b), (OffsetNumber)OffsetNumberMask)
 #define ItemPointerIsLossyPage(p)  \
-	(GinItemPointerGetOffsetNumber(p) == (OffsetNumber)0xffff && \
+	(GinItemPointerGetOffsetNumber(p) == (OffsetNumber)OffsetNumberMask && \
 	 GinItemPointerGetBlockNumber(p) != InvalidBlockNumber)
 
 /*
@@ -218,7 +218,7 @@ typedef signed char GinNullCategory;
  */
 #define GinGetNPosting(itup)	GinItemPointerGetOffsetNumber(&(itup)->t_tid)
 #define GinSetNPosting(itup,n)	ItemPointerSetOffsetNumber(&(itup)->t_tid,n)
-#define GIN_TREE_POSTING		((OffsetNumber)0xffff)
+#define GIN_TREE_POSTING		((OffsetNumber)OffsetNumberMask)
 #define GinIsPostingTree(itup)	(GinGetNPosting(itup) == GIN_TREE_POSTING)
 #define GinSetPostingTree(itup, blkno)	( GinSetNPosting((itup),GIN_TREE_POSTING), ItemPointerSetBlockNumber(&(itup)->t_tid, blkno) )
 #define GinGetPostingTree(itup) GinItemPointerGetBlockNumber(&(itup)->t_tid)
diff --git a/src/include/access/gist_private.h b/src/include/access/gist_private.h
index 1ad4ed6..0ad11f1 100644
--- a/src/include/access/gist_private.h
+++ b/src/include/access/gist_private.h
@@ -269,8 +269,8 @@ typedef struct
  * invalid tuples in an index, so throwing an error is as far as we go with
  * supporting that.
  */
-#define TUPLE_IS_VALID		0xffff
-#define TUPLE_IS_INVALID	0xfffe
+#define TUPLE_IS_VALID		OffsetNumberMask
+#define TUPLE_IS_INVALID	OffsetNumberPrev(OffsetNumberMask)
 
 #define  GistTupleIsInvalid(itup)	( ItemPointerGetOffsetNumber( &((itup)->t_tid) ) == TUPLE_IS_INVALID )
 #define  GistTupleSetValid(itup)	ItemPointerSetOffsetNumber( &((itup)->t_tid), TUPLE_IS_VALID )
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 24433c7..4d614b7 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -288,7 +288,7 @@ struct HeapTupleHeaderData
  * than MaxOffsetNumber, so that it can be distinguished from a valid
  * offset number in a regular item pointer.
  */
-#define SpecTokenOffsetNumber		0xfffe
+#define SpecTokenOffsetNumber		OffsetNumberPrev(OffsetNumberMask)
 
 /*
  * HeapTupleHeader accessor macros
diff --git a/src/include/storage/itemptr.h b/src/include/storage/itemptr.h
index 60d0070..3144bdd 100644
--- a/src/include/storage/itemptr.h
+++ b/src/include/storage/itemptr.h
@@ -57,7 +57,7 @@ typedef ItemPointerData *ItemPointer;
  *		True iff the disk item pointer is not NULL.
  */
 #define ItemPointerIsValid(pointer) \
-	((bool) (PointerIsValid(pointer) && ((pointer)->ip_posid != 0)))
+	((bool) (PointerIsValid(pointer) && (((pointer)->ip_posid & OffsetNumberMask) != 0)))
 
 /*
  * ItemPointerGetBlockNumber
@@ -82,13 +82,37 @@ typedef ItemPointerData *ItemPointer;
 #define ItemPointerGetOffsetNumber(pointer) \
 ( \
 	AssertMacro(ItemPointerIsValid(pointer)), \
-	(pointer)->ip_posid \
+	((pointer)->ip_posid & OffsetNumberMask) \
 )
 
 /* Same as ItemPointerGetOffsetNumber but without any assert-checks */
 #define ItemPointerGetOffsetNumberNoCheck(pointer) \
 ( \
-	(pointer)->ip_posid \
+	((pointer)->ip_posid & OffsetNumberMask) \
+)
+
+/*
+ * Get the flags stored in high order bits in the OffsetNumber.
+ */
+#define ItemPointerGetFlags(pointer) \
+( \
+	((pointer)->ip_posid & ~OffsetNumberMask) >> OffsetNumberBits \
+)
+
+/*
+ * Set the flag bits. Flags are defined starting at 0x01, so left-shift them into the high-order bits.
+ */
+#define ItemPointerSetFlags(pointer, flags) \
+( \
+	((pointer)->ip_posid |= ((flags) << OffsetNumberBits)) \
+)
+
+/*
+ * Clear all flags.
+ */
+#define ItemPointerClearFlags(pointer) \
+( \
+	((pointer)->ip_posid &= OffsetNumberMask) \
 )
 
 /*
@@ -99,7 +123,7 @@ typedef ItemPointerData *ItemPointer;
 ( \
 	AssertMacro(PointerIsValid(pointer)), \
 	BlockIdSet(&((pointer)->ip_blkid), blockNumber), \
-	(pointer)->ip_posid = offNum \
+	(pointer)->ip_posid = (offNum) \
 )
 
 /*
diff --git a/src/include/storage/off.h b/src/include/storage/off.h
index fe8638f..fe1834c 100644
--- a/src/include/storage/off.h
+++ b/src/include/storage/off.h
@@ -26,8 +26,15 @@ typedef uint16 OffsetNumber;
 #define InvalidOffsetNumber		((OffsetNumber) 0)
 #define FirstOffsetNumber		((OffsetNumber) 1)
 #define MaxOffsetNumber			((OffsetNumber) (BLCKSZ / sizeof(ItemIdData)))
-#define OffsetNumberMask		(0xffff)		/* valid uint16 bits */
 
+/*
+ * Currently we support maximum 32kB blocks and each ItemId takes 6 bytes.
+ * That limits the number of line pointers to (32kB/6 = 5461). 13 bits are
+ * enough to represent all line pointers, hence we can reuse the high order
+ * bits in OffsetNumber for other purposes.
+ */
+#define OffsetNumberMask		(0x1fff)		/* valid offset number bits */
+#define OffsetNumberBits		13	/* number of valid bits in OffsetNumber */
 /* ----------------
  *		support macros
  * ----------------
0003_clear_ip_posid_blkid_refs_v21.patchapplication/octet-stream; name=0003_clear_ip_posid_blkid_refs_v21.patchDownload
diff --git a/contrib/pageinspect/btreefuncs.c b/contrib/pageinspect/btreefuncs.c
index 6f35e28..07496db 100644
--- a/contrib/pageinspect/btreefuncs.c
+++ b/contrib/pageinspect/btreefuncs.c
@@ -363,8 +363,8 @@ bt_page_items(PG_FUNCTION_ARGS)
 		j = 0;
 		values[j++] = psprintf("%d", uargs->offset);
 		values[j++] = psprintf("(%u,%u)",
-							   BlockIdGetBlockNumber(&(itup->t_tid.ip_blkid)),
-							   itup->t_tid.ip_posid);
+							   ItemPointerGetBlockNumberNoCheck(&itup->t_tid),
+							   ItemPointerGetOffsetNumberNoCheck(&itup->t_tid));
 		values[j++] = psprintf("%d", (int) IndexTupleSize(itup));
 		values[j++] = psprintf("%c", IndexTupleHasNulls(itup) ? 't' : 'f');
 		values[j++] = psprintf("%c", IndexTupleHasVarwidths(itup) ? 't' : 'f');
diff --git a/contrib/pgstattuple/pgstattuple.c b/contrib/pgstattuple/pgstattuple.c
index 1e0de5d..44f90cd 100644
--- a/contrib/pgstattuple/pgstattuple.c
+++ b/contrib/pgstattuple/pgstattuple.c
@@ -356,7 +356,7 @@ pgstat_heap(Relation rel, FunctionCallInfo fcinfo)
 		 * heap_getnext may find no tuples on a given page, so we cannot
 		 * simply examine the pages returned by the heap scan.
 		 */
-		tupblock = BlockIdGetBlockNumber(&tuple->t_self.ip_blkid);
+		tupblock = ItemPointerGetBlockNumber(&tuple->t_self);
 
 		while (block <= tupblock)
 		{
diff --git a/src/backend/access/gin/ginget.c b/src/backend/access/gin/ginget.c
index 87cd9ea..aa0b02f 100644
--- a/src/backend/access/gin/ginget.c
+++ b/src/backend/access/gin/ginget.c
@@ -626,8 +626,9 @@ entryLoadMoreItems(GinState *ginstate, GinScanEntry entry,
 		}
 		else
 		{
-			entry->btree.itemptr = advancePast;
-			entry->btree.itemptr.ip_posid++;
+			ItemPointerSet(&entry->btree.itemptr,
+					GinItemPointerGetBlockNumber(&advancePast),
+					OffsetNumberNext(GinItemPointerGetOffsetNumber(&advancePast)));
 		}
 		entry->btree.fullScan = false;
 		stack = ginFindLeafPage(&entry->btree, true, snapshot);
@@ -979,15 +980,17 @@ keyGetItem(GinState *ginstate, MemoryContext tempCtx, GinScanKey key,
 		if (GinItemPointerGetBlockNumber(&advancePast) <
 			GinItemPointerGetBlockNumber(&minItem))
 		{
-			advancePast.ip_blkid = minItem.ip_blkid;
-			advancePast.ip_posid = 0;
+			ItemPointerSet(&advancePast,
+					GinItemPointerGetBlockNumber(&minItem),
+					InvalidOffsetNumber);
 		}
 	}
 	else
 	{
-		Assert(minItem.ip_posid > 0);
-		advancePast = minItem;
-		advancePast.ip_posid--;
+		Assert(GinItemPointerGetOffsetNumber(&minItem) > 0);
+		ItemPointerSet(&advancePast,
+				GinItemPointerGetBlockNumber(&minItem),
+				OffsetNumberPrev(GinItemPointerGetOffsetNumber(&minItem)));
 	}
 
 	/*
@@ -1245,15 +1248,17 @@ scanGetItem(IndexScanDesc scan, ItemPointerData advancePast,
 				if (GinItemPointerGetBlockNumber(&advancePast) <
 					GinItemPointerGetBlockNumber(&key->curItem))
 				{
-					advancePast.ip_blkid = key->curItem.ip_blkid;
-					advancePast.ip_posid = 0;
+					ItemPointerSet(&advancePast,
+						GinItemPointerGetBlockNumber(&key->curItem),
+						InvalidOffsetNumber);
 				}
 			}
 			else
 			{
-				Assert(key->curItem.ip_posid > 0);
-				advancePast = key->curItem;
-				advancePast.ip_posid--;
+				Assert(GinItemPointerGetOffsetNumber(&key->curItem) > 0);
+				ItemPointerSet(&advancePast,
+						GinItemPointerGetBlockNumber(&key->curItem),
+						OffsetNumberPrev(GinItemPointerGetOffsetNumber(&key->curItem)));
 			}
 
 			/*
diff --git a/src/backend/access/gin/ginpostinglist.c b/src/backend/access/gin/ginpostinglist.c
index 598069d..8d2d31a 100644
--- a/src/backend/access/gin/ginpostinglist.c
+++ b/src/backend/access/gin/ginpostinglist.c
@@ -79,13 +79,11 @@ itemptr_to_uint64(const ItemPointer iptr)
 	uint64		val;
 
 	Assert(ItemPointerIsValid(iptr));
-	Assert(iptr->ip_posid < (1 << MaxHeapTuplesPerPageBits));
+	Assert(GinItemPointerGetOffsetNumber(iptr) < (1 << MaxHeapTuplesPerPageBits));
 
-	val = iptr->ip_blkid.bi_hi;
-	val <<= 16;
-	val |= iptr->ip_blkid.bi_lo;
+	val = GinItemPointerGetBlockNumber(iptr);
 	val <<= MaxHeapTuplesPerPageBits;
-	val |= iptr->ip_posid;
+	val |= GinItemPointerGetOffsetNumber(iptr);
 
 	return val;
 }
@@ -93,11 +91,9 @@ itemptr_to_uint64(const ItemPointer iptr)
 static inline void
 uint64_to_itemptr(uint64 val, ItemPointer iptr)
 {
-	iptr->ip_posid = val & ((1 << MaxHeapTuplesPerPageBits) - 1);
+	GinItemPointerSetOffsetNumber(iptr, val & ((1 << MaxHeapTuplesPerPageBits) - 1));
 	val = val >> MaxHeapTuplesPerPageBits;
-	iptr->ip_blkid.bi_lo = val & 0xFFFF;
-	val = val >> 16;
-	iptr->ip_blkid.bi_hi = val & 0xFFFF;
+	GinItemPointerSetBlockNumber(iptr, val);
 
 	Assert(ItemPointerIsValid(iptr));
 }
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index b437799..12ebadc 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -3013,8 +3013,8 @@ DisplayMapping(HTAB *tuplecid_data)
 			 ent->key.relnode.dbNode,
 			 ent->key.relnode.spcNode,
 			 ent->key.relnode.relNode,
-			 BlockIdGetBlockNumber(&ent->key.tid.ip_blkid),
-			 ent->key.tid.ip_posid,
+			 ItemPointerGetBlockNumber(&ent->key.tid),
+			 ItemPointerGetOffsetNumber(&ent->key.tid),
 			 ent->cmin,
 			 ent->cmax
 			);
diff --git a/src/backend/storage/page/itemptr.c b/src/backend/storage/page/itemptr.c
index 703cbb9..28ac885 100644
--- a/src/backend/storage/page/itemptr.c
+++ b/src/backend/storage/page/itemptr.c
@@ -54,18 +54,21 @@ ItemPointerCompare(ItemPointer arg1, ItemPointer arg2)
 	/*
 	 * Don't use ItemPointerGetBlockNumber or ItemPointerGetOffsetNumber here,
 	 * because they assert ip_posid != 0 which might not be true for a
-	 * user-supplied TID.
+	 * user-supplied TID. Instead we use ItemPointerGetBlockNumberNoCheck and
+	 * ItemPointerGetOffsetNumberNoCheck which do not do any validation.
 	 */
-	BlockNumber b1 = BlockIdGetBlockNumber(&(arg1->ip_blkid));
-	BlockNumber b2 = BlockIdGetBlockNumber(&(arg2->ip_blkid));
+	BlockNumber b1 = ItemPointerGetBlockNumberNoCheck(arg1);
+	BlockNumber b2 = ItemPointerGetBlockNumberNoCheck(arg2);
 
 	if (b1 < b2)
 		return -1;
 	else if (b1 > b2)
 		return 1;
-	else if (arg1->ip_posid < arg2->ip_posid)
+	else if (ItemPointerGetOffsetNumberNoCheck(arg1) <
+			ItemPointerGetOffsetNumberNoCheck(arg2))
 		return -1;
-	else if (arg1->ip_posid > arg2->ip_posid)
+	else if (ItemPointerGetOffsetNumberNoCheck(arg1) >
+			ItemPointerGetOffsetNumberNoCheck(arg2))
 		return 1;
 	else
 		return 0;
diff --git a/src/backend/utils/adt/tid.c b/src/backend/utils/adt/tid.c
index 49a5a15..7f3a692 100644
--- a/src/backend/utils/adt/tid.c
+++ b/src/backend/utils/adt/tid.c
@@ -109,8 +109,8 @@ tidout(PG_FUNCTION_ARGS)
 	OffsetNumber offsetNumber;
 	char		buf[32];
 
-	blockNumber = BlockIdGetBlockNumber(&(itemPtr->ip_blkid));
-	offsetNumber = itemPtr->ip_posid;
+	blockNumber = ItemPointerGetBlockNumberNoCheck(itemPtr);
+	offsetNumber = ItemPointerGetOffsetNumberNoCheck(itemPtr);
 
 	/* Perhaps someday we should output this as a record. */
 	snprintf(buf, sizeof(buf), "(%u,%u)", blockNumber, offsetNumber);
@@ -146,14 +146,12 @@ Datum
 tidsend(PG_FUNCTION_ARGS)
 {
 	ItemPointer itemPtr = PG_GETARG_ITEMPOINTER(0);
-	BlockId		blockId;
 	BlockNumber blockNumber;
 	OffsetNumber offsetNumber;
 	StringInfoData buf;
 
-	blockId = &(itemPtr->ip_blkid);
-	blockNumber = BlockIdGetBlockNumber(blockId);
-	offsetNumber = itemPtr->ip_posid;
+	blockNumber = ItemPointerGetBlockNumberNoCheck(itemPtr);
+	offsetNumber = ItemPointerGetOffsetNumberNoCheck(itemPtr);
 
 	pq_begintypsend(&buf);
 	pq_sendint(&buf, blockNumber, sizeof(blockNumber));
diff --git a/src/include/access/gin_private.h b/src/include/access/gin_private.h
index 824cc1c..6192b54 100644
--- a/src/include/access/gin_private.h
+++ b/src/include/access/gin_private.h
@@ -460,8 +460,8 @@ extern ItemPointer ginMergeItemPointers(ItemPointerData *a, uint32 na,
 static inline int
 ginCompareItemPointers(ItemPointer a, ItemPointer b)
 {
-	uint64		ia = (uint64) a->ip_blkid.bi_hi << 32 | (uint64) a->ip_blkid.bi_lo << 16 | a->ip_posid;
-	uint64		ib = (uint64) b->ip_blkid.bi_hi << 32 | (uint64) b->ip_blkid.bi_lo << 16 | b->ip_posid;
+	uint64		ia = (uint64) GinItemPointerGetBlockNumber(a) << 32 | GinItemPointerGetOffsetNumber(a);
+	uint64		ib = (uint64) GinItemPointerGetBlockNumber(b) << 32 | GinItemPointerGetOffsetNumber(b);
 
 	if (ia == ib)
 		return 0;
diff --git a/src/include/access/ginblock.h b/src/include/access/ginblock.h
index a3fb056..438912c 100644
--- a/src/include/access/ginblock.h
+++ b/src/include/access/ginblock.h
@@ -132,10 +132,17 @@ typedef struct GinMetaPageData
  * to avoid Asserts, since sometimes the ip_posid isn't "valid"
  */
 #define GinItemPointerGetBlockNumber(pointer) \
-	BlockIdGetBlockNumber(&(pointer)->ip_blkid)
+	(ItemPointerGetBlockNumberNoCheck(pointer))
 
 #define GinItemPointerGetOffsetNumber(pointer) \
-	((pointer)->ip_posid)
+	(ItemPointerGetOffsetNumberNoCheck(pointer))
+
+#define GinItemPointerSetBlockNumber(pointer, blkno) \
+	(ItemPointerSetBlockNumber((pointer), (blkno)))
+
+#define GinItemPointerSetOffsetNumber(pointer, offnum) \
+	(ItemPointerSetOffsetNumber((pointer), (offnum)))
+
 
 /*
  * Special-case item pointer values needed by the GIN search logic.
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 7552186..24433c7 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -428,7 +428,7 @@ do { \
 
 #define HeapTupleHeaderIsSpeculative(tup) \
 ( \
-	(tup)->t_ctid.ip_posid == SpecTokenOffsetNumber \
+	(ItemPointerGetOffsetNumberNoCheck(&(tup)->t_ctid) == SpecTokenOffsetNumber) \
 )
 
 #define HeapTupleHeaderGetSpeculativeToken(tup) \
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index 6289ffa..f9304db 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -151,9 +151,8 @@ typedef struct BTMetaPageData
  *	within a level). - vadim 04/09/97
  */
 #define BTTidSame(i1, i2)	\
-	( (i1).ip_blkid.bi_hi == (i2).ip_blkid.bi_hi && \
-	  (i1).ip_blkid.bi_lo == (i2).ip_blkid.bi_lo && \
-	  (i1).ip_posid == (i2).ip_posid )
+	((ItemPointerGetBlockNumber(&(i1)) == ItemPointerGetBlockNumber(&(i2))) && \
+	 (ItemPointerGetOffsetNumber(&(i1)) == ItemPointerGetOffsetNumber(&(i2))))
 #define BTEntrySame(i1, i2) \
 	BTTidSame((i1)->t_tid, (i2)->t_tid)
 
diff --git a/src/include/storage/itemptr.h b/src/include/storage/itemptr.h
index 576aaa8..60d0070 100644
--- a/src/include/storage/itemptr.h
+++ b/src/include/storage/itemptr.h
@@ -69,6 +69,12 @@ typedef ItemPointerData *ItemPointer;
 	BlockIdGetBlockNumber(&(pointer)->ip_blkid) \
 )
 
+/* Same as ItemPointerGetBlockNumber but without any assert-checks */
+#define ItemPointerGetBlockNumberNoCheck(pointer) \
+( \
+	BlockIdGetBlockNumber(&(pointer)->ip_blkid) \
+)
+
 /*
  * ItemPointerGetOffsetNumber
  *		Returns the offset number of a disk item pointer.
@@ -79,6 +85,12 @@ typedef ItemPointerData *ItemPointer;
 	(pointer)->ip_posid \
 )
 
+/* Same as ItemPointerGetOffsetNumber but without any assert-checks */
+#define ItemPointerGetOffsetNumberNoCheck(pointer) \
+( \
+	(pointer)->ip_posid \
+)
+
 /*
  * ItemPointerSet
  *		Sets a disk item pointer to the specified block and offset.
Attachment: 0002_track_root_lp_v21.patch (application/octet-stream)
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 51c773f..e573f1a 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -94,7 +94,8 @@ static HeapTuple heap_prepare_insert(Relation relation, HeapTuple tup,
 					TransactionId xid, CommandId cid, int options);
 static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
-				HeapTuple newtup, HeapTuple old_key_tup,
+				HeapTuple newtup, OffsetNumber root_offnum,
+				HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
 static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
 							 Bitmapset *interesting_cols,
@@ -2264,13 +2265,13 @@ heap_get_latest_tid(Relation relation,
 		 */
 		if ((tp.t_data->t_infomask & HEAP_XMAX_INVALID) ||
 			HeapTupleHeaderIsOnlyLocked(tp.t_data) ||
-			ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid))
+			HeapTupleHeaderIsHeapLatest(tp.t_data, &ctid))
 		{
 			UnlockReleaseBuffer(buffer);
 			break;
 		}
 
-		ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tp.t_data, &ctid);
 		priorXmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		UnlockReleaseBuffer(buffer);
 	}							/* end of loop */
@@ -2401,6 +2402,7 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	bool		all_visible_cleared = false;
+	OffsetNumber	root_offnum;
 
 	/*
 	 * Fill in tuple header fields, assign an OID, and toast the tuple if
@@ -2439,8 +2441,13 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
-	RelationPutHeapTuple(relation, buffer, heaptup,
-						 (options & HEAP_INSERT_SPECULATIVE) != 0);
+	root_offnum = RelationPutHeapTuple(relation, buffer, heaptup,
+						 (options & HEAP_INSERT_SPECULATIVE) != 0,
+						 InvalidOffsetNumber);
+
+	/* We must not overwrite the speculative insertion token. */
+	if ((options & HEAP_INSERT_SPECULATIVE) == 0)
+		HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 	if (PageIsAllVisible(BufferGetPage(buffer)))
 	{
@@ -2668,6 +2675,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 	Size		saveFreeSpace;
 	bool		need_tuple_data = RelationIsLogicallyLogged(relation);
 	bool		need_cids = RelationIsAccessibleInLogicalDecoding(relation);
+	OffsetNumber	root_offnum;
 
 	needwal = !(options & HEAP_INSERT_SKIP_WAL) && RelationNeedsWAL(relation);
 	saveFreeSpace = RelationGetTargetPageFreeSpace(relation,
@@ -2738,7 +2746,12 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		 * RelationGetBufferForTuple has ensured that the first tuple fits.
 		 * Put that on the page, and then as many other tuples as fit.
 		 */
-		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false);
+		root_offnum = RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false,
+				InvalidOffsetNumber);
+
+		/* Mark this tuple as the latest and also set root offset. */
+		HeapTupleHeaderSetHeapLatest(heaptuples[ndone]->t_data, root_offnum);
+
 		for (nthispage = 1; ndone + nthispage < ntuples; nthispage++)
 		{
 			HeapTuple	heaptup = heaptuples[ndone + nthispage];
@@ -2746,7 +2759,10 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			if (PageGetHeapFreeSpace(page) < MAXALIGN(heaptup->t_len) + saveFreeSpace)
 				break;
 
-			RelationPutHeapTuple(relation, buffer, heaptup, false);
+			root_offnum = RelationPutHeapTuple(relation, buffer, heaptup, false,
+					InvalidOffsetNumber);
+			/* Mark each tuple as the latest and also set root offset. */
+			HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 			/*
 			 * We don't use heap_multi_insert for catalog tuples yet, but
@@ -3018,6 +3034,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	HeapTupleData tp;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	TransactionId new_xmax;
@@ -3028,6 +3045,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	bool		all_visible_cleared = false;
 	HeapTuple	old_key_tuple = NULL;	/* replica identity of the tuple */
 	bool		old_key_copied = false;
+	OffsetNumber	root_offnum;
 
 	Assert(ItemPointerIsValid(tid));
 
@@ -3069,7 +3087,8 @@ heap_delete(Relation relation, ItemPointer tid,
 		LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 	}
 
-	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid));
+	offnum = ItemPointerGetOffsetNumber(tid);
+	lp = PageGetItemId(page, offnum);
 	Assert(ItemIdIsNormal(lp));
 
 	tp.t_tableOid = RelationGetRelid(relation);
@@ -3199,7 +3218,17 @@ l1:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tp.t_data->t_ctid;
+
+		/*
+		 * If we're at the end of the chain, just return the same TID back to
+		 * the caller, who uses that as a hint that the end of the chain has
+		 * been reached.
+		 */
+		if (!HeapTupleHeaderIsHeapLatest(tp.t_data, &tp.t_self))
+			HeapTupleHeaderGetNextTid(tp.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&tp.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tp.t_data);
@@ -3248,6 +3277,22 @@ l1:
 							  xid, LockTupleExclusive, true,
 							  &new_xmax, &new_infomask, &new_infomask2);
 
+	/*
+	 * heap_get_root_tuple() may call palloc, which is disallowed once we
+	 * enter the critical section. So check whether the root offset is cached
+	 * in the tuple and, if not, fetch that information the hard way before
+	 * entering the critical section.
+	 *
+	 * Most often, unless we are dealing with a pg-upgraded cluster, the root
+	 * offset information should be cached, so fetching it should not add
+	 * much overhead. Also, once a tuple is updated, the information is
+	 * copied to the new version, so it's not as if we are going to pay this
+	 * price forever.
+	 */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&tp.t_self));
+
 	START_CRIT_SECTION();
 
 	/*
@@ -3275,8 +3320,10 @@ l1:
 	HeapTupleHeaderClearHotUpdated(tp.t_data);
 	HeapTupleHeaderSetXmax(tp.t_data, new_xmax);
 	HeapTupleHeaderSetCmax(tp.t_data, cid, iscombo);
-	/* Make sure there is no forward chain link in t_ctid */
-	tp.t_data->t_ctid = tp.t_self;
+
+	/* Mark this tuple as the latest tuple in the update chain. */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		HeapTupleHeaderSetHeapLatest(tp.t_data, root_offnum);
 
 	MarkBufferDirty(buffer);
 
@@ -3477,6 +3524,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		old_key_copied = false;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
+	OffsetNumber	root_offnum;
 	MultiXactStatus mxact_status;
 	Buffer		buffer,
 				newbuf,
@@ -3537,6 +3586,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 
 
 	block = ItemPointerGetBlockNumber(otid);
+	offnum = ItemPointerGetOffsetNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
@@ -3840,7 +3890,12 @@ l2:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(oldtup.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = oldtup.t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(oldtup.t_data, &oldtup.t_self))
+			HeapTupleHeaderGetNextTid(oldtup.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&oldtup.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(oldtup.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(oldtup.t_data);
@@ -3980,6 +4035,7 @@ l2:
 		uint16		infomask_lock_old_tuple,
 					infomask2_lock_old_tuple;
 		bool		cleared_all_frozen = false;
+		OffsetNumber	root_offnum;
 
 		/*
 		 * To prevent concurrent sessions from updating the tuple, we have to
@@ -4007,6 +4063,14 @@ l2:
 
 		Assert(HEAP_XMAX_IS_LOCKED_ONLY(infomask_lock_old_tuple));
 
+		/*
+		 * Fetch root offset before entering the critical section. We do this
+		 * only if the information is not already available.
+		 */
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&oldtup.t_self));
+
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
@@ -4021,7 +4085,8 @@ l2:
 		HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 		/* temporarily make it look not-updated, but locked */
-		oldtup.t_data->t_ctid = oldtup.t_self;
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			HeapTupleHeaderSetHeapLatest(oldtup.t_data, root_offnum);
 
 		/*
 		 * Clear all-frozen bit on visibility map if needed. We could
@@ -4180,6 +4245,10 @@ l2:
 										   bms_overlap(modified_attrs, id_attrs),
 										   &old_key_copied);
 
+	if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
@@ -4205,6 +4274,17 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+		/*
+		 * For HOT (or WARM) updated tuples, we store the offset of the root
+		 * line pointer of this chain in the ip_posid field of the new tuple.
+		 * Usually this information will be available in the corresponding
+		 * field of the old tuple. But for aborted updates or pg_upgraded
+		 * databases, we might be seeing old-style CTID chains, and then the
+		 * information must be obtained the hard way (we should have done
+		 * that before entering the critical section above).
+		 */
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
 	else
 	{
@@ -4212,10 +4292,22 @@ l2:
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		root_offnum = InvalidOffsetNumber;
 	}
 
-	RelationPutHeapTuple(relation, newbuf, heaptup, false);		/* insert new tuple */
-
+	/* insert new tuple */
+	root_offnum = RelationPutHeapTuple(relation, newbuf, heaptup, false,
+									   root_offnum);
+	/*
+	 * Also mark both copies as latest and set the root offset information. If
+	 * we're doing a HOT/WARM update, then we just copy the information from
+	 * old tuple, if available or computed above. For regular updates,
+	 * RelationPutHeapTuple must have returned us the actual offset number
+	 * where the new version was inserted and we store the same value since the
+	 * update resulted in a new HOT-chain.
+	 */
+	HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
+	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
 	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
@@ -4228,7 +4320,7 @@ l2:
 	HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 	/* record address of new tuple in t_ctid of old one */
-	oldtup.t_data->t_ctid = heaptup->t_self;
+	HeapTupleHeaderSetNextTid(oldtup.t_data, &(heaptup->t_self));
 
 	/* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */
 	if (PageIsAllVisible(BufferGetPage(buffer)))
@@ -4267,6 +4359,7 @@ l2:
 
 		recptr = log_heap_update(relation, buffer,
 								 newbuf, &oldtup, heaptup,
+								 root_offnum,
 								 old_key_tuple,
 								 all_visible_cleared,
 								 all_visible_cleared_new);
@@ -4547,7 +4640,8 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	ItemId		lp;
 	Page		page;
 	Buffer		vmbuffer = InvalidBuffer;
-	BlockNumber block;
+	BlockNumber	block;
+	OffsetNumber	offnum;
 	TransactionId xid,
 				xmax;
 	uint16		old_infomask,
@@ -4556,9 +4650,11 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	bool		first_time = true;
 	bool		have_tuple_lock = false;
 	bool		cleared_all_frozen = false;
+	OffsetNumber	root_offnum;
 
 	*buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	block = ItemPointerGetBlockNumber(tid);
+	offnum = ItemPointerGetOffsetNumber(tid);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -4578,6 +4674,7 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	tuple->t_data = (HeapTupleHeader) PageGetItem(page, lp);
 	tuple->t_len = ItemIdGetLength(lp);
 	tuple->t_tableOid = RelationGetRelid(relation);
+	tuple->t_self = *tid;
 
 l3:
 	result = HeapTupleSatisfiesUpdate(tuple, cid, *buffer);
@@ -4605,7 +4702,11 @@ l3:
 		xwait = HeapTupleHeaderGetRawXmax(tuple->t_data);
 		infomask = tuple->t_data->t_infomask;
 		infomask2 = tuple->t_data->t_infomask2;
-		ItemPointerCopy(&tuple->t_data->t_ctid, &t_ctid);
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &t_ctid);
+		else
+			ItemPointerCopy(tid, &t_ctid);
 
 		LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
 
@@ -5043,7 +5144,12 @@ failed:
 		Assert(result == HeapTupleSelfUpdated || result == HeapTupleUpdated ||
 			   result == HeapTupleWouldBlock);
 		Assert(!(tuple->t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tuple->t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(tid, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tuple->t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tuple->t_data);
@@ -5091,6 +5197,10 @@ failed:
 							  GetCurrentTransactionId(), mode, false,
 							  &xid, &new_infomask, &new_infomask2);
 
+	if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&tuple->t_self));
+
 	START_CRIT_SECTION();
 
 	/*
@@ -5119,7 +5229,10 @@ failed:
 	 * the tuple as well.
 	 */
 	if (HEAP_XMAX_IS_LOCKED_ONLY(new_infomask))
-		tuple->t_data->t_ctid = *tid;
+	{
+		if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+			HeapTupleHeaderSetHeapLatest(tuple->t_data, root_offnum);
+	}
 
 	/* Clear only the all-frozen bit on visibility map if needed */
 	if (PageIsAllVisible(page) &&
@@ -5633,6 +5746,7 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 	bool		cleared_all_frozen = false;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber block;
+	OffsetNumber offnum;
 
 	ItemPointerCopy(tid, &tupid);
 
@@ -5641,6 +5755,8 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 		new_infomask = 0;
 		new_xmax = InvalidTransactionId;
 		block = ItemPointerGetBlockNumber(&tupid);
+		offnum = ItemPointerGetOffsetNumber(&tupid);
+
 		ItemPointerCopy(&tupid, &(mytup.t_self));
 
 		if (!heap_fetch(rel, SnapshotAny, &mytup, &buf, false, NULL))
@@ -5870,7 +5986,7 @@ l4:
 
 		/* if we find the end of update chain, we're done. */
 		if (mytup.t_data->t_infomask & HEAP_XMAX_INVALID ||
-			ItemPointerEquals(&mytup.t_self, &mytup.t_data->t_ctid) ||
+			HeapTupleHeaderIsHeapLatest(mytup.t_data, &mytup.t_self) ||
 			HeapTupleHeaderIsOnlyLocked(mytup.t_data))
 		{
 			result = HeapTupleMayBeUpdated;
@@ -5879,7 +5995,7 @@ l4:
 
 		/* tail recursion */
 		priorXmax = HeapTupleHeaderGetUpdateXid(mytup.t_data);
-		ItemPointerCopy(&(mytup.t_data->t_ctid), &tupid);
+		HeapTupleHeaderGetNextTid(mytup.t_data, &tupid);
 		UnlockReleaseBuffer(buf);
 		if (vmbuffer != InvalidBuffer)
 			ReleaseBuffer(vmbuffer);
@@ -5996,7 +6112,7 @@ heap_finish_speculative(Relation relation, HeapTuple tuple)
 	 * Replace the speculative insertion token with a real t_ctid, pointing to
 	 * itself like it does on regular tuples.
 	 */
-	htup->t_ctid = tuple->t_self;
+	HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 	/* XLOG stuff */
 	if (RelationNeedsWAL(relation))
@@ -6122,8 +6238,7 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	HeapTupleHeaderSetXmin(tp.t_data, InvalidTransactionId);
 
 	/* Clear the speculative insertion token too */
-	tp.t_data->t_ctid = tp.t_self;
-
+	HeapTupleHeaderSetHeapLatest(tp.t_data, ItemPointerGetOffsetNumber(tid));
 	MarkBufferDirty(buffer);
 
 	/*
@@ -7471,6 +7586,7 @@ log_heap_visible(RelFileNode rnode, Buffer heap_buffer, Buffer vm_buffer,
 static XLogRecPtr
 log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup, HeapTuple newtup,
+				OffsetNumber root_offnum,
 				HeapTuple old_key_tuple,
 				bool all_visible_cleared, bool new_all_visible_cleared)
 {
@@ -7591,6 +7707,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	xlrec.new_offnum = ItemPointerGetOffsetNumber(&newtup->t_self);
 	xlrec.new_xmax = HeapTupleHeaderGetRawXmax(newtup->t_data);
 
+	Assert(OffsetNumberIsValid(root_offnum));
+	xlrec.root_offnum = root_offnum;
+
 	bufflags = REGBUF_STANDARD;
 	if (init)
 		bufflags |= REGBUF_WILL_INIT;
@@ -8245,7 +8364,13 @@ heap_xlog_delete(XLogReaderState *record)
 			PageClearAllVisible(page);
 
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = target_tid;
+		if (!HeapTupleHeaderHasRootOffset(htup))
+		{
+			OffsetNumber	root_offnum;
+			root_offnum = heap_get_root_tuple(page, xlrec->offnum);
+			HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+		}
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8335,7 +8460,8 @@ heap_xlog_insert(XLogReaderState *record)
 		htup->t_hoff = xlhdr.t_hoff;
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
-		htup->t_ctid = target_tid;
+
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->offnum);
 
 		if (PageAddItem(page, (Item) htup, newlen, xlrec->offnum,
 						true, true) == InvalidOffsetNumber)
@@ -8470,8 +8596,8 @@ heap_xlog_multi_insert(XLogReaderState *record)
 			htup->t_hoff = xlhdr->t_hoff;
 			HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 			HeapTupleHeaderSetCmin(htup, FirstCommandId);
-			ItemPointerSetBlockNumber(&htup->t_ctid, blkno);
-			ItemPointerSetOffsetNumber(&htup->t_ctid, offnum);
+
+			HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 			offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 			if (offnum == InvalidOffsetNumber)
@@ -8607,7 +8733,7 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
 		/* Set forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetNextTid(htup, &newtid);
 
 		/* Mark the page as a candidate for pruning */
 		PageSetPrunable(page, XLogRecGetXid(record));
@@ -8740,13 +8866,17 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
-		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = newtid;
 
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
 
+		/*
+		 * Make sure the tuple is marked as the latest and root offset
+		 * information is restored.
+		 */
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->root_offnum);
+
 		if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED)
 			PageClearAllVisible(page);
 
@@ -8809,6 +8939,9 @@ heap_xlog_confirm(XLogReaderState *record)
 		 */
 		ItemPointerSet(&htup->t_ctid, BufferGetBlockNumber(buffer), offnum);
 
+		/* For newly inserted tuple, set root offset to itself. */
+		HeapTupleHeaderSetHeapLatest(htup, offnum);
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8872,11 +9005,17 @@ heap_xlog_lock(XLogReaderState *record)
 		 */
 		if (HEAP_XMAX_IS_LOCKED_ONLY(htup->t_infomask))
 		{
+			ItemPointerData	target_tid;
+
+			ItemPointerSet(&target_tid, BufferGetBlockNumber(buffer), offnum);
 			HeapTupleHeaderClearHotUpdated(htup);
 			/* Make sure there is no forward chain link in t_ctid */
-			ItemPointerSet(&htup->t_ctid,
-						   BufferGetBlockNumber(buffer),
-						   offnum);
+			if (!HeapTupleHeaderHasRootOffset(htup))
+			{
+				OffsetNumber	root_offnum;
+				root_offnum = heap_get_root_tuple(page, offnum);
+				HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+			}
 		}
 		HeapTupleHeaderSetXmax(htup, xlrec->locking_xid);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index 6529fe3..8052519 100644
--- a/src/backend/access/heap/hio.c
+++ b/src/backend/access/heap/hio.c
@@ -31,12 +31,20 @@
  * !!! EREPORT(ERROR) IS DISALLOWED HERE !!!  Must PANIC on failure!!!
  *
  * Note - caller must hold BUFFER_LOCK_EXCLUSIVE on the buffer.
+ *
+ * The caller can optionally tell us to set the root offset to the given value.
+ * Otherwise, the root offset is set to the offset of the new location once
+ * it's known. The former is used while updating an existing tuple, where the
+ * caller tells us about the root line pointer of the chain.  The latter is
+ * used during insertion of a new row, hence the root line pointer is set to
+ * the offset where this tuple is inserted.
  */
-void
+OffsetNumber
 RelationPutHeapTuple(Relation relation,
 					 Buffer buffer,
 					 HeapTuple tuple,
-					 bool token)
+					 bool token,
+					 OffsetNumber root_offnum)
 {
 	Page		pageHeader;
 	OffsetNumber offnum;
@@ -60,17 +68,24 @@ RelationPutHeapTuple(Relation relation,
 	ItemPointerSet(&(tuple->t_self), BufferGetBlockNumber(buffer), offnum);
 
 	/*
-	 * Insert the correct position into CTID of the stored tuple, too (unless
-	 * this is a speculative insertion, in which case the token is held in
-	 * CTID field instead)
+	 * Set the block number and the root offset in the CTID of the stored
+	 * tuple, too (unless this is a speculative insertion, in which case the
+	 * token is held in the CTID field instead).
 	 */
 	if (!token)
 	{
 		ItemId		itemId = PageGetItemId(pageHeader, offnum);
 		Item		item = PageGetItem(pageHeader, itemId);
 
+		/* Copy t_ctid to set the correct block number. */
 		((HeapTupleHeader) item)->t_ctid = tuple->t_self;
+
+		if (!OffsetNumberIsValid(root_offnum))
+			root_offnum = offnum;
+		HeapTupleHeaderSetHeapLatest((HeapTupleHeader) item, root_offnum);
 	}
+
+	return root_offnum;
 }
 
 /*
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index d69a266..f54337c 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -55,6 +55,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
 
+static void heap_get_root_tuples_internal(Page page,
+				OffsetNumber target_offnum, OffsetNumber *root_offsets);
 
 /*
  * Optionally prune and repair fragmentation in the specified page.
@@ -553,6 +555,17 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
 		if (!HeapTupleHeaderIsHotUpdated(htup))
 			break;
 
+
+		/*
+		 * If the tuple was HOT-updated and the update was later
+		 * aborted, someone could mark this tuple as the last tuple
+		 * in the chain without clearing the HOT-updated flag. So we must
+		 * check whether this is the last tuple in the chain and stop
+		 * following the CTID, else we risk infinite recursion (though
+		 * prstate->marked[] currently protects against that).
+		 */
+		if (HeapTupleHeaderHasRootOffset(htup))
+			break;
 		/*
 		 * Advance to next chain member.
 		 */
@@ -726,27 +739,47 @@ heap_page_prune_execute(Buffer buffer,
 
 
 /*
- * For all items in this page, find their respective root line pointers.
- * If item k is part of a HOT-chain with root at item j, then we set
- * root_offsets[k - 1] = j.
+ * Either for all items in this page or for the given item, find their
+ * respective root line pointers.
+ *
+ * When target_offnum is a valid offset number, the caller is interested in
+ * just one item. In that case, the root line pointer is returned in
+ * root_offsets.
  *
- * The passed-in root_offsets array must have MaxHeapTuplesPerPage entries.
- * We zero out all unused entries.
+ * When target_offnum is InvalidOffsetNumber, the caller wants to know
+ * the root line pointers of all the items in this page. The root_offsets array
+ * must have MaxHeapTuplesPerPage entries in that case. If item k is part of a
+ * HOT-chain with root at item j, then we set root_offsets[k - 1] = j. We zero
+ * out all unused entries.
  *
  * The function must be called with at least share lock on the buffer, to
  * prevent concurrent prune operations.
  *
+ * This is not a cheap function since it must scan through all line pointers
+ * and tuples on the page in order to find the root line pointers. To minimize
+ * the cost, we break early once target_offnum is specified and its root line
+ * pointer has been found.
+ *
  * Note: The information collected here is valid only as long as the caller
  * holds a pin on the buffer. Once pin is released, a tuple might be pruned
  * and reused by a completely unrelated tuple.
+ *
+ * Note: This function must not be called inside a critical section because it
+ * internally calls HeapTupleHeaderGetUpdateXid which somewhere down the stack
+ * may try to allocate heap memory. Memory allocation is disallowed in a
+ * critical section.
  */
-void
-heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+static void
+heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offsets)
 {
 	OffsetNumber offnum,
 				maxoff;
 
-	MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
+	if (OffsetNumberIsValid(target_offnum))
+		*root_offsets = InvalidOffsetNumber;
+	else
+		MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
 
 	maxoff = PageGetMaxOffsetNumber(page);
 	for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum = OffsetNumberNext(offnum))
@@ -774,9 +807,28 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 
 			/*
 			 * This is either a plain tuple or the root of a HOT-chain.
-			 * Remember it in the mapping.
+			 *
+			 * If the target_offnum is specified and if we found its mapping,
+			 * return.
 			 */
-			root_offsets[offnum - 1] = offnum;
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (target_offnum == offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember the mapping for any other item. The
+				 * root_offsets array may not even have room for them, so be
+				 * careful not to write past the end of the array.
+				 */
+			}
+			else
+			{
+				/* Remember it in the mapping. */
+				root_offsets[offnum - 1] = offnum;
+			}
 
 			/* If it's not the start of a HOT-chain, we're done with it */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
@@ -817,15 +869,65 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 				!TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(htup)))
 				break;
 
-			/* Remember the root line pointer for this item */
-			root_offsets[nextoffnum - 1] = offnum;
+			/*
+			 * If target_offnum is specified and we found its mapping, return.
+			 */
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (nextoffnum == target_offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember the mapping for any other item; the
+				 * root_offsets array may not even have space for them. So be
+				 * careful not to write past the end of the array.
+				 */
+			}
+			else
+			{
+				/* Remember the root line pointer for this item. */
+				root_offsets[nextoffnum - 1] = offnum;
+			}
 
 			/* Advance to next chain member, if any */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				break;
 
+			/*
+			 * If the tuple was HOT-updated and the update was later aborted,
+			 * someone may have marked this tuple as the last tuple in the
+			 * chain and stored the root offset in CTID, without clearing the
+			 * HOT-updated flag. So we must check whether CTID actually holds
+			 * the root offset, and break to avoid an infinite loop.
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
 		}
 	}
 }
+
+/*
+ * Get root line pointer for the given tuple.
+ */
+OffsetNumber
+heap_get_root_tuple(Page page, OffsetNumber target_offnum)
+{
+	OffsetNumber offnum = InvalidOffsetNumber;
+	heap_get_root_tuples_internal(page, target_offnum, &offnum);
+	return offnum;
+}
+
+/*
+ * Get root line pointers for all tuples in the page
+ */
+void
+heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+{
+	heap_get_root_tuples_internal(page, InvalidOffsetNumber, root_offsets);
+}
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index d7f65a5..2d3ae9b 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -421,14 +421,18 @@ rewrite_heap_tuple(RewriteState state,
 	 */
 	if (!((old_tuple->t_data->t_infomask & HEAP_XMAX_INVALID) ||
 		  HeapTupleHeaderIsOnlyLocked(old_tuple->t_data)) &&
-		!(ItemPointerEquals(&(old_tuple->t_self),
-							&(old_tuple->t_data->t_ctid))))
+		!(HeapTupleHeaderIsHeapLatest(old_tuple->t_data, &old_tuple->t_self)))
 	{
 		OldToNewMapping mapping;
 
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
-		hashkey.tid = old_tuple->t_data->t_ctid;
+
+		/* 
+		 * We've already checked that this is not the last tuple in the chain,
+		 * so fetch the next TID in the chain.
+		 */
+		HeapTupleHeaderGetNextTid(old_tuple->t_data, &hashkey.tid);
 
 		mapping = (OldToNewMapping)
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -441,7 +445,7 @@ rewrite_heap_tuple(RewriteState state,
 			 * set the ctid of this tuple to point to the new location, and
 			 * insert it right away.
 			 */
-			new_tuple->t_data->t_ctid = mapping->new_tid;
+			HeapTupleHeaderSetNextTid(new_tuple->t_data, &mapping->new_tid);
 
 			/* We don't need the mapping entry anymore */
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -527,7 +531,7 @@ rewrite_heap_tuple(RewriteState state,
 				new_tuple = unresolved->tuple;
 				free_new = true;
 				old_tid = unresolved->old_tid;
-				new_tuple->t_data->t_ctid = new_tid;
+				HeapTupleHeaderSetNextTid(new_tuple->t_data, &new_tid);
 
 				/*
 				 * We don't need the hash entry anymore, but don't free its
@@ -733,7 +737,12 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		onpage_tup->t_ctid = tup->t_self;
+		/* 
+		 * Set t_ctid just to ensure that the block number is copied
+		 * correctly, then immediately mark the tuple as the latest.
+		 */
+		HeapTupleHeaderSetNextTid(onpage_tup, &tup->t_self);
+		HeapTupleHeaderSetHeapLatest(onpage_tup, newoff);
 	}
 
 	/* If heaptup is a private copy, release it. */
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 108060a..c3f1873 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -785,7 +785,8 @@ retry:
 			  DirtySnapshot.speculativeToken &&
 			  TransactionIdPrecedes(GetCurrentTransactionId(), xwait))))
 		{
-			ctid_wait = tup->t_data->t_ctid;
+			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
+				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index f2995f2..73e9c4a 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -2623,7 +2623,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		 * As above, it should be safe to examine xmax and t_ctid without the
 		 * buffer content lock, because they can't be changing.
 		 */
-		if (ItemPointerEquals(&tuple.t_self, &tuple.t_data->t_ctid))
+		if (HeapTupleHeaderIsHeapLatest(tuple.t_data, &tuple.t_self))
 		{
 			/* deleted, so forget about it */
 			ReleaseBuffer(buffer);
@@ -2631,7 +2631,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		}
 
 		/* updated, so look at the updated row */
-		tuple.t_self = tuple.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tuple.t_data, &tuple.t_self);
 		/* updated row should have xmin matching this xmax */
 		priorXmax = HeapTupleHeaderGetUpdateXid(tuple.t_data);
 		ReleaseBuffer(buffer);
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 7e85510..5540e12 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -190,6 +190,7 @@ extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
+extern OffsetNumber heap_get_root_tuple(Page page, OffsetNumber target_offnum);
 extern void heap_get_root_tuples(Page page, OffsetNumber *root_offsets);
 
 /* in heap/syncscan.c */
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index b285f17..e6019d5 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -193,6 +193,8 @@ typedef struct xl_heap_update
 	uint8		flags;
 	TransactionId new_xmax;		/* xmax of the new tuple */
 	OffsetNumber new_offnum;	/* new tuple's offset */
+	OffsetNumber root_offnum;	/* offset of the root line pointer in case of
+								   HOT or WARM update */
 
 	/*
 	 * If XLOG_HEAP_CONTAINS_OLD_TUPLE or XLOG_HEAP_CONTAINS_OLD_KEY flags are
@@ -200,7 +202,7 @@ typedef struct xl_heap_update
 	 */
 } xl_heap_update;
 
-#define SizeOfHeapUpdate	(offsetof(xl_heap_update, new_offnum) + sizeof(OffsetNumber))
+#define SizeOfHeapUpdate	(offsetof(xl_heap_update, root_offnum) + sizeof(OffsetNumber))
 
 /*
  * This is what we need to know about vacuum page cleanup/redirect
diff --git a/src/include/access/hio.h b/src/include/access/hio.h
index 2824f23..921cb37 100644
--- a/src/include/access/hio.h
+++ b/src/include/access/hio.h
@@ -35,8 +35,8 @@ typedef struct BulkInsertStateData
 }	BulkInsertStateData;
 
 
-extern void RelationPutHeapTuple(Relation relation, Buffer buffer,
-					 HeapTuple tuple, bool token);
+extern OffsetNumber RelationPutHeapTuple(Relation relation, Buffer buffer,
+					 HeapTuple tuple, bool token, OffsetNumber root_offnum);
 extern Buffer RelationGetBufferForTuple(Relation relation, Size len,
 						  Buffer otherBuffer, int options,
 						  BulkInsertState bistate,
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index a6c7e31..7552186 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,13 +260,19 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x1800 are available */
+/* bit 0x0800 is available */
+#define HEAP_LATEST_TUPLE		0x1000	/*
+										 * This is the last tuple in chain and
+										 * ip_posid points to the root line
+										 * pointer
+										 */
 #define HEAP_KEYS_UPDATED		0x2000	/* tuple was updated and key cols
 										 * modified, or tuple deleted */
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xE000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+
 
 /*
  * HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins.  It is
@@ -504,6 +510,43 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+/*
+ * Mark this as the last tuple in the HOT chain. Before PostgreSQL 10 we used
+ * to store the TID of the tuple itself in the t_ctid field to mark the end of
+ * the chain. Starting with PostgreSQL 10, we instead set the special flag
+ * HEAP_LATEST_TUPLE to identify the last tuple and store the root line
+ * pointer of the HOT chain in the t_ctid field.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderSetHeapLatest(tup, offnum) \
+do { \
+	AssertMacro(OffsetNumberIsValid(offnum)); \
+	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE; \
+	ItemPointerSetOffsetNumber(&(tup)->t_ctid, (offnum)); \
+} while (0)
+
+#define HeapTupleHeaderClearHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 &= ~HEAP_LATEST_TUPLE \
+)
+
+/*
+ * Starting from PostgreSQL 10, the latest tuple in an update chain has
+ * HEAP_LATEST_TUPLE set; but tuples upgraded from earlier versions do not.
+ * For those, we determine whether a tuple is latest by testing that its t_ctid
+ * points to itself.
+ *
+ * Note: beware of multiple evaluations of "tup" and "tid" arguments.
+ */
+#define HeapTupleHeaderIsHeapLatest(tup, tid) \
+( \
+  (((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0) || \
+  ((ItemPointerGetBlockNumber(&(tup)->t_ctid) == ItemPointerGetBlockNumber(tid)) && \
+   (ItemPointerGetOffsetNumber(&(tup)->t_ctid) == ItemPointerGetOffsetNumber(tid))) \
+)
+
+
 #define HeapTupleHeaderSetHeapOnly(tup) \
 ( \
   (tup)->t_infomask2 |= HEAP_ONLY_TUPLE \
@@ -542,6 +585,56 @@ do { \
 
 
 /*
+ * Set the t_ctid chain and also clear the HEAP_LATEST_TUPLE flag since we
+ * now have a new tuple in the chain and this is no longer the last tuple of
+ * the chain.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderSetNextTid(tup, tid) \
+do { \
+		ItemPointerCopy((tid), &((tup)->t_ctid)); \
+		HeapTupleHeaderClearHeapLatest((tup)); \
+} while (0)
+
+/*
+ * Get TID of next tuple in the update chain. Caller must have checked that
+ * we are not already at the end of the chain because in that case t_ctid may
+ * actually store the root line pointer of the HOT chain.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderGetNextTid(tup, next_ctid) \
+do { \
+	AssertMacro(!((tup)->t_infomask2 & HEAP_LATEST_TUPLE)); \
+	ItemPointerCopy(&(tup)->t_ctid, (next_ctid)); \
+} while (0)
+
+/*
+ * Get the root line pointer of the HOT chain. The caller should have confirmed
+ * that the root offset is cached before calling this macro.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderGetRootOffset(tup) \
+( \
+	AssertMacro(((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0), \
+	ItemPointerGetOffsetNumber(&(tup)->t_ctid) \
+)
+
+/*
+ * Return whether the tuple has a cached root offset.  We don't use
+ * HeapTupleHeaderIsHeapLatest because that one also considers the case of
+ * t_ctid pointing to itself, for tuples migrated from pre-v10 clusters. Here
+ * we are interested only in tuples that are marked with the
+ * HEAP_LATEST_TUPLE flag.
+ */
+#define HeapTupleHeaderHasRootOffset(tup) \
+( \
+	((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0 \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
Attachment: 0001_interesting_attrs_v21.patch (application/octet-stream)
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index b147f64..51c773f 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -96,11 +96,8 @@ static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
 				HeapTuple newtup, HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
-static void HeapSatisfiesHOTandKeyUpdate(Relation relation,
-							 Bitmapset *hot_attrs,
-							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
-							 bool *satisfies_id,
+static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
+							 Bitmapset *interesting_cols,
 							 HeapTuple oldtup, HeapTuple newtup);
 static bool heap_acquire_tuplock(Relation relation, ItemPointer tid,
 					 LockTupleMode mode, LockWaitPolicy wait_policy,
@@ -3471,6 +3468,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *interesting_attrs;
+	Bitmapset  *modified_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3488,10 +3487,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				pagefree;
 	bool		have_tuple_lock = false;
 	bool		iscombo;
-	bool		satisfies_hot;
-	bool		satisfies_key;
-	bool		satisfies_id;
 	bool		use_hot_update = false;
+	bool		hot_attrs_checked = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
 	bool		all_visible_cleared_new = false;
@@ -3517,26 +3514,51 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				 errmsg("cannot update tuples during a parallel operation")));
 
 	/*
-	 * Fetch the list of attributes to be checked for HOT update.  This is
-	 * wasted effort if we fail to update or have to put the new tuple on a
-	 * different page.  But we must compute the list before obtaining buffer
-	 * lock --- in the worst case, if we are doing an update on one of the
-	 * relevant system catalogs, we could deadlock if we try to fetch the list
-	 * later.  In any case, the relcache caches the data so this is usually
-	 * pretty cheap.
+	 * Fetch the list of attributes to be checked for various operations.
 	 *
-	 * Note that we get a copy here, so we need not worry about relcache flush
-	 * happening midway through.
+	 * For HOT considerations, this is wasted effort if we fail to update or
+	 * have to put the new tuple on a different page.  But we must compute the
+	 * list before obtaining buffer lock --- in the worst case, if we are doing
+	 * an update on one of the relevant system catalogs, we could deadlock if
+	 * we try to fetch the list later.  In any case, the relcache caches the
+	 * data so this is usually pretty cheap.
+	 *
+	 * We also need columns used by the replica identity, the columns that
+	 * are considered the "key" of rows in the table, and columns that are
+	 * part of indirect indexes.
+	 *
+	 * Note that we get copies of each bitmap, so we need not worry about
+	 * relcache flush happening midway through.
 	 */
 	hot_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_ALL);
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
 
+
 	block = ItemPointerGetBlockNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
+	interesting_attrs = NULL;
+	/*
+	 * If the page is already full, there is hardly any chance of doing a HOT
+	 * update on this page. It might be wasteful effort to look for index
+	 * column updates only to later reject HOT updates for lack of space in the
+	 * same page. So we are conservative and fetch hot_attrs only if the page
+	 * is not already full. Since we are already holding a pin on the buffer,
+	 * the buffer cannot be cleaned up concurrently; and even if that were
+	 * possible, in the worst case we would merely lose the chance to do a
+	 * HOT update.
+	 */
+	if (!PageIsFull(page))
+	{
+		interesting_attrs = bms_add_members(interesting_attrs, hot_attrs);
+		hot_attrs_checked = true;
+	}
+	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
+
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
 	 * be necessary.  Since we haven't got the lock yet, someone else might be
@@ -3552,7 +3574,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Assert(ItemIdIsNormal(lp));
 
 	/*
-	 * Fill in enough data in oldtup for HeapSatisfiesHOTandKeyUpdate to work
+	 * Fill in enough data in oldtup for HeapDetermineModifiedColumns to work
 	 * properly.
 	 */
 	oldtup.t_tableOid = RelationGetRelid(relation);
@@ -3578,6 +3600,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 		Assert(!(newtup->t_data->t_infomask & HEAP_HASOID));
 	}
 
+	/* Determine columns modified by the update. */
+	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
+												  &oldtup, newtup);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3589,10 +3615,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	 * is updates that don't manipulate key columns, not those that
 	 * serendipitiously arrive at the same key values.
 	 */
-	HeapSatisfiesHOTandKeyUpdate(relation, hot_attrs, key_attrs, id_attrs,
-								 &satisfies_hot, &satisfies_key,
-								 &satisfies_id, &oldtup, newtup);
-	if (satisfies_key)
+	if (!bms_overlap(modified_attrs, key_attrs))
 	{
 		*lockmode = LockTupleNoKeyExclusive;
 		mxact_status = MultiXactStatusNoKeyUpdate;
@@ -3831,6 +3854,8 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(modified_attrs);
+		bms_free(interesting_attrs);
 		return result;
 	}
 
@@ -4133,9 +4158,10 @@ l2:
 		/*
 		 * Since the new tuple is going into the same page, we might be able
 		 * to do a HOT update.  Check if any of the index columns have been
-		 * changed.  If not, then HOT update is possible.
+		 * changed. If the page was already full, we skipped that check and
+		 * cannot do a HOT update; otherwise, if no index column changed, a
+		 * HOT update is possible.
 		 */
-		if (satisfies_hot)
+		if (hot_attrs_checked && !bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
 	}
 	else
@@ -4150,7 +4176,9 @@ l2:
 	 * ExtractReplicaIdentity() will return NULL if nothing needs to be
 	 * logged.
 	 */
-	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup, !satisfies_id, &old_key_copied);
+	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup,
+										   bms_overlap(modified_attrs, id_attrs),
+										   &old_key_copied);
 
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
@@ -4298,13 +4326,15 @@ l2:
 	bms_free(hot_attrs);
 	bms_free(key_attrs);
 	bms_free(id_attrs);
+	bms_free(modified_attrs);
+	bms_free(interesting_attrs);
 
 	return HeapTupleMayBeUpdated;
 }
 
 /*
  * Check if the specified attribute's value is same in both given tuples.
- * Subroutine for HeapSatisfiesHOTandKeyUpdate.
+ * Subroutine for HeapDetermineModifiedColumns.
  */
 static bool
 heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
@@ -4338,7 +4368,7 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 
 	/*
 	 * Extract the corresponding values.  XXX this is pretty inefficient if
-	 * there are many indexed columns.  Should HeapSatisfiesHOTandKeyUpdate do
+	 * there are many indexed columns.  Should HeapDetermineModifiedColumns do
 	 * a single heap_deform_tuple call on each tuple, instead?	But that
 	 * doesn't work for system columns ...
 	 */
@@ -4383,114 +4413,30 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 /*
  * Check which columns are being updated.
  *
- * This simultaneously checks conditions for HOT updates, for FOR KEY
- * SHARE updates, and REPLICA IDENTITY concerns.  Since much of the time they
- * will be checking very similar sets of columns, and doing the same tests on
- * them, it makes sense to optimize and do them together.
- *
- * We receive three bitmapsets comprising the three sets of columns we're
- * interested in.  Note these are destructively modified; that is OK since
- * this is invoked at most once in heap_update.
+ * Given an updated tuple, determine which of the interesting columns have
+ * changed, and return them as a bitmapset.
  *
- * hot_result is set to TRUE if it's okay to do a HOT update (i.e. it does not
- * modified indexed columns); key_result is set to TRUE if the update does not
- * modify columns used in the key; id_result is set to TRUE if the update does
- * not modify columns in any index marked as the REPLICA IDENTITY.
+ * The input bitmapset is destructively modified; that is OK since this is
+ * invoked at most once in heap_update.
  */
-static void
-HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
-							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
-							 bool *satisfies_id,
+static Bitmapset *
+HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
 							 HeapTuple oldtup, HeapTuple newtup)
 {
-	int			next_hot_attnum;
-	int			next_key_attnum;
-	int			next_id_attnum;
-	bool		hot_result = true;
-	bool		key_result = true;
-	bool		id_result = true;
-
-	/* If REPLICA IDENTITY is set to FULL, id_attrs will be empty. */
-	Assert(bms_is_subset(id_attrs, key_attrs));
-	Assert(bms_is_subset(key_attrs, hot_attrs));
-
-	/*
-	 * If one of these sets contains no remaining bits, bms_first_member will
-	 * return -1, and after adding FirstLowInvalidHeapAttributeNumber (which
-	 * is negative!)  we'll get an attribute number that can't possibly be
-	 * real, and thus won't match any actual attribute number.
-	 */
-	next_hot_attnum = bms_first_member(hot_attrs);
-	next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
-	next_key_attnum = bms_first_member(key_attrs);
-	next_key_attnum += FirstLowInvalidHeapAttributeNumber;
-	next_id_attnum = bms_first_member(id_attrs);
-	next_id_attnum += FirstLowInvalidHeapAttributeNumber;
+	int		attnum;
+	Bitmapset *modified = NULL;
 
-	for (;;)
+	while ((attnum = bms_first_member(interesting_cols)) >= 0)
 	{
-		bool		changed;
-		int			check_now;
+		attnum += FirstLowInvalidHeapAttributeNumber;
 
-		/*
-		 * Since the HOT attributes are a superset of the key attributes and
-		 * the key attributes are a superset of the id attributes, this logic
-		 * is guaranteed to identify the next column that needs to be checked.
-		 */
-		if (hot_result && next_hot_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_hot_attnum;
-		else if (key_result && next_key_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_key_attnum;
-		else if (id_result && next_id_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_id_attnum;
-		else
-			break;
-
-		/* See whether it changed. */
-		changed = !heap_tuple_attr_equals(RelationGetDescr(relation),
-										  check_now, oldtup, newtup);
-		if (changed)
-		{
-			if (check_now == next_hot_attnum)
-				hot_result = false;
-			if (check_now == next_key_attnum)
-				key_result = false;
-			if (check_now == next_id_attnum)
-				id_result = false;
-
-			/* if all are false now, we can stop checking */
-			if (!hot_result && !key_result && !id_result)
-				break;
-		}
-
-		/*
-		 * Advance the next attribute numbers for the sets that contain the
-		 * attribute we just checked.  As we work our way through the columns,
-		 * the next_attnum values will rise; but when each set becomes empty,
-		 * bms_first_member() will return -1 and the attribute number will end
-		 * up with a value less than FirstLowInvalidHeapAttributeNumber.
-		 */
-		if (hot_result && check_now == next_hot_attnum)
-		{
-			next_hot_attnum = bms_first_member(hot_attrs);
-			next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
-		if (key_result && check_now == next_key_attnum)
-		{
-			next_key_attnum = bms_first_member(key_attrs);
-			next_key_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
-		if (id_result && check_now == next_id_attnum)
-		{
-			next_id_attnum = bms_first_member(id_attrs);
-			next_id_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
+		if (!heap_tuple_attr_equals(RelationGetDescr(relation),
+								   attnum, oldtup, newtup))
+			modified = bms_add_member(modified,
+									  attnum - FirstLowInvalidHeapAttributeNumber);
 	}
 
-	*satisfies_hot = hot_result;
-	*satisfies_key = key_result;
-	*satisfies_id = id_result;
+	return modified;
 }
 
 /*
#177Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#171)
Re: Patch: Write Amplification Reduction Method (WARM)

On Sat, Mar 25, 2017 at 1:24 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Fri, Mar 24, 2017 at 11:49 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Fri, Mar 24, 2017 at 6:46 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

While looking at this problem, it occurred to me that the assumptions made
for hash indexes are also wrong :-( Hash index has the same problem as
expression indexes have. A change in heap value may not necessarily cause a
change in the hash key. If we don't detect that, we will end up having two
hash identical hash keys with the same TID pointer. This will cause the
duplicate key scans problem since hashrecheck will return true for both the
hash entries.

Isn't it possible to detect duplicate keys in hashrecheck if we
compare both hashkey and tid stored in index tuple with the
corresponding values from heap tuple?

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#178Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Pavan Deolasee (#176)
1 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

Is the WARM tap test suite supposed to work when applied without all the
other patches? I just tried applying that one and running "make check -C
src/test/modules", and it seems to hang after giving "ok 5" for
t/002_warm_stress.pl. (I had to add a Makefile too, attached.)

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachments:

Makefile (text/plain; charset=us-ascii)
#179Robert Haas
robertmhaas@gmail.com
In reply to: Pavan Deolasee (#165)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Mar 23, 2017 at 2:47 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

It's quite hard to say that until we see many more benchmarks. As author of
the patch, I might have got repetitive with my benchmarks. But I've seen
over 50% improvement in TPS even without chain conversion (6 indexes on a 12
column table test).

This seems quite mystifying. What can account for such a large
performance difference in such a pessimal scenario? It seems to me
that without chain conversion, WARM can only apply to each row once
and therefore no sustained performance improvement should be possible
-- unless rows are regularly being moved to new blocks, in which case
those updates would "reset" the ability to again perform an update.
However, one would hope that most updates get done within a single
block, so that the row-moves-to-new-block case wouldn't happen very
often.

I'm perplexed.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#180Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Alvaro Herrera (#178)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 28, 2017 at 1:32 AM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

Is the WARM tap test suite supposed to work when applied without all the
other patches? I just tried applying that one and running "make check -C
src/test/modules", and it seems to hang after giving "ok 5" for
t/002_warm_stress.pl. (I had to add a Makefile too, attached.)

These tests should run without WARM. I wonder though if IPC::Run's
start/pump/finish facility is fully portable. Andrew, in an off-list
conversation, reminded me that there are no (or maybe one) tests currently
using it in Postgres. I've run these tests on OS X and will try on some
Linux platform too.

Thanks,
Pavan
--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#181Bruce Momjian
bruce@momjian.us
In reply to: Robert Haas (#179)
Re: Patch: Write Amplification Reduction Method (WARM)

On Mon, Mar 27, 2017 at 04:29:56PM -0400, Robert Haas wrote:

On Thu, Mar 23, 2017 at 2:47 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

It's quite hard to say that until we see many more benchmarks. As author of
the patch, I might have got repetitive with my benchmarks. But I've seen
over 50% improvement in TPS even without chain conversion (6 indexes on a 12
column table test).

This seems quite mystifying. What can account for such a large
performance difference in such a pessimal scenario? It seems to me
that without chain conversion, WARM can only apply to each row once
and therefore no sustained performance improvement should be possible
-- unless rows are regularly being moved to new blocks, in which case
those updates would "reset" the ability to again perform an update.
However, one would hope that most updates get done within a single
block, so that the row-moves-to-new-block case wouldn't happen very
often.

I'm perplexed.

Yes, I asked the same question in this email:

/messages/by-id/20170321190000.GE16918@momjian.us

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +


#182Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Robert Haas (#179)
1 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 28, 2017 at 1:59 AM, Robert Haas <robertmhaas@gmail.com> wrote:

On Thu, Mar 23, 2017 at 2:47 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

It's quite hard to say that until we see many more benchmarks. As author

of

the patch, I might have got repetitive with my benchmarks. But I've seen
over 50% improvement in TPS even without chain conversion (6 indexes on

a 12

column table test).

This seems quite mystifying. What can account for such a large
performance difference in such a pessimal scenario? It seems to me
that without chain conversion, WARM can only apply to each row once
and therefore no sustained performance improvement should be possible
-- unless rows are regularly being moved to new blocks, in which case
those updates would "reset" the ability to again perform an update.
However, one would hope that most updates get done within a single
block, so that the row-moves-to-new-block case wouldn't happen very
often.

I think you're conflating update chains that stay within a block with
HOT/WARM chains. Even when the entire update chain stays within a block, it
can be made up of multiple HOT/WARM chains, and each of these chains offers
the ability to do one WARM update. So even without chain conversion, every
alternate update will be a WARM update, and the gains are perpetual.

For example, take a simplistic case of a table with just one tuple
and four indexes, where every update modifies the key of just one index.
Assuming no WARM chain conversion, this is what would happen for every
update:

1. WARM update, new entry in just one index
2. Regular update, new entries in all indexes
3. WARM update, new entry in just one index
4. Regular update, new entries in all indexes

At the end of N updates (assuming all fit in the same block), one index
will have N entries and rest will have N/2 entries.

Compare that against master:
1. Regular update, new entries in all indexes
2. Regular update, new entries in all indexes
3. Regular update, new entries in all indexes
4. Regular update, new entries in all indexes

At the end of N updates (assuming all fit in the same block), all indexes
will have N entries. So with WARM we reduce bloat in three of the indexes,
and WARM keeps working in a near-perpetual way even without chain
conversion. If you see the graph I shared earlier (attached again), without
WARM chain conversion the rate of WARM updates settles down to 50%, which is
not surprising given what I explained above.
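To make the arithmetic concrete, here is a small standalone C sketch (illustrative only, not code from the patch; all names are made up) that models the alternating WARM/regular pattern described above:

```c
#include <assert.h>

/*
 * Toy model (not PostgreSQL code): one row, several indexes, every update
 * changes the key of exactly one index.  Without chain conversion a chain
 * can absorb exactly one WARM update, so updates alternate WARM/regular.
 */
typedef struct
{
    int changed_index_entries;  /* entries in the index whose key changes */
    int other_index_entries;    /* entries in each of the other indexes */
} sim_result;

sim_result
simulate_updates(int nupdates)
{
    sim_result r = {1, 1};      /* one entry per index for the initial row */
    int chain_is_warm = 0;      /* current chain already WARM-updated? */
    int i;

    for (i = 0; i < nupdates; i++)
    {
        r.changed_index_entries++;  /* every update hits the changed index */
        if (!chain_is_warm)
            chain_is_warm = 1;      /* WARM update: other indexes untouched */
        else
        {
            r.other_index_entries++;    /* regular update: new root line
                                         * pointer, entries in all indexes */
            chain_is_warm = 0;
        }
    }
    return r;
}
```

After N updates the changed index holds about N entries while the others hold about N/2, matching the 50% steady-state WARM rate described above.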

Thanks,
Pavan
--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

Percentage-WARM-with-time (1).png (image/png)
#183Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Bruce Momjian (#181)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 28, 2017 at 7:49 AM, Bruce Momjian <bruce@momjian.us> wrote:

On Mon, Mar 27, 2017 at 04:29:56PM -0400, Robert Haas wrote:

On Thu, Mar 23, 2017 at 2:47 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

It's quite hard to say that until we see many more benchmarks. As author of
the patch, I might have got repetitive with my benchmarks. But I've seen
over 50% improvement in TPS even without chain conversion (6 indexes on a 12
column table test).

This seems quite mystifying. What can account for such a large
performance difference in such a pessimal scenario? It seems to me
that without chain conversion, WARM can only apply to each row once
and therefore no sustained performance improvement should be possible
-- unless rows are regularly being moved to new blocks, in which case
those updates would "reset" the ability to again perform an update.
However, one would hope that most updates get done within a single
block, so that the row-moves-to-new-block case wouldn't happen very
often.

I'm perplexed.

Yes, I asked the same question in this email:

/messages/by-id/20170321190000.GE16918@momjian.us

And I've answered it so many times by now :-)

Just to add more to what I just said in another email, note that HOT/WARM
chains are created when a new root line pointer is created in the heap (a
line pointer that has an index pointing to it). And a new root line pointer
is created when a non-HOT/non-WARM update is performed. As soon as you do a
non-HOT/non-WARM update, the next update can again be a WARM update even
when everything fits in a single block.

That's why for a workload which doesn't do HOT updates and where not all
index keys are updated, you'll find every alternate update to a row to be a
WARM update, even when there is no chain conversion. That itself can save
lots of index bloat, reduce IO on the index and WAL.

Let me know if it's still not clear and I can draw some diagrams to explain
it.

Thanks,
Pavan
--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#184Bruce Momjian
bruce@momjian.us
In reply to: Pavan Deolasee (#183)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 28, 2017 at 08:04:34AM +0530, Pavan Deolasee wrote:

And I've answered it so many times by now :-)

LOL

Just to add more to what I just said in another email, note that HOT/WARM
chains are created when a new root line pointer is created in the heap (a line
pointer that has an index pointing to it). And a new root line pointer is
created when a non-HOT/non-WARM update is performed. As soon as you do a
non-HOT/non-WARM update, the next update can again be a WARM update even when
everything fits in a single block.

That's why for a workload which doesn't do HOT updates and where not all index
keys are updated, you'll find every alternate update to a row to be a WARM
update, even when there is no chain conversion. That itself can save lots of
index bloat, reduce IO on the index and WAL.

Let me know if it's still not clear and I can draw some diagrams to explain it.

Ah, yes, that does help to explain the 50% because 50% of updates are
now HOT/WARM.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +


#185Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Amit Kapila (#177)
Re: Patch: Write Amplification Reduction Method (WARM)

On Mon, Mar 27, 2017 at 4:45 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Sat, Mar 25, 2017 at 1:24 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Fri, Mar 24, 2017 at 11:49 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Fri, Mar 24, 2017 at 6:46 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

While looking at this problem, it occurred to me that the assumptions made
for hash indexes are also wrong :-( Hash index has the same problem as
expression indexes have. A change in heap value may not necessarily cause a
change in the hash key. If we don't detect that, we will end up having two
identical hash keys with the same TID pointer. This will cause the
duplicate key scans problem since hashrecheck will return true for both the
hash entries.

Isn't it possible to detect duplicate keys in hashrecheck if we
compare both hashkey and tid stored in index tuple with the
corresponding values from heap tuple?

Hmm, I thought that won't work. For example, say we have a tuple (X, Y, Z)
in the heap with a btree index on X and a hash index on Y. Say that is
updated to (X, Y', Z), and we do a WARM update and insert a new entry in
the hash index. Now if Y and Y' both generate the same hashkey, we will
have two identical-looking <hashkey, TID> tuples in the hash index,
leading to duplicate key scans.

I think one way to solve this is to pass both old and new heap values to
amwarminsert and expect each AM to detect duplicates and avoid creating a
WARM pointer if the index keys are exactly the same (we can do that since
there already exists another index tuple with the same keys pointing to the
same root TID).
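A minimal sketch of that idea (function names are hypothetical, not the patch's AM API; the hash function is a deliberately collision-prone stand-in for the index's real one):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical duplicate check before a WARM insert into a hash index:
 * derive the index key from both the old and the new heap value, and skip
 * the insert when the keys are identical, since an index tuple with that
 * key already points at the same root TID.
 */
static uint32_t
toy_hash(uint32_t heap_value)
{
    return heap_value % 16;     /* stand-in for the real hash function */
}

bool
warm_insert_needed(uint32_t old_value, uint32_t new_value)
{
    /* the heap value changed, but the derived hash key may not have */
    return toy_hash(old_value) != toy_hash(new_value);
}
```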

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#186Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Alvaro Herrera (#178)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 28, 2017 at 1:32 AM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

Is the WARM tap test suite supposed to work when applied without all the
other patches? I just tried applied that one and running "make check -C
src/test/modules", and it seems to hang after giving "ok 5" for
t/002_warm_stress.pl. (I had to add a Makefile too, attached.)

Yeah, sorry. Looks like I forgot to git add the Makefile.

BTW just tested on Ubuntu, and it works fine on that too. FWIW I'm using
perl v5.22.1 and IPC::Run 0.94 (assuming I got the versions correctly).

$ make -C src/test/modules/warm/ prove-check
make: Entering directory '/home/ubuntu/postgresql/src/test/modules/warm'
rm -rf /home/ubuntu/postgresql/src/test/modules/warm/tmp_check/log
cd . && TESTDIR='/home/ubuntu/postgresql/src/test/modules/warm'
PATH="/home/ubuntu/postgresql/tmp_install/home/ubuntu/pg-master-install/bin:$PATH"
LD_LIBRARY_PATH="/home/ubuntu/postgresql/tmp_install/home/ubuntu/pg-master-install/lib"
PGPORT='65432'
PG_REGRESS='/home/ubuntu/postgresql/src/test/modules/warm/../../../../src/test/regress/pg_regress'
prove -I ../../../../src/test/perl/ -I . --verbose t/*.pl
t/001_recovery.pl .....
1..2
ok 1 - balanace matches after recovery
ok 2 - sum(delta) matches after recovery
ok
t/002_warm_stress.pl ..
1..10
ok 1 - dummy test passed
ok 2 - Fine match
ok 3 - psql exited normally
ok 4 - psql exited normally
ok 5 - psql exited normally
ok 6 - psql exited normally
ok 7 - psql exited normally
ok 8 - psql exited normally
ok 9 - psql exited normally
ok 10 - Fine match
ok
All tests successful.
Files=2, Tests=12, 22 wallclock secs ( 0.03 usr 0.00 sys + 7.94 cusr
2.41 csys = 10.38 CPU)
Result: PASS
make: Leaving directory '/home/ubuntu/postgresql/src/test/modules/warm'

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#187Amit Kapila
amit.kapila16@gmail.com
In reply to: Pavan Deolasee (#175)
Re: Patch: Write Amplification Reduction Method (WARM)

On Mon, Mar 27, 2017 at 2:19 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Fri, Mar 24, 2017 at 11:49 PM, Pavan Deolasee <pavan.deolasee@gmail.com>
wrote:

On Fri, Mar 24, 2017 at 6:46 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

I was worried for the case if the index is created non-default
collation, will the datumIsEqual() suffice the need. Now again
thinking about it, I think it will because in the index tuple we are
storing the value as in heap tuple. However today it occurred to me
how will this work for toasted index values (index value >
TOAST_INDEX_TARGET). It is mentioned on top of datumIsEqual() that it
probably won't work for toasted values. Have you considered that
point?

No, I haven't and thanks for bringing that up. And now that I think more
about it and see the code, I think the naive way of just comparing index
attribute value against heap values is probably wrong. The example of
TOAST_INDEX_TARGET is one such case, but I wonder if there are other varlena
attributes that we might store differently in heap and index. Like
index_form_tuple() -> heap_fill_tuple seems to do some churning for varlena.
It's not clear to me if index_get_attr will return values which are binary
comparable to heap values. I wonder if calling index_form_tuple on the heap
values, fetching attributes via index_get_attr on both index tuples and then
doing a binary compare is a more robust idea. Or maybe that's just
duplicating effort.

While looking at this problem, it occurred to me that the assumptions made
for hash indexes are also wrong :-( Hash index has the same problem as
expression indexes have. A change in heap value may not necessarily cause a
change in the hash key. If we don't detect that, we will end up having two
hash identical hash keys with the same TID pointer. This will cause the
duplicate key scans problem since hashrecheck will return true for both the
hash entries. That's a bummer as far as supporting WARM for hash indexes is
concerned, unless we find a way to avoid duplicate index entries.

Revised patches are attached. I've added a few more regression tests which
demonstrates the problems with compressed and toasted attributes. I've now
implemented the idea of creating index tuple from heap values before doing
binary comparison using datumIsEqual. This seems to work ok and I see no
reason this should not be robust.

As asked previously, can you explain on what basis you consider it
robust? The comments on top of datumIsEqual() clearly
indicate the danger of using it for toasted values ("Also, it will
probably not give the answer you want if either datum has been
'toasted'."). If you think it is robust because we are using it during
heap_update to find modified columns, then I think that is not the right
comparison, because there we are comparing a compressed value (of the old
tuple) with an uncompressed value (of the new tuple), which should always
give the result false.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#188Amit Kapila
amit.kapila16@gmail.com
In reply to: Pavan Deolasee (#175)
Re: Patch: Write Amplification Reduction Method (WARM)

On Mon, Mar 27, 2017 at 2:19 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Fri, Mar 24, 2017 at 11:49 PM, Pavan Deolasee <pavan.deolasee@gmail.com>
wrote:

On Fri, Mar 24, 2017 at 6:46 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

I was worried for the case if the index is created non-default
collation, will the datumIsEqual() suffice the need. Now again
thinking about it, I think it will because in the index tuple we are
storing the value as in heap tuple. However today it occurred to me
how will this work for toasted index values (index value >
TOAST_INDEX_TARGET). It is mentioned on top of datumIsEqual() that it
probably won't work for toasted values. Have you considered that
point?

No, I haven't and thanks for bringing that up. And now that I think more
about it and see the code, I think the naive way of just comparing index
attribute value against heap values is probably wrong. The example of
TOAST_INDEX_TARGET is one such case, but I wonder if there are other varlena
attributes that we might store differently in heap and index. Like
index_form_tuple() -> heap_fill_tuple seems to do some churning for varlena.
It's not clear to me if index_get_attr will return values which are binary
comparable to heap values. I wonder if calling index_form_tuple on the heap
values, fetching attributes via index_get_attr on both index tuples and then
doing a binary compare is a more robust idea. Or maybe that's just
duplicating effort.

While looking at this problem, it occurred to me that the assumptions made
for hash indexes are also wrong :-( Hash index has the same problem as
expression indexes have. A change in heap value may not necessarily cause a
change in the hash key. If we don't detect that, we will end up having two
hash identical hash keys with the same TID pointer. This will cause the
duplicate key scans problem since hashrecheck will return true for both the
hash entries. That's a bummer as far as supporting WARM for hash indexes is
concerned, unless we find a way to avoid duplicate index entries.

Revised patches are attached.

Noted few cosmetic issues in 0005_warm_updates_v21:

1.
pruneheap.c(939): warning C4098: 'heap_get_root_tuples' : 'void'
function returning a value

2.
+ *  HCWC_WARM_UPDATED_TUPLE - a tuple with HEAP_WARM_UPDATED is found somewhere
+ *    in the chain. Note that when a tuple is WARM
+ *    updated, both old and new versions are marked
+ *    with this flag/
+ *
+ *  HCWC_WARM_TUPLE  - a tuple with HEAP_WARM_TUPLE is found somewhere in
+ *  the chain.
+ *
+ *  HCWC_CLEAR_TUPLE - a tuple without HEAP_WARM_TUPLE is found somewhere in
+ *   the chain.

The description of all three flags is the same.

3.
+ *  HCWC_WARM_UPDATED_TUPLE - a tuple with HEAP_WARM_UPDATED is found somewhere
+ *    in the chain. Note that when a tuple is WARM
+ *    updated, both old and new versions are marked
+ *    with this flag/

Spurious '/' at end of line.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#189Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#187)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 28, 2017 at 4:05 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Mon, Mar 27, 2017 at 2:19 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Fri, Mar 24, 2017 at 11:49 PM, Pavan Deolasee <pavan.deolasee@gmail.com>
wrote:

On Fri, Mar 24, 2017 at 6:46 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

I was worried for the case if the index is created non-default
collation, will the datumIsEqual() suffice the need. Now again
thinking about it, I think it will because in the index tuple we are
storing the value as in heap tuple. However today it occurred to me
how will this work for toasted index values (index value >
TOAST_INDEX_TARGET). It is mentioned on top of datumIsEqual() that it
probably won't work for toasted values. Have you considered that
point?

No, I haven't and thanks for bringing that up. And now that I think more
about it and see the code, I think the naive way of just comparing index
attribute value against heap values is probably wrong. The example of
TOAST_INDEX_TARGET is one such case, but I wonder if there are other varlena
attributes that we might store differently in heap and index. Like
index_form_tuple() -> heap_fill_tuple seems to do some churning for varlena.
It's not clear to me if index_get_attr will return values which are binary
comparable to heap values. I wonder if calling index_form_tuple on the heap
values, fetching attributes via index_get_attr on both index tuples and then
doing a binary compare is a more robust idea. Or maybe that's just
duplicating effort.

While looking at this problem, it occurred to me that the assumptions made
for hash indexes are also wrong :-( Hash index has the same problem as
expression indexes have. A change in heap value may not necessarily cause a
change in the hash key. If we don't detect that, we will end up having two
hash identical hash keys with the same TID pointer. This will cause the
duplicate key scans problem since hashrecheck will return true for both the
hash entries. That's a bummer as far as supporting WARM for hash indexes is
concerned, unless we find a way to avoid duplicate index entries.

Revised patches are attached. I've added a few more regression tests which
demonstrates the problems with compressed and toasted attributes. I've now
implemented the idea of creating index tuple from heap values before doing
binary comparison using datumIsEqual. This seems to work ok and I see no
reason this should not be robust.

As asked previously, can you explain on what basis you consider it
robust? The comments on top of datumIsEqual() clearly
indicate the danger of using it for toasted values ("Also, it will
probably not give the answer you want if either datum has been
'toasted'."). If you think it is robust because we are using it during
heap_update to find modified columns, then I think that is not the right
comparison, because there we are comparing a compressed value (of the old
tuple) with an uncompressed value (of the new tuple), which should always
give the result false.

Yet another point to think about is whether the recheck implementation will
work correctly when the heap tuple itself is toasted. Consider a case
where a table has an integer and a text column (t1 (c1 int, c2 text)) and we
have indexes on the c1 and c2 columns. Now, insert a tuple such that the
text column has a value of more than 2 or 3K, which will make it stored in
compressed form in the heap (and the size of the compressed value is still
more than TOAST_INDEX_TARGET). For such a heap insert, we will pass
the actual value of the column to index_form_tuple during the index insert.
However, during recheck, when we fetch the value of c2 from the heap tuple
and pass it to index_form_tuple, the value is already in compressed form and
index_form_tuple might again try to compress it because the size will
still be greater than TOAST_INDEX_TARGET, and if it does so, it might
make the recheck fail.
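The failure mode can be demonstrated outside PostgreSQL with a toy run-length "compressor" standing in for pglz (this is only an illustration of why compressing twice breaks a byte-wise comparison, not patch code; all names are made up):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Toy RLE compressor emitting (count, byte) pairs; stands in for pglz. */
static size_t
toy_compress(const unsigned char *in, size_t len, unsigned char *out)
{
    size_t n = 0, i = 0;

    while (i < len)
    {
        unsigned char run = 1;

        while (i + run < len && in[i + run] == in[i] && run < 255)
            run++;
        out[n++] = run;
        out[n++] = in[i];
        i += run;
    }
    return n;
}

/*
 * The index tuple was built from the compressed-once value; if recheck
 * compresses the already-compressed heap value again, the bytes differ
 * and the comparison fails.
 */
bool
index_keys_match_after_recompression(void)
{
    const unsigned char heap_value[] = "aaaaaaaabbbbbbbb";
    unsigned char once[64], twice[64];
    size_t n1 = toy_compress(heap_value, sizeof(heap_value) - 1, once);
    size_t n2 = toy_compress(once, n1, twice);

    return n1 == n2 && memcmp(once, twice, n1) == 0;
}
```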

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#190Robert Haas
robertmhaas@gmail.com
In reply to: Pavan Deolasee (#182)
Re: Patch: Write Amplification Reduction Method (WARM)

On Mon, Mar 27, 2017 at 10:25 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Tue, Mar 28, 2017 at 1:59 AM, Robert Haas <robertmhaas@gmail.com> wrote:

On Thu, Mar 23, 2017 at 2:47 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

It's quite hard to say that until we see many more benchmarks. As author
of
the patch, I might have got repetitive with my benchmarks. But I've seen
over 50% improvement in TPS even without chain conversion (6 indexes on
a 12
column table test).

This seems quite mystifying. What can account for such a large
performance difference in such a pessimal scenario? It seems to me
that without chain conversion, WARM can only apply to each row once
and therefore no sustained performance improvement should be possible
-- unless rows are regularly being moved to new blocks, in which case
those updates would "reset" the ability to again perform an update.
However, one would hope that most updates get done within a single
block, so that the row-moves-to-new-block case wouldn't happen very
often.

I think you're confusing between update chains that stay within a block vs
HOT/WARM chains. Even when the entire update chain stays within a block, it
can be made up of multiple HOT/WARM chains and each of these chains offer
ability to do one WARM update. So even without chain conversion, every
alternate update will be a WARM update. So the gains are perpetual.

You're right, I had overlooked that. But then I'm confused: how does
the chain conversion stuff help as much as it does? You said that you
got a 50% improvement from WARM, because we got to skip half the index
updates. But then you said with chain conversion you got an
improvement of more like 100%. However, I would think that on this
workload, chain conversion shouldn't save much. If you're sweeping
through the database constantly performing updates, the updates ought
to be a lot more frequent than the vacuums.

No?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#191David Steele
david@pgmasters.net
In reply to: Robert Haas (#190)
Re: Patch: Write Amplification Reduction Method (WARM)

Hi Pavan,

On 3/28/17 11:04 AM, Robert Haas wrote:

On Mon, Mar 27, 2017 at 10:25 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Tue, Mar 28, 2017 at 1:59 AM, Robert Haas <robertmhaas@gmail.com> wrote:

On Thu, Mar 23, 2017 at 2:47 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

It's quite hard to say that until we see many more benchmarks. As author
of
the patch, I might have got repetitive with my benchmarks. But I've seen
over 50% improvement in TPS even without chain conversion (6 indexes on
a 12
column table test).

This seems quite mystifying. What can account for such a large
performance difference in such a pessimal scenario? It seems to me
that without chain conversion, WARM can only apply to each row once
and therefore no sustained performance improvement should be possible
-- unless rows are regularly being moved to new blocks, in which case
those updates would "reset" the ability to again perform an update.
However, one would hope that most updates get done within a single
block, so that the row-moves-to-new-block case wouldn't happen very
often.

I think you're confusing between update chains that stay within a block vs
HOT/WARM chains. Even when the entire update chain stays within a block, it
can be made up of multiple HOT/WARM chains and each of these chains offer
ability to do one WARM update. So even without chain conversion, every
alternate update will be a WARM update. So the gains are perpetual.

You're right, I had overlooked that. But then I'm confused: how does
the chain conversion stuff help as much as it does? You said that you
got a 50% improvement from WARM, because we got to skip half the index
updates. But then you said with chain conversion you got an
improvement of more like 100%. However, I would think that on this
workload, chain conversion shouldn't save much. If you're sweeping
through the database constantly performing updates, the updates ought
to be a lot more frequent than the vacuums.

No?

It appears that a patch is required to address Amit's review. I have
marked this as "Waiting for Author".

Thanks,
--
-David
david@pgmasters.net


#192Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Amit Kapila (#187)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 28, 2017 at 4:05 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

As asked previously, can you explain on what basis you consider it
robust? The comments on top of datumIsEqual() clearly
indicate the danger of using it for toasted values ("Also, it will
probably not give the answer you want if either datum has been
'toasted'.").

Hmm. I don't see why the new code in recheck is unsafe. The index values
themselves can't be toasted (IIUC), but they can be compressed.
index_form_tuple() already untoasts any toasted heap attributes and
compresses them if needed. So once we pass heap values via
index_form_tuple() we should have exactly the same index values as they
were inserted. Or am I missing something obvious here?
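A sketch of why routing both sides through the same normalization makes the comparison stable (illustrative only, not patch code: the marker byte stands in for a varlena header flag, and the guard makes normalization a no-op on values already in index form):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define NORMALIZED_MARK 0x01    /* stand-in for a varlena header flag */

/* Stand-in for index_form_tuple's handling: no-op if already in index form. */
static size_t
toy_normalize(const unsigned char *in, size_t len, unsigned char *out)
{
    if (len > 0 && in[0] == NORMALIZED_MARK)
    {
        memcpy(out, in, len);   /* already normalized: copy through */
        return len;
    }
    out[0] = NORMALIZED_MARK;   /* a real implementation would compress here */
    memcpy(out + 1, in, len);
    return len + 1;
}

/* Normalize both the heap value and the index value, then compare bytes. */
bool
index_values_equal(const unsigned char *a, size_t alen,
                   const unsigned char *b, size_t blen)
{
    unsigned char na[64], nb[64];
    size_t n1 = toy_normalize(a, alen, na);
    size_t n2 = toy_normalize(b, blen, nb);

    return n1 == n2 && memcmp(na, nb, n1) == 0;
}

/* Mixed case: one side already in index form, the other a raw heap value. */
bool
mixed_forms_match(void)
{
    const unsigned char already[] = {NORMALIZED_MARK, 'a', 'b', 'c'};

    return index_values_equal(already, sizeof(already),
                              (const unsigned char *) "abc", 3);
}
```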

If you think that because we are using it during
heap_update to find modified columns, then I think that is not right
comparison, because there we are comparing compressed value (of old
tuple) with uncompressed value (of new tuple) which should always give
result as false.

Hmm, this seems like a problem. While HOT could tolerate occasional false
results (i.e. reporting a heap column as modified even though it is not),
WARM assumes that if the heap has reported different values, then they
had better be different and should result in different index values.
Because that's how recheck later works. Since index expressions are not
supported, I wonder if toasted heap values are the only remaining problem
in this area. So heap_tuple_attr_equals() should first detoast the heap
values and then do the comparison. I already have a test case that fails
for this reason, so let me try this approach.
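A sketch of the detoast-before-compare idea (illustrative only: the marker byte and (count, byte) encoding stand in for varlena compression, and heap_attrs_equal is a made-up name, not the patch's heap_tuple_attr_equals):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define COMPRESSED_MARK 0x02    /* stand-in for a varlena compression flag */

/* Expand a toy-compressed value ((count, byte) pairs); plain values copy. */
static size_t
toy_detoast(const unsigned char *in, size_t len, unsigned char *out)
{
    size_t n = 0, i;

    if (len == 0 || in[0] != COMPRESSED_MARK)
    {
        memcpy(out, in, len);
        return len;
    }
    for (i = 1; i + 1 < len; i += 2)
    {
        memset(out + n, in[i + 1], in[i]);
        n += in[i];
    }
    return n;
}

/* Detoast both sides before the binary comparison. */
bool
heap_attrs_equal(const unsigned char *a, size_t alen,
                 const unsigned char *b, size_t blen)
{
    unsigned char da[256], db[256];
    size_t n1 = toy_detoast(a, alen, da);
    size_t n2 = toy_detoast(b, blen, db);

    return n1 == n2 && memcmp(da, db, n1) == 0;
}

/* Compressed old value vs. uncompressed new value: equal after detoasting,
 * whereas a naive byte comparison of the raw datums would report unequal. */
bool
compressed_vs_plain_match(void)
{
    const unsigned char compressed[] = {COMPRESSED_MARK, 4, 'a'};

    return heap_attrs_equal(compressed, sizeof(compressed),
                            (const unsigned char *) "aaaa", 4);
}
```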

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#193Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Amit Kapila (#188)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 28, 2017 at 4:07 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

Noted few cosmetic issues in 0005_warm_updates_v21:

1.
pruneheap.c(939): warning C4098: 'heap_get_root_tuples' : 'void'
function returning a value

Thanks. Will fix.

2.
+ *  HCWC_WARM_UPDATED_TUPLE - a tuple with HEAP_WARM_UPDATED is found somewhere
+ *    in the chain. Note that when a tuple is WARM
+ *    updated, both old and new versions are marked
+ *    with this flag/
+ *
+ *  HCWC_WARM_TUPLE  - a tuple with HEAP_WARM_TUPLE is found somewhere in
+ *  the chain.
+ *
+ *  HCWC_CLEAR_TUPLE - a tuple without HEAP_WARM_TUPLE is found somewhere in
+ *   the chain.

The description of all three flags is the same.

Well the description is different (and correct), but given that it confused
you, I think I should rewrite those comments. Will do.

3.
+ *  HCWC_WARM_UPDATED_TUPLE - a tuple with HEAP_WARM_UPDATED is found
somewhere
+ *    in the chain. Note that when a tuple is WARM
+ *    updated, both old and new versions are marked
+ *    with this flag/

Spurious '/' at end of line.

Thanks. Will fix.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#194Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Amit Kapila (#189)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 28, 2017 at 7:04 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

For such a heap insert, we will pass
the actual value of the column to index_form_tuple during the index insert.
However, during recheck, when we fetch the value of c2 from the heap tuple
and pass it to index_form_tuple, the value is already in compressed form and
index_form_tuple might again try to compress it because the size will
still be greater than TOAST_INDEX_TARGET, and if it does so, it might
make the recheck fail.

Would it? I thought the "if
(!VARATT_IS_EXTENDED(DatumGetPointer(untoasted_values[i])))" check should
prevent that. But I could be reading those macros wrong. They are largely
uncommented and it's not clear what each of those VARATT_* macros does.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#195Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Robert Haas (#190)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 28, 2017 at 8:34 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Mon, Mar 27, 2017 at 10:25 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Tue, Mar 28, 2017 at 1:59 AM, Robert Haas <robertmhaas@gmail.com>

wrote:

On Thu, Mar 23, 2017 at 2:47 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

It's quite hard to say that until we see many more benchmarks. As author of
the patch, I might have got repetitive with my benchmarks. But I've seen
over 50% improvement in TPS even without chain conversion (6 indexes on a 12
column table test).

This seems quite mystifying. What can account for such a large
performance difference in such a pessimal scenario? It seems to me
that without chain conversion, WARM can only apply to each row once
and therefore no sustained performance improvement should be possible
-- unless rows are regularly being moved to new blocks, in which case
those updates would "reset" the ability to again perform an update.
However, one would hope that most updates get done within a single
block, so that the row-moves-to-new-block case wouldn't happen very
often.

I think you're confusing between update chains that stay within a block vs
HOT/WARM chains. Even when the entire update chain stays within a block, it
can be made up of multiple HOT/WARM chains and each of these chains offer
ability to do one WARM update. So even without chain conversion, every
alternate update will be a WARM update. So the gains are perpetual.

You're right, I had overlooked that. But then I'm confused: how does
the chain conversion stuff help as much as it does? You said that you
got a 50% improvement from WARM, because we got to skip half the index
updates. But then you said with chain conversion you got an
improvement of more like 100%. However, I would think that on this
workload, chain conversion shouldn't save much. If you're sweeping
through the database constantly performing updates, the updates ought
to be a lot more frequent than the vacuums.

No?

These tests were done on a very large table of 80M rows. The table itself
was wide with 15 columns and a few indexes. So in an 8hr test, master could
do only 55M updates whereas WARM did 105M updates. There were 4 autovacuum
cycles in both these runs. So while there were many updates, I am sure
autovacuum must have helped to increase the percentage of WARM updates
(from ~50% after steady state to ~67% after steady state). Also I said more
than 50%, but it was probably closer to 65%.

Unfortunately these tests were done on different hardware, with different
settings and even slightly different scale factors. So they may not be
exactly comparable. But there is no doubt chain conversion will help to
some extent. I'll repeat the benchmark with chain conversion turned off and
report the exact difference.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#196Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Pavan Deolasee (#175)
1 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

I pushed 0002 after some makeup, since it's just cosmetic and not
controversial. Here's 0003 rebased on top of it.

(Also, I took out the gin and gist changes: it would be wrong to change
that unconditionally, because the 0xFFFF pattern appears in indexes that
would be pg_upgraded. We need a different strategy, if we want to
enable WARM on GiST/GIN indexes.)

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachments:

0001-Free-3-bits-of-ItemPointerData.ip_posid.patch (text/plain; charset=us-ascii)
From f6ba238dd46416eb29ac43dadac0c69a75f106fe Mon Sep 17 00:00:00 2001
From: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date: Tue, 28 Mar 2017 17:39:26 -0300
Subject: [PATCH] Free 3 bits of ItemPointerData.ip_posid

Since we limit block size to 32 kB, the highest offset number used in a
bufpage.h-organized page is 5461, which can be represented with 13 bits.
Therefore, the upper 3 bits in 16-bit ItemPointerData.ip_posid are
unused and this commit reserves them for other purposes.

Author: Pavan Deolasee
---
 src/include/access/htup_details.h |  2 +-
 src/include/storage/itemptr.h     | 30 +++++++++++++++++++++++++++---
 src/include/storage/off.h         | 11 ++++++++++-
 3 files changed, 38 insertions(+), 5 deletions(-)

diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 7b6285d..d3cc0ad 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -282,7 +282,7 @@ struct HeapTupleHeaderData
  * than MaxOffsetNumber, so that it can be distinguished from a valid
  * offset number in a regular item pointer.
  */
-#define SpecTokenOffsetNumber		0xfffe
+#define SpecTokenOffsetNumber		OffsetNumberPrev(OffsetNumberMask)
 
 /*
  * HeapTupleHeader accessor macros
diff --git a/src/include/storage/itemptr.h b/src/include/storage/itemptr.h
index c21d2ad..74eed4e 100644
--- a/src/include/storage/itemptr.h
+++ b/src/include/storage/itemptr.h
@@ -57,7 +57,7 @@ typedef ItemPointerData *ItemPointer;
  *		True iff the disk item pointer is not NULL.
  */
 #define ItemPointerIsValid(pointer) \
-	((bool) (PointerIsValid(pointer) && ((pointer)->ip_posid != 0)))
+	((bool) (PointerIsValid(pointer) && (((pointer)->ip_posid & OffsetNumberMask) != 0)))
 
 /*
  * ItemPointerGetBlockNumberNoCheck
@@ -84,7 +84,7 @@ typedef ItemPointerData *ItemPointer;
  */
 #define ItemPointerGetOffsetNumberNoCheck(pointer) \
 ( \
-	(pointer)->ip_posid \
+	((pointer)->ip_posid & OffsetNumberMask) \
 )
 
 /*
@@ -98,6 +98,30 @@ typedef ItemPointerData *ItemPointer;
 )
 
 /*
+ * Get the flags stored in high order bits in the OffsetNumber.
+ */
+#define ItemPointerGetFlags(pointer) \
+( \
+	((pointer)->ip_posid & ~OffsetNumberMask) >> OffsetNumberBits \
+)
+
+/*
+ * Set the flag bits. We first left-shift since flags are defined starting 0x01
+ */
+#define ItemPointerSetFlags(pointer, flags) \
+( \
+	((pointer)->ip_posid |= ((flags) << OffsetNumberBits)) \
+)
+
+/*
+ * Clear all flags.
+ */
+#define ItemPointerClearFlags(pointer) \
+( \
+	((pointer)->ip_posid &= OffsetNumberMask) \
+)
+
+/*
  * ItemPointerSet
  *		Sets a disk item pointer to the specified block and offset.
  */
@@ -105,7 +129,7 @@ typedef ItemPointerData *ItemPointer;
 ( \
 	AssertMacro(PointerIsValid(pointer)), \
 	BlockIdSet(&((pointer)->ip_blkid), blockNumber), \
-	(pointer)->ip_posid = offNum \
+	(pointer)->ip_posid = (offNum) \
 )
 
 /*
diff --git a/src/include/storage/off.h b/src/include/storage/off.h
index fe8638f..f058fe1 100644
--- a/src/include/storage/off.h
+++ b/src/include/storage/off.h
@@ -26,7 +26,16 @@ typedef uint16 OffsetNumber;
 #define InvalidOffsetNumber		((OffsetNumber) 0)
 #define FirstOffsetNumber		((OffsetNumber) 1)
 #define MaxOffsetNumber			((OffsetNumber) (BLCKSZ / sizeof(ItemIdData)))
-#define OffsetNumberMask		(0xffff)		/* valid uint16 bits */
+
+/*
+ * The biggest BLCKSZ we support is 32kB, and each ItemId takes 6 bytes.
+ * That limits the number of line pointers in a page to 32kB/6B = 5461.
+ * Therefore, 13 bits in OffsetNumber are enough to represent all valid
+ * on-disk line pointers.  Hence, we can reserve the high-order bits in
+ * OffsetNumber for other purposes.
+ */
+#define OffsetNumberBits		13
+#define OffsetNumberMask		((((uint16) 1) << OffsetNumberBits) - 1)
 
 /* ----------------
  *		support macros
-- 
2.1.4
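The bit layout this patch introduces (13 low-order bits for the line pointer offset, 3 high-order bits reserved for flags) can be exercised with a small standalone C sketch. These are simplified stand-ins, not the actual PostgreSQL macros:

```c
#include <stdint.h>

/* Simplified stand-ins for the patch's OffsetNumber bit layout:
 * the 13 low-order bits hold the line pointer offset (max 5461),
 * the 3 high-order bits are reserved for flags. */
enum
{
	OFFSET_NUMBER_BITS = 13,
	OFFSET_NUMBER_MASK = (1u << OFFSET_NUMBER_BITS) - 1	/* 0x1FFF */
};

/* Pack an offset and flag bits into one 16-bit ip_posid value,
 * roughly as ItemPointerSet followed by ItemPointerSetFlags would. */
static inline uint16_t
pack_posid(uint16_t offnum, uint16_t flags)
{
	return (uint16_t) ((offnum & OFFSET_NUMBER_MASK) |
					   (flags << OFFSET_NUMBER_BITS));
}

/* Mirror of ItemPointerGetOffsetNumberNoCheck: strip the flag bits. */
static inline uint16_t
posid_offset(uint16_t posid)
{
	return posid & OFFSET_NUMBER_MASK;
}

/* Mirror of ItemPointerGetFlags: shift the flag bits down. */
static inline uint16_t
posid_flags(uint16_t posid)
{
	return posid >> OFFSET_NUMBER_BITS;
}
```

Since the largest valid offset (5461) fits in 13 bits, packing and unpacking round-trips losslessly for any on-disk line pointer.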

#197Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Alvaro Herrera (#196)
6 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Mar 29, 2017 at 3:42 AM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

I pushed 0002 after some cleanup, since it's just cosmetic and not
controversial.

Thanks. I think your patch for tracking interesting attributes seems OK too
now that the performance issue has been addressed. Even though we can still
improve it further, Mithun confirmed that there is no significant regression
anymore, and in fact for one artificial case the patch does better than even
master.

Here's 0003 rebased on top of it.

(Also, I took out the gin and gist changes: it would be wrong to change
that unconditionally, because the 0xFFFF pattern appears in indexes that
would be pg_upgraded. We need a different strategy, if we want to
enable WARM on GiST/GIN indexes.)

Yeah, those changes would have broken pg_upgraded clusters, so this looks
good. But the rebased patch throws an assertion failure.
ItemPointerGetOffsetNumberNoCheck masks out the high-order 3 bits and
returns the rest, but since GIN continues to store ip_posid values greater
than OffsetNumberMask, the masking causes problems. Maybe we can teach
GinItemPointerGetOffsetNumber to fetch the flags separately and add them
back to what ItemPointerGetOffsetNumberNoCheck returns. That avoids
referencing ip_posid directly from this code.
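A rough C sketch of that idea (helper names hypothetical, reduced to plain uint16 arithmetic): the generic masked getter truncates GIN's oversized ip_posid values, while a GIN-specific getter that re-adds the flag bits round-trips them.

```c
#include <stdint.h>

#define OFFSET_NUMBER_BITS	13
#define OFFSET_NUMBER_MASK	((uint16_t) ((1u << OFFSET_NUMBER_BITS) - 1))

/* Generic getters, mirroring ItemPointerGetOffsetNumberNoCheck and
 * ItemPointerGetFlags from the patch. */
static inline uint16_t
offset_nocheck(uint16_t ip_posid)
{
	return ip_posid & OFFSET_NUMBER_MASK;
}

static inline uint16_t
flags_of(uint16_t ip_posid)
{
	return ip_posid >> OFFSET_NUMBER_BITS;
}

/* Hypothetical GIN-specific getter: re-add the flag bits on top of the
 * masked offset, so values above OFFSET_NUMBER_MASK (as GIN stores in
 * posting lists) survive the round trip without reading ip_posid
 * directly. */
static inline uint16_t
gin_get_offset(uint16_t ip_posid)
{
	return (uint16_t) (offset_nocheck(ip_posid) |
					   (flags_of(ip_posid) << OFFSET_NUMBER_BITS));
}
```

For example, 0xFFFE comes back as 0x1FFE from the generic getter but intact from the GIN-specific one.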

BTW we have messed up the patch names a bit here. You applied 0003 from v21
and rebased 0004, but the rebased patch was named
0001-Free-3-bits-of-ItemPointerData.ip_posid.patch. I'm reverting to the
earlier names, so the rebased v22 set of patches is attached.

0001_interesting_attrs_v22.patch - Alvaro's patch simplifying the
interesting-attributes checks. I think this has settled down.

0002_track_root_lp_v22 - We probably need to decide whether it's worth
saving a bit in the tuple header at the cost of the additional work of
finding the root tuple during a WARM update.

0004_Free-3-bits-of-ItemPointerData.ip_posid_v22 - A slight update to
Alvaro's rebased version posted yesterday

0005_warm_updates_v22 - Main WARM patch. Addresses all review comments so
far and includes fixes for toasted value handling

0007_vacuum_enhancements_v22 - VACUUM enhancements to control WARM cleanup.
This now also includes the memory usage changes. The dead tuples and WARM
chains are tracked in a single work area, from two ends. When these ends
meet, we do a round of index cleanup. IMO this should give us optimal
utilisation of the available memory, regardless of whether we are doing
WARM cleanup or not and of the percentages of dead tuples and WARM chains.

0006_warm_taptests_v22 - Alvaro reported a missing Makefile. It also seemed
that he wanted the module renamed to avoid the "warm" reference, so that is
done. But Alvaro is seeing hangs with the tests in his environment, so that
probably needs some investigation. It works for me with IPC::Run 0.94.
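The two-ended work area described for the vacuum enhancements can be sketched as follows (a simplified illustration with hypothetical names, not the patch's actual data structures): dead-tuple TIDs fill the array from the front, WARM chain pointers from the back, and an index cleanup round is triggered when the two ends meet.

```c
#include <stdbool.h>

#define WORKAREA_SLOTS 256

/* Hypothetical single work area filled from both ends: dead-tuple
 * entries grow from the front, WARM chain entries from the back. */
typedef struct
{
	int		capacity;		/* usable slots, <= WORKAREA_SLOTS */
	int		n_dead;			/* entries used from the front */
	int		n_warm;			/* entries used from the back */
	long	items[WORKAREA_SLOTS];	/* stand-in for ItemPointerData */
} WorkArea;

static bool
workarea_full(const WorkArea *wa)
{
	return wa->n_dead + wa->n_warm >= wa->capacity;
}

/* Both recorders return true when the two ends have met, i.e. when the
 * caller should run a round of index cleanup and reset the area. */
static bool
record_dead_tuple(WorkArea *wa, long tid)
{
	wa->items[wa->n_dead++] = tid;
	return workarea_full(wa);
}

static bool
record_warm_chain(WorkArea *wa, long root_tid)
{
	wa->items[wa->capacity - ++wa->n_warm] = root_tid;
	return workarea_full(wa);
}
```

Because neither kind of entry has a fixed share of the area, memory is used fully whether a run is dominated by dead tuples, by WARM chains, or by a mix.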

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0006_warm_taptests_v22.patch (application/octet-stream)
diff --git a/src/test/modules/indexing/Makefile b/src/test/modules/indexing/Makefile
new file mode 100644
index 0000000..9853005
--- /dev/null
+++ b/src/test/modules/indexing/Makefile
@@ -0,0 +1,19 @@
+# src/test/modules/indexing/Makefile
+
+REGRESS = indexing
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/test/modules/indexing
+top_builddir = ../../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/contrib/contrib-global.mk
+endif
+
+check: prove-check
+
+prove-check:
+	$(prove_check)
diff --git a/src/test/modules/indexing/t/001_recovery.pl b/src/test/modules/indexing/t/001_recovery.pl
new file mode 100644
index 0000000..2a76830
--- /dev/null
+++ b/src/test/modules/indexing/t/001_recovery.pl
@@ -0,0 +1,50 @@
+# Single-node test: run workload, crash, recover and run sanity check
+
+use strict;
+use warnings;
+
+use TestLib;
+use Test::More tests => 2;
+use PostgresNode;
+
+my $node = get_new_node('main');
+$node->init;
+$node->start;
+
+# Create a table, do some WARM updates and then restart
+$node->safe_psql('postgres',
+	'create table accounts (aid int unique, branch int, balance bigint) with (fillfactor=98)');
+$node->safe_psql('postgres',
+	'create table history (aid int, delta int)');
+$node->safe_psql('postgres',
+	'insert into accounts select generate_series(1,10000), (random()*1000)::int % 10, 0');
+$node->safe_psql('postgres',
+	'create index accounts_bal_indx on accounts(balance)');
+
+for( $a = 1; $a <= 1000; $a = $a + 1 ) {
+	my $aid1 = int(rand(10000)) + 1;
+	my $aid2 = int(rand(10000)) + 1;
+	my $balance = int(rand(99999));
+	$node->safe_psql('postgres',
+		"begin;
+		 update accounts set balance = balance + $balance where aid = $aid1;
+		 update accounts set balance = balance - $balance where aid = $aid2;
+		 insert into history values ($aid1, $balance);
+		 insert into history values ($aid2, 0 - $balance);
+		 end;");
+}
+
+# Verify that the balances are consistent after crash recovery
+$node->stop('immediate');
+$node->start;
+
+my $recovered_balance = $node->safe_psql('postgres', 'select sum(balance) from accounts');
+my $total_delta = $node->safe_psql('postgres', 'select sum(delta) from history');
+
+# since delta is credited to one account and debited from the other, we expect
+# the sum(balance) to stay zero.
+is($recovered_balance, 0, 'balance matches after recovery');
+
+# A positive and a negative value is inserted in the history table. Hence the
+# sum(delta) should remain zero.
+is($total_delta, 0, 'sum(delta) matches after recovery');
diff --git a/src/test/modules/indexing/t/002_indexing_stress.pl b/src/test/modules/indexing/t/002_indexing_stress.pl
new file mode 100644
index 0000000..cf83f19
--- /dev/null
+++ b/src/test/modules/indexing/t/002_indexing_stress.pl
@@ -0,0 +1,289 @@
+# Run a variety of tests to check the consistency of index access.
+#
+# These tests are primarily designed to test if WARM updates cause any
+# inconsistency in the indexes. We use a pgbench-like setup with an "accounts"
+# table and a "branches" table. But instead of a single "aid" column the
+# pgbench_indexing_accounts table has four additional columns. These columns have
+# initial value as "aid * 10", "aid * 20", "aid * 30" and "aid * 40". And
+# unlike the aid column, values in these columns do not remain static. The
+# values are changed in a narrow range around the original value, such that
+# they still remain distinct, even after updates. We also build indexes on
+# these additional columns.
+#
+# This allows us to force WARM updates to the table, while accessing individual
+# rows using these auxiliary columns. If things are solid, we must not miss any
+# row irrespective of which column we use to fetch the row. Also, the sum of
+# balances in two tables should match at the end.
+#
+# We drop and recreate indexes concurrently and also run VACUUM and run
+# consistency checks to ensure nothing breaks. The tests also abort
+# transactions, acquire share/update locks, etc., to check for any negative
+# effects of those things.
+
+use strict;
+use warnings;
+
+use TestLib;
+use Test::More tests => 10;
+use PostgresNode;
+
+# Different kinds of queries, some committing, some aborting. Also include FOR
+# SHARE, FOR UPDATE which may have implications on the visibility bits etc.
+my @query_set1 = (
+
+	"begin;
+	update pgbench_indexing_accounts set abalance = abalance + :delta where aid = :aid;
+	select abalance from pgbench_indexing_accounts where aid = :aid;
+	update pgbench_indexing_branches set bbalance = bbalance + :delta where bid = :bid;
+	end;",
+
+	"begin;
+	update pgbench_indexing_accounts set abalance = abalance + :delta where aid = :aid;
+	select abalance from pgbench_indexing_accounts where aid = :aid;
+	update pgbench_indexing_branches set bbalance = bbalance + :delta where bid = :bid;
+	rollback;",
+
+	"begin;
+	select abalance from pgbench_indexing_accounts where aid = :aid for update;
+	update pgbench_indexing_accounts set abalance = abalance + :delta where aid = :aid;
+	select bbalance from pgbench_indexing_branches where bid = :bid for update;
+	update pgbench_indexing_branches set bbalance = bbalance + :delta where bid = :bid;
+	commit;",
+
+	"begin;
+	select abalance from pgbench_indexing_accounts where aid = :aid for update;
+	update pgbench_indexing_accounts set abalance = abalance + :delta where aid = :aid;
+	select bbalance from pgbench_indexing_branches where bid = :bid for update;
+	update pgbench_indexing_branches set bbalance = bbalance + :delta where bid = :bid;
+	rollback;",
+
+	"begin;
+	select abalance from pgbench_indexing_accounts where aid = :aid for share;
+	update pgbench_indexing_accounts set abalance = abalance + :delta where aid = :aid;
+	select bbalance from pgbench_indexing_branches where bid = :bid for update;
+	update pgbench_indexing_branches set bbalance = bbalance + :delta where bid = :bid;
+	commit;",
+
+	"begin;
+	select abalance from pgbench_indexing_accounts where aid = :aid for update;
+	update pgbench_indexing_accounts set abalance = abalance + :delta where aid = :aid;
+	select bbalance from pgbench_indexing_branches where bid = :bid for update;
+	update pgbench_indexing_branches set bbalance = bbalance + :delta where bid = :bid;
+	rollback;"
+);
+
+# The following queries use user-defined functions to update rows in
+# pgbench_indexing_accounts table by using auxiliary columns. This allows us to
+# test if the updates are working fine in various scenarios.
+my @query_set2 = (
+
+	"begin;
+	set enable_seqscan to false;
+	select pgbench_indexing_update_using_aid1(:chg1, :aid, :bid, :delta);
+	commit;",
+
+	"begin;
+	set enable_seqscan to false;
+	select pgbench_indexing_update_using_aid2(:chg2, :aid, :bid, :delta);
+	commit;",
+
+	"begin;
+	set enable_seqscan to false;
+	select pgbench_indexing_update_using_aid3(:chg3, :aid, :bid, :delta);
+	commit;",
+
+	"begin;
+	set enable_seqscan to false;
+	select pgbench_indexing_update_using_aid4(:chg4, :aid, :bid, :delta);
+	commit;",
+
+	"begin;
+	set enable_seqscan to false;
+	select pgbench_indexing_update_using_aid1(:chg1, :aid, :bid, :delta);
+	rollback;",
+
+	"begin;
+	set enable_seqscan to false;
+	select pgbench_indexing_update_using_aid2(:chg2, :aid, :bid, :delta);
+	rollback;",
+
+	"begin;
+	set enable_seqscan to false;
+	select pgbench_indexing_update_using_aid3(:chg3, :aid, :bid, :delta);
+	rollback;",
+
+	"begin;
+	set enable_seqscan to false;
+	select pgbench_indexing_update_using_aid4(:chg4, :aid, :bid, :delta);
+	rollback;"
+);
+
+# Specify concurrent DDLs that you may want to execute with the tests.
+my @ddl_queries = (
+	"drop index pgb_a_aid1;
+	 create index pgb_a_aid1 on pgbench_indexing_accounts(aid1);",
+	"drop index pgb_a_aid2;
+	 create index pgb_a_aid2 on pgbench_indexing_accounts(aid2);",
+	"drop index pgb_a_aid3;
+	 create index pgb_a_aid3 on pgbench_indexing_accounts using hash (aid3);",
+	"drop index pgb_a_aid4;
+	 create index pgb_a_aid4 on pgbench_indexing_accounts(aid4);",
+	"drop index pgb_a_aid1;
+	 create index concurrently pgb_a_aid1 on pgbench_indexing_accounts(aid1);",
+	"drop index pgb_a_aid2;
+	 create index concurrently pgb_a_aid2 on pgbench_indexing_accounts(aid2);",
+	"drop index pgb_a_aid3;
+	 create index concurrently pgb_a_aid3 on pgbench_indexing_accounts using hash (aid3);",
+	"drop index pgb_a_aid4;
+	 create index concurrently pgb_a_aid4 on pgbench_indexing_accounts(aid4);",
+	"vacuum pgbench_indexing_accounts",
+	"vacuum pgbench_indexing_branches",
+	"vacuum full pgbench_indexing_accounts",
+	"vacuum full pgbench_indexing_branches"
+);
+
+# Consistency check queries.
+my @check_queries = (
+	"set enable_seqscan to false; select pgbench_indexing_check_consistency();",
+	"set enable_seqscan to false; select pgbench_indexing_check_row(:aid);"
+);
+
+my $node = get_new_node('main');
+$node->init;
+$node->start;
+
+# prepare the test for execution
+$node->run_log([ 'psql', '-X', $node->connstr(), '-f', 't/indexing_stress_init.sql']);
+
+my $res = $node->safe_psql('postgres', "select proname from pg_proc where proname = 'pgbench_indexing_update_using_aid1'");
+is($res, 'pgbench_indexing_update_using_aid1', 'dummy test passed');
+
+$res = $node->safe_psql('postgres', "select count(*) from pgbench_indexing_accounts");
+is($res, 10000, 'Fine match');
+
+# Start as many connections as we need
+sub create_connections {
+	my $count = shift;
+	my @handles;
+	for (my $proc = 0; $proc < $count; $proc = $proc + 1) {
+		my ($stdin, $stdout, $stderr) = ('','','');
+		my $handle = IPC::Run::start( 
+			[
+				'psql', '-v', '-f -', $node->connstr(),
+			],
+			\$stdin, \$stdout, \$stderr);
+		push @handles, [$handle,\$stdin,\$stdout,\$stderr];
+	}
+	return \@handles;
+}
+
+sub check_connections {
+	my @handles = @_;
+	my $failures = 0;
+	print @handles;
+	foreach my $elem (@handles) {
+		my ($handle, $stdin, $stdout, $stderr) = @$elem;
+		# Wait for all queries to complete and psql sessions to exit, checking
+		# exit codes. We don't need to do the fancy interpretation safe_psql
+		# does.
+		$handle->finish;
+		if (!is($handle->full_result(0), 0, "psql exited normally"))
+		{
+			$failures ++;
+			diag "psql exit code: " . ($handle->result(0)) . " or signal: " . ($handle->full_result(0) & 127);
+			diag "Stdout:\n---\n$$stdout\n---\nStderr:\n----\n$$stderr\n---";
+		}
+	}
+	return $failures;
+}
+
+my $set1_handles = create_connections(3);
+my $set2_handles = create_connections(3);
+my $aux_handles = create_connections(1);
+
+# Run a few thousand transactions, using various kinds of queries
+my $scale = 1;
+for (my $txn = 0; $txn < 10000; $txn = $txn + 1) {
+	# Run a randomly chosen query from set1
+	my $aid = int(rand($scale*10000)) + 1;
+	my $bid = int(rand(100)) + 1;
+	my $delta = int(rand(1000)) - 500;
+
+	my $connindx = rand(@$set1_handles);
+	my $elem = @$set1_handles[$connindx];
+	my ($handle, $stdin, $stdout, $stderr) = @$elem;
+
+	my $queryindx = rand(@query_set1);
+	my $query = $query_set1[$queryindx];
+
+	$query =~ s/\:aid/$aid/g;
+	$query =~ s/\:bid/$bid/g;
+	$query =~ s/\:delta/$delta/g;
+
+	$$stdin .= $query . "\n";
+	pump $handle while length $$stdin;
+
+	# Run a randomly chosen query from set2
+	my $chg1 = int(rand(4)) - 2;
+	my $chg2 = int(rand(6)) - 3;
+	my $chg3 = int(rand(8)) - 4;
+	my $chg4 = int(rand(10)) - 5;
+
+	$connindx = rand(@$set2_handles);
+	$elem = @$set2_handles[$connindx];
+	($handle, $stdin, $stdout, $stderr) = @$elem;
+
+	$queryindx = rand(@query_set2);
+	$query = $query_set2[$queryindx];
+
+	$query =~ s/\:aid/$aid/g;
+	$query =~ s/\:bid/$bid/g;
+	$query =~ s/\:delta/$delta/g;
+	$query =~ s/\:chg1/$chg1/g;
+	$query =~ s/\:chg2/$chg2/g;
+	$query =~ s/\:chg3/$chg3/g;
+	$query =~ s/\:chg4/$chg4/g;
+
+	$$stdin .= $query . "\n";
+	pump $handle while length $$stdin;
+
+	# Some randomly picked numbers to run DDLs and consistency checks
+	my $random = int(rand(100));
+
+	# Consistency checks roughly every 5 transactions
+	if ($random % 5 == 0)
+	{
+		$connindx = rand(@$aux_handles);
+		$elem = @$aux_handles[$connindx];
+		($handle, $stdin, $stdout, $stderr) = @$elem;
+
+		$queryindx = rand(@check_queries);
+		$query = $check_queries[$queryindx];
+
+		$$stdin .= $query . "\n";
+		pump $handle while length $$stdin;
+	}
+
+	# 1% DDLs
+	if ($random == 17)
+	{
+		$connindx = rand(@$aux_handles);
+		$elem = @$aux_handles[$connindx];
+		($handle, $stdin, $stdout, $stderr) = @$elem;
+
+		$queryindx = rand(@ddl_queries);
+		$query = $ddl_queries[$queryindx];
+
+		$$stdin .= $query . "\n";
+		pump $handle while length $$stdin;
+	}
+}
+
+check_connections(@$set1_handles);
+check_connections(@$set2_handles);
+check_connections(@$aux_handles);
+
+# Run final consistency checks
+my $res1 = $node->safe_psql('postgres', "select sum(abalance) from pgbench_indexing_accounts");
+my $res2 = $node->safe_psql('postgres', "select sum(bbalance) from pgbench_indexing_branches");
+is($res1, $res2, 'Fine match');
diff --git a/src/test/modules/indexing/t/indexing_stress_init.sql b/src/test/modules/indexing/t/indexing_stress_init.sql
new file mode 100644
index 0000000..8ca3e57
--- /dev/null
+++ b/src/test/modules/indexing/t/indexing_stress_init.sql
@@ -0,0 +1,209 @@
+
+drop table if exists pgbench_indexing_branches;
+drop table if exists pgbench_indexing_accounts;
+
+create table pgbench_indexing_branches (
+	bid bigint,
+	bbalance bigint);
+
+create table pgbench_indexing_accounts (
+	aid bigint,
+	bid bigint,
+	abalance bigint,
+	aid1 bigint ,
+	aid2 bigint ,
+	aid3 bigint ,
+	aid4 bigint ,
+	aid5 text default md5(random()::text),
+	aid6 text default md5(random()::text),
+	aid7 text default md5(random()::text),
+	aid8 text default md5(random()::text),
+	aid9 text default md5(random()::text),
+	aid10 text default md5(random()::text),
+	gistcol	polygon default null
+);
+
+-- update using aid1. aid1 should stay within the range (aid * 10 - 2 <= aid1 <= aid * 10 + 2) 
+create or replace function pgbench_indexing_update_using_aid1(chg integer, v_aid bigint, v_bid bigint, delta bigint)
+returns void as $$
+declare
+	qry varchar;
+	lower varchar;
+	upper varchar;
+	range integer;
+	aid_updated bigint;
+begin
+	range := 2;
+	update pgbench_indexing_accounts p set aid1 = aid1 +  chg,  abalance = abalance +
+delta  where aid1 >= v_aid * 10 - range - chg and aid1 <= v_aid * 10 + range - chg
+returning p.aid into aid_updated;
+	if aid_updated is not null then
+		update pgbench_indexing_branches p set bbalance = bbalance + delta where p.bid = v_bid;
+	else
+		select aid into aid_updated from pgbench_indexing_accounts p where aid1 >=
+v_aid * 10 - range and aid1 <= v_aid * 10 + range;
+		if aid_updated is null then
+			raise exception 'pgbench_indexing_accounts row not found';
+		end if;
+	end if;
+end
+$$ language plpgsql;
+
+-- update using aid2. aid2 should stay within the range (aid * 20 - 4 <= aid2 <= aid * 20 + 4) 
+create or replace function pgbench_indexing_update_using_aid2(chg integer, v_aid bigint, v_bid bigint, delta bigint)
+returns void as $$
+declare
+	qry varchar;
+	lower varchar;
+	upper varchar;
+	range integer;
+	aid_updated bigint;
+begin
+	range := 4;
+	update pgbench_indexing_accounts p set aid2 = aid2 +  chg,  abalance = abalance +
+delta  where aid2 >= v_aid * 20 - range - chg and aid2 <= v_aid * 20 + range - chg
+returning p.aid into aid_updated;
+	if aid_updated is not null then
+		update pgbench_indexing_branches p set bbalance = bbalance + delta where p.bid = v_bid;
+	else
+		select aid into aid_updated from pgbench_indexing_accounts p where aid2 >= v_aid * 20 - range and aid2 <= v_aid * 20 + range;
+		if aid_updated is null then
+			raise exception 'pgbench_indexing_accounts row not found';
+		end if;
+	end if;
+end
+$$ language plpgsql;
+
+-- update using aid3. aid3 should stay within the range (aid * 30 - 6 <= aid3 <= aid * 30 + 6) 
+create or replace function pgbench_indexing_update_using_aid3(chg integer, v_aid bigint, v_bid bigint, delta bigint)
+returns void as $$
+declare
+	qry varchar;
+	lower varchar;
+	upper varchar;
+	range integer;
+	aid_updated bigint;
+begin
+	range := 6;
+	update pgbench_indexing_accounts p set aid3 = aid3 +  chg,  abalance = abalance +
+delta  where aid3 >= v_aid * 30 - range - chg and aid3 <= v_aid * 30 + range - chg
+returning p.aid into aid_updated;
+	if aid_updated is not null then
+		update pgbench_indexing_branches p set bbalance = bbalance + delta where p.bid = v_bid;
+	else
+		select aid into aid_updated from pgbench_indexing_accounts p where aid3 >= v_aid * 30 - range and aid3 <= v_aid * 30 + range;
+		if aid_updated is null then
+			raise exception 'pgbench_indexing_accounts row not found';
+		end if;
+	end if;
+end
+$$ language plpgsql;
+
+-- update using aid4. aid4 should stay within the range (aid * 40 - 8 <= aid4 <= aid * 40 + 8) 
+create or replace function pgbench_indexing_update_using_aid4(chg integer, v_aid bigint, v_bid bigint, delta bigint)
+returns void as $$
+declare
+	qry varchar;
+	lower varchar;
+	upper varchar;
+	range integer;
+	aid_updated bigint;
+begin
+	range := 8;
+	update pgbench_indexing_accounts p set aid4 = aid4 +  chg,  abalance = abalance +
+delta  where aid4 >= v_aid * 40 - range - chg and aid4 <= v_aid * 40 + range - chg
+returning p.aid into aid_updated;
+	if aid_updated is not null then
+		update pgbench_indexing_branches p set bbalance = bbalance + delta where p.bid = v_bid;
+	else
+		select aid into aid_updated from pgbench_indexing_accounts p where aid4 >= v_aid * 40 - range and aid4 <= v_aid * 40 + range;
+		if aid_updated is null then
+			raise exception 'pgbench_indexing_accounts row not found';
+		end if;
+	end if;
+end
+$$ language plpgsql;
+
+-- ensure that exactly one row exists within a given range. use different
+-- indexes to fetch the row
+create or replace function pgbench_indexing_check_row(v_aid bigint)
+returns void as $$
+declare
+	range integer;
+	factor integer;
+	ret_aid1 bigint;
+	ret_aid2 bigint;
+	ret_aid3 bigint;
+	ret_aid4 bigint;
+begin
+	range := 2;
+	factor := 10;
+	select aid into ret_aid1 from pgbench_indexing_accounts p where aid1 >= v_aid *
+		factor - range and aid1 <= v_aid * factor + range;
+
+	range := 4;
+	factor := 20;
+	select aid into ret_aid2 from pgbench_indexing_accounts p where aid2 >= v_aid *
+		factor - range and aid2 <= v_aid * factor + range;
+
+	range := 6;
+	factor := 30;
+	select aid into ret_aid3 from pgbench_indexing_accounts p where aid3 >= v_aid *
+		factor - range and aid3 <= v_aid * factor + range;
+
+	range := 8;
+	factor := 40;
+	select aid into ret_aid4 from pgbench_indexing_accounts p where aid4 >= v_aid *
+		factor - range and aid4 <= v_aid * factor + range;
+
+	if ret_aid1 is null or ret_aid1 != v_aid then
+		raise exception 'pgbench_indexing_accounts row (%) not found via aid1', v_aid;
+	end if;
+
+	if ret_aid2 is null or ret_aid2 != v_aid then
+		raise exception 'pgbench_indexing_accounts row (%) not found via aid2', v_aid;
+	end if;
+
+	if ret_aid3 is null or ret_aid3 != v_aid then
+		raise exception 'pgbench_indexing_accounts row (%) not found via aid3', v_aid;
+	end if;
+
+	if ret_aid4 is null or ret_aid4 != v_aid then
+		raise exception 'pgbench_indexing_accounts row (%) not found via aid4', v_aid;
+	end if;
+end
+$$ language plpgsql;
+
+create or replace function pgbench_indexing_check_consistency()
+returns void as $$
+declare
+	sum_abalance bigint;
+	sum_bbalance bigint;
+begin
+	select sum(abalance) into sum_abalance from pgbench_indexing_accounts;
+	select sum(bbalance) into sum_bbalance from pgbench_indexing_branches;
+	if sum_abalance != sum_bbalance then
+		raise exception 'found inconsistency in sum (%, %)', sum_abalance, sum_bbalance;
+	end if;
+end
+$$ language plpgsql;
+
+\set end 10000
+insert into pgbench_indexing_branches select generate_series(1, 100), 0 ;
+insert into pgbench_indexing_accounts select generate_series(1, :end),
+				(random() * 100)::int, 0,
+				generate_series(1, :end) * 10,
+				generate_series(1, :end) * 20,
+				generate_series(1, :end) * 30,
+				generate_series(1, :end) * 40;
+
+create unique index pgb_a_aid on pgbench_indexing_accounts(aid);
+create index pgb_a_aid1 on pgbench_indexing_accounts(aid1);
+create index pgb_a_aid2 on pgbench_indexing_accounts(aid2);
+create index pgb_a_aid3 on pgbench_indexing_accounts using hash (aid3);
+create index pgb_a_aid4 on pgbench_indexing_accounts(aid4);
+
+create unique index pgb_b_bid on pgbench_indexing_branches(bid);
+create index pgb_b_bbalance on pgbench_indexing_branches(bbalance);
+
+vacuum analyze;
0007_vacuum_enhancements_v22.patch (application/octet-stream)
diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index 72e1253..b856503 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -338,6 +338,24 @@ static relopt_real realRelOpts[] =
 	},
 	{
 		{
+			"autovacuum_warmcleanup_scale_factor",
+			"Number of WARM chains prior to WARM cleanup as a fraction of reltuples",
+			RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
+			ShareUpdateExclusiveLock
+		},
+		-1, 0.0, 100.0
+	},
+	{
+		{
+			"autovacuum_warmcleanup_index_scale_factor",
+			"Number of WARM pointers in an index prior to WARM cleanup as a fraction of total WARM chains",
+			RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
+			ShareUpdateExclusiveLock
+		},
+		-1, 0.0, 100.0
+	},
+	{
+		{
 			"autovacuum_analyze_scale_factor",
 			"Number of tuple inserts, updates or deletes prior to analyze as a fraction of reltuples",
 			RELOPT_KIND_HEAP,
@@ -1341,6 +1359,10 @@ default_reloptions(Datum reloptions, bool validate, relopt_kind kind)
 		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, vacuum_scale_factor)},
 		{"autovacuum_analyze_scale_factor", RELOPT_TYPE_REAL,
 		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, analyze_scale_factor)},
+		{"autovacuum_warmcleanup_scale_factor", RELOPT_TYPE_REAL,
+		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, warmcleanup_scale_factor)},
+		{"autovacuum_warmcleanup_index_scale_factor", RELOPT_TYPE_REAL,
+		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, warmcleanup_index_scale)},
 		{"user_catalog_table", RELOPT_TYPE_BOOL,
 		offsetof(StdRdOptions, user_catalog_table)},
 		{"parallel_workers", RELOPT_TYPE_INT,
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 66a39d0..2a4d782 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -533,6 +533,7 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_tuples_warm_updated(C.oid) AS n_tup_warm_upd,
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
+            pg_stat_get_warm_chains(C.oid) AS n_warm_chains,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
             pg_stat_get_last_vacuum_time(C.oid) as last_vacuum,
             pg_stat_get_last_autovacuum_time(C.oid) as last_autovacuum,
diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index 404acb2..6c4fc4e 100644
--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -93,7 +93,8 @@ static VacAttrStats *examine_attribute(Relation onerel, int attnum,
 				  Node *index_expr);
 static int acquire_sample_rows(Relation onerel, int elevel,
 					HeapTuple *rows, int targrows,
-					double *totalrows, double *totaldeadrows);
+					double *totalrows, double *totaldeadrows,
+					double *totalwarmchains);
 static int	compare_rows(const void *a, const void *b);
 static int acquire_inherited_sample_rows(Relation onerel, int elevel,
 							  HeapTuple *rows, int targrows,
@@ -320,7 +321,8 @@ do_analyze_rel(Relation onerel, int options, VacuumParams *params,
 	int			targrows,
 				numrows;
 	double		totalrows,
-				totaldeadrows;
+				totaldeadrows,
+				totalwarmchains;
 	HeapTuple  *rows;
 	PGRUsage	ru0;
 	TimestampTz starttime = 0;
@@ -501,7 +503,8 @@ do_analyze_rel(Relation onerel, int options, VacuumParams *params,
 	else
 		numrows = (*acquirefunc) (onerel, elevel,
 								  rows, targrows,
-								  &totalrows, &totaldeadrows);
+								  &totalrows, &totaldeadrows,
+								  &totalwarmchains);
 
 	/*
 	 * Compute the statistics.  Temporary results during the calculations for
@@ -631,7 +634,7 @@ do_analyze_rel(Relation onerel, int options, VacuumParams *params,
 	 */
 	if (!inh)
 		pgstat_report_analyze(onerel, totalrows, totaldeadrows,
-							  (va_cols == NIL));
+							  totalwarmchains, (va_cols == NIL));
 
 	/* If this isn't part of VACUUM ANALYZE, let index AMs do cleanup */
 	if (!(options & VACOPT_VACUUM))
@@ -991,12 +994,14 @@ examine_attribute(Relation onerel, int attnum, Node *index_expr)
 static int
 acquire_sample_rows(Relation onerel, int elevel,
 					HeapTuple *rows, int targrows,
-					double *totalrows, double *totaldeadrows)
+					double *totalrows, double *totaldeadrows,
+					double *totalwarmchains)
 {
 	int			numrows = 0;	/* # rows now in reservoir */
 	double		samplerows = 0; /* total # rows collected */
 	double		liverows = 0;	/* # live rows seen */
 	double		deadrows = 0;	/* # dead rows seen */
+	double		warmchains = 0;
 	double		rowstoskip = -1;	/* -1 means not set yet */
 	BlockNumber totalblocks;
 	TransactionId OldestXmin;
@@ -1023,9 +1028,14 @@ acquire_sample_rows(Relation onerel, int elevel,
 		Page		targpage;
 		OffsetNumber targoffset,
 					maxoffset;
+		bool		marked[MaxHeapTuplesPerPage];
+		OffsetNumber root_offsets[MaxHeapTuplesPerPage];
 
 		vacuum_delay_point();
 
+		/* Track which root line pointers are already counted. */
+		memset(marked, 0, sizeof (marked));
+
 		/*
 		 * We must maintain a pin on the target page's buffer to ensure that
 		 * the maxoffset value stays good (else concurrent VACUUM might delete
@@ -1041,6 +1051,9 @@ acquire_sample_rows(Relation onerel, int elevel,
 		targpage = BufferGetPage(targbuffer);
 		maxoffset = PageGetMaxOffsetNumber(targpage);
 
+		/* Get all root line pointers first */
+		heap_get_root_tuples(targpage, root_offsets);
+
 		/* Inner loop over all tuples on the selected page */
 		for (targoffset = FirstOffsetNumber; targoffset <= maxoffset; targoffset++)
 		{
@@ -1069,6 +1082,22 @@ acquire_sample_rows(Relation onerel, int elevel,
 			targtuple.t_data = (HeapTupleHeader) PageGetItem(targpage, itemid);
 			targtuple.t_len = ItemIdGetLength(itemid);
 
+			/*
+			 * If this is a WARM-updated tuple, check if we have already seen
+			 * the root line pointer. If not, count this as a WARM chain. This
+			 * ensures that we count every WARM-chain just once, irrespective
+			 * of how many tuples exist in the chain.
+			 */
+			if (HeapTupleHeaderIsWarmUpdated(targtuple.t_data))
+			{
+				OffsetNumber root_offnum = root_offsets[targoffset];
+				if (!marked[root_offnum])
+				{
+					warmchains += 1;
+					marked[root_offnum] = true;
+				}
+			}
+
 			switch (HeapTupleSatisfiesVacuum(&targtuple,
 											 OldestXmin,
 											 targbuffer))
@@ -1200,18 +1229,24 @@ acquire_sample_rows(Relation onerel, int elevel,
 
 	/*
 	 * Estimate total numbers of rows in relation.  For live rows, use
-	 * vac_estimate_reltuples; for dead rows, we have no source of old
-	 * information, so we have to assume the density is the same in unseen
-	 * pages as in the pages we scanned.
+	 * vac_estimate_reltuples; for dead rows and WARM chains, we have no source
+	 * of old information, so we have to assume the density is the same in
+	 * unseen pages as in the pages we scanned.
 	 */
 	*totalrows = vac_estimate_reltuples(onerel, true,
 										totalblocks,
 										bs.m,
 										liverows);
 	if (bs.m > 0)
+	{
 		*totaldeadrows = floor((deadrows / bs.m) * totalblocks + 0.5);
+		*totalwarmchains = floor((warmchains / bs.m) * totalblocks + 0.5);
+	}
 	else
+	{
 		*totaldeadrows = 0.0;
+		*totalwarmchains = 0.0;
+	}
 
 	/*
 	 * Emit some interesting relation info
@@ -1219,11 +1254,13 @@ acquire_sample_rows(Relation onerel, int elevel,
 	ereport(elevel,
 			(errmsg("\"%s\": scanned %d of %u pages, "
 					"containing %.0f live rows and %.0f dead rows; "
-					"%d rows in sample, %.0f estimated total rows",
+					"%d rows in sample, %.0f estimated total rows; "
+					"%.0f estimated warm chains",
 					RelationGetRelationName(onerel),
 					bs.m, totalblocks,
 					liverows, deadrows,
-					numrows, *totalrows)));
+					numrows, *totalrows,
+					*totalwarmchains)));
 
 	return numrows;
 }
@@ -1428,11 +1465,12 @@ acquire_inherited_sample_rows(Relation onerel, int elevel,
 				int			childrows;
 				double		trows,
 							tdrows;
+				double		twarmchains;
 
 				/* Fetch a random sample of the child's rows */
 				childrows = (*acquirefunc) (childrel, elevel,
 											rows + numrows, childtargrows,
-											&trows, &tdrows);
+											&trows, &tdrows, &twarmchains);
 
 				/* We may need to convert from child's rowtype to parent's */
 				if (childrows > 0 &&
diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index 9fbb0eb..52a7838 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -103,6 +103,7 @@ ExecVacuum(VacuumStmt *vacstmt, bool isTopLevel)
 		params.freeze_table_age = 0;
 		params.multixact_freeze_min_age = 0;
 		params.multixact_freeze_table_age = 0;
+		params.warmcleanup_index_scale = -1;
 	}
 	else
 	{
@@ -110,6 +111,7 @@ ExecVacuum(VacuumStmt *vacstmt, bool isTopLevel)
 		params.freeze_table_age = -1;
 		params.multixact_freeze_min_age = -1;
 		params.multixact_freeze_table_age = -1;
+		params.warmcleanup_index_scale = -1;
 	}
 
 	/* user-invoked vacuum is never "for wraparound" */
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index f52490f..d68b4fb 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -156,18 +156,23 @@ typedef struct LVRelStats
 	double		tuples_deleted;
 	BlockNumber nonempty_pages; /* actually, last nonempty page + 1 */
 
+	int			maxtuples;		/* maxtuples computed while allocating space */
+	Size		work_area_size;	/* working area size */
+	char		*work_area;		/* working area for storing dead tuples and
+								 * warm chains */
 	/* List of candidate WARM chains that can be converted into HOT chains */
-	/* NB: this list is ordered by TID of the root pointers */
+	/*
+	 * NB: this array is filled from the end of the work area towards the
+	 * start; in memory order it is sorted by descending root-pointer TID
+	 */
 	int				num_warm_chains;	/* current # of entries */
-	int				max_warm_chains;	/* # slots allocated in array */
 	LVWarmChain 	*warm_chains;		/* array of LVWarmChain */
 	double			num_non_convertible_warm_chains;
-
 	/* List of TIDs of tuples we intend to delete */
 	/* NB: this list is ordered by TID address */
 	int			num_dead_tuples;	/* current # of entries */
-	int			max_dead_tuples;	/* # slots allocated in array */
 	ItemPointer dead_tuples;	/* array of ItemPointerData */
+
 	int			num_index_scans;
 	TransactionId latestRemovedXid;
 	bool		lock_waiter_detected;
@@ -187,11 +192,12 @@ static BufferAccessStrategy vac_strategy;
 /* non-export function prototypes */
 static void lazy_scan_heap(Relation onerel, int options,
 			   LVRelStats *vacrelstats, Relation *Irel, int nindexes,
-			   bool aggressive);
+			   bool aggressive, double warmcleanup_index_scale);
 static void lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats);
 static bool lazy_check_needs_freeze(Buffer buf, bool *hastup);
 static void lazy_vacuum_index(Relation indrel,
 				  bool clear_warm,
+				  double warmcleanup_index_scale,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats);
 static void lazy_cleanup_index(Relation indrel,
@@ -207,7 +213,8 @@ static bool should_attempt_truncation(LVRelStats *vacrelstats);
 static void lazy_truncate_heap(Relation onerel, LVRelStats *vacrelstats);
 static BlockNumber count_nondeletable_pages(Relation onerel,
 						 LVRelStats *vacrelstats);
-static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks);
+static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks,
+					   bool dowarmcleanup);
 static void lazy_record_dead_tuple(LVRelStats *vacrelstats,
 					   ItemPointer itemptr);
 static void lazy_record_warm_chain(LVRelStats *vacrelstats,
@@ -283,6 +290,9 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 						  &OldestXmin, &FreezeLimit, &xidFullScanLimit,
 						  &MultiXactCutoff, &mxactFullScanLimit);
 
+	/* Use default if the caller hasn't specified any value */
+	if (params->warmcleanup_index_scale == -1)
+		params->warmcleanup_index_scale = VacuumWarmCleanupIndexScale;
 	/*
 	 * We request an aggressive scan if the table's frozen Xid is now older
 	 * than or equal to the requested Xid full-table scan limit; or if the
@@ -309,7 +319,8 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 	vacrelstats->hasindex = (nindexes > 0);
 
 	/* Do the vacuuming */
-	lazy_scan_heap(onerel, options, vacrelstats, Irel, nindexes, aggressive);
+	lazy_scan_heap(onerel, options, vacrelstats, Irel, nindexes, aggressive,
+			params->warmcleanup_index_scale);
 
 	/* Done with indexes */
 	vac_close_indexes(nindexes, Irel, NoLock);
@@ -396,7 +407,8 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 	pgstat_report_vacuum(RelationGetRelid(onerel),
 						 onerel->rd_rel->relisshared,
 						 new_live_tuples,
-						 vacrelstats->new_dead_tuples);
+						 vacrelstats->new_dead_tuples,
+						 vacrelstats->num_non_convertible_warm_chains);
 	pgstat_progress_end_command();
 
 	/* and log the action if appropriate */
@@ -507,10 +519,19 @@ vacuum_log_cleanup_info(Relation rel, LVRelStats *vacrelstats)
  *		If there are no indexes then we can reclaim line pointers on the fly;
  *		dead line pointers need only be retained until all index pointers that
  *		reference them have been killed.
+ *
+ *		warmcleanup_index_scale specifies a threshold number of WARM pointers
+ *		in an index, expressed as a fraction of the total candidate WARM
+ *		chains. If we find fewer WARM pointers in an index than this
+ *		threshold, we skip the cleanup scan of that index. If WARM cleanup is
+ *		skipped for any one index, the WARM chains can't be cleared in the
+ *		heap and no further WARM updates are possible to those chains; such
+ *		chains are also not considered for WARM cleanup in other indexes.
  */
 static void
 lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
-			   Relation *Irel, int nindexes, bool aggressive)
+			   Relation *Irel, int nindexes, bool aggressive,
+			   double warmcleanup_index_scale)
 {
 	BlockNumber nblocks,
 				blkno;
@@ -536,6 +557,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		PROGRESS_VACUUM_MAX_DEAD_TUPLES
 	};
 	int64		initprog_val[3];
+	bool		dowarmcleanup = ((options & VACOPT_WARM_CLEANUP) != 0);
 
 	pg_rusage_init(&ru0);
 
@@ -558,13 +580,13 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 	vacrelstats->nonempty_pages = 0;
 	vacrelstats->latestRemovedXid = InvalidTransactionId;
 
-	lazy_space_alloc(vacrelstats, nblocks);
+	lazy_space_alloc(vacrelstats, nblocks, dowarmcleanup);
 	frozen = palloc(sizeof(xl_heap_freeze_tuple) * MaxHeapTuplesPerPage);
 
 	/* Report that we're scanning the heap, advertising total # of blocks */
 	initprog_val[0] = PROGRESS_VACUUM_PHASE_SCAN_HEAP;
 	initprog_val[1] = nblocks;
-	initprog_val[2] = vacrelstats->max_dead_tuples;
+	initprog_val[2] = vacrelstats->maxtuples;
 	pgstat_progress_update_multi_param(3, initprog_index, initprog_val);
 
 	/*
@@ -656,6 +678,11 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		bool		all_frozen = true;	/* provided all_visible is also true */
 		bool		has_dead_tuples;
 		TransactionId visibility_cutoff_xid = InvalidTransactionId;
+		char		*end_deads;
+		char		*end_warms;
+		Size		free_work_area;
+		int			avail_dead_tuples;
+			int			avail_warm_chains = 0;	/* stays 0 without WARM cleanup */
 
 		/* see note above about forcing scanning of last page */
 #define FORCE_CHECK_PAGE() \
@@ -740,13 +767,38 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		vacuum_delay_point();
 
 		/*
+		 * Dead tuples are stored from the start of the work area, growing
+		 * towards the end. Candidate WARM chains are stored from the end of
+		 * the work area, growing towards the start. Once the gap between
+		 * these two segments becomes too small to accommodate potentially
+		 * all tuples in the current page, we stop and do one round of index
+		 * cleanup.
+		 */
+		end_deads = (char *)(vacrelstats->dead_tuples + vacrelstats->num_dead_tuples);
+
+		/*
+		 * If we are not doing WARM cleanup, then the entire work area is used
+		 * by the dead tuples.
+		 */
+		if (vacrelstats->warm_chains)
+		{
+			end_warms = (char *)(vacrelstats->warm_chains - vacrelstats->num_warm_chains);
+			free_work_area = end_warms - end_deads;
+			avail_warm_chains = (free_work_area / sizeof (LVWarmChain));
+		}
+		else
+		{
+			free_work_area = vacrelstats->work_area +
+				vacrelstats->work_area_size - end_deads;
+		}
+		avail_dead_tuples = (free_work_area / sizeof (ItemPointerData));
+
+		/*
 		 * If we are close to overrunning the available space for dead-tuple
 		 * TIDs, pause and do a cycle of vacuuming before we tackle this page.
 		 */
-		if (((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
-			vacrelstats->num_dead_tuples > 0) ||
-			((vacrelstats->max_warm_chains - vacrelstats->num_warm_chains) < MaxHeapTuplesPerPage &&
-			 vacrelstats->num_warm_chains > 0))
+		if ((avail_dead_tuples < MaxHeapTuplesPerPage && vacrelstats->num_dead_tuples > 0) ||
+			(avail_warm_chains < MaxHeapTuplesPerPage && vacrelstats->num_warm_chains > 0))
 		{
 			const int	hvp_index[] = {
 				PROGRESS_VACUUM_PHASE,
@@ -776,7 +828,8 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			/* Remove index entries */
 			for (i = 0; i < nindexes; i++)
 				lazy_vacuum_index(Irel[i],
-								  (vacrelstats->num_warm_chains > 0),
+								  dowarmcleanup && (vacrelstats->num_warm_chains > 0),
+								  warmcleanup_index_scale,
 								  &indstats[i],
 								  vacrelstats);
 
@@ -800,8 +853,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			 */
 			vacrelstats->num_dead_tuples = 0;
 			vacrelstats->num_warm_chains = 0;
-			memset(vacrelstats->warm_chains, 0,
-					vacrelstats->max_warm_chains * sizeof (LVWarmChain));
+			memset(vacrelstats->work_area, 0, vacrelstats->work_area_size);
 			vacrelstats->num_index_scans++;
 
 			/* Report that we are once again scanning the heap */
@@ -1408,7 +1460,8 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		/* Remove index entries */
 		for (i = 0; i < nindexes; i++)
 			lazy_vacuum_index(Irel[i],
-							  (vacrelstats->num_warm_chains > 0),
+							  dowarmcleanup && (vacrelstats->num_warm_chains > 0),
+							  warmcleanup_index_scale,
 							  &indstats[i],
 							  vacrelstats);
 
@@ -1513,9 +1566,12 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 		vacuum_delay_point();
 
 		tblk = chainblk = InvalidBlockNumber;
-		if (chainindex < vacrelstats->num_warm_chains)
-			chainblk =
-				ItemPointerGetBlockNumber(&(vacrelstats->warm_chains[chainindex].chain_tid));
+		if (vacrelstats->warm_chains &&
+			chainindex < vacrelstats->num_warm_chains)
+		{
+			LVWarmChain *chain = vacrelstats->warm_chains - (chainindex + 1);
+			chainblk = ItemPointerGetBlockNumber(&chain->chain_tid);
+		}
 
 		if (tupindex < vacrelstats->num_dead_tuples)
 			tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
@@ -1613,7 +1669,8 @@ lazy_warmclear_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 		BlockNumber tblk;
 		LVWarmChain	*chain;
 
-		chain = &vacrelstats->warm_chains[chainindex];
+		/* The warm chains array is indexed from the bottom */
+		chain = vacrelstats->warm_chains - (chainindex + 1);
 
 		tblk = ItemPointerGetBlockNumber(&chain->chain_tid);
 		if (tblk != blkno)
@@ -1847,9 +1904,11 @@ static void
 lazy_reset_warm_pointer_count(LVRelStats *vacrelstats)
 {
 	int i;
-	for (i = 0; i < vacrelstats->num_warm_chains; i++)
+
+	/* Start from the bottom and move upwards */
+	for (i = 1; i <= vacrelstats->num_warm_chains; i++)
 	{
-		LVWarmChain *chain = &vacrelstats->warm_chains[i];
+		LVWarmChain *chain = (vacrelstats->warm_chains - i);
 		chain->num_clear_pointers = chain->num_warm_pointers = 0;
 	}
 }
@@ -1863,6 +1922,7 @@ lazy_reset_warm_pointer_count(LVRelStats *vacrelstats)
 static void
 lazy_vacuum_index(Relation indrel,
 				  bool clear_warm,
+				  double warmcleanup_index_scale,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats)
 {
@@ -1927,25 +1987,57 @@ lazy_vacuum_index(Relation indrel,
 						(*stats)->warm_pointers_removed,
 						(*stats)->clear_pointers_removed)));
 
-		(*stats)->num_warm_pointers = 0;
-		(*stats)->num_clear_pointers = 0;
-		(*stats)->warm_pointers_removed = 0;
-		(*stats)->clear_pointers_removed = 0;
-		(*stats)->pointers_cleared = 0;
+		/*
+		 * If the number of WARM pointers found in the index exceeds the
+		 * configured fraction of total candidate WARM chains, do a second
+		 * index scan to clean up the WARM chains.
+		 *
+		 * Otherwise we must mark these WARM chains as non-convertible.
+		 */
+		if ((*stats)->num_warm_pointers >
+				((double)vacrelstats->num_warm_chains * warmcleanup_index_scale))
+		{
+			(*stats)->num_warm_pointers = 0;
+			(*stats)->num_clear_pointers = 0;
+			(*stats)->warm_pointers_removed = 0;
+			(*stats)->clear_pointers_removed = 0;
+			(*stats)->pointers_cleared = 0;
+
+			*stats = index_bulk_delete(&ivinfo, *stats,
+					lazy_indexvac_phase2, (void *) vacrelstats);
+			ereport(elevel,
+					(errmsg("scanned index \"%s\" to convert WARM pointers, found "
+							"%0.f WARM pointers, %0.f CLEAR pointers, removed "
+							"%0.f WARM pointers, removed %0.f CLEAR pointers, "
+							"cleared %0.f WARM pointers",
+							RelationGetRelationName(indrel),
+							(*stats)->num_warm_pointers,
+							(*stats)->num_clear_pointers,
+							(*stats)->warm_pointers_removed,
+							(*stats)->clear_pointers_removed,
+							(*stats)->pointers_cleared)));
+		}
+		else
+		{
+			int ii;
 
-		*stats = index_bulk_delete(&ivinfo, *stats,
-				lazy_indexvac_phase2, (void *) vacrelstats);
-		ereport(elevel,
-				(errmsg("scanned index \"%s\" to convert WARM pointers, found "
-						"%0.f WARM pointers, %0.f CLEAR pointers, removed "
-						"%0.f WARM pointers, removed %0.f CLEAR pointers, "
-						"cleared %0.f WARM pointers",
-						RelationGetRelationName(indrel),
-						(*stats)->num_warm_pointers,
-						(*stats)->num_clear_pointers,
-						(*stats)->warm_pointers_removed,
-						(*stats)->clear_pointers_removed,
-						(*stats)->pointers_cleared)));
+			/*
+			 * All chains skipped by this index are marked non-convertible.
+			 *
+			 * Start from the bottom and move upwards.
+			 */
+			for (ii = 1; ii <= vacrelstats->num_warm_chains; ii++)
+			{
+				LVWarmChain *chain = vacrelstats->warm_chains - ii;
+				if (chain->num_warm_pointers > 0 ||
+					chain->num_clear_pointers > 1)
+				{
+					chain->keep_warm_chain = 1;
+					vacrelstats->num_non_convertible_warm_chains++;
+				}
+			}
+
+		}
 	}
 	else
 	{
@@ -2323,7 +2415,8 @@ count_nondeletable_pages(Relation onerel, LVRelStats *vacrelstats)
  * See the comments at the head of this file for rationale.
  */
 static void
-lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
+lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks,
+				 bool dowarmcleanup)
 {
 	long		maxtuples;
 	int			vac_work_mem = IsAutoVacuumWorkerProcess() &&
@@ -2332,11 +2425,16 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 
 	if (vacrelstats->hasindex)
 	{
+		/*
+		 * If we're not doing WARM cleanup then the entire memory is available
+		 * for tracking dead tuples. Otherwise it gets split between tracking
+		 * dead tuples and tracking WARM chains.
+		 */
 		maxtuples = (vac_work_mem * 1024L) / (sizeof(ItemPointerData) +
-				sizeof(LVWarmChain));
+				(dowarmcleanup ? sizeof(LVWarmChain) : 0));
 		maxtuples = Min(maxtuples, INT_MAX);
 		maxtuples = Min(maxtuples, MaxAllocSize / (sizeof(ItemPointerData) +
-					sizeof(LVWarmChain)));
+				(dowarmcleanup ? sizeof(LVWarmChain) : 0)));
 
 		/* curious coding here to ensure the multiplication can't overflow */
 		if ((BlockNumber) (maxtuples / LAZY_ALLOC_TUPLES) > relblocks)
@@ -2350,21 +2448,29 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 		maxtuples = MaxHeapTuplesPerPage;
 	}
 
-	vacrelstats->num_dead_tuples = 0;
-	vacrelstats->max_dead_tuples = (int) maxtuples;
-	vacrelstats->dead_tuples = (ItemPointer)
-		palloc(maxtuples * sizeof(ItemPointerData));
-
-	/*
-	 * XXX Cheat for now and allocate the same size array for tracking warm
-	 * chains. maxtuples must have been already adjusted above to ensure we
-	 * don't cross vac_work_mem.
+	/* Allocate the work area of the desired size and set up dead_tuples and
+	 * warm_chains at the start and the end of the area respectively. They grow
+	 * in opposite directions as dead tuples and warm chains are added. Note
+	 * that if we are not doing WARM cleanup then the entire area will only be
+	 * used for tracking dead tuples.
 	 */
-	vacrelstats->num_warm_chains = 0;
-	vacrelstats->max_warm_chains = (int) maxtuples;
-	vacrelstats->warm_chains = (LVWarmChain *)
-		palloc0(maxtuples * sizeof(LVWarmChain));
+	vacrelstats->work_area_size = maxtuples * (sizeof(ItemPointerData) +
+				(dowarmcleanup ? sizeof(LVWarmChain) : 0));
+	vacrelstats->work_area = (char *) palloc0(vacrelstats->work_area_size);
+	vacrelstats->num_dead_tuples = 0;
+	vacrelstats->dead_tuples = (ItemPointer)vacrelstats->work_area;
+	vacrelstats->maxtuples = maxtuples;
 
+	if (dowarmcleanup)
+	{
+		vacrelstats->num_warm_chains = 0;
+		vacrelstats->warm_chains = (LVWarmChain *)
+			(vacrelstats->work_area + vacrelstats->work_area_size);
+	}
+	else
+	{
+		vacrelstats->warm_chains = NULL;
+	}
 }
 
 /*
@@ -2374,17 +2480,38 @@ static void
 lazy_record_clear_chain(LVRelStats *vacrelstats,
 					   ItemPointer itemptr)
 {
+	char *end_deads, *end_warms;
+	Size free_work_area;
+
+	if (vacrelstats->warm_chains == NULL)
+	{
+		vacrelstats->num_non_convertible_warm_chains++;
+		return;
+	}
+
+	end_deads = (char *) (vacrelstats->dead_tuples +
+					vacrelstats->num_dead_tuples);
+	end_warms = (char *) (vacrelstats->warm_chains -
+					vacrelstats->num_warm_chains);
+	free_work_area = (end_warms - end_deads);
+
+	Assert(free_work_area >= 0);
 	/*
 	 * The array shouldn't overflow under normal behavior, but perhaps it
 	 * could if we are given a really small maintenance_work_mem. In that
 	 * case, just forget the last few tuples (we'll get 'em next time).
 	 */
-	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	if (free_work_area >= sizeof (LVWarmChain))
 	{
-		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
-		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 0;
+		LVWarmChain *chain;
+
 		vacrelstats->num_warm_chains++;
+		chain = vacrelstats->warm_chains - vacrelstats->num_warm_chains;
+		chain->chain_tid = *itemptr;
+		chain->is_postwarm_chain = 0;
 	}
+	else
+		vacrelstats->num_non_convertible_warm_chains++;
 }
 
 /*
@@ -2394,17 +2521,39 @@ static void
 lazy_record_warm_chain(LVRelStats *vacrelstats,
 					   ItemPointer itemptr)
 {
+	char *end_deads, *end_warms;
+	Size free_work_area;
+
+	if (vacrelstats->warm_chains == NULL)
+	{
+		vacrelstats->num_non_convertible_warm_chains++;
+		return;
+	}
+
+	end_deads = (char *) (vacrelstats->dead_tuples +
+					vacrelstats->num_dead_tuples);
+	end_warms = (char *) (vacrelstats->warm_chains -
+					vacrelstats->num_warm_chains);
+	free_work_area = (end_warms - end_deads);
+
+	Assert(free_work_area >= 0);
+
 	/*
 	 * The array shouldn't overflow under normal behavior, but perhaps it
 	 * could if we are given a really small maintenance_work_mem. In that
 	 * case, just forget the last few tuples (we'll get 'em next time).
 	 */
-	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	if (free_work_area >= sizeof (LVWarmChain))
 	{
-		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
-		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 1;
+		LVWarmChain *chain;
+
 		vacrelstats->num_warm_chains++;
+		chain = vacrelstats->warm_chains - vacrelstats->num_warm_chains;
+		chain->chain_tid = *itemptr;
+		chain->is_postwarm_chain = 1;
 	}
+	else
+		vacrelstats->num_non_convertible_warm_chains++;
 }
 
 /*
@@ -2414,12 +2563,20 @@ static void
 lazy_record_dead_tuple(LVRelStats *vacrelstats,
 					   ItemPointer itemptr)
 {
+	char *end_deads = (char *) (vacrelstats->dead_tuples +
+		 	vacrelstats->num_dead_tuples);
+	char *end_warms = (char *) (vacrelstats->warm_chains -
+			vacrelstats->num_warm_chains);
+	Size freespace = (end_warms - end_deads);
+
+	Assert(freespace >= 0);
+	
 	/*
 	 * The array shouldn't overflow under normal behavior, but perhaps it
 	 * could if we are given a really small maintenance_work_mem. In that
 	 * case, just forget the last few tuples (we'll get 'em next time).
 	 */
-	if (vacrelstats->num_dead_tuples < vacrelstats->max_dead_tuples)
+	if (freespace >= sizeof(ItemPointerData))
 	{
 		vacrelstats->dead_tuples[vacrelstats->num_dead_tuples] = *itemptr;
 		vacrelstats->num_dead_tuples++;
@@ -2472,10 +2629,10 @@ lazy_indexvac_phase1(ItemPointer itemptr, bool is_warm, void *state)
 		return IBDCR_DELETE;
 
 	chain = (LVWarmChain *) bsearch((void *) itemptr,
-								(void *) vacrelstats->warm_chains,
-								vacrelstats->num_warm_chains,
-								sizeof(LVWarmChain),
-								vac_cmp_warm_chain);
+				(void *) (vacrelstats->warm_chains - vacrelstats->num_warm_chains),
+				vacrelstats->num_warm_chains,
+				sizeof(LVWarmChain),
+				vac_cmp_warm_chain);
 	if (chain != NULL)
 	{
 		if (is_warm)
@@ -2495,13 +2652,13 @@ static IndexBulkDeleteCallbackResult
 lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state)
 {
 	LVRelStats		*vacrelstats = (LVRelStats *) state;
-	LVWarmChain	*chain;
+	LVWarmChain		*chain;
 
 	chain = (LVWarmChain *) bsearch((void *) itemptr,
-								(void *) vacrelstats->warm_chains,
-								vacrelstats->num_warm_chains,
-								sizeof(LVWarmChain),
-								vac_cmp_warm_chain);
+				(void *) (vacrelstats->warm_chains - vacrelstats->num_warm_chains),
+				vacrelstats->num_warm_chains,
+				sizeof(LVWarmChain),
+				vac_cmp_warm_chain);
 
 	if (chain != NULL && (chain->keep_warm_chain != 1))
 	{
@@ -2600,6 +2757,7 @@ lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state)
 		 * index pointers.
 		 */
 		chain->keep_warm_chain = 1;
+		vacrelstats->num_non_convertible_warm_chains++;
 		return IBDCR_KEEP;
 	}
 	return IBDCR_KEEP;
@@ -2608,6 +2766,9 @@ lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state)
 /*
  * Comparator routines for use with qsort() and bsearch(). Similar to
  * vac_cmp_itemptr, but right hand argument is LVWarmChain struct pointer.
+ *
+ * The warm_chains array is sorted in descending order hence the return values
+ * are flipped.
  */
 static int
 vac_cmp_warm_chain(const void *left, const void *right)
@@ -2621,17 +2782,17 @@ vac_cmp_warm_chain(const void *left, const void *right)
 	rblk = ItemPointerGetBlockNumber(&((LVWarmChain *) right)->chain_tid);
 
 	if (lblk < rblk)
-		return -1;
-	if (lblk > rblk)
 		return 1;
+	if (lblk > rblk)
+		return -1;
 
 	loff = ItemPointerGetOffsetNumber((ItemPointer) left);
 	roff = ItemPointerGetOffsetNumber(&((LVWarmChain *) right)->chain_tid);
 
 	if (loff < roff)
-		return -1;
-	if (loff > roff)
 		return 1;
+	if (loff > roff)
+		return -1;
 
 	return 0;
 }
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9d53a29..1592220 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -433,7 +433,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	overlay_placing substr_from substr_for
 
 %type <boolean> opt_instead
-%type <boolean> opt_unique opt_concurrently opt_verbose opt_full
+%type <boolean> opt_unique opt_concurrently opt_verbose opt_full opt_warmclean
 %type <boolean> opt_freeze opt_default opt_recheck
 %type <defelt>	opt_binary opt_oids copy_delimiter
 
@@ -684,7 +684,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	VACUUM VALID VALIDATE VALIDATOR VALUE_P VALUES VARCHAR VARIADIC VARYING
 	VERBOSE VERSION_P VIEW VIEWS VOLATILE
 
-	WHEN WHERE WHITESPACE_P WINDOW WITH WITHIN WITHOUT WORK WRAPPER WRITE
+	WARMCLEAN WHEN WHERE WHITESPACE_P WINDOW WITH WITHIN WITHOUT WORK WRAPPER WRITE
 
 	XML_P XMLATTRIBUTES XMLCONCAT XMLELEMENT XMLEXISTS XMLFOREST XMLNAMESPACES
 	XMLPARSE XMLPI XMLROOT XMLSERIALIZE XMLTABLE
@@ -10059,7 +10059,7 @@ cluster_index_specification:
  *
  *****************************************************************************/
 
-VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
+VacuumStmt: VACUUM opt_full opt_freeze opt_verbose opt_warmclean
 				{
 					VacuumStmt *n = makeNode(VacuumStmt);
 					n->options = VACOPT_VACUUM;
@@ -10069,11 +10069,13 @@ VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
 						n->options |= VACOPT_FREEZE;
 					if ($4)
 						n->options |= VACOPT_VERBOSE;
+					if ($5)
+						n->options |= VACOPT_WARM_CLEANUP;
 					n->relation = NULL;
 					n->va_cols = NIL;
 					$$ = (Node *)n;
 				}
-			| VACUUM opt_full opt_freeze opt_verbose qualified_name
+			| VACUUM opt_full opt_freeze opt_verbose opt_warmclean qualified_name
 				{
 					VacuumStmt *n = makeNode(VacuumStmt);
 					n->options = VACOPT_VACUUM;
@@ -10083,13 +10085,15 @@ VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
 						n->options |= VACOPT_FREEZE;
 					if ($4)
 						n->options |= VACOPT_VERBOSE;
-					n->relation = $5;
+					if ($5)
+						n->options |= VACOPT_WARM_CLEANUP;
+					n->relation = $6;
 					n->va_cols = NIL;
 					$$ = (Node *)n;
 				}
-			| VACUUM opt_full opt_freeze opt_verbose AnalyzeStmt
+			| VACUUM opt_full opt_freeze opt_verbose opt_warmclean AnalyzeStmt
 				{
-					VacuumStmt *n = (VacuumStmt *) $5;
+					VacuumStmt *n = (VacuumStmt *) $6;
 					n->options |= VACOPT_VACUUM;
 					if ($2)
 						n->options |= VACOPT_FULL;
@@ -10097,6 +10101,8 @@ VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
 						n->options |= VACOPT_FREEZE;
 					if ($4)
 						n->options |= VACOPT_VERBOSE;
+					if ($5)
+						n->options |= VACOPT_WARM_CLEANUP;
 					$$ = (Node *)n;
 				}
 			| VACUUM '(' vacuum_option_list ')'
@@ -10129,6 +10135,7 @@ vacuum_option_elem:
 			| VERBOSE			{ $$ = VACOPT_VERBOSE; }
 			| FREEZE			{ $$ = VACOPT_FREEZE; }
 			| FULL				{ $$ = VACOPT_FULL; }
+			| WARMCLEAN			{ $$ = VACOPT_WARM_CLEANUP; }
 			| IDENT
 				{
 					if (strcmp($1, "disable_page_skipping") == 0)
@@ -10182,6 +10189,10 @@ opt_freeze: FREEZE									{ $$ = TRUE; }
 			| /*EMPTY*/								{ $$ = FALSE; }
 		;
 
+opt_warmclean: WARMCLEAN							{ $$ = TRUE; }
+			| /*EMPTY*/								{ $$ = FALSE; }
+		;
+
 opt_name_list:
 			'(' name_list ')'						{ $$ = $2; }
 			| /*EMPTY*/								{ $$ = NIL; }
@@ -14886,6 +14897,7 @@ type_func_name_keyword:
 			| SIMILAR
 			| TABLESAMPLE
 			| VERBOSE
+			| WARMCLEAN
 		;
 
 /* Reserved keyword --- these keywords are usable only as a ColLabel.
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 33ca749..91793e4 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -115,6 +115,8 @@ int			autovacuum_vac_thresh;
 double		autovacuum_vac_scale;
 int			autovacuum_anl_thresh;
 double		autovacuum_anl_scale;
+double		autovacuum_warmcleanup_scale;
+double		autovacuum_warmcleanup_index_scale;
 int			autovacuum_freeze_max_age;
 int			autovacuum_multixact_freeze_max_age;
 
@@ -307,7 +309,8 @@ static void relation_needs_vacanalyze(Oid relid, AutoVacOpts *relopts,
 						  Form_pg_class classForm,
 						  PgStat_StatTabEntry *tabentry,
 						  int effective_multixact_freeze_max_age,
-						  bool *dovacuum, bool *doanalyze, bool *wraparound);
+						  bool *dovacuum, bool *doanalyze, bool *wraparound,
+						  bool *dowarmcleanup);
 
 static void autovacuum_do_vac_analyze(autovac_table *tab,
 						  BufferAccessStrategy bstrategy);
@@ -2010,6 +2013,7 @@ do_autovacuum(void)
 		bool		dovacuum;
 		bool		doanalyze;
 		bool		wraparound;
+		bool		dowarmcleanup;
 
 		if (classForm->relkind != RELKIND_RELATION &&
 			classForm->relkind != RELKIND_MATVIEW)
@@ -2049,10 +2053,14 @@ do_autovacuum(void)
 		tabentry = get_pgstat_tabentry_relid(relid, classForm->relisshared,
 											 shared, dbentry);
 
-		/* Check if it needs vacuum or analyze */
+		/*
+		 * Check if it needs vacuum or analyze. For vacuum, also check if it
+		 * needs WARM cleanup.
+		 */
 		relation_needs_vacanalyze(relid, relopts, classForm, tabentry,
 								  effective_multixact_freeze_max_age,
-								  &dovacuum, &doanalyze, &wraparound);
+								  &dovacuum, &doanalyze, &wraparound,
+								  &dowarmcleanup);
 
 		/* Relations that need work are added to table_oids */
 		if (dovacuum || doanalyze)
@@ -2105,6 +2113,7 @@ do_autovacuum(void)
 		bool		dovacuum;
 		bool		doanalyze;
 		bool		wraparound;
+		bool		dowarmcleanup;
 
 		/*
 		 * We cannot safely process other backends' temp tables, so skip 'em.
@@ -2135,7 +2144,8 @@ do_autovacuum(void)
 
 		relation_needs_vacanalyze(relid, relopts, classForm, tabentry,
 								  effective_multixact_freeze_max_age,
-								  &dovacuum, &doanalyze, &wraparound);
+								  &dovacuum, &doanalyze, &wraparound,
+								  &dowarmcleanup);
 
 		/* ignore analyze for toast tables */
 		if (dovacuum)
@@ -2566,6 +2576,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 	HeapTuple	classTup;
 	bool		dovacuum;
 	bool		doanalyze;
+	bool		dowarmcleanup;
 	autovac_table *tab = NULL;
 	PgStat_StatTabEntry *tabentry;
 	PgStat_StatDBEntry *shared;
@@ -2607,7 +2618,8 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 
 	relation_needs_vacanalyze(relid, avopts, classForm, tabentry,
 							  effective_multixact_freeze_max_age,
-							  &dovacuum, &doanalyze, &wraparound);
+							  &dovacuum, &doanalyze, &wraparound,
+							  &dowarmcleanup);
 
 	/* ignore ANALYZE for toast tables */
 	if (classForm->relkind == RELKIND_TOASTVALUE)
@@ -2623,6 +2635,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		int			vac_cost_limit;
 		int			vac_cost_delay;
 		int			log_min_duration;
+		double		warmcleanup_index_scale;
 
 		/*
 		 * Calculate the vacuum cost parameters and the freeze ages.  If there
@@ -2669,19 +2682,26 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 			? avopts->multixact_freeze_table_age
 			: default_multixact_freeze_table_age;
 
+		warmcleanup_index_scale = (avopts &&
+								   avopts->warmcleanup_index_scale >= 0)
+			? avopts->warmcleanup_index_scale
+			: autovacuum_warmcleanup_index_scale;
+
 		tab = palloc(sizeof(autovac_table));
 		tab->at_relid = relid;
 		tab->at_sharedrel = classForm->relisshared;
 		tab->at_vacoptions = VACOPT_SKIPTOAST |
 			(dovacuum ? VACOPT_VACUUM : 0) |
 			(doanalyze ? VACOPT_ANALYZE : 0) |
-			(!wraparound ? VACOPT_NOWAIT : 0);
+			(!wraparound ? VACOPT_NOWAIT : 0) |
+			(dowarmcleanup ? VACOPT_WARM_CLEANUP : 0);
 		tab->at_params.freeze_min_age = freeze_min_age;
 		tab->at_params.freeze_table_age = freeze_table_age;
 		tab->at_params.multixact_freeze_min_age = multixact_freeze_min_age;
 		tab->at_params.multixact_freeze_table_age = multixact_freeze_table_age;
 		tab->at_params.is_wraparound = wraparound;
 		tab->at_params.log_min_duration = log_min_duration;
+		tab->at_params.warmcleanup_index_scale = warmcleanup_index_scale;
 		tab->at_vacuum_cost_limit = vac_cost_limit;
 		tab->at_vacuum_cost_delay = vac_cost_delay;
 		tab->at_relname = NULL;
@@ -2748,7 +2768,8 @@ relation_needs_vacanalyze(Oid relid,
  /* output params below */
 						  bool *dovacuum,
 						  bool *doanalyze,
-						  bool *wraparound)
+						  bool *wraparound,
+						  bool *dowarmcleanup)
 {
 	bool		force_vacuum;
 	bool		av_enabled;
@@ -2760,6 +2781,9 @@ relation_needs_vacanalyze(Oid relid,
 	float4		vac_scale_factor,
 				anl_scale_factor;
 
+	/* constant from reloptions or GUC variable */
+	float4		warmcleanup_scale_factor;
+
 	/* thresholds calculated from above constants */
 	float4		vacthresh,
 				anlthresh;
@@ -2768,6 +2792,9 @@ relation_needs_vacanalyze(Oid relid,
 	float4		vactuples,
 				anltuples;
 
+	/* number of WARM chains in the table */
+	float4		warmchains;
+
 	/* freeze parameters */
 	int			freeze_max_age;
 	int			multixact_freeze_max_age;
@@ -2800,6 +2827,11 @@ relation_needs_vacanalyze(Oid relid,
 		? relopts->analyze_threshold
 		: autovacuum_anl_thresh;
 
+	/* Use table specific value or the GUC value */
+	warmcleanup_scale_factor = (relopts && relopts->warmcleanup_scale_factor >= 0)
+		? relopts->warmcleanup_scale_factor
+		: autovacuum_warmcleanup_scale;
+
 	freeze_max_age = (relopts && relopts->freeze_max_age >= 0)
 		? Min(relopts->freeze_max_age, autovacuum_freeze_max_age)
 		: autovacuum_freeze_max_age;
@@ -2847,6 +2879,7 @@ relation_needs_vacanalyze(Oid relid,
 		reltuples = classForm->reltuples;
 		vactuples = tabentry->n_dead_tuples;
 		anltuples = tabentry->changes_since_analyze;
+		warmchains = tabentry->n_warm_chains;
 
 		vacthresh = (float4) vac_base_thresh + vac_scale_factor * reltuples;
 		anlthresh = (float4) anl_base_thresh + anl_scale_factor * reltuples;
@@ -2863,6 +2896,17 @@ relation_needs_vacanalyze(Oid relid,
 		/* Determine if this table needs vacuum or analyze. */
 		*dovacuum = force_vacuum || (vactuples > vacthresh);
 		*doanalyze = (anltuples > anlthresh);
+
+		/*
+		 * If the number of WARM chains in the table exceeds the configured
+		 * fraction of reltuples, we schedule a WARM cleanup as well. This
+		 * only triggers cleanup at the table level; we then look at each
+		 * index and clean it up only if the WARM pointers in that index
+		 * exceed the configured index-level scale factor, which
+		 * lazy_vacuum_index() deals with later.
+		 */
+		if (*dovacuum && (warmcleanup_scale_factor * reltuples < warmchains))
+			*dowarmcleanup = true;
 	}
 	else
 	{
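
For reviewers skimming the hunk above: the table-level trigger added to relation_needs_vacanalyze() boils down to one comparison against a scale factor, in the same style as the existing vacuum threshold. A minimal standalone sketch, with illustrative names rather than the actual PostgreSQL symbols:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Sketch of the table-level WARM cleanup trigger: cleanup piggybacks on an
 * already-scheduled vacuum once the WARM chain count exceeds
 * warmcleanup_scale_factor * reltuples.  Names are illustrative only.
 */
static bool
needs_warm_cleanup(bool dovacuum, double reltuples,
				   double warmchains, double scale_factor)
{
	/* mirrors: if (*dovacuum && (warmcleanup_scale_factor * reltuples < warmchains)) */
	return dovacuum && (scale_factor * reltuples < warmchains);
}
```

Note that cleanup never fires on its own; it only rides along when a vacuum is already due.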
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index 52fe4ba..f38ce8a 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -226,9 +226,11 @@ typedef struct TwoPhasePgStatRecord
 	PgStat_Counter tuples_inserted;		/* tuples inserted in xact */
 	PgStat_Counter tuples_updated;		/* tuples updated in xact */
 	PgStat_Counter tuples_deleted;		/* tuples deleted in xact */
+	PgStat_Counter tuples_warm_updated;	/* tuples warm updated in xact */
 	PgStat_Counter inserted_pre_trunc;	/* tuples inserted prior to truncate */
 	PgStat_Counter updated_pre_trunc;	/* tuples updated prior to truncate */
 	PgStat_Counter deleted_pre_trunc;	/* tuples deleted prior to truncate */
+	PgStat_Counter warm_updated_pre_trunc;	/* tuples warm updated prior to truncate */
 	Oid			t_id;			/* table's OID */
 	bool		t_shared;		/* is it a shared catalog? */
 	bool		t_truncated;	/* was the relation truncated? */
@@ -1367,7 +1369,8 @@ pgstat_report_autovac(Oid dboid)
  */
 void
 pgstat_report_vacuum(Oid tableoid, bool shared,
-					 PgStat_Counter livetuples, PgStat_Counter deadtuples)
+					 PgStat_Counter livetuples, PgStat_Counter deadtuples,
+					 PgStat_Counter warmchains)
 {
 	PgStat_MsgVacuum msg;
 
@@ -1381,6 +1384,7 @@ pgstat_report_vacuum(Oid tableoid, bool shared,
 	msg.m_vacuumtime = GetCurrentTimestamp();
 	msg.m_live_tuples = livetuples;
 	msg.m_dead_tuples = deadtuples;
+	msg.m_warm_chains = warmchains;
 	pgstat_send(&msg, sizeof(msg));
 }
 
@@ -1396,7 +1400,7 @@ pgstat_report_vacuum(Oid tableoid, bool shared,
 void
 pgstat_report_analyze(Relation rel,
 					  PgStat_Counter livetuples, PgStat_Counter deadtuples,
-					  bool resetcounter)
+					  PgStat_Counter warmchains, bool resetcounter)
 {
 	PgStat_MsgAnalyze msg;
 
@@ -1421,12 +1425,14 @@ pgstat_report_analyze(Relation rel,
 		{
 			livetuples -= trans->tuples_inserted - trans->tuples_deleted;
 			deadtuples -= trans->tuples_updated + trans->tuples_deleted;
+			warmchains -= trans->tuples_warm_updated;
 		}
 		/* count stuff inserted by already-aborted subxacts, too */
 		deadtuples -= rel->pgstat_info->t_counts.t_delta_dead_tuples;
 		/* Since ANALYZE's counts are estimates, we could have underflowed */
 		livetuples = Max(livetuples, 0);
 		deadtuples = Max(deadtuples, 0);
+		warmchains = Max(warmchains, 0);
 	}
 
 	pgstat_setheader(&msg.m_hdr, PGSTAT_MTYPE_ANALYZE);
@@ -1437,6 +1443,7 @@ pgstat_report_analyze(Relation rel,
 	msg.m_analyzetime = GetCurrentTimestamp();
 	msg.m_live_tuples = livetuples;
 	msg.m_dead_tuples = deadtuples;
+	msg.m_warm_chains = warmchains;
 	pgstat_send(&msg, sizeof(msg));
 }
 
@@ -1907,7 +1914,10 @@ pgstat_count_heap_update(Relation rel, bool hot, bool warm)
 		if (hot)
 			pgstat_info->t_counts.t_tuples_hot_updated++;
 		else if (warm)
+		{
+			pgstat_info->trans->tuples_warm_updated++;
 			pgstat_info->t_counts.t_tuples_warm_updated++;
+		}
 	}
 }
 
@@ -2070,6 +2080,12 @@ AtEOXact_PgStat(bool isCommit)
 				/* update and delete each create a dead tuple */
 				tabstat->t_counts.t_delta_dead_tuples +=
 					trans->tuples_updated + trans->tuples_deleted;
+				/*
+				 * Whether the xact commits or aborts, a WARM update creates
+				 * a WARM chain which needs cleanup.
+				 */
+				tabstat->t_counts.t_delta_warm_chains +=
+					trans->tuples_warm_updated;
 				/* insert, update, delete each count as one change event */
 				tabstat->t_counts.t_changed_tuples +=
 					trans->tuples_inserted + trans->tuples_updated +
@@ -2080,6 +2096,12 @@ AtEOXact_PgStat(bool isCommit)
 				/* inserted tuples are dead, deleted tuples are unaffected */
 				tabstat->t_counts.t_delta_dead_tuples +=
 					trans->tuples_inserted + trans->tuples_updated;
+				/*
+				 * Whether the xact commits or aborts, a WARM update creates
+				 * a WARM chain which needs cleanup.
+				 */
+				tabstat->t_counts.t_delta_warm_chains +=
+					trans->tuples_warm_updated;
 				/* an aborted xact generates no changed_tuple events */
 			}
 			tabstat->trans = NULL;
@@ -2136,12 +2158,16 @@ AtEOSubXact_PgStat(bool isCommit, int nestDepth)
 						trans->upper->tuples_inserted = trans->tuples_inserted;
 						trans->upper->tuples_updated = trans->tuples_updated;
 						trans->upper->tuples_deleted = trans->tuples_deleted;
+						trans->upper->tuples_warm_updated =
+							trans->tuples_warm_updated;
 					}
 					else
 					{
 						trans->upper->tuples_inserted += trans->tuples_inserted;
 						trans->upper->tuples_updated += trans->tuples_updated;
 						trans->upper->tuples_deleted += trans->tuples_deleted;
+						trans->upper->tuples_warm_updated +=
+							trans->tuples_warm_updated;
 					}
 					tabstat->trans = trans->upper;
 					pfree(trans);
@@ -2177,9 +2203,13 @@ AtEOSubXact_PgStat(bool isCommit, int nestDepth)
 				tabstat->t_counts.t_tuples_inserted += trans->tuples_inserted;
 				tabstat->t_counts.t_tuples_updated += trans->tuples_updated;
 				tabstat->t_counts.t_tuples_deleted += trans->tuples_deleted;
+				tabstat->t_counts.t_tuples_warm_updated +=
+					trans->tuples_warm_updated;
 				/* inserted tuples are dead, deleted tuples are unaffected */
 				tabstat->t_counts.t_delta_dead_tuples +=
 					trans->tuples_inserted + trans->tuples_updated;
+				tabstat->t_counts.t_delta_warm_chains +=
+					trans->tuples_warm_updated;
 				tabstat->trans = trans->upper;
 				pfree(trans);
 			}
@@ -2221,9 +2251,11 @@ AtPrepare_PgStat(void)
 			record.tuples_inserted = trans->tuples_inserted;
 			record.tuples_updated = trans->tuples_updated;
 			record.tuples_deleted = trans->tuples_deleted;
+			record.tuples_warm_updated = trans->tuples_warm_updated;
 			record.inserted_pre_trunc = trans->inserted_pre_trunc;
 			record.updated_pre_trunc = trans->updated_pre_trunc;
 			record.deleted_pre_trunc = trans->deleted_pre_trunc;
+			record.warm_updated_pre_trunc = trans->warm_updated_pre_trunc;
 			record.t_id = tabstat->t_id;
 			record.t_shared = tabstat->t_shared;
 			record.t_truncated = trans->truncated;
@@ -2298,11 +2330,14 @@ pgstat_twophase_postcommit(TransactionId xid, uint16 info,
 		/* forget live/dead stats seen by backend thus far */
 		pgstat_info->t_counts.t_delta_live_tuples = 0;
 		pgstat_info->t_counts.t_delta_dead_tuples = 0;
+		pgstat_info->t_counts.t_delta_warm_chains = 0;
 	}
 	pgstat_info->t_counts.t_delta_live_tuples +=
 		rec->tuples_inserted - rec->tuples_deleted;
 	pgstat_info->t_counts.t_delta_dead_tuples +=
 		rec->tuples_updated + rec->tuples_deleted;
+	pgstat_info->t_counts.t_delta_warm_chains +=
+		rec->tuples_warm_updated;
 	pgstat_info->t_counts.t_changed_tuples +=
 		rec->tuples_inserted + rec->tuples_updated +
 		rec->tuples_deleted;
@@ -2330,12 +2365,16 @@ pgstat_twophase_postabort(TransactionId xid, uint16 info,
 		rec->tuples_inserted = rec->inserted_pre_trunc;
 		rec->tuples_updated = rec->updated_pre_trunc;
 		rec->tuples_deleted = rec->deleted_pre_trunc;
+		rec->tuples_warm_updated = rec->warm_updated_pre_trunc;
 	}
 	pgstat_info->t_counts.t_tuples_inserted += rec->tuples_inserted;
 	pgstat_info->t_counts.t_tuples_updated += rec->tuples_updated;
 	pgstat_info->t_counts.t_tuples_deleted += rec->tuples_deleted;
+	pgstat_info->t_counts.t_tuples_warm_updated += rec->tuples_warm_updated;
 	pgstat_info->t_counts.t_delta_dead_tuples +=
 		rec->tuples_inserted + rec->tuples_updated;
+	pgstat_info->t_counts.t_delta_warm_chains +=
+		rec->tuples_warm_updated;
 }
 
 
@@ -4526,6 +4565,7 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->tuples_warm_updated = 0;
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
+		result->n_warm_chains = 0;
 		result->changes_since_analyze = 0;
 		result->blocks_fetched = 0;
 		result->blocks_hit = 0;
@@ -5636,6 +5676,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_warm_updated = tabmsg->t_counts.t_tuples_warm_updated;
 			tabentry->n_live_tuples = tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples = tabmsg->t_counts.t_delta_dead_tuples;
+			tabentry->n_warm_chains = tabmsg->t_counts.t_delta_warm_chains;
 			tabentry->changes_since_analyze = tabmsg->t_counts.t_changed_tuples;
 			tabentry->blocks_fetched = tabmsg->t_counts.t_blocks_fetched;
 			tabentry->blocks_hit = tabmsg->t_counts.t_blocks_hit;
@@ -5667,9 +5708,11 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			{
 				tabentry->n_live_tuples = 0;
 				tabentry->n_dead_tuples = 0;
+				tabentry->n_warm_chains = 0;
 			}
 			tabentry->n_live_tuples += tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples += tabmsg->t_counts.t_delta_dead_tuples;
+			tabentry->n_warm_chains += tabmsg->t_counts.t_delta_warm_chains;
 			tabentry->changes_since_analyze += tabmsg->t_counts.t_changed_tuples;
 			tabentry->blocks_fetched += tabmsg->t_counts.t_blocks_fetched;
 			tabentry->blocks_hit += tabmsg->t_counts.t_blocks_hit;
@@ -5679,6 +5722,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 		tabentry->n_live_tuples = Max(tabentry->n_live_tuples, 0);
 		/* Likewise for n_dead_tuples */
 		tabentry->n_dead_tuples = Max(tabentry->n_dead_tuples, 0);
+		tabentry->n_warm_chains = Max(tabentry->n_warm_chains, 0);
 
 		/*
 		 * Add per-table stats to the per-database entry, too.
@@ -5904,6 +5948,7 @@ pgstat_recv_vacuum(PgStat_MsgVacuum *msg, int len)
 
 	tabentry->n_live_tuples = msg->m_live_tuples;
 	tabentry->n_dead_tuples = msg->m_dead_tuples;
+	tabentry->n_warm_chains = msg->m_warm_chains;
 
 	if (msg->m_autovacuum)
 	{
@@ -5938,6 +5983,7 @@ pgstat_recv_analyze(PgStat_MsgAnalyze *msg, int len)
 
 	tabentry->n_live_tuples = msg->m_live_tuples;
 	tabentry->n_dead_tuples = msg->m_dead_tuples;
+	tabentry->n_warm_chains = msg->m_warm_chains;
 
 	/*
 	 * If commanded, reset changes_since_analyze to zero.  This forgets any
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 713d731..907e570 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -192,6 +192,21 @@ pg_stat_get_dead_tuples(PG_FUNCTION_ARGS)
 	PG_RETURN_INT64(result);
 }
 
+Datum
+pg_stat_get_warm_chains(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_StatTabEntry *tabentry;
+
+	if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->n_warm_chains);
+
+	PG_RETURN_INT64(result);
+}
+
 
 Datum
 pg_stat_get_mod_since_analyze(PG_FUNCTION_ARGS)
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 08b6030..81fec03 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,6 +130,7 @@ int			VacuumCostPageMiss = 10;
 int			VacuumCostPageDirty = 20;
 int			VacuumCostLimit = 200;
 int			VacuumCostDelay = 0;
+double		VacuumWarmCleanupScale;
 
 int			VacuumPageHit = 0;
 int			VacuumPageMiss = 0;
@@ -137,3 +138,5 @@ int			VacuumPageDirty = 0;
 
 int			VacuumCostBalance = 0;		/* working state for vacuum */
 bool		VacuumCostActive = false;
+
+double		VacuumWarmCleanupIndexScale = 1;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index e9d561b..96b8918 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -3016,6 +3016,36 @@ static struct config_real ConfigureNamesReal[] =
 	},
 
 	{
+		{"autovacuum_warmcleanup_scale_factor", PGC_SIGHUP, AUTOVACUUM,
+			gettext_noop("Number of WARM chains, as a fraction of reltuples, needed to trigger WARM cleanup."),
+			NULL
+		},
+		&autovacuum_warmcleanup_scale,
+		0.1, 0.0, 100.0,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"autovacuum_warmcleanup_index_scale_factor", PGC_SIGHUP, AUTOVACUUM,
+			gettext_noop("Number of WARM pointers in an index, as a fraction of total WARM chains, needed to trigger cleanup of that index."),
+			NULL
+		},
+		&autovacuum_warmcleanup_index_scale,
+		0.2, 0.0, 100.0,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"vacuum_warmcleanup_index_scale_factor", PGC_USERSET, WARM_CLEANUP,
+			gettext_noop("Number of WARM pointers in an index, as a fraction of total WARM chains, needed to trigger cleanup of that index during VACUUM."),
+			NULL
+		},
+		&VacuumWarmCleanupIndexScale,
+		0.2, 0.0, 100.0,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"checkpoint_completion_target", PGC_SIGHUP, WAL_CHECKPOINTS,
 			gettext_noop("Time spent flushing dirty buffers during checkpoint, as fraction of checkpoint interval."),
 			NULL
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index 8587135..82b9af4 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -2789,6 +2789,8 @@ DATA(insert OID = 2878 (  pg_stat_get_live_tuples	PGNSP PGUID 12 1 0 0 0 f f f f
 DESCR("statistics: number of live tuples");
 DATA(insert OID = 2879 (  pg_stat_get_dead_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_dead_tuples _null_ _null_ _null_ ));
 DESCR("statistics: number of dead tuples");
+DATA(insert OID = 3374 (  pg_stat_get_warm_chains	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_warm_chains _null_ _null_ _null_ ));
+DESCR("statistics: number of warm chains");
 DATA(insert OID = 3177 (  pg_stat_get_mod_since_analyze PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_mod_since_analyze _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples changed since last analyze");
 DATA(insert OID = 1934 (  pg_stat_get_blocks_fetched	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_blocks_fetched _null_ _null_ _null_ ));
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index 541c2fa..9914143 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -145,6 +145,8 @@ typedef struct VacuumParams
 	int			log_min_duration;		/* minimum execution threshold in ms
 										 * at which  verbose logs are
 										 * activated, -1 to use default */
+	double		warmcleanup_index_scale; /* Fraction of WARM pointers to cause
+										  * index WARM cleanup */
 } VacuumParams;
 
 /* GUC parameters */
diff --git a/src/include/foreign/fdwapi.h b/src/include/foreign/fdwapi.h
index 6ca44f7..2993b1a 100644
--- a/src/include/foreign/fdwapi.h
+++ b/src/include/foreign/fdwapi.h
@@ -134,7 +134,8 @@ typedef void (*ExplainDirectModify_function) (ForeignScanState *node,
 typedef int (*AcquireSampleRowsFunc) (Relation relation, int elevel,
 											   HeapTuple *rows, int targrows,
 												  double *totalrows,
-												  double *totaldeadrows);
+												  double *totaldeadrows,
+												  double *totalwarmchains);
 
 typedef bool (*AnalyzeForeignTable_function) (Relation relation,
 												 AcquireSampleRowsFunc *func,
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 4c607b2..901960a 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -255,6 +255,7 @@ extern int	VacuumPageDirty;
 extern int	VacuumCostBalance;
 extern bool VacuumCostActive;
 
+extern double VacuumWarmCleanupIndexScale;
 
 /* in tcop/postgres.c */
 
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3a71dd5..f842374 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3035,7 +3035,8 @@ typedef enum VacuumOption
 	VACOPT_FULL = 1 << 4,		/* FULL (non-concurrent) vacuum */
 	VACOPT_NOWAIT = 1 << 5,		/* don't wait to get lock (autovacuum only) */
 	VACOPT_SKIPTOAST = 1 << 6,	/* don't process the TOAST table, if any */
-	VACOPT_DISABLE_PAGE_SKIPPING = 1 << 7		/* don't skip any pages */
+	VACOPT_DISABLE_PAGE_SKIPPING = 1 << 7,		/* don't skip any pages */
+	VACOPT_WARM_CLEANUP = 1 << 8	/* do WARM cleanup */
 } VacuumOption;
 
 typedef struct VacuumStmt
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index cd21a78..7d9818b 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -433,6 +433,7 @@ PG_KEYWORD("version", VERSION_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("view", VIEW, UNRESERVED_KEYWORD)
 PG_KEYWORD("views", VIEWS, UNRESERVED_KEYWORD)
 PG_KEYWORD("volatile", VOLATILE, UNRESERVED_KEYWORD)
+PG_KEYWORD("warmclean", WARMCLEAN, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("when", WHEN, RESERVED_KEYWORD)
 PG_KEYWORD("where", WHERE, RESERVED_KEYWORD)
 PG_KEYWORD("whitespace", WHITESPACE_P, UNRESERVED_KEYWORD)
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 99bdc8b..883cbd4 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -110,6 +110,7 @@ typedef struct PgStat_TableCounts
 
 	PgStat_Counter t_delta_live_tuples;
 	PgStat_Counter t_delta_dead_tuples;
+	PgStat_Counter t_delta_warm_chains;
 	PgStat_Counter t_changed_tuples;
 
 	PgStat_Counter t_blocks_fetched;
@@ -167,11 +168,13 @@ typedef struct PgStat_TableXactStatus
 {
 	PgStat_Counter tuples_inserted;		/* tuples inserted in (sub)xact */
 	PgStat_Counter tuples_updated;		/* tuples updated in (sub)xact */
+	PgStat_Counter tuples_warm_updated;	/* tuples warm-updated in (sub)xact */
 	PgStat_Counter tuples_deleted;		/* tuples deleted in (sub)xact */
 	bool		truncated;		/* relation truncated in this (sub)xact */
 	PgStat_Counter inserted_pre_trunc;	/* tuples inserted prior to truncate */
 	PgStat_Counter updated_pre_trunc;	/* tuples updated prior to truncate */
 	PgStat_Counter deleted_pre_trunc;	/* tuples deleted prior to truncate */
+	PgStat_Counter warm_updated_pre_trunc;	/* tuples warm updated prior to truncate */
 	int			nest_level;		/* subtransaction nest level */
 	/* links to other structs for same relation: */
 	struct PgStat_TableXactStatus *upper;		/* next higher subxact if any */
@@ -370,6 +373,7 @@ typedef struct PgStat_MsgVacuum
 	TimestampTz m_vacuumtime;
 	PgStat_Counter m_live_tuples;
 	PgStat_Counter m_dead_tuples;
+	PgStat_Counter m_warm_chains;
 } PgStat_MsgVacuum;
 
 
@@ -388,6 +392,7 @@ typedef struct PgStat_MsgAnalyze
 	TimestampTz m_analyzetime;
 	PgStat_Counter m_live_tuples;
 	PgStat_Counter m_dead_tuples;
+	PgStat_Counter m_warm_chains;
 } PgStat_MsgAnalyze;
 
 
@@ -630,6 +635,7 @@ typedef struct PgStat_StatTabEntry
 
 	PgStat_Counter n_live_tuples;
 	PgStat_Counter n_dead_tuples;
+	PgStat_Counter n_warm_chains;
 	PgStat_Counter changes_since_analyze;
 
 	PgStat_Counter blocks_fetched;
@@ -1156,10 +1162,11 @@ extern void pgstat_reset_single_counter(Oid objectid, PgStat_Single_Reset_Type t
 
 extern void pgstat_report_autovac(Oid dboid);
 extern void pgstat_report_vacuum(Oid tableoid, bool shared,
-					 PgStat_Counter livetuples, PgStat_Counter deadtuples);
+					 PgStat_Counter livetuples, PgStat_Counter deadtuples,
+					 PgStat_Counter warmchains);
 extern void pgstat_report_analyze(Relation rel,
 					  PgStat_Counter livetuples, PgStat_Counter deadtuples,
-					  bool resetcounter);
+					  PgStat_Counter warmchains, bool resetcounter);
 
 extern void pgstat_report_recovery_conflict(int reason);
 extern void pgstat_report_deadlock(void);
diff --git a/src/include/postmaster/autovacuum.h b/src/include/postmaster/autovacuum.h
index 99d7f09..5ac9c8f 100644
--- a/src/include/postmaster/autovacuum.h
+++ b/src/include/postmaster/autovacuum.h
@@ -28,6 +28,8 @@ extern int	autovacuum_freeze_max_age;
 extern int	autovacuum_multixact_freeze_max_age;
 extern int	autovacuum_vac_cost_delay;
 extern int	autovacuum_vac_cost_limit;
+extern double autovacuum_warmcleanup_scale;
+extern double autovacuum_warmcleanup_index_scale;
 
 /* autovacuum launcher PID, only valid when worker is shutting down */
 extern int	AutovacuumLauncherPid;
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index 2da9115..cd4532b 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -68,6 +68,7 @@ enum config_group
 	WAL_SETTINGS,
 	WAL_CHECKPOINTS,
 	WAL_ARCHIVING,
+	WARM_CLEANUP,
 	REPLICATION,
 	REPLICATION_SENDING,
 	REPLICATION_MASTER,
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index cd1976a..9164f60 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -276,6 +276,8 @@ typedef struct AutoVacOpts
 	int			log_min_duration;
 	float8		vacuum_scale_factor;
 	float8		analyze_scale_factor;
+	float8		warmcleanup_scale_factor;
+	float8		warmcleanup_index_scale;
 } AutoVacOpts;
 
 typedef struct StdRdOptions
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index f7dc4a4..d34aa68 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1759,6 +1759,7 @@ pg_stat_all_tables| SELECT c.oid AS relid,
     pg_stat_get_tuples_warm_updated(c.oid) AS n_tup_warm_upd,
     pg_stat_get_live_tuples(c.oid) AS n_live_tup,
     pg_stat_get_dead_tuples(c.oid) AS n_dead_tup,
+    pg_stat_get_warm_chains(c.oid) AS n_warm_chains,
     pg_stat_get_mod_since_analyze(c.oid) AS n_mod_since_analyze,
     pg_stat_get_last_vacuum_time(c.oid) AS last_vacuum,
     pg_stat_get_last_autovacuum_time(c.oid) AS last_autovacuum,
@@ -1907,6 +1908,7 @@ pg_stat_sys_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
+    pg_stat_all_tables.n_warm_chains,
     pg_stat_all_tables.n_mod_since_analyze,
     pg_stat_all_tables.last_vacuum,
     pg_stat_all_tables.last_autovacuum,
@@ -1951,6 +1953,7 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
+    pg_stat_all_tables.n_warm_chains,
     pg_stat_all_tables.n_mod_since_analyze,
     pg_stat_all_tables.last_vacuum,
     pg_stat_all_tables.last_autovacuum,
diff --git a/src/test/regress/expected/warm.out b/src/test/regress/expected/warm.out
index 1ae2f40..92f8136 100644
--- a/src/test/regress/expected/warm.out
+++ b/src/test/regress/expected/warm.out
@@ -745,6 +745,64 @@ SELECT a, b FROM test_toast_warm WHERE b = 104.20;
 (1 row)
 
 DROP TABLE test_toast_warm;
+-- Test VACUUM
+CREATE TABLE test_vacuum_warm (a int unique, b text, c int, d int);
+CREATE INDEX test_vacuum_warm_index1 ON test_vacuum_warm(b);
+CREATE INDEX test_vacuum_warm_index2 ON test_vacuum_warm(c);
+INSERT INTO test_vacuum_warm VALUES (1, 'a', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (2, 'b', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (3, 'c', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (4, 'd', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (5, 'e', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (6, 'f', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (7, 'g', 100, 200);
+UPDATE test_vacuum_warm SET b = 'u', c = 300 WHERE a = 1;
+UPDATE test_vacuum_warm SET b = 'v', c = 300 WHERE a = 2;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 3;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 4;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 5;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 6;
+-- a plain vacuum cannot clear WARM chains.
+SET enable_seqscan = false;
+SET enable_bitmapscan = false;
+SET seq_page_cost = 10000;
+VACUUM test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+                                        QUERY PLAN                                         
+-------------------------------------------------------------------------------------------
+ Index Only Scan using test_vacuum_warm_index1 on test_vacuum_warm (actual rows=1 loops=1)
+   Index Cond: (b = 'u'::text)
+   Heap Fetches: 1
+(3 rows)
+
+-- Now set vacuum_warmcleanup_index_scale_factor such that only
+-- test_vacuum_warm_index2 can be cleaned up.
+SET vacuum_warmcleanup_index_scale_factor=0.5;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+                                        QUERY PLAN                                         
+-------------------------------------------------------------------------------------------
+ Index Only Scan using test_vacuum_warm_index1 on test_vacuum_warm (actual rows=1 loops=1)
+   Index Cond: (b = 'u'::text)
+   Heap Fetches: 1
+(3 rows)
+
+-- All WARM chains cleaned up, so index-only scan should be used now without
+-- any heap fetches
+SET vacuum_warmcleanup_index_scale_factor=0;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect zero heap-fetches now
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+                                        QUERY PLAN                                         
+-------------------------------------------------------------------------------------------
+ Index Only Scan using test_vacuum_warm_index1 on test_vacuum_warm (actual rows=1 loops=1)
+   Index Cond: (b = 'u'::text)
+   Heap Fetches: 0
+(3 rows)
+
+DROP TABLE test_vacuum_warm;
 -- Toasted heap attributes
 CREATE TABLE toasttest(descr text , cnt int DEFAULT 0, f1 text, f2 text);
 CREATE INDEX testindx1 ON toasttest(descr);
diff --git a/src/test/regress/sql/warm.sql b/src/test/regress/sql/warm.sql
index fb1f93e..964bb6e 100644
--- a/src/test/regress/sql/warm.sql
+++ b/src/test/regress/sql/warm.sql
@@ -285,6 +285,52 @@ SELECT a, b FROM test_toast_warm WHERE b = 104.20;
 
 DROP TABLE test_toast_warm;
 
+-- Test VACUUM
+
+CREATE TABLE test_vacuum_warm (a int unique, b text, c int, d int);
+CREATE INDEX test_vacuum_warm_index1 ON test_vacuum_warm(b);
+CREATE INDEX test_vacuum_warm_index2 ON test_vacuum_warm(c);
+
+INSERT INTO test_vacuum_warm VALUES (1, 'a', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (2, 'b', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (3, 'c', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (4, 'd', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (5, 'e', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (6, 'f', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (7, 'g', 100, 200);
+
+UPDATE test_vacuum_warm SET b = 'u', c = 300 WHERE a = 1;
+UPDATE test_vacuum_warm SET b = 'v', c = 300 WHERE a = 2;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 3;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 4;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 5;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 6;
+
+-- a plain vacuum cannot clear WARM chains.
+SET enable_seqscan = false;
+SET enable_bitmapscan = false;
+SET seq_page_cost = 10000;
+VACUUM test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+
+-- Now set vacuum_warmcleanup_index_scale_factor such that only
+-- test_vacuum_warm_index2 can be cleaned up.
+SET vacuum_warmcleanup_index_scale_factor=0.5;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+
+
+-- All WARM chains cleaned up, so index-only scan should be used now without
+-- any heap fetches
+SET vacuum_warmcleanup_index_scale_factor=0;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect zero heap-fetches now
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+
+DROP TABLE test_vacuum_warm;
+
 -- Toasted heap attributes
 CREATE TABLE toasttest(descr text , cnt int DEFAULT 0, f1 text, f2 text);
 CREATE INDEX testindx1 ON toasttest(descr);
Attachment: 0004_Free-3-bits-of-ItemPointerData.ip_posid_v22.patch (application/octet-stream)
diff --git a/src/include/access/ginblock.h b/src/include/access/ginblock.h
index 438912c..316ab65 100644
--- a/src/include/access/ginblock.h
+++ b/src/include/access/ginblock.h
@@ -135,7 +135,8 @@ typedef struct GinMetaPageData
 	(ItemPointerGetBlockNumberNoCheck(pointer))
 
 #define GinItemPointerGetOffsetNumber(pointer) \
-	(ItemPointerGetOffsetNumberNoCheck(pointer))
+	(ItemPointerGetOffsetNumberNoCheck(pointer) | \
+	 (ItemPointerGetFlags(pointer) << OffsetNumberBits))
 
 #define GinItemPointerSetBlockNumber(pointer, blkno) \
 	(ItemPointerSetBlockNumber((pointer), (blkno)))
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 24433c7..4d614b7 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -288,7 +288,7 @@ struct HeapTupleHeaderData
  * than MaxOffsetNumber, so that it can be distinguished from a valid
  * offset number in a regular item pointer.
  */
-#define SpecTokenOffsetNumber		0xfffe
+#define SpecTokenOffsetNumber		OffsetNumberPrev(OffsetNumberMask)
 
 /*
  * HeapTupleHeader accessor macros
diff --git a/src/include/storage/itemptr.h b/src/include/storage/itemptr.h
index c21d2ad..74eed4e 100644
--- a/src/include/storage/itemptr.h
+++ b/src/include/storage/itemptr.h
@@ -57,7 +57,7 @@ typedef ItemPointerData *ItemPointer;
  *		True iff the disk item pointer is not NULL.
  */
 #define ItemPointerIsValid(pointer) \
-	((bool) (PointerIsValid(pointer) && ((pointer)->ip_posid != 0)))
+	((bool) (PointerIsValid(pointer) && (((pointer)->ip_posid & OffsetNumberMask) != 0)))
 
 /*
  * ItemPointerGetBlockNumberNoCheck
@@ -84,7 +84,7 @@ typedef ItemPointerData *ItemPointer;
  */
 #define ItemPointerGetOffsetNumberNoCheck(pointer) \
 ( \
-	(pointer)->ip_posid \
+	((pointer)->ip_posid & OffsetNumberMask) \
 )
 
 /*
@@ -98,6 +98,30 @@ typedef ItemPointerData *ItemPointer;
 )
 
 /*
+ * Get the flags stored in high order bits in the OffsetNumber.
+ */
+#define ItemPointerGetFlags(pointer) \
+( \
+	((pointer)->ip_posid & ~OffsetNumberMask) >> OffsetNumberBits \
+)
+
+/*
+ * Set the flag bits.  We left-shift first, since the flags are defined starting at 0x01.
+ */
+#define ItemPointerSetFlags(pointer, flags) \
+( \
+	((pointer)->ip_posid |= ((flags) << OffsetNumberBits)) \
+)
+
+/*
+ * Clear all flags.
+ */
+#define ItemPointerClearFlags(pointer) \
+( \
+	((pointer)->ip_posid &= OffsetNumberMask) \
+)
+
+/*
  * ItemPointerSet
  *		Sets a disk item pointer to the specified block and offset.
  */
@@ -105,7 +129,7 @@ typedef ItemPointerData *ItemPointer;
 ( \
 	AssertMacro(PointerIsValid(pointer)), \
 	BlockIdSet(&((pointer)->ip_blkid), blockNumber), \
-	(pointer)->ip_posid = offNum \
+	(pointer)->ip_posid = (offNum) \
 )
 
 /*
diff --git a/src/include/storage/off.h b/src/include/storage/off.h
index fe8638f..f058fe1 100644
--- a/src/include/storage/off.h
+++ b/src/include/storage/off.h
@@ -26,7 +26,16 @@ typedef uint16 OffsetNumber;
 #define InvalidOffsetNumber		((OffsetNumber) 0)
 #define FirstOffsetNumber		((OffsetNumber) 1)
 #define MaxOffsetNumber			((OffsetNumber) (BLCKSZ / sizeof(ItemIdData)))
-#define OffsetNumberMask		(0xffff)		/* valid uint16 bits */
+
+/*
+ * The biggest BLCKSZ we support is 32kB, and each ItemId takes 6 bytes.
+ * That limits the number of line pointers in a page to 32kB/6B = 5461.
+ * Therefore, 13 bits in OffsetNumber are enough to represent all valid
+ * on-disk line pointers.  Hence, we can reserve the high-order bits in
+ * OffsetNumber for other purposes.
+ */
+#define OffsetNumberBits		13
+#define OffsetNumberMask		((((uint16) 1) << OffsetNumberBits) - 1)
 
 /* ----------------
  *		support macros
Attachment: 0002_track_root_lp_v22.patch (application/octet-stream)
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 51c773f..e573f1a 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -94,7 +94,8 @@ static HeapTuple heap_prepare_insert(Relation relation, HeapTuple tup,
 					TransactionId xid, CommandId cid, int options);
 static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
-				HeapTuple newtup, HeapTuple old_key_tup,
+				HeapTuple newtup, OffsetNumber root_offnum,
+				HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
 static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
 							 Bitmapset *interesting_cols,
@@ -2264,13 +2265,13 @@ heap_get_latest_tid(Relation relation,
 		 */
 		if ((tp.t_data->t_infomask & HEAP_XMAX_INVALID) ||
 			HeapTupleHeaderIsOnlyLocked(tp.t_data) ||
-			ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid))
+			HeapTupleHeaderIsHeapLatest(tp.t_data, &ctid))
 		{
 			UnlockReleaseBuffer(buffer);
 			break;
 		}
 
-		ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tp.t_data, &ctid);
 		priorXmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		UnlockReleaseBuffer(buffer);
 	}							/* end of loop */
@@ -2401,6 +2402,7 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	bool		all_visible_cleared = false;
+	OffsetNumber	root_offnum;
 
 	/*
 	 * Fill in tuple header fields, assign an OID, and toast the tuple if
@@ -2439,8 +2441,13 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
-	RelationPutHeapTuple(relation, buffer, heaptup,
-						 (options & HEAP_INSERT_SPECULATIVE) != 0);
+	root_offnum = RelationPutHeapTuple(relation, buffer, heaptup,
+						 (options & HEAP_INSERT_SPECULATIVE) != 0,
+						 InvalidOffsetNumber);
+
+	/* We must not overwrite the speculative insertion token. */
+	if ((options & HEAP_INSERT_SPECULATIVE) == 0)
+		HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 	if (PageIsAllVisible(BufferGetPage(buffer)))
 	{
@@ -2668,6 +2675,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 	Size		saveFreeSpace;
 	bool		need_tuple_data = RelationIsLogicallyLogged(relation);
 	bool		need_cids = RelationIsAccessibleInLogicalDecoding(relation);
+	OffsetNumber	root_offnum;
 
 	needwal = !(options & HEAP_INSERT_SKIP_WAL) && RelationNeedsWAL(relation);
 	saveFreeSpace = RelationGetTargetPageFreeSpace(relation,
@@ -2738,7 +2746,12 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		 * RelationGetBufferForTuple has ensured that the first tuple fits.
 		 * Put that on the page, and then as many other tuples as fit.
 		 */
-		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false);
+		root_offnum = RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false,
+				InvalidOffsetNumber);
+
+		/* Mark this tuple as the latest and also set root offset. */
+		HeapTupleHeaderSetHeapLatest(heaptuples[ndone]->t_data, root_offnum);
+
 		for (nthispage = 1; ndone + nthispage < ntuples; nthispage++)
 		{
 			HeapTuple	heaptup = heaptuples[ndone + nthispage];
@@ -2746,7 +2759,10 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			if (PageGetHeapFreeSpace(page) < MAXALIGN(heaptup->t_len) + saveFreeSpace)
 				break;
 
-			RelationPutHeapTuple(relation, buffer, heaptup, false);
+			root_offnum = RelationPutHeapTuple(relation, buffer, heaptup, false,
+					InvalidOffsetNumber);
+			/* Mark each tuple as the latest and also set root offset. */
+			HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 			/*
 			 * We don't use heap_multi_insert for catalog tuples yet, but
@@ -3018,6 +3034,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	HeapTupleData tp;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	TransactionId new_xmax;
@@ -3028,6 +3045,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	bool		all_visible_cleared = false;
 	HeapTuple	old_key_tuple = NULL;	/* replica identity of the tuple */
 	bool		old_key_copied = false;
+	OffsetNumber	root_offnum;
 
 	Assert(ItemPointerIsValid(tid));
 
@@ -3069,7 +3087,8 @@ heap_delete(Relation relation, ItemPointer tid,
 		LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 	}
 
-	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid));
+	offnum = ItemPointerGetOffsetNumber(tid);
+	lp = PageGetItemId(page, offnum);
 	Assert(ItemIdIsNormal(lp));
 
 	tp.t_tableOid = RelationGetRelid(relation);
@@ -3199,7 +3218,17 @@ l1:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tp.t_data->t_ctid;
+
+		/*
+	 * If we're at the end of the chain, just return the same TID to the
+	 * caller. The caller uses that as a hint that it has reached the end
+	 * of the chain.
+		 */
+		if (!HeapTupleHeaderIsHeapLatest(tp.t_data, &tp.t_self))
+			HeapTupleHeaderGetNextTid(tp.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&tp.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tp.t_data);
@@ -3248,6 +3277,22 @@ l1:
 							  xid, LockTupleExclusive, true,
 							  &new_xmax, &new_infomask, &new_infomask2);
 
+	/*
+	 * heap_get_root_tuple() may call palloc, which is disallowed once we
+	 * enter the critical section. So check whether the root offset is cached
+	 * in the tuple, and if not, fetch that information the hard way before
+	 * entering the critical section.
+	 *
+	 * Unless we are dealing with a pg_upgraded cluster, the root offset
+	 * information should most often be cached, so fetching it should not add
+	 * much overhead. Also, once a tuple is updated, the information is copied
+	 * to the new version, so we won't keep paying this price for the same
+	 * tuple forever.
+	 */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&tp.t_self));
+
 	START_CRIT_SECTION();
 
 	/*
@@ -3275,8 +3320,10 @@ l1:
 	HeapTupleHeaderClearHotUpdated(tp.t_data);
 	HeapTupleHeaderSetXmax(tp.t_data, new_xmax);
 	HeapTupleHeaderSetCmax(tp.t_data, cid, iscombo);
-	/* Make sure there is no forward chain link in t_ctid */
-	tp.t_data->t_ctid = tp.t_self;
+
+	/* Mark this tuple as the latest tuple in the update chain. */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		HeapTupleHeaderSetHeapLatest(tp.t_data, root_offnum);
 
 	MarkBufferDirty(buffer);
 
@@ -3477,6 +3524,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		old_key_copied = false;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
+	OffsetNumber	root_offnum;
 	MultiXactStatus mxact_status;
 	Buffer		buffer,
 				newbuf,
@@ -3537,6 +3586,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 
 
 	block = ItemPointerGetBlockNumber(otid);
+	offnum = ItemPointerGetOffsetNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
@@ -3840,7 +3890,12 @@ l2:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(oldtup.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = oldtup.t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(oldtup.t_data, &oldtup.t_self))
+			HeapTupleHeaderGetNextTid(oldtup.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&oldtup.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(oldtup.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(oldtup.t_data);
@@ -3980,6 +4035,7 @@ l2:
 		uint16		infomask_lock_old_tuple,
 					infomask2_lock_old_tuple;
 		bool		cleared_all_frozen = false;
+		OffsetNumber	root_offnum;
 
 		/*
 		 * To prevent concurrent sessions from updating the tuple, we have to
@@ -4007,6 +4063,14 @@ l2:
 
 		Assert(HEAP_XMAX_IS_LOCKED_ONLY(infomask_lock_old_tuple));
 
+		/*
+		 * Fetch root offset before entering the critical section. We do this
+		 * only if the information is not already available.
+		 */
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&oldtup.t_self));
+
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
@@ -4021,7 +4085,8 @@ l2:
 		HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 		/* temporarily make it look not-updated, but locked */
-		oldtup.t_data->t_ctid = oldtup.t_self;
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			HeapTupleHeaderSetHeapLatest(oldtup.t_data, root_offnum);
 
 		/*
 		 * Clear all-frozen bit on visibility map if needed. We could
@@ -4180,6 +4245,10 @@ l2:
 										   bms_overlap(modified_attrs, id_attrs),
 										   &old_key_copied);
 
+	if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
@@ -4205,6 +4274,17 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+		/*
+		 * For HOT (or WARM) updated tuples, we store the offset of the root
+		 * line pointer of this chain in the ip_posid field of the new tuple.
+		 * Usually this information will be available in the corresponding
+		 * field of the old tuple. But for aborted updates or pg_upgraded
+		 * databases, we might be seeing old-style CTID chains, and hence the
+		 * information must be obtained the hard way (we should have done
+		 * that before entering the critical section above).
+		 */
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
 	else
 	{
@@ -4212,10 +4292,22 @@ l2:
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		root_offnum = InvalidOffsetNumber;
 	}
 
-	RelationPutHeapTuple(relation, newbuf, heaptup, false);		/* insert new tuple */
-
+	/* insert new tuple */
+	root_offnum = RelationPutHeapTuple(relation, newbuf, heaptup, false,
+									   root_offnum);
+	/*
+	 * Also mark both copies as latest and set the root offset information. For
+	 * a HOT/WARM update, we just copy the information from the old tuple, if
+	 * available, or use the value computed above. For regular updates,
+	 * RelationPutHeapTuple must have returned the actual offset number where
+	 * the new version was inserted, and we store that value since the update
+	 * started a new HOT chain.
+	 */
+	HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
+	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
 	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
@@ -4228,7 +4320,7 @@ l2:
 	HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 	/* record address of new tuple in t_ctid of old one */
-	oldtup.t_data->t_ctid = heaptup->t_self;
+	HeapTupleHeaderSetNextTid(oldtup.t_data, &(heaptup->t_self));
 
 	/* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */
 	if (PageIsAllVisible(BufferGetPage(buffer)))
@@ -4267,6 +4359,7 @@ l2:
 
 		recptr = log_heap_update(relation, buffer,
 								 newbuf, &oldtup, heaptup,
+								 root_offnum,
 								 old_key_tuple,
 								 all_visible_cleared,
 								 all_visible_cleared_new);
@@ -4547,7 +4640,8 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	ItemId		lp;
 	Page		page;
 	Buffer		vmbuffer = InvalidBuffer;
-	BlockNumber block;
+	BlockNumber	block;
+	OffsetNumber	offnum;
 	TransactionId xid,
 				xmax;
 	uint16		old_infomask,
@@ -4556,9 +4650,11 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	bool		first_time = true;
 	bool		have_tuple_lock = false;
 	bool		cleared_all_frozen = false;
+	OffsetNumber	root_offnum;
 
 	*buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	block = ItemPointerGetBlockNumber(tid);
+	offnum = ItemPointerGetOffsetNumber(tid);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -4578,6 +4674,7 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	tuple->t_data = (HeapTupleHeader) PageGetItem(page, lp);
 	tuple->t_len = ItemIdGetLength(lp);
 	tuple->t_tableOid = RelationGetRelid(relation);
+	tuple->t_self = *tid;
 
 l3:
 	result = HeapTupleSatisfiesUpdate(tuple, cid, *buffer);
@@ -4605,7 +4702,11 @@ l3:
 		xwait = HeapTupleHeaderGetRawXmax(tuple->t_data);
 		infomask = tuple->t_data->t_infomask;
 		infomask2 = tuple->t_data->t_infomask2;
-		ItemPointerCopy(&tuple->t_data->t_ctid, &t_ctid);
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &t_ctid);
+		else
+			ItemPointerCopy(tid, &t_ctid);
 
 		LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
 
@@ -5043,7 +5144,12 @@ failed:
 		Assert(result == HeapTupleSelfUpdated || result == HeapTupleUpdated ||
 			   result == HeapTupleWouldBlock);
 		Assert(!(tuple->t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tuple->t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(tid, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tuple->t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tuple->t_data);
@@ -5091,6 +5197,10 @@ failed:
 							  GetCurrentTransactionId(), mode, false,
 							  &xid, &new_infomask, &new_infomask2);
 
+	if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&tuple->t_self));
+
 	START_CRIT_SECTION();
 
 	/*
@@ -5119,7 +5229,10 @@ failed:
 	 * the tuple as well.
 	 */
 	if (HEAP_XMAX_IS_LOCKED_ONLY(new_infomask))
-		tuple->t_data->t_ctid = *tid;
+	{
+		if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+			HeapTupleHeaderSetHeapLatest(tuple->t_data, root_offnum);
+	}
 
 	/* Clear only the all-frozen bit on visibility map if needed */
 	if (PageIsAllVisible(page) &&
@@ -5633,6 +5746,7 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 	bool		cleared_all_frozen = false;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber block;
+	OffsetNumber offnum;
 
 	ItemPointerCopy(tid, &tupid);
 
@@ -5641,6 +5755,8 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 		new_infomask = 0;
 		new_xmax = InvalidTransactionId;
 		block = ItemPointerGetBlockNumber(&tupid);
+		offnum = ItemPointerGetOffsetNumber(&tupid);
+
 		ItemPointerCopy(&tupid, &(mytup.t_self));
 
 		if (!heap_fetch(rel, SnapshotAny, &mytup, &buf, false, NULL))
@@ -5870,7 +5986,7 @@ l4:
 
 		/* if we find the end of update chain, we're done. */
 		if (mytup.t_data->t_infomask & HEAP_XMAX_INVALID ||
-			ItemPointerEquals(&mytup.t_self, &mytup.t_data->t_ctid) ||
+			HeapTupleHeaderIsHeapLatest(mytup.t_data, &mytup.t_self) ||
 			HeapTupleHeaderIsOnlyLocked(mytup.t_data))
 		{
 			result = HeapTupleMayBeUpdated;
@@ -5879,7 +5995,7 @@ l4:
 
 		/* tail recursion */
 		priorXmax = HeapTupleHeaderGetUpdateXid(mytup.t_data);
-		ItemPointerCopy(&(mytup.t_data->t_ctid), &tupid);
+		HeapTupleHeaderGetNextTid(mytup.t_data, &tupid);
 		UnlockReleaseBuffer(buf);
 		if (vmbuffer != InvalidBuffer)
 			ReleaseBuffer(vmbuffer);
@@ -5996,7 +6112,7 @@ heap_finish_speculative(Relation relation, HeapTuple tuple)
 	 * Replace the speculative insertion token with a real t_ctid, pointing to
 	 * itself like it does on regular tuples.
 	 */
-	htup->t_ctid = tuple->t_self;
+	HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 	/* XLOG stuff */
 	if (RelationNeedsWAL(relation))
@@ -6122,8 +6238,7 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	HeapTupleHeaderSetXmin(tp.t_data, InvalidTransactionId);
 
 	/* Clear the speculative insertion token too */
-	tp.t_data->t_ctid = tp.t_self;
-
+	HeapTupleHeaderSetHeapLatest(tp.t_data, ItemPointerGetOffsetNumber(tid));
 	MarkBufferDirty(buffer);
 
 	/*
@@ -7471,6 +7586,7 @@ log_heap_visible(RelFileNode rnode, Buffer heap_buffer, Buffer vm_buffer,
 static XLogRecPtr
 log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup, HeapTuple newtup,
+				OffsetNumber root_offnum,
 				HeapTuple old_key_tuple,
 				bool all_visible_cleared, bool new_all_visible_cleared)
 {
@@ -7591,6 +7707,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	xlrec.new_offnum = ItemPointerGetOffsetNumber(&newtup->t_self);
 	xlrec.new_xmax = HeapTupleHeaderGetRawXmax(newtup->t_data);
 
+	Assert(OffsetNumberIsValid(root_offnum));
+	xlrec.root_offnum = root_offnum;
+
 	bufflags = REGBUF_STANDARD;
 	if (init)
 		bufflags |= REGBUF_WILL_INIT;
@@ -8245,7 +8364,13 @@ heap_xlog_delete(XLogReaderState *record)
 			PageClearAllVisible(page);
 
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = target_tid;
+		if (!HeapTupleHeaderHasRootOffset(htup))
+		{
+			OffsetNumber	root_offnum;
+			root_offnum = heap_get_root_tuple(page, xlrec->offnum);
+			HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+		}
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8335,7 +8460,8 @@ heap_xlog_insert(XLogReaderState *record)
 		htup->t_hoff = xlhdr.t_hoff;
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
-		htup->t_ctid = target_tid;
+
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->offnum);
 
 		if (PageAddItem(page, (Item) htup, newlen, xlrec->offnum,
 						true, true) == InvalidOffsetNumber)
@@ -8470,8 +8596,8 @@ heap_xlog_multi_insert(XLogReaderState *record)
 			htup->t_hoff = xlhdr->t_hoff;
 			HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 			HeapTupleHeaderSetCmin(htup, FirstCommandId);
-			ItemPointerSetBlockNumber(&htup->t_ctid, blkno);
-			ItemPointerSetOffsetNumber(&htup->t_ctid, offnum);
+
+			HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 			offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 			if (offnum == InvalidOffsetNumber)
@@ -8607,7 +8733,7 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
 		/* Set forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetNextTid(htup, &newtid);
 
 		/* Mark the page as a candidate for pruning */
 		PageSetPrunable(page, XLogRecGetXid(record));
@@ -8740,13 +8866,17 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
-		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = newtid;
 
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
 
+		/*
+		 * Make sure the tuple is marked as the latest and root offset
+		 * information is restored.
+		 */
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->root_offnum);
+
 		if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED)
 			PageClearAllVisible(page);
 
@@ -8809,6 +8939,9 @@ heap_xlog_confirm(XLogReaderState *record)
 		 */
 		ItemPointerSet(&htup->t_ctid, BufferGetBlockNumber(buffer), offnum);
 
+		/* For newly inserted tuple, set root offset to itself. */
+		HeapTupleHeaderSetHeapLatest(htup, offnum);
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8872,11 +9005,17 @@ heap_xlog_lock(XLogReaderState *record)
 		 */
 		if (HEAP_XMAX_IS_LOCKED_ONLY(htup->t_infomask))
 		{
+			ItemPointerData	target_tid;
+
+			ItemPointerSet(&target_tid, BufferGetBlockNumber(buffer), offnum);
 			HeapTupleHeaderClearHotUpdated(htup);
 			/* Make sure there is no forward chain link in t_ctid */
-			ItemPointerSet(&htup->t_ctid,
-						   BufferGetBlockNumber(buffer),
-						   offnum);
+			if (!HeapTupleHeaderHasRootOffset(htup))
+			{
+				OffsetNumber	root_offnum;
+				root_offnum = heap_get_root_tuple(page, offnum);
+				HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+			}
 		}
 		HeapTupleHeaderSetXmax(htup, xlrec->locking_xid);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index 6529fe3..8052519 100644
--- a/src/backend/access/heap/hio.c
+++ b/src/backend/access/heap/hio.c
@@ -31,12 +31,20 @@
  * !!! EREPORT(ERROR) IS DISALLOWED HERE !!!  Must PANIC on failure!!!
  *
  * Note - caller must hold BUFFER_LOCK_EXCLUSIVE on the buffer.
+ *
+ * The caller can optionally tell us to set the root offset to the given value.
+ * Otherwise, the root offset is set to the offset of the new location once
+ * it's known. The former is used while updating an existing tuple, where the
+ * caller tells us the root line pointer of the chain.  The latter is used
+ * during insertion of a new row, so the root line pointer is set to the
+ * offset where this tuple is inserted.
  */
-void
+OffsetNumber
 RelationPutHeapTuple(Relation relation,
 					 Buffer buffer,
 					 HeapTuple tuple,
-					 bool token)
+					 bool token,
+					 OffsetNumber root_offnum)
 {
 	Page		pageHeader;
 	OffsetNumber offnum;
@@ -60,17 +68,24 @@ RelationPutHeapTuple(Relation relation,
 	ItemPointerSet(&(tuple->t_self), BufferGetBlockNumber(buffer), offnum);
 
 	/*
-	 * Insert the correct position into CTID of the stored tuple, too (unless
-	 * this is a speculative insertion, in which case the token is held in
-	 * CTID field instead)
+	 * Set block number and the root offset into CTID of the stored tuple, too
+	 * (unless this is a speculative insertion, in which case the token is held
+	 * in CTID field instead).
 	 */
 	if (!token)
 	{
 		ItemId		itemId = PageGetItemId(pageHeader, offnum);
 		Item		item = PageGetItem(pageHeader, itemId);
 
+		/* Copy t_ctid to set the correct block number. */
 		((HeapTupleHeader) item)->t_ctid = tuple->t_self;
+
+		if (!OffsetNumberIsValid(root_offnum))
+			root_offnum = offnum;
+		HeapTupleHeaderSetHeapLatest((HeapTupleHeader) item, root_offnum);
 	}
+
+	return root_offnum;
 }
 
 /*
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index d69a266..f54337c 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -55,6 +55,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
 
+static void heap_get_root_tuples_internal(Page page,
+				OffsetNumber target_offnum, OffsetNumber *root_offsets);
 
 /*
  * Optionally prune and repair fragmentation in the specified page.
@@ -553,6 +555,17 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
 		if (!HeapTupleHeaderIsHotUpdated(htup))
 			break;
 
+
+		/*
+		 * If the tuple was HOT-updated and the update was later
+		 * aborted, someone could mark this tuple as the last tuple
+		 * in the chain without clearing the HOT-updated flag. So we must
+		 * check if this is the last tuple in the chain and stop following
+		 * the CTID, else we risk falling into an infinite loop (though
+		 * prstate->marked[] currently protects against that).
+		 */
+		if (HeapTupleHeaderHasRootOffset(htup))
+			break;
 		/*
 		 * Advance to next chain member.
 		 */
@@ -726,27 +739,47 @@ heap_page_prune_execute(Buffer buffer,
 
 
 /*
- * For all items in this page, find their respective root line pointers.
- * If item k is part of a HOT-chain with root at item j, then we set
- * root_offsets[k - 1] = j.
+ * Either for all items in this page or for the given item, find their
+ * respective root line pointers.
+ *
+ * When target_offnum is a valid offset number, the caller is interested in
+ * just one item. In that case, the root line pointer is returned in
+ * root_offsets.
  *
- * The passed-in root_offsets array must have MaxHeapTuplesPerPage entries.
- * We zero out all unused entries.
+ * When target_offnum is InvalidOffsetNumber, the caller wants to know the
+ * the root line pointers of all the items in this page. The root_offsets array
+ * must have MaxHeapTuplesPerPage entries in that case. If item k is part of a
+ * HOT-chain with root at item j, then we set root_offsets[k - 1] = j. We zero
+ * out all unused entries.
  *
  * The function must be called with at least share lock on the buffer, to
  * prevent concurrent prune operations.
  *
+ * This is not a cheap function since it must scan through all line pointers
+ * and tuples on the page in order to find the root line pointers. To minimize
+ * the cost, we return early when target_offnum is specified and its root
+ * line pointer has been found.
+ *
  * Note: The information collected here is valid only as long as the caller
  * holds a pin on the buffer. Once pin is released, a tuple might be pruned
  * and reused by a completely unrelated tuple.
+ *
+ * Note: This function must not be called inside a critical section because it
+ * internally calls HeapTupleHeaderGetUpdateXid which somewhere down the stack
+ * may try to allocate heap memory. Memory allocation is disallowed in a
+ * critical section.
  */
-void
-heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+static void
+heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offsets)
 {
 	OffsetNumber offnum,
 				maxoff;
 
-	MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
+	if (OffsetNumberIsValid(target_offnum))
+		*root_offsets = InvalidOffsetNumber;
+	else
+		MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
 
 	maxoff = PageGetMaxOffsetNumber(page);
 	for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum = OffsetNumberNext(offnum))
@@ -774,9 +807,28 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 
 			/*
 			 * This is either a plain tuple or the root of a HOT-chain.
-			 * Remember it in the mapping.
+			 *
+			 * If the target_offnum is specified and if we found its mapping,
+			 * return.
 			 */
-			root_offsets[offnum - 1] = offnum;
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (target_offnum == offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember the mapping for any other item. The
+				 * root_offsets array may not even have room for them, so be
+				 * careful not to write past the array.
+				 */
+			}
+			else
+			{
+				/* Remember it in the mapping. */
+				root_offsets[offnum - 1] = offnum;
+			}
 
 			/* If it's not the start of a HOT-chain, we're done with it */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
@@ -817,15 +869,65 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 				!TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(htup)))
 				break;
 
-			/* Remember the root line pointer for this item */
-			root_offsets[nextoffnum - 1] = offnum;
+			/*
+			 * If target_offnum is specified and we found its mapping, return.
+			 */
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (nextoffnum == target_offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember the mapping for any other item; the
+				 * root_offsets array may not even have room for it, so be
+				 * careful not to write past the end of the array.
+				 */
+			}
+			else
+			{
+				/* Remember the root line pointer for this item. */
+				root_offsets[nextoffnum - 1] = offnum;
+			}
 
 			/* Advance to next chain member, if any */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				break;
 
+			/*
+			 * If the tuple was HOT-updated and the update was later aborted,
+			 * someone could mark this tuple as the last tuple in the chain
+			 * and store the root offset in CTID, without clearing the
+			 * HOT-updated flag. So we must check whether CTID actually holds
+			 * the root offset and break to avoid an infinite loop.
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
 		}
 	}
 }
+
+/*
+ * Get root line pointer for the given tuple.
+ */
+OffsetNumber
+heap_get_root_tuple(Page page, OffsetNumber target_offnum)
+{
+	OffsetNumber offnum = InvalidOffsetNumber;
+	heap_get_root_tuples_internal(page, target_offnum, &offnum);
+	return offnum;
+}
+
+/*
+ * Get root line pointers for all tuples in the page.
+ */
+void
+heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+{
+	heap_get_root_tuples_internal(page, InvalidOffsetNumber,
+			root_offsets);
+}
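The two entry points above share one scan loop: the single-tuple variant passes target_offnum so the scan can stop as soon as the target's root is known, while the all-tuples variant fills the whole mapping array. The early-exit idea can be sketched with a standalone toy model (this is not PostgreSQL code; the page layout, offsets, and function name here are simplified stand-ins):

```c
#define MAX_OFFSETS 8
#define INVALID_OFFSET 0

/* Toy page: next_off[i] is the offset that the tuple at (1-based)
 * offset i links to via its t_ctid; 0 marks the end of a chain.
 * Two HOT chains: 1 -> 3 -> 5 and 2 -> 4; offsets 6..8 are empty. */
static const int next_off[MAX_OFFSETS + 1] = {0, 3, 4, 5, 0, 0, 0, 0, 0};

/* Find the root offset of the chain containing `target`, scanning the
 * page much like heap_get_root_tuples_internal, but bailing out as
 * soon as the target's mapping is known. Returns 0 if not found. */
int get_root_tuple(int target)
{
    for (int root = 1; root <= MAX_OFFSETS; root++)
    {
        /* A chain head is an offset that no other tuple links to. */
        int is_member = 0;

        for (int i = 1; i <= MAX_OFFSETS; i++)
            if (next_off[i] == root)
                is_member = 1;
        if (is_member)
            continue;

        /* Walk the chain; stop early once the target is found. */
        for (int off = root; off != INVALID_OFFSET; off = next_off[off])
            if (off == target)
                return root;
    }
    return INVALID_OFFSET;
}
```

The real function is O(page size) either way, but the early return avoids touching the rest of the page once the one mapping the caller asked for is resolved.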
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index d7f65a5..2d3ae9b 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -421,14 +421,18 @@ rewrite_heap_tuple(RewriteState state,
 	 */
 	if (!((old_tuple->t_data->t_infomask & HEAP_XMAX_INVALID) ||
 		  HeapTupleHeaderIsOnlyLocked(old_tuple->t_data)) &&
-		!(ItemPointerEquals(&(old_tuple->t_self),
-							&(old_tuple->t_data->t_ctid))))
+		!(HeapTupleHeaderIsHeapLatest(old_tuple->t_data, &old_tuple->t_self)))
 	{
 		OldToNewMapping mapping;
 
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
-		hashkey.tid = old_tuple->t_data->t_ctid;
+
+		/* 
+		 * We've already checked that this is not the last tuple in the chain,
+		 * so fetch the next TID in the chain.
+		 */
+		HeapTupleHeaderGetNextTid(old_tuple->t_data, &hashkey.tid);
 
 		mapping = (OldToNewMapping)
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -441,7 +445,7 @@ rewrite_heap_tuple(RewriteState state,
 			 * set the ctid of this tuple to point to the new location, and
 			 * insert it right away.
 			 */
-			new_tuple->t_data->t_ctid = mapping->new_tid;
+			HeapTupleHeaderSetNextTid(new_tuple->t_data, &mapping->new_tid);
 
 			/* We don't need the mapping entry anymore */
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -527,7 +531,7 @@ rewrite_heap_tuple(RewriteState state,
 				new_tuple = unresolved->tuple;
 				free_new = true;
 				old_tid = unresolved->old_tid;
-				new_tuple->t_data->t_ctid = new_tid;
+				HeapTupleHeaderSetNextTid(new_tuple->t_data, &new_tid);
 
 				/*
 				 * We don't need the hash entry anymore, but don't free its
@@ -733,7 +737,12 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		onpage_tup->t_ctid = tup->t_self;
+		/* 
+		 * Set t_ctid just to ensure that block number is copied correctly, but
+		 * then immediately mark the tuple as the latest.
+		 */
+		HeapTupleHeaderSetNextTid(onpage_tup, &tup->t_self);
+		HeapTupleHeaderSetHeapLatest(onpage_tup, newoff);
 	}
 
 	/* If heaptup is a private copy, release it. */
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 108060a..c3f1873 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -785,7 +785,8 @@ retry:
 			  DirtySnapshot.speculativeToken &&
 			  TransactionIdPrecedes(GetCurrentTransactionId(), xwait))))
 		{
-			ctid_wait = tup->t_data->t_ctid;
+			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
+				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index f2995f2..73e9c4a 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -2623,7 +2623,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		 * As above, it should be safe to examine xmax and t_ctid without the
 		 * buffer content lock, because they can't be changing.
 		 */
-		if (ItemPointerEquals(&tuple.t_self, &tuple.t_data->t_ctid))
+		if (HeapTupleHeaderIsHeapLatest(tuple.t_data, &tuple.t_self))
 		{
 			/* deleted, so forget about it */
 			ReleaseBuffer(buffer);
@@ -2631,7 +2631,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		}
 
 		/* updated, so look at the updated row */
-		tuple.t_self = tuple.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tuple.t_data, &tuple.t_self);
 		/* updated row should have xmin matching this xmax */
 		priorXmax = HeapTupleHeaderGetUpdateXid(tuple.t_data);
 		ReleaseBuffer(buffer);
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 7e85510..5540e12 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -190,6 +190,7 @@ extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
+extern OffsetNumber heap_get_root_tuple(Page page, OffsetNumber target_offnum);
 extern void heap_get_root_tuples(Page page, OffsetNumber *root_offsets);
 
 /* in heap/syncscan.c */
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index b285f17..e6019d5 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -193,6 +193,8 @@ typedef struct xl_heap_update
 	uint8		flags;
 	TransactionId new_xmax;		/* xmax of the new tuple */
 	OffsetNumber new_offnum;	/* new tuple's offset */
+	OffsetNumber root_offnum;	/* offset of the root line pointer in case of
+								   HOT or WARM update */
 
 	/*
 	 * If XLOG_HEAP_CONTAINS_OLD_TUPLE or XLOG_HEAP_CONTAINS_OLD_KEY flags are
@@ -200,7 +202,7 @@ typedef struct xl_heap_update
 	 */
 } xl_heap_update;
 
-#define SizeOfHeapUpdate	(offsetof(xl_heap_update, new_offnum) + sizeof(OffsetNumber))
+#define SizeOfHeapUpdate	(offsetof(xl_heap_update, root_offnum) + sizeof(OffsetNumber))
 
 /*
  * This is what we need to know about vacuum page cleanup/redirect
diff --git a/src/include/access/hio.h b/src/include/access/hio.h
index 2824f23..921cb37 100644
--- a/src/include/access/hio.h
+++ b/src/include/access/hio.h
@@ -35,8 +35,8 @@ typedef struct BulkInsertStateData
 }	BulkInsertStateData;
 
 
-extern void RelationPutHeapTuple(Relation relation, Buffer buffer,
-					 HeapTuple tuple, bool token);
+extern OffsetNumber RelationPutHeapTuple(Relation relation, Buffer buffer,
+					 HeapTuple tuple, bool token, OffsetNumber root_offnum);
 extern Buffer RelationGetBufferForTuple(Relation relation, Size len,
 						  Buffer otherBuffer, int options,
 						  BulkInsertState bistate,
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 7b6285d..24433c7 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,13 +260,19 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x1800 are available */
+/* bit 0x0800 is available */
+#define HEAP_LATEST_TUPLE		0x1000	/*
+										 * This is the last tuple in chain and
+										 * ip_posid points to the root line
+										 * pointer
+										 */
 #define HEAP_KEYS_UPDATED		0x2000	/* tuple was updated and key cols
 										 * modified, or tuple deleted */
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xE000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+
 
 /*
  * HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins.  It is
@@ -504,6 +510,43 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+/*
+ * Mark this as the last tuple in the HOT chain. Before PG v10 we used to store
+ * the TID of the tuple itself in the t_ctid field to mark the end of the
+ * chain. Starting in PG v10, we instead use a special flag, HEAP_LATEST_TUPLE,
+ * to identify the last tuple, and store the root line pointer of the HOT
+ * chain in the t_ctid field.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderSetHeapLatest(tup, offnum) \
+do { \
+	AssertMacro(OffsetNumberIsValid(offnum)); \
+	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE; \
+	ItemPointerSetOffsetNumber(&(tup)->t_ctid, (offnum)); \
+} while (0)
+
+#define HeapTupleHeaderClearHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 &= ~HEAP_LATEST_TUPLE \
+)
+
+/*
+ * Starting from PostgreSQL 10, the latest tuple in an update chain has
+ * HEAP_LATEST_TUPLE set; but tuples upgraded from earlier versions do not.
+ * For those, we determine whether a tuple is latest by testing that its t_ctid
+ * points to itself.
+ *
+ * Note: beware of multiple evaluations of "tup" and "tid" arguments.
+ */
+#define HeapTupleHeaderIsHeapLatest(tup, tid) \
+( \
+  (((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0) || \
+  ((ItemPointerGetBlockNumber(&(tup)->t_ctid) == ItemPointerGetBlockNumber(tid)) && \
+   (ItemPointerGetOffsetNumber(&(tup)->t_ctid) == ItemPointerGetOffsetNumber(tid))) \
+)
+
+
 #define HeapTupleHeaderSetHeapOnly(tup) \
 ( \
   (tup)->t_infomask2 |= HEAP_ONLY_TUPLE \
@@ -542,6 +585,56 @@ do { \
 
 
 /*
+ * Set the t_ctid chain and also clear the HEAP_LATEST_TUPLE flag since we
+ * now have a new tuple in the chain and this is no longer the last tuple of
+ * the chain.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderSetNextTid(tup, tid) \
+do { \
+		ItemPointerCopy((tid), &((tup)->t_ctid)); \
+		HeapTupleHeaderClearHeapLatest((tup)); \
+} while (0)
+
+/*
+ * Get TID of next tuple in the update chain. Caller must have checked that
+ * we are not already at the end of the chain because in that case t_ctid may
+ * actually store the root line pointer of the HOT chain.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderGetNextTid(tup, next_ctid) \
+do { \
+	AssertMacro(!((tup)->t_infomask2 & HEAP_LATEST_TUPLE)); \
+	ItemPointerCopy(&(tup)->t_ctid, (next_ctid)); \
+} while (0)
+
+/*
+ * Get the root line pointer of the HOT chain. The caller should have confirmed
+ * that the root offset is cached before calling this macro.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderGetRootOffset(tup) \
+( \
+	AssertMacro(((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0), \
+	ItemPointerGetOffsetNumber(&(tup)->t_ctid) \
+)
+
+/*
+ * Return whether the tuple has a cached root offset.  We don't use
+ * HeapTupleHeaderIsHeapLatest because that one also considers the case of
+ * t_ctid pointing to itself, for tuples migrated from pre-v10 clusters. Here
+ * we are only interested in tuples that are marked with the
+ * HEAP_LATEST_TUPLE flag.
+ */
+#define HeapTupleHeaderHasRootOffset(tup) \
+( \
+	((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0 \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
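The macros above encode a small state machine: setting HEAP_LATEST_TUPLE repurposes the offset part of t_ctid to cache the root line pointer, while linking to a successor overwrites t_ctid and clears the flag again. A standalone toy sketch of that interplay (simplified types and names, not the real tuple header layout):

```c
#include <stdint.h>

#define LATEST_TUPLE 0x1000     /* stands in for HEAP_LATEST_TUPLE */

/* A stripped-down tuple header: just the infomask2 bits and the
 * offset part of t_ctid. */
typedef struct
{
    uint16_t infomask2;
    uint16_t ctid_off;
} ToyHeader;

/* Mark the tuple as last in its chain and cache the root offset in
 * the ctid slot, like HeapTupleHeaderSetHeapLatest. */
void set_heap_latest(ToyHeader *tup, uint16_t root_off)
{
    tup->infomask2 |= LATEST_TUPLE;
    tup->ctid_off = root_off;
}

/* Point the tuple at its successor and clear the flag, like
 * HeapTupleHeaderSetNextTid. */
void set_next_tid(ToyHeader *tup, uint16_t next_off)
{
    tup->ctid_off = next_off;
    tup->infomask2 &= ~LATEST_TUPLE;
}

/* Like HeapTupleHeaderHasRootOffset: the cached root is valid only
 * while the flag is set. */
int has_root_offset(const ToyHeader *tup)
{
    return (tup->infomask2 & LATEST_TUPLE) != 0;
}

/* Exercise the transitions: returns 1 if the flag/ctid interplay
 * behaves as described above. */
int demo(void)
{
    ToyHeader t = {0, 0};

    set_heap_latest(&t, 3);
    if (!has_root_offset(&t) || t.ctid_off != 3)
        return 0;

    set_next_tid(&t, 7);            /* chain grows; root cache gone */
    if (has_root_offset(&t) || t.ctid_off != 7)
        return 0;
    return 1;
}
```

This is why readers must check the flag (or HeapTupleHeaderHasRootOffset) before interpreting t_ctid: the same field means "root line pointer" in one state and "next tuple" in the other.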
[Attachment: 0001_interesting_attrs_v22.patch (application/octet-stream)]
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index b147f64..51c773f 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -96,11 +96,8 @@ static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
 				HeapTuple newtup, HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
-static void HeapSatisfiesHOTandKeyUpdate(Relation relation,
-							 Bitmapset *hot_attrs,
-							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
-							 bool *satisfies_id,
+static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
+							 Bitmapset *interesting_cols,
 							 HeapTuple oldtup, HeapTuple newtup);
 static bool heap_acquire_tuplock(Relation relation, ItemPointer tid,
 					 LockTupleMode mode, LockWaitPolicy wait_policy,
@@ -3471,6 +3468,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *interesting_attrs;
+	Bitmapset  *modified_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3488,10 +3487,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				pagefree;
 	bool		have_tuple_lock = false;
 	bool		iscombo;
-	bool		satisfies_hot;
-	bool		satisfies_key;
-	bool		satisfies_id;
 	bool		use_hot_update = false;
+	bool		hot_attrs_checked = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
 	bool		all_visible_cleared_new = false;
@@ -3517,26 +3514,51 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				 errmsg("cannot update tuples during a parallel operation")));
 
 	/*
-	 * Fetch the list of attributes to be checked for HOT update.  This is
-	 * wasted effort if we fail to update or have to put the new tuple on a
-	 * different page.  But we must compute the list before obtaining buffer
-	 * lock --- in the worst case, if we are doing an update on one of the
-	 * relevant system catalogs, we could deadlock if we try to fetch the list
-	 * later.  In any case, the relcache caches the data so this is usually
-	 * pretty cheap.
+	 * Fetch the list of attributes to be checked for various operations.
 	 *
-	 * Note that we get a copy here, so we need not worry about relcache flush
-	 * happening midway through.
+	 * For HOT considerations, this is wasted effort if we fail to update or
+	 * have to put the new tuple on a different page.  But we must compute the
+	 * list before obtaining buffer lock --- in the worst case, if we are doing
+	 * an update on one of the relevant system catalogs, we could deadlock if
+	 * we try to fetch the list later.  In any case, the relcache caches the
+	 * data so this is usually pretty cheap.
+	 *
+	 * We also need columns used by the replica identity, the columns that
+	 * are considered the "key" of rows in the table, and columns that are
+	 * part of indirect indexes.
+	 *
+	 * Note that we get copies of each bitmap, so we need not worry about
+	 * relcache flush happening midway through.
 	 */
 	hot_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_ALL);
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
 
+
 	block = ItemPointerGetBlockNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
+	interesting_attrs = NULL;
+	/*
+	 * If the page is already full, there is hardly any chance of doing a HOT
+	 * update on this page. It might be wasted effort to look for index
+	 * column updates only to later reject a HOT update for lack of space on
+	 * the page. So we are conservative and only fetch hot_attrs if the page
+	 * is not already full. Since we are already holding a pin on the buffer,
+	 * there is no chance that the buffer can get cleaned up concurrently,
+	 * and even if that were possible, in the worst case we would only lose a
+	 * chance to do a HOT update.
+	 */
+	if (!PageIsFull(page))
+	{
+		interesting_attrs = bms_add_members(interesting_attrs, hot_attrs);
+		hot_attrs_checked = true;
+	}
+	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
+
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
 	 * be necessary.  Since we haven't got the lock yet, someone else might be
@@ -3552,7 +3574,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Assert(ItemIdIsNormal(lp));
 
 	/*
-	 * Fill in enough data in oldtup for HeapSatisfiesHOTandKeyUpdate to work
+	 * Fill in enough data in oldtup for HeapDetermineModifiedColumns to work
 	 * properly.
 	 */
 	oldtup.t_tableOid = RelationGetRelid(relation);
@@ -3578,6 +3600,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 		Assert(!(newtup->t_data->t_infomask & HEAP_HASOID));
 	}
 
+	/* Determine columns modified by the update. */
+	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
+												  &oldtup, newtup);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3589,10 +3615,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	 * is updates that don't manipulate key columns, not those that
 	 * serendipitiously arrive at the same key values.
 	 */
-	HeapSatisfiesHOTandKeyUpdate(relation, hot_attrs, key_attrs, id_attrs,
-								 &satisfies_hot, &satisfies_key,
-								 &satisfies_id, &oldtup, newtup);
-	if (satisfies_key)
+	if (!bms_overlap(modified_attrs, key_attrs))
 	{
 		*lockmode = LockTupleNoKeyExclusive;
 		mxact_status = MultiXactStatusNoKeyUpdate;
@@ -3831,6 +3854,8 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(modified_attrs);
+		bms_free(interesting_attrs);
 		return result;
 	}
 
@@ -4133,9 +4158,10 @@ l2:
 		/*
 		 * Since the new tuple is going into the same page, we might be able
 		 * to do a HOT update.  Check if any of the index columns have been
-		 * changed.  If not, then HOT update is possible.
+		 * changed. If the page was already full, we skipped checking for
+		 * index column updates, so a HOT update is not possible here.
 		 */
-		if (satisfies_hot)
+		if (hot_attrs_checked && !bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
 	}
 	else
@@ -4150,7 +4176,9 @@ l2:
 	 * ExtractReplicaIdentity() will return NULL if nothing needs to be
 	 * logged.
 	 */
-	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup, !satisfies_id, &old_key_copied);
+	old_key_tuple = ExtractReplicaIdentity(relation, &oldtup,
+										   bms_overlap(modified_attrs, id_attrs),
+										   &old_key_copied);
 
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
@@ -4298,13 +4326,15 @@ l2:
 	bms_free(hot_attrs);
 	bms_free(key_attrs);
 	bms_free(id_attrs);
+	bms_free(modified_attrs);
+	bms_free(interesting_attrs);
 
 	return HeapTupleMayBeUpdated;
 }
 
 /*
  * Check if the specified attribute's value is same in both given tuples.
- * Subroutine for HeapSatisfiesHOTandKeyUpdate.
+ * Subroutine for HeapDetermineModifiedColumns.
  */
 static bool
 heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
@@ -4338,7 +4368,7 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 
 	/*
 	 * Extract the corresponding values.  XXX this is pretty inefficient if
-	 * there are many indexed columns.  Should HeapSatisfiesHOTandKeyUpdate do
+	 * there are many indexed columns.  Should HeapDetermineModifiedColumns do
 	 * a single heap_deform_tuple call on each tuple, instead?	But that
 	 * doesn't work for system columns ...
 	 */
@@ -4383,114 +4413,30 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 /*
  * Check which columns are being updated.
  *
- * This simultaneously checks conditions for HOT updates, for FOR KEY
- * SHARE updates, and REPLICA IDENTITY concerns.  Since much of the time they
- * will be checking very similar sets of columns, and doing the same tests on
- * them, it makes sense to optimize and do them together.
- *
- * We receive three bitmapsets comprising the three sets of columns we're
- * interested in.  Note these are destructively modified; that is OK since
- * this is invoked at most once in heap_update.
+ * Given an updated tuple, determine which of the columns listed as
+ * interesting actually changed, and return them as a bitmapset.
  *
- * hot_result is set to TRUE if it's okay to do a HOT update (i.e. it does not
- * modified indexed columns); key_result is set to TRUE if the update does not
- * modify columns used in the key; id_result is set to TRUE if the update does
- * not modify columns in any index marked as the REPLICA IDENTITY.
+ * The input bitmapset is destructively modified; that is OK since this is
+ * invoked at most once in heap_update.
  */
-static void
-HeapSatisfiesHOTandKeyUpdate(Relation relation, Bitmapset *hot_attrs,
-							 Bitmapset *key_attrs, Bitmapset *id_attrs,
-							 bool *satisfies_hot, bool *satisfies_key,
-							 bool *satisfies_id,
+static Bitmapset *
+HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
 							 HeapTuple oldtup, HeapTuple newtup)
 {
-	int			next_hot_attnum;
-	int			next_key_attnum;
-	int			next_id_attnum;
-	bool		hot_result = true;
-	bool		key_result = true;
-	bool		id_result = true;
-
-	/* If REPLICA IDENTITY is set to FULL, id_attrs will be empty. */
-	Assert(bms_is_subset(id_attrs, key_attrs));
-	Assert(bms_is_subset(key_attrs, hot_attrs));
-
-	/*
-	 * If one of these sets contains no remaining bits, bms_first_member will
-	 * return -1, and after adding FirstLowInvalidHeapAttributeNumber (which
-	 * is negative!)  we'll get an attribute number that can't possibly be
-	 * real, and thus won't match any actual attribute number.
-	 */
-	next_hot_attnum = bms_first_member(hot_attrs);
-	next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
-	next_key_attnum = bms_first_member(key_attrs);
-	next_key_attnum += FirstLowInvalidHeapAttributeNumber;
-	next_id_attnum = bms_first_member(id_attrs);
-	next_id_attnum += FirstLowInvalidHeapAttributeNumber;
+	int		attnum;
+	Bitmapset *modified = NULL;
 
-	for (;;)
+	while ((attnum = bms_first_member(interesting_cols)) >= 0)
 	{
-		bool		changed;
-		int			check_now;
+		attnum += FirstLowInvalidHeapAttributeNumber;
 
-		/*
-		 * Since the HOT attributes are a superset of the key attributes and
-		 * the key attributes are a superset of the id attributes, this logic
-		 * is guaranteed to identify the next column that needs to be checked.
-		 */
-		if (hot_result && next_hot_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_hot_attnum;
-		else if (key_result && next_key_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_key_attnum;
-		else if (id_result && next_id_attnum > FirstLowInvalidHeapAttributeNumber)
-			check_now = next_id_attnum;
-		else
-			break;
-
-		/* See whether it changed. */
-		changed = !heap_tuple_attr_equals(RelationGetDescr(relation),
-										  check_now, oldtup, newtup);
-		if (changed)
-		{
-			if (check_now == next_hot_attnum)
-				hot_result = false;
-			if (check_now == next_key_attnum)
-				key_result = false;
-			if (check_now == next_id_attnum)
-				id_result = false;
-
-			/* if all are false now, we can stop checking */
-			if (!hot_result && !key_result && !id_result)
-				break;
-		}
-
-		/*
-		 * Advance the next attribute numbers for the sets that contain the
-		 * attribute we just checked.  As we work our way through the columns,
-		 * the next_attnum values will rise; but when each set becomes empty,
-		 * bms_first_member() will return -1 and the attribute number will end
-		 * up with a value less than FirstLowInvalidHeapAttributeNumber.
-		 */
-		if (hot_result && check_now == next_hot_attnum)
-		{
-			next_hot_attnum = bms_first_member(hot_attrs);
-			next_hot_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
-		if (key_result && check_now == next_key_attnum)
-		{
-			next_key_attnum = bms_first_member(key_attrs);
-			next_key_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
-		if (id_result && check_now == next_id_attnum)
-		{
-			next_id_attnum = bms_first_member(id_attrs);
-			next_id_attnum += FirstLowInvalidHeapAttributeNumber;
-		}
+		if (!heap_tuple_attr_equals(RelationGetDescr(relation),
+								   attnum, oldtup, newtup))
+			modified = bms_add_member(modified,
+									  attnum - FirstLowInvalidHeapAttributeNumber);
 	}
 
-	*satisfies_hot = hot_result;
-	*satisfies_key = key_result;
-	*satisfies_id = id_result;
+	return modified;
 }
 
 /*
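HeapDetermineModifiedColumns replaces the three interleaved scans with a single pass that returns the set of changed columns; the callers then simply test overlap against each attribute set (hot_attrs, key_attrs, id_attrs). A standalone sketch of the same idea, using a plain bitmask in place of Bitmapset (toy types and names, not PostgreSQL code):

```c
#include <stdint.h>

/* Columns are bits in a uint32_t mask; tuples are arrays of int
 * values. Walk the "interesting" columns and collect those whose
 * value differs between the old and new tuple versions. */
uint32_t modified_columns(const int *oldtup, const int *newtup,
                          uint32_t interesting, int natts)
{
    uint32_t modified = 0;

    for (int attnum = 0; attnum < natts; attnum++)
    {
        if ((interesting & (1u << attnum)) == 0)
            continue;               /* not indexed/key/identity: skip */
        if (oldtup[attnum] != newtup[attnum])
            modified |= 1u << attnum;
    }
    return modified;
}

/* A caller-side check in the style of heap_update: HOT is ruled out
 * when any column of an index changed. */
int hot_possible(uint32_t modified, uint32_t hot_mask)
{
    return (modified & hot_mask) == 0;
}
```

Computing the modified set once and reusing it for the lock-mode, HOT, and replica-identity decisions is what lets the patch drop the satisfies_hot/satisfies_key/satisfies_id plumbing.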
[Attachment: 0005_warm_updates_v22.patch (application/octet-stream)]
diff --git b/contrib/bloom/blutils.c a/contrib/bloom/blutils.c
index f2eda67..b356e2b 100644
--- b/contrib/bloom/blutils.c
+++ a/contrib/bloom/blutils.c
@@ -142,6 +142,7 @@ blhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git b/contrib/bloom/blvacuum.c a/contrib/bloom/blvacuum.c
index 04abd0f..ff50361 100644
--- b/contrib/bloom/blvacuum.c
+++ a/contrib/bloom/blvacuum.c
@@ -88,7 +88,7 @@ blbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 		while (itup < itupEnd)
 		{
 			/* Do we have to delete this tuple? */
-			if (callback(&itup->heapPtr, callback_state))
+			if (callback(&itup->heapPtr, false, callback_state) == IBDCR_DELETE)
 			{
 				/* Yes; adjust count of tuples that will be left on page */
 				BloomPageGetOpaque(page)->maxoff--;
diff --git b/src/backend/access/brin/brin.c a/src/backend/access/brin/brin.c
index b22563b..b4a1465 100644
--- b/src/backend/access/brin/brin.c
+++ a/src/backend/access/brin/brin.c
@@ -116,6 +116,7 @@ brinhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git b/src/backend/access/gin/ginvacuum.c a/src/backend/access/gin/ginvacuum.c
index 26c077a..46ed4fe 100644
--- b/src/backend/access/gin/ginvacuum.c
+++ a/src/backend/access/gin/ginvacuum.c
@@ -56,7 +56,8 @@ ginVacuumItemPointers(GinVacuumState *gvs, ItemPointerData *items,
 	 */
 	for (i = 0; i < nitem; i++)
 	{
-		if (gvs->callback(items + i, gvs->callback_state))
+		if (gvs->callback(items + i, false, gvs->callback_state) ==
+				IBDCR_DELETE)
 		{
 			gvs->result->tuples_removed += 1;
 			if (!tmpitems)
diff --git b/src/backend/access/gist/gist.c a/src/backend/access/gist/gist.c
index 6593771..843389b 100644
--- b/src/backend/access/gist/gist.c
+++ a/src/backend/access/gist/gist.c
@@ -94,6 +94,7 @@ gisthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git b/src/backend/access/gist/gistvacuum.c a/src/backend/access/gist/gistvacuum.c
index 77d9d12..0955db6 100644
--- b/src/backend/access/gist/gistvacuum.c
+++ a/src/backend/access/gist/gistvacuum.c
@@ -202,7 +202,8 @@ gistbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 				iid = PageGetItemId(page, i);
 				idxtuple = (IndexTuple) PageGetItem(page, iid);
 
-				if (callback(&(idxtuple->t_tid), callback_state))
+				if (callback(&(idxtuple->t_tid), false, callback_state) ==
+						IBDCR_DELETE)
 					todelete[ntodelete++] = i;
 				else
 					stats->num_index_tuples += 1;
diff --git b/src/backend/access/hash/hash.c a/src/backend/access/hash/hash.c
index 34cc08f..ad56d6d 100644
--- b/src/backend/access/hash/hash.c
+++ a/src/backend/access/hash/hash.c
@@ -75,6 +75,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->ambuild = hashbuild;
 	amroutine->ambuildempty = hashbuildempty;
 	amroutine->aminsert = hashinsert;
+	amroutine->amwarminsert = NULL;
 	amroutine->ambulkdelete = hashbulkdelete;
 	amroutine->amvacuumcleanup = hashvacuumcleanup;
 	amroutine->amcanreturn = NULL;
@@ -92,6 +93,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -807,6 +809,7 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 			IndexTuple	itup;
 			Bucket		bucket;
 			bool		kill_tuple = false;
+			IndexBulkDeleteCallbackResult	result;
 
 			itup = (IndexTuple) PageGetItem(page,
 											PageGetItemId(page, offno));
@@ -816,13 +819,18 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 			 * To remove the dead tuples, we strictly want to rely on results
 			 * of callback function.  refer btvacuumpage for detailed reason.
 			 */
-			if (callback && callback(htup, callback_state))
+			if (callback)
 			{
-				kill_tuple = true;
-				if (tuples_removed)
-					*tuples_removed += 1;
+				result = callback(htup, false, callback_state);
+				if (result == IBDCR_DELETE)
+				{
+					kill_tuple = true;
+					if (tuples_removed)
+						*tuples_removed += 1;
+				}
 			}
-			else if (split_cleanup)
+
+			if (!kill_tuple && split_cleanup)
 			{
 				/* delete the tuples that are moved by split. */
 				bucket = _hash_hashkey2bucket(_hash_get_indextuple_hashkey(itup),
diff --git b/src/backend/access/hash/hashsearch.c a/src/backend/access/hash/hashsearch.c
index 2d92049..330ccc5 100644
--- b/src/backend/access/hash/hashsearch.c
+++ a/src/backend/access/hash/hashsearch.c
@@ -59,6 +59,8 @@ _hash_next(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
 	return true;
 }
 
@@ -367,6 +369,9 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
+
 	return true;
 }
 
diff --git b/src/backend/access/heap/README.WARM a/src/backend/access/heap/README.WARM
new file mode 100644
index 0000000..7c93a70
--- /dev/null
+++ a/src/backend/access/heap/README.WARM
@@ -0,0 +1,308 @@
+src/backend/access/heap/README.WARM
+
+Write Amplification Reduction Method (WARM)
+===========================================
+
+The Heap Only Tuple (HOT) feature greatly reduced the number of redundant
+index entries and allowed re-use of the dead space occupied by previously
+updated or deleted tuples (see src/backend/access/heap/README.HOT).
+
+One of the necessary conditions for a HOT update is that the update must
+not change a column used in any of the indexes on the table. This
+condition is sometimes hard to meet, especially for complex workloads
+with several indexes on large yet frequently updated tables. Worse,
+sometimes only one or two indexed columns may be updated, but a regular
+non-HOT update will still insert a new index entry in every index on the
+table, irrespective of whether the key pertaining to a given index has
+changed or not.
+
+WARM is a technique devised to address these problems.
+
+
+Update Chains With Multiple Index Entries Pointing to the Root
+--------------------------------------------------------------
+
+When a non-HOT update is caused by an index key change, a new index
+entry must be inserted for the changed index. But if the index key
+hasn't changed for the other indexes, we don't really need to insert new
+entries for them. Even though the existing index entries point to the
+old tuple, the new tuple is reachable via the t_ctid chain. To keep
+things simple, a WARM update requires that the heap block have enough
+space to store the new version of the tuple. This is the same
+requirement as for HOT updates.
+
+In WARM, we ensure that every index entry always points to the root of
+the WARM chain. In fact, a WARM chain looks exactly like a HOT chain
+except for the fact that there may be multiple index entries pointing
+to the root of the chain. So when a new entry is inserted into an index
+for the updated tuple during a WARM update, the new entry is made to
+point to the root of the WARM chain.
+
+For example, consider a table with two columns and an index on each of
+them. When a tuple is first inserted into the table, each index has
+exactly one entry pointing to the tuple.
+
+	lp [1]
+	[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's entry (aaaa) also points to 1
+
+Now if the tuple's second column is updated and there is room on the
+page, we perform a WARM update. Index1 does not get any new entry, and
+Index2's new entry will still point to the root tuple of the chain.
+
+	lp [1]  [2]
+	[1111, aaaa]->[1111, bbbb]
+
+	Index1's entry (1111) points to 1
+	Index2's old entry (aaaa) points to 1
+	Index2's new entry (bbbb) also points to 1
+
+"An update chain that has more than one index entry pointing to its
+root line pointer is called a WARM chain, and the action that creates
+a WARM chain is called a WARM update."
+
+Since all index entries always point to the root of the WARM chain,
+even when there is more than one such entry, WARM chains can be pruned
+and dead tuples removed without any corresponding index cleanup.
+
+While this solves the problem of pruning dead tuples from a HOT/WARM
+chain, it also opens up a new technical challenge because now we have a
+situation where a heap tuple is reachable from multiple index entries,
+each having a different index key. While MVCC still ensures that only
+valid tuples are returned, a tuple with a wrong index key may be
+returned because of wrong index entries. In the above example, tuple
+[1111, bbbb] is reachable from both keys (aaaa) as well as (bbbb). For
+this reason, tuples returned from a WARM chain must always be
+rechecked for an index key match.
+
+Recheck Index Key Against Heap Tuple
+------------------------------------
+
+Since every index AM has its own notion of index tuples, each AM must
+implement its own method to recheck heap tuples. For example, a hash
+index stores the hash value of the column, so the recheck routine for
+the hash AM must first compute the hash value of the heap attribute
+and then compare it against the value stored in the index tuple.
+
+The patch currently implements recheck routines for hash and btree
+indexes. If the table has an index which doesn't support a recheck
+routine, WARM updates are disabled on that table.
+
+Problem With Duplicate (key, ctid) Index Entries
+------------------------------------------------
+
+The index-key recheck logic works only as long as there are no
+duplicate index entries with the same key pointing to the same WARM
+chain. If there were, the same valid tuple would be reachable via
+multiple index entries, each of which satisfies the index key check.
+In the above example, if the tuple [1111, bbbb] is again updated to
+[1111, aaaa] and we insert a new index entry (aaaa) pointing to the
+root line pointer, we end up with the following structure:
+
+	lp [1]  [2]  [3]
+	[1111, aaaa]->[1111, bbbb]->[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's oldest entry (aaaa) points to 1
+	Index2's old entry (bbbb) also points to 1
+	Index2's new entry (aaaa) also points to 1
+
+We must solve this problem to ensure that the same tuple is not
+reachable via multiple index pointers. There are a couple of ways to
+address this issue:
+
+1. Do not allow WARM update to a tuple from a WARM chain. This
+guarantees that there can never be duplicate index entries to the same
+root line pointer because we must have checked for old and new index
+keys while doing the first WARM update.
+
+2. Do not allow duplicate (key, ctid) index pointers. In the above
+example, since (aaaa, 1) already exists in the index, we must not insert
+a duplicate index entry.
+
+The patch currently implements option 1, i.e. it does not perform a
+WARM update on a tuple that already belongs to a WARM chain. HOT
+updates are still fine because they do not add a new index entry.
+
+Even with this restriction, WARM is a significant improvement because
+the number of regular (non-HOT, non-WARM) updates can be cut in half.
+
+Expression and Partial Indexes
+------------------------------
+
+Expressions may evaluate to the same value even if the underlying
+column values have changed. A simple example is an index on
+"lower(col)", which returns the same value if the new heap value
+differs only in case. So we cannot rely solely on the heap column
+check to decide whether or not to insert a new index entry for an
+expression index. Similarly, for partial indexes the predicate
+expression must be evaluated to decide whether or not a new index
+entry is needed when columns referred to in the predicate change.
+
+(Neither of these is currently implemented; we simply disallow a WARM
+update if a column used in an expression index or an index predicate
+has changed.)
+
+
+Efficiently Finding the Root Line Pointer
+-----------------------------------------
+
+During a WARM update, we must be able to find the root line pointer of
+the tuple being updated. The t_ctid field in the heap tuple header is
+normally used to find the next tuple in the update chain. But the
+tuple that we are updating must be the last tuple in the update chain,
+and in that case the t_ctid field usually points to the tuple itself.
+So we can use t_ctid to store additional information in the last tuple
+of the update chain, provided the fact that the tuple is the last in
+the chain is recorded elsewhere.
+
+We now utilize another bit from t_infomask2 to explicitly identify that
+this is the last tuple in the update chain.
+
+HEAP_LATEST_TUPLE - When this bit is set, the tuple is the last tuple in
+the update chain. The OffsetNumber part of t_ctid points to the root
+line pointer of the chain when HEAP_LATEST_TUPLE flag is set.
+
+If an UPDATE operation aborts, the last tuple in the update chain
+becomes dead, and the tuple which remains the last valid tuple in the
+chain no longer carries the root line pointer information. In such
+rare cases, the root line pointer must be found the hard way, by
+scanning the entire heap page.
+
+Tracking WARM Chains
+--------------------
+
+When a tuple is WARM updated, the old, the new and every subsequent tuple in
+the chain is marked with a special HEAP_WARM_UPDATED flag. We use the last
+remaining bit in t_infomask2 to store this information.
+
+When a tuple is returned from a WARM chain, the caller must do
+additional checks to ensure that the tuple matches the index key. Even
+a tuple that precedes the WARM update in the chain must be rechecked
+for an index key match (this covers the case where an old tuple is
+returned via the new index key). So we must always follow the update
+chain to the end to check whether this is a WARM chain.
+
+Converting WARM chains back to HOT chains (VACUUM ?)
+----------------------------------------------------
+
+The current implementation of WARM allows only one WARM update per
+chain. This simplifies the design and addresses certain issues around
+duplicate key scans, but it also caps the benefit of WARM at 50%.
+That is still significant, but if we could convert WARM chains back to
+normal status, we could do far more WARM updates.
+
+A distinct property of a WARM chain is that at least one index has
+more than one live index entry pointing to the root of the chain. In
+other words, if we can remove the duplicate entry from every index, or
+conclusively prove that there are no duplicate index entries for the
+root line pointer, the chain can again be marked as HOT.
+
+Here is one idea:
+
+A WARM chain has two parts, separated by the tuple that caused the
+WARM update. All tuples within each part have matching index keys, but
+certain index keys may not match between the two parts. Let's say we
+mark heap tuples in the second part with a special HEAP_WARM_TUPLE
+flag. Similarly, the new index entries created by the first WARM
+update are marked with an INDEX_WARM_POINTER flag.
+
+There are two distinct parts of the WARM chain: a first part where
+none of the tuples have the HEAP_WARM_TUPLE flag set, and a second
+part where every tuple has the flag set. Each part satisfies the HOT
+property on its own, i.e. all its tuples have the same values for the
+indexed columns. But the two parts are separated by the WARM update,
+which breaks the HOT property for one or more indexes.
+
+Heap chain: [1] [2] [3] [4]
+			[aaaa, 1111] -> [aaaa, 1111] -> [bbbb, 1111]W -> [bbbb, 1111]W
+
+Index1: 	(aaaa) points to 1 (satisfies only tuples without W)
+			(bbbb)W points to 1 (satisfies only tuples marked with W)
+
+Index2:		(1111) points to 1 (satisfies tuples with and without W)
+
+
+It's clear that, for an index with both pointers, a heap tuple without
+the HEAP_WARM_TUPLE flag is reachable only from the index pointer
+without INDEX_WARM_POINTER, and a tuple with HEAP_WARM_TUPLE only from
+the pointer with INDEX_WARM_POINTER. But for an index which did not
+create a new entry, tuples both with and without the HEAP_WARM_TUPLE
+flag are reachable from the original index pointer, which does not
+have the INDEX_WARM_POINTER flag (no INDEX_WARM_POINTER entry exists
+in such an index).
+
+During the first heap scan of VACUUM, we look for tuples with
+HEAP_WARM_UPDATED set.  If either all or none of the live tuples in
+the chain are marked with the HEAP_WARM_TUPLE flag, the chain is a
+candidate for HOT conversion. We remember the root line pointer and
+whether the tuples in the chain had the HEAP_WARM_TUPLE flag set or
+not.
+
+If we have a WARM chain with HEAP_WARM_TUPLE set, our goal is to
+remove the index pointers without the INDEX_WARM_POINTER flag, and
+vice versa. But there is a catch. For Index2 above, there is only one
+pointer and it does not have the INDEX_WARM_POINTER flag set. Since
+all heap tuples are reachable only via this pointer, it must not be
+removed. IOW, we should remove an index pointer without
+INDEX_WARM_POINTER iff another index pointer with INDEX_WARM_POINTER
+exists. Since index vacuum may visit these pointers in any order, we
+need another index pass to detect redundant index pointers, which can
+safely be removed because all live tuples are reachable via the other
+index pointer. So in the first index pass we check which WARM
+candidates have two index pointers. In the second pass, we remove the
+redundant pointer and clear the INDEX_WARM_POINTER flag if that's the
+surviving index pointer. Note that all index pointers to dead tuples,
+whether CLEAR or WARM, are removed during the first index scan itself.
+
+During the second heap scan, we fix WARM chain by clearing HEAP_WARM_UPDATED
+and HEAP_WARM_TUPLE flags on tuples.
+
+There are some further problems around aborted vacuums. For example,
+if vacuum aborts after clearing an INDEX_WARM_POINTER flag but before
+removing the other index pointer, we end up with two index pointers,
+neither of which has INDEX_WARM_POINTER set. But since the
+HEAP_WARM_UPDATED flag on the heap tuple is still set, further WARM
+updates to the chain are blocked. We will probably need special
+handling for the case of multiple index pointers where none has the
+INDEX_WARM_POINTER flag set. We can either leave these WARM chains
+alone and let them die with a subsequent non-WARM update, or apply the
+heap-recheck logic during index vacuum to find the dead pointer. Given
+that vacuum aborts are not common, I am inclined to leave this case
+unhandled. We must still check for the presence of multiple index
+pointers without INDEX_WARM_POINTER flags and ensure that we don't
+accidentally remove either of those pointers, and also that we don't
+clear such WARM chains.
+
+CREATE INDEX CONCURRENTLY
+-------------------------
+
+Currently CREATE INDEX CONCURRENTLY (CIC) is implemented as a 3-phase
+process. In the first phase, we create a catalog entry for the new
+index so that the index is visible to all other backends, but still
+don't use it for either reads or writes; we do, however, ensure that
+no new broken HOT chains are created by new transactions. In the
+second phase, we build the new index using an MVCC snapshot and then
+make the index available for inserts. We then take another pass to
+find and insert any missing tuples, each time indexing only the
+tuple's root line pointer. See README.HOT for details of how HOT
+impacts CIC and how the various challenges are tackled.
+
+WARM poses another challenge because it allows the creation of HOT
+chains even when an index key changes. Since the new index is not
+ready for insertion until the second phase is over, we might end up
+with a HOT chain whose tuples have different values for the indexed
+columns, only one of which is indexed by the new index. Note that
+during the third phase, we only index tuples whose root line pointer
+is missing from the index. But we can't easily check whether the
+existing index tuple actually indexes the heap tuple visible to the
+new MVCC snapshot. Finding that out would require querying the index
+again for every tuple in the chain, especially for WARM tuples, which
+means repeated index accesses. Another option would be to return index
+keys along with the heap TIDs when the index is scanned to collect all
+indexed TIDs during the third phase. We could then compare the heap
+tuple against the already-indexed key and decide whether or not to
+index the new tuple.
+
+We solve this problem more simply by disallowing WARM updates until
+the index is ready for insertion. We don't need to disallow WARM
+wholesale: only those updates that change the columns of the new index
+are disallowed from being WARM updates.
diff --git b/src/backend/access/heap/heapam.c a/src/backend/access/heap/heapam.c
index e573f1a..daf98c0 100644
--- b/src/backend/access/heap/heapam.c
+++ a/src/backend/access/heap/heapam.c
@@ -1974,6 +1974,212 @@ heap_fetch(Relation relation,
 }
 
 /*
+ * Check status of a (possibly) WARM chain.
+ *
+ * This function looks at a HOT/WARM chain starting at tid and returns a
+ * bitmask of information. We only follow the chain as long as it's known to
+ * be a valid HOT chain. Information returned by the function consists of:
+ *
+ *  HCWC_WARM_UPDATED_TUPLE - a tuple with HEAP_WARM_UPDATED is found somewhere
+ *  						  in the chain. Note that when a tuple is WARM
+ *  						  updated, both old and new versions are marked
+ *  						  with this flag. So presence of this flag
+ *  						  indicates that a WARM update was performed on
+ *  						  this chain, but the update may have either
+ *  						  committed or aborted.
+ *
+ *  HCWC_WARM_TUPLE  - a tuple with HEAP_WARM_TUPLE is found somewhere in
+ *					  the chain. This flag is set only on the new version of
+ *					  the tuple while performing WARM update.
+ *
+ *  HCWC_CLEAR_TUPLE - a tuple without HEAP_WARM_TUPLE is found somewhere in
+ *  					 the chain. This implies that the WARM update either
+ *  					 aborted, or is recent enough that the old tuple has
+ *  					 not yet been pruned away by chain pruning logic.
+ *
+ *	If stop_at_warm is true, we stop when the first HEAP_WARM_UPDATED tuple is
+ *	found and return information collected so far.
+ */
+HeapCheckWarmChainStatus
+heap_check_warm_chain(Page dp, ItemPointer tid, bool stop_at_warm)
+{
+	TransactionId				prev_xmax = InvalidTransactionId;
+	OffsetNumber				offnum;
+	HeapTupleData				heapTuple;
+	HeapCheckWarmChainStatus	status = 0;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+		{
+			if (ItemIdIsRedirected(lp))
+			{
+				/* Follow the redirect */
+				offnum = ItemIdGetRedirect(lp);
+				continue;
+			}
+			/* else must be end of chain */
+			break;
+		}
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		if (HeapTupleHeaderIsWarmUpdated(heapTuple.t_data))
+		{
+			/* We found a WARM_UPDATED tuple */
+			status |= HCWC_WARM_UPDATED_TUPLE;
+
+			/*
+			 * If we've been told to stop at the first WARM_UPDATED tuple, just
+			 * return whatever information we have collected so far.
+			 */
+			if (stop_at_warm)
+				return status;
+
+			/*
+			 * Remember whether it's a CLEAR or a WARM tuple.
+			 */
+			if (HeapTupleHeaderIsWarm(heapTuple.t_data))
+				status |= HCWC_WARM_TUPLE;
+			else
+				status |= HCWC_CLEAR_TUPLE;
+		}
+		else
+			/* Must be a regular, non-WARM tuple */
+			status |= HCWC_CLEAR_TUPLE;
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
+		/*
+		 * It can't be a HOT chain if the tuple contains root line pointer
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	/* Return whatever status information we collected */
+	return status;
+}
+
+/*
+ * Scan through the WARM chain starting at tid and reset all WARM related
+ * flags. At the end, the chain will have all characteristics of a regular HOT
+ * chain.
+ *
+ * Return the number of cleared offnums. Cleared offnums are returned in the
+ * passed-in cleared_offnums array. The caller must ensure that the array is
+ * large enough to hold the maximum number of offnums that can be cleared by
+ * this invocation of heap_clear_warm_chain().
+ */
+int
+heap_clear_warm_chain(Page dp, ItemPointer tid, OffsetNumber *cleared_offnums)
+{
+	TransactionId				prev_xmax = InvalidTransactionId;
+	OffsetNumber				offnum;
+	HeapTupleData				heapTuple;
+	int							num_cleared = 0;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+		{
+			if (ItemIdIsRedirected(lp))
+			{
+				/* Follow the redirect */
+				offnum = ItemIdGetRedirect(lp);
+				continue;
+			}
+			/* else must be end of chain */
+			break;
+		}
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		/*
+		 * Clear WARM_UPDATED and WARM flags.
+		 */
+		if (HeapTupleHeaderIsWarmUpdated(heapTuple.t_data))
+		{
+			HeapTupleHeaderClearWarmUpdated(heapTuple.t_data);
+			HeapTupleHeaderClearWarm(heapTuple.t_data);
+			cleared_offnums[num_cleared++] = offnum;
+		}
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
+		/*
+		 * It can't be a HOT chain if the tuple contains root line pointer
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	return num_cleared;
+}
+
+/*
  *	heap_hot_search_buffer	- search HOT chain for tuple satisfying snapshot
  *
  * On entry, *tid is the TID of a tuple (either a simple tuple, or the root
@@ -1993,11 +2199,14 @@ heap_fetch(Relation relation,
  * Unlike heap_fetch, the caller must already have pin and (at least) share
  * lock on the buffer; it is still pinned/locked at exit.  Also unlike
  * heap_fetch, we do not report any pgstats count; caller may do so if wanted.
+ *
+ * recheck should be set false on entry by caller, will be set true on exit
+ * if a WARM tuple is encountered.
  */
 bool
 heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 					   Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call)
+					   bool *all_dead, bool first_call, bool *recheck)
 {
 	Page		dp = (Page) BufferGetPage(buffer);
 	TransactionId prev_xmax = InvalidTransactionId;
@@ -2051,9 +2260,12 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		ItemPointerSetOffsetNumber(&heapTuple->t_self, offnum);
 
 		/*
-		 * Shouldn't see a HEAP_ONLY tuple at chain start.
+		 * Shouldn't see a HEAP_ONLY tuple at chain start, unless we are
+		 * dealing with a WARM-updated tuple, in which case deferred triggers
+		 * may request to fetch a WARM tuple from the middle of a chain.
 		 */
-		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple))
+		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple) &&
+				!HeapTupleIsWarmUpdated(heapTuple))
 			break;
 
 		/*
@@ -2066,6 +2278,20 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 			break;
 
 		/*
+		 * Check if there exists a WARM tuple somewhere down the chain and set
+		 * recheck to TRUE.
+		 *
+		 * XXX This is not very efficient right now, and we should look for
+		 * possible improvements here.
+		 */
+		if (recheck && *recheck == false)
+		{
+			HeapCheckWarmChainStatus status;
+			status = heap_check_warm_chain(dp, &heapTuple->t_self, true);
+			*recheck = HCWC_IS_WARM_UPDATED(status);
+		}
+
+		/*
 		 * When first_call is true (and thus, skip is initially false) we'll
 		 * return the first tuple we find.  But on later passes, heapTuple
 		 * will initially be pointing to the tuple we returned last time.
@@ -2114,7 +2340,8 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		 * Check to see if HOT chain continues past this tuple; if so fetch
 		 * the next offnum and loop around.
 		 */
-		if (HeapTupleIsHotUpdated(heapTuple))
+		if (HeapTupleIsHotUpdated(heapTuple) &&
+			!HeapTupleHeaderHasRootOffset(heapTuple->t_data))
 		{
 			Assert(ItemPointerGetBlockNumber(&heapTuple->t_data->t_ctid) ==
 				   ItemPointerGetBlockNumber(tid));
@@ -2138,18 +2365,41 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
  */
 bool
 heap_hot_search(ItemPointer tid, Relation relation, Snapshot snapshot,
-				bool *all_dead)
+				bool *all_dead, bool *recheck, Buffer *cbuffer,
+				HeapTuple heapTuple)
 {
 	bool		result;
 	Buffer		buffer;
-	HeapTupleData heapTuple;
+	ItemPointerData ret_tid = *tid;
 
 	buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	LockBuffer(buffer, BUFFER_LOCK_SHARE);
-	result = heap_hot_search_buffer(tid, relation, buffer, snapshot,
-									&heapTuple, all_dead, true);
-	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-	ReleaseBuffer(buffer);
+	result = heap_hot_search_buffer(&ret_tid, relation, buffer, snapshot,
+									heapTuple, all_dead, true, recheck);
+
+	/*
+	 * If we are returning a potential candidate tuple from this chain and the
+	 * caller has requested the "recheck" hint, keep the buffer locked and
+	 * pinned. The caller must release the lock and pin on the buffer in all
+	 * such cases.
+	 */
+	if (!result || !recheck || !(*recheck))
+	{
+		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+		ReleaseBuffer(buffer);
+	}
+
+	/*
+	 * Set the caller supplied tid with the actual location of the tuple being
+	 * returned.
+	 */
+	if (result)
+	{
+		*tid = ret_tid;
+		if (cbuffer)
+			*cbuffer = buffer;
+	}
+
 	return result;
 }
 
@@ -2792,7 +3042,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		{
 			XLogRecPtr	recptr;
 			xl_heap_multi_insert *xlrec;
-			uint8		info = XLOG_HEAP2_MULTI_INSERT;
+			uint8		info = XLOG_HEAP_MULTI_INSERT;
 			char	   *tupledata;
 			int			totaldatalen;
 			char	   *scratchptr = scratch;
@@ -2889,7 +3139,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			/* filtering by origin on a row level is much more efficient */
 			XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN);
 
-			recptr = XLogInsert(RM_HEAP2_ID, info);
+			recptr = XLogInsert(RM_HEAP_ID, info);
 
 			PageSetLSN(page, recptr);
 		}
@@ -3278,7 +3528,7 @@ l1:
 							  &new_xmax, &new_infomask, &new_infomask2);
 
 	/*
-	 * heap_get_root_tuple_one() may call palloc, which is disallowed once we
+	 * heap_get_root_tuple() may call palloc, which is disallowed once we
 	 * enter the critical section. So check if the root offset is cached in the
 	 * tuple and if not, fetch that information hard way before entering the
 	 * critical section.
@@ -3313,7 +3563,9 @@ l1:
 	}
 
 	/* store transaction information of xact deleting the tuple */
-	tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	tp.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(tp.t_data))
+		tp.t_data->t_infomask &= ~HEAP_MOVED;
 	tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 	tp.t_data->t_infomask |= new_infomask;
 	tp.t_data->t_infomask2 |= new_infomask2;
@@ -3508,15 +3760,18 @@ simple_heap_delete(Relation relation, ItemPointer tid)
 HTSU_Result
 heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode)
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update)
 {
 	HTSU_Result result;
 	TransactionId xid = GetCurrentTransactionId();
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *exprindx_attrs;
 	Bitmapset  *interesting_attrs;
 	Bitmapset  *modified_attrs;
+	Bitmapset  *notready_attrs;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3537,6 +3792,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		have_tuple_lock = false;
 	bool		iscombo;
 	bool		use_hot_update = false;
+	bool		use_warm_update = false;
 	bool		hot_attrs_checked = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
@@ -3562,6 +3818,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot update tuples during a parallel operation")));
 
+	/* Assume no-warm update */
+	if (warm_update)
+		*warm_update = false;
+
 	/*
 	 * Fetch the list of attributes to be checked for various operations.
 	 *
@@ -3583,6 +3843,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	exprindx_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_EXPR_PREDICATE);
+	notready_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_NOTREADY);
 
 
 	block = ItemPointerGetBlockNumber(otid);
@@ -3606,8 +3870,11 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 		interesting_attrs = bms_add_members(interesting_attrs, hot_attrs);
 		hot_attrs_checked = true;
 	}
+
 	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, exprindx_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, notready_attrs);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -3654,6 +3921,9 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
 												  &oldtup, newtup);
 
+	if (modified_attrsp)
+		*modified_attrsp = bms_copy(modified_attrs);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3909,8 +4179,10 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(exprindx_attrs);
 		bms_free(modified_attrs);
 		bms_free(interesting_attrs);
+		bms_free(notready_attrs);
 		return result;
 	}
 
@@ -4074,7 +4346,9 @@ l2:
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
-		oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		oldtup.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(oldtup.t_data))
+			oldtup.t_data->t_infomask &= ~HEAP_MOVED;
 		oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		HeapTupleClearHotUpdated(&oldtup);
 		/* ... and store info about transaction updating this tuple */
@@ -4228,6 +4502,39 @@ l2:
 		 */
 		if (hot_attrs_checked && !bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
+		else
+		{
+			/*
+			 * If there have been no WARM updates on this chain yet, let this
+			 * update be a WARM update. We must not do a WARM update even if
+			 * the previous WARM update ultimately aborted; that's why we look
+			 * at the HEAP_WARM_UPDATED flag.
+			 *
+			 * We don't do WARM updates if one of the columns used in index
+			 * expressions is being modified. Since expressions may evaluate
+			 * to the same value even when the heap values change, we don't
+			 * have a good way to deal with duplicate key scans when
+			 * expressions are used in the index.
+			 *
+			 * We also check whether the HOT attrs are a subset of the
+			 * modified attributes. Since the HOT attrs include all index
+			 * attributes, this lets us avoid a WARM update when all index
+			 * attributes are being updated; in that case WARM gains nothing,
+			 * because every index will receive a new entry anyway.
+			 *
+			 * Finally, we disable WARM temporarily if we are modifying a
+			 * column used by a new index that is still being added. We can't
+			 * insert new entries into such an index, so we must not create
+			 * WARM chains which are broken with respect to it.
+			 */
+			if (relation->rd_supportswarm &&
+				!HeapTupleIsWarmUpdated(&oldtup) &&
+				!bms_overlap(modified_attrs, exprindx_attrs) &&
+				!bms_is_subset(hot_attrs, modified_attrs) &&
+				!bms_overlap(notready_attrs, modified_attrs))
+				use_warm_update = true;
+		}
 	}
 	else
 	{
@@ -4274,6 +4581,32 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+
+		/*
+		 * Even if we are doing a HOT update, we must carry forward the WARM
+		 * flag, because we may already have inserted another index entry
+		 * pointing to our root, and a third entry could create duplicates.
+		 *
+		 * Note: If we ever have a mechanism to avoid duplicate <key, TID>
+		 * entries in indexes, we could look at relaxing this restriction and
+		 * allow even more WARM updates.
+		 */
+		if (HeapTupleIsWarmUpdated(&oldtup))
+		{
+			HeapTupleSetWarmUpdated(heaptup);
+			HeapTupleSetWarmUpdated(newtup);
+		}
+
+		/*
+		 * If the old tuple is a WARM tuple then mark the new tuple as a WARM
+		 * tuple as well.
+		 */
+		if (HeapTupleIsWarm(&oldtup))
+		{
+			HeapTupleSetWarm(heaptup);
+			HeapTupleSetWarm(newtup);
+		}
+
 		/*
 		 * For HOT (or WARM) updated tuples, we store the offset of the root
 		 * line pointer of this chain in the ip_posid field of the new tuple.
@@ -4286,12 +4619,45 @@ l2:
 		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
 			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
+	else if (use_warm_update)
+	{
+		/* Mark the old tuple as HOT-updated */
+		HeapTupleSetHotUpdated(&oldtup);
+		HeapTupleSetWarmUpdated(&oldtup);
+
+		/* And mark the new tuple as heap-only */
+		HeapTupleSetHeapOnly(heaptup);
+		/* Mark the new tuple as a WARM tuple */
+		HeapTupleSetWarmUpdated(heaptup);
+		/* This update also starts the WARM chain */
+		HeapTupleSetWarm(heaptup);
+		Assert(!HeapTupleIsWarm(&oldtup));
+
+		/* Mark the caller's copy too, in case different from heaptup */
+		HeapTupleSetHeapOnly(newtup);
+		HeapTupleSetWarmUpdated(newtup);
+		HeapTupleSetWarm(newtup);
+
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
+		/* Let the caller know we did a WARM update */
+		if (warm_update)
+			*warm_update = true;
+	}
 	else
 	{
 		/* Make sure tuples are correctly marked as not-HOT */
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		HeapTupleClearWarmUpdated(heaptup);
+		HeapTupleClearWarmUpdated(newtup);
+		HeapTupleClearWarm(heaptup);
+		HeapTupleClearWarm(newtup);
 		root_offnum = InvalidOffsetNumber;
 	}
 
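
The three branches above each assign a fixed combination of flags to the new tuple. A toy summary of those transitions, with hypothetical bit values standing in for the real HEAP_* infomask bits:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical flag bits; the actual HEAP_* infomask values differ. */
#define F_HEAP_ONLY    0x1
#define F_WARM_UPDATED 0x2
#define F_WARM         0x4

/* Flags carried by the new tuple for each kind of update, in the spirit
 * of the three branches above: a HOT update carries forward the chain's
 * WARM state, a WARM update starts a WARM chain, and a regular update
 * clears all chain flags. */
static uint8_t
new_tuple_flags(int kind /* 0=HOT, 1=WARM, 2=regular */, uint8_t old_flags)
{
    switch (kind)
    {
        case 0:  /* HOT: heap-only, carrying forward any WARM state */
            return F_HEAP_ONLY | (old_flags & (F_WARM_UPDATED | F_WARM));
        case 1:  /* WARM: heap-only and starts the WARM chain */
            return F_HEAP_ONLY | F_WARM_UPDATED | F_WARM;
        default: /* regular update: all chain flags cleared */
            return 0;
    }
}
```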
@@ -4310,7 +4676,9 @@ l2:
 	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
-	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	oldtup.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(oldtup.t_data))
+		oldtup.t_data->t_infomask &= ~HEAP_MOVED;
 	oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 	/* ... and store info about transaction updating this tuple */
 	Assert(TransactionIdIsValid(xmax_old_tuple));
@@ -4401,7 +4769,10 @@ l2:
 	if (have_tuple_lock)
 		UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode);
 
-	pgstat_count_heap_update(relation, use_hot_update);
+	/*
+	 * Count HOT and WARM updates separately
+	 */
+	pgstat_count_heap_update(relation, use_hot_update, use_warm_update);
 
 	/*
 	 * If heaptup is a private copy, release it.  Don't forget to copy t_self
@@ -4421,6 +4792,8 @@ l2:
 	bms_free(id_attrs);
 	bms_free(modified_attrs);
 	bms_free(interesting_attrs);
+	bms_free(exprindx_attrs);
+	bms_free(notready_attrs);
 
 	return HeapTupleMayBeUpdated;
 }
@@ -4497,9 +4870,47 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 	}
 	else
 	{
+		bool res;
+		bool value1_free = false, value2_free = false;
+
 		Assert(attrnum <= tupdesc->natts);
 		att = tupdesc->attrs[attrnum - 1];
-		return datumIsEqual(value1, value2, att->attbyval, att->attlen);
+
+		/*
+		 * Fetch untoasted values before doing the comparison.
+		 *
+		 * It's OK for HOT to declare values non-equal even when they are
+		 * logically equal; at worst, this causes certain potential HOT
+		 * updates to be done in a non-HOT manner. But WARM relies on index
+		 * recheck to decide which index pointer should return which row in a
+		 * WARM chain. For that, it's necessary that if old and new heap
+		 * values are declared unequal here, they also produce different
+		 * index values. We are not so much bothered about logical equality,
+		 * since recheck also uses datumIsEqual, but if datumIsEqual returns
+		 * false here, it must return false during index recheck too. So we
+		 * must detoast the heap values before doing the comparison. As a
+		 * bonus, this may allow a HOT update that would otherwise have been
+		 * missed.
+		 */
+		if ((att->attlen == -1) && VARATT_IS_EXTENDED(value1))
+		{
+			value1 = PointerGetDatum(heap_tuple_untoast_attr((struct varlena *)
+					DatumGetPointer(value1)));
+			value1_free = true;
+		}
+
+		if ((att->attlen == -1) && VARATT_IS_EXTENDED(value2))
+		{
+			value2 = PointerGetDatum(heap_tuple_untoast_attr((struct varlena *)
+					DatumGetPointer(value2)));
+			value2_free = true;
+		}
+		res = datumIsEqual(value1, value2, att->attbyval, att->attlen);
+		if (value1_free)
+			pfree(DatumGetPointer(value1));
+		if (value2_free)
+			pfree(DatumGetPointer(value2));
+		return res;
 	}
 }
 
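
The detoasting requirement in that comment boils down to: a byte-wise comparison may call two encodings of the same value unequal, and WARM's recheck must never disagree with the decision taken here. A toy analogy, where a leading 'Z' marker stands in for TOAST compression (the encoding is invented purely for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* A raw byte comparison, like datumIsEqual on toasted datums, can report
 * "not equal" for two encodings of the same logical value.  That is fine
 * for HOT (a missed optimization) but fatal for WARM's recheck, hence
 * the untoast-before-compare step above. */
static bool raw_equal(const char *a, const char *b)
{
    return strcmp(a, b) == 0;
}

/* "Detoast": strip the leading 'Z' compression marker in this toy encoding. */
static const char *decode(const char *v)
{
    return (v[0] == 'Z') ? v + 1 : v;
}

static bool decoded_equal(const char *a, const char *b)
{
    return strcmp(decode(a), decode(b)) == 0;
}
```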
@@ -4541,7 +4952,8 @@ HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
  * via ereport().
  */
 void
-simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
+simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup,
+		Bitmapset **modified_attrs, bool *warm_update)
 {
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
@@ -4550,7 +4962,7 @@ simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
 	result = heap_update(relation, otid, tup,
 						 GetCurrentCommandId(true), InvalidSnapshot,
 						 true /* wait for commit */ ,
-						 &hufd, &lockmode);
+						 &hufd, &lockmode, modified_attrs, warm_update);
 	switch (result)
 	{
 		case HeapTupleSelfUpdated:
@@ -6227,7 +6639,9 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	PageSetPrunable(page, RecentGlobalXmin);
 
 	/* store transaction information of xact deleting the tuple */
-	tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	tp.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(tp.t_data))
+		tp.t_data->t_infomask &= ~HEAP_MOVED;
 	tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 
 	/*
@@ -6801,7 +7215,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid,
 	 * Old-style VACUUM FULL is gone, but we have to keep this code as long as
 	 * we support having MOVED_OFF/MOVED_IN tuples in the database.
 	 */
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 
@@ -6820,7 +7234,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid,
 			 * have failed; whereas a non-dead MOVED_IN tuple must mean the
 			 * xvac transaction succeeded.
 			 */
-			if (tuple->t_infomask & HEAP_MOVED_OFF)
+			if (HeapTupleHeaderIsMovedOff(tuple))
 				frz->frzflags |= XLH_INVALID_XVAC;
 			else
 				frz->frzflags |= XLH_FREEZE_XVAC;
@@ -7290,7 +7704,7 @@ heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple)
 			return true;
 	}
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 		if (TransactionIdIsNormal(xid))
@@ -7373,7 +7787,7 @@ heap_tuple_needs_freeze(HeapTupleHeader tuple, TransactionId cutoff_xid,
 			return true;
 	}
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 		if (TransactionIdIsNormal(xid) &&
@@ -7399,7 +7813,7 @@ HeapTupleHeaderAdvanceLatestRemovedXid(HeapTupleHeader tuple,
 	TransactionId xmax = HeapTupleHeaderGetUpdateXid(tuple);
 	TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		if (TransactionIdPrecedes(*latestRemovedXid, xvac))
 			*latestRemovedXid = xvac;
@@ -7448,6 +7862,36 @@ log_heap_cleanup_info(RelFileNode rnode, TransactionId latestRemovedXid)
 }
 
 /*
+ * Perform XLogInsert for a heap-warm-clear operation.  Caller must already
+ * have modified the buffer and marked it dirty.
+ */
+XLogRecPtr
+log_heap_warmclear(Relation reln, Buffer buffer,
+			   OffsetNumber *cleared, int ncleared)
+{
+	xl_heap_warmclear	xlrec;
+	XLogRecPtr			recptr;
+
+	/* Caller should not call me on a non-WAL-logged relation */
+	Assert(RelationNeedsWAL(reln));
+
+	xlrec.ncleared = ncleared;
+
+	XLogBeginInsert();
+	XLogRegisterData((char *) &xlrec, SizeOfHeapWarmClear);
+
+	XLogRegisterBuffer(0, buffer, REGBUF_STANDARD);
+
+	if (ncleared > 0)
+		XLogRegisterBufData(0, (char *) cleared,
+							ncleared * sizeof(OffsetNumber));
+
+	recptr = XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_WARMCLEAR);
+
+	return recptr;
+}
+
+/*
  * Perform XLogInsert for a heap-clean operation.  Caller must already
  * have modified the buffer and marked it dirty.
  *
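
For readers unfamiliar with the WAL conventions, log_heap_warmclear above registers a fixed-size header carrying ncleared, followed by the offset array as block data. A stand-alone sketch of that payload layout (the struct and helper names are invented; the real code goes through XLogRegisterData/XLogRegisterBufData rather than a flat buffer):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef uint16_t OffsetNumber;

/* Invented mirror of the record header: just the cleared-offset count. */
typedef struct { int ncleared; } xl_heap_warmclear_sketch;

/* Lay out header + offset array back to back, as the WAL record does. */
static size_t
serialize_warmclear(char *buf, const OffsetNumber *cleared, int ncleared)
{
    xl_heap_warmclear_sketch hdr = { ncleared };
    memcpy(buf, &hdr, sizeof(hdr));
    memcpy(buf + sizeof(hdr), cleared, ncleared * sizeof(OffsetNumber));
    return sizeof(hdr) + ncleared * sizeof(OffsetNumber);
}

/* Redo side: read the count back out of the header. */
static int
deserialize_ncleared(const char *buf)
{
    xl_heap_warmclear_sketch hdr;
    memcpy(&hdr, buf, sizeof(hdr));
    return hdr.ncleared;
}

/* Round-trip helper for a quick self-check. */
static int roundtrip(int ncleared)
{
    char buf[64];
    OffsetNumber offs[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
    serialize_warmclear(buf, offs, ncleared);
    return deserialize_ncleared(buf);
}
```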
@@ -7602,6 +8046,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	bool		need_tuple_data = RelationIsLogicallyLogged(reln);
 	bool		init;
 	int			bufflags;
+	bool		warm_update = false;
 
 	/* Caller should not call me on a non-WAL-logged relation */
 	Assert(RelationNeedsWAL(reln));
@@ -7613,6 +8058,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	else
 		info = XLOG_HEAP_UPDATE;
 
+	if (HeapTupleIsWarmUpdated(newtup))
+		warm_update = true;
+
 	/*
 	 * If the old and new tuple are on the same page, we only need to log the
 	 * parts of the new tuple that were changed.  That saves on the amount of
@@ -7686,6 +8134,8 @@ log_heap_update(Relation reln, Buffer oldbuf,
 				xlrec.flags |= XLH_UPDATE_CONTAINS_OLD_KEY;
 		}
 	}
+	if (warm_update)
+		xlrec.flags |= XLH_UPDATE_WARM_UPDATE;
 
 	/* If new tuple is the single and first tuple on page... */
 	if (ItemPointerGetOffsetNumber(&(newtup->t_self)) == FirstOffsetNumber &&
@@ -8100,6 +8550,60 @@ heap_xlog_clean(XLogReaderState *record)
 		XLogRecordPageWithFreeSpace(rnode, blkno, freespace);
 }
 
+
+/*
+ * Handles HEAP2_WARMCLEAR record type
+ */
+static void
+heap_xlog_warmclear(XLogReaderState *record)
+{
+	XLogRecPtr	lsn = record->EndRecPtr;
+	xl_heap_warmclear	*xlrec = (xl_heap_warmclear *) XLogRecGetData(record);
+	Buffer		buffer;
+	RelFileNode rnode;
+	BlockNumber blkno;
+	XLogRedoAction action;
+
+	XLogRecGetBlockTag(record, 0, &rnode, NULL, &blkno);
+
+	/*
+	 * If we have a full-page image, restore it (using a cleanup lock) and
+	 * we're done.
+	 */
+	action = XLogReadBufferForRedoExtended(record, 0, RBM_NORMAL, true,
+										   &buffer);
+	if (action == BLK_NEEDS_REDO)
+	{
+		Page		page = (Page) BufferGetPage(buffer);
+		OffsetNumber *cleared;
+		int			ncleared;
+		Size		datalen;
+		int			i;
+
+		cleared = (OffsetNumber *) XLogRecGetBlockData(record, 0, &datalen);
+
+		ncleared = xlrec->ncleared;
+
+		for (i = 0; i < ncleared; i++)
+		{
+			ItemId			lp;
+			OffsetNumber	offnum = cleared[i];
+			HeapTupleData	heapTuple;
+
+			lp = PageGetItemId(page, offnum);
+			heapTuple.t_data = (HeapTupleHeader) PageGetItem(page, lp);
+
+			HeapTupleHeaderClearWarmUpdated(heapTuple.t_data);
+			HeapTupleHeaderClearWarm(heapTuple.t_data);
+		}
+
+		PageSetLSN(page, lsn);
+		MarkBufferDirty(buffer);
+	}
+	if (BufferIsValid(buffer))
+		UnlockReleaseBuffer(buffer);
+}
+
 /*
  * Replay XLOG_HEAP2_VISIBLE record.
  *
@@ -8346,7 +8850,9 @@ heap_xlog_delete(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		HeapTupleHeaderClearHotUpdated(htup);
 		fix_infomask_from_infobits(xlrec->infobits_set,
@@ -8367,7 +8873,7 @@ heap_xlog_delete(XLogReaderState *record)
 		if (!HeapTupleHeaderHasRootOffset(htup))
 		{
 			OffsetNumber	root_offnum;
-			root_offnum = heap_get_root_tuple(page, xlrec->offnum); 
+			root_offnum = heap_get_root_tuple(page, xlrec->offnum);
 			HeapTupleHeaderSetHeapLatest(htup, root_offnum);
 		}
 
@@ -8663,16 +9169,22 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 	Size		freespace = 0;
 	XLogRedoAction oldaction;
 	XLogRedoAction newaction;
+	bool		warm_update = false;
 
 	/* initialize to keep the compiler quiet */
 	oldtup.t_data = NULL;
 	oldtup.t_len = 0;
 
+	if (xlrec->flags & XLH_UPDATE_WARM_UPDATE)
+		warm_update = true;
+
 	XLogRecGetBlockTag(record, 0, &rnode, NULL, &newblk);
 	if (XLogRecGetBlockTag(record, 1, NULL, NULL, &oldblk))
 	{
 		/* HOT updates are never done across pages */
 		Assert(!hot_update);
+		/* WARM updates are never done across pages */
+		Assert(!warm_update);
 	}
 	else
 		oldblk = newblk;
@@ -8732,6 +9244,11 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 								   &htup->t_infomask2);
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
+
+		/* Mark the old tuple as WARM-updated */
+		if (warm_update)
+			HeapTupleHeaderSetWarmUpdated(htup);
+
 		/* Set forward chain link in t_ctid */
 		HeapTupleHeaderSetNextTid(htup, &newtid);
 
@@ -8867,6 +9384,10 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
 
+		/* Mark the new tuple as WARM-updated */
+		if (warm_update)
+			HeapTupleHeaderSetWarmUpdated(htup);
+
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
@@ -8994,7 +9515,9 @@ heap_xlog_lock(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		fix_infomask_from_infobits(xlrec->infobits_set, &htup->t_infomask,
 								   &htup->t_infomask2);
@@ -9073,7 +9596,9 @@ heap_xlog_lock_updated(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		fix_infomask_from_infobits(xlrec->infobits_set, &htup->t_infomask,
 								   &htup->t_infomask2);
@@ -9142,6 +9667,9 @@ heap_redo(XLogReaderState *record)
 		case XLOG_HEAP_INSERT:
 			heap_xlog_insert(record);
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			heap_xlog_multi_insert(record);
+			break;
 		case XLOG_HEAP_DELETE:
 			heap_xlog_delete(record);
 			break;
@@ -9170,7 +9698,7 @@ heap2_redo(XLogReaderState *record)
 {
 	uint8		info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
 
-	switch (info & XLOG_HEAP_OPMASK)
+	switch (info & XLOG_HEAP2_OPMASK)
 	{
 		case XLOG_HEAP2_CLEAN:
 			heap_xlog_clean(record);
@@ -9184,9 +9712,6 @@ heap2_redo(XLogReaderState *record)
 		case XLOG_HEAP2_VISIBLE:
 			heap_xlog_visible(record);
 			break;
-		case XLOG_HEAP2_MULTI_INSERT:
-			heap_xlog_multi_insert(record);
-			break;
 		case XLOG_HEAP2_LOCK_UPDATED:
 			heap_xlog_lock_updated(record);
 			break;
@@ -9200,6 +9725,9 @@ heap2_redo(XLogReaderState *record)
 		case XLOG_HEAP2_REWRITE:
 			heap_xlog_logical_rewrite(record);
 			break;
+		case XLOG_HEAP2_WARMCLEAR:
+			heap_xlog_warmclear(record);
+			break;
 		default:
 			elog(PANIC, "heap2_redo: unknown op code %u", info);
 	}
diff --git b/src/backend/access/heap/pruneheap.c a/src/backend/access/heap/pruneheap.c
index f54337c..6a3baff 100644
--- b/src/backend/access/heap/pruneheap.c
+++ a/src/backend/access/heap/pruneheap.c
@@ -834,6 +834,13 @@ heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				continue;
 
+			/*
+			 * If the tuple stores its root line pointer, it must be at the
+			 * end of the chain.
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			/* Set up to scan the HOT-chain */
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
@@ -928,6 +935,6 @@ heap_get_root_tuple(Page page, OffsetNumber target_offnum)
 void
 heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 {
-	return heap_get_root_tuples_internal(page, InvalidOffsetNumber,
+	heap_get_root_tuples_internal(page, InvalidOffsetNumber,
 			root_offsets);
 }
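
The early exit added above relates to the "Root Pointer Search" problem: without a cached root offset, finding a tuple's root line pointer means walking every chain on the page. A toy model of that scan, with plain arrays standing in for line pointers and t_ctid links (all names and the page layout are invented for illustration):

```c
#include <assert.h>

#define NOFFS 8

/* next[i] is the offset the tuple at i points to (its t_ctid), or -1 at
 * chain end; is_start[i] marks tuples that are not heap-only and can
 * therefore begin a chain.  Finding the root of an arbitrary member is
 * O(page) -- which is why the patch caches the root offset instead. */
static int
find_root(const int next[], const int is_start[], int n, int target)
{
    for (int root = 0; root < n; root++)
    {
        if (!is_start[root])
            continue;               /* heap-only tuples can't start a chain */
        for (int off = root; off != -1; off = next[off])
            if (off == target)
                return root;
    }
    return -1;
}

/* A fixed demo page holding one chain: 0 -> 2 -> 5. */
static int demo_find_root(int target)
{
    static const int next[NOFFS]     = { 2, -1, 5, -1, -1, -1, -1, -1 };
    static const int is_start[NOFFS] = { 1,  0, 0,  0,  0,  0,  0,  0 };
    return find_root(next, is_start, NOFFS, target);
}
```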
diff --git b/src/backend/access/heap/rewriteheap.c a/src/backend/access/heap/rewriteheap.c
index 2d3ae9b..bd469ee 100644
--- b/src/backend/access/heap/rewriteheap.c
+++ a/src/backend/access/heap/rewriteheap.c
@@ -404,6 +404,14 @@ rewrite_heap_tuple(RewriteState state,
 		old_tuple->t_data->t_infomask & HEAP_XACT_MASK;
 
 	/*
+	 * We must clear the HEAP_WARM_TUPLE flag if the HEAP_WARM_UPDATED is
+	 * cleared above.
+	 */
+	if (HeapTupleHeaderIsWarmUpdated(old_tuple->t_data))
+		HeapTupleHeaderClearWarm(new_tuple->t_data);
+
+
+	/*
 	 * While we have our hands on the tuple, we may as well freeze any
 	 * eligible xmin or xmax, so that future VACUUM effort can be saved.
 	 */
@@ -428,7 +436,7 @@ rewrite_heap_tuple(RewriteState state,
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
 
-		/* 
+		/*
 		 * We've already checked that this is not the last tuple in the chain,
 		 * so fetch the next TID in the chain.
 		 */
@@ -737,7 +745,7 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		/* 
+		/*
 		 * Set t_ctid just to ensure that block number is copied correctly, but
 		 * then immediately mark the tuple as the latest.
 		 */
diff --git b/src/backend/access/heap/tuptoaster.c a/src/backend/access/heap/tuptoaster.c
index aa5a45d..bab48fd 100644
--- b/src/backend/access/heap/tuptoaster.c
+++ a/src/backend/access/heap/tuptoaster.c
@@ -1688,7 +1688,8 @@ toast_save_datum(Relation rel, Datum value,
 							 toastrel,
 							 toastidxs[i]->rd_index->indisunique ?
 							 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-							 NULL);
+							 NULL,
+							 false);
 		}
 
 		/*
diff --git b/src/backend/access/index/genam.c a/src/backend/access/index/genam.c
index a91fda7..d523c8f 100644
--- b/src/backend/access/index/genam.c
+++ a/src/backend/access/index/genam.c
@@ -127,6 +127,8 @@ RelationGetIndexScan(Relation indexRelation, int nkeys, int norderbys)
 	scan->xs_cbuf = InvalidBuffer;
 	scan->xs_continue_hot = false;
 
+	scan->indexInfo = NULL;
+
 	return scan;
 }
 
diff --git b/src/backend/access/index/indexam.c a/src/backend/access/index/indexam.c
index cc5ac8b..d048714 100644
--- b/src/backend/access/index/indexam.c
+++ a/src/backend/access/index/indexam.c
@@ -197,7 +197,8 @@ index_insert(Relation indexRelation,
 			 ItemPointer heap_t_ctid,
 			 Relation heapRelation,
 			 IndexUniqueCheck checkUnique,
-			 IndexInfo *indexInfo)
+			 IndexInfo *indexInfo,
+			 bool warm_update)
 {
 	RELATION_CHECKS;
 	CHECK_REL_PROCEDURE(aminsert);
@@ -207,6 +208,12 @@ index_insert(Relation indexRelation,
 									   (HeapTuple) NULL,
 									   InvalidBuffer);
 
+	if (warm_update)
+	{
+		Assert(indexRelation->rd_amroutine->amwarminsert != NULL);
+		return indexRelation->rd_amroutine->amwarminsert(indexRelation, values,
+				isnull, heap_t_ctid, heapRelation, checkUnique, indexInfo);
+	}
 	return indexRelation->rd_amroutine->aminsert(indexRelation, values, isnull,
 												 heap_t_ctid, heapRelation,
 												 checkUnique, indexInfo);
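
The aminsert/amwarminsert split above is plain function-pointer dispatch on the AM routine table. A minimal stand-alone sketch (AmRoutineSketch and the int payload are invented; the real IndexAmRoutine callbacks take values/isnull/ctid and more):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef int (*insert_fn)(int key);

/* Invented mirror of the relevant IndexAmRoutine members. */
typedef struct AmRoutineSketch
{
    insert_fn aminsert;
    insert_fn amwarminsert;   /* may be NULL for AMs without WARM support */
} AmRoutineSketch;

static int plain_insert(int key) { return key; }
static int warm_insert(int key)  { return -key; }  /* tag WARM inserts */

/* Route the insert the way index_insert above does: WARM updates go to
 * amwarminsert, which must exist for WARM-capable AMs. */
static int
dispatch_insert(const AmRoutineSketch *am, int key, bool warm_update)
{
    if (warm_update)
    {
        assert(am->amwarminsert != NULL);
        return am->amwarminsert(key);
    }
    return am->aminsert(key);
}

static int demo(bool warm)
{
    AmRoutineSketch am = { plain_insert, warm_insert };
    return dispatch_insert(&am, 7, warm);
}
```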
@@ -291,6 +298,25 @@ index_beginscan_internal(Relation indexRelation,
 	scan->parallel_scan = pscan;
 	scan->xs_temp_snap = temp_snap;
 
+	/*
+	 * If the index supports recheck, make sure that index tuple is saved
+	 * during index scans. Also build and cache IndexInfo which is used by
+	 * amrecheck routine.
+	 *
+	 * XXX Ideally, we should look at all indexes on the table and check if
+	 * WARM is at all supported on the base table. If WARM is not supported
+	 * then we don't need to do any recheck. RelationGetIndexAttrBitmap() does
+	 * do that and sets rd_supportswarm after looking at all indexes. But we
+	 * don't know if the function was called earlier in the session when we're
+	 * here. We can't call it now because of the risk of causing a
+	 * deadlock.
+	 */
+	if (indexRelation->rd_amroutine->amrecheck)
+	{
+		scan->xs_want_itup = true;
+		scan->indexInfo = BuildIndexInfo(indexRelation);
+	}
+
 	return scan;
 }
 
@@ -358,6 +384,10 @@ index_endscan(IndexScanDesc scan)
 	if (scan->xs_temp_snap)
 		UnregisterSnapshot(scan->xs_snapshot);
 
+	/* Free cached IndexInfo, if any */
+	if (scan->indexInfo)
+		pfree(scan->indexInfo);
+
 	/* Release the scan data structure itself */
 	IndexScanEnd(scan);
 }
@@ -535,7 +565,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
 	/*
 	 * The AM's amgettuple proc finds the next index entry matching the scan
 	 * keys, and puts the TID into scan->xs_ctup.t_self.  It should also set
-	 * scan->xs_recheck and possibly scan->xs_itup/scan->xs_hitup, though we
+	 * scan->xs_tuple_recheck and possibly scan->xs_itup/scan->xs_hitup, though we
 	 * pay no attention to those fields here.
 	 */
 	found = scan->indexRelation->rd_amroutine->amgettuple(scan, direction);
@@ -574,7 +604,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
  * dropped in a future index_getnext_tid, index_fetch_heap or index_endscan
  * call).
  *
- * Note: caller must check scan->xs_recheck, and perform rechecking of the
+ * Note: caller must check scan->xs_tuple_recheck, and perform rechecking of the
  * scan keys if required.  We do not do that here because we don't have
  * enough information to do it efficiently in the general case.
  * ----------------
@@ -585,6 +615,7 @@ index_fetch_heap(IndexScanDesc scan)
 	ItemPointer tid = &scan->xs_ctup.t_self;
 	bool		all_dead = false;
 	bool		got_heap_tuple;
+	bool		tuple_recheck;
 
 	/* We can skip the buffer-switching logic if we're in mid-HOT chain. */
 	if (!scan->xs_continue_hot)
@@ -603,6 +634,8 @@ index_fetch_heap(IndexScanDesc scan)
 			heap_page_prune_opt(scan->heapRelation, scan->xs_cbuf);
 	}
 
+	tuple_recheck = false;
+
 	/* Obtain share-lock on the buffer so we can examine visibility */
 	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_SHARE);
 	got_heap_tuple = heap_hot_search_buffer(tid, scan->heapRelation,
@@ -610,32 +643,60 @@ index_fetch_heap(IndexScanDesc scan)
 											scan->xs_snapshot,
 											&scan->xs_ctup,
 											&all_dead,
-											!scan->xs_continue_hot);
-	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+											!scan->xs_continue_hot,
+											&tuple_recheck);
 
 	if (got_heap_tuple)
 	{
+		bool res = true;
+
+		/*
+		 * OK, we got a tuple which satisfies the snapshot, but if it's part
+		 * of a WARM chain we must do additional checks to ensure that we are
+		 * returning the correct tuple. Note that if the index AM does not
+		 * implement the amrecheck method, we skip these checks, since WARM
+		 * must have been disabled on such tables.
+		 */
+		if (tuple_recheck && scan->xs_itup &&
+			scan->indexRelation->rd_amroutine->amrecheck)
+		{
+			res = scan->indexRelation->rd_amroutine->amrecheck(
+						scan->indexRelation,
+						scan->indexInfo,
+						scan->xs_itup,
+						scan->heapRelation,
+						&scan->xs_ctup);
+		}
+
+		LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+
 		/*
 		 * Only in a non-MVCC snapshot can more than one member of the HOT
 		 * chain be visible.
 		 */
 		scan->xs_continue_hot = !IsMVCCSnapshot(scan->xs_snapshot);
 		pgstat_count_heap_fetch(scan->indexRelation);
-		return &scan->xs_ctup;
+
+		if (res)
+			return &scan->xs_ctup;
 	}
+	else
+	{
+		LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
 
-	/* We've reached the end of the HOT chain. */
-	scan->xs_continue_hot = false;
+		/* We've reached the end of the HOT chain. */
+		scan->xs_continue_hot = false;
 
-	/*
-	 * If we scanned a whole HOT chain and found only dead tuples, tell index
-	 * AM to kill its entry for that TID (this will take effect in the next
-	 * amgettuple call, in index_getnext_tid).  We do not do this when in
-	 * recovery because it may violate MVCC to do so.  See comments in
-	 * RelationGetIndexScan().
-	 */
-	if (!scan->xactStartedInRecovery)
-		scan->kill_prior_tuple = all_dead;
+		/*
+		 * If we scanned a whole HOT chain and found only dead tuples, tell index
+		 * AM to kill its entry for that TID (this will take effect in the next
+		 * amgettuple call, in index_getnext_tid).  We do not do this when in
+		 * recovery because it may violate MVCC to do so.  See comments in
+		 * RelationGetIndexScan().
+		 */
+		if (!scan->xactStartedInRecovery)
+			scan->kill_prior_tuple = all_dead;
+	}
 
 	return NULL;
 }
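
The recheck contract used in index_fetch_heap above can be reduced to: in a WARM chain, return the tuple only if the key re-derived from the visible heap row matches the index tuple we arrived through. A toy version with strings standing in for index keys and heap values (everything here is a simplified illustration, not the amrecheck signature):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* In a WARM chain the index entry that led us to the heap tuple may no
 * longer match it, because WARM updates change indexed columns without
 * adding entries to every index.  So the tuple is surfaced only after
 * comparing the stored index key against the value recomputed from the
 * heap row; plain HOT chains need no such recheck. */
static bool
recheck_and_fetch(const char *index_key, const char *heap_value,
                  bool chain_is_warm)
{
    if (!chain_is_warm)
        return true;                  /* plain HOT chain: no recheck needed */
    return strcmp(index_key, heap_value) == 0;
}
```

This is also what prevents the "Duplicate Scan" problem: of the two index entries that can reach a WARM chain, only the one whose key matches the visible tuple returns it.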
diff --git b/src/backend/access/nbtree/nbtinsert.c a/src/backend/access/nbtree/nbtinsert.c
index 6dca810..463d4bf 100644
--- b/src/backend/access/nbtree/nbtinsert.c
+++ a/src/backend/access/nbtree/nbtinsert.c
@@ -20,6 +20,7 @@
 #include "access/nbtxlog.h"
 #include "access/transam.h"
 #include "access/xloginsert.h"
+#include "catalog/index.h"
 #include "miscadmin.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
@@ -250,6 +251,10 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 	BTPageOpaque opaque;
 	Buffer		nbuf = InvalidBuffer;
 	bool		found = false;
+	Buffer		buffer;
+	HeapTupleData	heapTuple;
+	bool		recheck = false;
+	IndexInfo	*indexInfo = BuildIndexInfo(rel);
 
 	/* Assume unique until we find a duplicate */
 	*is_unique = true;
@@ -309,6 +314,8 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				curitup = (IndexTuple) PageGetItem(page, curitemid);
 				htid = curitup->t_tid;
 
+				recheck = false;
+
 				/*
 				 * If we are doing a recheck, we expect to find the tuple we
 				 * are rechecking.  It's not a duplicate, but we have to keep
@@ -326,112 +333,153 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				 * have just a single index entry for the entire chain.
 				 */
 				else if (heap_hot_search(&htid, heapRel, &SnapshotDirty,
-										 &all_dead))
+							&all_dead, &recheck, &buffer,
+							&heapTuple))
 				{
 					TransactionId xwait;
+					bool result = true;
 
 					/*
-					 * It is a duplicate. If we are only doing a partial
-					 * check, then don't bother checking if the tuple is being
-					 * updated in another transaction. Just return the fact
-					 * that it is a potential conflict and leave the full
-					 * check till later.
+					 * If the tuple was WARM-updated, we may see our own
+					 * tuple again. Since WARM updates don't create new index
+					 * entries, our own tuple is only reachable via the old
+					 * index pointer.
 					 */
-					if (checkUnique == UNIQUE_CHECK_PARTIAL)
+					if (checkUnique == UNIQUE_CHECK_EXISTING &&
+							ItemPointerCompare(&htid, &itup->t_tid) == 0)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						*is_unique = false;
-						return InvalidTransactionId;
+						found = true;
+						result = false;
+						if (recheck)
+							UnlockReleaseBuffer(buffer);
 					}
-
-					/*
-					 * If this tuple is being updated by other transaction
-					 * then we have to wait for its commit/abort.
-					 */
-					xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
-						SnapshotDirty.xmin : SnapshotDirty.xmax;
-
-					if (TransactionIdIsValid(xwait))
+					else if (recheck)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						/* Tell _bt_doinsert to wait... */
-						*speculativeToken = SnapshotDirty.speculativeToken;
-						return xwait;
+						result = btrecheck(rel, indexInfo, curitup, heapRel, &heapTuple);
+						UnlockReleaseBuffer(buffer);
 					}
 
-					/*
-					 * Otherwise we have a definite conflict.  But before
-					 * complaining, look to see if the tuple we want to insert
-					 * is itself now committed dead --- if so, don't complain.
-					 * This is a waste of time in normal scenarios but we must
-					 * do it to support CREATE INDEX CONCURRENTLY.
-					 *
-					 * We must follow HOT-chains here because during
-					 * concurrent index build, we insert the root TID though
-					 * the actual tuple may be somewhere in the HOT-chain.
-					 * While following the chain we might not stop at the
-					 * exact tuple which triggered the insert, but that's OK
-					 * because if we find a live tuple anywhere in this chain,
-					 * we have a unique key conflict.  The other live tuple is
-					 * not part of this chain because it had a different index
-					 * entry.
-					 */
-					htid = itup->t_tid;
-					if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL))
-					{
-						/* Normal case --- it's still live */
-					}
-					else
+					if (result)
 					{
 						/*
-						 * It's been deleted, so no error, and no need to
-						 * continue searching
+						 * It is a duplicate. If we are only doing a partial
+						 * check, then don't bother checking if the tuple is being
+						 * updated in another transaction. Just return the fact
+						 * that it is a potential conflict and leave the full
+						 * check till later.
 						 */
-						break;
-					}
+						if (checkUnique == UNIQUE_CHECK_PARTIAL)
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							*is_unique = false;
+							return InvalidTransactionId;
+						}
 
-					/*
-					 * Check for a conflict-in as we would if we were going to
-					 * write to this page.  We aren't actually going to write,
-					 * but we want a chance to report SSI conflicts that would
-					 * otherwise be masked by this unique constraint
-					 * violation.
-					 */
-					CheckForSerializableConflictIn(rel, NULL, buf);
+						/*
+						 * If this tuple is being updated by other transaction
+						 * then we have to wait for its commit/abort.
+						 */
+						xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
+							SnapshotDirty.xmin : SnapshotDirty.xmax;
+
+						if (TransactionIdIsValid(xwait))
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							/* Tell _bt_doinsert to wait... */
+							*speculativeToken = SnapshotDirty.speculativeToken;
+							return xwait;
+						}
 
-					/*
-					 * This is a definite conflict.  Break the tuple down into
-					 * datums and report the error.  But first, make sure we
-					 * release the buffer locks we're holding ---
-					 * BuildIndexValueDescription could make catalog accesses,
-					 * which in the worst case might touch this same index and
-					 * cause deadlocks.
-					 */
-					if (nbuf != InvalidBuffer)
-						_bt_relbuf(rel, nbuf);
-					_bt_relbuf(rel, buf);
+						/*
+						 * Otherwise we have a definite conflict.  But before
+						 * complaining, look to see if the tuple we want to insert
+						 * is itself now committed dead --- if so, don't complain.
+						 * This is a waste of time in normal scenarios but we must
+						 * do it to support CREATE INDEX CONCURRENTLY.
+						 *
+						 * We must follow HOT-chains here because during
+						 * concurrent index build, we insert the root TID though
+						 * the actual tuple may be somewhere in the HOT-chain.
+						 * While following the chain we might not stop at the
+						 * exact tuple which triggered the insert, but that's OK
+						 * because if we find a live tuple anywhere in this chain,
+						 * we have a unique key conflict.  The other live tuple is
+						 * not part of this chain because it had a different index
+						 * entry.
+						 */
+						recheck = false;
+						ItemPointerCopy(&itup->t_tid, &htid);
+						if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL,
+									&recheck, &buffer, &heapTuple))
+						{
+							bool result = true;
+							if (recheck)
+							{
+								/*
+								 * Recheck if the tuple actually satisfies the
+								 * a stale index pointer and must not treat
+								 * this tuple as live.
+								 * this tuple.
+								 */
+								result = btrecheck(rel, indexInfo, itup, heapRel, &heapTuple);
+								UnlockReleaseBuffer(buffer);
+							}
+							if (!result)
+								break;
+							/* Normal case --- it's still live */
+						}
+						else
+						{
+							/*
+							 * It's been deleted, so no error, and no need to
+							 * continue searching.
+							 */
+							break;
+						}
 
-					{
-						Datum		values[INDEX_MAX_KEYS];
-						bool		isnull[INDEX_MAX_KEYS];
-						char	   *key_desc;
-
-						index_deform_tuple(itup, RelationGetDescr(rel),
-										   values, isnull);
-
-						key_desc = BuildIndexValueDescription(rel, values,
-															  isnull);
-
-						ereport(ERROR,
-								(errcode(ERRCODE_UNIQUE_VIOLATION),
-								 errmsg("duplicate key value violates unique constraint \"%s\"",
-										RelationGetRelationName(rel)),
-							   key_desc ? errdetail("Key %s already exists.",
-													key_desc) : 0,
-								 errtableconstraint(heapRel,
-											 RelationGetRelationName(rel))));
+						/*
+						 * Check for a conflict-in as we would if we were going to
+						 * write to this page.  We aren't actually going to write,
+						 * but we want a chance to report SSI conflicts that would
+						 * otherwise be masked by this unique constraint
+						 * violation.
+						 */
+						CheckForSerializableConflictIn(rel, NULL, buf);
+
+						/*
+						 * This is a definite conflict.  Break the tuple down into
+						 * datums and report the error.  But first, make sure we
+						 * release the buffer locks we're holding ---
+						 * BuildIndexValueDescription could make catalog accesses,
+						 * which in the worst case might touch this same index and
+						 * cause deadlocks.
+						 */
+						if (nbuf != InvalidBuffer)
+							_bt_relbuf(rel, nbuf);
+						_bt_relbuf(rel, buf);
+
+						{
+							Datum		values[INDEX_MAX_KEYS];
+							bool		isnull[INDEX_MAX_KEYS];
+							char	   *key_desc;
+
+							index_deform_tuple(itup, RelationGetDescr(rel),
+									values, isnull);
+
+							key_desc = BuildIndexValueDescription(rel, values,
+									isnull);
+
+							ereport(ERROR,
+									(errcode(ERRCODE_UNIQUE_VIOLATION),
+									 errmsg("duplicate key value violates unique constraint \"%s\"",
+										 RelationGetRelationName(rel)),
+									 key_desc ? errdetail("Key %s already exists.",
+										 key_desc) : 0,
+									 errtableconstraint(heapRel,
+										 RelationGetRelationName(rel))));
+						}
 					}
 				}
 				else if (all_dead)
diff --git b/src/backend/access/nbtree/nbtpage.c a/src/backend/access/nbtree/nbtpage.c
index f815fd4..061c8d4 100644
--- b/src/backend/access/nbtree/nbtpage.c
+++ a/src/backend/access/nbtree/nbtpage.c
@@ -766,29 +766,20 @@ _bt_page_recyclable(Page page)
 }
 
 /*
- * Delete item(s) from a btree page during VACUUM.
+ * Delete item(s) and clear WARM item(s) on a btree page during VACUUM.
  *
  * This must only be used for deleting leaf items.  Deleting an item on a
  * non-leaf page has to be done as part of an atomic action that includes
- * deleting the page it points to.
+ * deleting the page it points to. We don't ever clear pointers on a non-leaf
+ * page.
  *
  * This routine assumes that the caller has pinned and locked the buffer.
  * Also, the given itemnos *must* appear in increasing order in the array.
- *
- * We record VACUUMs and b-tree deletes differently in WAL. InHotStandby
- * we need to be able to pin all of the blocks in the btree in physical
- * order when replaying the effects of a VACUUM, just as we do for the
- * original VACUUM itself. lastBlockVacuumed allows us to tell whether an
- * intermediate range of blocks has had no changes at all by VACUUM,
- * and so must be scanned anyway during replay. We always write a WAL record
- * for the last block in the index, whether or not it contained any items
- * to be removed. This allows us to scan right up to end of index to
- * ensure correct locking.
  */
 void
-_bt_delitems_vacuum(Relation rel, Buffer buf,
-					OffsetNumber *itemnos, int nitems,
-					BlockNumber lastBlockVacuumed)
+_bt_handleitems_vacuum(Relation rel, Buffer buf,
+					OffsetNumber *delitemnos, int ndelitems,
+					OffsetNumber *clearitemnos, int nclearitems)
 {
 	Page		page = BufferGetPage(buf);
 	BTPageOpaque opaque;
@@ -796,9 +787,20 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 	/* No ereport(ERROR) until changes are logged */
 	START_CRIT_SECTION();
 
+	/*
+	 * Clear the WARM pointers.
+	 *
+	 * We must do this before dealing with the dead items because
+	 * PageIndexMultiDelete may move items around to compactify the array and
+	 * hence offnums recorded earlier won't make any sense after
+	 * PageIndexMultiDelete is called.
+	 */
+	if (nclearitems > 0)
+		_bt_clear_items(page, clearitemnos, nclearitems);
+
 	/* Fix the page */
-	if (nitems > 0)
-		PageIndexMultiDelete(page, itemnos, nitems);
+	if (ndelitems > 0)
+		PageIndexMultiDelete(page, delitemnos, ndelitems);
 
 	/*
 	 * We can clear the vacuum cycle ID since this page has certainly been
@@ -824,7 +826,8 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 		XLogRecPtr	recptr;
 		xl_btree_vacuum xlrec_vacuum;
 
-		xlrec_vacuum.lastBlockVacuumed = lastBlockVacuumed;
+		xlrec_vacuum.ndelitems = ndelitems;
+		xlrec_vacuum.nclearitems = nclearitems;
 
 		XLogBeginInsert();
 		XLogRegisterBuffer(0, buf, REGBUF_STANDARD);
@@ -835,8 +838,11 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 		 * is.  When XLogInsert stores the whole buffer, the offsets array
 		 * need not be stored too.
 		 */
-		if (nitems > 0)
-			XLogRegisterBufData(0, (char *) itemnos, nitems * sizeof(OffsetNumber));
+		if (ndelitems > 0)
+			XLogRegisterBufData(0, (char *) delitemnos, ndelitems * sizeof(OffsetNumber));
+
+		if (nclearitems > 0)
+			XLogRegisterBufData(0, (char *) clearitemnos, nclearitems * sizeof(OffsetNumber));
 
 		recptr = XLogInsert(RM_BTREE_ID, XLOG_BTREE_VACUUM);
 
@@ -1882,3 +1888,13 @@ _bt_unlink_halfdead_page(Relation rel, Buffer leafbuf, bool *rightsib_empty)
 
 	return true;
 }
+
+/*
+ * Currently just a wrapper around PageIndexClearWarmTuples, but in theory each
+ * index may have its own way to handle WARM tuples.
+ */
+void
+_bt_clear_items(Page page, OffsetNumber *clearitemnos, uint16 nclearitems)
+{
+	PageIndexClearWarmTuples(page, clearitemnos, nclearitems);
+}
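To illustrate the ordering constraint documented in `_bt_handleitems_vacuum` (clear WARM flags before deleting dead items), here is a minimal toy model. The `ToyItem` type and `toy_*` helpers are illustrative stand-ins, not the actual page machinery: clearing mutates items in place so recorded offsets stay valid, while the delete step compacts the array and invalidates them.

```c
#include <assert.h>

/* Toy model of a btree leaf page: an array of items, each with a WARM
 * flag.  All names here are illustrative, not the real data structures. */
typedef struct { int key; int warm; } ToyItem;

/* Analogue of _bt_clear_items: reset the WARM flag at the given offsets,
 * in place, so previously recorded offsets remain valid. */
static void toy_clear_items(ToyItem *items, const int *offs, int noffs)
{
    for (int i = 0; i < noffs; i++)
        items[offs[i]].warm = 0;
}

/* Analogue of PageIndexMultiDelete: remove items and compact the array.
 * After this call, offsets recorded before it no longer line up, which
 * is exactly why the clearing step must run first. */
static int toy_multi_delete(ToyItem *items, int n, const int *offs, int noffs)
{
    int k = 0;
    for (int i = 0; i < n; i++)
    {
        int del = 0;
        for (int j = 0; j < noffs; j++)
            if (offs[j] == i)
                del = 1;
        if (!del)
            items[k++] = items[i];
    }
    return k;                   /* new number of items */
}
```

Running the clear step first leaves the compaction free to move items without corrupting the WARM bookkeeping; doing it in the opposite order would clear the flag on whatever item happened to shift into the recorded offset.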
diff --git b/src/backend/access/nbtree/nbtree.c a/src/backend/access/nbtree/nbtree.c
index 775f2ff..6d558af 100644
--- b/src/backend/access/nbtree/nbtree.c
+++ a/src/backend/access/nbtree/nbtree.c
@@ -146,6 +146,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->ambuild = btbuild;
 	amroutine->ambuildempty = btbuildempty;
 	amroutine->aminsert = btinsert;
+	amroutine->amwarminsert = btwarminsert;
 	amroutine->ambulkdelete = btbulkdelete;
 	amroutine->amvacuumcleanup = btvacuumcleanup;
 	amroutine->amcanreturn = btcanreturn;
@@ -163,6 +164,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = btestimateparallelscan;
 	amroutine->aminitparallelscan = btinitparallelscan;
 	amroutine->amparallelrescan = btparallelrescan;
+	amroutine->amrecheck = btrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -315,11 +317,12 @@ btbuildempty(Relation index)
  *		Descend the tree recursively, find the appropriate location for our
  *		new tuple, and put it there.
  */
-bool
-btinsert(Relation rel, Datum *values, bool *isnull,
+static bool
+btinsert_internal(Relation rel, Datum *values, bool *isnull,
 		 ItemPointer ht_ctid, Relation heapRel,
 		 IndexUniqueCheck checkUnique,
-		 IndexInfo *indexInfo)
+		 IndexInfo *indexInfo,
+		 bool warm_update)
 {
 	bool		result;
 	IndexTuple	itup;
@@ -328,6 +331,11 @@ btinsert(Relation rel, Datum *values, bool *isnull,
 	itup = index_form_tuple(RelationGetDescr(rel), values, isnull);
 	itup->t_tid = *ht_ctid;
 
+	if (warm_update)
+		ItemPointerSetFlags(&itup->t_tid, BTREE_INDEX_WARM_POINTER);
+	else
+		ItemPointerClearFlags(&itup->t_tid);
+
 	result = _bt_doinsert(rel, itup, checkUnique, heapRel);
 
 	pfree(itup);
@@ -335,6 +343,26 @@ btinsert(Relation rel, Datum *values, bool *isnull,
 	return result;
 }
 
+bool
+btinsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 IndexInfo *indexInfo)
+{
+	return btinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, false);
+}
+
+bool
+btwarminsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 IndexInfo *indexInfo)
+{
+	return btinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, true);
+}
+
 /*
  *	btgettuple() -- Get the next tuple in the scan.
  */
@@ -1103,7 +1131,7 @@ btvacuumscan(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 								 RBM_NORMAL, info->strategy);
 		LockBufferForCleanup(buf);
 		_bt_checkpage(rel, buf);
-		_bt_delitems_vacuum(rel, buf, NULL, 0, vstate.lastBlockVacuumed);
+		_bt_handleitems_vacuum(rel, buf, NULL, 0, NULL, 0);
 		_bt_relbuf(rel, buf);
 	}
 
@@ -1201,6 +1229,8 @@ restart:
 	{
 		OffsetNumber deletable[MaxOffsetNumber];
 		int			ndeletable;
+		OffsetNumber clearwarm[MaxOffsetNumber];
+		int			nclearwarm;
 		OffsetNumber offnum,
 					minoff,
 					maxoff;
@@ -1239,7 +1269,7 @@ restart:
 		 * Scan over all items to see which ones need deleted according to the
 		 * callback function.
 		 */
-		ndeletable = 0;
+		ndeletable = nclearwarm = 0;
 		minoff = P_FIRSTDATAKEY(opaque);
 		maxoff = PageGetMaxOffsetNumber(page);
 		if (callback)
@@ -1250,6 +1280,9 @@ restart:
 			{
 				IndexTuple	itup;
 				ItemPointer htup;
+				int			flags;
+				bool		is_warm = false;
+				IndexBulkDeleteCallbackResult	result;
 
 				itup = (IndexTuple) PageGetItem(page,
 												PageGetItemId(page, offnum));
@@ -1276,16 +1309,36 @@ restart:
 				 * applies to *any* type of index that marks index tuples as
 				 * killed.
 				 */
-				if (callback(htup, callback_state))
+				flags = ItemPointerGetFlags(&itup->t_tid);
+				is_warm = ((flags & BTREE_INDEX_WARM_POINTER) != 0);
+
+				if (is_warm)
+					stats->num_warm_pointers++;
+				else
+					stats->num_clear_pointers++;
+
+				result = callback(htup, is_warm, callback_state);
+				if (result == IBDCR_DELETE)
+				{
+					if (is_warm)
+						stats->warm_pointers_removed++;
+					else
+						stats->clear_pointers_removed++;
 					deletable[ndeletable++] = offnum;
+				}
+				else if (result == IBDCR_CLEAR_WARM)
+				{
+					clearwarm[nclearwarm++] = offnum;
+				}
 			}
 		}
 
 		/*
-		 * Apply any needed deletes.  We issue just one _bt_delitems_vacuum()
-		 * call per page, so as to minimize WAL traffic.
+		 * Apply any needed deletes and clearing.  We issue just one
+		 * _bt_handleitems_vacuum() call per page, so as to minimize WAL
+		 * traffic.
 		 */
-		if (ndeletable > 0)
+		if (ndeletable > 0 || nclearwarm > 0)
 		{
 			/*
 			 * Notice that the issued XLOG_BTREE_VACUUM WAL record includes
@@ -1301,8 +1354,8 @@ restart:
 			 * doesn't seem worth the amount of bookkeeping it'd take to avoid
 			 * that.
 			 */
-			_bt_delitems_vacuum(rel, buf, deletable, ndeletable,
-								vstate->lastBlockVacuumed);
+			_bt_handleitems_vacuum(rel, buf, deletable, ndeletable,
+								clearwarm, nclearwarm);
 
 			/*
 			 * Remember highest leaf page number we've issued a
@@ -1312,6 +1365,7 @@ restart:
 				vstate->lastBlockVacuumed = blkno;
 
 			stats->tuples_removed += ndeletable;
+			stats->pointers_cleared += nclearwarm;
 			/* must recompute maxoff */
 			maxoff = PageGetMaxOffsetNumber(page);
 		}
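The btvacuumpage changes above turn the old boolean callback into a three-way decision: keep the index pointer, delete it, or merely clear its WARM flag. The following sketch mirrors that partitioning loop; the enum values come from the patch, while the helper itself is illustrative.

```c
#include <assert.h>

/* Three-way vacuum decision, as introduced by the patch. */
typedef enum { IBDCR_KEEP, IBDCR_DELETE, IBDCR_CLEAR_WARM } ToyCallbackResult;

/* Partition offsets into a deletable list and a clear-warm list in one
 * pass over the page, as btvacuumpage does. */
static void toy_partition(const ToyCallbackResult *results, int n,
                          int *deletable, int *ndeletable,
                          int *clearwarm, int *nclearwarm)
{
    *ndeletable = *nclearwarm = 0;
    for (int off = 0; off < n; off++)
    {
        if (results[off] == IBDCR_DELETE)
            deletable[(*ndeletable)++] = off;
        else if (results[off] == IBDCR_CLEAR_WARM)
            clearwarm[(*nclearwarm)++] = off;
        /* IBDCR_KEEP: leave the pointer alone */
    }
}
```

Both lists are then handed to a single `_bt_handleitems_vacuum` call, so the page still gets exactly one WAL record per vacuum pass.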
diff --git b/src/backend/access/nbtree/nbtutils.c a/src/backend/access/nbtree/nbtutils.c
index 5b259a3..2765809 100644
--- b/src/backend/access/nbtree/nbtutils.c
+++ a/src/backend/access/nbtree/nbtutils.c
@@ -20,11 +20,14 @@
 #include "access/nbtree.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "access/tuptoaster.h"
+#include "catalog/index.h"
 #include "miscadmin.h"
 #include "utils/array.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 
 typedef struct BTSortArrayContext
@@ -2069,3 +2072,93 @@ btproperty(Oid index_oid, int attno,
 			return false;		/* punt to generic code */
 	}
 }
+
+/*
+ * Check if the index tuple's key matches the one computed from the given heap
+ * tuple's attributes.
+ */
+bool
+btrecheck(Relation indexRel, IndexInfo *indexInfo, IndexTuple indexTuple1,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	bool		isavail[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+	IndexTuple	indexTuple2;
+
+	/*
+	 * Get the index values, except for expression attributes. Since WARM is
+	 * not used when a column used by expressions in an index is modified, we
+	 * can safely assume that those index attributes are never changed by a
+	 * WARM update.
+	 *
+	 * We cannot use FormIndexDatum here because that requires access to
+	 * executor state and we don't have that here.
+	 */
+	FormIndexPlainDatum(indexInfo, heapRel, heapTuple, values, isnull, isavail);
+
+	/*
+	 * Form an index tuple using the heap values first. We can then fetch
+	 * each index attribute from both the current index tuple and the one
+	 * formed from the heap values, and do a binary comparison using
+	 * datumIsEqual().
+	 *
+	 * This takes care of doing the right comparison for compressed index
+	 * attributes (we just compare the compressed versions in both tuples) and
+	 * also ensure that we correctly detoast heap values, if need be.
+	 */
+	indexTuple2 = index_form_tuple(RelationGetDescr(indexRel), values, isnull);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue1;
+		bool	indxisnull1;
+		Datum	indxvalue2;
+		bool	indxisnull2;
+
+		/* No need to compare if the attribute value is not available */
+		if (!isavail[i - 1])
+			continue;
+
+		indxvalue1 = index_getattr(indexTuple1, i, indexRel->rd_att,
+								   &indxisnull1);
+		indxvalue2 = index_getattr(indexTuple2, i, indexRel->rd_att,
+								   &indxisnull2);
+
+		/*
+		 * If both are NULL, then they are equal
+		 */
+		if (indxisnull1 && indxisnull2)
+			continue;
+
+		/*
+		 * If just one is NULL, then they are not equal
+		 */
+		if (indxisnull1 || indxisnull2)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now just do a raw memory comparison. If the index tuple was formed
+		 * using this heap tuple, the computed index values must match.
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(indxvalue1, indxvalue2, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	pfree(indexTuple2);
+
+	return equal;
+}
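The per-attribute loop in btrecheck reduces to a NULL-aware binary equality test: two key values match iff both are NULL, or neither is NULL and their raw bytes agree (as datumIsEqual does for the by-reference case). A hedged sketch of just that comparison rule, with illustrative names:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* NULL handling mirroring the recheck loop: both NULL => equal,
 * exactly one NULL => not equal, else compare the raw bytes. */
static int keys_binary_equal(const void *a, int a_isnull,
                             const void *b, int b_isnull, size_t len)
{
    if (a_isnull && b_isnull)
        return 1;               /* both NULL: equal */
    if (a_isnull || b_isnull)
        return 0;               /* only one NULL: not equal */
    return memcmp(a, b, len) == 0;
}
```

Note that a purely binary comparison is safe here only because the recomputed tuple goes through the same index_form_tuple path (and hence the same compression/detoasting) as the stored one, per the comment in btrecheck.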
diff --git b/src/backend/access/nbtree/nbtxlog.c a/src/backend/access/nbtree/nbtxlog.c
index ac60db0..ef24738 100644
--- b/src/backend/access/nbtree/nbtxlog.c
+++ a/src/backend/access/nbtree/nbtxlog.c
@@ -390,8 +390,8 @@ btree_xlog_vacuum(XLogReaderState *record)
 	Buffer		buffer;
 	Page		page;
 	BTPageOpaque opaque;
-#ifdef UNUSED
 	xl_btree_vacuum *xlrec = (xl_btree_vacuum *) XLogRecGetData(record);
+#ifdef UNUSED
 
 	/*
 	 * This section of code is thought to be no longer needed, after analysis
@@ -482,19 +482,30 @@ btree_xlog_vacuum(XLogReaderState *record)
 
 		if (len > 0)
 		{
-			OffsetNumber *unused;
-			OffsetNumber *unend;
+			OffsetNumber *offnums = (OffsetNumber *) ptr;
 
-			unused = (OffsetNumber *) ptr;
-			unend = (OffsetNumber *) ((char *) ptr + len);
+			/*
+			 * Clear the WARM pointers.
+			 *
+			 * We must do this before dealing with the dead items because
+			 * PageIndexMultiDelete may move items around to compactify the
+			 * array and hence offnums recorded earlier won't make any sense
+			 * after PageIndexMultiDelete is called.
+			 */
+			if (xlrec->nclearitems > 0)
+				_bt_clear_items(page, offnums + xlrec->ndelitems,
+						xlrec->nclearitems);
 
-			if ((unend - unused) > 0)
-				PageIndexMultiDelete(page, unused, unend - unused);
+			/*
+			 * And handle the deleted items too
+			 */
+			if (xlrec->ndelitems > 0)
+				PageIndexMultiDelete(page, offnums, xlrec->ndelitems);
 		}
 
 		/*
 		 * Mark the page as not containing any LP_DEAD items --- see comments
-		 * in _bt_delitems_vacuum().
+		 * in _bt_handleitems_vacuum().
 		 */
 		opaque = (BTPageOpaque) PageGetSpecialPointer(page);
 		opaque->btpo_flags &= ~BTP_HAS_GARBAGE;
diff --git b/src/backend/access/rmgrdesc/heapdesc.c a/src/backend/access/rmgrdesc/heapdesc.c
index 44d2d63..d373e61 100644
--- b/src/backend/access/rmgrdesc/heapdesc.c
+++ a/src/backend/access/rmgrdesc/heapdesc.c
@@ -44,6 +44,12 @@ heap_desc(StringInfo buf, XLogReaderState *record)
 
 		appendStringInfo(buf, "off %u", xlrec->offnum);
 	}
+	else if (info == XLOG_HEAP_MULTI_INSERT)
+	{
+		xl_heap_multi_insert *xlrec = (xl_heap_multi_insert *) rec;
+
+		appendStringInfo(buf, "%d tuples", xlrec->ntuples);
+	}
 	else if (info == XLOG_HEAP_DELETE)
 	{
 		xl_heap_delete *xlrec = (xl_heap_delete *) rec;
@@ -102,7 +108,7 @@ heap2_desc(StringInfo buf, XLogReaderState *record)
 	char	   *rec = XLogRecGetData(record);
 	uint8		info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
 
-	info &= XLOG_HEAP_OPMASK;
+	info &= XLOG_HEAP2_OPMASK;
 	if (info == XLOG_HEAP2_CLEAN)
 	{
 		xl_heap_clean *xlrec = (xl_heap_clean *) rec;
@@ -129,12 +135,6 @@ heap2_desc(StringInfo buf, XLogReaderState *record)
 		appendStringInfo(buf, "cutoff xid %u flags %d",
 						 xlrec->cutoff_xid, xlrec->flags);
 	}
-	else if (info == XLOG_HEAP2_MULTI_INSERT)
-	{
-		xl_heap_multi_insert *xlrec = (xl_heap_multi_insert *) rec;
-
-		appendStringInfo(buf, "%d tuples", xlrec->ntuples);
-	}
 	else if (info == XLOG_HEAP2_LOCK_UPDATED)
 	{
 		xl_heap_lock_updated *xlrec = (xl_heap_lock_updated *) rec;
@@ -171,6 +171,12 @@ heap_identify(uint8 info)
 		case XLOG_HEAP_INSERT | XLOG_HEAP_INIT_PAGE:
 			id = "INSERT+INIT";
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			id = "MULTI_INSERT";
+			break;
+		case XLOG_HEAP_MULTI_INSERT | XLOG_HEAP_INIT_PAGE:
+			id = "MULTI_INSERT+INIT";
+			break;
 		case XLOG_HEAP_DELETE:
 			id = "DELETE";
 			break;
@@ -219,12 +225,6 @@ heap2_identify(uint8 info)
 		case XLOG_HEAP2_VISIBLE:
 			id = "VISIBLE";
 			break;
-		case XLOG_HEAP2_MULTI_INSERT:
-			id = "MULTI_INSERT";
-			break;
-		case XLOG_HEAP2_MULTI_INSERT | XLOG_HEAP_INIT_PAGE:
-			id = "MULTI_INSERT+INIT";
-			break;
 		case XLOG_HEAP2_LOCK_UPDATED:
 			id = "LOCK_UPDATED";
 			break;
diff --git b/src/backend/access/rmgrdesc/nbtdesc.c a/src/backend/access/rmgrdesc/nbtdesc.c
index fbde9d6..6b2c5d6 100644
--- b/src/backend/access/rmgrdesc/nbtdesc.c
+++ a/src/backend/access/rmgrdesc/nbtdesc.c
@@ -48,8 +48,8 @@ btree_desc(StringInfo buf, XLogReaderState *record)
 			{
 				xl_btree_vacuum *xlrec = (xl_btree_vacuum *) rec;
 
-				appendStringInfo(buf, "lastBlockVacuumed %u",
-								 xlrec->lastBlockVacuumed);
+				appendStringInfo(buf, "ndelitems %u, nclearitems %u",
+								 xlrec->ndelitems, xlrec->nclearitems);
 				break;
 			}
 		case XLOG_BTREE_DELETE:
diff --git b/src/backend/access/spgist/spgutils.c a/src/backend/access/spgist/spgutils.c
index e57ac49..59ef7f3 100644
--- b/src/backend/access/spgist/spgutils.c
+++ a/src/backend/access/spgist/spgutils.c
@@ -72,6 +72,7 @@ spghandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git b/src/backend/access/spgist/spgvacuum.c a/src/backend/access/spgist/spgvacuum.c
index cce9b3f..711d351 100644
--- b/src/backend/access/spgist/spgvacuum.c
+++ a/src/backend/access/spgist/spgvacuum.c
@@ -155,7 +155,8 @@ vacuumLeafPage(spgBulkDeleteState *bds, Relation index, Buffer buffer,
 		{
 			Assert(ItemPointerIsValid(&lt->heapPtr));
 
-			if (bds->callback(&lt->heapPtr, bds->callback_state))
+			if (bds->callback(&lt->heapPtr, false, bds->callback_state) ==
+					IBDCR_DELETE)
 			{
 				bds->stats->tuples_removed += 1;
 				deletable[i] = true;
@@ -425,7 +426,8 @@ vacuumLeafRoot(spgBulkDeleteState *bds, Relation index, Buffer buffer)
 		{
 			Assert(ItemPointerIsValid(&lt->heapPtr));
 
-			if (bds->callback(&lt->heapPtr, bds->callback_state))
+			if (bds->callback(&lt->heapPtr, false, bds->callback_state) ==
+					IBDCR_DELETE)
 			{
 				bds->stats->tuples_removed += 1;
 				toDelete[xlrec.nDelete] = i;
@@ -902,10 +904,10 @@ spgbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 }
 
 /* Dummy callback to delete no tuples during spgvacuumcleanup */
-static bool
-dummy_callback(ItemPointer itemptr, void *state)
+static IndexBulkDeleteCallbackResult
+dummy_callback(ItemPointer itemptr, bool is_warm, void *state)
 {
-	return false;
+	return IBDCR_KEEP;
 }
 
 /*
diff --git b/src/backend/catalog/index.c a/src/backend/catalog/index.c
index 1eb163f..2c27661 100644
--- b/src/backend/catalog/index.c
+++ a/src/backend/catalog/index.c
@@ -54,6 +54,7 @@
 #include "nodes/makefuncs.h"
 #include "nodes/nodeFuncs.h"
 #include "optimizer/clauses.h"
+#include "optimizer/var.h"
 #include "parser/parser.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
@@ -114,7 +115,7 @@ static void IndexCheckExclusion(Relation heapRelation,
 					IndexInfo *indexInfo);
 static inline int64 itemptr_encode(ItemPointer itemptr);
 static inline void itemptr_decode(ItemPointer itemptr, int64 encoded);
-static bool validate_index_callback(ItemPointer itemptr, void *opaque);
+static IndexBulkDeleteCallbackResult validate_index_callback(ItemPointer itemptr, bool is_warm, void *opaque);
 static void validate_index_heapscan(Relation heapRelation,
 						Relation indexRelation,
 						IndexInfo *indexInfo,
@@ -1691,6 +1692,20 @@ BuildIndexInfo(Relation index)
 	ii->ii_AmCache = NULL;
 	ii->ii_Context = CurrentMemoryContext;
 
+	/* build a bitmap of all table attributes referred by this index */
+	for (i = 0; i < ii->ii_NumIndexAttrs; i++)
+	{
+		AttrNumber attr = ii->ii_KeyAttrNumbers[i];
+		ii->ii_indxattrs = bms_add_member(ii->ii_indxattrs, attr -
+				FirstLowInvalidHeapAttributeNumber);
+	}
+
+	/* Collect all attributes used in expressions, too */
+	pull_varattnos((Node *) ii->ii_Expressions, 1, &ii->ii_indxattrs);
+
+	/* Collect all attributes in the index predicate, too */
+	pull_varattnos((Node *) ii->ii_Predicate, 1, &ii->ii_indxattrs);
+
 	return ii;
 }
 
@@ -1815,6 +1830,51 @@ FormIndexDatum(IndexInfo *indexInfo,
 		elog(ERROR, "wrong number of index expressions");
 }
 
+/*
+ * This is the same as FormIndexDatum, except that we don't compute any
+ * expression attributes, and hence it can be used when executor interfaces
+ * are not available. If the i'th attribute is available, isavail[i] is set
+ * to true; otherwise it is set to false. The caller must always check that
+ * an attribute value is available before trying to use it.
+ */
+void
+FormIndexPlainDatum(IndexInfo *indexInfo,
+			   Relation heapRel,
+			   HeapTuple heapTup,
+			   Datum *values,
+			   bool *isnull,
+			   bool *isavail)
+{
+	int			i;
+
+	for (i = 0; i < indexInfo->ii_NumIndexAttrs; i++)
+	{
+		int			keycol = indexInfo->ii_KeyAttrNumbers[i];
+		Datum		iDatum;
+		bool		isNull;
+
+		if (keycol != 0)
+		{
+			/*
+			 * Plain index column; get the value we need directly from the
+			 * heap tuple.
+			 */
+			iDatum = heap_getattr(heapTup, keycol, RelationGetDescr(heapRel), &isNull);
+			values[i] = iDatum;
+			isnull[i] = isNull;
+			isavail[i] = true;
+		}
+		else
+		{
+			/*
+			 * This is an expression attribute which we cannot compute here,
+			 * so just inform the caller about it.
+			 */
+			isavail[i] = false;
+			isnull[i] = true;
+		}
+	}
+}
 
 /*
  * index_update_stats --- update pg_class entry after CREATE INDEX or REINDEX
@@ -2929,15 +2989,15 @@ itemptr_decode(ItemPointer itemptr, int64 encoded)
 /*
  * validate_index_callback - bulkdelete callback to collect the index TIDs
  */
-static bool
-validate_index_callback(ItemPointer itemptr, void *opaque)
+static IndexBulkDeleteCallbackResult
+validate_index_callback(ItemPointer itemptr, bool is_warm, void *opaque)
 {
 	v_i_state  *state = (v_i_state *) opaque;
 	int64		encoded = itemptr_encode(itemptr);
 
 	tuplesort_putdatum(state->tuplesort, Int64GetDatum(encoded), false);
 	state->itups += 1;
-	return false;				/* never actually delete anything */
+	return IBDCR_KEEP;				/* never actually delete anything */
 }
 
 /*
@@ -3156,7 +3216,8 @@ validate_index_heapscan(Relation heapRelation,
 						 heapRelation,
 						 indexInfo->ii_Unique ?
 						 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-						 indexInfo);
+						 indexInfo,
+						 false);
 
 			state->tups_inserted += 1;
 		}
diff --git b/src/backend/catalog/indexing.c a/src/backend/catalog/indexing.c
index abc344a..6392f33 100644
--- b/src/backend/catalog/indexing.c
+++ a/src/backend/catalog/indexing.c
@@ -66,10 +66,15 @@ CatalogCloseIndexes(CatalogIndexState indstate)
  *
  * This should be called for each inserted or updated catalog tuple.
  *
+ * If the tuple was WARM updated, modified_attrs contains the set of columns
+ * changed by the update. We must not insert new index entries for
+ * indexes which do not refer to any of the modified columns.
+ *
  * This is effectively a cut-down version of ExecInsertIndexTuples.
  */
 static void
-CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
+CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple,
+		Bitmapset *modified_attrs, bool warm_update)
 {
 	int			i;
 	int			numIndexes;
@@ -79,12 +84,28 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 	IndexInfo **indexInfoArray;
 	Datum		values[INDEX_MAX_KEYS];
 	bool		isnull[INDEX_MAX_KEYS];
+	ItemPointerData root_tid;
 
-	/* HOT update does not require index inserts */
-	if (HeapTupleIsHeapOnly(heapTuple))
+	/*
+	 * A HOT update does not require any index inserts, but a WARM update may
+	 * require inserts into some of the indexes.
+	 */
+	if (HeapTupleIsHeapOnly(heapTuple) && !warm_update)
 		return;
 
 	/*
+	 * If we've done a WARM update, then we must index the TID of the root line
+	 * pointer and not the actual TID of the new tuple.
+	 */
+	if (warm_update)
+		ItemPointerSet(&root_tid,
+				ItemPointerGetBlockNumber(&(heapTuple->t_self)),
+				HeapTupleHeaderGetRootOffset(heapTuple->t_data));
+	else
+		ItemPointerCopy(&heapTuple->t_self, &root_tid);
+
+
+	/*
 	 * Get information from the state structure.  Fall out if nothing to do.
 	 */
 	numIndexes = indstate->ri_NumIndices;
@@ -112,6 +133,17 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 			continue;
 
 		/*
+		 * If we've done a WARM update, then we must not insert a new index tuple
+		 * if none of the index keys have changed. This is not just an
+		 * optimization, but a requirement for WARM to work correctly.
+		 */
+		if (warm_update)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
+		/*
 		 * Expressional and partial indexes on system catalogs are not
 		 * supported, nor exclusion constraints, nor deferred uniqueness
 		 */
@@ -136,11 +168,12 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 		index_insert(relationDescs[i],	/* index relation */
 					 values,	/* array of index Datums */
 					 isnull,	/* is-null flags */
-					 &(heapTuple->t_self),		/* tid of heap tuple */
+					 &root_tid,
 					 heapRelation,
 					 relationDescs[i]->rd_index->indisunique ?
 					 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-					 indexInfo);
+					 indexInfo,
+					 warm_update);
 	}
 
 	ExecDropSingleTupleTableSlot(slot);
@@ -168,7 +201,7 @@ CatalogTupleInsert(Relation heapRel, HeapTuple tup)
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 	CatalogCloseIndexes(indstate);
 
 	return oid;
@@ -190,7 +223,7 @@ CatalogTupleInsertWithInfo(Relation heapRel, HeapTuple tup,
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 
 	return oid;
 }
@@ -210,12 +243,14 @@ void
 CatalogTupleUpdate(Relation heapRel, ItemPointer otid, HeapTuple tup)
 {
 	CatalogIndexState indstate;
+	bool	warm_update;
+	Bitmapset	*modified_attrs;
 
 	indstate = CatalogOpenIndexes(heapRel);
 
-	simple_heap_update(heapRel, otid, tup);
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, modified_attrs, warm_update);
 	CatalogCloseIndexes(indstate);
 }
 
@@ -231,9 +266,12 @@ void
 CatalogTupleUpdateWithInfo(Relation heapRel, ItemPointer otid, HeapTuple tup,
 						   CatalogIndexState indstate)
 {
-	simple_heap_update(heapRel, otid, tup);
+	Bitmapset  *modified_attrs;
+	bool		warm_update;
+
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, modified_attrs, warm_update);
 }
 
 /*
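The CatalogIndexInsert change above hinges on one predicate: after a WARM update, an index receives a new entry only if its attribute set overlaps the set of modified columns. This sketch models that `bms_overlap(modified_attrs, ii_indxattrs)` test with a plain 64-bit mask standing in for a Bitmapset; the names and mask representation are illustrative only.

```c
#include <assert.h>
#include <stdint.h>

/* Bit i set => attribute i is in the set. */
typedef uint64_t ToyAttrSet;

/* Mirror of the WARM insert decision: non-WARM (cold) updates insert
 * into every index; WARM updates insert only where the index depends on
 * at least one modified column. */
static int toy_needs_index_insert(ToyAttrSet index_attrs,
                                  ToyAttrSet modified_attrs,
                                  int warm_update)
{
    if (!warm_update)
        return 1;           /* regular update: insert into every index */
    return (index_attrs & modified_attrs) != 0;
}
```

As the patch comment notes, skipping the non-overlapping indexes is not merely an optimization: inserting a duplicate pointer for an unchanged key would break WARM's correctness.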
diff --git b/src/backend/catalog/system_views.sql a/src/backend/catalog/system_views.sql
index d357c8b..66a39d0 100644
--- b/src/backend/catalog/system_views.sql
+++ a/src/backend/catalog/system_views.sql
@@ -530,6 +530,7 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_tuples_deleted(C.oid) AS n_tup_del,
             pg_stat_get_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_tuples_warm_updated(C.oid) AS n_tup_warm_upd,
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
@@ -560,7 +561,8 @@ CREATE VIEW pg_stat_xact_all_tables AS
             pg_stat_get_xact_tuples_inserted(C.oid) AS n_tup_ins,
             pg_stat_get_xact_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_xact_tuples_deleted(C.oid) AS n_tup_del,
-            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd
+            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_xact_tuples_warm_updated(C.oid) AS n_tup_warm_upd
     FROM pg_class C LEFT JOIN
          pg_index I ON C.oid = I.indrelid
          LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
diff --git b/src/backend/commands/constraint.c a/src/backend/commands/constraint.c
index e2544e5..330b661 100644
--- b/src/backend/commands/constraint.c
+++ a/src/backend/commands/constraint.c
@@ -40,6 +40,7 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	TriggerData *trigdata = castNode(TriggerData, fcinfo->context);
 	const char *funcname = "unique_key_recheck";
 	HeapTuple	new_row;
+	HeapTupleData heapTuple;
 	ItemPointerData tmptid;
 	Relation	indexRel;
 	IndexInfo  *indexInfo;
@@ -102,7 +103,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	 * removed.
 	 */
 	tmptid = new_row->t_self;
-	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL))
+	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL,
+				NULL, NULL, &heapTuple))
 	{
 		/*
 		 * All rows in the HOT chain are dead, so skip the check.
@@ -166,7 +168,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 		 */
 		index_insert(indexRel, values, isnull, &(new_row->t_self),
 					 trigdata->tg_relation, UNIQUE_CHECK_EXISTING,
-					 indexInfo);
+					 indexInfo,
+					 false);
 	}
 	else
 	{
diff --git b/src/backend/commands/copy.c a/src/backend/commands/copy.c
index 0158eda..d6ef4a8 100644
--- b/src/backend/commands/copy.c
+++ a/src/backend/commands/copy.c
@@ -2688,6 +2688,8 @@ CopyFrom(CopyState cstate)
 					if (resultRelInfo->ri_NumIndices > 0)
 						recheckIndexes = ExecInsertIndexTuples(slot,
 															&(tuple->t_self),
+															&(tuple->t_self),
+															NULL,
 															   estate,
 															   false,
 															   NULL,
@@ -2842,6 +2844,7 @@ CopyFromInsertBatch(CopyState cstate, EState *estate, CommandId mycid,
 			ExecStoreTuple(bufferedTuples[i], myslot, InvalidBuffer, false);
 			recheckIndexes =
 				ExecInsertIndexTuples(myslot, &(bufferedTuples[i]->t_self),
+									  &(bufferedTuples[i]->t_self), NULL,
 									  estate, false, NULL, NIL);
 			ExecARInsertTriggers(estate, resultRelInfo,
 								 bufferedTuples[i],
diff --git b/src/backend/commands/indexcmds.c a/src/backend/commands/indexcmds.c
index 4861799..b62b0e9 100644
--- b/src/backend/commands/indexcmds.c
+++ a/src/backend/commands/indexcmds.c
@@ -694,7 +694,14 @@ DefineIndex(Oid relationId,
 	 * visible to other transactions before we start to build the index. That
 	 * will prevent them from making incompatible HOT updates.  The new index
 	 * will be marked not indisready and not indisvalid, so that no one else
-	 * tries to either insert into it or use it for queries.
+	 * tries to either insert into it or use it for queries. In addition,
+	 * WARM updates will be disallowed if an update modifies one of the
+	 * columns used by this new index. This is necessary to ensure that we
+	 * don't create WARM tuples which do not have a corresponding entry in
+	 * this index. Note that during the second phase we will index only
+	 * those heap tuples whose root line pointer is not already in the index,
+	 * hence it's important that all tuples in a given chain have the same
+	 * value for every indexed column (including those of this new index).
 	 *
 	 * We must commit our current transaction so that the index becomes
 	 * visible; then start another.  Note that all the data structures we just
@@ -742,7 +749,10 @@ DefineIndex(Oid relationId,
 	 * marked as "not-ready-for-inserts".  The index is consulted while
 	 * deciding HOT-safety though.  This arrangement ensures that no new HOT
 	 * chains can be created where the new tuple and the old tuple in the
-	 * chain have different index keys.
+	 * chain have different index keys. Also, the new index is consulted
+	 * when deciding whether a WARM update is possible, and a WARM update is
+	 * not done if a column used by this index is being updated. This ensures
+	 * that we don't create WARM tuples which are not indexed by this index.
 	 *
 	 * We now take a new snapshot, and build the index using all tuples that
 	 * are visible in this snapshot.  We can be sure that any HOT updates to
@@ -777,7 +787,8 @@ DefineIndex(Oid relationId,
 	/*
 	 * Update the pg_index row to mark the index as ready for inserts. Once we
 	 * commit this transaction, any new transactions that open the table must
-	 * insert new entries into the index for insertions and non-HOT updates.
+	 * insert new entries into the index for insertions and non-HOT updates,
+	 * or WARM updates for which this index needs a new entry.
 	 */
 	index_set_state_flags(indexRelationId, INDEX_CREATE_SET_READY);
 
diff --git b/src/backend/commands/vacuumlazy.c a/src/backend/commands/vacuumlazy.c
index 5b43a66..f52490f 100644
--- b/src/backend/commands/vacuumlazy.c
+++ a/src/backend/commands/vacuumlazy.c
@@ -104,6 +104,39 @@
  */
 #define PREFETCH_SIZE			((BlockNumber) 32)
 
+/*
+ * Structure to track WARM chains that can be converted into HOT chains during
+ * this run.
+ *
+ * To reduce the space requirement, we're using bitfields. But the way things
+ * are laid out, we're still wasting 1 byte per candidate chain to padding.
+ */
+typedef struct LVWarmChain
+{
+	ItemPointerData	chain_tid;			/* root of the chain */
+
+	/*
+	 * 1 - if the chain contains only post-warm (WARM) tuples
+	 * 0 - if the chain contains only pre-warm (CLEAR) tuples
+	 */
+	uint8			is_postwarm_chain:2;
+
+	/* 1 - if this chain must remain a WARM chain */
+	uint8			keep_warm_chain:2;
+
+	/*
+	 * Number of CLEAR pointers to this root TID found so far - must never be
+	 * more than 2.
+	 */
+	uint8			num_clear_pointers:2;
+
+	/*
+	 * Number of WARM pointers to this root TID found so far - must never be
+	 * more than 1.
+	 */
+	uint8			num_warm_pointers:2;
+} LVWarmChain;
+
 typedef struct LVRelStats
 {
 	/* hasindex = true means two-pass strategy; false means one-pass */
@@ -122,6 +155,14 @@ typedef struct LVRelStats
 	BlockNumber pages_removed;
 	double		tuples_deleted;
 	BlockNumber nonempty_pages; /* actually, last nonempty page + 1 */
+
+	/* List of candidate WARM chains that can be converted into HOT chains */
+	/* NB: this list is ordered by TID of the root pointers */
+	int				num_warm_chains;	/* current # of entries */
+	int				max_warm_chains;	/* # slots allocated in array */
+	LVWarmChain 	*warm_chains;		/* array of LVWarmChain */
+	double			num_non_convertible_warm_chains;
+
 	/* List of TIDs of tuples we intend to delete */
 	/* NB: this list is ordered by TID address */
 	int			num_dead_tuples;	/* current # of entries */
@@ -150,6 +191,7 @@ static void lazy_scan_heap(Relation onerel, int options,
 static void lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats);
 static bool lazy_check_needs_freeze(Buffer buf, bool *hastup);
 static void lazy_vacuum_index(Relation indrel,
+				  bool clear_warm,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats);
 static void lazy_cleanup_index(Relation indrel,
@@ -157,6 +199,10 @@ static void lazy_cleanup_index(Relation indrel,
 				   LVRelStats *vacrelstats);
 static int lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 				 int tupindex, LVRelStats *vacrelstats, Buffer *vmbuffer);
+static int lazy_warmclear_page(Relation onerel, BlockNumber blkno,
+				 Buffer buffer, int chainindex, LVRelStats *vacrelstats,
+				 Buffer *vmbuffer, bool check_all_visible);
+static void lazy_reset_warm_pointer_count(LVRelStats *vacrelstats);
 static bool should_attempt_truncation(LVRelStats *vacrelstats);
 static void lazy_truncate_heap(Relation onerel, LVRelStats *vacrelstats);
 static BlockNumber count_nondeletable_pages(Relation onerel,
@@ -164,8 +210,15 @@ static BlockNumber count_nondeletable_pages(Relation onerel,
 static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks);
 static void lazy_record_dead_tuple(LVRelStats *vacrelstats,
 					   ItemPointer itemptr);
-static bool lazy_tid_reaped(ItemPointer itemptr, void *state);
+static void lazy_record_warm_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr);
+static void lazy_record_clear_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr);
+static IndexBulkDeleteCallbackResult lazy_tid_reaped(ItemPointer itemptr, bool is_warm, void *state);
+static IndexBulkDeleteCallbackResult lazy_indexvac_phase1(ItemPointer itemptr, bool is_warm, void *state);
+static IndexBulkDeleteCallbackResult lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state);
 static int	vac_cmp_itemptr(const void *left, const void *right);
+static int vac_cmp_warm_chain(const void *left, const void *right);
 static bool heap_page_is_all_visible(Relation rel, Buffer buf,
 					 TransactionId *visibility_cutoff_xid, bool *all_frozen);
 
@@ -690,8 +743,10 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		 * If we are close to overrunning the available space for dead-tuple
 		 * TIDs, pause and do a cycle of vacuuming before we tackle this page.
 		 */
-		if ((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
-			vacrelstats->num_dead_tuples > 0)
+		if (((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
+			vacrelstats->num_dead_tuples > 0) ||
+			((vacrelstats->max_warm_chains - vacrelstats->num_warm_chains) < MaxHeapTuplesPerPage &&
+			 vacrelstats->num_warm_chains > 0))
 		{
 			const int	hvp_index[] = {
 				PROGRESS_VACUUM_PHASE,
@@ -721,6 +776,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			/* Remove index entries */
 			for (i = 0; i < nindexes; i++)
 				lazy_vacuum_index(Irel[i],
+								  (vacrelstats->num_warm_chains > 0),
 								  &indstats[i],
 								  vacrelstats);
 
@@ -743,6 +799,9 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			 * valid.
 			 */
 			vacrelstats->num_dead_tuples = 0;
+			vacrelstats->num_warm_chains = 0;
+			memset(vacrelstats->warm_chains, 0,
+					vacrelstats->max_warm_chains * sizeof (LVWarmChain));
 			vacrelstats->num_index_scans++;
 
 			/* Report that we are once again scanning the heap */
@@ -947,15 +1006,31 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 				continue;
 			}
 
+			ItemPointerSet(&(tuple.t_self), blkno, offnum);
+
 			/* Redirect items mustn't be touched */
 			if (ItemIdIsRedirected(itemid))
 			{
+				HeapCheckWarmChainStatus status = heap_check_warm_chain(page,
+						&tuple.t_self, false);
+				if (HCWC_IS_WARM_UPDATED(status))
+				{
+					/*
+					 * A chain which is either completely WARM or completely
+					 * CLEAR is a candidate for chain conversion. Remember the
+					 * chain and whether it has all WARM tuples or not.
+					 */
+					if (HCWC_IS_ALL_WARM(status))
+						lazy_record_warm_chain(vacrelstats, &tuple.t_self);
+					else if (HCWC_IS_ALL_CLEAR(status))
+						lazy_record_clear_chain(vacrelstats, &tuple.t_self);
+					else
+						vacrelstats->num_non_convertible_warm_chains++;
+				}
 				hastup = true;	/* this page won't be truncatable */
 				continue;
 			}
 
-			ItemPointerSet(&(tuple.t_self), blkno, offnum);
-
 			/*
 			 * DEAD item pointers are to be vacuumed normally; but we don't
 			 * count them in tups_vacuumed, else we'd be double-counting (at
@@ -975,6 +1050,26 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			tuple.t_len = ItemIdGetLength(itemid);
 			tuple.t_tableOid = RelationGetRelid(onerel);
 
+			if (!HeapTupleIsHeapOnly(&tuple))
+			{
+				HeapCheckWarmChainStatus status = heap_check_warm_chain(page,
+						&tuple.t_self, false);
+				if (HCWC_IS_WARM_UPDATED(status))
+				{
+					/*
+					 * A chain which is either completely WARM or completely
+					 * CLEAR is a candidate for chain conversion. Remember the
+					 * chain and whether it has all WARM tuples or not.
+					 */
+					if (HCWC_IS_ALL_WARM(status))
+						lazy_record_warm_chain(vacrelstats, &tuple.t_self);
+					else if (HCWC_IS_ALL_CLEAR(status))
+						lazy_record_clear_chain(vacrelstats, &tuple.t_self);
+					else
+						vacrelstats->num_non_convertible_warm_chains++;
+				}
+			}
+
 			tupgone = false;
 
 			switch (HeapTupleSatisfiesVacuum(&tuple, OldestXmin, buf))
@@ -1040,6 +1135,19 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 							break;
 						}
 
+						/*
+						 * If this tuple was ever WARM updated or is a WARM
+						 * tuple, there could be multiple index entries
+						 * pointing to the root of this chain. We can't do
+						 * index-only scans for such tuples without rechecking
+						 * the index keys. So mark the page as !all_visible.
+						 */
+						if (HeapTupleHeaderIsWarmUpdated(tuple.t_data))
+						{
+							all_visible = false;
+							break;
+						}
+
 						/* Track newest xmin on page. */
 						if (TransactionIdFollows(xmin, visibility_cutoff_xid))
 							visibility_cutoff_xid = xmin;
@@ -1282,7 +1390,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 
 	/* If any tuples need to be deleted, perform final vacuum cycle */
 	/* XXX put a threshold on min number of tuples here? */
-	if (vacrelstats->num_dead_tuples > 0)
+	if (vacrelstats->num_dead_tuples > 0 || vacrelstats->num_warm_chains > 0)
 	{
 		const int	hvp_index[] = {
 			PROGRESS_VACUUM_PHASE,
@@ -1300,6 +1408,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		/* Remove index entries */
 		for (i = 0; i < nindexes; i++)
 			lazy_vacuum_index(Irel[i],
+							  (vacrelstats->num_warm_chains > 0),
 							  &indstats[i],
 							  vacrelstats);
 
@@ -1371,7 +1480,10 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
  *
  *		This routine marks dead tuples as unused and compacts out free
  *		space on their pages.  Pages not having dead tuples recorded from
- *		lazy_scan_heap are not visited at all.
+ *		lazy_scan_heap are not visited at all. This routine also converts
+ *		candidate WARM chains to HOT chains by clearing WARM-related flags.
+ *		The candidate chains are determined by the preceding index scans,
+ *		using the data collected by the first heap scan.
  *
  * Note: the reason for doing this as a second pass is we cannot remove
  * the tuples until we've removed their index entries, and we want to
@@ -1380,7 +1492,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 static void
 lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 {
-	int			tupindex;
+	int			tupindex;
+	int			chainindex;
 	int			npages;
 	PGRUsage	ru0;
 	Buffer		vmbuffer = InvalidBuffer;
@@ -1389,33 +1501,69 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 	npages = 0;
 
 	tupindex = 0;
-	while (tupindex < vacrelstats->num_dead_tuples)
+	chainindex = 0;
+	while (tupindex < vacrelstats->num_dead_tuples ||
+		   chainindex < vacrelstats->num_warm_chains)
 	{
-		BlockNumber tblk;
+		BlockNumber tblk, chainblk, vacblk;
 		Buffer		buf;
 		Page		page;
 		Size		freespace;
 
 		vacuum_delay_point();
 
-		tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
-		buf = ReadBufferExtended(onerel, MAIN_FORKNUM, tblk, RBM_NORMAL,
+		tblk = chainblk = InvalidBlockNumber;
+		if (chainindex < vacrelstats->num_warm_chains)
+			chainblk =
+				ItemPointerGetBlockNumber(&(vacrelstats->warm_chains[chainindex].chain_tid));
+
+		if (tupindex < vacrelstats->num_dead_tuples)
+			tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
+
+		if (tblk == InvalidBlockNumber)
+			vacblk = chainblk;
+		else if (chainblk == InvalidBlockNumber)
+			vacblk = tblk;
+		else
+			vacblk = Min(chainblk, tblk);
+
+		Assert(vacblk != InvalidBlockNumber);
+
+		buf = ReadBufferExtended(onerel, MAIN_FORKNUM, vacblk, RBM_NORMAL,
 								 vac_strategy);
-		if (!ConditionalLockBufferForCleanup(buf))
+
+		if (vacblk == chainblk)
+			LockBufferForCleanup(buf);
+		else if (!ConditionalLockBufferForCleanup(buf))
 		{
 			ReleaseBuffer(buf);
 			++tupindex;
 			continue;
 		}
-		tupindex = lazy_vacuum_page(onerel, tblk, buf, tupindex, vacrelstats,
-									&vmbuffer);
+
+		/*
+		 * Convert WARM chains on this page. This should be done before
+		 * vacuuming the page to ensure that we can correctly set visibility
+		 * bits after clearing WARM chains.
+		 *
+		 * If we are going to vacuum this page then don't check for
+		 * all-visibility just yet.
+		 */
+		if (vacblk == chainblk)
+			chainindex = lazy_warmclear_page(onerel, chainblk, buf, chainindex,
+					vacrelstats, &vmbuffer, chainblk != tblk);
+
+		if (vacblk == tblk)
+			tupindex = lazy_vacuum_page(onerel, tblk, buf, tupindex, vacrelstats,
+					&vmbuffer);
 
 		/* Now that we've compacted the page, record its available space */
 		page = BufferGetPage(buf);
 		freespace = PageGetHeapFreeSpace(page);
 
 		UnlockReleaseBuffer(buf);
-		RecordPageWithFreeSpace(onerel, tblk, freespace);
+		RecordPageWithFreeSpace(onerel, vacblk, freespace);
 		npages++;
 	}
 
@@ -1434,6 +1582,107 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 }
 
 /*
+ *	lazy_warmclear_page() -- clear various WARM bits on the tuples.
+ *
+ * Caller must hold pin and buffer cleanup lock on the buffer.
+ *
+ * chainindex is the index in vacrelstats->warm_chains of the first candidate
+ * chain for this page.  We assume the rest follow sequentially.
+ * The return value is the first chainindex after the chains of this page.
+ *
+ * If check_all_visible is set then we also check if the page has now become
+ * all visible and update the visibility map.
+ */
+static int
+lazy_warmclear_page(Relation onerel, BlockNumber blkno, Buffer buffer,
+				 int chainindex, LVRelStats *vacrelstats, Buffer *vmbuffer,
+				 bool check_all_visible)
+{
+	Page			page = BufferGetPage(buffer);
+	OffsetNumber	cleared_offnums[MaxHeapTuplesPerPage];
+	int				num_cleared = 0;
+	TransactionId	visibility_cutoff_xid;
+	bool			all_frozen;
+
+	pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_WARMCLEARED, blkno);
+
+	START_CRIT_SECTION();
+
+	for (; chainindex < vacrelstats->num_warm_chains ; chainindex++)
+	{
+		BlockNumber tblk;
+		LVWarmChain	*chain;
+
+		chain = &vacrelstats->warm_chains[chainindex];
+
+		tblk = ItemPointerGetBlockNumber(&chain->chain_tid);
+		if (tblk != blkno)
+			break;				/* past end of tuples for this block */
+
+		/*
+		 * Since a heap page can have no more than MaxHeapTuplesPerPage
+		 * offnums and we process each offnum only once, a
+		 * MaxHeapTuplesPerPage-sized array is enough to hold all tuples
+		 * cleared on this page.
+		 */
+		if (!chain->keep_warm_chain)
+			num_cleared += heap_clear_warm_chain(page, &chain->chain_tid,
+					cleared_offnums + num_cleared);
+	}
+
+	/*
+	 * Mark buffer dirty before we write WAL.
+	 */
+	MarkBufferDirty(buffer);
+
+	/* XLOG stuff */
+	if (RelationNeedsWAL(onerel))
+	{
+		XLogRecPtr	recptr;
+
+		recptr = log_heap_warmclear(onerel, buffer,
+								cleared_offnums, num_cleared);
+		PageSetLSN(page, recptr);
+	}
+
+	END_CRIT_SECTION();
+
+	/* If not checking for all-visibility then we're done */
+	if (!check_all_visible)
+		return chainindex;
+
+	/*
+	 * The following code should match the corresponding code in
+	 * lazy_vacuum_page.
+	 */
+	if (heap_page_is_all_visible(onerel, buffer, &visibility_cutoff_xid,
+								 &all_frozen))
+		PageSetAllVisible(page);
+
+	/*
+	 * All the changes to the heap page have been done. If the all-visible
+	 * flag is now set, also set the VM all-visible bit (and, if possible, the
+	 * all-frozen bit) unless this has already been done previously.
+	 */
+	if (PageIsAllVisible(page))
+	{
+		uint8		vm_status = visibilitymap_get_status(onerel, blkno, vmbuffer);
+		uint8		flags = 0;
+
+		/* Set the VM all-frozen bit to flag, if needed */
+		if ((vm_status & VISIBILITYMAP_ALL_VISIBLE) == 0)
+			flags |= VISIBILITYMAP_ALL_VISIBLE;
+		if ((vm_status & VISIBILITYMAP_ALL_FROZEN) == 0 && all_frozen)
+			flags |= VISIBILITYMAP_ALL_FROZEN;
+
+		Assert(BufferIsValid(*vmbuffer));
+		if (flags != 0)
+			visibilitymap_set(onerel, blkno, buffer, InvalidXLogRecPtr,
+							  *vmbuffer, visibility_cutoff_xid, flags);
+	}
+	return chainindex;
+}
+
+/*
  *	lazy_vacuum_page() -- free dead tuples on a page
  *					 and repair its fragmentation.
  *
@@ -1586,6 +1835,24 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
 	return false;
 }
 
+/*
+ * Reset counters tracking number of WARM and CLEAR pointers per candidate TID.
+ * These counters are maintained per index and cleared when the next index is
+ * picked up for cleanup.
+ *
+ * We don't touch the keep_warm_chain since once a chain is known to be
+ * non-convertible, we must remember that across all indexes.
+ */
+static void
+lazy_reset_warm_pointer_count(LVRelStats *vacrelstats)
+{
+	int i;
+	for (i = 0; i < vacrelstats->num_warm_chains; i++)
+	{
+		LVWarmChain *chain = &vacrelstats->warm_chains[i];
+		chain->num_clear_pointers = chain->num_warm_pointers = 0;
+	}
+}
 
 /*
  *	lazy_vacuum_index() -- vacuum one index relation.
@@ -1595,6 +1862,7 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
  */
 static void
 lazy_vacuum_index(Relation indrel,
+				  bool clear_warm,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats)
 {
@@ -1610,15 +1878,87 @@ lazy_vacuum_index(Relation indrel,
 	ivinfo.num_heap_tuples = vacrelstats->old_rel_tuples;
 	ivinfo.strategy = vac_strategy;
 
-	/* Do bulk deletion */
-	*stats = index_bulk_delete(&ivinfo, *stats,
-							   lazy_tid_reaped, (void *) vacrelstats);
+	/*
+	 * If told, convert WARM chains into HOT chains.
+	 *
+	 * We must have already collected candidate WARM chains, i.e. chains that
+	 * have the HEAP_WARM_TUPLE flag set on either all tuples or none.
+	 *
+	 * This works in two phases. In the first phase, we do a complete index
+	 * scan and collect information about index pointers to the candidate
+	 * chains, but we don't do conversion. To be precise, we count the number
+	 * of WARM and CLEAR index pointers to each candidate chain and use that
+	 * knowledge to arrive at a decision and do the actual conversion during
+	 * the second phase (we do remove known-dead pointers in this phase,
+	 * though).
+	 *
+	 * In the second phase, for each candidate chain we check if we have seen a
+	 * WARM index pointer. For such chains, we kill the CLEAR pointer and
+	 * convert the WARM pointer into a CLEAR pointer. The heap tuples are
+	 * cleared of WARM flags in the second heap scan. If we did not find any
+	 * WARM pointer to a WARM chain, that means the chain is reachable from
+	 * the CLEAR pointer (because, say, the WARM update did not add a new
+	 * entry for this index). In that case, we do nothing.  There is a third
+	 * case where we find two CLEAR pointers to a candidate chain. This can
+	 * happen because of aborted vacuums. We don't handle that case yet, but
+	 * it should be possible to apply the same recheck logic, find which of
+	 * the CLEAR pointers is redundant, and remove it.
+	 *
+	 * For CLEAR chains, we just kill the WARM pointer, if it exists, and keep
+	 * the CLEAR pointer.
+	 */
+	if (clear_warm)
+	{
+		/*
+		 * Before starting the index scan, reset the counters of WARM and CLEAR
+		 * pointers, probably carried forward from the previous index.
+		 */
+		lazy_reset_warm_pointer_count(vacrelstats);
+
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_indexvac_phase1, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to remove %d row versions, found "
+						"%.0f WARM pointers, %.0f CLEAR pointers, removed "
+						"%.0f WARM pointers, removed %.0f CLEAR pointers",
+						RelationGetRelationName(indrel),
+						vacrelstats->num_dead_tuples,
+						(*stats)->num_warm_pointers,
+						(*stats)->num_clear_pointers,
+						(*stats)->warm_pointers_removed,
+						(*stats)->clear_pointers_removed)));
+
+		(*stats)->num_warm_pointers = 0;
+		(*stats)->num_clear_pointers = 0;
+		(*stats)->warm_pointers_removed = 0;
+		(*stats)->clear_pointers_removed = 0;
+		(*stats)->pointers_cleared = 0;
+
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_indexvac_phase2, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to convert WARM pointers, found "
+						"%.0f WARM pointers, %.0f CLEAR pointers, removed "
+						"%.0f WARM pointers, removed %.0f CLEAR pointers, "
+						"cleared %.0f WARM pointers",
+						RelationGetRelationName(indrel),
+						(*stats)->num_warm_pointers,
+						(*stats)->num_clear_pointers,
+						(*stats)->warm_pointers_removed,
+						(*stats)->clear_pointers_removed,
+						(*stats)->pointers_cleared)));
+	}
+	else
+	{
+		/* Do bulk deletion */
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_tid_reaped, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to remove %d row versions",
+						RelationGetRelationName(indrel),
+						vacrelstats->num_dead_tuples),
+				 errdetail("%s.", pg_rusage_show(&ru0))));
+	}
 
-	ereport(elevel,
-			(errmsg("scanned index \"%s\" to remove %d row versions",
-					RelationGetRelationName(indrel),
-					vacrelstats->num_dead_tuples),
-			 errdetail("%s.", pg_rusage_show(&ru0))));
 }
 
 /*
@@ -1992,9 +2332,11 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 
 	if (vacrelstats->hasindex)
 	{
-		maxtuples = (vac_work_mem * 1024L) / sizeof(ItemPointerData);
+		maxtuples = (vac_work_mem * 1024L) / (sizeof(ItemPointerData) +
+				sizeof(LVWarmChain));
 		maxtuples = Min(maxtuples, INT_MAX);
-		maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));
+		maxtuples = Min(maxtuples, MaxAllocSize / (sizeof(ItemPointerData) +
+					sizeof(LVWarmChain)));
 
 		/* curious coding here to ensure the multiplication can't overflow */
 		if ((BlockNumber) (maxtuples / LAZY_ALLOC_TUPLES) > relblocks)
@@ -2012,6 +2354,57 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 	vacrelstats->max_dead_tuples = (int) maxtuples;
 	vacrelstats->dead_tuples = (ItemPointer)
 		palloc(maxtuples * sizeof(ItemPointerData));
+
+	/*
+	 * XXX Cheat for now and allocate the same size array for tracking warm
+	 * chains. maxtuples must have been already adjusted above to ensure we
+	 * don't cross vac_work_mem.
+	 */
+	vacrelstats->num_warm_chains = 0;
+	vacrelstats->max_warm_chains = (int) maxtuples;
+	vacrelstats->warm_chains = (LVWarmChain *)
+		palloc0(maxtuples * sizeof(LVWarmChain));
+
+}
+
+/*
+ * lazy_record_clear_chain - remember one CLEAR chain
+ */
+static void
+lazy_record_clear_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr)
+{
+	/*
+	 * The array shouldn't overflow under normal behavior, but perhaps it
+	 * could if we are given a really small maintenance_work_mem. In that
+	 * case, just forget the last few tuples (we'll get 'em next time).
+	 */
+	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	{
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 0;
+		vacrelstats->num_warm_chains++;
+	}
+}
+
+/*
+ * lazy_record_warm_chain - remember one WARM chain
+ */
+static void
+lazy_record_warm_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr)
+{
+	/*
+	 * The array shouldn't overflow under normal behavior, but perhaps it
+	 * could if we are given a really small maintenance_work_mem. In that
+	 * case, just forget the last few tuples (we'll get 'em next time).
+	 */
+	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	{
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 1;
+		vacrelstats->num_warm_chains++;
+	}
 }
 
 /*
@@ -2042,8 +2435,8 @@ lazy_record_dead_tuple(LVRelStats *vacrelstats,
  *
  *		Assumes dead_tuples array is in sorted order.
  */
-static bool
-lazy_tid_reaped(ItemPointer itemptr, void *state)
+static IndexBulkDeleteCallbackResult
+lazy_tid_reaped(ItemPointer itemptr, bool is_warm, void *state)
 {
 	LVRelStats *vacrelstats = (LVRelStats *) state;
 	ItemPointer res;
@@ -2054,7 +2447,193 @@ lazy_tid_reaped(ItemPointer itemptr, void *state)
 								sizeof(ItemPointerData),
 								vac_cmp_itemptr);
 
-	return (res != NULL);
+	return (res != NULL) ? IBDCR_DELETE : IBDCR_KEEP;
+}
+
+/*
+ *	lazy_indexvac_phase1() -- run first pass of index vacuum
+ *
+ *		This has the right signature to be an IndexBulkDeleteCallback.
+ */
+static IndexBulkDeleteCallbackResult
+lazy_indexvac_phase1(ItemPointer itemptr, bool is_warm, void *state)
+{
+	LVRelStats		*vacrelstats = (LVRelStats *) state;
+	ItemPointer		res;
+	LVWarmChain	*chain;
+
+	res = (ItemPointer) bsearch((void *) itemptr,
+								(void *) vacrelstats->dead_tuples,
+								vacrelstats->num_dead_tuples,
+								sizeof(ItemPointerData),
+								vac_cmp_itemptr);
+
+	if (res != NULL)
+		return IBDCR_DELETE;
+
+	chain = (LVWarmChain *) bsearch((void *) itemptr,
+								(void *) vacrelstats->warm_chains,
+								vacrelstats->num_warm_chains,
+								sizeof(LVWarmChain),
+								vac_cmp_warm_chain);
+	if (chain != NULL)
+	{
+		if (is_warm)
+			chain->num_warm_pointers++;
+		else
+			chain->num_clear_pointers++;
+	}
+	return IBDCR_KEEP;
+}
+
+/*
+ *	lazy_indexvac_phase2() -- run second pass of index vacuum
+ *
+ *		This has the right signature to be an IndexBulkDeleteCallback.
+ */
+static IndexBulkDeleteCallbackResult
+lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state)
+{
+	LVRelStats		*vacrelstats = (LVRelStats *) state;
+	LVWarmChain	*chain;
+
+	chain = (LVWarmChain *) bsearch((void *) itemptr,
+								(void *) vacrelstats->warm_chains,
+								vacrelstats->num_warm_chains,
+								sizeof(LVWarmChain),
+								vac_cmp_warm_chain);
+
+	if (chain != NULL && (chain->keep_warm_chain != 1))
+	{
+		/*
+		 * At no point can we have more than 1 WARM pointer or more than 2
+		 * CLEAR pointers to any chain.
+		 */
+		Assert(chain->num_warm_pointers <= 1);
+		Assert(chain->num_clear_pointers <= 2);
+
+		if (chain->is_postwarm_chain == 1)
+		{
+			if (is_warm)
+			{
+				/*
+				 * A WARM pointer, pointing to a WARM chain.
+				 *
+				 * Clear the warm pointer (and delete the CLEAR pointer). We
+				 * may have already seen the CLEAR pointer in the scan and
+				 * deleted that or we may see it later in the scan. It doesn't
+				 * matter if we fail at any point because we won't clear up
+				 * WARM bits on the heap tuples until we have dealt with the
+				 * index pointers cleanly.
+				 */
+				return IBDCR_CLEAR_WARM;
+			}
+			else
+			{
+				/*
+				 * CLEAR pointer to a WARM chain.
+				 */
+				if (chain->num_warm_pointers > 0)
+				{
+					/*
+					 * If there exists a WARM pointer to the chain, we can
+					 * delete the CLEAR pointer and clear the WARM bits on the
+					 * heap tuples.
+					 */
+					return IBDCR_DELETE;
+				}
+				else if (chain->num_clear_pointers == 1)
+				{
+					/*
+					 * If this is the only pointer to a WARM chain, we must
+					 * keep the CLEAR pointer.
+					 *
+					 * The presence of a WARM chain indicates that the WARM
+					 * update must have committed. But during the update this
+					 * index was probably not updated and hence it contains
+					 * just the one original CLEAR pointer to the chain.
+					 * We should be able to clear the WARM bits on heap tuples
+					 * unless we later find another index which prevents the
+					 * cleanup.
+					 */
+					return IBDCR_KEEP;
+				}
+			}
+		}
+		else
+		{
+			/*
+			 * This is a CLEAR chain.
+			 */
+			if (is_warm)
+			{
+				/*
+				 * A WARM pointer to a CLEAR chain.
+				 *
+				 * This can happen when a WARM update is aborted. Later the HOT
+				 * chain is pruned leaving behind only CLEAR tuples in the
+				 * chain. But the WARM index pointer inserted in the index
+				 * remains and it must now be deleted before we clear WARM bits
+				 * from the heap tuple.
+				 */
+				return IBDCR_DELETE;
+			}
+
+			/*
+			 * CLEAR pointer to a CLEAR chain.
+			 *
+			 * If this is the only surviving CLEAR pointer, keep it and clear
+			 * the WARM bits from the heap tuples.
+			 */
+			if (chain->num_clear_pointers == 1)
+				return IBDCR_KEEP;
+
+			/*
+			 * If there is more than one CLEAR pointer to this chain, we could
+			 * apply the recheck logic, kill the redundant CLEAR pointer, and
+			 * convert the chain. But that's not done yet.
+			 */
+		}
+
+		/*
+		 * For everything else, we must keep the WARM bits and also keep the
+		 * index pointers.
+		 */
+		chain->keep_warm_chain = 1;
+		return IBDCR_KEEP;
+	}
+	return IBDCR_KEEP;
+}
+
+/*
+ * Comparator routine for use with bsearch(). Similar to vac_cmp_itemptr, but
+ * the right-hand argument is an LVWarmChain struct pointer.
+ */
+static int
+vac_cmp_warm_chain(const void *left, const void *right)
+{
+	BlockNumber lblk,
+				rblk;
+	OffsetNumber loff,
+				roff;
+
+	lblk = ItemPointerGetBlockNumber((ItemPointer) left);
+	rblk = ItemPointerGetBlockNumber(&((LVWarmChain *) right)->chain_tid);
+
+	if (lblk < rblk)
+		return -1;
+	if (lblk > rblk)
+		return 1;
+
+	loff = ItemPointerGetOffsetNumber((ItemPointer) left);
+	roff = ItemPointerGetOffsetNumber(&((LVWarmChain *) right)->chain_tid);
+
+	if (loff < roff)
+		return -1;
+	if (loff > roff)
+		return 1;
+
+	return 0;
 }
 
 /*
@@ -2170,6 +2749,18 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
 						break;
 					}
 
+					/*
+					 * If this or any other tuple in the chain was ever WARM
+					 * updated, there could be multiple index entries pointing
+					 * to the root of this chain. We can't do index-only scans
+					 * for such tuples without rechecking the index keys. So
+					 * mark the page as !all_visible.
+					 */
+					if (HeapTupleHeaderIsWarmUpdated(tuple.t_data))
+					{
+						all_visible = false;
+					}
+
 					/* Track newest xmin on page. */
 					if (TransactionIdFollows(xmin, *visibility_cutoff_xid))
 						*visibility_cutoff_xid = xmin;
diff --git b/src/backend/executor/execIndexing.c a/src/backend/executor/execIndexing.c
index c3f1873..2143978 100644
--- b/src/backend/executor/execIndexing.c
+++ a/src/backend/executor/execIndexing.c
@@ -270,6 +270,8 @@ ExecCloseIndices(ResultRelInfo *resultRelInfo)
 List *
 ExecInsertIndexTuples(TupleTableSlot *slot,
 					  ItemPointer tupleid,
+					  ItemPointer root_tid,
+					  Bitmapset *modified_attrs,
 					  EState *estate,
 					  bool noDupErr,
 					  bool *specConflict,
@@ -324,6 +326,17 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 		if (!indexInfo->ii_ReadyForInserts)
 			continue;
 
+		/*
+		 * If modified_attrs is set, we only insert index entries for those
+		 * indexes whose columns have changed. All other indexes can use their
+		 * existing index pointers to look up the new tuple.
+		 */
+		if (modified_attrs)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
 		/* Check for partial index */
 		if (indexInfo->ii_Predicate != NIL)
 		{
@@ -387,10 +400,11 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 			index_insert(indexRelation, /* index relation */
 						 values,	/* array of index Datums */
 						 isnull,	/* null flags */
-						 tupleid,		/* tid of heap tuple */
+						 root_tid,		/* tid of heap or root tuple */
 						 heapRelation,	/* heap relation */
 						 checkUnique,	/* type of uniqueness check to do */
-						 indexInfo);	/* index AM may need this */
+						 indexInfo,	/* index AM may need this */
+						 (modified_attrs != NULL));	/* is it a WARM update? */
 
 		/*
 		 * If the index has an associated exclusion constraint, check that.
@@ -787,6 +801,9 @@ retry:
 		{
 			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
 				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
+			else
+				ItemPointerCopy(&tup->t_self, &ctid_wait);
+
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
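The execIndexing.c hunk skips an index whenever none of its columns appear in modified_attrs. A minimal sketch of that decision, using a plain bitmask in place of PostgreSQL's Bitmapset (Demo* names are invented for illustration):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy stand-in for a Bitmapset: one word of attribute-number bits */
typedef uint64_t DemoAttrSet;

/* True when the two attribute sets share at least one member */
static int
demo_overlap(DemoAttrSet a, DemoAttrSet b)
{
	return (a & b) != 0;
}

/*
 * Returns 1 when a new index entry is needed. A NULL modified_attrs means
 * a non-WARM update: every index gets a new entry. Otherwise only indexes
 * whose columns intersect the modified set are touched.
 */
static int
demo_needs_index_insert(const DemoAttrSet *modified_attrs,
						DemoAttrSet index_attrs)
{
	if (modified_attrs == NULL)
		return 1;
	return demo_overlap(*modified_attrs, index_attrs);
}
```

This is the heart of the write-amplification saving: unmodified indexes keep pointing at the root line pointer and never see the update.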
diff --git b/src/backend/executor/execReplication.c a/src/backend/executor/execReplication.c
index f20d728..747e4ce 100644
--- b/src/backend/executor/execReplication.c
+++ a/src/backend/executor/execReplication.c
@@ -399,6 +399,8 @@ ExecSimpleRelationInsert(EState *estate, TupleTableSlot *slot)
 
 		if (resultRelInfo->ri_NumIndices > 0)
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &(tuple->t_self),
+												   NULL,
 												   estate, false, NULL,
 												   NIL);
 
@@ -445,6 +447,8 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 	if (!skip_tuple)
 	{
 		List	   *recheckIndexes = NIL;
+		bool		warm_update;
+		Bitmapset  *modified_attrs;
 
 		/* Check the constraints of the tuple */
 		if (rel->rd_att->constr)
@@ -455,13 +459,35 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 
 		/* OK, update the tuple and index entries for it */
 		simple_heap_update(rel, &searchslot->tts_tuple->t_self,
-						   slot->tts_tuple);
+						   slot->tts_tuple, &modified_attrs, &warm_update);
 
 		if (resultRelInfo->ri_NumIndices > 0 &&
-			!HeapTupleIsHeapOnly(slot->tts_tuple))
+			(!HeapTupleIsHeapOnly(slot->tts_tuple) || warm_update))
+		{
+			ItemPointerData root_tid;
+
+			/*
+			 * If we did a WARM update, we must index the tuple using its
+			 * root line pointer rather than the tuple's own TID.
+			 */
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self,
+						&root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
+
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL,
 												   NIL);
+		}
 
 		/* AFTER ROW UPDATE Triggers */
 		ExecARUpdateTriggers(estate, resultRelInfo,
diff --git b/src/backend/executor/nodeBitmapHeapscan.c a/src/backend/executor/nodeBitmapHeapscan.c
index 19eb175..ef3653c 100644
--- b/src/backend/executor/nodeBitmapHeapscan.c
+++ a/src/backend/executor/nodeBitmapHeapscan.c
@@ -39,6 +39,7 @@
 
 #include "access/relscan.h"
 #include "access/transam.h"
+#include "access/valid.h"
 #include "executor/execdebug.h"
 #include "executor/nodeBitmapHeapscan.h"
 #include "pgstat.h"
@@ -395,11 +396,27 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 			OffsetNumber offnum = tbmres->offsets[curslot];
 			ItemPointerData tid;
 			HeapTupleData heapTuple;
+			bool recheck = false;
 
 			ItemPointerSet(&tid, page, offnum);
 			if (heap_hot_search_buffer(&tid, scan->rs_rd, buffer, snapshot,
-									   &heapTuple, NULL, true))
-				scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+									   &heapTuple, NULL, true, &recheck))
+			{
+				bool valid = true;
+
+				if (scan->rs_key)
+					HeapKeyTest(&heapTuple, RelationGetDescr(scan->rs_rd),
+							scan->rs_nkeys, scan->rs_key, valid);
+				if (valid)
+					scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+
+				/*
+				 * If the heap tuple needs a recheck because of a WARM update,
+				 * mark the whole page lossy so the quals are rechecked.
+				 */
+				if (recheck)
+					tbmres->recheck = true;
+			}
 		}
 	}
 	else
diff --git b/src/backend/executor/nodeIndexscan.c a/src/backend/executor/nodeIndexscan.c
index 5afd02e..6e48c2e 100644
--- b/src/backend/executor/nodeIndexscan.c
+++ a/src/backend/executor/nodeIndexscan.c
@@ -142,8 +142,8 @@ IndexNext(IndexScanState *node)
 					   false);	/* don't pfree */
 
 		/*
-		 * If the index was lossy, we have to recheck the index quals using
-		 * the fetched tuple.
+		 * If the index was lossy or the tuple was WARM, we have to recheck
+		 * the index quals using the fetched tuple.
 		 */
 		if (scandesc->xs_recheck)
 		{
diff --git b/src/backend/executor/nodeModifyTable.c a/src/backend/executor/nodeModifyTable.c
index 0b524e0..2ad4a2c 100644
--- b/src/backend/executor/nodeModifyTable.c
+++ a/src/backend/executor/nodeModifyTable.c
@@ -513,6 +513,7 @@ ExecInsert(ModifyTableState *mtstate,
 
 			/* insert index entries for tuple */
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												 &(tuple->t_self), NULL,
 												 estate, true, &specConflict,
 												   arbiterIndexes);
 
@@ -559,6 +560,7 @@ ExecInsert(ModifyTableState *mtstate,
 			/* insert index entries for tuple */
 			if (resultRelInfo->ri_NumIndices > 0)
 				recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+													   &(tuple->t_self), NULL,
 													   estate, false, NULL,
 													   arbiterIndexes);
 		}
@@ -892,6 +894,9 @@ ExecUpdate(ItemPointer tupleid,
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
 	List	   *recheckIndexes = NIL;
+	Bitmapset  *modified_attrs = NULL;
+	ItemPointerData	root_tid;
+	bool		warm_update;
 
 	/*
 	 * abort the operation if not running transactions
@@ -1008,7 +1013,7 @@ lreplace:;
 							 estate->es_output_cid,
 							 estate->es_crosscheck_snapshot,
 							 true /* wait for commit */ ,
-							 &hufd, &lockmode);
+							 &hufd, &lockmode, &modified_attrs, &warm_update);
 		switch (result)
 		{
 			case HeapTupleSelfUpdated:
@@ -1095,10 +1100,28 @@ lreplace:;
 		 * the t_self field.
 		 *
 		 * If it's a HOT update, we mustn't insert new index entries.
+		 *
+		 * If it's a WARM update, we must insert new entries with their TIDs
+		 * pointing to the root of the WARM chain.
 		 */
-		if (resultRelInfo->ri_NumIndices > 0 && !HeapTupleIsHeapOnly(tuple))
+		if (resultRelInfo->ri_NumIndices > 0 &&
+			(!HeapTupleIsHeapOnly(tuple) || warm_update))
+		{
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self, &root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL, NIL);
+		}
 	}
 
 	if (canSetTag)
diff --git b/src/backend/postmaster/pgstat.c a/src/backend/postmaster/pgstat.c
index 56a8bf2..52fe4ba 100644
--- b/src/backend/postmaster/pgstat.c
+++ a/src/backend/postmaster/pgstat.c
@@ -1888,7 +1888,7 @@ pgstat_count_heap_insert(Relation rel, PgStat_Counter n)
  * pgstat_count_heap_update - count a tuple update
  */
 void
-pgstat_count_heap_update(Relation rel, bool hot)
+pgstat_count_heap_update(Relation rel, bool hot, bool warm)
 {
 	PgStat_TableStatus *pgstat_info = rel->pgstat_info;
 
@@ -1906,6 +1906,8 @@ pgstat_count_heap_update(Relation rel, bool hot)
 		/* t_tuples_hot_updated is nontransactional, so just advance it */
 		if (hot)
 			pgstat_info->t_counts.t_tuples_hot_updated++;
+		else if (warm)
+			pgstat_info->t_counts.t_tuples_warm_updated++;
 	}
 }
 
@@ -4521,6 +4523,7 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->tuples_updated = 0;
 		result->tuples_deleted = 0;
 		result->tuples_hot_updated = 0;
+		result->tuples_warm_updated = 0;
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
 		result->changes_since_analyze = 0;
@@ -5630,6 +5633,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated = tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted = tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated = tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated = tabmsg->t_counts.t_tuples_warm_updated;
 			tabentry->n_live_tuples = tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples = tabmsg->t_counts.t_delta_dead_tuples;
 			tabentry->changes_since_analyze = tabmsg->t_counts.t_changed_tuples;
@@ -5657,6 +5661,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated += tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted += tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated += tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated += tabmsg->t_counts.t_tuples_warm_updated;
 			/* If table was truncated, first reset the live/dead counters */
 			if (tabmsg->t_counts.t_truncated)
 			{
diff --git b/src/backend/replication/logical/decode.c a/src/backend/replication/logical/decode.c
index 5c13d26..7a9b48a 100644
--- b/src/backend/replication/logical/decode.c
+++ a/src/backend/replication/logical/decode.c
@@ -347,7 +347,7 @@ DecodeStandbyOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 static void
 DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 {
-	uint8		info = XLogRecGetInfo(buf->record) & XLOG_HEAP_OPMASK;
+	uint8		info = XLogRecGetInfo(buf->record) & XLOG_HEAP2_OPMASK;
 	TransactionId xid = XLogRecGetXid(buf->record);
 	SnapBuild  *builder = ctx->snapshot_builder;
 
@@ -359,10 +359,6 @@ DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 
 	switch (info)
 	{
-		case XLOG_HEAP2_MULTI_INSERT:
-			if (SnapBuildProcessChange(builder, xid, buf->origptr))
-				DecodeMultiInsert(ctx, buf);
-			break;
 		case XLOG_HEAP2_NEW_CID:
 			{
 				xl_heap_new_cid *xlrec;
@@ -390,6 +386,7 @@ DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 		case XLOG_HEAP2_CLEANUP_INFO:
 		case XLOG_HEAP2_VISIBLE:
 		case XLOG_HEAP2_LOCK_UPDATED:
+		case XLOG_HEAP2_WARMCLEAR:
 			break;
 		default:
 			elog(ERROR, "unexpected RM_HEAP2_ID record type: %u", info);
@@ -418,6 +415,10 @@ DecodeHeapOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 			if (SnapBuildProcessChange(builder, xid, buf->origptr))
 				DecodeInsert(ctx, buf);
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			if (SnapBuildProcessChange(builder, xid, buf->origptr))
+				DecodeMultiInsert(ctx, buf);
+			break;
 
 			/*
 			 * Treat HOT update as normal updates. There is no useful
@@ -809,7 +810,7 @@ DecodeDelete(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 }
 
 /*
- * Decode XLOG_HEAP2_MULTI_INSERT_insert record into multiple tuplebufs.
+ * Decode XLOG_HEAP_MULTI_INSERT record into multiple tuplebufs.
  *
  * Currently MULTI_INSERT will always contain the full tuples.
  */
diff --git b/src/backend/storage/page/bufpage.c a/src/backend/storage/page/bufpage.c
index fdf045a..8d23e92 100644
--- b/src/backend/storage/page/bufpage.c
+++ a/src/backend/storage/page/bufpage.c
@@ -1151,6 +1151,29 @@ PageIndexTupleOverwrite(Page page, OffsetNumber offnum,
 	return true;
 }
 
+/*
+ * PageIndexClearWarmTuples
+ *
+ * Clear the given WARM pointers by resetting the flags stored in the TID
+ * field. We assume nothing other than the WARM information is kept in the
+ * TID flags, so clearing all flag bits is safe. If that ever changes, this
+ * routine must change as well.
+ */
+void
+PageIndexClearWarmTuples(Page page, OffsetNumber *clearitemnos,
+						 uint16 nclearitems)
+{
+	int			i;
+	ItemId		itemid;
+	IndexTuple	itup;
+
+	for (i = 0; i < nclearitems; i++)
+	{
+		itemid = PageGetItemId(page, clearitemnos[i]);
+		itup = (IndexTuple) PageGetItem(page, itemid);
+		ItemPointerClearFlags(&itup->t_tid);
+	}
+}
 
 /*
  * Set checksum for a page in shared buffers.
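PageIndexClearWarmTuples above relies on the WARM information being the only flag bits in the index tuple's TID. A toy sketch of that clear operation, assuming flags packed into the high bits of a 16-bit offset (the mask values are illustrative, not the patch's actual layout):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative layout only: pretend the WARM information lives in the top
 * three bits of a 16-bit offset field, with nothing else stored there.
 */
#define DEMO_TID_FLAG_MASK   0xE000u
#define DEMO_TID_OFFSET_MASK 0x1FFFu

/* Clear every flag bit, keeping only the offset proper */
static uint16_t
demo_clear_tid_flags(uint16_t off_and_flags)
{
	return (uint16_t) (off_and_flags & DEMO_TID_OFFSET_MASK);
}

/* Batch form, mirroring how PageIndexClearWarmTuples walks its item list */
static void
demo_clear_items(uint16_t *items, int nitems)
{
	int			i;

	for (i = 0; i < nitems; i++)
		items[i] = demo_clear_tid_flags(items[i]);
}
```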
diff --git b/src/backend/utils/adt/pgstatfuncs.c a/src/backend/utils/adt/pgstatfuncs.c
index dd2b924..713d731 100644
--- b/src/backend/utils/adt/pgstatfuncs.c
+++ a/src/backend/utils/adt/pgstatfuncs.c
@@ -146,6 +146,22 @@ pg_stat_get_tuples_hot_updated(PG_FUNCTION_ARGS)
 
 
 Datum
+pg_stat_get_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_StatTabEntry *tabentry;
+
+	if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+
+Datum
 pg_stat_get_live_tuples(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
@@ -1672,6 +1688,21 @@ pg_stat_get_xact_tuples_hot_updated(PG_FUNCTION_ARGS)
 }
 
 Datum
+pg_stat_get_xact_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_TableStatus *tabentry;
+
+	if ((tabentry = find_tabstat_entry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->t_counts.t_tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+Datum
 pg_stat_get_xact_blocks_fetched(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
diff --git b/src/backend/utils/cache/relcache.c a/src/backend/utils/cache/relcache.c
index bc22098..4be2445 100644
--- b/src/backend/utils/cache/relcache.c
+++ a/src/backend/utils/cache/relcache.c
@@ -2339,6 +2339,7 @@ RelationDestroyRelation(Relation relation, bool remember_tupdesc)
 	list_free_deep(relation->rd_fkeylist);
 	list_free(relation->rd_indexlist);
 	bms_free(relation->rd_indexattr);
+	bms_free(relation->rd_exprindexattr);
 	bms_free(relation->rd_keyattr);
 	bms_free(relation->rd_pkattr);
 	bms_free(relation->rd_idattr);
@@ -4353,6 +4354,13 @@ RelationGetIndexList(Relation relation)
 		return list_copy(relation->rd_indexlist);
 
 	/*
+	 * If the index list was invalidated, we had better also invalidate the
+	 * index attribute list (which should automatically invalidate dependent
+	 * attributes such as the primary key and replica identity).
+	 */
+	relation->rd_indexattr = NULL;
+
+	/*
 	 * We build the list we intend to return (in the caller's context) while
 	 * doing the scan.  After successfully completing the scan, we copy that
 	 * list into the relcache entry.  This avoids cache-context memory leakage
@@ -4836,15 +4844,19 @@ Bitmapset *
 RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 {
 	Bitmapset  *indexattrs;		/* indexed columns */
+	Bitmapset  *exprindexattrs;	/* indexed columns in expression/predicate
+									 * indexes */
 	Bitmapset  *uindexattrs;	/* columns in unique indexes */
 	Bitmapset  *pkindexattrs;	/* columns in the primary index */
 	Bitmapset  *idindexattrs;	/* columns in the replica identity */
+	Bitmapset  *indxnotreadyattrs;	/* columns in not ready indexes */
 	List	   *indexoidlist;
 	List	   *newindexoidlist;
 	Oid			relpkindex;
 	Oid			relreplindex;
 	ListCell   *l;
 	MemoryContext oldcxt;
+	bool		supportswarm = true;	/* true if the table can be WARM updated */
 
 	/* Quick exit if we already computed the result. */
 	if (relation->rd_indexattr != NULL)
@@ -4859,6 +4871,10 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				return bms_copy(relation->rd_pkattr);
 			case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 				return bms_copy(relation->rd_idattr);
+			case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+				return bms_copy(relation->rd_exprindexattr);
+			case INDEX_ATTR_BITMAP_NOTREADY:
+				return bms_copy(relation->rd_indxnotreadyattr);
 			default:
 				elog(ERROR, "unknown attrKind %u", attrKind);
 		}
@@ -4899,9 +4915,11 @@ restart:
 	 * won't be returned at all by RelationGetIndexList.
 	 */
 	indexattrs = NULL;
+	exprindexattrs = NULL;
 	uindexattrs = NULL;
 	pkindexattrs = NULL;
 	idindexattrs = NULL;
+	indxnotreadyattrs = NULL;
 	foreach(l, indexoidlist)
 	{
 		Oid			indexOid = lfirst_oid(l);
@@ -4938,6 +4956,10 @@ restart:
 				indexattrs = bms_add_member(indexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
 
+				if (!indexInfo->ii_ReadyForInserts)
+					indxnotreadyattrs = bms_add_member(indxnotreadyattrs,
+							   attrnum - FirstLowInvalidHeapAttributeNumber);
+
 				if (isKey)
 					uindexattrs = bms_add_member(uindexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
@@ -4953,10 +4975,29 @@ restart:
 		}
 
 		/* Collect all attributes used in expressions, too */
-		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &exprindexattrs);
 
 		/* Collect all attributes in the index predicate, too */
-		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &exprindexattrs);
+
+		/*
+		 * indexattrs should include attributes referenced in index expressions
+		 * and predicates too.
+		 */
+		indexattrs = bms_add_members(indexattrs, exprindexattrs);
+
+		if (!indexInfo->ii_ReadyForInserts)
+			indxnotreadyattrs = bms_add_members(indxnotreadyattrs,
+					exprindexattrs);
+
+		/*
+		 * Check whether the index defines an amrecheck method. If it does
+		 * not, the index cannot support WARM, so disable WARM updates for
+		 * the whole table.
+		 */
+		if (!indexDesc->rd_amroutine->amrecheck)
+			supportswarm = false;
+
 
 		index_close(indexDesc, AccessShareLock);
 	}
@@ -4989,15 +5030,22 @@ restart:
 		goto restart;
 	}
 
+	/* Remember if the table can do WARM updates */
+	relation->rd_supportswarm = supportswarm;
+
 	/* Don't leak the old values of these bitmaps, if any */
 	bms_free(relation->rd_indexattr);
 	relation->rd_indexattr = NULL;
+	bms_free(relation->rd_exprindexattr);
+	relation->rd_exprindexattr = NULL;
 	bms_free(relation->rd_keyattr);
 	relation->rd_keyattr = NULL;
 	bms_free(relation->rd_pkattr);
 	relation->rd_pkattr = NULL;
 	bms_free(relation->rd_idattr);
 	relation->rd_idattr = NULL;
+	bms_free(relation->rd_indxnotreadyattr);
+	relation->rd_indxnotreadyattr = NULL;
 
 	/*
 	 * Now save copies of the bitmaps in the relcache entry.  We intentionally
@@ -5010,7 +5058,9 @@ restart:
 	relation->rd_keyattr = bms_copy(uindexattrs);
 	relation->rd_pkattr = bms_copy(pkindexattrs);
 	relation->rd_idattr = bms_copy(idindexattrs);
-	relation->rd_indexattr = bms_copy(indexattrs);
+	relation->rd_exprindexattr = bms_copy(exprindexattrs);
+	relation->rd_indexattr = bms_copy(bms_union(indexattrs, exprindexattrs));
+	relation->rd_indxnotreadyattr = bms_copy(indxnotreadyattrs);
 	MemoryContextSwitchTo(oldcxt);
 
 	/* We return our original working copy for caller to play with */
@@ -5024,6 +5074,10 @@ restart:
 			return bms_copy(relation->rd_pkattr);
 		case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 			return idindexattrs;
+		case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+			return exprindexattrs;
+		case INDEX_ATTR_BITMAP_NOTREADY:
+			return indxnotreadyattrs;
 		default:
 			elog(ERROR, "unknown attrKind %u", attrKind);
 			return NULL;
@@ -5636,6 +5690,7 @@ load_relcache_init_file(bool shared)
 		rel->rd_keyattr = NULL;
 		rel->rd_pkattr = NULL;
 		rel->rd_idattr = NULL;
+		rel->rd_indxnotreadyattr = NULL;
 		rel->rd_pubactions = NULL;
 		rel->rd_statvalid = false;
 		rel->rd_statlist = NIL;
diff --git b/src/backend/utils/time/combocid.c a/src/backend/utils/time/combocid.c
index baff998..6a2e2f2 100644
--- b/src/backend/utils/time/combocid.c
+++ a/src/backend/utils/time/combocid.c
@@ -106,7 +106,7 @@ HeapTupleHeaderGetCmin(HeapTupleHeader tup)
 {
 	CommandId	cid = HeapTupleHeaderGetRawCommandId(tup);
 
-	Assert(!(tup->t_infomask & HEAP_MOVED));
+	Assert(!(HeapTupleHeaderIsMoved(tup)));
 	Assert(TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXmin(tup)));
 
 	if (tup->t_infomask & HEAP_COMBOCID)
@@ -120,7 +120,7 @@ HeapTupleHeaderGetCmax(HeapTupleHeader tup)
 {
 	CommandId	cid = HeapTupleHeaderGetRawCommandId(tup);
 
-	Assert(!(tup->t_infomask & HEAP_MOVED));
+	Assert(!(HeapTupleHeaderIsMoved(tup)));
 
 	/*
 	 * Because GetUpdateXid() performs memory allocations if xmax is a
diff --git b/src/backend/utils/time/tqual.c a/src/backend/utils/time/tqual.c
index 519f3b6..e54d0df 100644
--- b/src/backend/utils/time/tqual.c
+++ a/src/backend/utils/time/tqual.c
@@ -186,7 +186,7 @@ HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer)
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -205,7 +205,7 @@ HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer)
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -377,7 +377,7 @@ HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -396,7 +396,7 @@ HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -471,7 +471,7 @@ HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid,
 			return HeapTupleInvisible;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -490,7 +490,7 @@ HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -753,7 +753,7 @@ HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -772,7 +772,7 @@ HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -974,7 +974,7 @@ HeapTupleSatisfiesMVCC(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -993,7 +993,7 @@ HeapTupleSatisfiesMVCC(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -1180,7 +1180,7 @@ HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin,
 		if (HeapTupleHeaderXminInvalid(tuple))
 			return HEAPTUPLE_DEAD;
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_OFF)
+		else if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -1198,7 +1198,7 @@ HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin,
 						InvalidTransactionId);
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
diff --git b/src/include/access/amapi.h a/src/include/access/amapi.h
index f919cf8..8b7af1e 100644
--- b/src/include/access/amapi.h
+++ a/src/include/access/amapi.h
@@ -13,6 +13,7 @@
 #define AMAPI_H
 
 #include "access/genam.h"
+#include "access/itup.h"
 
 /*
  * We don't wish to include planner header files here, since most of an index
@@ -74,6 +75,14 @@ typedef bool (*aminsert_function) (Relation indexRelation,
 											   Relation heapRelation,
 											   IndexUniqueCheck checkUnique,
 											   struct IndexInfo *indexInfo);
+/* insert this WARM tuple */
+typedef bool (*amwarminsert_function) (Relation indexRelation,
+											   Datum *values,
+											   bool *isnull,
+											   ItemPointer heap_tid,
+											   Relation heapRelation,
+											   IndexUniqueCheck checkUnique,
+											   struct IndexInfo *indexInfo);
 
 /* bulk delete */
 typedef IndexBulkDeleteResult *(*ambulkdelete_function) (IndexVacuumInfo *info,
@@ -152,6 +161,11 @@ typedef void (*aminitparallelscan_function) (void *target);
 /* (re)start parallel index scan */
 typedef void (*amparallelrescan_function) (IndexScanDesc scan);
 
+/* recheck index tuple and heap tuple match */
+typedef bool (*amrecheck_function) (Relation indexRel,
+		struct IndexInfo *indexInfo, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+
 /*
  * API struct for an index AM.  Note this must be stored in a single palloc'd
  * chunk of memory.
@@ -198,6 +212,7 @@ typedef struct IndexAmRoutine
 	ambuild_function ambuild;
 	ambuildempty_function ambuildempty;
 	aminsert_function aminsert;
+	amwarminsert_function amwarminsert;
 	ambulkdelete_function ambulkdelete;
 	amvacuumcleanup_function amvacuumcleanup;
 	amcanreturn_function amcanreturn;	/* can be NULL */
@@ -217,6 +232,9 @@ typedef struct IndexAmRoutine
 	amestimateparallelscan_function amestimateparallelscan;		/* can be NULL */
 	aminitparallelscan_function aminitparallelscan;		/* can be NULL */
 	amparallelrescan_function amparallelrescan; /* can be NULL */
+
+	/* interface function to support WARM */
+	amrecheck_function amrecheck;		/* can be NULL */
 } IndexAmRoutine;
 
 
diff --git b/src/include/access/genam.h a/src/include/access/genam.h
index f467b18..965be45 100644
--- b/src/include/access/genam.h
+++ a/src/include/access/genam.h
@@ -75,12 +75,29 @@ typedef struct IndexBulkDeleteResult
 	bool		estimated_count;	/* num_index_tuples is an estimate */
 	double		num_index_tuples;		/* tuples remaining */
 	double		tuples_removed; /* # removed during vacuum operation */
+	double		num_warm_pointers;	/* # WARM pointers found */
+	double		num_clear_pointers;	/* # CLEAR pointers found */
+	double		pointers_cleared;	/* # WARM pointers cleared */
+	double		warm_pointers_removed;	/* # WARM pointers removed */
+	double		clear_pointers_removed;	/* # CLEAR pointers removed */
 	BlockNumber pages_deleted;	/* # unused pages in index */
 	BlockNumber pages_free;		/* # pages available for reuse */
 } IndexBulkDeleteResult;
 
+/*
+ * IndexBulkDeleteCallback should return one of the following
+ */
+typedef enum IndexBulkDeleteCallbackResult
+{
+	IBDCR_KEEP,			/* index tuple should be preserved */
+	IBDCR_DELETE,		/* index tuple should be deleted */
+	IBDCR_CLEAR_WARM	/* index tuple should be cleared of WARM bit */
+} IndexBulkDeleteCallbackResult;
+
 /* Typedef for callback function to determine if a tuple is bulk-deletable */
-typedef bool (*IndexBulkDeleteCallback) (ItemPointer itemptr, void *state);
+typedef IndexBulkDeleteCallbackResult (*IndexBulkDeleteCallback) (
+										 ItemPointer itemptr,
+										 bool is_warm, void *state);
 
 /* struct definitions appear in relscan.h */
 typedef struct IndexScanDescData *IndexScanDesc;
@@ -135,7 +152,8 @@ extern bool index_insert(Relation indexRelation,
 			 ItemPointer heap_t_ctid,
 			 Relation heapRelation,
 			 IndexUniqueCheck checkUnique,
-			 struct IndexInfo *indexInfo);
+			 struct IndexInfo *indexInfo,
+			 bool warm_update);
 
 extern IndexScanDesc index_beginscan(Relation heapRelation,
 				Relation indexRelation,
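The genam.h hunk turns the boolean bulk-delete callback into a three-way result, so vacuum can ask an index AM to keep, delete, or merely clear the WARM bit of a pointer. A sketch of how such a callback might branch; the decision inputs here (is_dead, sole_survivor) are assumptions for illustration, not the patch's actual logic:

```c
#include <assert.h>

/* Mirror of the patch's three-way bulk-delete callback result */
typedef enum DemoCallbackResult
{
	DEMO_KEEP,					/* index tuple should be preserved */
	DEMO_DELETE,				/* index tuple should be deleted */
	DEMO_CLEAR_WARM				/* index tuple's WARM bit should be cleared */
} DemoCallbackResult;

/*
 * A toy callback: dead pointers are deleted; a surviving WARM pointer that
 * vacuum has proved is the chain's only remaining pointer can have its WARM
 * bit cleared; everything else is kept as-is.
 */
static DemoCallbackResult
demo_bulkdel_callback(int is_dead, int is_warm, int sole_survivor)
{
	if (is_dead)
		return DEMO_DELETE;
	if (is_warm && sole_survivor)
		return DEMO_CLEAR_WARM;
	return DEMO_KEEP;
}
```

The extra IBDCR_CLEAR_WARM state is what lets a converted chain become eligible for index-only scans again without a full reindex.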
diff --git b/src/include/access/heapam.h a/src/include/access/heapam.h
index 5540e12..2217af9 100644
--- b/src/include/access/heapam.h
+++ a/src/include/access/heapam.h
@@ -72,6 +72,20 @@ typedef struct HeapUpdateFailureData
 	CommandId	cmax;
 } HeapUpdateFailureData;
 
+typedef int HeapCheckWarmChainStatus;
+
+#define HCWC_CLEAR_TUPLE		0x0001
+#define	HCWC_WARM_TUPLE			0x0002
+#define HCWC_WARM_UPDATED_TUPLE	0x0004
+
+#define HCWC_IS_MIXED(status) \
+	(((status) & (HCWC_CLEAR_TUPLE | HCWC_WARM_TUPLE)) == (HCWC_CLEAR_TUPLE | HCWC_WARM_TUPLE))
+#define HCWC_IS_ALL_WARM(status) \
+	(((status) & HCWC_CLEAR_TUPLE) == 0)
+#define HCWC_IS_ALL_CLEAR(status) \
+	(((status) & HCWC_WARM_TUPLE) == 0)
+#define HCWC_IS_WARM_UPDATED(status) \
+	(((status) & HCWC_WARM_UPDATED_TUPLE) != 0)
 
 /* ----------------
  *		function prototypes for heap access method
@@ -137,9 +151,10 @@ extern bool heap_fetch(Relation relation, Snapshot snapshot,
 		   Relation stats_relation);
 extern bool heap_hot_search_buffer(ItemPointer tid, Relation relation,
 					   Buffer buffer, Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call);
+					   bool *all_dead, bool first_call, bool *recheck);
 extern bool heap_hot_search(ItemPointer tid, Relation relation,
-				Snapshot snapshot, bool *all_dead);
+				Snapshot snapshot, bool *all_dead,
+				bool *recheck, Buffer *buffer, HeapTuple heapTuple);
 
 extern void heap_get_latest_tid(Relation relation, Snapshot snapshot,
 					ItemPointer tid);
@@ -161,7 +176,8 @@ extern void heap_abort_speculative(Relation relation, HeapTuple tuple);
 extern HTSU_Result heap_update(Relation relation, ItemPointer otid,
 			HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode);
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update);
 extern HTSU_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
 				CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
 				bool follow_update,
@@ -176,10 +192,16 @@ extern bool heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple);
 extern Oid	simple_heap_insert(Relation relation, HeapTuple tup);
 extern void simple_heap_delete(Relation relation, ItemPointer tid);
 extern void simple_heap_update(Relation relation, ItemPointer otid,
-				   HeapTuple tup);
+				   HeapTuple tup,
+				   Bitmapset **modified_attrs,
+				   bool *warm_update);
 
 extern void heap_sync(Relation relation);
 extern void heap_update_snapshot(HeapScanDesc scan, Snapshot snapshot);
+extern HeapCheckWarmChainStatus heap_check_warm_chain(Page dp,
+				   ItemPointer tid, bool stop_at_warm);
+extern int heap_clear_warm_chain(Page dp, ItemPointer tid,
+				   OffsetNumber *cleared_offnums);
 
 /* in heap/pruneheap.c */
 extern void heap_page_prune_opt(Relation relation, Buffer buffer);
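The HeapCheckWarmChainStatus value in heapam.h is an OR of per-tuple bits, so chain-level questions become simple mask tests. A self-contained sketch of that accumulation (Demo* names are stand-ins, and "mixed" here is taken to mean that both CLEAR and WARM members were seen):

```c
#include <assert.h>

/* Per-tuple status bits, after the patch's HCWC_* values */
typedef int DemoChainStatus;

#define DEMO_CLEAR_TUPLE		0x0001
#define DEMO_WARM_TUPLE			0x0002

#define DEMO_IS_ALL_WARM(s)		(((s) & DEMO_CLEAR_TUPLE) == 0)
#define DEMO_IS_ALL_CLEAR(s)	(((s) & DEMO_WARM_TUPLE) == 0)
#define DEMO_IS_MIXED(s) \
	(((s) & (DEMO_CLEAR_TUPLE | DEMO_WARM_TUPLE)) == \
	 (DEMO_CLEAR_TUPLE | DEMO_WARM_TUPLE))

/* OR in one chain member's contribution while walking the chain */
static DemoChainStatus
demo_accumulate(DemoChainStatus status, int member_is_warm)
{
	return status | (member_is_warm ? DEMO_WARM_TUPLE : DEMO_CLEAR_TUPLE);
}
```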
diff --git b/src/include/access/heapam_xlog.h a/src/include/access/heapam_xlog.h
index e6019d5..66fd0ea 100644
--- b/src/include/access/heapam_xlog.h
+++ a/src/include/access/heapam_xlog.h
@@ -32,7 +32,7 @@
 #define XLOG_HEAP_INSERT		0x00
 #define XLOG_HEAP_DELETE		0x10
 #define XLOG_HEAP_UPDATE		0x20
-/* 0x030 is free, was XLOG_HEAP_MOVE */
+#define XLOG_HEAP_MULTI_INSERT	0x30
 #define XLOG_HEAP_HOT_UPDATE	0x40
 #define XLOG_HEAP_CONFIRM		0x50
 #define XLOG_HEAP_LOCK			0x60
@@ -47,18 +47,23 @@
 /*
  * We ran out of opcodes, so heapam.c now has a second RmgrId.  These opcodes
  * are associated with RM_HEAP2_ID, but are not logically different from
- * the ones above associated with RM_HEAP_ID.  XLOG_HEAP_OPMASK applies to
- * these, too.
+ * the ones above associated with RM_HEAP_ID.
+ *
+ * In PG 10, XLOG_HEAP2_MULTI_INSERT was moved to RM_HEAP_ID (as
+ * XLOG_HEAP_MULTI_INSERT). That allows us to use the 0x80 bit in
+ * RM_HEAP2_ID, potentially creating room for another 8 opcodes there.
  */
 #define XLOG_HEAP2_REWRITE		0x00
 #define XLOG_HEAP2_CLEAN		0x10
 #define XLOG_HEAP2_FREEZE_PAGE	0x20
 #define XLOG_HEAP2_CLEANUP_INFO 0x30
 #define XLOG_HEAP2_VISIBLE		0x40
-#define XLOG_HEAP2_MULTI_INSERT 0x50
+#define XLOG_HEAP2_WARMCLEAR	0x50
 #define XLOG_HEAP2_LOCK_UPDATED 0x60
 #define XLOG_HEAP2_NEW_CID		0x70
 
+#define XLOG_HEAP2_OPMASK		0x70
+
 /*
  * xl_heap_insert/xl_heap_multi_insert flag values, 8 bits are available.
  */
@@ -80,6 +85,7 @@
 #define XLH_UPDATE_CONTAINS_NEW_TUPLE			(1<<4)
 #define XLH_UPDATE_PREFIX_FROM_OLD				(1<<5)
 #define XLH_UPDATE_SUFFIX_FROM_OLD				(1<<6)
+#define XLH_UPDATE_WARM_UPDATE					(1<<7)
 
 /* convenience macro for checking whether any form of old tuple was logged */
 #define XLH_UPDATE_CONTAINS_OLD						\
@@ -225,6 +231,14 @@ typedef struct xl_heap_clean
 
 #define SizeOfHeapClean (offsetof(xl_heap_clean, ndead) + sizeof(uint16))
 
+typedef struct xl_heap_warmclear
+{
+	uint16		ncleared;
+	/* OFFSET NUMBERS are in block reference 0 */
+} xl_heap_warmclear;
+
+#define SizeOfHeapWarmClear (offsetof(xl_heap_warmclear, ncleared) + sizeof(uint16))
+
 /*
  * Cleanup_info is required in some cases during a lazy VACUUM.
  * Used for reporting the results of HeapTupleHeaderAdvanceLatestRemovedXid()
@@ -388,6 +402,8 @@ extern XLogRecPtr log_heap_clean(Relation reln, Buffer buffer,
 			   OffsetNumber *nowdead, int ndead,
 			   OffsetNumber *nowunused, int nunused,
 			   TransactionId latestRemovedXid);
+extern XLogRecPtr log_heap_warmclear(Relation reln, Buffer buffer,
+			   OffsetNumber *cleared, int ncleared);
 extern XLogRecPtr log_heap_freeze(Relation reln, Buffer buffer,
 				TransactionId cutoff_xid, xl_heap_freeze_tuple *tuples,
 				int ntuples);
diff --git b/src/include/access/htup_details.h a/src/include/access/htup_details.h
index 4d614b7..bcefba6 100644
--- b/src/include/access/htup_details.h
+++ a/src/include/access/htup_details.h
@@ -201,6 +201,21 @@ struct HeapTupleHeaderData
 										 * upgrade support */
 #define HEAP_MOVED (HEAP_MOVED_OFF | HEAP_MOVED_IN)
 
+/*
+ * A WARM chain usually consists of two parts, each of which is a HOT chain
+ * in itself, i.e. all indexed columns have the same value within it; a WARM
+ * update separates the two parts. We need a mechanism to identify which
+ * part a given tuple belongs to. We can't just check
+ * HeapTupleHeaderIsWarmUpdated() because during a WARM update both the old
+ * and the new tuple are marked as WARM updated.
+ *
+ * We need another infomask bit for this, so we reuse the bit that was
+ * earlier used by old-style VACUUM FULL. This is safe because the
+ * HEAP_WARM_TUPLE flag is always set along with HEAP_WARM_UPDATED. So if
+ * both HEAP_WARM_TUPLE and HEAP_WARM_UPDATED are set, we know the tuple
+ * belongs to the second ("red") part of the WARM chain.
+ */
+#define HEAP_WARM_TUPLE			0x4000
 #define HEAP_XACT_MASK			0xFFF0	/* visibility-related bits */
 
 /*
@@ -260,7 +275,11 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x0800 are available */
+#define HEAP_WARM_UPDATED		0x0800	/*
+										 * This or a prior version of this
+										 * tuple in the current HOT chain was
+										 * once WARM updated
+										 */
 #define HEAP_LATEST_TUPLE		0x1000	/*
 										 * This is the last tuple in chain and
 										 * ip_posid points to the root line
@@ -271,7 +290,7 @@ struct HeapTupleHeaderData
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF800	/* visibility-related bits */
 
 
 /*
@@ -396,7 +415,7 @@ struct HeapTupleHeaderData
 /* SetCmin is reasonably simple since we never need a combo CID */
 #define HeapTupleHeaderSetCmin(tup, cid) \
 do { \
-	Assert(!((tup)->t_infomask & HEAP_MOVED)); \
+	Assert(!HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_cid = (cid); \
 	(tup)->t_infomask &= ~HEAP_COMBOCID; \
 } while (0)
@@ -404,7 +423,7 @@ do { \
 /* SetCmax must be used after HeapTupleHeaderAdjustCmax; see combocid.c */
 #define HeapTupleHeaderSetCmax(tup, cid, iscombo) \
 do { \
-	Assert(!((tup)->t_infomask & HEAP_MOVED)); \
+	Assert(!HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_cid = (cid); \
 	if (iscombo) \
 		(tup)->t_infomask |= HEAP_COMBOCID; \
@@ -414,7 +433,7 @@ do { \
 
 #define HeapTupleHeaderGetXvac(tup) \
 ( \
-	((tup)->t_infomask & HEAP_MOVED) ? \
+	HeapTupleHeaderIsMoved(tup) ? \
 		(tup)->t_choice.t_heap.t_field3.t_xvac \
 	: \
 		InvalidTransactionId \
@@ -422,7 +441,7 @@ do { \
 
 #define HeapTupleHeaderSetXvac(tup, xid) \
 do { \
-	Assert((tup)->t_infomask & HEAP_MOVED); \
+	Assert(HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_xvac = (xid); \
 } while (0)
 
@@ -510,6 +529,21 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+#define HeapTupleHeaderSetWarmUpdated(tup) \
+do { \
+	(tup)->t_infomask2 |= HEAP_WARM_UPDATED; \
+} while (0)
+
+#define HeapTupleHeaderClearWarmUpdated(tup) \
+do { \
+	(tup)->t_infomask2 &= ~HEAP_WARM_UPDATED; \
+} while (0)
+
+#define HeapTupleHeaderIsWarmUpdated(tup) \
+( \
+  ((tup)->t_infomask2 & HEAP_WARM_UPDATED) != 0 \
+)
+
 /*
  * Mark this as the last tuple in the HOT chain. Before PG v10 we used to store
  * the TID of the tuple itself in t_ctid field to mark the end of the chain.
@@ -635,6 +669,58 @@ do { \
 )
 
 /*
+ * Macros to check whether a tuple was moved off/in by old-style VACUUM FULL
+ * from the pre-9.0 era. Such tuples never have the HEAP_WARM_TUPLE flag set.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderIsMovedOff(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED_OFF) \
+)
+
+#define HeapTupleHeaderIsMovedIn(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED_IN) \
+)
+
+#define HeapTupleHeaderIsMoved(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED) \
+)
+
+/*
+ * Check if tuple belongs to the second part of the WARM chain.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderIsWarm(tuple) \
+( \
+	HeapTupleHeaderIsWarmUpdated(tuple) && \
+	(((tuple)->t_infomask & HEAP_WARM_TUPLE) != 0) \
+)
+
+/*
+ * Mark the tuple as a member of the second part of the chain. This must only
+ * be done on a tuple that is already marked WARM-updated.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderSetWarm(tuple) \
+( \
+	AssertMacro(HeapTupleHeaderIsWarmUpdated(tuple)), \
+	(tuple)->t_infomask |= HEAP_WARM_TUPLE \
+)
+
+#define HeapTupleHeaderClearWarm(tuple) \
+( \
+	(tuple)->t_infomask &= ~HEAP_WARM_TUPLE \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
@@ -785,6 +871,24 @@ struct MinimalTupleData
 #define HeapTupleClearHeapOnly(tuple) \
 		HeapTupleHeaderClearHeapOnly((tuple)->t_data)
 
+#define HeapTupleIsWarmUpdated(tuple) \
+		HeapTupleHeaderIsWarmUpdated((tuple)->t_data)
+
+#define HeapTupleSetWarmUpdated(tuple) \
+		HeapTupleHeaderSetWarmUpdated((tuple)->t_data)
+
+#define HeapTupleClearWarmUpdated(tuple) \
+		HeapTupleHeaderClearWarmUpdated((tuple)->t_data)
+
+#define HeapTupleIsWarm(tuple) \
+		HeapTupleHeaderIsWarm((tuple)->t_data)
+
+#define HeapTupleSetWarm(tuple) \
+		HeapTupleHeaderSetWarm((tuple)->t_data)
+
+#define HeapTupleClearWarm(tuple) \
+		HeapTupleHeaderClearWarm((tuple)->t_data)
+
 #define HeapTupleGetOid(tuple) \
 		HeapTupleHeaderGetOid((tuple)->t_data)
 
diff --git b/src/include/access/nbtree.h a/src/include/access/nbtree.h
index f9304db..163180d 100644
--- b/src/include/access/nbtree.h
+++ a/src/include/access/nbtree.h
@@ -427,6 +427,12 @@ typedef BTScanOpaqueData *BTScanOpaque;
 #define SK_BT_NULLS_FIRST	(INDOPTION_NULLS_FIRST << SK_BT_INDOPTION_SHIFT)
 
 /*
+ * Flags overloaded on the t_tid.ip_posid field. They are managed by
+ * ItemPointerSetFlags and corresponding routines.
+ */
+#define BTREE_INDEX_WARM_POINTER	0x01
+
+/*
  * external entry points for btree, in nbtree.c
  */
 extern IndexBuildResult *btbuild(Relation heap, Relation index,
@@ -436,6 +442,10 @@ extern bool btinsert(Relation rel, Datum *values, bool *isnull,
 		 ItemPointer ht_ctid, Relation heapRel,
 		 IndexUniqueCheck checkUnique,
 		 struct IndexInfo *indexInfo);
+extern bool btwarminsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 struct IndexInfo *indexInfo);
 extern IndexScanDesc btbeginscan(Relation rel, int nkeys, int norderbys);
 extern Size btestimateparallelscan(void);
 extern void btinitparallelscan(void *target);
@@ -487,10 +497,12 @@ extern void _bt_pageinit(Page page, Size size);
 extern bool _bt_page_recyclable(Page page);
 extern void _bt_delitems_delete(Relation rel, Buffer buf,
 					OffsetNumber *itemnos, int nitems, Relation heapRel);
-extern void _bt_delitems_vacuum(Relation rel, Buffer buf,
-					OffsetNumber *itemnos, int nitems,
-					BlockNumber lastBlockVacuumed);
+extern void _bt_handleitems_vacuum(Relation rel, Buffer buf,
+					OffsetNumber *delitemnos, int ndelitems,
+					OffsetNumber *clearitemnos, int nclearitems);
 extern int	_bt_pagedel(Relation rel, Buffer buf);
+extern void	_bt_clear_items(Page page, OffsetNumber *clearitemnos,
+					uint16 nclearitems);
 
 /*
  * prototypes for functions in nbtsearch.c
@@ -537,6 +549,9 @@ extern bytea *btoptions(Datum reloptions, bool validate);
 extern bool btproperty(Oid index_oid, int attno,
 		   IndexAMProperty prop, const char *propname,
 		   bool *res, bool *isnull);
+extern bool btrecheck(Relation indexRel, struct IndexInfo *indexInfo,
+		IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * prototypes for functions in nbtvalidate.c
diff --git b/src/include/access/nbtxlog.h a/src/include/access/nbtxlog.h
index d6a3085..6a86628 100644
--- b/src/include/access/nbtxlog.h
+++ a/src/include/access/nbtxlog.h
@@ -142,7 +142,8 @@ typedef struct xl_btree_reuse_page
 /*
  * This is what we need to know about vacuum of individual leaf index tuples.
  * The WAL record can represent deletion of any number of index tuples on a
- * single index page when executed by VACUUM.
+ * single index page when executed by VACUUM. It also covers index tuples
+ * whose WARM bits are cleared by VACUUM.
  *
  * For MVCC scans, lastBlockVacuumed will be set to InvalidBlockNumber.
  * For a non-MVCC index scans there is an additional correctness requirement
@@ -165,11 +166,12 @@ typedef struct xl_btree_reuse_page
 typedef struct xl_btree_vacuum
 {
 	BlockNumber lastBlockVacuumed;
-
-	/* TARGET OFFSET NUMBERS FOLLOW */
+	uint16		ndelitems;
+	uint16		nclearitems;
+	/* ndelitems + nclearitems TARGET OFFSET NUMBERS FOLLOW */
 } xl_btree_vacuum;
 
-#define SizeOfBtreeVacuum	(offsetof(xl_btree_vacuum, lastBlockVacuumed) + sizeof(BlockNumber))
+#define SizeOfBtreeVacuum	(offsetof(xl_btree_vacuum, nclearitems) + sizeof(uint16))
 
 /*
  * This is what we need to know about marking an empty branch for deletion.
diff --git b/src/include/access/relscan.h a/src/include/access/relscan.h
index 3fc726d..fa178d3 100644
--- b/src/include/access/relscan.h
+++ a/src/include/access/relscan.h
@@ -104,6 +104,9 @@ typedef struct IndexScanDescData
 	/* index access method's private state */
 	void	   *opaque;			/* access-method-specific info */
 
+	/* IndexInfo structure for this index */
+	struct IndexInfo  *indexInfo;
+
 	/*
 	 * In an index-only scan, a successful amgettuple call must fill either
 	 * xs_itup (and xs_itupdesc) or xs_hitup (and xs_hitupdesc) to provide the
@@ -119,7 +122,7 @@ typedef struct IndexScanDescData
 	HeapTupleData xs_ctup;		/* current heap tuple, if any */
 	Buffer		xs_cbuf;		/* current heap buffer in scan, if any */
 	/* NB: if xs_cbuf is not InvalidBuffer, we hold a pin on that buffer */
-	bool		xs_recheck;		/* T means scan keys must be rechecked */
+	bool		xs_recheck;		/* T means scan keys must be rechecked for each tuple */
 
 	/*
 	 * When fetching with an ordering operator, the values of the ORDER BY
diff --git b/src/include/catalog/index.h a/src/include/catalog/index.h
index 20bec90..f92ec29 100644
--- b/src/include/catalog/index.h
+++ a/src/include/catalog/index.h
@@ -89,6 +89,13 @@ extern void FormIndexDatum(IndexInfo *indexInfo,
 			   Datum *values,
 			   bool *isnull);
 
+extern void FormIndexPlainDatum(IndexInfo *indexInfo,
+			   Relation heapRel,
+			   HeapTuple heapTup,
+			   Datum *values,
+			   bool *isnull,
+			   bool *isavail);
+
 extern void index_build(Relation heapRelation,
 			Relation indexRelation,
 			IndexInfo *indexInfo,
diff --git b/src/include/catalog/pg_proc.h a/src/include/catalog/pg_proc.h
index 79f9b90..8587135 100644
--- b/src/include/catalog/pg_proc.h
+++ a/src/include/catalog/pg_proc.h
@@ -2783,6 +2783,8 @@ DATA(insert OID = 1933 (  pg_stat_get_tuples_deleted	PGNSP PGUID 12 1 0 0 0 f f
 DESCR("statistics: number of tuples deleted");
 DATA(insert OID = 1972 (  pg_stat_get_tuples_hot_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated");
+DATA(insert OID = 3373 (  pg_stat_get_tuples_warm_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated");
 DATA(insert OID = 2878 (  pg_stat_get_live_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_live_tuples _null_ _null_ _null_ ));
 DESCR("statistics: number of live tuples");
 DATA(insert OID = 2879 (  pg_stat_get_dead_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_dead_tuples _null_ _null_ _null_ ));
@@ -2935,6 +2937,8 @@ DATA(insert OID = 3042 (  pg_stat_get_xact_tuples_deleted		PGNSP PGUID 12 1 0 0
 DESCR("statistics: number of tuples deleted in current transaction");
 DATA(insert OID = 3043 (  pg_stat_get_xact_tuples_hot_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated in current transaction");
+DATA(insert OID = 3359 (  pg_stat_get_xact_tuples_warm_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated in current transaction");
 DATA(insert OID = 3044 (  pg_stat_get_xact_blocks_fetched		PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_fetched _null_ _null_ _null_ ));
 DESCR("statistics: number of blocks fetched in current transaction");
 DATA(insert OID = 3045 (  pg_stat_get_xact_blocks_hit			PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_hit _null_ _null_ _null_ ));
diff --git b/src/include/commands/progress.h a/src/include/commands/progress.h
index 9472ecc..b355b61 100644
--- b/src/include/commands/progress.h
+++ a/src/include/commands/progress.h
@@ -25,6 +25,7 @@
 #define PROGRESS_VACUUM_NUM_INDEX_VACUUMS		4
 #define PROGRESS_VACUUM_MAX_DEAD_TUPLES			5
 #define PROGRESS_VACUUM_NUM_DEAD_TUPLES			6
+#define PROGRESS_VACUUM_HEAP_BLKS_WARMCLEARED	7
 
 /* Phases of vacuum (as advertised via PROGRESS_VACUUM_PHASE) */
 #define PROGRESS_VACUUM_PHASE_SCAN_HEAP			1
diff --git b/src/include/executor/executor.h a/src/include/executor/executor.h
index d3849b9..7e1ec56 100644
--- b/src/include/executor/executor.h
+++ a/src/include/executor/executor.h
@@ -506,6 +506,7 @@ extern int	ExecCleanTargetListLength(List *targetlist);
 extern void ExecOpenIndices(ResultRelInfo *resultRelInfo, bool speculative);
 extern void ExecCloseIndices(ResultRelInfo *resultRelInfo);
 extern List *ExecInsertIndexTuples(TupleTableSlot *slot, ItemPointer tupleid,
+					  ItemPointer root_tid, Bitmapset *modified_attrs,
 					  EState *estate, bool noDupErr, bool *specConflict,
 					  List *arbiterIndexes);
 extern bool ExecCheckIndexConstraints(TupleTableSlot *slot, EState *estate,
diff --git b/src/include/executor/nodeIndexscan.h a/src/include/executor/nodeIndexscan.h
index ea3f3a5..ebeec74 100644
--- b/src/include/executor/nodeIndexscan.h
+++ a/src/include/executor/nodeIndexscan.h
@@ -41,5 +41,4 @@ extern void ExecIndexEvalRuntimeKeys(ExprContext *econtext,
 extern bool ExecIndexEvalArrayKeys(ExprContext *econtext,
 					   IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 extern bool ExecIndexAdvanceArrayKeys(IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
-
 #endif   /* NODEINDEXSCAN_H */
diff --git b/src/include/nodes/execnodes.h a/src/include/nodes/execnodes.h
index 11a6850..d2991db 100644
--- b/src/include/nodes/execnodes.h
+++ a/src/include/nodes/execnodes.h
@@ -132,6 +132,7 @@ typedef struct IndexInfo
 	NodeTag		type;
 	int			ii_NumIndexAttrs;
 	AttrNumber	ii_KeyAttrNumbers[INDEX_MAX_KEYS];
+	Bitmapset  *ii_indxattrs;	/* bitmap of all columns used in this index */
 	List	   *ii_Expressions; /* list of Expr */
 	List	   *ii_ExpressionsState;	/* list of ExprState */
 	List	   *ii_Predicate;	/* list of Expr */
diff --git b/src/include/pgstat.h a/src/include/pgstat.h
index e29397f..99bdc8b 100644
--- b/src/include/pgstat.h
+++ a/src/include/pgstat.h
@@ -105,6 +105,7 @@ typedef struct PgStat_TableCounts
 	PgStat_Counter t_tuples_updated;
 	PgStat_Counter t_tuples_deleted;
 	PgStat_Counter t_tuples_hot_updated;
+	PgStat_Counter t_tuples_warm_updated;
 	bool		t_truncated;
 
 	PgStat_Counter t_delta_live_tuples;
@@ -625,6 +626,7 @@ typedef struct PgStat_StatTabEntry
 	PgStat_Counter tuples_updated;
 	PgStat_Counter tuples_deleted;
 	PgStat_Counter tuples_hot_updated;
+	PgStat_Counter tuples_warm_updated;
 
 	PgStat_Counter n_live_tuples;
 	PgStat_Counter n_dead_tuples;
@@ -1285,7 +1287,7 @@ pgstat_report_wait_end(void)
 	(pgStatBlockWriteTime += (n))
 
 extern void pgstat_count_heap_insert(Relation rel, PgStat_Counter n);
-extern void pgstat_count_heap_update(Relation rel, bool hot);
+extern void pgstat_count_heap_update(Relation rel, bool hot, bool warm);
 extern void pgstat_count_heap_delete(Relation rel);
 extern void pgstat_count_truncate(Relation rel);
 extern void pgstat_update_heap_dead_tuples(Relation rel, int delta);
diff --git b/src/include/storage/bufpage.h a/src/include/storage/bufpage.h
index e956dc3..1852195 100644
--- b/src/include/storage/bufpage.h
+++ a/src/include/storage/bufpage.h
@@ -433,6 +433,8 @@ extern void PageIndexMultiDelete(Page page, OffsetNumber *itemnos, int nitems);
 extern void PageIndexTupleDeleteNoCompact(Page page, OffsetNumber offset);
 extern bool PageIndexTupleOverwrite(Page page, OffsetNumber offnum,
 						Item newtup, Size newsize);
+extern void PageIndexClearWarmTuples(Page page, OffsetNumber *clearitemnos,
+						uint16 nclearitems);
 extern char *PageSetChecksumCopy(Page page, BlockNumber blkno);
 extern void PageSetChecksumInplace(Page page, BlockNumber blkno);
 
diff --git b/src/include/utils/rel.h a/src/include/utils/rel.h
index ab875bb..cd1976a 100644
--- b/src/include/utils/rel.h
+++ a/src/include/utils/rel.h
@@ -142,9 +142,14 @@ typedef struct RelationData
 
 	/* data managed by RelationGetIndexAttrBitmap: */
 	Bitmapset  *rd_indexattr;	/* identifies columns used in indexes */
+	Bitmapset  *rd_exprindexattr; /* identifies columns used in expression or
+									 predicate indexes */
+	Bitmapset  *rd_indxnotreadyattr;	/* columns used by indexes not yet
+										   ready */
 	Bitmapset  *rd_keyattr;		/* cols that can be ref'd by foreign keys */
 	Bitmapset  *rd_pkattr;		/* cols included in primary key */
 	Bitmapset  *rd_idattr;		/* included in replica identity index */
+	bool		rd_supportswarm; /* true if the table can be WARM updated */
 
 	PublicationActions  *rd_pubactions;	/* publication actions */
 
diff --git b/src/include/utils/relcache.h a/src/include/utils/relcache.h
index 81af3ae..d5b3072 100644
--- b/src/include/utils/relcache.h
+++ a/src/include/utils/relcache.h
@@ -51,7 +51,9 @@ typedef enum IndexAttrBitmapKind
 	INDEX_ATTR_BITMAP_ALL,
 	INDEX_ATTR_BITMAP_KEY,
 	INDEX_ATTR_BITMAP_PRIMARY_KEY,
-	INDEX_ATTR_BITMAP_IDENTITY_KEY
+	INDEX_ATTR_BITMAP_IDENTITY_KEY,
+	INDEX_ATTR_BITMAP_EXPR_PREDICATE,
+	INDEX_ATTR_BITMAP_NOTREADY
 } IndexAttrBitmapKind;
 
 extern Bitmapset *RelationGetIndexAttrBitmap(Relation relation,
diff --git b/src/test/regress/expected/alter_generic.out a/src/test/regress/expected/alter_generic.out
index ce581bb..85e4c70 100644
--- b/src/test/regress/expected/alter_generic.out
+++ a/src/test/regress/expected/alter_generic.out
@@ -161,15 +161,15 @@ ALTER SERVER alt_fserv1 RENAME TO alt_fserv3;   -- OK
 SELECT fdwname FROM pg_foreign_data_wrapper WHERE fdwname like 'alt_fdw%';
  fdwname  
 ----------
- alt_fdw2
  alt_fdw3
+ alt_fdw2
 (2 rows)
 
 SELECT srvname FROM pg_foreign_server WHERE srvname like 'alt_fserv%';
   srvname   
 ------------
- alt_fserv2
  alt_fserv3
+ alt_fserv2
 (2 rows)
 
 --
diff --git b/src/test/regress/expected/rules.out a/src/test/regress/expected/rules.out
index d706f42..f7dc4a4 100644
--- b/src/test/regress/expected/rules.out
+++ a/src/test/regress/expected/rules.out
@@ -1756,6 +1756,7 @@ pg_stat_all_tables| SELECT c.oid AS relid,
     pg_stat_get_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_tuples_deleted(c.oid) AS n_tup_del,
     pg_stat_get_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_tuples_warm_updated(c.oid) AS n_tup_warm_upd,
     pg_stat_get_live_tuples(c.oid) AS n_live_tup,
     pg_stat_get_dead_tuples(c.oid) AS n_dead_tup,
     pg_stat_get_mod_since_analyze(c.oid) AS n_mod_since_analyze,
@@ -1903,6 +1904,7 @@ pg_stat_sys_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1946,6 +1948,7 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1983,7 +1986,8 @@ pg_stat_xact_all_tables| SELECT c.oid AS relid,
     pg_stat_get_xact_tuples_inserted(c.oid) AS n_tup_ins,
     pg_stat_get_xact_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_xact_tuples_deleted(c.oid) AS n_tup_del,
-    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd
+    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_xact_tuples_warm_updated(c.oid) AS n_tup_warm_upd
    FROM ((pg_class c
      LEFT JOIN pg_index i ON ((c.oid = i.indrelid)))
      LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
@@ -1999,7 +2003,8 @@ pg_stat_xact_sys_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname = ANY (ARRAY['pg_catalog'::name, 'information_schema'::name])) OR (pg_stat_xact_all_tables.schemaname ~ '^pg_toast'::text));
 pg_stat_xact_user_functions| SELECT p.oid AS funcid,
@@ -2021,7 +2026,8 @@ pg_stat_xact_user_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname <> ALL (ARRAY['pg_catalog'::name, 'information_schema'::name])) AND (pg_stat_xact_all_tables.schemaname !~ '^pg_toast'::text));
 pg_statio_all_indexes| SELECT c.oid AS relid,
diff --git b/src/test/regress/expected/warm.out a/src/test/regress/expected/warm.out
new file mode 100644
index 0000000..1ae2f40
--- /dev/null
+++ a/src/test/regress/expected/warm.out
@@ -0,0 +1,914 @@
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+-- This should be a HOT update as non-index key is updated, but the
+-- page won't have any free space, so probably a non-HOT update
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab1
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Check if index only scan works correctly
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab1
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx1 on updtst_tab1
+   Index Cond: (b = 140001)
+(2 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab1;
+------------------
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab2
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE a = 1;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab2
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE b = 701;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+VACUUM updtst_tab2;
+EXPLAIN (costs off) SELECT b FROM updtst_tab2 WHERE b = 701;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx2 on updtst_tab2
+   Index Cond: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab2 WHERE b = 701;
+  b  
+-----
+ 701
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab2;
+------------------
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 1;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 701;
+       QUERY PLAN        
+-------------------------
+ Seq Scan on updtst_tab3
+   Filter: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 701;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+  b   
+------
+ 1421
+(1 row)
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+SET enable_seqscan = false;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    98
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 2;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 702;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx3 on updtst_tab3
+   Index Cond: (b = 702)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 702;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+  b   
+------
+ 1422
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab3;
+------------------
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+explain select * from test_warm where lower(a) = 'test';
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on test_warm  (cost=4.18..12.65 rows=4 width=64)
+   Recheck Cond: (lower(a) = 'test'::text)
+   ->  Bitmap Index Scan on test_warmindx  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (lower(a) = 'test'::text)
+(4 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+select *, ctid from test_warm where a = 'test';
+ a | b | ctid 
+---+---+------
+(0 rows)
+
+select *, ctid from test_warm where a = 'TEST';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+                                   QUERY PLAN                                    
+---------------------------------------------------------------------------------
+ Index Scan using test_warmindx on test_warm  (cost=0.15..20.22 rows=4 width=64)
+   Index Cond: (lower(a) = 'test'::text)
+(2 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+DROP TABLE test_warm;
+--- Test with toast data types
+CREATE TABLE test_toast_warm (a int unique, b text, c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+-- insert a large enough value to cause index datum compression
+INSERT INTO test_toast_warm VALUES (1, repeat('a', 600), 100);
+INSERT INTO test_toast_warm VALUES (2, repeat('b', 2), 100);
+INSERT INTO test_toast_warm VALUES (3, repeat('c', 4), 100);
+INSERT INTO test_toast_warm VALUES (4, repeat('d', 63), 100);
+INSERT INTO test_toast_warm VALUES (5, repeat('e', 126), 100);
+INSERT INTO test_toast_warm VALUES (6, repeat('f', 127), 100);
+INSERT INTO test_toast_warm VALUES (7, repeat('g', 128), 100);
+INSERT INTO test_toast_warm VALUES (8, repeat('h', 3200), 100);
+UPDATE test_toast_warm SET b = repeat('q', 600) WHERE a = 1;
+UPDATE test_toast_warm SET b = repeat('r', 2) WHERE a = 2;
+UPDATE test_toast_warm SET b = repeat('s', 4) WHERE a = 3;
+UPDATE test_toast_warm SET b = repeat('t', 63) WHERE a = 4;
+UPDATE test_toast_warm SET b = repeat('u', 126) WHERE a = 5;
+UPDATE test_toast_warm SET b = repeat('v', 127) WHERE a = 6;
+UPDATE test_toast_warm SET b = repeat('w', 128) WHERE a = 7;
+UPDATE test_toast_warm SET b = repeat('x', 3200) WHERE a = 8;
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+                        QUERY PLAN                         
+-----------------------------------------------------------
+ Index Scan using test_toast_warm_a_key on test_toast_warm
+   Index Cond: (a = 1)
+(2 rows)
+
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+                                                                                                                                                                                                                                                                                                                      QUERY PLAN                                                                                                                                                                                                                                                                                                                      
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Index Scan using test_toast_warm_index on test_toast_warm
+   Index Cond: (b = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'::text)
+(2 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+                                                                                                                                                                                                                                                                                                                      QUERY PLAN                                                                                                                                                                                                                                                                                                                      
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Index Only Scan using test_toast_warm_index on test_toast_warm
+   Index Cond: (b = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'::text)
+(2 rows)
+
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+ a |                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+---+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 1 | qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+ a | b 
+---+---
+(0 rows)
+
+SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+ b 
+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+ a |                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+---+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 1 | qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+ a | b  
+---+----
+ 2 | rr
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+ a |  b   
+---+------
+ 3 | ssss
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+ a |                                b                                
+---+-----------------------------------------------------------------
+ 4 | ttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttt
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+ a |                                                               b                                                                
+---+--------------------------------------------------------------------------------------------------------------------------------
+ 5 | uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+ a |                                                                b                                                                
+---+---------------------------------------------------------------------------------------------------------------------------------
+ 6 | vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+ a |                                                                b                                                                 
+---+----------------------------------------------------------------------------------------------------------------------------------
+ 7 | wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+ a |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                b                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 
+---+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 8 | xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+(1 row)
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                                    QUERY PLAN                                                                                                                                                                                                                                                                                                                    
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 'qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq'::text)
+(2 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                                    QUERY PLAN                                                                                                                                                                                                                                                                                                                    
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 'qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq'::text)
+(2 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+ a |                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+---+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 1 | qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+ a | b  
+---+----
+ 2 | rr
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+ a |  b   
+---+------
+ 3 | ssss
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+ a |                                b                                
+---+-----------------------------------------------------------------
+ 4 | ttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttt
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+ a |                                                               b                                                                
+---+--------------------------------------------------------------------------------------------------------------------------------
+ 5 | uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+ a |                                                                b                                                                
+---+---------------------------------------------------------------------------------------------------------------------------------
+ 6 | vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+ a |                                                                b                                                                 
+---+----------------------------------------------------------------------------------------------------------------------------------
+ 7 | wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+ a |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                b                                                                                                                                                                                                                                                                                                                                                                                                          
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       
+---+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 8 | xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+(1 row)
+
+DROP TABLE test_toast_warm;
+-- Test with numeric data type
+CREATE TABLE test_toast_warm (a int unique, b numeric(10,2), c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+INSERT INTO test_toast_warm VALUES (1, 10.2, 100);
+INSERT INTO test_toast_warm VALUES (2, 11.22, 100);
+INSERT INTO test_toast_warm VALUES (3, 12.222, 100);
+INSERT INTO test_toast_warm VALUES (4, 13.20, 100);
+INSERT INTO test_toast_warm VALUES (5, 14.201, 100);
+UPDATE test_toast_warm SET b = 100.2 WHERE a = 1;
+UPDATE test_toast_warm SET b = 101.22 WHERE a = 2;
+UPDATE test_toast_warm SET b = 102.222 WHERE a = 3;
+UPDATE test_toast_warm SET b = 103.20 WHERE a = 4;
+UPDATE test_toast_warm SET b = 104.201 WHERE a = 5;
+SELECT * FROM test_toast_warm;
+ a |   b    |  c  
+---+--------+-----
+ 1 | 100.20 | 100
+ 2 | 101.22 | 100
+ 3 | 102.22 | 100
+ 4 | 103.20 | 100
+ 5 | 104.20 | 100
+(5 rows)
+
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+                        QUERY PLAN                         
+-----------------------------------------------------------
+ Index Scan using test_toast_warm_a_key on test_toast_warm
+   Index Cond: (a = 1)
+(2 rows)
+
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+                    QUERY PLAN                    
+--------------------------------------------------
+ Bitmap Heap Scan on test_toast_warm
+   Recheck Cond: (b = 10.2)
+   ->  Bitmap Index Scan on test_toast_warm_index
+         Index Cond: (b = 10.2)
+(4 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+                    QUERY PLAN                    
+--------------------------------------------------
+ Bitmap Heap Scan on test_toast_warm
+   Recheck Cond: (b = 100.2)
+   ->  Bitmap Index Scan on test_toast_warm_index
+         Index Cond: (b = 100.2)
+(4 rows)
+
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+ a | b 
+---+---
+(0 rows)
+
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+ b 
+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+   b    
+--------
+ 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+ a |   b    
+---+--------
+ 2 | 101.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+ a |   b    
+---+--------
+ 3 | 102.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+ a |   b    
+---+--------
+ 4 | 103.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+ a |   b    
+---+--------
+ 5 | 104.20
+(1 row)
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+         QUERY PLAN          
+-----------------------------
+ Seq Scan on test_toast_warm
+   Filter: (a = 1)
+(2 rows)
+
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+         QUERY PLAN          
+-----------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 10.2)
+(2 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+         QUERY PLAN          
+-----------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 100.2)
+(2 rows)
+
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+ a | b 
+---+---
+(0 rows)
+
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+ b 
+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+   b    
+--------
+ 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+ a |   b    
+---+--------
+ 2 | 101.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+ a |   b    
+---+--------
+ 3 | 102.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+ a |   b    
+---+--------
+ 4 | 103.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+ a |   b    
+---+--------
+ 5 | 104.20
+(1 row)
+
+DROP TABLE test_toast_warm;
+-- Toasted heap attributes
+CREATE TABLE toasttest(descr text , cnt int DEFAULT 0, f1 text, f2 text);
+CREATE INDEX testindx1 ON toasttest(descr);
+CREATE INDEX testindx2 ON toasttest(f1);
+INSERT INTO toasttest(descr, f1, f2) VALUES('two-compressed', repeat('1234567890',1000), repeat('1234567890',1000));
+INSERT INTO toasttest(descr, f1, f2) VALUES('two-toasted', repeat('1234567890',20000), repeat('1234567890',50000));
+INSERT INTO toasttest(descr, f1, f2) VALUES('one-compressed,one-toasted', repeat('1234567890',1000), repeat('1234567890',50000));
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+ ctid  |                                                                                                substring                                                                                                 
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,1) | (two-compressed,0,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012
+ (0,2) | (two-toasted,0,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+ (0,3) | ("one-compressed,one-toasted",0,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+(3 rows)
+
+-- UPDATE f1 by doing string manipulation, but the updated value remains the
+-- same as the old value
+UPDATE toasttest SET cnt = cnt +1, f1 = trim(leading '-' from '-'||f1) RETURNING substring(toasttest::text, 1, 200);
+                                                                                                substring                                                                                                 
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (two-compressed,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012
+ (two-toasted,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+ ("one-compressed,one-toasted",1,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+(3 rows)
+
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+                  QUERY PLAN                   
+-----------------------------------------------
+ Seq Scan on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+ ctid  |                                                                                                substring                                                                                                 
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,4) | (two-compressed,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012
+ (0,5) | (two-toasted,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+ (0,6) | ("one-compressed,one-toasted",1,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+(3 rows)
+
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+                           QUERY PLAN                            
+-----------------------------------------------------------------
+ Index Scan using testindx2 on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+ ctid  |                                                                                                substring                                                                                                 
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,6) | ("one-compressed,one-toasted",1,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+ (0,4) | (two-compressed,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012
+ (0,5) | (two-toasted,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+(3 rows)
+
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+-- UPDATE f1 for real this time
+UPDATE toasttest SET cnt = cnt +1, f1 = '-'||f1 RETURNING substring(toasttest::text, 1, 200);
+                                                                                                substring                                                                                                 
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (two-toasted,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234
+ ("one-compressed,one-toasted",2,-12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567
+(3 rows)
+
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+                  QUERY PLAN                   
+-----------------------------------------------
+ Seq Scan on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+ ctid  |                                                                                                substring                                                                                                 
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,7) | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,8) | (two-toasted,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234
+ (0,9) | ("one-compressed,one-toasted",2,-12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567
+(3 rows)
+
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+                           QUERY PLAN                            
+-----------------------------------------------------------------
+ Index Scan using testindx2 on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+ ctid  |                                                                                                substring                                                                                                 
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,9) | ("one-compressed,one-toasted",2,-12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567
+ (0,7) | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,8) | (two-toasted,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234
+(3 rows)
+
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+-- UPDATE f1 from toasted to compressed
+UPDATE toasttest SET cnt = cnt +1, f1 = repeat('1234567890',1000) WHERE descr = 'two-toasted';
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+                  QUERY PLAN                   
+-----------------------------------------------
+ Seq Scan on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+  ctid  |                                                                                                substring                                                                                                 
+--------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,7)  | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,9)  | ("one-compressed,one-toasted",2,-12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567
+ (0,10) | (two-toasted,3,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+(3 rows)
+
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+                           QUERY PLAN                            
+-----------------------------------------------------------------
+ Index Scan using testindx2 on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+  ctid  |                                                                                                substring                                                                                                 
+--------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,9)  | ("one-compressed,one-toasted",2,-12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567
+ (0,7)  | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,10) | (two-toasted,3,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+(3 rows)
+
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+-- UPDATE f1 from compressed to toasted
+UPDATE toasttest SET cnt = cnt +1, f1 = repeat('1234567890',2000) WHERE descr = 'one-compressed,one-toasted';
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+                  QUERY PLAN                   
+-----------------------------------------------
+ Seq Scan on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+  ctid  |                                                                                                substring                                                                                                 
+--------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,7)  | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,10) | (two-toasted,3,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+ (0,11) | ("one-compressed,one-toasted",3,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+(3 rows)
+
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+                           QUERY PLAN                            
+-----------------------------------------------------------------
+ Index Scan using testindx2 on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+  ctid  |                                                                                                substring                                                                                                 
+--------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,7)  | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,10) | (two-toasted,3,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+ (0,11) | ("one-compressed,one-toasted",3,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+(3 rows)
+
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+DROP TABLE toasttest;
diff --git b/src/test/regress/parallel_schedule a/src/test/regress/parallel_schedule
index 9f95b01..cd99f88 100644
--- b/src/test/regress/parallel_schedule
+++ a/src/test/regress/parallel_schedule
@@ -42,6 +42,8 @@ test: create_type
 test: create_table
 test: create_function_2
 
+test: warm
+
 # ----------
 # Load huge amounts of data
 # We should split the data files into single files and then
diff --git b/src/test/regress/sql/warm.sql a/src/test/regress/sql/warm.sql
new file mode 100644
index 0000000..fb1f93e
--- /dev/null
+++ a/src/test/regress/sql/warm.sql
@@ -0,0 +1,344 @@
+
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+
+-- This should be a HOT update as non-index key is updated, but the
+-- page won't have any free space, so probably a non-HOT update
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Check if index only scan works correctly
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab1;
+
+------------------
+
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE a = 1;
+
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE b = 701;
+
+VACUUM updtst_tab2;
+EXPLAIN (costs off) SELECT b FROM updtst_tab2 WHERE b = 701;
+SELECT b FROM updtst_tab2 WHERE b = 701;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab2;
+------------------
+
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+SELECT * FROM updtst_tab3 WHERE a = 1;
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+SET enable_seqscan = false;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+SELECT * FROM updtst_tab3 WHERE a = 2;
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab3;
+------------------
+
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where a = 'test';
+select *, ctid from test_warm where a = 'TEST';
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+DROP TABLE test_warm;
+
+--- Test with toast data types
+
+CREATE TABLE test_toast_warm (a int unique, b text, c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+
+-- insert a large enough value to cause index datum compression
+INSERT INTO test_toast_warm VALUES (1, repeat('a', 600), 100);
+INSERT INTO test_toast_warm VALUES (2, repeat('b', 2), 100);
+INSERT INTO test_toast_warm VALUES (3, repeat('c', 4), 100);
+INSERT INTO test_toast_warm VALUES (4, repeat('d', 63), 100);
+INSERT INTO test_toast_warm VALUES (5, repeat('e', 126), 100);
+INSERT INTO test_toast_warm VALUES (6, repeat('f', 127), 100);
+INSERT INTO test_toast_warm VALUES (7, repeat('g', 128), 100);
+INSERT INTO test_toast_warm VALUES (8, repeat('h', 3200), 100);
+
+UPDATE test_toast_warm SET b = repeat('q', 600) WHERE a = 1;
+UPDATE test_toast_warm SET b = repeat('r', 2) WHERE a = 2;
+UPDATE test_toast_warm SET b = repeat('s', 4) WHERE a = 3;
+UPDATE test_toast_warm SET b = repeat('t', 63) WHERE a = 4;
+UPDATE test_toast_warm SET b = repeat('u', 126) WHERE a = 5;
+UPDATE test_toast_warm SET b = repeat('v', 127) WHERE a = 6;
+UPDATE test_toast_warm SET b = repeat('w', 128) WHERE a = 7;
+UPDATE test_toast_warm SET b = repeat('x', 3200) WHERE a = 8;
+
+
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+
+DROP TABLE test_toast_warm;
+
+-- Test with numeric data type
+
+CREATE TABLE test_toast_warm (a int unique, b numeric(10,2), c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+
+INSERT INTO test_toast_warm VALUES (1, 10.2, 100);
+INSERT INTO test_toast_warm VALUES (2, 11.22, 100);
+INSERT INTO test_toast_warm VALUES (3, 12.222, 100);
+INSERT INTO test_toast_warm VALUES (4, 13.20, 100);
+INSERT INTO test_toast_warm VALUES (5, 14.201, 100);
+
+UPDATE test_toast_warm SET b = 100.2 WHERE a = 1;
+UPDATE test_toast_warm SET b = 101.22 WHERE a = 2;
+UPDATE test_toast_warm SET b = 102.222 WHERE a = 3;
+UPDATE test_toast_warm SET b = 103.20 WHERE a = 4;
+UPDATE test_toast_warm SET b = 104.201 WHERE a = 5;
+
+SELECT * FROM test_toast_warm;
+
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+
+DROP TABLE test_toast_warm;
+
+-- Toasted heap attributes
+CREATE TABLE toasttest(descr text , cnt int DEFAULT 0, f1 text, f2 text);
+CREATE INDEX testindx1 ON toasttest(descr);
+CREATE INDEX testindx2 ON toasttest(f1);
+
+INSERT INTO toasttest(descr, f1, f2) VALUES('two-compressed', repeat('1234567890',1000), repeat('1234567890',1000));
+INSERT INTO toasttest(descr, f1, f2) VALUES('two-toasted', repeat('1234567890',20000), repeat('1234567890',50000));
+INSERT INTO toasttest(descr, f1, f2) VALUES('one-compressed,one-toasted', repeat('1234567890',1000), repeat('1234567890',50000));
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+
+-- UPDATE f1 by doing string manipulation, but the updated value remains the
+-- same as the old value
+UPDATE toasttest SET cnt = cnt +1, f1 = trim(leading '-' from '-'||f1) RETURNING substring(toasttest::text, 1, 200);
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+
+-- UPDATE f1 for real this time
+UPDATE toasttest SET cnt = cnt +1, f1 = '-'||f1 RETURNING substring(toasttest::text, 1, 200);
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+
+-- UPDATE f1 from toasted to compressed
+UPDATE toasttest SET cnt = cnt +1, f1 = repeat('1234567890',1000) WHERE descr = 'two-toasted';
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+
+-- UPDATE f1 from compressed to toasted
+UPDATE toasttest SET cnt = cnt +1, f1 = repeat('1234567890',2000) WHERE descr = 'one-compressed,one-toasted';
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+
+DROP TABLE toasttest;
#198Amit Kapila
amit.kapila16@gmail.com
In reply to: Pavan Deolasee (#194)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 28, 2017 at 10:35 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Tue, Mar 28, 2017 at 7:04 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

For such a heap insert, we will pass
the actual value of the column to index_form_tuple during index insert.
However, during recheck, when we fetch the value of c2 from the heap tuple
and pass it to index_form_tuple, the value is already in compressed form and
index_form_tuple might again try to compress it because the size will
still be greater than TOAST_INDEX_TARGET; if it does so, it might
make the recheck fail.

Would it? I thought the "if
(!VARATT_IS_EXTENDED(DatumGetPointer(untoasted_values[i]))" check should
prevent that. But I could be reading those macros wrong. They are largely
uncommented and it's not clear what each of those VARATT_* macros does.

That won't handle the case where the value is simply compressed. You need
a check like VARATT_IS_COMPRESSED to take care of compressed heap
tuples, but even then it won't work because heap_tuple_fetch_attr()
doesn't handle compressed tuples. You need to use
heap_tuple_untoast_attr() to handle the compressed case. Also, we
probably need to handle other types of var attrs. Now, if we want to
do all of that, index_form_tuple() might not be the right place; we
probably want to handle it in the caller or provide an alternate API.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#199Amit Kapila
amit.kapila16@gmail.com
In reply to: Amit Kapila (#198)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Mar 29, 2017 at 11:52 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:

On Tue, Mar 28, 2017 at 10:35 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Tue, Mar 28, 2017 at 7:04 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

For such a heap insert, we will pass
the actual value of the column to index_form_tuple during index insert.
However, during recheck, when we fetch the value of c2 from the heap tuple
and pass it to index_form_tuple, the value is already in compressed form and
index_form_tuple might again try to compress it because the size will
still be greater than TOAST_INDEX_TARGET; if it does so, it might
make the recheck fail.

Would it? I thought the "if
(!VARATT_IS_EXTENDED(DatumGetPointer(untoasted_values[i]))" check should
prevent that. But I could be reading those macros wrong. They are largely
uncommented and it's not clear what each of those VARATT_* macros does.

That won't handle the case where the value is simply compressed. You need
a check like VARATT_IS_COMPRESSED to take care of compressed heap
tuples, but even then it won't work because heap_tuple_fetch_attr()
doesn't handle compressed tuples. You need to use
heap_tuple_untoast_attr() to handle the compressed case. Also, we
probably need to handle other types of var attrs. Now, if we want to
do all of that, index_form_tuple() might not be the right place; we
probably want to handle it in the caller or provide an alternate API.

Another related point: index_form_tuple() has a check for VARATT_IS_EXTERNAL,
not VARATT_IS_EXTENDED, so maybe that is the cause of the confusion.
But as I mentioned, even if you change the check, heap_tuple_fetch_attr()
won't suffice.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#200Amit Kapila
amit.kapila16@gmail.com
In reply to: Pavan Deolasee (#192)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 28, 2017 at 10:31 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Tue, Mar 28, 2017 at 4:05 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

As asked previously, can you explain on what basis you consider it
robust? The comments on top of datumIsEqual() clearly
indicate the danger of using it for toasted values ("Also, it will
probably not give the answer you want if either datum has been
'toasted'.").

Hmm. I don't see why the new code in recheck is unsafe. The index values
themselves can't be toasted (IIUC), but they can be compressed.
index_form_tuple() already untoasts any toasted heap attributes and
compresses them if needed. So once we pass heap values via
index_form_tuple() we should have exactly the same index values as they were
inserted. Or am I missing something obvious here?

I don't think relying on datum comparison for compressed values from
the heap and the index is safe (even after you try to form the index
tuple from the heap value again during recheck), and I have mentioned
one of the hazards of doing so upthread. Do you see anywhere else
where we rely on comparison of compressed values?

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#201Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Amit Kapila (#199)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Mar 29, 2017 at 12:02 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Wed, Mar 29, 2017 at 11:52 AM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Tue, Mar 28, 2017 at 10:35 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Tue, Mar 28, 2017 at 7:04 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

For such a heap insert, we will pass
the actual value of the column to index_form_tuple during index insert.
However, during recheck, when we fetch the value of c2 from the heap tuple
and pass it to index_form_tuple, the value is already in compressed form and
index_form_tuple might again try to compress it because the size will
still be greater than TOAST_INDEX_TARGET; if it does so, it might
make the recheck fail.

Would it? I thought the "if
(!VARATT_IS_EXTENDED(DatumGetPointer(untoasted_values[i]))" check should
prevent that. But I could be reading those macros wrong. They are largely
uncommented and it's not clear what each of those VARATT_* macros does.

That won't handle the case where the value is simply compressed. You need
a check like VARATT_IS_COMPRESSED to take care of compressed heap
tuples, but even then it won't work because heap_tuple_fetch_attr()
doesn't handle compressed tuples. You need to use
heap_tuple_untoast_attr() to handle the compressed case. Also, we
probably need to handle other types of var attrs. Now, if we want to
do all of that, index_form_tuple() might not be the right place; we
probably want to handle it in the caller or provide an alternate API.

Another related, index_form_tuple() has a check for VARATT_IS_EXTERNAL
not VARATT_IS_EXTENDED, so may be that is cause of confusion for you,
but as I mentioned even if you change the check heap_tuple_fetch_attr
won't suffice the need.

I am confused :-(

Assuming big-endian machine:

VARATT_IS_4B_U - !toasted && !compressed
VARATT_IS_4B_C - compressed (may or may not be toasted)
VARATT_IS_4B - !toasted (may or may not be compressed)
VARATT_IS_1B_E - toasted

#define VARATT_IS_EXTERNAL(PTR) VARATT_IS_1B_E(PTR)
#define VARATT_IS_EXTENDED(PTR) (!VARATT_IS_4B_U(PTR))

So VARATT_IS_EXTENDED means that the value is (toasted || compressed). If
we are looking at a plain heap value (not externally toasted), then
EXTENDED implies in-heap compression. If we are looking at a value just
fetched back from the TOAST table, then EXTENDED means the value was
stored compressed in the TOAST.
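That predicate logic can be condensed into a tiny truth-table sketch (a
hypothetical Python model for illustration only; the real macros inspect
varlena header bits in C, and these function names merely mirror them):

```python
# Hypothetical model of the VARATT_* predicates discussed above.
# A value is described by two flags: externally toasted, and compressed.

def varatt_is_4b_u(toasted, compressed):
    # plain inline value: neither toasted nor compressed
    return not toasted and not compressed

def varatt_is_external(toasted, compressed):
    # externally toasted, regardless of compression
    return toasted

def varatt_is_extended(toasted, compressed):
    # "extended" = anything that is not a plain inline value,
    # i.e. toasted OR compressed
    return not varatt_is_4b_u(toasted, compressed)

# EXTENDED is true for toasted-only, compressed-only, and both:
assert varatt_is_extended(True, False)
assert varatt_is_extended(False, True)
assert varatt_is_extended(True, True)
assert not varatt_is_extended(False, False)
```

So once an external value has been untoasted, a surviving EXTENDED bit can
only mean compression, which is why the compression branch quoted later in
this message skips already-compressed datums.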

index_form_tuple() first checks whether the value is externally toasted and
fetches the untoasted value if so. After that it checks
!VARATT_IS_EXTENDED, i.e. whether the value is (!toasted && !compressed),
and only then tries to apply compression to it. It can't be a toasted value
because if it were, we just untoasted it. But it can be compressed, either
in the heap or in the TOAST, in which case we don't try to compress it
again. That makes sense because if the value is already compressed there is
no point applying compression again.

Now what you're suggesting (it seems) is that when in-heap compression is
used and ExecInsertIndexTuples calls FormIndexDatum to create index tuple
values, it always passes uncompressed heap values. So when the index tuple
is originally inserted, index_form_tuple() will try to compress the value
and see if it fits in the index.

Then during recheck, we pass already compressed values to
index_form_tuple(). But my point is, the following code will ensure that we
don't compress it again. My reading is that the first check for
!VARATT_IS_EXTENDED will return false if the value is already compressed.

/*
 * If value is above size target, and is of a compressible datatype,
 * try to compress it in-line.
 */
if (!VARATT_IS_EXTENDED(DatumGetPointer(untoasted_values[i])) &&
    VARSIZE(DatumGetPointer(untoasted_values[i])) > TOAST_INDEX_TARGET &&
    (att->attstorage == 'x' || att->attstorage == 'm'))
{
    Datum cvalue = toast_compress_datum(untoasted_values[i]);

    if (DatumGetPointer(cvalue) != NULL)
    {
        /* successful compression */
        if (untoasted_free[i])
            pfree(DatumGetPointer(untoasted_values[i]));
        untoasted_values[i] = cvalue;
        untoasted_free[i] = true;
    }
}

TBH I couldn't find why the original index insertion code will always
supply uncompressed values. But even if it does, and even if the recheck
gets the value in compressed form, I don't see how we will double-compress
it.

As far as comparing two compressed values goes, I don't see a problem
there. Exact same compressed values should decompress to the exact same
value, so comparing the two compressed values and comparing the two
uncompressed values should give us the same result.
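The forward direction of that claim — byte-identical compressed values
necessarily decompress to the same value — holds for any correct
compressor. A quick sketch, using Python's zlib as a stand-in for pglz (an
assumption for illustration; pglz itself is not exercised here):

```python
import zlib

original = b"1234567890" * 1000

# Compressing the same input twice with the same settings yields the
# same byte stream here, and equal compressed byte strings necessarily
# decompress to equal values. So byte-equality of compressed datums
# implies equality of the underlying values.
c1 = zlib.compress(original)
c2 = zlib.compress(original)
assert c1 == c2
assert zlib.decompress(c1) == original
```

Note this only shows that equal compressed bytes mean equal values; whether
a compressor is guaranteed to always emit the same bytes for the same input
is a separate question.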

Would you mind creating a test case to demonstrate the situation? I added a
few more test cases to src/test/regress/sql/warm.sql, and they also show
how to check for duplicate key scans. If you could come up with a case that
shows the problem, it would help immensely.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#202Amit Kapila
amit.kapila16@gmail.com
In reply to: Pavan Deolasee (#201)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Mar 29, 2017 at 1:10 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Wed, Mar 29, 2017 at 12:02 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Wed, Mar 29, 2017 at 11:52 AM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

Then during recheck, we pass already compressed values to
index_form_tuple(). But my point is, the following code will ensure that we
don't compress it again. My reading is that the first check for
!VARATT_IS_EXTENDED will return false if the value is already compressed.

You are right. I was confused by the earlier check of VARATT_IS_EXTERNAL.

TBH I couldn't find why the original index insertion code will always supply
uncompressed values.

Just try inserting a large value ('aaaaaa.....bbb'), up to 2.5K, into a
text column. Then set a breakpoint in heap_prepare_insert and
index_form_tuple and debug both functions; you will find that even though
we compress the value during insertion into the heap, the index will
compress the original value again.

But even if it does, and even if the recheck gets it in
compressed form, I don't see how we will double-compress that.

No, as I agreed above, it won't double-compress, but it still looks
slightly risky to rely on different sets of values passed to
index_form_tuple and then compare them.

As far as, comparing two compressed values go, I don't see a problem there.
Exact same compressed values should decompress to exact same value. So
comparing two compressed values and two uncompressed values should give us
the same result.

Yeah, you are probably right, but I am not sure it is a good idea to
compare compressed values.

I think with these new changes in btrecheck, it appears much costlier
compared to what you had a few versions back. I am afraid it can hurt
performance for cases where there are few WARM updates in a chain and many
HOT updates, since it will run the recheck for all such updates. Did we
ever try to measure the performance of cases like that?

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#203Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#202)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Mar 29, 2017 at 7:12 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:

No, as I agreed above, it won't double-compress, but it still looks
slightly risky to rely on different sets of values passed to
index_form_tuple and then compare them.

It assumes that the compressor is completely deterministic, which I'm
fairly sure is true today, but might be false in the future. For example:

https://groups.google.com/forum/#!topic/snappy-compression/W8v_ydnEPuc

We've talked about using snappy as a compression algorithm before, and
if the above post is correct, an upgrade to the snappy library version
is an example of a change that would break the assumption in question.
I think it's generally true for almost any modern compression
algorithm (including pglz) that there are multiple compressed texts
that would decompress to the same uncompressed text. Any algorithm is
required to promise that it will always produce one of the compressed
texts that decompress back to the original, but not necessarily that
it will always produce the same one.

As another example of this, consider that zlib (gzip) has a variety of
options to control compression behavior, such as, most obviously, the
compression level (1 .. 9).
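To make the non-determinism concrete, here is a quick sketch using Python's
zlib purely as a stand-in for pglz (the library, names, and data are
illustrative only, not anything PostgreSQL actually does):

```python
import zlib

data = b"abcdefghij" * 1000

# Two different compression settings for the very same input...
fast = zlib.compress(data, 1)   # speed-oriented
best = zlib.compress(data, 9)   # ratio-oriented

# ...produce different compressed byte strings,
assert fast != best
# yet both decompress back to the identical original text.
assert zlib.decompress(fast) == data
assert zlib.decompress(best) == data
```

So byte-wise equality of two compressed values proves the originals are
equal, but byte-wise inequality proves nothing about the originals.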

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#204Alvaro Herrera
alvherre@2ndquadrant.com
In reply to: Pavan Deolasee (#197)
1 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

Pavan Deolasee wrote:

On Wed, Mar 29, 2017 at 3:42 AM, Alvaro Herrera <alvherre@2ndquadrant.com>
wrote:

I pushed 0002 after some makeup, since it's just cosmetic and not
controversial.

Thanks. I think your patch for tracking interesting attributes seems OK too,
now that the performance issue has been addressed. Even though we can still
improve it further, at least Mithun confirmed that there is no significant
regression anymore, and in fact for one artificial case the patch does better
than even master.

Great, thanks. I pushed it, too. One optimization we could try is
using slot deform instead of repeated heap_getattr(). Patch is
attached. I haven't benchmarked it.

On top of that, though perhaps getting into the realm of excessive
complication, we could check whether the bitmapset is a singleton, and if it
is, do heap_getattr without creating the slot. That would require a
second copy of heap_tuple_attr_equals() that takes a HeapTuple instead
of a TupleTableSlot.

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachments:

interesting-speedup.patchtext/plain; charset=us-asciiDownload
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 0c3e2b0..976de99 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -56,6 +56,7 @@
 #include "access/xlogutils.h"
 #include "catalog/catalog.h"
 #include "catalog/namespace.h"
+#include "executor/tuptable.h"
 #include "miscadmin.h"
 #include "pgstat.h"
 #include "storage/bufmgr.h"
@@ -4337,7 +4338,7 @@ l2:
  */
 static bool
 heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
-					   HeapTuple tup1, HeapTuple tup2)
+					   TupleTableSlot *tup1, TupleTableSlot *tup2)
 {
 	Datum		value1,
 				value2;
@@ -4366,13 +4367,10 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 	}
 
 	/*
-	 * Extract the corresponding values.  XXX this is pretty inefficient if
-	 * there are many indexed columns.  Should HeapDetermineModifiedColumns do
-	 * a single heap_deform_tuple call on each tuple, instead?	But that
-	 * doesn't work for system columns ...
+	 * Extract the corresponding values.
 	 */
-	value1 = heap_getattr(tup1, attrnum, tupdesc, &isnull1);
-	value2 = heap_getattr(tup2, attrnum, tupdesc, &isnull2);
+	value1 = slot_getattr(tup1, attrnum, &isnull1);
+	value2 = slot_getattr(tup2, attrnum, &isnull2);
 
 	/*
 	 * If one value is NULL and other is not, then they are certainly not
@@ -4424,17 +4422,27 @@ HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
 {
 	int		attnum;
 	Bitmapset *modified = NULL;
+	TupleTableSlot *oldslot;
+	TupleTableSlot *newslot;
+
+	oldslot = MakeSingleTupleTableSlot(RelationGetDescr(relation));
+	ExecStoreTuple(oldtup, oldslot, InvalidBuffer, false);
+	newslot = MakeSingleTupleTableSlot(RelationGetDescr(relation));
+	ExecStoreTuple(newtup, newslot, InvalidBuffer, false);
 
 	while ((attnum = bms_first_member(interesting_cols)) >= 0)
 	{
 		attnum += FirstLowInvalidHeapAttributeNumber;
 
 		if (!heap_tuple_attr_equals(RelationGetDescr(relation),
-								   attnum, oldtup, newtup))
+									attnum, oldslot, newslot))
 			modified = bms_add_member(modified,
 									  attnum - FirstLowInvalidHeapAttributeNumber);
 	}
 
+	ExecDropSingleTupleTableSlot(oldslot);
+	ExecDropSingleTupleTableSlot(newslot);
+
 	return modified;
 }
 
#205Dilip Kumar
dilipbalaut@gmail.com
In reply to: Pavan Deolasee (#197)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Mar 29, 2017 at 11:51 AM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

Thanks. I think your patch for tracking interesting attributes seems OK too,
now that the performance issue has been addressed. Even though we can still
improve it further, at least Mithun confirmed that there is no significant
regression anymore, and in fact for one artificial case the patch does better
than even master.

I was trying to compile these patches on the latest
head (f90d23d0c51895e0d7db7910538e85d3d38691f0) for some testing, but I
was not able to compile them.

make[3]: *** [postgres.bki] Error 1
make[3]: Leaving directory
`/home/dilip/work/pg_codes/pbms_final/postgresql/src/backend/catalog'
make[2]: *** [submake-schemapg] Error 2
make[2]: Leaving directory
`/home/dilip/work/pg_codes/pbms_final/postgresql/src/backend'
make[1]: *** [all-backend-recurse] Error 2
make[1]: Leaving directory `/home/dilip/work/pg_codes/pbms_final/postgresql/src'
make: *** [all-src-recurse] Error 2

I tried doing maintainer-clean and deleting postgres.bki, but I still get the same error.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


#206Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Amit Kapila (#202)
1 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Mar 29, 2017 at 4:42 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Wed, Mar 29, 2017 at 1:10 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Wed, Mar 29, 2017 at 12:02 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Wed, Mar 29, 2017 at 11:52 AM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

Then during recheck, we pass already compressed values to
index_form_tuple(). But my point is, the following code will ensure that we
don't compress it again. My reading is that the first check for
!VARATT_IS_EXTENDED will return false if the value is already compressed.

You are right. I was confused by the previous check for VARATT_IS_EXTERNAL.

Ok, thanks.

TBH I couldn't find why the original index insertion code will always supply
uncompressed values.

Just try inserting a large value into a text column ('aaaaaa.....bbb'),
up to 2.5K. Then set a breakpoint in heap_prepare_insert and
index_form_tuple and step through both functions; you can see that
even though we compress during insertion into the heap, the index will
compress the original value again.

Ok, tried that. AFAICS index_form_tuple gets compressed values.

Yeah, you are probably right, but I am not sure if it is a good idea to
compare compressed values.

Again, I don't see a problem there.

I think with these new changes in btrecheck, it would appear to be much
costlier compared to what you had a few versions back. I am afraid
that it can impact performance for cases where there are few WARM
updates in a chain and many HOT updates, since it will run the recheck for
all such updates.

My feeling is that the recheck could be costly for very fat indexes, but
not doing WARM could be costly too for such indexes. We can possibly
construct a worst case where
1. set up a table with a fat index.
2. do a WARM update to a tuple
3. then do several HOT updates to the same tuple
4. query the row via the fat index.

Initialisation:

-- Adjust parameters to force index scans
-- enable_seqscan to false
-- seq_page_cost = 10000

DROP TABLE IF EXISTS pgbench_accounts;

CREATE TABLE pgbench_accounts (
aid text,
bid bigint,
abalance bigint,
filler1 text DEFAULT md5(random()::text),
filler2 text DEFAULT md5(random()::text),
filler3 text DEFAULT md5(random()::text),
filler4 text DEFAULT md5(random()::text),
filler5 text DEFAULT md5(random()::text),
filler6 text DEFAULT md5(random()::text),
filler7 text DEFAULT md5(random()::text),
filler8 text DEFAULT md5(random()::text),
filler9 text DEFAULT md5(random()::text),
filler10 text DEFAULT md5(random()::text),
filler11 text DEFAULT md5(random()::text),
filler12 text DEFAULT md5(random()::text)
) WITH (fillfactor=90);
\set end 0
\set start (:end + 1)
\set end (:start + (:scale * 100))

INSERT INTO pgbench_accounts SELECT generate_series(:start, :end )::text ||
<2300 chars string>, (random()::bigint) % :scale, 0;

CREATE UNIQUE INDEX pgb_a_aid ON pgbench_accounts(aid);
CREATE INDEX pgb_a_filler1 ON pgbench_accounts(filler1);
CREATE INDEX pgb_a_filler2 ON pgbench_accounts(filler2);
CREATE INDEX pgb_a_filler3 ON pgbench_accounts(filler3);
CREATE INDEX pgb_a_filler4 ON pgbench_accounts(filler4);

-- Force a WARM update on one row
UPDATE pgbench_accounts SET filler1 = 'X' WHERE aid = '100' ||
repeat('abcdefghij', 20000);

Test:
-- Fetch the row using the fat index. Since the row contains a
BEGIN;
SELECT substring(aid, 1, 10) FROM pgbench_accounts WHERE aid = '100' ||
<2300 chars string> ORDER BY aid;
UPDATE pgbench_accounts SET abalance = abalance + 100 WHERE aid = '100' ||
<2300 chars string>;
END;

I did four 5-minute runs each with master and WARM, and there is probably a
2-3% regression.

(Results with 5 mins tests, txns is total for 5 mins, idx_scan is number of
scans on the fat index)
master:
txns idx_scan
414117 828233
411109 822217
411848 823695
408424 816847

WARM:
txns idx_scan
404139 808277
398880 797759
399949 799897
397927 795853

==========

I then repeated the tests, this time using compressible values.
The regression in this case is much higher, maybe 15% or more.

INSERT INTO pgbench_accounts SELECT generate_series(:start, :end )::text ||
repeat('abcdefghij', 20000), (random()::bigint) % :scale, 0;

-- Fetch the row using the fat index. Since the row contains a
BEGIN;
SELECT substring(aid, 1, 10) FROM pgbench_accounts WHERE aid = '100' ||
repeat('abcdefghij', 20000) ORDER BY aid;
UPDATE pgbench_accounts SET abalance = abalance + 100 WHERE aid = '100' ||
repeat('abcdefghij', 20000);
END;

(Results with 5 mins tests, txns is total for 5 mins, idx_scan is number of
scans on the fat index)
master:
txns idx_scan
56976 113953
56822 113645
56915 113831
56865 113731

WARM:
txns idx_scan
49044 98087
49020 98039
49007 98013
49006 98011

But TBH I believe this regression is coming from the changes
to heap_tuple_attr_equals, where we are decompressing both the old and new
values and then comparing them. For values 200K bytes long, that must cost
something. Another reason I think so is that I accidentally did one
run which did not use index scans and did not perform any WARM updates, yet
the regression was quite similar. That makes me think the
regression is coming from somewhere else, and the change in
heap_tuple_attr_equals seems like a good candidate.

I think we can fix that by comparing compressed values. I know you had
raised concerns, but Robert confirmed that (IIUC) it's not a problem today.
We will figure out how to deal with it if we ever add support for different
compression algorithms or compression levels. I also think this is a
somewhat synthetic use case, and the fact that there is not much regression
with index keys as large as 2K bytes seems quite comforting to me.
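One conservative variant of "comparing compressed values" can be sketched as
follows, again with Python's zlib standing in for pglz (a hypothetical
helper, not the actual patch code): treat equal compressed bytes as proof of
equality and decompress only when the bytes differ. Unmodified columns then
take the cheap byte-comparison path every time, while the fallback keeps the
comparison correct even if the compressor is ever non-deterministic.

```python
import zlib

def compressed_datums_equal(c1, c2):
    # Fast path: decompression is deterministic, so identical
    # compressed bytes guarantee identical original values.
    if c1 == c2:
        return True
    # Slow path: different compressed bytes are inconclusive (two
    # distinct compressed texts can decode to the same value, e.g.
    # across compressor versions), so decompress and compare.
    return zlib.decompress(c1) == zlib.decompress(c2)
```

Note this differs from a plain byte comparison in that it never reports
"unequal" for values that merely compressed differently.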

===========

Apart from this, I also ran some benchmarks after removing the index on the
abalance column in my test suite so that all updates are HOT updates. I did
not find any regression in that scenario. WARM was a percent or so
better, but I assume that's just noise. These benchmarks were done at scale
factor 100, running for 1 hour each. Headline numbers are:

WARM: 5802 txns/sec
master: 5719 txns/sec.

===========

Another workload where WARM could cause regression is one where there are
many indexes on a table and UPDATEs modify all but one of them. We will do a
WARM update in this case, but since N-1 indexes will get a new index entry
anyway, the benefits of WARM will be marginal. There will also be increased
autovacuum cost because we will scan those N-1 indexes for cleanup.

While this may be an atypical workload, it's probably worth guarding
against. I propose that we stop WARM at the source if we detect that
more than a certain percentage of indexes will be updated by an UPDATE
statement. Of course, we could be fancier and look at each index structure
and arrive at a cost model, but a simple 50% rule seems a good starting
point: if an UPDATE is going to modify more than 50% of the indexes, do a
non-WARM update. The attached patch adds that support.
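The 50% rule boils down to a small amount of counting; here is a Python
sketch of the decision (hypothetical names, assuming per-index attribute
sets; the attached patch does the equivalent over Bitmapsets in
heap_update):

```python
def should_use_warm(index_attrs, modified_attrs, threshold=0.5):
    """index_attrs: one set of column numbers per index on the table.
    modified_attrs: the columns changed by this UPDATE.
    Returns True if a WARM update still looks worthwhile, i.e. the
    fraction of indexes needing a new entry anyway is at most the
    threshold (50% by default)."""
    updating = sum(1 for attrs in index_attrs if attrs & modified_attrs)
    return updating / len(index_attrs) <= threshold

# With 5 indexes and only 1 affected, WARM still pays off...
assert should_use_warm([{1}, {2}, {3}, {4}, {5}], {1})
# ...but when 2 of 3 indexes get new entries anyway, skip WARM.
assert not should_use_warm([{1}, {2}, {3}], {1, 2})
```

As in the patch, an update touching exactly half the indexes still takes the
WARM path, since the comparison is "less than or equal to" the threshold.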

I ran tests by modifying the benchmark used for the previous tests, adding
the abalance column to all indexes except the one on aid. With the patch
applied, there are zero WARM updates on the table (as expected). The
headline numbers are:

master: 4101 txns/sec
WARM: 4033 txns/sec

So probably within acceptable range.

============

Finally, I tested another workload with six indexes in total, where each
UPDATE modifies three of them and leaves the other three untouched. Ran it
at scale factor 100 for 1 hour each. The headline numbers:

master: 3679 txns/sec (I don't see a reason why master should be worse than
in the 5-index-update case, so this probably needs more runs to rule out
aberration)
WARM: 4050 txns/sec (not much different from the no-WARM-update case, but
since master degenerated, probably worth doing another round; I am using an
AWS instance and it's not the first time I have seen aberrations)

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0008_disable_warm_on_manyindex_update.patchapplication/octet-stream; name=0008_disable_warm_on_manyindex_update.patchDownload
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 5de683a..2b5d8d2 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -3712,6 +3712,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Bitmapset  *interesting_attrs;
 	Bitmapset  *modified_attrs;
 	Bitmapset  *notready_attrs;
+	List	   *indexattrsList;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3786,6 +3787,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	notready_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_NOTREADY);
 
+	indexattrsList = RelationGetIndexAttrList(relation);
 
 	block = ItemPointerGetBlockNumber(otid);
 	buffer = ReadBuffer(relation, block);
@@ -4455,7 +4457,28 @@ l2:
 				!bms_overlap(modified_attrs, exprindx_attrs) &&
 				!bms_is_subset(hot_attrs, modified_attrs) &&
 				!bms_overlap(notready_attrs, modified_attrs))
-				use_warm_update = true;
+			{
+				int num_indexes, num_updating_indexes;
+				ListCell *l;
+
+				/*
+				 * Everything else is Ok. Now check if the update will require
+				 * less than or equal to 50% index updates. Anything above
+				 * that, we can just do a regular update and save on WARM
+				 * cleanup cost.
+				 */
+				num_indexes = list_length(indexattrsList);
+				num_updating_indexes = 0;
+				foreach (l, indexattrsList)
+				{
+					Bitmapset  *b = (Bitmapset *) lfirst(l);
+					if (bms_overlap(b, modified_attrs))
+						num_updating_indexes++;
+				}
+
+				if ((double)num_updating_indexes/num_indexes <= 0.5)
+					use_warm_update = true;
+			}
 		}
 	}
 	else
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index 4be2445..c7266d7 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -4852,6 +4852,7 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 	Bitmapset  *indxnotreadyattrs;	/* columns in not ready indexes */
 	List	   *indexoidlist;
 	List	   *newindexoidlist;
+	List	   *indexattrsList;
 	Oid			relpkindex;
 	Oid			relreplindex;
 	ListCell   *l;
@@ -4920,6 +4921,7 @@ restart:
 	pkindexattrs = NULL;
 	idindexattrs = NULL;
 	indxnotreadyattrs = NULL;
+	indexattrsList = NIL;
 	foreach(l, indexoidlist)
 	{
 		Oid			indexOid = lfirst_oid(l);
@@ -4929,6 +4931,7 @@ restart:
 		bool		isKey;		/* candidate key */
 		bool		isPK;		/* primary key */
 		bool		isIDKey;	/* replica identity index */
+		Bitmapset	*thisindexattrs = NULL;
 
 		indexDesc = index_open(indexOid, AccessShareLock);
 
@@ -4953,6 +4956,9 @@ restart:
 
 			if (attrnum != 0)
 			{
+				thisindexattrs = bms_add_member(thisindexattrs,
+							   attrnum - FirstLowInvalidHeapAttributeNumber);
+
 				indexattrs = bms_add_member(indexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
 
@@ -4985,6 +4991,7 @@ restart:
 		 * and predicates too.
 		 */
 		indexattrs = bms_add_members(indexattrs, exprindexattrs);
+		thisindexattrs = bms_add_members(thisindexattrs, exprindexattrs);
 
 		if (!indexInfo->ii_ReadyForInserts)
 			indxnotreadyattrs = bms_add_members(indxnotreadyattrs,
@@ -4998,6 +5005,7 @@ restart:
 		if (!indexDesc->rd_amroutine->amrecheck)
 			supportswarm = false;
 
+		indexattrsList = lappend(indexattrsList, thisindexattrs);
 
 		index_close(indexDesc, AccessShareLock);
 	}
@@ -5026,7 +5034,7 @@ restart:
 		bms_free(pkindexattrs);
 		bms_free(idindexattrs);
 		bms_free(indexattrs);
-
+		list_free_deep(indexattrsList);
 		goto restart;
 	}
 
@@ -5046,6 +5054,8 @@ restart:
 	relation->rd_idattr = NULL;
 	bms_free(relation->rd_indxnotreadyattr);
 	relation->rd_indxnotreadyattr = NULL;
+	list_free_deep(relation->rd_indexattrsList);
+	relation->rd_indexattrsList = NIL;
 
 	/*
 	 * Now save copies of the bitmaps in the relcache entry.  We intentionally
@@ -5061,6 +5071,18 @@ restart:
 	relation->rd_exprindexattr = bms_copy(exprindexattrs);
 	relation->rd_indexattr = bms_copy(bms_union(indexattrs, exprindexattrs));
 	relation->rd_indxnotreadyattr = bms_copy(indxnotreadyattrs);
+
+	/*
+	 * create a deep copy of the list, copying each bitmap in the
+	 * CurrentMemoryContext.
+	 */
+	foreach(l, indexattrsList)
+	{
+		Bitmapset *b = (Bitmapset *) lfirst(l);
+		relation->rd_indexattrsList = lappend(relation->rd_indexattrsList,
+				bms_copy(b));
+	}
+
 	MemoryContextSwitchTo(oldcxt);
 
 	/* We return our original working copy for caller to play with */
@@ -5085,6 +5107,34 @@ restart:
 }
 
 /*
+ * Get a list of bitmaps, where each bitmap contains a list of attributes used
+ * by one index.
+ *
+ * The actual information is computed in RelationGetIndexAttrBitmap, but
+ * currently the only consumer of this function calls it immediately after
+ * calling RelationGetIndexAttrBitmap, we should be fine. We don't expect any
+ * relcache invalidation to come between these two calls and hence don't expect
+ * the cached information to change underneath.
+ */
+List *
+RelationGetIndexAttrList(Relation relation)
+{
+	ListCell   *l;
+	List	   *indexattrsList = NIL;
+
+	/*
+	 * Create a deep copy of the list by copying bitmaps in the
+	 * CurrentMemoryContext.
+	 */
+	foreach(l, relation->rd_indexattrsList)
+	{
+		Bitmapset *b = (Bitmapset *) lfirst(l);
+		indexattrsList = lappend(indexattrsList, bms_copy(b));
+	}
+	return indexattrsList;
+}
+
+/*
  * RelationGetExclusionInfo -- get info about index's exclusion constraint
  *
  * This should be called only for an index that is known to have an
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index cd1976a..4b173b5 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -149,6 +149,8 @@ typedef struct RelationData
 	Bitmapset  *rd_keyattr;		/* cols that can be ref'd by foreign keys */
 	Bitmapset  *rd_pkattr;		/* cols included in primary key */
 	Bitmapset  *rd_idattr;		/* included in replica identity index */
+	List	   *rd_indexattrsList;	/* List of bitmaps, describing list of
+									   attributes for each index */
 	bool		rd_supportswarm;/* True if the table can be WARM updated */
 
 	PublicationActions  *rd_pubactions;	/* publication actions */
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index d5b3072..06c0183 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -58,6 +58,7 @@ typedef enum IndexAttrBitmapKind
 
 extern Bitmapset *RelationGetIndexAttrBitmap(Relation relation,
 						   IndexAttrBitmapKind keyAttrs);
+extern List *RelationGetIndexAttrList(Relation relation);
 
 extern void RelationGetExclusionInfo(Relation indexRelation,
 						 Oid **operators,
#207Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Dilip Kumar (#205)
4 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Mar 30, 2017 at 3:29 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Wed, Mar 29, 2017 at 11:51 AM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

Thanks. I think your patch for tracking interesting attributes seems OK too,
now that the performance issue has been addressed. Even though we can still
improve it further, at least Mithun confirmed that there is no significant
regression anymore, and in fact for one artificial case the patch does better
than even master.

I was trying to compile these patches on latest
head(f90d23d0c51895e0d7db7910538e85d3d38691f0) for some testing but I
was not able to compile it.

make[3]: *** [postgres.bki] Error 1

Looks like an OID conflict to me. Please try the rebased patch set.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0001-Track-root-line-pointer-v23_v23.patchapplication/octet-stream; name=0001-Track-root-line-pointer-v23_v23.patchDownload
From 3adc9e74e64719674b92c902b80c8c2abd9aa3af Mon Sep 17 00:00:00 2001
From: Pavan Deolasee <pavan.deolasee@gmail.com>
Date: Tue, 28 Feb 2017 10:34:30 +0530
Subject: [PATCH 1/4] Track root line pointer - v23

Store the root line pointer of the WARM chain in the t_ctid.ip_posid field of
the last tuple in the chain and mark the tuple header with HEAP_TUPLE_LATEST
flag to record that fact.
---
 src/backend/access/heap/heapam.c      | 209 ++++++++++++++++++++++++++++------
 src/backend/access/heap/hio.c         |  25 +++-
 src/backend/access/heap/pruneheap.c   | 126 ++++++++++++++++++--
 src/backend/access/heap/rewriteheap.c |  21 +++-
 src/backend/executor/execIndexing.c   |   3 +-
 src/backend/executor/execMain.c       |   4 +-
 src/include/access/heapam.h           |   1 +
 src/include/access/heapam_xlog.h      |   4 +-
 src/include/access/hio.h              |   4 +-
 src/include/access/htup_details.h     |  97 +++++++++++++++-
 10 files changed, 428 insertions(+), 66 deletions(-)

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 0c3e2b0..30262ef 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -94,7 +94,8 @@ static HeapTuple heap_prepare_insert(Relation relation, HeapTuple tup,
 					TransactionId xid, CommandId cid, int options);
 static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
-				HeapTuple newtup, HeapTuple old_key_tup,
+				HeapTuple newtup, OffsetNumber root_offnum,
+				HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
 static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
 							 Bitmapset *interesting_cols,
@@ -2264,13 +2265,13 @@ heap_get_latest_tid(Relation relation,
 		 */
 		if ((tp.t_data->t_infomask & HEAP_XMAX_INVALID) ||
 			HeapTupleHeaderIsOnlyLocked(tp.t_data) ||
-			ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid))
+			HeapTupleHeaderIsHeapLatest(tp.t_data, &ctid))
 		{
 			UnlockReleaseBuffer(buffer);
 			break;
 		}
 
-		ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tp.t_data, &ctid);
 		priorXmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		UnlockReleaseBuffer(buffer);
 	}							/* end of loop */
@@ -2401,6 +2402,7 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	bool		all_visible_cleared = false;
+	OffsetNumber	root_offnum;
 
 	/*
 	 * Fill in tuple header fields, assign an OID, and toast the tuple if
@@ -2439,8 +2441,13 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
-	RelationPutHeapTuple(relation, buffer, heaptup,
-						 (options & HEAP_INSERT_SPECULATIVE) != 0);
+	root_offnum = RelationPutHeapTuple(relation, buffer, heaptup,
+						 (options & HEAP_INSERT_SPECULATIVE) != 0,
+						 InvalidOffsetNumber);
+
+	/* We must not overwrite the speculative insertion token. */
+	if ((options & HEAP_INSERT_SPECULATIVE) == 0)
+		HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 	if (PageIsAllVisible(BufferGetPage(buffer)))
 	{
@@ -2668,6 +2675,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 	Size		saveFreeSpace;
 	bool		need_tuple_data = RelationIsLogicallyLogged(relation);
 	bool		need_cids = RelationIsAccessibleInLogicalDecoding(relation);
+	OffsetNumber	root_offnum;
 
 	needwal = !(options & HEAP_INSERT_SKIP_WAL) && RelationNeedsWAL(relation);
 	saveFreeSpace = RelationGetTargetPageFreeSpace(relation,
@@ -2738,7 +2746,12 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		 * RelationGetBufferForTuple has ensured that the first tuple fits.
 		 * Put that on the page, and then as many other tuples as fit.
 		 */
-		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false);
+		root_offnum = RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false,
+				InvalidOffsetNumber);
+
+		/* Mark this tuple as the latest and also set root offset. */
+		HeapTupleHeaderSetHeapLatest(heaptuples[ndone]->t_data, root_offnum);
+
 		for (nthispage = 1; ndone + nthispage < ntuples; nthispage++)
 		{
 			HeapTuple	heaptup = heaptuples[ndone + nthispage];
@@ -2746,7 +2759,10 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			if (PageGetHeapFreeSpace(page) < MAXALIGN(heaptup->t_len) + saveFreeSpace)
 				break;
 
-			RelationPutHeapTuple(relation, buffer, heaptup, false);
+			root_offnum = RelationPutHeapTuple(relation, buffer, heaptup, false,
+					InvalidOffsetNumber);
+			/* Mark each tuple as the latest and also set root offset. */
+			HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 			/*
 			 * We don't use heap_multi_insert for catalog tuples yet, but
@@ -3018,6 +3034,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	HeapTupleData tp;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	TransactionId new_xmax;
@@ -3028,6 +3045,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	bool		all_visible_cleared = false;
 	HeapTuple	old_key_tuple = NULL;	/* replica identity of the tuple */
 	bool		old_key_copied = false;
+	OffsetNumber	root_offnum;
 
 	Assert(ItemPointerIsValid(tid));
 
@@ -3069,7 +3087,8 @@ heap_delete(Relation relation, ItemPointer tid,
 		LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 	}
 
-	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid));
+	offnum = ItemPointerGetOffsetNumber(tid);
+	lp = PageGetItemId(page, offnum);
 	Assert(ItemIdIsNormal(lp));
 
 	tp.t_tableOid = RelationGetRelid(relation);
@@ -3199,7 +3218,17 @@ l1:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tp.t_data->t_ctid;
+
+		/*
+		 * If we're at the end of the chain, then just return the same TID back
+		 * to the caller. The caller uses that as a hint to know if we have hit
+		 * the end of the chain.
+		 */
+		if (!HeapTupleHeaderIsHeapLatest(tp.t_data, &tp.t_self))
+			HeapTupleHeaderGetNextTid(tp.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&tp.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tp.t_data);
@@ -3248,6 +3277,22 @@ l1:
 							  xid, LockTupleExclusive, true,
 							  &new_xmax, &new_infomask, &new_infomask2);
 
+	/*
+	 * heap_get_root_tuple_one() may call palloc, which is disallowed once we
+	 * enter the critical section. So check if the root offset is cached in the
+	 * tuple and if not, fetch that information hard way before entering the
+	 * critical section.
+	 *
+	 * Most often and unless we are dealing with a pg-upgraded cluster, the
+	 * root offset information should be cached. So there should not be too
+	 * much overhead of fetching this information. Also, once a tuple is
+	 * updated, the information will be copied to the new version. So it's not
+	 * as if we're going to pay this price forever.
+	 */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&tp.t_self));
+
 	START_CRIT_SECTION();
 
 	/*
@@ -3275,8 +3320,10 @@ l1:
 	HeapTupleHeaderClearHotUpdated(tp.t_data);
 	HeapTupleHeaderSetXmax(tp.t_data, new_xmax);
 	HeapTupleHeaderSetCmax(tp.t_data, cid, iscombo);
-	/* Make sure there is no forward chain link in t_ctid */
-	tp.t_data->t_ctid = tp.t_self;
+
+	/* Mark this tuple as the latest tuple in the update chain. */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		HeapTupleHeaderSetHeapLatest(tp.t_data, root_offnum);
 
 	MarkBufferDirty(buffer);
 
@@ -3477,6 +3524,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		old_key_copied = false;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
+	OffsetNumber	root_offnum;
 	MultiXactStatus mxact_status;
 	Buffer		buffer,
 				newbuf,
@@ -3536,6 +3585,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 
 
 	block = ItemPointerGetBlockNumber(otid);
+	offnum = ItemPointerGetOffsetNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
@@ -3839,7 +3889,12 @@ l2:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(oldtup.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = oldtup.t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(oldtup.t_data, &oldtup.t_self))
+			HeapTupleHeaderGetNextTid(oldtup.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&oldtup.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(oldtup.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(oldtup.t_data);
@@ -3979,6 +4034,7 @@ l2:
 		uint16		infomask_lock_old_tuple,
 					infomask2_lock_old_tuple;
 		bool		cleared_all_frozen = false;
+		OffsetNumber	root_offnum;
 
 		/*
 		 * To prevent concurrent sessions from updating the tuple, we have to
@@ -4006,6 +4062,14 @@ l2:
 
 		Assert(HEAP_XMAX_IS_LOCKED_ONLY(infomask_lock_old_tuple));
 
+		/*
+		 * Fetch root offset before entering the critical section. We do this
+		 * only if the information is not already available.
+		 */
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&oldtup.t_self));
+
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
@@ -4020,7 +4084,8 @@ l2:
 		HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 		/* temporarily make it look not-updated, but locked */
-		oldtup.t_data->t_ctid = oldtup.t_self;
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			HeapTupleHeaderSetHeapLatest(oldtup.t_data, root_offnum);
 
 		/*
 		 * Clear all-frozen bit on visibility map if needed. We could
@@ -4179,6 +4244,10 @@ l2:
 										   bms_overlap(modified_attrs, id_attrs),
 										   &old_key_copied);
 
+	if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
@@ -4204,6 +4273,17 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+		/*
+		 * For HOT (or WARM) updated tuples, we store the offset of the root
+		 * line pointer of this chain in the ip_posid field of the new tuple.
+		 * Usually this information is available in the corresponding field
+		 * of the old tuple. But for aborted updates or pg_upgraded
+		 * databases, we might be looking at an old-style CTID chain, in
+		 * which case the information must be obtained the hard way (we did
+		 * that before entering the critical section above).
+		 */
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
 	else
 	{
@@ -4211,10 +4291,22 @@ l2:
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		root_offnum = InvalidOffsetNumber;
 	}
 
-	RelationPutHeapTuple(relation, newbuf, heaptup, false);		/* insert new tuple */
-
+	/* insert new tuple */
+	root_offnum = RelationPutHeapTuple(relation, newbuf, heaptup, false,
+									   root_offnum);
+	/*
+	 * Also mark both copies as the latest and set the root offset
+	 * information. If we're doing a HOT/WARM update, we copy the
+	 * information from the old tuple (if available) or use the value
+	 * computed above. For regular updates, RelationPutHeapTuple must have
+	 * returned the actual offset number where the new version was inserted,
+	 * and we store that value since the update started a new HOT chain.
+	 */
+	HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
+	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
 	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
@@ -4227,7 +4319,7 @@ l2:
 	HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 	/* record address of new tuple in t_ctid of old one */
-	oldtup.t_data->t_ctid = heaptup->t_self;
+	HeapTupleHeaderSetNextTid(oldtup.t_data, &(heaptup->t_self));
 
 	/* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */
 	if (PageIsAllVisible(BufferGetPage(buffer)))
@@ -4266,6 +4358,7 @@ l2:
 
 		recptr = log_heap_update(relation, buffer,
 								 newbuf, &oldtup, heaptup,
+								 root_offnum,
 								 old_key_tuple,
 								 all_visible_cleared,
 								 all_visible_cleared_new);
@@ -4546,7 +4639,8 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	ItemId		lp;
 	Page		page;
 	Buffer		vmbuffer = InvalidBuffer;
-	BlockNumber block;
+	BlockNumber	block;
+	OffsetNumber	offnum;
 	TransactionId xid,
 				xmax;
 	uint16		old_infomask,
@@ -4555,9 +4649,11 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	bool		first_time = true;
 	bool		have_tuple_lock = false;
 	bool		cleared_all_frozen = false;
+	OffsetNumber	root_offnum;
 
 	*buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	block = ItemPointerGetBlockNumber(tid);
+	offnum = ItemPointerGetOffsetNumber(tid);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -4577,6 +4673,7 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	tuple->t_data = (HeapTupleHeader) PageGetItem(page, lp);
 	tuple->t_len = ItemIdGetLength(lp);
 	tuple->t_tableOid = RelationGetRelid(relation);
+	tuple->t_self = *tid;
 
 l3:
 	result = HeapTupleSatisfiesUpdate(tuple, cid, *buffer);
@@ -4604,7 +4701,11 @@ l3:
 		xwait = HeapTupleHeaderGetRawXmax(tuple->t_data);
 		infomask = tuple->t_data->t_infomask;
 		infomask2 = tuple->t_data->t_infomask2;
-		ItemPointerCopy(&tuple->t_data->t_ctid, &t_ctid);
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &t_ctid);
+		else
+			ItemPointerCopy(tid, &t_ctid);
 
 		LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
 
@@ -5042,7 +5143,12 @@ failed:
 		Assert(result == HeapTupleSelfUpdated || result == HeapTupleUpdated ||
 			   result == HeapTupleWouldBlock);
 		Assert(!(tuple->t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tuple->t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(tid, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tuple->t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tuple->t_data);
@@ -5090,6 +5196,10 @@ failed:
 							  GetCurrentTransactionId(), mode, false,
 							  &xid, &new_infomask, &new_infomask2);
 
+	if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&tuple->t_self));
+
 	START_CRIT_SECTION();
 
 	/*
@@ -5118,7 +5228,10 @@ failed:
 	 * the tuple as well.
 	 */
 	if (HEAP_XMAX_IS_LOCKED_ONLY(new_infomask))
-		tuple->t_data->t_ctid = *tid;
+	{
+		if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+			HeapTupleHeaderSetHeapLatest(tuple->t_data, root_offnum);
+	}
 
 	/* Clear only the all-frozen bit on visibility map if needed */
 	if (PageIsAllVisible(page) &&
@@ -5632,6 +5745,7 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 	bool		cleared_all_frozen = false;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber block;
+	OffsetNumber offnum;
 
 	ItemPointerCopy(tid, &tupid);
 
@@ -5640,6 +5754,8 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 		new_infomask = 0;
 		new_xmax = InvalidTransactionId;
 		block = ItemPointerGetBlockNumber(&tupid);
+		offnum = ItemPointerGetOffsetNumber(&tupid);
+
 		ItemPointerCopy(&tupid, &(mytup.t_self));
 
 		if (!heap_fetch(rel, SnapshotAny, &mytup, &buf, false, NULL))
@@ -5869,7 +5985,7 @@ l4:
 
 		/* if we find the end of update chain, we're done. */
 		if (mytup.t_data->t_infomask & HEAP_XMAX_INVALID ||
-			ItemPointerEquals(&mytup.t_self, &mytup.t_data->t_ctid) ||
+			HeapTupleHeaderIsHeapLatest(mytup.t_data, &mytup.t_self) ||
 			HeapTupleHeaderIsOnlyLocked(mytup.t_data))
 		{
 			result = HeapTupleMayBeUpdated;
@@ -5878,7 +5994,7 @@ l4:
 
 		/* tail recursion */
 		priorXmax = HeapTupleHeaderGetUpdateXid(mytup.t_data);
-		ItemPointerCopy(&(mytup.t_data->t_ctid), &tupid);
+		HeapTupleHeaderGetNextTid(mytup.t_data, &tupid);
 		UnlockReleaseBuffer(buf);
 		if (vmbuffer != InvalidBuffer)
 			ReleaseBuffer(vmbuffer);
@@ -5995,7 +6111,7 @@ heap_finish_speculative(Relation relation, HeapTuple tuple)
 	 * Replace the speculative insertion token with a real t_ctid, pointing to
 	 * itself like it does on regular tuples.
 	 */
-	htup->t_ctid = tuple->t_self;
+	HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 	/* XLOG stuff */
 	if (RelationNeedsWAL(relation))
@@ -6121,8 +6237,7 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	HeapTupleHeaderSetXmin(tp.t_data, InvalidTransactionId);
 
 	/* Clear the speculative insertion token too */
-	tp.t_data->t_ctid = tp.t_self;
-
+	HeapTupleHeaderSetHeapLatest(tp.t_data, ItemPointerGetOffsetNumber(tid));
 	MarkBufferDirty(buffer);
 
 	/*
@@ -7470,6 +7585,7 @@ log_heap_visible(RelFileNode rnode, Buffer heap_buffer, Buffer vm_buffer,
 static XLogRecPtr
 log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup, HeapTuple newtup,
+				OffsetNumber root_offnum,
 				HeapTuple old_key_tuple,
 				bool all_visible_cleared, bool new_all_visible_cleared)
 {
@@ -7590,6 +7706,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	xlrec.new_offnum = ItemPointerGetOffsetNumber(&newtup->t_self);
 	xlrec.new_xmax = HeapTupleHeaderGetRawXmax(newtup->t_data);
 
+	Assert(OffsetNumberIsValid(root_offnum));
+	xlrec.root_offnum = root_offnum;
+
 	bufflags = REGBUF_STANDARD;
 	if (init)
 		bufflags |= REGBUF_WILL_INIT;
@@ -8244,7 +8363,13 @@ heap_xlog_delete(XLogReaderState *record)
 			PageClearAllVisible(page);
 
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = target_tid;
+		if (!HeapTupleHeaderHasRootOffset(htup))
+		{
+			OffsetNumber	root_offnum;
+			root_offnum = heap_get_root_tuple(page, xlrec->offnum);
+			HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+		}
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8334,7 +8459,8 @@ heap_xlog_insert(XLogReaderState *record)
 		htup->t_hoff = xlhdr.t_hoff;
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
-		htup->t_ctid = target_tid;
+
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->offnum);
 
 		if (PageAddItem(page, (Item) htup, newlen, xlrec->offnum,
 						true, true) == InvalidOffsetNumber)
@@ -8469,8 +8595,8 @@ heap_xlog_multi_insert(XLogReaderState *record)
 			htup->t_hoff = xlhdr->t_hoff;
 			HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 			HeapTupleHeaderSetCmin(htup, FirstCommandId);
-			ItemPointerSetBlockNumber(&htup->t_ctid, blkno);
-			ItemPointerSetOffsetNumber(&htup->t_ctid, offnum);
+
+			HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 			offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 			if (offnum == InvalidOffsetNumber)
@@ -8606,7 +8732,7 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
 		/* Set forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetNextTid(htup, &newtid);
 
 		/* Mark the page as a candidate for pruning */
 		PageSetPrunable(page, XLogRecGetXid(record));
@@ -8739,13 +8865,17 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
-		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = newtid;
 
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
 
+		/*
+		 * Make sure the tuple is marked as the latest and root offset
+		 * information is restored.
+		 */
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->root_offnum);
+
 		if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED)
 			PageClearAllVisible(page);
 
@@ -8808,6 +8938,9 @@ heap_xlog_confirm(XLogReaderState *record)
 		 */
 		ItemPointerSet(&htup->t_ctid, BufferGetBlockNumber(buffer), offnum);
 
+		/* For newly inserted tuple, set root offset to itself. */
+		HeapTupleHeaderSetHeapLatest(htup, offnum);
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8871,11 +9004,17 @@ heap_xlog_lock(XLogReaderState *record)
 		 */
 		if (HEAP_XMAX_IS_LOCKED_ONLY(htup->t_infomask))
 		{
+			ItemPointerData	target_tid;
+
+			ItemPointerSet(&target_tid, BufferGetBlockNumber(buffer), offnum);
 			HeapTupleHeaderClearHotUpdated(htup);
 			/* Make sure there is no forward chain link in t_ctid */
-			ItemPointerSet(&htup->t_ctid,
-						   BufferGetBlockNumber(buffer),
-						   offnum);
+			if (!HeapTupleHeaderHasRootOffset(htup))
+			{
+				OffsetNumber	root_offnum;
+				root_offnum = heap_get_root_tuple(page, offnum);
+				HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+			}
 		}
 		HeapTupleHeaderSetXmax(htup, xlrec->locking_xid);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index 6529fe3..8052519 100644
--- a/src/backend/access/heap/hio.c
+++ b/src/backend/access/heap/hio.c
@@ -31,12 +31,20 @@
  * !!! EREPORT(ERROR) IS DISALLOWED HERE !!!  Must PANIC on failure!!!
  *
  * Note - caller must hold BUFFER_LOCK_EXCLUSIVE on the buffer.
+ *
+ * The caller can optionally tell us to set the root offset to the given value.
+ * Otherwise, the root offset is set to the offset of the new location once
+ * it's known. The former is used while updating an existing tuple, where the
+ * caller tells us the root line pointer of the chain.  The latter is used
+ * during insertion of a new row, in which case the root line pointer is set
+ * to the offset where the tuple is inserted.
  */
-void
+OffsetNumber
 RelationPutHeapTuple(Relation relation,
 					 Buffer buffer,
 					 HeapTuple tuple,
-					 bool token)
+					 bool token,
+					 OffsetNumber root_offnum)
 {
 	Page		pageHeader;
 	OffsetNumber offnum;
@@ -60,17 +68,24 @@ RelationPutHeapTuple(Relation relation,
 	ItemPointerSet(&(tuple->t_self), BufferGetBlockNumber(buffer), offnum);
 
 	/*
-	 * Insert the correct position into CTID of the stored tuple, too (unless
-	 * this is a speculative insertion, in which case the token is held in
-	 * CTID field instead)
+	 * Set block number and the root offset into CTID of the stored tuple, too
+	 * (unless this is a speculative insertion, in which case the token is held
+	 * in CTID field instead).
 	 */
 	if (!token)
 	{
 		ItemId		itemId = PageGetItemId(pageHeader, offnum);
 		Item		item = PageGetItem(pageHeader, itemId);
 
+		/* Copy t_ctid to set the correct block number. */
 		((HeapTupleHeader) item)->t_ctid = tuple->t_self;
+
+		if (!OffsetNumberIsValid(root_offnum))
+			root_offnum = offnum;
+		HeapTupleHeaderSetHeapLatest((HeapTupleHeader) item, root_offnum);
 	}
+
+	return root_offnum;
 }
 
 /*
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index d69a266..f54337c 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -55,6 +55,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
 
+static void heap_get_root_tuples_internal(Page page,
+				OffsetNumber target_offnum, OffsetNumber *root_offsets);
 
 /*
  * Optionally prune and repair fragmentation in the specified page.
@@ -553,6 +555,17 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
 		if (!HeapTupleHeaderIsHotUpdated(htup))
 			break;
 
+
+		/*
+		 * If the tuple was HOT-updated and the update was later
+		 * aborted, someone could mark this tuple as the last tuple in
+		 * the chain without clearing the HOT-updated flag. So we must
+		 * check whether this is the last tuple in the chain and stop
+		 * following the CTID, else we risk an infinite recursion (though
+		 * prstate->marked[] currently protects against that).
+		 */
+		if (HeapTupleHeaderHasRootOffset(htup))
+			break;
 		/*
 		 * Advance to next chain member.
 		 */
@@ -726,27 +739,47 @@ heap_page_prune_execute(Buffer buffer,
 
 
 /*
- * For all items in this page, find their respective root line pointers.
- * If item k is part of a HOT-chain with root at item j, then we set
- * root_offsets[k - 1] = j.
+ * Find the root line pointers, either for all items on this page or for the
+ * given item only.
+ *
+ * When target_offnum is a valid offset number, the caller is interested in
+ * just one item. In that case, the root line pointer is returned in
+ * root_offsets.
  *
- * The passed-in root_offsets array must have MaxHeapTuplesPerPage entries.
- * We zero out all unused entries.
+ * When target_offnum is InvalidOffsetNumber, the caller wants to know the
+ * root line pointers of all items on this page. The root_offsets array
+ * must have MaxHeapTuplesPerPage entries in that case. If item k is part of a
+ * HOT-chain with root at item j, then we set root_offsets[k - 1] = j. We zero
+ * out all unused entries.
  *
  * The function must be called with at least share lock on the buffer, to
  * prevent concurrent prune operations.
  *
+ * This is not a cheap function since it may have to scan through all line
+ * pointers and tuples on the page to find the root line pointers. To
+ * minimize the cost, we return early if target_offnum is specified and its
+ * root line pointer has been found.
+ *
  * Note: The information collected here is valid only as long as the caller
  * holds a pin on the buffer. Once pin is released, a tuple might be pruned
  * and reused by a completely unrelated tuple.
+ *
+ * Note: This function must not be called inside a critical section because it
+ * internally calls HeapTupleHeaderGetUpdateXid which somewhere down the stack
+ * may try to allocate heap memory. Memory allocation is disallowed in a
+ * critical section.
  */
-void
-heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+static void
+heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offsets)
 {
 	OffsetNumber offnum,
 				maxoff;
 
-	MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
+	if (OffsetNumberIsValid(target_offnum))
+		*root_offsets = InvalidOffsetNumber;
+	else
+		MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
 
 	maxoff = PageGetMaxOffsetNumber(page);
 	for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum = OffsetNumberNext(offnum))
@@ -774,9 +807,28 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 
 			/*
 			 * This is either a plain tuple or the root of a HOT-chain.
-			 * Remember it in the mapping.
+			 *
+			 * If target_offnum is specified and we have found its mapping,
+			 * return.
 			 */
-			root_offsets[offnum - 1] = offnum;
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (target_offnum == offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember the mapping for any other item. The
+				 * root_offsets array may not even have space for them, so be
+				 * careful not to write past the end of the array.
+				 */
+			}
+			else
+			{
+				/* Remember it in the mapping. */
+				root_offsets[offnum - 1] = offnum;
+			}
 
 			/* If it's not the start of a HOT-chain, we're done with it */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
@@ -817,15 +869,65 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 				!TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(htup)))
 				break;
 
-			/* Remember the root line pointer for this item */
-			root_offsets[nextoffnum - 1] = offnum;
+			/*
+			 * If target_offnum is specified and we found its mapping, return.
+			 */
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (nextoffnum == target_offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember the mapping for any other item. The
+				 * root_offsets array may not even have space for them, so be
+				 * careful not to write past the end of the array.
+				 */
+			}
+			else
+			{
+				/* Remember the root line pointer for this item. */
+				root_offsets[nextoffnum - 1] = offnum;
+			}
 
 			/* Advance to next chain member, if any */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				break;
 
+			/*
+			 * If the tuple was HOT-updated and the update was later aborted,
+			 * someone could mark this tuple to be the last tuple in the chain
+			 * and store root offset in CTID, without clearing the HOT-updated
+			 * flag. So we must check if CTID is actually root offset and break
+			 * to avoid infinite recursion.
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
 		}
 	}
 }
+
+/*
+ * Get root line pointer for the given tuple.
+ */
+OffsetNumber
+heap_get_root_tuple(Page page, OffsetNumber target_offnum)
+{
+	OffsetNumber offnum = InvalidOffsetNumber;
+	heap_get_root_tuples_internal(page, target_offnum, &offnum);
+	return offnum;
+}
+
+/*
+ * Get the root line pointers for all tuples on the page.
+ */
+void
+heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+{
+	heap_get_root_tuples_internal(page, InvalidOffsetNumber,
+								  root_offsets);
+}
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index d7f65a5..2d3ae9b 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -421,14 +421,18 @@ rewrite_heap_tuple(RewriteState state,
 	 */
 	if (!((old_tuple->t_data->t_infomask & HEAP_XMAX_INVALID) ||
 		  HeapTupleHeaderIsOnlyLocked(old_tuple->t_data)) &&
-		!(ItemPointerEquals(&(old_tuple->t_self),
-							&(old_tuple->t_data->t_ctid))))
+		!(HeapTupleHeaderIsHeapLatest(old_tuple->t_data, &old_tuple->t_self)))
 	{
 		OldToNewMapping mapping;
 
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
-		hashkey.tid = old_tuple->t_data->t_ctid;
+
+		/*
+		 * We've already checked that this is not the last tuple in the chain,
+		 * so fetch the next TID in the chain.
+		 */
+		HeapTupleHeaderGetNextTid(old_tuple->t_data, &hashkey.tid);
 
 		mapping = (OldToNewMapping)
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -441,7 +445,7 @@ rewrite_heap_tuple(RewriteState state,
 			 * set the ctid of this tuple to point to the new location, and
 			 * insert it right away.
 			 */
-			new_tuple->t_data->t_ctid = mapping->new_tid;
+			HeapTupleHeaderSetNextTid(new_tuple->t_data, &mapping->new_tid);
 
 			/* We don't need the mapping entry anymore */
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -527,7 +531,7 @@ rewrite_heap_tuple(RewriteState state,
 				new_tuple = unresolved->tuple;
 				free_new = true;
 				old_tid = unresolved->old_tid;
-				new_tuple->t_data->t_ctid = new_tid;
+				HeapTupleHeaderSetNextTid(new_tuple->t_data, &new_tid);
 
 				/*
 				 * We don't need the hash entry anymore, but don't free its
@@ -733,7 +737,12 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		onpage_tup->t_ctid = tup->t_self;
+		/*
+		 * Set t_ctid just to ensure that the block number is copied
+		 * correctly, then immediately mark the tuple as the latest.
+		 */
+		HeapTupleHeaderSetNextTid(onpage_tup, &tup->t_self);
+		HeapTupleHeaderSetHeapLatest(onpage_tup, newoff);
 	}
 
 	/* If heaptup is a private copy, release it. */
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 108060a..c3f1873 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -785,7 +785,10 @@ retry:
			  DirtySnapshot.speculativeToken &&
			  TransactionIdPrecedes(GetCurrentTransactionId(), xwait))))
		{
-			ctid_wait = tup->t_data->t_ctid;
+			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
+				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
+			else
+				ItemPointerCopy(&tup->t_self, &ctid_wait);
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index f2995f2..73e9c4a 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -2623,7 +2623,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		 * As above, it should be safe to examine xmax and t_ctid without the
 		 * buffer content lock, because they can't be changing.
 		 */
-		if (ItemPointerEquals(&tuple.t_self, &tuple.t_data->t_ctid))
+		if (HeapTupleHeaderIsHeapLatest(tuple.t_data, &tuple.t_self))
 		{
 			/* deleted, so forget about it */
 			ReleaseBuffer(buffer);
@@ -2631,7 +2631,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		}
 
 		/* updated, so look at the updated row */
-		tuple.t_self = tuple.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tuple.t_data, &tuple.t_self);
 		/* updated row should have xmin matching this xmax */
 		priorXmax = HeapTupleHeaderGetUpdateXid(tuple.t_data);
 		ReleaseBuffer(buffer);
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 7e85510..5540e12 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -190,6 +190,7 @@ extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
+extern OffsetNumber heap_get_root_tuple(Page page, OffsetNumber target_offnum);
 extern void heap_get_root_tuples(Page page, OffsetNumber *root_offsets);
 
 /* in heap/syncscan.c */
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index b285f17..e6019d5 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -193,6 +193,8 @@ typedef struct xl_heap_update
 	uint8		flags;
 	TransactionId new_xmax;		/* xmax of the new tuple */
 	OffsetNumber new_offnum;	/* new tuple's offset */
+	OffsetNumber root_offnum;	/* offset of the root line pointer in case of
+								   HOT or WARM update */
 
 	/*
 	 * If XLOG_HEAP_CONTAINS_OLD_TUPLE or XLOG_HEAP_CONTAINS_OLD_KEY flags are
@@ -200,7 +202,7 @@ typedef struct xl_heap_update
 	 */
 } xl_heap_update;
 
-#define SizeOfHeapUpdate	(offsetof(xl_heap_update, new_offnum) + sizeof(OffsetNumber))
+#define SizeOfHeapUpdate	(offsetof(xl_heap_update, root_offnum) + sizeof(OffsetNumber))
 
 /*
  * This is what we need to know about vacuum page cleanup/redirect
diff --git a/src/include/access/hio.h b/src/include/access/hio.h
index 2824f23..921cb37 100644
--- a/src/include/access/hio.h
+++ b/src/include/access/hio.h
@@ -35,8 +35,8 @@ typedef struct BulkInsertStateData
 }	BulkInsertStateData;
 
 
-extern void RelationPutHeapTuple(Relation relation, Buffer buffer,
-					 HeapTuple tuple, bool token);
+extern OffsetNumber RelationPutHeapTuple(Relation relation, Buffer buffer,
+					 HeapTuple tuple, bool token, OffsetNumber root_offnum);
 extern Buffer RelationGetBufferForTuple(Relation relation, Size len,
 						  Buffer otherBuffer, int options,
 						  BulkInsertState bistate,
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 7b6285d..24433c7 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,13 +260,19 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x1800 are available */
+/* bit 0x0800 is available */
+#define HEAP_LATEST_TUPLE		0x1000	/*
+										 * This is the last tuple in chain and
+										 * ip_posid points to the root line
+										 * pointer
+										 */
 #define HEAP_KEYS_UPDATED		0x2000	/* tuple was updated and key cols
 										 * modified, or tuple deleted */
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xE000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+
 
 /*
  * HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins.  It is
@@ -504,6 +510,43 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+/*
+ * Mark this as the last tuple in the HOT chain. Before PG v10 we used to
+ * store the TID of the tuple itself in the t_ctid field to mark the end of
+ * the chain. Starting in PG v10, we instead use a special flag,
+ * HEAP_LATEST_TUPLE, to identify the last tuple, and store the root line
+ * pointer of the HOT chain in the t_ctid field.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderSetHeapLatest(tup, offnum) \
+do { \
+	AssertMacro(OffsetNumberIsValid(offnum)); \
+	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE; \
+	ItemPointerSetOffsetNumber(&(tup)->t_ctid, (offnum)); \
+} while (0)
+
+#define HeapTupleHeaderClearHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 &= ~HEAP_LATEST_TUPLE \
+)
+
+/*
+ * Starting from PostgreSQL 10, the latest tuple in an update chain has
+ * HEAP_LATEST_TUPLE set; but tuples upgraded from earlier versions do not.
+ * For those, we determine whether a tuple is latest by testing that its t_ctid
+ * points to itself.
+ *
+ * Note: beware of multiple evaluations of "tup" and "tid" arguments.
+ */
+#define HeapTupleHeaderIsHeapLatest(tup, tid) \
+( \
+  (((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0) || \
+  ((ItemPointerGetBlockNumber(&(tup)->t_ctid) == ItemPointerGetBlockNumber(tid)) && \
+   (ItemPointerGetOffsetNumber(&(tup)->t_ctid) == ItemPointerGetOffsetNumber(tid))) \
+)
+
+
 #define HeapTupleHeaderSetHeapOnly(tup) \
 ( \
   (tup)->t_infomask2 |= HEAP_ONLY_TUPLE \
@@ -542,6 +585,56 @@ do { \
 
 
 /*
+ * Set the t_ctid chain and also clear the HEAP_LATEST_TUPLE flag since we
+ * now have a new tuple in the chain and this is no longer the last tuple of
+ * the chain.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderSetNextTid(tup, tid) \
+do { \
+		ItemPointerCopy((tid), &((tup)->t_ctid)); \
+		HeapTupleHeaderClearHeapLatest((tup)); \
+} while (0)
+
+/*
+ * Get TID of next tuple in the update chain. Caller must have checked that
+ * we are not already at the end of the chain because in that case t_ctid may
+ * actually store the root line pointer of the HOT chain.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderGetNextTid(tup, next_ctid) \
+do { \
+	AssertMacro(!((tup)->t_infomask2 & HEAP_LATEST_TUPLE)); \
+	ItemPointerCopy(&(tup)->t_ctid, (next_ctid)); \
+} while (0)
+
+/*
+ * Get the root line pointer of the HOT chain. The caller should have confirmed
+ * that the root offset is cached before calling this macro.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderGetRootOffset(tup) \
+( \
+	AssertMacro(((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0), \
+	ItemPointerGetOffsetNumber(&(tup)->t_ctid) \
+)
+
+/*
+ * Return whether the tuple has a cached root offset.  We don't use
+ * HeapTupleHeaderIsHeapLatest because that macro also considers the case of
+ * t_ctid pointing to itself, for tuples migrated from pre-v10 clusters.
+ * Here we are only interested in tuples that carry the
+ * HEAP_LATEST_TUPLE flag.
+ */
+#define HeapTupleHeaderHasRootOffset(tup) \
+( \
+	((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0 \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
-- 
2.9.3 (Apple Git-75)

Attachment: 0002-Free-3-bits-in-ip_posid-field-of-the-ItemPointer_v23.patch (application/octet-stream)
From 9d578be735ec561ab57b469599ffc019d7165a0b Mon Sep 17 00:00:00 2001
From: Pavan Deolasee <pavan.deolasee@gmail.com>
Date: Wed, 29 Mar 2017 10:44:01 +0530
Subject: [PATCH 2/4] Free 3-bits in ip_posid field of the ItemPointerData.

We can use those for storing some other information. Right now only index
methods will use those to store WARM/CLEAR property of an index pointer.
---
 src/include/access/ginblock.h     |  3 ++-
 src/include/access/htup_details.h |  2 +-
 src/include/storage/itemptr.h     | 30 +++++++++++++++++++++++++++---
 src/include/storage/off.h         | 11 ++++++++++-
 4 files changed, 40 insertions(+), 6 deletions(-)

diff --git a/src/include/access/ginblock.h b/src/include/access/ginblock.h
index 438912c..316ab65 100644
--- a/src/include/access/ginblock.h
+++ b/src/include/access/ginblock.h
@@ -135,7 +135,8 @@ typedef struct GinMetaPageData
 	(ItemPointerGetBlockNumberNoCheck(pointer))
 
 #define GinItemPointerGetOffsetNumber(pointer) \
-	(ItemPointerGetOffsetNumberNoCheck(pointer))
+	(ItemPointerGetOffsetNumberNoCheck(pointer) | \
+	 (ItemPointerGetFlags(pointer) << OffsetNumberBits))
 
 #define GinItemPointerSetBlockNumber(pointer, blkno) \
 	(ItemPointerSetBlockNumber((pointer), (blkno)))
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 24433c7..4d614b7 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -288,7 +288,7 @@ struct HeapTupleHeaderData
  * than MaxOffsetNumber, so that it can be distinguished from a valid
  * offset number in a regular item pointer.
  */
-#define SpecTokenOffsetNumber		0xfffe
+#define SpecTokenOffsetNumber		OffsetNumberPrev(OffsetNumberMask)
 
 /*
  * HeapTupleHeader accessor macros
diff --git a/src/include/storage/itemptr.h b/src/include/storage/itemptr.h
index c21d2ad..74eed4e 100644
--- a/src/include/storage/itemptr.h
+++ b/src/include/storage/itemptr.h
@@ -57,7 +57,7 @@ typedef ItemPointerData *ItemPointer;
  *		True iff the disk item pointer is not NULL.
  */
 #define ItemPointerIsValid(pointer) \
-	((bool) (PointerIsValid(pointer) && ((pointer)->ip_posid != 0)))
+	((bool) (PointerIsValid(pointer) && (((pointer)->ip_posid & OffsetNumberMask) != 0)))
 
 /*
  * ItemPointerGetBlockNumberNoCheck
@@ -84,7 +84,7 @@ typedef ItemPointerData *ItemPointer;
  */
 #define ItemPointerGetOffsetNumberNoCheck(pointer) \
 ( \
-	(pointer)->ip_posid \
+	((pointer)->ip_posid & OffsetNumberMask) \
 )
 
 /*
@@ -98,6 +98,30 @@ typedef ItemPointerData *ItemPointer;
 )
 
 /*
+ * Get the flags stored in high order bits in the OffsetNumber.
+ */
+#define ItemPointerGetFlags(pointer) \
+( \
+	((pointer)->ip_posid & ~OffsetNumberMask) >> OffsetNumberBits \
+)
+
+/*
+ * Set the flag bits. We first left-shift since flags are defined starting 0x01
+ */
+#define ItemPointerSetFlags(pointer, flags) \
+( \
+	((pointer)->ip_posid |= ((flags) << OffsetNumberBits)) \
+)
+
+/*
+ * Clear all flags.
+ */
+#define ItemPointerClearFlags(pointer) \
+( \
+	((pointer)->ip_posid &= OffsetNumberMask) \
+)
+
+/*
  * ItemPointerSet
  *		Sets a disk item pointer to the specified block and offset.
  */
@@ -105,7 +129,7 @@ typedef ItemPointerData *ItemPointer;
 ( \
 	AssertMacro(PointerIsValid(pointer)), \
 	BlockIdSet(&((pointer)->ip_blkid), blockNumber), \
-	(pointer)->ip_posid = offNum \
+	(pointer)->ip_posid = (offNum) \
 )
 
 /*
diff --git a/src/include/storage/off.h b/src/include/storage/off.h
index fe8638f..f058fe1 100644
--- a/src/include/storage/off.h
+++ b/src/include/storage/off.h
@@ -26,7 +26,16 @@ typedef uint16 OffsetNumber;
 #define InvalidOffsetNumber		((OffsetNumber) 0)
 #define FirstOffsetNumber		((OffsetNumber) 1)
 #define MaxOffsetNumber			((OffsetNumber) (BLCKSZ / sizeof(ItemIdData)))
-#define OffsetNumberMask		(0xffff)		/* valid uint16 bits */
+
+/*
+ * The biggest BLCKSZ we support is 32kB, and each ItemId takes 6 bytes.
+ * That limits the number of line pointers in a page to 32kB/6B = 5461.
+ * Therefore, 13 bits in OffsetNumber are enough to represent all valid
+ * on-disk line pointers.  Hence, we can reserve the high-order bits in
+ * OffsetNumber for other purposes.
+ */
+#define OffsetNumberBits		13
+#define OffsetNumberMask		((((uint16) 1) << OffsetNumberBits) - 1)
 
 /* ----------------
  *		support macros
-- 
2.9.3 (Apple Git-75)

0003-Main-WARM-patch_v23.patchapplication/octet-stream; name=0003-Main-WARM-patch_v23.patchDownload
From 3496d52e349715f6115018fdd925fb654cb7c70a Mon Sep 17 00:00:00 2001
From: Pavan Deolasee <pavan.deolasee@gmail.com>
Date: Sun, 26 Mar 2017 15:03:45 +0530
Subject: [PATCH 3/4] Main WARM patch.

We perform a WARM update if the update modifies at least one index key but not
all of them, and there is enough free space in the heap block to keep the new
version of the tuple.

The update works much the same way as a HOT update, but any index whose key
values have changed must receive another index entry, pointing to the same
root of the HOT chain. Chains that may have more than one index pointer in at
least one index are called WARM chains. Since there are now two index pointers
to the same chain, we must recheck the tuple to confirm whether a given index
pointer should see it. HOT pruning and other techniques remain the same.

WARM chains must subsequently be cleaned up by removing the additional index
pointers. Once cleaned up, they can be WARM updated again and
index-only scans will work.

To ensure that we don't do wasteful work, we only perform a WARM update if
fewer than 50% of the indexes need updates. Above that threshold a WARM update
probably does not make sense, because most indexes will receive a new entry
anyway and the cleanup cost will be high.
---
 contrib/bloom/blutils.c                     |   1 +
 contrib/bloom/blvacuum.c                    |   2 +-
 src/backend/access/brin/brin.c              |   1 +
 src/backend/access/gin/ginvacuum.c          |   3 +-
 src/backend/access/gist/gist.c              |   1 +
 src/backend/access/gist/gistvacuum.c        |   3 +-
 src/backend/access/hash/hash.c              |  18 +-
 src/backend/access/hash/hashsearch.c        |   5 +
 src/backend/access/heap/README.WARM         | 308 ++++++++++
 src/backend/access/heap/heapam.c            | 621 +++++++++++++++++--
 src/backend/access/heap/pruneheap.c         |   9 +-
 src/backend/access/heap/rewriteheap.c       |  12 +-
 src/backend/access/heap/tuptoaster.c        |   3 +-
 src/backend/access/index/genam.c            |   2 +
 src/backend/access/index/indexam.c          |  95 ++-
 src/backend/access/nbtree/nbtinsert.c       | 228 ++++---
 src/backend/access/nbtree/nbtpage.c         |  56 +-
 src/backend/access/nbtree/nbtree.c          |  76 ++-
 src/backend/access/nbtree/nbtutils.c        |  93 +++
 src/backend/access/nbtree/nbtxlog.c         |  27 +-
 src/backend/access/rmgrdesc/heapdesc.c      |  26 +-
 src/backend/access/rmgrdesc/nbtdesc.c       |   4 +-
 src/backend/access/spgist/spgutils.c        |   1 +
 src/backend/access/spgist/spgvacuum.c       |  12 +-
 src/backend/catalog/index.c                 |  71 ++-
 src/backend/catalog/indexing.c              |  60 +-
 src/backend/catalog/system_views.sql        |   4 +-
 src/backend/commands/constraint.c           |   7 +-
 src/backend/commands/copy.c                 |   3 +
 src/backend/commands/indexcmds.c            |  17 +-
 src/backend/commands/vacuumlazy.c           | 649 +++++++++++++++++++-
 src/backend/executor/execIndexing.c         |  21 +-
 src/backend/executor/execReplication.c      |  30 +-
 src/backend/executor/nodeBitmapHeapscan.c   |  21 +-
 src/backend/executor/nodeIndexscan.c        |   4 +-
 src/backend/executor/nodeModifyTable.c      |  27 +-
 src/backend/postmaster/pgstat.c             |   7 +-
 src/backend/replication/logical/decode.c    |  13 +-
 src/backend/storage/page/bufpage.c          |  23 +
 src/backend/utils/adt/pgstatfuncs.c         |  31 +
 src/backend/utils/cache/relcache.c          | 113 +++-
 src/backend/utils/time/combocid.c           |   4 +-
 src/backend/utils/time/tqual.c              |  24 +-
 src/include/access/amapi.h                  |  18 +
 src/include/access/genam.h                  |  22 +-
 src/include/access/heapam.h                 |  30 +-
 src/include/access/heapam_xlog.h            |  24 +-
 src/include/access/htup_details.h           | 116 +++-
 src/include/access/nbtree.h                 |  21 +-
 src/include/access/nbtxlog.h                |  10 +-
 src/include/access/relscan.h                |   5 +-
 src/include/catalog/index.h                 |   7 +
 src/include/catalog/pg_proc.h               |   4 +
 src/include/commands/progress.h             |   1 +
 src/include/executor/executor.h             |   1 +
 src/include/executor/nodeIndexscan.h        |   1 -
 src/include/nodes/execnodes.h               |   1 +
 src/include/pgstat.h                        |   4 +-
 src/include/storage/bufpage.h               |   2 +
 src/include/utils/rel.h                     |   7 +
 src/include/utils/relcache.h                |   5 +-
 src/test/regress/expected/alter_generic.out |   4 +-
 src/test/regress/expected/rules.out         |  12 +-
 src/test/regress/expected/warm.out          | 914 ++++++++++++++++++++++++++++
 src/test/regress/parallel_schedule          |   2 +
 src/test/regress/sql/warm.sql               | 344 +++++++++++
 66 files changed, 3960 insertions(+), 331 deletions(-)
 create mode 100644 src/backend/access/heap/README.WARM
 create mode 100644 src/test/regress/expected/warm.out
 create mode 100644 src/test/regress/sql/warm.sql

diff --git a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c
index f2eda67..b356e2b 100644
--- a/contrib/bloom/blutils.c
+++ b/contrib/bloom/blutils.c
@@ -142,6 +142,7 @@ blhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/contrib/bloom/blvacuum.c b/contrib/bloom/blvacuum.c
index 04abd0f..ff50361 100644
--- a/contrib/bloom/blvacuum.c
+++ b/contrib/bloom/blvacuum.c
@@ -88,7 +88,7 @@ blbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 		while (itup < itupEnd)
 		{
 			/* Do we have to delete this tuple? */
-			if (callback(&itup->heapPtr, callback_state))
+			if (callback(&itup->heapPtr, false, callback_state) == IBDCR_DELETE)
 			{
 				/* Yes; adjust count of tuples that will be left on page */
 				BloomPageGetOpaque(page)->maxoff--;
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index b22563b..b4a1465 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -116,6 +116,7 @@ brinhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/gin/ginvacuum.c b/src/backend/access/gin/ginvacuum.c
index 26c077a..46ed4fe 100644
--- a/src/backend/access/gin/ginvacuum.c
+++ b/src/backend/access/gin/ginvacuum.c
@@ -56,7 +56,8 @@ ginVacuumItemPointers(GinVacuumState *gvs, ItemPointerData *items,
 	 */
 	for (i = 0; i < nitem; i++)
 	{
-		if (gvs->callback(items + i, gvs->callback_state))
+		if (gvs->callback(items + i, false, gvs->callback_state) ==
+				IBDCR_DELETE)
 		{
 			gvs->result->tuples_removed += 1;
 			if (!tmpitems)
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c
index 6593771..843389b 100644
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -94,6 +94,7 @@ gisthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/gist/gistvacuum.c b/src/backend/access/gist/gistvacuum.c
index 77d9d12..0955db6 100644
--- a/src/backend/access/gist/gistvacuum.c
+++ b/src/backend/access/gist/gistvacuum.c
@@ -202,7 +202,8 @@ gistbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 				iid = PageGetItemId(page, i);
 				idxtuple = (IndexTuple) PageGetItem(page, iid);
 
-				if (callback(&(idxtuple->t_tid), callback_state))
+				if (callback(&(idxtuple->t_tid), false, callback_state) ==
+						IBDCR_DELETE)
 					todelete[ntodelete++] = i;
 				else
 					stats->num_index_tuples += 1;
diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c
index 34cc08f..ad56d6d 100644
--- a/src/backend/access/hash/hash.c
+++ b/src/backend/access/hash/hash.c
@@ -75,6 +75,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->ambuild = hashbuild;
 	amroutine->ambuildempty = hashbuildempty;
 	amroutine->aminsert = hashinsert;
+	amroutine->amwarminsert = NULL;
 	amroutine->ambulkdelete = hashbulkdelete;
 	amroutine->amvacuumcleanup = hashvacuumcleanup;
 	amroutine->amcanreturn = NULL;
@@ -92,6 +93,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -807,6 +809,7 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 			IndexTuple	itup;
 			Bucket		bucket;
 			bool		kill_tuple = false;
+			IndexBulkDeleteCallbackResult	result;
 
 			itup = (IndexTuple) PageGetItem(page,
 											PageGetItemId(page, offno));
@@ -816,13 +819,18 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 			 * To remove the dead tuples, we strictly want to rely on results
 			 * of callback function.  refer btvacuumpage for detailed reason.
 			 */
-			if (callback && callback(htup, callback_state))
+			if (callback)
 			{
-				kill_tuple = true;
-				if (tuples_removed)
-					*tuples_removed += 1;
+				result = callback(htup, false, callback_state);
+				if (result == IBDCR_DELETE)
+				{
+					kill_tuple = true;
+					if (tuples_removed)
+						*tuples_removed += 1;
+				}
 			}
-			else if (split_cleanup)
+
+			if (!kill_tuple && split_cleanup)
 			{
 				/* delete the tuples that are moved by split. */
 				bucket = _hash_hashkey2bucket(_hash_get_indextuple_hashkey(itup),
diff --git a/src/backend/access/hash/hashsearch.c b/src/backend/access/hash/hashsearch.c
index 2d92049..330ccc5 100644
--- a/src/backend/access/hash/hashsearch.c
+++ b/src/backend/access/hash/hashsearch.c
@@ -59,6 +59,8 @@ _hash_next(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
 	return true;
 }
 
@@ -367,6 +369,9 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
+
 	return true;
 }
 
diff --git a/src/backend/access/heap/README.WARM b/src/backend/access/heap/README.WARM
new file mode 100644
index 0000000..7c93a70
--- /dev/null
+++ b/src/backend/access/heap/README.WARM
@@ -0,0 +1,308 @@
+src/backend/access/heap/README.WARM
+
+Write Amplification Reduction Method (WARM)
+===========================================
+
+The Heap Only Tuple (HOT) feature greatly reduced redundant index
+entries and allowed re-use of the dead space occupied by previously
+updated or deleted tuples (see src/backend/access/heap/README.HOT).
+
+One of the necessary conditions for satisfying HOT update is that the
+update must not change a column used in any of the indexes on the table.
+The condition is sometimes hard to meet, especially for complex
+workloads with several indexes on large yet frequently updated tables.
+Worse, sometimes only one or two index columns may be updated, but the
+regular non-HOT update will still insert a new index entry in every
+index on the table, irrespective of whether the key pertaining to the
+index is changed or not.
+
+WARM is a technique devised to address these problems.
+
+
+Update Chains With Multiple Index Entries Pointing to the Root
+--------------------------------------------------------------
+
+When a non-HOT update is caused by an index key change, a new index
+entry must be inserted for the changed index. But if the index key
+hasn't changed for other indexes, we don't really need to insert a new
+entry. Even though the existing index entry is pointing to the old
+tuple, the new tuple is reachable via the t_ctid chain. To keep things
+simple, a WARM update requires that the heap block have enough space
+to store the new version of the tuple. This is the same requirement
+as for HOT updates.
+
+In WARM, we ensure that every index entry always points to the root of
+the WARM chain. In fact, a WARM chain looks exactly like a HOT chain
+except for the fact that there could be multiple index entries pointing
+to the root of the chain. So when a new entry is inserted in an index
+for the updated tuple during a WARM update, the new entry is made to
+point to the root of the WARM chain.
+
+For example, take a table with two columns and an index on each of the
+columns. When a tuple is first inserted into the table, each index has
+exactly one entry pointing to the tuple.
+
+	lp [1]
+	[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's entry (aaaa) also points to 1
+
+Now if the tuple's second column is updated and if there is room on the
+page, we perform a WARM update. To do so, Index1 does not get any new
+entry and Index2's new entry will still point to the root tuple of the
+chain.
+
+	lp [1]  [2]
+	[1111, aaaa]->[1111, bbbb]
+
+	Index1's entry (1111) points to 1
+	Index2's old entry (aaaa) points to 1
+	Index2's new entry (bbbb) also points to 1
+
+"An update chain which has more than one index entry pointing to its
+root line pointer is called a WARM chain, and the action that creates
+a WARM chain is called a WARM update."
+
+Since all indexes always point to the root of the WARM chain, even when
+there is more than one index entry, WARM chains can be pruned and
+dead tuples can be removed without a need to do corresponding index
+cleanup.
+
+While this solves the problem of pruning dead tuples from a HOT/WARM
+chain, it also opens up a new technical challenge because now we have a
+situation where a heap tuple is reachable from multiple index entries,
+each having a different index key. While MVCC still ensures that only
+valid tuples are returned, a tuple with a wrong index key may be
+returned because of wrong index entries. In the above example, tuple
+[1111, bbbb] is reachable from both keys (aaaa) as well as (bbbb). For
+this reason, tuples returned from a WARM chain must always be rechecked
+for index key-match.
+
+Recheck Index Key Against Heap Tuple
+------------------------------------
+
+Since every Index AM has its own notion of index tuples, each Index AM
+must implement its own method to recheck heap tuples. For example, a
+hash index stores the hash value of the column, and hence the recheck
+routine for the hash AM must first compute the hash of the heap attribute and
+then compare it against the value stored in the index tuple.
+
+The patch currently implements recheck routines for hash and btree
+indexes. If the table has an index whose AM doesn't provide a recheck
+routine, WARM updates are disabled on that table.
+
+Problem With Duplicate (key, ctid) Index Entries
+------------------------------------------------
+
+The index-key recheck logic works as long as no duplicate index keys
+point to the same WARM chain. Otherwise, the same valid tuple would be
+reachable via multiple index keys, each of which satisfies the index
+key check. In the above example, if the tuple [1111, bbbb] is
+again updated to [1111, aaaa] and if we insert a new index entry (aaaa)
+pointing to the root line pointer, we will end up with the following
+structure:
+
+	lp [1]  [2]  [3]
+	[1111, aaaa]->[1111, bbbb]->[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's oldest entry (aaaa) points to 1
+	Index2's old entry (bbbb) also points to 1
+	Index2's new entry (aaaa) also points to 1
+
+We must solve this problem to ensure that the same tuple is not
+reachable via multiple index pointers. There are couple of ways to
+address this issue:
+
+1. Do not allow WARM update to a tuple from a WARM chain. This
+guarantees that there can never be duplicate index entries to the same
+root line pointer because we must have checked for old and new index
+keys while doing the first WARM update.
+
+2. Do not allow duplicate (key, ctid) index pointers. In the above
+example, since (aaaa, 1) already exists in the index, we must not insert
+a duplicate index entry.
+
+The patch currently implements option 1, i.e. it does not allow WARM
+updates to a tuple from a WARM chain. HOT updates are fine because they
+do not add a new index entry.
+
+Even with this restriction, this is a significant improvement because
+the number of regular UPDATEs is cut down to half.
+
+Expression and Partial Indexes
+------------------------------
+
+Expressions may evaluate to the same value even if the underlying column
+values have changed. A simple example is an index on "lower(col)" which
+will return the same value if the new heap value only differs in the
+case sensitivity. So we cannot solely rely on the heap column check to
+decide whether or not to insert a new index entry for expression
+indexes. Similarly, for partial indexes, the predicate expression must
+be evaluated to decide whether or not to cause a new index entry when
+columns referenced by the predicate expression change.
+
+(None of this is currently implemented; we simply disallow a WARM
+update if a column used in an expression index or an index predicate
+has changed.)
+
+
+Efficiently Finding the Root Line Pointer
+-----------------------------------------
+
+During WARM update, we must be able to find the root line pointer of the
+tuple being updated. It must be noted that the t_ctid field in the heap
+tuple header is usually used to find the next tuple in the update chain.
+But the tuple that we are updating must be the last tuple in the update
+chain. In that case, the t_ctid field usually points to the tuple itself.
+So in theory, we could use the t_ctid to store additional information in
+the last tuple of the update chain, if the information about the tuple
+being the last tuple is stored elsewhere.
+
+We now utilize another bit from t_infomask2 to explicitly identify that
+this is the last tuple in the update chain.
+
+HEAP_LATEST_TUPLE - When this bit is set, the tuple is the last tuple in
+the update chain. The OffsetNumber part of t_ctid points to the root
+line pointer of the chain when HEAP_LATEST_TUPLE flag is set.
+
+If the UPDATE operation aborts, the last tuple in the update chain
+becomes dead. The root line pointer information stored in the tuple
+which remains the last valid tuple in the chain is also lost. In such
+rare cases, the root line pointer must be found the hard way, by
+scanning the entire heap page.
+
+Tracking WARM Chains
+--------------------
+
+When a tuple is WARM updated, the old, the new and every subsequent tuple in
+the chain are marked with a special HEAP_WARM_UPDATED flag. We use the last
+remaining bit in t_infomask2 to store this information.
+
+When a tuple is returned from a WARM chain, the caller must do additional
+checks to ensure that the tuple matches the index key. Even if the tuple
+precedes the WARM update in the chain, it must still be rechecked for the index
+key match (the case when an old tuple is returned by the new index key). So
+we must always follow the update chain to the end to check whether this is a
+WARM chain.
+
+Converting WARM chains back to HOT chains (VACUUM ?)
+----------------------------------------------------
+
+The current implementation of WARM allows only one WARM update per
+chain. This simplifies the design and addresses certain issues around
+duplicate key scans. But this also implies that the benefit of WARM will be
+no more than 50%, which is still significant, but if we could return
+WARM chains back to normal status, we could do far more WARM updates.
+
+A distinct property of a WARM chain is that at least one index has more
+than one live index entry pointing to the root of the chain. In other
+words, if we can remove the duplicate entry from every index or conclusively
+prove that there are no duplicate index entries for the root line
+pointer, the chain can again be marked as HOT.
+
+Here is one idea:
+
+A WARM chain has two parts, separated by the tuple that caused the WARM
+update. All tuples in each part have matching index keys, but certain
+index keys may not match between the two parts. Let's say we mark heap
+tuples in the second part with a special HEAP_WARM_TUPLE flag. Similarly, the
+new index entries caused by the first WARM update are marked with the
+INDEX_WARM_POINTER flag.
+
+This gives two distinct parts of the WARM chain: a first part where none of
+the tuples have the HEAP_WARM_TUPLE flag set, and a second part where every
+tuple has it set. Each part satisfies the HOT property on its own, i.e. all
+tuples have the same values for the indexed columns. But the two parts are
+separated by the WARM update, which breaks the HOT property for some indexes.
+
+Heap chain: [1] [2] [3] [4]
+			[aaaa, 1111] -> [aaaa, 1111] -> [bbbb, 1111]W -> [bbbb, 1111]W
+
+Index1: 	(aaaa) points to 1 (satisfies only tuples without W)
+			(bbbb)W points to 1 (satisfies only tuples marked with W)
+
+Index2:		(1111) points to 1 (satisfies tuples with and without W)
+
+
+It's clear that, for indexes with both pointers, a heap tuple without the
+HEAP_WARM_TUPLE flag will be reachable from the index pointer cleared of the
+INDEX_WARM_POINTER flag, and one with the HEAP_WARM_TUPLE flag will be
+reachable from the pointer with INDEX_WARM_POINTER. But for indexes which did
+not create a new entry, tuples with and without the HEAP_WARM_TUPLE flag will
+be reachable from the original index pointer, which doesn't have the
+INDEX_WARM_POINTER flag (there is no INDEX_WARM_POINTER pointer there).
+
+During the first heap scan of VACUUM, we look for tuples with HEAP_WARM_UPDATED
+set.  If either all or none of the live tuples in the chain are marked with the
+HEAP_WARM_TUPLE flag, then the chain is a candidate for HOT conversion. We
+remember the root line pointer and whether the tuples in the chain had
+HEAP_WARM_TUPLE flags set or not.
+
+If we have a WARM chain with HEAP_WARM_TUPLE set, then our goal is to remove
+the index pointers without INDEX_WARM_POINTER flags and vice versa. But there
+is a catch. For Index2 above, there is only one pointer and it does not have
+the INDEX_WARM_POINTER flag set. Since all heap tuples are reachable only via
+this pointer, it must not be removed. IOW, we should remove an index pointer
+without INDEX_WARM_POINTER iff another index pointer with INDEX_WARM_POINTER
+exists. Since index vacuum may visit these pointers in any order, we will need
+another index pass to detect redundant index pointers, which can safely be
+removed because all live tuples are reachable via the other index pointer. So
+in the first index pass we check which WARM candidates have 2 index pointers.
+In the second pass, we remove the redundant pointer and clear the
+INDEX_WARM_POINTER flag if that's the surviving index pointer. Note that
+all index pointers, either CLEAR or WARM, to dead tuples are removed during the
+first index scan itself.
+
+During the second heap scan, we fix WARM chains by clearing HEAP_WARM_UPDATED
+and HEAP_WARM_TUPLE flags on tuples.
+
+There are some more problems around aborted vacuums. For example, if vacuum
+aborts after clearing INDEX_WARM_POINTER flag but before removing the other
+index pointer, we will end up with two index pointers and none of those will
+have INDEX_WARM_POINTER set.  But since the HEAP_WARM_UPDATED flag on the heap
+tuple is still set, further WARM updates to the chain will be blocked. I guess
+we will need some special handling for the case of multiple index pointers where
+none of the index pointers has INDEX_WARM_POINTER flag set. We can either leave
+these WARM chains alone and let them die with a subsequent non-WARM update or
+apply heap-recheck logic during index vacuum to find the dead pointer.
+Given that vacuum-aborts are not common, I am inclined to leave this case
+unhandled. We must still check for presence of multiple index pointers without
+INDEX_WARM_POINTER flags and ensure that we don't accidentally remove either of
+these pointers and also must not clear WARM chains.
+
+CREATE INDEX CONCURRENTLY
+-------------------------
+
+Currently CREATE INDEX CONCURRENTLY (CIC) is implemented as a 3-phase
+process.  In the first phase, we create the catalog entry for the new index
+so that the index is visible to all other backends, but still don't use
+it for either read or write.  But we ensure that no new broken HOT
+chains are created by new transactions. In the second phase, we build
+the new index using a MVCC snapshot and then make the index available
+for inserts. We then do another pass over the index and insert any
+missing tuples, each time indexing only the tuple's root line pointer. See
+README.HOT for details about how HOT impacts CIC and how various
+challenges are tackled.
+
+WARM poses another challenge because it allows creation of HOT chains
+even when an index key is changed. But since the index is not ready for
+insertion until the second phase is over, we might end up with a
+situation where the HOT chain has tuples with different index columns,
+yet only one of these values is indexed by the new index. Note that
+during the third phase, we only index tuples whose root line pointer is
+missing from the index. But we can't easily check if the existing index
+tuple is actually indexing the heap tuple visible to the new MVCC
+snapshot. Finding that information will require us to query the index
+again for every tuple in the chain, especially if it's a WARM tuple.
+This would require repeated access to the index. Another option would be
+to return index keys along with the heap TIDs when index is scanned for
+collecting all indexed TIDs during third phase. We can then compare the
+heap tuple against the already indexed key and decide whether or not to
+index the new tuple.
+
+We solve this problem more simply by disallowing WARM updates until the
+index is ready for insertion. We don't need to disallow WARM on a
+wholesale basis; only those updates that change the columns of the
+new index are prevented from being WARM updates.
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 30262ef..b29b7da 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -1974,6 +1974,212 @@ heap_fetch(Relation relation,
 }
 
 /*
+ * Check status of a (possibly) WARM chain.
+ *
+ * This function looks at a HOT/WARM chain starting at tid and returns a
+ * bitmask of information. We only follow the chain as long as it's known to be
+ * a valid HOT chain. The information returned by the function consists of:
+ *
+ *  HCWC_WARM_UPDATED_TUPLE - a tuple with HEAP_WARM_UPDATED is found somewhere
+ *  						  in the chain. Note that when a tuple is WARM
+ *  						  updated, both old and new versions are marked
+ *  						  with this flag. So presence of this flag
+ *  						  indicates that a WARM update was performed on
+ *  						  this chain, but the update may have either
+ *  						  committed or aborted.
+ *
+ *  HCWC_WARM_TUPLE  - a tuple with HEAP_WARM_TUPLE is found somewhere in
+ *					  the chain. This flag is set only on the new version of
+ *					  the tuple while performing WARM update.
+ *
+ *  HCWC_CLEAR_TUPLE - a tuple without HEAP_WARM_TUPLE is found somewhere in
+ *  					 the chain. This implies that the WARM update either
+ *  					 aborted or is recent enough that the old tuple has
+ *  					 not yet been pruned away by chain pruning logic.
+ *
+ *	If stop_at_warm is true, we stop when the first HEAP_WARM_UPDATED tuple is
+ *	found and return information collected so far.
+ */
+HeapCheckWarmChainStatus
+heap_check_warm_chain(Page dp, ItemPointer tid, bool stop_at_warm)
+{
+	TransactionId				prev_xmax = InvalidTransactionId;
+	OffsetNumber				offnum;
+	HeapTupleData				heapTuple;
+	HeapCheckWarmChainStatus	status = 0;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+		{
+			if (ItemIdIsRedirected(lp))
+			{
+				/* Follow the redirect */
+				offnum = ItemIdGetRedirect(lp);
+				continue;
+			}
+			/* else must be end of chain */
+			break;
+		}
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		if (HeapTupleHeaderIsWarmUpdated(heapTuple.t_data))
+		{
+			/* We found a WARM_UPDATED tuple */
+			status |= HCWC_WARM_UPDATED_TUPLE;
+
+			/*
+			 * If we've been told to stop at the first WARM_UPDATED tuple, just
+			 * return whatever information collected so far.
+			 */
+			if (stop_at_warm)
+				return status;
+
+			/*
+			 * Remember whether it's a CLEAR or a WARM tuple.
+			 */
+			if (HeapTupleHeaderIsWarm(heapTuple.t_data))
+				status |= HCWC_WARM_TUPLE;
+			else
+				status |= HCWC_CLEAR_TUPLE;
+		}
+		else
+			/* Must be a regular, non-WARM tuple */
+			status |= HCWC_CLEAR_TUPLE;
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
+		/*
+		 * The chain can't continue past a tuple that stores its own root
+		 * line pointer offset; such a tuple is the latest in the chain.
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	/* Return whatever status we collected while walking the chain */
+	return status;
+}
+
+/*
+ * Scan through the WARM chain starting at tid and reset all WARM related
+ * flags. At the end, the chain will have all characteristics of a regular HOT
+ * chain.
+ *
+ * Return the number of cleared offnums. Cleared offnums are returned in the
+ * passed-in cleared_offnums array. The caller must ensure that the array is
+ * large enough to hold the maximum number of offnums that can be cleared by
+ * this invocation of heap_clear_warm_chain().
+ */
+int
+heap_clear_warm_chain(Page dp, ItemPointer tid, OffsetNumber *cleared_offnums)
+{
+	TransactionId				prev_xmax = InvalidTransactionId;
+	OffsetNumber				offnum;
+	HeapTupleData				heapTuple;
+	int							num_cleared = 0;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+		{
+			if (ItemIdIsRedirected(lp))
+			{
+				/* Follow the redirect */
+				offnum = ItemIdGetRedirect(lp);
+				continue;
+			}
+			/* else must be end of chain */
+			break;
+		}
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+		/*
+		 * Clear WARM_UPDATED and WARM flags.
+		 */
+		if (HeapTupleHeaderIsWarmUpdated(heapTuple.t_data))
+		{
+			HeapTupleHeaderClearWarmUpdated(heapTuple.t_data);
+			HeapTupleHeaderClearWarm(heapTuple.t_data);
+			cleared_offnums[num_cleared++] = offnum;
+		}
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
+		/*
+		 * The chain can't continue past a tuple that stores its own root
+		 * line pointer offset; such a tuple is the latest in the chain.
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	return num_cleared;
+}
+
+/*
  *	heap_hot_search_buffer	- search HOT chain for tuple satisfying snapshot
  *
  * On entry, *tid is the TID of a tuple (either a simple tuple, or the root
@@ -1993,11 +2199,14 @@ heap_fetch(Relation relation,
  * Unlike heap_fetch, the caller must already have pin and (at least) share
  * lock on the buffer; it is still pinned/locked at exit.  Also unlike
  * heap_fetch, we do not report any pgstats count; caller may do so if wanted.
+ *
+ * The caller should set *recheck to false on entry; it will be set to true
+ * on exit if a WARM tuple is encountered in the chain.
  */
 bool
 heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 					   Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call)
+					   bool *all_dead, bool first_call, bool *recheck)
 {
 	Page		dp = (Page) BufferGetPage(buffer);
 	TransactionId prev_xmax = InvalidTransactionId;
@@ -2051,9 +2260,12 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		ItemPointerSetOffsetNumber(&heapTuple->t_self, offnum);
 
 		/*
-		 * Shouldn't see a HEAP_ONLY tuple at chain start.
+		 * Shouldn't see a HEAP_ONLY tuple at chain start, unless we are
+		 * dealing with a WARM-updated tuple, in which case deferred triggers
+		 * may ask us to fetch a WARM tuple from the middle of a chain.
 		 */
-		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple))
+		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple) &&
+				!HeapTupleIsWarmUpdated(heapTuple))
 			break;
 
 		/*
@@ -2066,6 +2278,20 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 			break;
 
 		/*
+		 * Check if there exists a WARM tuple somewhere down the chain and set
+		 * recheck to TRUE.
+		 *
+		 * XXX This is not very efficient right now, and we should look for
+		 * possible improvements here.
+		 */
+		if (recheck && *recheck == false)
+		{
+			HeapCheckWarmChainStatus status;
+			status = heap_check_warm_chain(dp, &heapTuple->t_self, true);
+			*recheck = HCWC_IS_WARM_UPDATED(status);
+		}
+
+		/*
 		 * When first_call is true (and thus, skip is initially false) we'll
 		 * return the first tuple we find.  But on later passes, heapTuple
 		 * will initially be pointing to the tuple we returned last time.
@@ -2114,7 +2340,8 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		 * Check to see if HOT chain continues past this tuple; if so fetch
 		 * the next offnum and loop around.
 		 */
-		if (HeapTupleIsHotUpdated(heapTuple))
+		if (HeapTupleIsHotUpdated(heapTuple) &&
+			!HeapTupleHeaderHasRootOffset(heapTuple->t_data))
 		{
 			Assert(ItemPointerGetBlockNumber(&heapTuple->t_data->t_ctid) ==
 				   ItemPointerGetBlockNumber(tid));
@@ -2138,18 +2365,41 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
  */
 bool
 heap_hot_search(ItemPointer tid, Relation relation, Snapshot snapshot,
-				bool *all_dead)
+				bool *all_dead, bool *recheck, Buffer *cbuffer,
+				HeapTuple heapTuple)
 {
 	bool		result;
 	Buffer		buffer;
-	HeapTupleData heapTuple;
+	ItemPointerData ret_tid = *tid;
 
 	buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	LockBuffer(buffer, BUFFER_LOCK_SHARE);
-	result = heap_hot_search_buffer(tid, relation, buffer, snapshot,
-									&heapTuple, all_dead, true);
-	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-	ReleaseBuffer(buffer);
+	result = heap_hot_search_buffer(&ret_tid, relation, buffer, snapshot,
+									heapTuple, all_dead, true, recheck);
+
+	/*
+	 * If we are returning a potential candidate tuple from this chain and the
+	 * caller has requested the "recheck" hint, keep the buffer locked and
+	 * pinned. The caller must release the lock and pin on the buffer in all
+	 * such cases.
+	 */
+	if (!result || !recheck || !(*recheck))
+	{
+		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+		ReleaseBuffer(buffer);
+	}
+
+	/*
+	 * Set the caller-supplied tid to the actual location of the tuple being
+	 * returned.
+	 */
+	if (result)
+	{
+		*tid = ret_tid;
+		if (cbuffer)
+			*cbuffer = buffer;
+	}
+
 	return result;
 }
 
@@ -2792,7 +3042,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		{
 			XLogRecPtr	recptr;
 			xl_heap_multi_insert *xlrec;
-			uint8		info = XLOG_HEAP2_MULTI_INSERT;
+			uint8		info = XLOG_HEAP_MULTI_INSERT;
 			char	   *tupledata;
 			int			totaldatalen;
 			char	   *scratchptr = scratch;
@@ -2889,7 +3139,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			/* filtering by origin on a row level is much more efficient */
 			XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN);
 
-			recptr = XLogInsert(RM_HEAP2_ID, info);
+			recptr = XLogInsert(RM_HEAP_ID, info);
 
 			PageSetLSN(page, recptr);
 		}
@@ -3278,7 +3528,7 @@ l1:
 							  &new_xmax, &new_infomask, &new_infomask2);
 
 	/*
-	 * heap_get_root_tuple_one() may call palloc, which is disallowed once we
+	 * heap_get_root_tuple() may call palloc, which is disallowed once we
 	 * enter the critical section. So check if the root offset is cached in the
 	 * tuple and if not, fetch that information hard way before entering the
 	 * critical section.
@@ -3313,7 +3563,9 @@ l1:
 	}
 
 	/* store transaction information of xact deleting the tuple */
-	tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	tp.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(tp.t_data))
+		tp.t_data->t_infomask &= ~HEAP_MOVED;
 	tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 	tp.t_data->t_infomask |= new_infomask;
 	tp.t_data->t_infomask2 |= new_infomask2;
@@ -3508,15 +3760,19 @@ simple_heap_delete(Relation relation, ItemPointer tid)
 HTSU_Result
 heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode)
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update)
 {
 	HTSU_Result result;
 	TransactionId xid = GetCurrentTransactionId();
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *exprindx_attrs;
 	Bitmapset  *interesting_attrs;
 	Bitmapset  *modified_attrs;
+	Bitmapset  *notready_attrs;
+	List	   *indexattrsList;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3537,6 +3793,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		have_tuple_lock = false;
 	bool		iscombo;
 	bool		use_hot_update = false;
+	bool		use_warm_update = false;
 	bool		hot_attrs_checked = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
@@ -3562,6 +3819,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot update tuples during a parallel operation")));
 
+	/* Assume no-warm update */
+	if (warm_update)
+		*warm_update = false;
+
 	/*
 	 * Fetch the list of attributes to be checked for various operations.
 	 *
@@ -3582,7 +3843,12 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	exprindx_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_EXPR_PREDICATE);
+	notready_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_NOTREADY);
 
+	indexattrsList = RelationGetIndexAttrList(relation);
 
 	block = ItemPointerGetBlockNumber(otid);
 	offnum = ItemPointerGetOffsetNumber(otid);
@@ -3605,8 +3871,11 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 		interesting_attrs = bms_add_members(interesting_attrs, hot_attrs);
 		hot_attrs_checked = true;
 	}
+
 	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, exprindx_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, notready_attrs);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -3653,6 +3922,9 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
 												  &oldtup, newtup);
 
+	if (modified_attrsp)
+		*modified_attrsp = bms_copy(modified_attrs);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3908,8 +4180,10 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(exprindx_attrs);
 		bms_free(modified_attrs);
 		bms_free(interesting_attrs);
+		bms_free(notready_attrs);
 		return result;
 	}
 
@@ -4073,7 +4347,9 @@ l2:
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
-		oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		oldtup.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(oldtup.t_data))
+			oldtup.t_data->t_infomask &= ~HEAP_MOVED;
 		oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		HeapTupleClearHotUpdated(&oldtup);
 		/* ... and store info about transaction updating this tuple */
@@ -4227,6 +4503,60 @@ l2:
 		 */
 		if (hot_attrs_checked && !bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
+		else
+		{
+			/*
+			 * If there are no WARM updates on this chain yet, let this update
+			 * be a WARM update. We must not do another WARM update even if the
+			 * previous WARM update at the end of the chain aborted; that's why
+			 * we look at the HEAP_WARM_UPDATED flag.
+			 *
+			 * We don't do WARM updates if one of the columns used in index
+			 * expressions is being modified. Since expressions may evaluate to
+			 * the same value, even when heap values change, we don't have a
+			 * good way to deal with duplicate key scans when expressions are
+			 * used in the index.
+			 *
+			 * We check if the HOT attrs are a subset of the modified
+			 * attributes. Since HOT attrs include all index attributes, this
+			 * allows us to avoid doing a WARM update when all index attributes
+			 * are being updated. Performing a WARM update is not a great idea
+			 * in that case because all indexes will receive a new entry anyway.
+			 *
+			 * We also disable WARM temporarily if we are modifying a column
+			 * which is used by a new index that's being added. We can't insert
+			 * which is used by a new index that's being added. We can't insert
+			 * new entries into such indexes, and hence must not create WARM
+			 * chains which would be broken with respect to the new index being
+			 * added.
+			if (relation->rd_supportswarm &&
+				!HeapTupleIsWarmUpdated(&oldtup) &&
+				!bms_overlap(modified_attrs, exprindx_attrs) &&
+				!bms_is_subset(hot_attrs, modified_attrs) &&
+				!bms_overlap(notready_attrs, modified_attrs))
+			{
+				int num_indexes, num_updating_indexes;
+				ListCell *l;
+
+				/*
+				 * Everything else is OK. Now check whether the update will
+				 * require new entries in at most 50% of the indexes. Above
+				 * that, we just do a regular update and save the WARM cleanup
+				 * cost.
+				 */
+				num_indexes = list_length(indexattrsList);
+				num_updating_indexes = 0;
+				foreach (l, indexattrsList)
+				{
+					Bitmapset  *b = (Bitmapset *) lfirst(l);
+					if (bms_overlap(b, modified_attrs))
+						num_updating_indexes++;
+				}
+
+				if ((double)num_updating_indexes/num_indexes <= 0.5)
+					use_warm_update = true;
+			}
+		}
 	}
 	else
 	{
@@ -4273,6 +4603,32 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+
+		/*
+		 * Even if we are doing a HOT update, we must carry forward the WARM
+		 * flag because we may have already inserted another index entry
+		 * pointing to our root and a third entry may create duplicates.
+		 *
+		 * Note: If we ever have a mechanism to avoid duplicate <key, TID> in
+		 * indexes, we could look at relaxing this restriction and allow even
+		 * more WARM updates.
+		 */
+		if (HeapTupleIsWarmUpdated(&oldtup))
+		{
+			HeapTupleSetWarmUpdated(heaptup);
+			HeapTupleSetWarmUpdated(newtup);
+		}
+
+		/*
+		 * If the old tuple is a WARM tuple then mark the new tuple as a WARM
+		 * tuple as well.
+		 */
+		if (HeapTupleIsWarm(&oldtup))
+		{
+			HeapTupleSetWarm(heaptup);
+			HeapTupleSetWarm(newtup);
+		}
+
 		/*
 		 * For HOT (or WARM) updated tuples, we store the offset of the root
 		 * line pointer of this chain in the ip_posid field of the new tuple.
@@ -4285,12 +4641,45 @@ l2:
 		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
 			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
+	else if (use_warm_update)
+	{
+		/* Mark the old tuple as HOT-updated */
+		HeapTupleSetHotUpdated(&oldtup);
+		HeapTupleSetWarmUpdated(&oldtup);
+
+		/* And mark the new tuple as heap-only */
+		HeapTupleSetHeapOnly(heaptup);
+		/* Mark the new tuple as WARM tuple */
+		HeapTupleSetWarmUpdated(heaptup);
+		/* This update also starts the WARM chain */
+		HeapTupleSetWarm(heaptup);
+		Assert(!HeapTupleIsWarm(&oldtup));
+
+		/* Mark the caller's copy too, in case different from heaptup */
+		HeapTupleSetHeapOnly(newtup);
+		HeapTupleSetWarmUpdated(newtup);
+		HeapTupleSetWarm(newtup);
+
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
+		/* Let the caller know we did a WARM update */
+		if (warm_update)
+			*warm_update = true;
+	}
 	else
 	{
 		/* Make sure tuples are correctly marked as not-HOT */
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		HeapTupleClearWarmUpdated(heaptup);
+		HeapTupleClearWarmUpdated(newtup);
+		HeapTupleClearWarm(heaptup);
+		HeapTupleClearWarm(newtup);
 		root_offnum = InvalidOffsetNumber;
 	}
 
@@ -4309,7 +4698,9 @@ l2:
 	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
-	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	oldtup.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(oldtup.t_data))
+		oldtup.t_data->t_infomask &= ~HEAP_MOVED;
 	oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 	/* ... and store info about transaction updating this tuple */
 	Assert(TransactionIdIsValid(xmax_old_tuple));
@@ -4400,7 +4791,10 @@ l2:
 	if (have_tuple_lock)
 		UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode);
 
-	pgstat_count_heap_update(relation, use_hot_update);
+	/*
+	 * Count HOT and WARM updates separately
+	 */
+	pgstat_count_heap_update(relation, use_hot_update, use_warm_update);
 
 	/*
 	 * If heaptup is a private copy, release it.  Don't forget to copy t_self
@@ -4420,6 +4814,8 @@ l2:
 	bms_free(id_attrs);
 	bms_free(modified_attrs);
 	bms_free(interesting_attrs);
+	bms_free(exprindx_attrs);
+	bms_free(notready_attrs);
 
 	return HeapTupleMayBeUpdated;
 }
@@ -4496,9 +4892,47 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 	}
 	else
 	{
+		bool res;
+		bool value1_free = false, value2_free = false;
+
 		Assert(attrnum <= tupdesc->natts);
 		att = tupdesc->attrs[attrnum - 1];
-		return datumIsEqual(value1, value2, att->attbyval, att->attlen);
+
+		/*
+		 * Fetch untoasted values before doing the comparison.
+		 *
+		 * It's OK for HOT to declare certain values non-equal even if they
+		 * are physically equal; at worst, this causes some potential HOT
+		 * updates to be done in a non-HOT manner. But WARM relies on index
+		 * recheck to decide which index pointer should return which row in a
+		 * WARM chain. For that it's necessary that if old and new heap values
+		 * are declared unequal here, they also produce different index values.
+		 * We are not so much bothered about logical equality since recheck
+		 * also uses datumIsEqual, but if datumIsEqual returns false here, it
+		 * should return false during index recheck too. So we must detoast
+		 * the heap values before doing the comparison. As a bonus, this might
+		 * allow a HOT update that would otherwise have been missed.
+		 */
+		if ((att->attlen == -1) && VARATT_IS_EXTENDED(value1))
+		{
+			value1 = PointerGetDatum(heap_tuple_untoast_attr((struct varlena *)
+					DatumGetPointer(value1)));
+			value1_free = true;
+		}
+
+		if ((att->attlen == -1) && VARATT_IS_EXTENDED(value2))
+		{
+			value2 = PointerGetDatum(heap_tuple_untoast_attr((struct varlena *)
+					DatumGetPointer(value2)));
+			value2_free = true;
+		}
+		res = datumIsEqual(value1, value2, att->attbyval, att->attlen);
+		if (value1_free)
+			pfree(DatumGetPointer(value1));
+		if (value2_free)
+			pfree(DatumGetPointer(value2));
+		return res;
 	}
 }
 
@@ -4540,7 +4974,8 @@ HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
  * via ereport().
  */
 void
-simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
+simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup,
+		Bitmapset **modified_attrs, bool *warm_update)
 {
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
@@ -4549,7 +4984,7 @@ simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
 	result = heap_update(relation, otid, tup,
 						 GetCurrentCommandId(true), InvalidSnapshot,
 						 true /* wait for commit */ ,
-						 &hufd, &lockmode);
+						 &hufd, &lockmode, modified_attrs, warm_update);
 	switch (result)
 	{
 		case HeapTupleSelfUpdated:
@@ -6226,7 +6661,9 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	PageSetPrunable(page, RecentGlobalXmin);
 
 	/* store transaction information of xact deleting the tuple */
-	tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	tp.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(tp.t_data))
+		tp.t_data->t_infomask &= ~HEAP_MOVED;
 	tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 
 	/*
@@ -6800,7 +7237,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid,
 	 * Old-style VACUUM FULL is gone, but we have to keep this code as long as
 	 * we support having MOVED_OFF/MOVED_IN tuples in the database.
 	 */
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 
@@ -6819,7 +7256,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid,
 			 * have failed; whereas a non-dead MOVED_IN tuple must mean the
 			 * xvac transaction succeeded.
 			 */
-			if (tuple->t_infomask & HEAP_MOVED_OFF)
+			if (HeapTupleHeaderIsMovedOff(tuple))
 				frz->frzflags |= XLH_INVALID_XVAC;
 			else
 				frz->frzflags |= XLH_FREEZE_XVAC;
@@ -7289,7 +7726,7 @@ heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple)
 			return true;
 	}
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 		if (TransactionIdIsNormal(xid))
@@ -7372,7 +7809,7 @@ heap_tuple_needs_freeze(HeapTupleHeader tuple, TransactionId cutoff_xid,
 			return true;
 	}
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 		if (TransactionIdIsNormal(xid) &&
@@ -7398,7 +7835,7 @@ HeapTupleHeaderAdvanceLatestRemovedXid(HeapTupleHeader tuple,
 	TransactionId xmax = HeapTupleHeaderGetUpdateXid(tuple);
 	TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		if (TransactionIdPrecedes(*latestRemovedXid, xvac))
 			*latestRemovedXid = xvac;
@@ -7447,6 +7884,36 @@ log_heap_cleanup_info(RelFileNode rnode, TransactionId latestRemovedXid)
 }
 
 /*
+ * Perform XLogInsert for a heap-warm-clear operation.  Caller must already
+ * have modified the buffer and marked it dirty.
+ */
+XLogRecPtr
+log_heap_warmclear(Relation reln, Buffer buffer,
+			   OffsetNumber *cleared, int ncleared)
+{
+	xl_heap_warmclear	xlrec;
+	XLogRecPtr			recptr;
+
+	/* Caller should not call me on a non-WAL-logged relation */
+	Assert(RelationNeedsWAL(reln));
+
+	xlrec.ncleared = ncleared;
+
+	XLogBeginInsert();
+	XLogRegisterData((char *) &xlrec, SizeOfHeapWarmClear);
+
+	XLogRegisterBuffer(0, buffer, REGBUF_STANDARD);
+
+	if (ncleared > 0)
+		XLogRegisterBufData(0, (char *) cleared,
+							ncleared * sizeof(OffsetNumber));
+
+	recptr = XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_WARMCLEAR);
+
+	return recptr;
+}
+
+/*
  * Perform XLogInsert for a heap-clean operation.  Caller must already
  * have modified the buffer and marked it dirty.
  *
@@ -7601,6 +8068,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	bool		need_tuple_data = RelationIsLogicallyLogged(reln);
 	bool		init;
 	int			bufflags;
+	bool		warm_update = false;
 
 	/* Caller should not call me on a non-WAL-logged relation */
 	Assert(RelationNeedsWAL(reln));
@@ -7612,6 +8080,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	else
 		info = XLOG_HEAP_UPDATE;
 
+	if (HeapTupleIsWarmUpdated(newtup))
+		warm_update = true;
+
 	/*
 	 * If the old and new tuple are on the same page, we only need to log the
 	 * parts of the new tuple that were changed.  That saves on the amount of
@@ -7685,6 +8156,8 @@ log_heap_update(Relation reln, Buffer oldbuf,
 				xlrec.flags |= XLH_UPDATE_CONTAINS_OLD_KEY;
 		}
 	}
+	if (warm_update)
+		xlrec.flags |= XLH_UPDATE_WARM_UPDATE;
 
 	/* If new tuple is the single and first tuple on page... */
 	if (ItemPointerGetOffsetNumber(&(newtup->t_self)) == FirstOffsetNumber &&
@@ -8099,6 +8572,60 @@ heap_xlog_clean(XLogReaderState *record)
 		XLogRecordPageWithFreeSpace(rnode, blkno, freespace);
 }
 
+
+/*
+ * Handles the XLOG_HEAP2_WARMCLEAR record type
+ */
+static void
+heap_xlog_warmclear(XLogReaderState *record)
+{
+	XLogRecPtr	lsn = record->EndRecPtr;
+	xl_heap_warmclear	*xlrec = (xl_heap_warmclear *) XLogRecGetData(record);
+	Buffer		buffer;
+	RelFileNode rnode;
+	BlockNumber blkno;
+	XLogRedoAction action;
+
+	XLogRecGetBlockTag(record, 0, &rnode, NULL, &blkno);
+
+	/*
+	 * If we have a full-page image, restore it (using a cleanup lock) and
+	 * we're done.
+	 */
+	action = XLogReadBufferForRedoExtended(record, 0, RBM_NORMAL, true,
+										   &buffer);
+	if (action == BLK_NEEDS_REDO)
+	{
+		Page		page = (Page) BufferGetPage(buffer);
+		OffsetNumber *cleared;
+		int			ncleared;
+		Size		datalen;
+		int			i;
+
+		cleared = (OffsetNumber *) XLogRecGetBlockData(record, 0, &datalen);
+
+		ncleared = xlrec->ncleared;
+
+		for (i = 0; i < ncleared; i++)
+		{
+			ItemId			lp;
+			OffsetNumber	offnum = cleared[i];
+			HeapTupleData	heapTuple;
+
+			lp = PageGetItemId(page, offnum);
+			heapTuple.t_data = (HeapTupleHeader) PageGetItem(page, lp);
+
+			HeapTupleHeaderClearWarmUpdated(heapTuple.t_data);
+			HeapTupleHeaderClearWarm(heapTuple.t_data);
+		}
+
+		PageSetLSN(page, lsn);
+		MarkBufferDirty(buffer);
+	}
+	if (BufferIsValid(buffer))
+		UnlockReleaseBuffer(buffer);
+}
+
 /*
  * Replay XLOG_HEAP2_VISIBLE record.
  *
@@ -8345,7 +8872,9 @@ heap_xlog_delete(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		HeapTupleHeaderClearHotUpdated(htup);
 		fix_infomask_from_infobits(xlrec->infobits_set,
@@ -8366,7 +8895,7 @@ heap_xlog_delete(XLogReaderState *record)
 		if (!HeapTupleHeaderHasRootOffset(htup))
 		{
 			OffsetNumber	root_offnum;
-			root_offnum = heap_get_root_tuple(page, xlrec->offnum); 
+			root_offnum = heap_get_root_tuple(page, xlrec->offnum);
 			HeapTupleHeaderSetHeapLatest(htup, root_offnum);
 		}
 
@@ -8662,16 +9191,22 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 	Size		freespace = 0;
 	XLogRedoAction oldaction;
 	XLogRedoAction newaction;
+	bool		warm_update = false;
 
 	/* initialize to keep the compiler quiet */
 	oldtup.t_data = NULL;
 	oldtup.t_len = 0;
 
+	if (xlrec->flags & XLH_UPDATE_WARM_UPDATE)
+		warm_update = true;
+
 	XLogRecGetBlockTag(record, 0, &rnode, NULL, &newblk);
 	if (XLogRecGetBlockTag(record, 1, NULL, NULL, &oldblk))
 	{
 		/* HOT updates are never done across pages */
 		Assert(!hot_update);
+		/* WARM updates are never done across pages */
+		Assert(!warm_update);
 	}
 	else
 		oldblk = newblk;
@@ -8731,6 +9266,11 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 								   &htup->t_infomask2);
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
+
+		/* Mark the old tuple as WARM-updated */
+		if (warm_update)
+			HeapTupleHeaderSetWarmUpdated(htup);
+
 		/* Set forward chain link in t_ctid */
 		HeapTupleHeaderSetNextTid(htup, &newtid);
 
@@ -8866,6 +9406,10 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
 
+		/* Mark the new tuple as WARM-updated */
+		if (warm_update)
+			HeapTupleHeaderSetWarmUpdated(htup);
+
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
@@ -8993,7 +9537,9 @@ heap_xlog_lock(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		fix_infomask_from_infobits(xlrec->infobits_set, &htup->t_infomask,
 								   &htup->t_infomask2);
@@ -9072,7 +9618,9 @@ heap_xlog_lock_updated(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		fix_infomask_from_infobits(xlrec->infobits_set, &htup->t_infomask,
 								   &htup->t_infomask2);
@@ -9141,6 +9689,9 @@ heap_redo(XLogReaderState *record)
 		case XLOG_HEAP_INSERT:
 			heap_xlog_insert(record);
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			heap_xlog_multi_insert(record);
+			break;
 		case XLOG_HEAP_DELETE:
 			heap_xlog_delete(record);
 			break;
@@ -9169,7 +9720,7 @@ heap2_redo(XLogReaderState *record)
 {
 	uint8		info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
 
-	switch (info & XLOG_HEAP_OPMASK)
+	switch (info & XLOG_HEAP2_OPMASK)
 	{
 		case XLOG_HEAP2_CLEAN:
 			heap_xlog_clean(record);
@@ -9183,9 +9734,6 @@ heap2_redo(XLogReaderState *record)
 		case XLOG_HEAP2_VISIBLE:
 			heap_xlog_visible(record);
 			break;
-		case XLOG_HEAP2_MULTI_INSERT:
-			heap_xlog_multi_insert(record);
-			break;
 		case XLOG_HEAP2_LOCK_UPDATED:
 			heap_xlog_lock_updated(record);
 			break;
@@ -9199,6 +9747,9 @@ heap2_redo(XLogReaderState *record)
 		case XLOG_HEAP2_REWRITE:
 			heap_xlog_logical_rewrite(record);
 			break;
+		case XLOG_HEAP2_WARMCLEAR:
+			heap_xlog_warmclear(record);
+			break;
 		default:
 			elog(PANIC, "heap2_redo: unknown op code %u", info);
 	}
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index f54337c..6a3baff 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -834,6 +834,13 @@ heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				continue;
 
+			/*
+			 * If the tuple has root line pointer, it must be the end of the
+			 * chain
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			/* Set up to scan the HOT-chain */
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
@@ -928,6 +935,6 @@ heap_get_root_tuple(Page page, OffsetNumber target_offnum)
 void
 heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 {
-	return heap_get_root_tuples_internal(page, InvalidOffsetNumber,
+	heap_get_root_tuples_internal(page, InvalidOffsetNumber,
 			root_offsets);
 }
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index 2d3ae9b..bd469ee 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -404,6 +404,14 @@ rewrite_heap_tuple(RewriteState state,
 		old_tuple->t_data->t_infomask & HEAP_XACT_MASK;
 
 	/*
+	 * We must clear the HEAP_WARM_TUPLE flag if the HEAP_WARM_UPDATED flag
+	 * is cleared above.
+	 */
+	if (HeapTupleHeaderIsWarmUpdated(old_tuple->t_data))
+		HeapTupleHeaderClearWarm(new_tuple->t_data);
+
+	/*
 	 * While we have our hands on the tuple, we may as well freeze any
 	 * eligible xmin or xmax, so that future VACUUM effort can be saved.
 	 */
@@ -428,7 +436,7 @@ rewrite_heap_tuple(RewriteState state,
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
 
-		/* 
+		/*
 		 * We've already checked that this is not the last tuple in the chain,
 		 * so fetch the next TID in the chain.
 		 */
@@ -737,7 +745,7 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		/* 
+		/*
 		 * Set t_ctid just to ensure that block number is copied correctly, but
 		 * then immediately mark the tuple as the latest.
 		 */
diff --git a/src/backend/access/heap/tuptoaster.c b/src/backend/access/heap/tuptoaster.c
index aa5a45d..bab48fd 100644
--- a/src/backend/access/heap/tuptoaster.c
+++ b/src/backend/access/heap/tuptoaster.c
@@ -1688,7 +1688,8 @@ toast_save_datum(Relation rel, Datum value,
 							 toastrel,
 							 toastidxs[i]->rd_index->indisunique ?
 							 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-							 NULL);
+							 NULL,
+							 false);
 		}
 
 		/*
diff --git a/src/backend/access/index/genam.c b/src/backend/access/index/genam.c
index a91fda7..d523c8f 100644
--- a/src/backend/access/index/genam.c
+++ b/src/backend/access/index/genam.c
@@ -127,6 +127,8 @@ RelationGetIndexScan(Relation indexRelation, int nkeys, int norderbys)
 	scan->xs_cbuf = InvalidBuffer;
 	scan->xs_continue_hot = false;
 
+	scan->indexInfo = NULL;
+
 	return scan;
 }
 
diff --git a/src/backend/access/index/indexam.c b/src/backend/access/index/indexam.c
index cc5ac8b..d048714 100644
--- a/src/backend/access/index/indexam.c
+++ b/src/backend/access/index/indexam.c
@@ -197,7 +197,8 @@ index_insert(Relation indexRelation,
 			 ItemPointer heap_t_ctid,
 			 Relation heapRelation,
 			 IndexUniqueCheck checkUnique,
-			 IndexInfo *indexInfo)
+			 IndexInfo *indexInfo,
+			 bool warm_update)
 {
 	RELATION_CHECKS;
 	CHECK_REL_PROCEDURE(aminsert);
@@ -207,6 +208,12 @@ index_insert(Relation indexRelation,
 									   (HeapTuple) NULL,
 									   InvalidBuffer);
 
+	if (warm_update)
+	{
+		Assert(indexRelation->rd_amroutine->amwarminsert != NULL);
+		return indexRelation->rd_amroutine->amwarminsert(indexRelation, values,
+				isnull, heap_t_ctid, heapRelation, checkUnique, indexInfo);
+	}
 	return indexRelation->rd_amroutine->aminsert(indexRelation, values, isnull,
 												 heap_t_ctid, heapRelation,
 												 checkUnique, indexInfo);
@@ -291,6 +298,25 @@ index_beginscan_internal(Relation indexRelation,
 	scan->parallel_scan = pscan;
 	scan->xs_temp_snap = temp_snap;
 
+	/*
+	 * If the index supports recheck, make sure that the index tuple is saved
+	 * during index scans. Also build and cache IndexInfo which is used by
+	 * amrecheck routine.
+	 *
+	 * XXX Ideally, we should look at all indexes on the table and check if
+	 * WARM is at all supported on the base table. If WARM is not supported
+	 * then we don't need to do any recheck. RelationGetIndexAttrBitmap() does
+	 * do that and sets rd_supportswarm after looking at all indexes. But we
+	 * don't know if the function was called earlier in the session when we're
+	 * here. We can't call it now because there exists a risk of causing
+	 * deadlock.
+	 */
+	if (indexRelation->rd_amroutine->amrecheck)
+	{
+		scan->xs_want_itup = true;
+		scan->indexInfo = BuildIndexInfo(indexRelation);
+	}
+
 	return scan;
 }
 
@@ -358,6 +384,10 @@ index_endscan(IndexScanDesc scan)
 	if (scan->xs_temp_snap)
 		UnregisterSnapshot(scan->xs_snapshot);
 
+	/* Free cached IndexInfo, if any */
+	if (scan->indexInfo)
+		pfree(scan->indexInfo);
+
 	/* Release the scan data structure itself */
 	IndexScanEnd(scan);
 }
@@ -535,7 +565,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
 	/*
 	 * The AM's amgettuple proc finds the next index entry matching the scan
 	 * keys, and puts the TID into scan->xs_ctup.t_self.  It should also set
-	 * scan->xs_recheck and possibly scan->xs_itup/scan->xs_hitup, though we
+	 * scan->xs_tuple_recheck and possibly scan->xs_itup/scan->xs_hitup, though we
 	 * pay no attention to those fields here.
 	 */
 	found = scan->indexRelation->rd_amroutine->amgettuple(scan, direction);
@@ -574,7 +604,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
  * dropped in a future index_getnext_tid, index_fetch_heap or index_endscan
  * call).
  *
- * Note: caller must check scan->xs_recheck, and perform rechecking of the
+ * Note: caller must check scan->xs_tuple_recheck, and perform rechecking of the
  * scan keys if required.  We do not do that here because we don't have
  * enough information to do it efficiently in the general case.
  * ----------------
@@ -585,6 +615,7 @@ index_fetch_heap(IndexScanDesc scan)
 	ItemPointer tid = &scan->xs_ctup.t_self;
 	bool		all_dead = false;
 	bool		got_heap_tuple;
+	bool		tuple_recheck;
 
 	/* We can skip the buffer-switching logic if we're in mid-HOT chain. */
 	if (!scan->xs_continue_hot)
@@ -603,6 +634,8 @@ index_fetch_heap(IndexScanDesc scan)
 			heap_page_prune_opt(scan->heapRelation, scan->xs_cbuf);
 	}
 
+	tuple_recheck = false;
+
 	/* Obtain share-lock on the buffer so we can examine visibility */
 	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_SHARE);
 	got_heap_tuple = heap_hot_search_buffer(tid, scan->heapRelation,
@@ -610,32 +643,60 @@ index_fetch_heap(IndexScanDesc scan)
 											scan->xs_snapshot,
 											&scan->xs_ctup,
 											&all_dead,
-											!scan->xs_continue_hot);
-	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+											!scan->xs_continue_hot,
+											&tuple_recheck);
 
 	if (got_heap_tuple)
 	{
+		bool res = true;
+
+		/*
+		 * OK, we got a tuple which satisfies the snapshot, but if it's part of
+		 * a WARM chain, we must do additional checks to ensure that we are
+		 * indeed returning a correct tuple. Note that if the index AM does not
+		 * implement the amrecheck method, then we skip the additional checks,
+		 * since WARM must have been disabled on such tables.
+		 */
+		if (tuple_recheck && scan->xs_itup &&
+			scan->indexRelation->rd_amroutine->amrecheck)
+		{
+			res = scan->indexRelation->rd_amroutine->amrecheck(
+						scan->indexRelation,
+						scan->indexInfo,
+						scan->xs_itup,
+						scan->heapRelation,
+						&scan->xs_ctup);
+		}
+
+		LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+
 		/*
 		 * Only in a non-MVCC snapshot can more than one member of the HOT
 		 * chain be visible.
 		 */
 		scan->xs_continue_hot = !IsMVCCSnapshot(scan->xs_snapshot);
 		pgstat_count_heap_fetch(scan->indexRelation);
-		return &scan->xs_ctup;
+
+		if (res)
+			return &scan->xs_ctup;
 	}
+	else
+	{
+		LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
 
-	/* We've reached the end of the HOT chain. */
-	scan->xs_continue_hot = false;
+		/* We've reached the end of the HOT chain. */
+		scan->xs_continue_hot = false;
 
-	/*
-	 * If we scanned a whole HOT chain and found only dead tuples, tell index
-	 * AM to kill its entry for that TID (this will take effect in the next
-	 * amgettuple call, in index_getnext_tid).  We do not do this when in
-	 * recovery because it may violate MVCC to do so.  See comments in
-	 * RelationGetIndexScan().
-	 */
-	if (!scan->xactStartedInRecovery)
-		scan->kill_prior_tuple = all_dead;
+		/*
+		 * If we scanned a whole HOT chain and found only dead tuples, tell index
+		 * AM to kill its entry for that TID (this will take effect in the next
+		 * amgettuple call, in index_getnext_tid).  We do not do this when in
+		 * recovery because it may violate MVCC to do so.  See comments in
+		 * RelationGetIndexScan().
+		 */
+		if (!scan->xactStartedInRecovery)
+			scan->kill_prior_tuple = all_dead;
+	}
 
 	return NULL;
 }
diff --git a/src/backend/access/nbtree/nbtinsert.c b/src/backend/access/nbtree/nbtinsert.c
index 6dca810..463d4bf 100644
--- a/src/backend/access/nbtree/nbtinsert.c
+++ b/src/backend/access/nbtree/nbtinsert.c
@@ -20,6 +20,7 @@
 #include "access/nbtxlog.h"
 #include "access/transam.h"
 #include "access/xloginsert.h"
+#include "catalog/index.h"
 #include "miscadmin.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
@@ -250,6 +251,10 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 	BTPageOpaque opaque;
 	Buffer		nbuf = InvalidBuffer;
 	bool		found = false;
+	Buffer		buffer;
+	HeapTupleData	heapTuple;
+	bool		recheck = false;
+	IndexInfo	*indexInfo = BuildIndexInfo(rel);
 
 	/* Assume unique until we find a duplicate */
 	*is_unique = true;
@@ -309,6 +314,8 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				curitup = (IndexTuple) PageGetItem(page, curitemid);
 				htid = curitup->t_tid;
 
+				recheck = false;
+
 				/*
 				 * If we are doing a recheck, we expect to find the tuple we
 				 * are rechecking.  It's not a duplicate, but we have to keep
@@ -326,112 +333,153 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				 * have just a single index entry for the entire chain.
 				 */
 				else if (heap_hot_search(&htid, heapRel, &SnapshotDirty,
-										 &all_dead))
+							&all_dead, &recheck, &buffer,
+							&heapTuple))
 				{
 					TransactionId xwait;
+					bool result = true;
 
 					/*
-					 * It is a duplicate. If we are only doing a partial
-					 * check, then don't bother checking if the tuple is being
-					 * updated in another transaction. Just return the fact
-					 * that it is a potential conflict and leave the full
-					 * check till later.
+					 * If the tuple was WARM updated, we may again see our own
+					 * tuple. Since WARM updates don't create new index
+					 * entries, our own tuple is only reachable via the old
+					 * index pointer.
 					 */
-					if (checkUnique == UNIQUE_CHECK_PARTIAL)
+					if (checkUnique == UNIQUE_CHECK_EXISTING &&
+							ItemPointerCompare(&htid, &itup->t_tid) == 0)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						*is_unique = false;
-						return InvalidTransactionId;
+						found = true;
+						result = false;
+						if (recheck)
+							UnlockReleaseBuffer(buffer);
 					}
-
-					/*
-					 * If this tuple is being updated by other transaction
-					 * then we have to wait for its commit/abort.
-					 */
-					xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
-						SnapshotDirty.xmin : SnapshotDirty.xmax;
-
-					if (TransactionIdIsValid(xwait))
+					else if (recheck)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						/* Tell _bt_doinsert to wait... */
-						*speculativeToken = SnapshotDirty.speculativeToken;
-						return xwait;
+						result = btrecheck(rel, indexInfo, curitup, heapRel, &heapTuple);
+						UnlockReleaseBuffer(buffer);
 					}
 
-					/*
-					 * Otherwise we have a definite conflict.  But before
-					 * complaining, look to see if the tuple we want to insert
-					 * is itself now committed dead --- if so, don't complain.
-					 * This is a waste of time in normal scenarios but we must
-					 * do it to support CREATE INDEX CONCURRENTLY.
-					 *
-					 * We must follow HOT-chains here because during
-					 * concurrent index build, we insert the root TID though
-					 * the actual tuple may be somewhere in the HOT-chain.
-					 * While following the chain we might not stop at the
-					 * exact tuple which triggered the insert, but that's OK
-					 * because if we find a live tuple anywhere in this chain,
-					 * we have a unique key conflict.  The other live tuple is
-					 * not part of this chain because it had a different index
-					 * entry.
-					 */
-					htid = itup->t_tid;
-					if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL))
-					{
-						/* Normal case --- it's still live */
-					}
-					else
+					if (result)
 					{
 						/*
-						 * It's been deleted, so no error, and no need to
-						 * continue searching
+						 * It is a duplicate. If we are only doing a partial
+						 * check, then don't bother checking if the tuple is being
+						 * updated in another transaction. Just return the fact
+						 * that it is a potential conflict and leave the full
+						 * check till later.
 						 */
-						break;
-					}
+						if (checkUnique == UNIQUE_CHECK_PARTIAL)
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							*is_unique = false;
+							return InvalidTransactionId;
+						}
 
-					/*
-					 * Check for a conflict-in as we would if we were going to
-					 * write to this page.  We aren't actually going to write,
-					 * but we want a chance to report SSI conflicts that would
-					 * otherwise be masked by this unique constraint
-					 * violation.
-					 */
-					CheckForSerializableConflictIn(rel, NULL, buf);
+						/*
+						 * If this tuple is being updated by other transaction
+						 * then we have to wait for its commit/abort.
+						 */
+						xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
+							SnapshotDirty.xmin : SnapshotDirty.xmax;
+
+						if (TransactionIdIsValid(xwait))
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							/* Tell _bt_doinsert to wait... */
+							*speculativeToken = SnapshotDirty.speculativeToken;
+							return xwait;
+						}
 
-					/*
-					 * This is a definite conflict.  Break the tuple down into
-					 * datums and report the error.  But first, make sure we
-					 * release the buffer locks we're holding ---
-					 * BuildIndexValueDescription could make catalog accesses,
-					 * which in the worst case might touch this same index and
-					 * cause deadlocks.
-					 */
-					if (nbuf != InvalidBuffer)
-						_bt_relbuf(rel, nbuf);
-					_bt_relbuf(rel, buf);
+						/*
+						 * Otherwise we have a definite conflict.  But before
+						 * complaining, look to see if the tuple we want to insert
+						 * is itself now committed dead --- if so, don't complain.
+						 * This is a waste of time in normal scenarios but we must
+						 * do it to support CREATE INDEX CONCURRENTLY.
+						 *
+						 * We must follow HOT-chains here because during
+						 * concurrent index build, we insert the root TID though
+						 * the actual tuple may be somewhere in the HOT-chain.
+						 * While following the chain we might not stop at the
+						 * exact tuple which triggered the insert, but that's OK
+						 * because if we find a live tuple anywhere in this chain,
+						 * we have a unique key conflict.  The other live tuple is
+						 * not part of this chain because it had a different index
+						 * entry.
+						 */
+						recheck = false;
+						ItemPointerCopy(&itup->t_tid, &htid);
+						if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL,
+									&recheck, &buffer, &heapTuple))
+						{
+							bool result = true;
+							if (recheck)
+							{
+								/*
+								 * Recheck if the tuple actually satisfies the
+								 * index key. Otherwise, we might be following
+								 * the wrong index pointer and must not
+								 * consider this tuple.
+								 */
+								result = btrecheck(rel, indexInfo, itup, heapRel, &heapTuple);
+								UnlockReleaseBuffer(buffer);
+							}
+							if (!result)
+								break;
+							/* Normal case --- it's still live */
+						}
+						else
+						{
+							/*
+							 * It's been deleted, so no error, and no need to
+							 * continue searching.
+							 */
+							break;
+						}
 
-					{
-						Datum		values[INDEX_MAX_KEYS];
-						bool		isnull[INDEX_MAX_KEYS];
-						char	   *key_desc;
-
-						index_deform_tuple(itup, RelationGetDescr(rel),
-										   values, isnull);
-
-						key_desc = BuildIndexValueDescription(rel, values,
-															  isnull);
-
-						ereport(ERROR,
-								(errcode(ERRCODE_UNIQUE_VIOLATION),
-								 errmsg("duplicate key value violates unique constraint \"%s\"",
-										RelationGetRelationName(rel)),
-							   key_desc ? errdetail("Key %s already exists.",
-													key_desc) : 0,
-								 errtableconstraint(heapRel,
-											 RelationGetRelationName(rel))));
+						/*
+						 * Check for a conflict-in as we would if we were going to
+						 * write to this page.  We aren't actually going to write,
+						 * but we want a chance to report SSI conflicts that would
+						 * otherwise be masked by this unique constraint
+						 * violation.
+						 */
+						CheckForSerializableConflictIn(rel, NULL, buf);
+
+						/*
+						 * This is a definite conflict.  Break the tuple down into
+						 * datums and report the error.  But first, make sure we
+						 * release the buffer locks we're holding ---
+						 * BuildIndexValueDescription could make catalog accesses,
+						 * which in the worst case might touch this same index and
+						 * cause deadlocks.
+						 */
+						if (nbuf != InvalidBuffer)
+							_bt_relbuf(rel, nbuf);
+						_bt_relbuf(rel, buf);
+
+						{
+							Datum		values[INDEX_MAX_KEYS];
+							bool		isnull[INDEX_MAX_KEYS];
+							char	   *key_desc;
+
+							index_deform_tuple(itup, RelationGetDescr(rel),
+									values, isnull);
+
+							key_desc = BuildIndexValueDescription(rel, values,
+									isnull);
+
+							ereport(ERROR,
+									(errcode(ERRCODE_UNIQUE_VIOLATION),
+									 errmsg("duplicate key value violates unique constraint \"%s\"",
+										 RelationGetRelationName(rel)),
+									 key_desc ? errdetail("Key %s already exists.",
+										 key_desc) : 0,
+									 errtableconstraint(heapRel,
+										 RelationGetRelationName(rel))));
+						}
 					}
 				}
 				else if (all_dead)
diff --git a/src/backend/access/nbtree/nbtpage.c b/src/backend/access/nbtree/nbtpage.c
index f815fd4..061c8d4 100644
--- a/src/backend/access/nbtree/nbtpage.c
+++ b/src/backend/access/nbtree/nbtpage.c
@@ -766,29 +766,20 @@ _bt_page_recyclable(Page page)
 }
 
 /*
- * Delete item(s) from a btree page during VACUUM.
+ * Delete item(s) and clear WARM item(s) on a btree page during VACUUM.
  *
  * This must only be used for deleting leaf items.  Deleting an item on a
  * non-leaf page has to be done as part of an atomic action that includes
- * deleting the page it points to.
+ * deleting the page it points to. We don't ever clear pointers on a non-leaf
+ * page.
  *
  * This routine assumes that the caller has pinned and locked the buffer.
  * Also, the given itemnos *must* appear in increasing order in the array.
- *
- * We record VACUUMs and b-tree deletes differently in WAL. InHotStandby
- * we need to be able to pin all of the blocks in the btree in physical
- * order when replaying the effects of a VACUUM, just as we do for the
- * original VACUUM itself. lastBlockVacuumed allows us to tell whether an
- * intermediate range of blocks has had no changes at all by VACUUM,
- * and so must be scanned anyway during replay. We always write a WAL record
- * for the last block in the index, whether or not it contained any items
- * to be removed. This allows us to scan right up to end of index to
- * ensure correct locking.
  */
 void
-_bt_delitems_vacuum(Relation rel, Buffer buf,
-					OffsetNumber *itemnos, int nitems,
-					BlockNumber lastBlockVacuumed)
+_bt_handleitems_vacuum(Relation rel, Buffer buf,
+					OffsetNumber *delitemnos, int ndelitems,
+					OffsetNumber *clearitemnos, int nclearitems)
 {
 	Page		page = BufferGetPage(buf);
 	BTPageOpaque opaque;
@@ -796,9 +787,20 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 	/* No ereport(ERROR) until changes are logged */
 	START_CRIT_SECTION();
 
+	/*
+	 * Clear the WARM pointers.
+	 *
+	 * We must do this before dealing with the dead items because
+	 * PageIndexMultiDelete may move items around to compactify the array and
+	 * hence offnums recorded earlier won't make any sense after
+	 * PageIndexMultiDelete is called.
+	 */
+	if (nclearitems > 0)
+		_bt_clear_items(page, clearitemnos, nclearitems);
+
 	/* Fix the page */
-	if (nitems > 0)
-		PageIndexMultiDelete(page, itemnos, nitems);
+	if (ndelitems > 0)
+		PageIndexMultiDelete(page, delitemnos, ndelitems);
 
 	/*
 	 * We can clear the vacuum cycle ID since this page has certainly been
@@ -824,7 +826,8 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 		XLogRecPtr	recptr;
 		xl_btree_vacuum xlrec_vacuum;
 
-		xlrec_vacuum.lastBlockVacuumed = lastBlockVacuumed;
+		xlrec_vacuum.ndelitems = ndelitems;
+		xlrec_vacuum.nclearitems = nclearitems;
 
 		XLogBeginInsert();
 		XLogRegisterBuffer(0, buf, REGBUF_STANDARD);
@@ -835,8 +838,11 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 		 * is.  When XLogInsert stores the whole buffer, the offsets array
 		 * need not be stored too.
 		 */
-		if (nitems > 0)
-			XLogRegisterBufData(0, (char *) itemnos, nitems * sizeof(OffsetNumber));
+		if (ndelitems > 0)
+			XLogRegisterBufData(0, (char *) delitemnos, ndelitems * sizeof(OffsetNumber));
+
+		if (nclearitems > 0)
+			XLogRegisterBufData(0, (char *) clearitemnos, nclearitems * sizeof(OffsetNumber));
 
 		recptr = XLogInsert(RM_BTREE_ID, XLOG_BTREE_VACUUM);
 
@@ -1882,3 +1888,13 @@ _bt_unlink_halfdead_page(Relation rel, Buffer leafbuf, bool *rightsib_empty)
 
 	return true;
 }
+
+/*
+ * Currently just a wrapper around PageIndexClearWarmTuples, but in theory each
+ * index may have its own way to handle WARM tuples.
+ */
+void
+_bt_clear_items(Page page, OffsetNumber *clearitemnos, uint16 nclearitems)
+{
+	PageIndexClearWarmTuples(page, clearitemnos, nclearitems);
+}
diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c
index 775f2ff..6d558af 100644
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -146,6 +146,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->ambuild = btbuild;
 	amroutine->ambuildempty = btbuildempty;
 	amroutine->aminsert = btinsert;
+	amroutine->amwarminsert = btwarminsert;
 	amroutine->ambulkdelete = btbulkdelete;
 	amroutine->amvacuumcleanup = btvacuumcleanup;
 	amroutine->amcanreturn = btcanreturn;
@@ -163,6 +164,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = btestimateparallelscan;
 	amroutine->aminitparallelscan = btinitparallelscan;
 	amroutine->amparallelrescan = btparallelrescan;
+	amroutine->amrecheck = btrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -315,11 +317,12 @@ btbuildempty(Relation index)
  *		Descend the tree recursively, find the appropriate location for our
  *		new tuple, and put it there.
  */
-bool
-btinsert(Relation rel, Datum *values, bool *isnull,
+static bool
+btinsert_internal(Relation rel, Datum *values, bool *isnull,
 		 ItemPointer ht_ctid, Relation heapRel,
 		 IndexUniqueCheck checkUnique,
-		 IndexInfo *indexInfo)
+		 IndexInfo *indexInfo,
+		 bool warm_update)
 {
 	bool		result;
 	IndexTuple	itup;
@@ -328,6 +331,11 @@ btinsert(Relation rel, Datum *values, bool *isnull,
 	itup = index_form_tuple(RelationGetDescr(rel), values, isnull);
 	itup->t_tid = *ht_ctid;
 
+	if (warm_update)
+		ItemPointerSetFlags(&itup->t_tid, BTREE_INDEX_WARM_POINTER);
+	else
+		ItemPointerClearFlags(&itup->t_tid);
+
 	result = _bt_doinsert(rel, itup, checkUnique, heapRel);
 
 	pfree(itup);
@@ -335,6 +343,26 @@ btinsert(Relation rel, Datum *values, bool *isnull,
 	return result;
 }
 
+bool
+btinsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 IndexInfo *indexInfo)
+{
+	return btinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, false);
+}
+
+bool
+btwarminsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 IndexInfo *indexInfo)
+{
+	return btinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, true);
+}
+
 /*
  *	btgettuple() -- Get the next tuple in the scan.
  */
@@ -1103,7 +1131,7 @@ btvacuumscan(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 								 RBM_NORMAL, info->strategy);
 		LockBufferForCleanup(buf);
 		_bt_checkpage(rel, buf);
-		_bt_delitems_vacuum(rel, buf, NULL, 0, vstate.lastBlockVacuumed);
+		_bt_handleitems_vacuum(rel, buf, NULL, 0, NULL, 0);
 		_bt_relbuf(rel, buf);
 	}
 
@@ -1201,6 +1229,8 @@ restart:
 	{
 		OffsetNumber deletable[MaxOffsetNumber];
 		int			ndeletable;
+		OffsetNumber clearwarm[MaxOffsetNumber];
+		int			nclearwarm;
 		OffsetNumber offnum,
 					minoff,
 					maxoff;
@@ -1239,7 +1269,7 @@ restart:
 		 * Scan over all items to see which ones need deleted according to the
 		 * callback function.
 		 */
-		ndeletable = 0;
+		ndeletable = nclearwarm = 0;
 		minoff = P_FIRSTDATAKEY(opaque);
 		maxoff = PageGetMaxOffsetNumber(page);
 		if (callback)
@@ -1250,6 +1280,9 @@ restart:
 			{
 				IndexTuple	itup;
 				ItemPointer htup;
+				int			flags;
+				bool		is_warm = false;
+				IndexBulkDeleteCallbackResult	result;
 
 				itup = (IndexTuple) PageGetItem(page,
 												PageGetItemId(page, offnum));
@@ -1276,16 +1309,36 @@ restart:
 				 * applies to *any* type of index that marks index tuples as
 				 * killed.
 				 */
-				if (callback(htup, callback_state))
+				flags = ItemPointerGetFlags(&itup->t_tid);
+				is_warm = ((flags & BTREE_INDEX_WARM_POINTER) != 0);
+
+				if (is_warm)
+					stats->num_warm_pointers++;
+				else
+					stats->num_clear_pointers++;
+
+				result = callback(htup, is_warm, callback_state);
+				if (result == IBDCR_DELETE)
+				{
+					if (is_warm)
+						stats->warm_pointers_removed++;
+					else
+						stats->clear_pointers_removed++;
 					deletable[ndeletable++] = offnum;
+				}
+				else if (result == IBDCR_CLEAR_WARM)
+				{
+					clearwarm[nclearwarm++] = offnum;
+				}
 			}
 		}
 
 		/*
-		 * Apply any needed deletes.  We issue just one _bt_delitems_vacuum()
-		 * call per page, so as to minimize WAL traffic.
+		 * Apply any needed deletes and clearing.  We issue just one
+		 * _bt_handleitems_vacuum() call per page, so as to minimize WAL
+		 * traffic.
 		 */
-		if (ndeletable > 0)
+		if (ndeletable > 0 || nclearwarm > 0)
 		{
 			/*
 			 * Notice that the issued XLOG_BTREE_VACUUM WAL record includes
@@ -1301,8 +1354,8 @@ restart:
 			 * doesn't seem worth the amount of bookkeeping it'd take to avoid
 			 * that.
 			 */
-			_bt_delitems_vacuum(rel, buf, deletable, ndeletable,
-								vstate->lastBlockVacuumed);
+			_bt_handleitems_vacuum(rel, buf, deletable, ndeletable,
+								clearwarm, nclearwarm);
 
 			/*
 			 * Remember highest leaf page number we've issued a
@@ -1312,6 +1365,7 @@ restart:
 				vstate->lastBlockVacuumed = blkno;
 
 			stats->tuples_removed += ndeletable;
+			stats->pointers_cleared += nclearwarm;
 			/* must recompute maxoff */
 			maxoff = PageGetMaxOffsetNumber(page);
 		}
diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c
index 5b259a3..2765809 100644
--- a/src/backend/access/nbtree/nbtutils.c
+++ b/src/backend/access/nbtree/nbtutils.c
@@ -20,11 +20,14 @@
 #include "access/nbtree.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "access/tuptoaster.h"
+#include "catalog/index.h"
 #include "miscadmin.h"
 #include "utils/array.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 
 typedef struct BTSortArrayContext
@@ -2069,3 +2072,93 @@ btproperty(Oid index_oid, int attno,
 			return false;		/* punt to generic code */
 	}
 }
+
+/*
+ * Check if the index tuple's key matches the one computed from the given heap
+ * tuple's attributes.
+ */
+bool
+btrecheck(Relation indexRel, IndexInfo *indexInfo, IndexTuple indexTuple1,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	bool		isavail[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+	IndexTuple	indexTuple2;
+
+	/*
+	 * Get the index values, except for expression attributes. Since WARM is
+	 * not used when a column used by expressions in an index is modified, we
+	 * can safely assume that those index attributes are never changed by a
+	 * WARM update.
+	 *
+	 * We cannot use FormIndexDatum here because that requires access to
+	 * executor state and we don't have that here.
+	 */
+	FormIndexPlainDatum(indexInfo, heapRel, heapTuple, values, isnull, isavail);
+
+	/*
+	 * Form an index tuple using the heap values first. This allows us to then
+	 * fetch index attributes from the current index tuple and the one that is
+	 * formed from the heap values and then do a binary comparison using
+	 * datumIsEqual().
+	 *
+	 * This takes care of doing the right comparison for compressed index
+	 * attributes (we just compare the compressed versions in both tuples) and
+	 * also ensures that we correctly detoast heap values, if need be.
+	 */
+	indexTuple2 = index_form_tuple(RelationGetDescr(indexRel), values, isnull);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue1;
+		bool	indxisnull1;
+		Datum	indxvalue2;
+		bool	indxisnull2;
+
+		/* No need to compare if the attribute value is not available */
+		if (!isavail[i - 1])
+			continue;
+
+		indxvalue1 = index_getattr(indexTuple1, i, indexRel->rd_att,
+								   &indxisnull1);
+		indxvalue2 = index_getattr(indexTuple2, i, indexRel->rd_att,
+								   &indxisnull2);
+
+		/*
+		 * If both are NULL, then they are equal
+		 */
+		if (indxisnull1 && indxisnull2)
+			continue;
+
+		/*
+		 * If just one is NULL, then they are not equal
+		 */
+		if (indxisnull1 || indxisnull2)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now just do a raw memory comparison. If the index tuple was formed
+		 * using this heap tuple, the computed index values must match
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(indxvalue1, indxvalue2, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	pfree(indexTuple2);
+
+	return equal;
+}
diff --git a/src/backend/access/nbtree/nbtxlog.c b/src/backend/access/nbtree/nbtxlog.c
index ac60db0..ef24738 100644
--- a/src/backend/access/nbtree/nbtxlog.c
+++ b/src/backend/access/nbtree/nbtxlog.c
@@ -390,8 +390,8 @@ btree_xlog_vacuum(XLogReaderState *record)
 	Buffer		buffer;
 	Page		page;
 	BTPageOpaque opaque;
-#ifdef UNUSED
 	xl_btree_vacuum *xlrec = (xl_btree_vacuum *) XLogRecGetData(record);
+#ifdef UNUSED
 
 	/*
 	 * This section of code is thought to be no longer needed, after analysis
@@ -482,19 +482,30 @@ btree_xlog_vacuum(XLogReaderState *record)
 
 		if (len > 0)
 		{
-			OffsetNumber *unused;
-			OffsetNumber *unend;
+			OffsetNumber *offnums = (OffsetNumber *) ptr;
 
-			unused = (OffsetNumber *) ptr;
-			unend = (OffsetNumber *) ((char *) ptr + len);
+			/*
+			 * Clear the WARM pointers.
+			 *
+			 * We must do this before dealing with the dead items because
+			 * PageIndexMultiDelete may move items around to compactify the
+			 * array and hence offnums recorded earlier won't make any sense
+			 * after PageIndexMultiDelete is called.
+			 */
+			if (xlrec->nclearitems > 0)
+				_bt_clear_items(page, offnums + xlrec->ndelitems,
+						xlrec->nclearitems);
 
-			if ((unend - unused) > 0)
-				PageIndexMultiDelete(page, unused, unend - unused);
+			/*
+			 * And handle the deleted items too
+			 */
+			if (xlrec->ndelitems > 0)
+				PageIndexMultiDelete(page, offnums, xlrec->ndelitems);
 		}
 
 		/*
 		 * Mark the page as not containing any LP_DEAD items --- see comments
-		 * in _bt_delitems_vacuum().
+		 * in _bt_handleitems_vacuum().
 		 */
 		opaque = (BTPageOpaque) PageGetSpecialPointer(page);
 		opaque->btpo_flags &= ~BTP_HAS_GARBAGE;
diff --git a/src/backend/access/rmgrdesc/heapdesc.c b/src/backend/access/rmgrdesc/heapdesc.c
index 44d2d63..d373e61 100644
--- a/src/backend/access/rmgrdesc/heapdesc.c
+++ b/src/backend/access/rmgrdesc/heapdesc.c
@@ -44,6 +44,12 @@ heap_desc(StringInfo buf, XLogReaderState *record)
 
 		appendStringInfo(buf, "off %u", xlrec->offnum);
 	}
+	else if (info == XLOG_HEAP_MULTI_INSERT)
+	{
+		xl_heap_multi_insert *xlrec = (xl_heap_multi_insert *) rec;
+
+		appendStringInfo(buf, "%d tuples", xlrec->ntuples);
+	}
 	else if (info == XLOG_HEAP_DELETE)
 	{
 		xl_heap_delete *xlrec = (xl_heap_delete *) rec;
@@ -102,7 +108,7 @@ heap2_desc(StringInfo buf, XLogReaderState *record)
 	char	   *rec = XLogRecGetData(record);
 	uint8		info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
 
-	info &= XLOG_HEAP_OPMASK;
+	info &= XLOG_HEAP2_OPMASK;
 	if (info == XLOG_HEAP2_CLEAN)
 	{
 		xl_heap_clean *xlrec = (xl_heap_clean *) rec;
@@ -129,12 +135,6 @@ heap2_desc(StringInfo buf, XLogReaderState *record)
 		appendStringInfo(buf, "cutoff xid %u flags %d",
 						 xlrec->cutoff_xid, xlrec->flags);
 	}
-	else if (info == XLOG_HEAP2_MULTI_INSERT)
-	{
-		xl_heap_multi_insert *xlrec = (xl_heap_multi_insert *) rec;
-
-		appendStringInfo(buf, "%d tuples", xlrec->ntuples);
-	}
 	else if (info == XLOG_HEAP2_LOCK_UPDATED)
 	{
 		xl_heap_lock_updated *xlrec = (xl_heap_lock_updated *) rec;
@@ -171,6 +171,12 @@ heap_identify(uint8 info)
 		case XLOG_HEAP_INSERT | XLOG_HEAP_INIT_PAGE:
 			id = "INSERT+INIT";
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			id = "MULTI_INSERT";
+			break;
+		case XLOG_HEAP_MULTI_INSERT | XLOG_HEAP_INIT_PAGE:
+			id = "MULTI_INSERT+INIT";
+			break;
 		case XLOG_HEAP_DELETE:
 			id = "DELETE";
 			break;
@@ -219,12 +225,6 @@ heap2_identify(uint8 info)
 		case XLOG_HEAP2_VISIBLE:
 			id = "VISIBLE";
 			break;
-		case XLOG_HEAP2_MULTI_INSERT:
-			id = "MULTI_INSERT";
-			break;
-		case XLOG_HEAP2_MULTI_INSERT | XLOG_HEAP_INIT_PAGE:
-			id = "MULTI_INSERT+INIT";
-			break;
 		case XLOG_HEAP2_LOCK_UPDATED:
 			id = "LOCK_UPDATED";
 			break;
diff --git a/src/backend/access/rmgrdesc/nbtdesc.c b/src/backend/access/rmgrdesc/nbtdesc.c
index fbde9d6..6b2c5d6 100644
--- a/src/backend/access/rmgrdesc/nbtdesc.c
+++ b/src/backend/access/rmgrdesc/nbtdesc.c
@@ -48,8 +48,8 @@ btree_desc(StringInfo buf, XLogReaderState *record)
 			{
 				xl_btree_vacuum *xlrec = (xl_btree_vacuum *) rec;
 
-				appendStringInfo(buf, "lastBlockVacuumed %u",
-								 xlrec->lastBlockVacuumed);
+				appendStringInfo(buf, "ndelitems %u, nclearitems %u",
+								 xlrec->ndelitems, xlrec->nclearitems);
 				break;
 			}
 		case XLOG_BTREE_DELETE:
diff --git a/src/backend/access/spgist/spgutils.c b/src/backend/access/spgist/spgutils.c
index e57ac49..59ef7f3 100644
--- a/src/backend/access/spgist/spgutils.c
+++ b/src/backend/access/spgist/spgutils.c
@@ -72,6 +72,7 @@ spghandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/spgist/spgvacuum.c b/src/backend/access/spgist/spgvacuum.c
index cce9b3f..711d351 100644
--- a/src/backend/access/spgist/spgvacuum.c
+++ b/src/backend/access/spgist/spgvacuum.c
@@ -155,7 +155,8 @@ vacuumLeafPage(spgBulkDeleteState *bds, Relation index, Buffer buffer,
 		{
 			Assert(ItemPointerIsValid(&lt->heapPtr));
 
-			if (bds->callback(&lt->heapPtr, bds->callback_state))
+			if (bds->callback(&lt->heapPtr, false, bds->callback_state) ==
+					IBDCR_DELETE)
 			{
 				bds->stats->tuples_removed += 1;
 				deletable[i] = true;
@@ -425,7 +426,8 @@ vacuumLeafRoot(spgBulkDeleteState *bds, Relation index, Buffer buffer)
 		{
 			Assert(ItemPointerIsValid(&lt->heapPtr));
 
-			if (bds->callback(&lt->heapPtr, bds->callback_state))
+			if (bds->callback(&lt->heapPtr, false, bds->callback_state) ==
+					IBDCR_DELETE)
 			{
 				bds->stats->tuples_removed += 1;
 				toDelete[xlrec.nDelete] = i;
@@ -902,10 +904,10 @@ spgbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 }
 
 /* Dummy callback to delete no tuples during spgvacuumcleanup */
-static bool
-dummy_callback(ItemPointer itemptr, void *state)
+static IndexBulkDeleteCallbackResult
+dummy_callback(ItemPointer itemptr, bool is_warm, void *state)
 {
-	return false;
+	return IBDCR_KEEP;
 }
 
 /*
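[Reviewer note, not part of the patch.] The callback changes above replace the old boolean bulk-delete callback with a three-way result. The sketch below reconstructs that shape; the `IBDCR_CLEAR_WARM` member is an assumption inferred from the WARM-chain conversion logic elsewhere in the patch, and `ItemPointerData` is a simplified stand-in:

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the patch's three-way bulk-delete callback result.
 * IBDCR_KEEP and IBDCR_DELETE appear in the diff; IBDCR_CLEAR_WARM is an
 * assumed third state used for WARM-pointer clearing. */
typedef enum
{
	IBDCR_KEEP,			/* keep the index tuple as it is */
	IBDCR_DELETE,		/* remove the index tuple */
	IBDCR_CLEAR_WARM	/* keep it, but clear its WARM flag */
} IndexBulkDeleteCallbackResult;

typedef struct
{
	unsigned	blk;
	unsigned	off;	/* simplified stand-in for ItemPointerData */
} ItemPointerData;

/* The old API returned bool (delete or not); the new one also tells the
 * callback whether the index entry is WARM, and lets callbacks such as
 * lazy_tid_reaped() request more than a plain delete. */
static IndexBulkDeleteCallbackResult
dummy_callback(ItemPointerData *itemptr, bool is_warm, void *state)
{
	(void) itemptr;
	(void) is_warm;
	(void) state;
	return IBDCR_KEEP;			/* never delete anything */
}
```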
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 1eb163f..2c27661 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -54,6 +54,7 @@
 #include "nodes/makefuncs.h"
 #include "nodes/nodeFuncs.h"
 #include "optimizer/clauses.h"
+#include "optimizer/var.h"
 #include "parser/parser.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
@@ -114,7 +115,7 @@ static void IndexCheckExclusion(Relation heapRelation,
 					IndexInfo *indexInfo);
 static inline int64 itemptr_encode(ItemPointer itemptr);
 static inline void itemptr_decode(ItemPointer itemptr, int64 encoded);
-static bool validate_index_callback(ItemPointer itemptr, void *opaque);
+static IndexBulkDeleteCallbackResult validate_index_callback(ItemPointer itemptr, bool is_warm, void *opaque);
 static void validate_index_heapscan(Relation heapRelation,
 						Relation indexRelation,
 						IndexInfo *indexInfo,
@@ -1691,6 +1692,20 @@ BuildIndexInfo(Relation index)
 	ii->ii_AmCache = NULL;
 	ii->ii_Context = CurrentMemoryContext;
 
+	/* build a bitmap of all table attributes referred by this index */
+	for (i = 0; i < ii->ii_NumIndexAttrs; i++)
+	{
+		AttrNumber attr = ii->ii_KeyAttrNumbers[i];
+		ii->ii_indxattrs = bms_add_member(ii->ii_indxattrs, attr -
+				FirstLowInvalidHeapAttributeNumber);
+	}
+
+	/* Collect all attributes used in expressions, too */
+	pull_varattnos((Node *) ii->ii_Expressions, 1, &ii->ii_indxattrs);
+
+	/* Collect all attributes in the index predicate, too */
+	pull_varattnos((Node *) ii->ii_Predicate, 1, &ii->ii_indxattrs);
+
 	return ii;
 }
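[Reviewer note, not part of the patch.] `BuildIndexInfo` above adds each key column to a bitmap as `attr - FirstLowInvalidHeapAttributeNumber`. The offset exists because system columns have negative attribute numbers while a `Bitmapset` holds only non-negative members. A small sketch of that mapping (the value `-8` matches PostgreSQL's definition around the time of this patch, but is an assumption here):

```c
#include <assert.h>

/* Assumed to match PostgreSQL's definition in this era of the tree. */
#define FirstLowInvalidHeapAttributeNumber (-8)

/* Shift an attribute number (possibly negative, for system columns such
 * as ctid = -1) into the non-negative range a Bitmapset can store. */
static int attnum_to_bms_member(int attnum)
{
	return attnum - FirstLowInvalidHeapAttributeNumber;
}

/* Inverse mapping, as used when reading members back out of the set. */
static int bms_member_to_attnum(int member)
{
	return member + FirstLowInvalidHeapAttributeNumber;
}
```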
 
@@ -1815,6 +1830,51 @@ FormIndexDatum(IndexInfo *indexInfo,
 		elog(ERROR, "wrong number of index expressions");
 }
 
+/*
+ * This is the same as FormIndexDatum, except that we don't compute any
+ * expression attributes and hence it can be used when executor interfaces are
+ * not available. If the i'th attribute is available then isavail[i] is set to
+ * true, else false. The caller must always check whether an attribute value
+ * is available before trying to do anything useful with it.
+ */
+void
+FormIndexPlainDatum(IndexInfo *indexInfo,
+			   Relation heapRel,
+			   HeapTuple heapTup,
+			   Datum *values,
+			   bool *isnull,
+			   bool *isavail)
+{
+	int			i;
+
+	for (i = 0; i < indexInfo->ii_NumIndexAttrs; i++)
+	{
+		int			keycol = indexInfo->ii_KeyAttrNumbers[i];
+		Datum		iDatum;
+		bool		isNull;
+
+		if (keycol != 0)
+		{
+			/*
+			 * Plain index column; get the value we need directly from the
+			 * heap tuple.
+			 */
+			iDatum = heap_getattr(heapTup, keycol, RelationGetDescr(heapRel), &isNull);
+			values[i] = iDatum;
+			isnull[i] = isNull;
+			isavail[i] = true;
+		}
+		else
+		{
+			/*
+			 * This is an expression attribute which can't be computed by us.
+			 * So just inform the caller about it.
+			 */
+			isavail[i] = false;
+			isnull[i] = true;
+		}
+	}
+}
 
 /*
  * index_update_stats --- update pg_class entry after CREATE INDEX or REINDEX
@@ -2929,15 +2989,15 @@ itemptr_decode(ItemPointer itemptr, int64 encoded)
 /*
  * validate_index_callback - bulkdelete callback to collect the index TIDs
  */
-static bool
-validate_index_callback(ItemPointer itemptr, void *opaque)
+static IndexBulkDeleteCallbackResult
+validate_index_callback(ItemPointer itemptr, bool is_warm, void *opaque)
 {
 	v_i_state  *state = (v_i_state *) opaque;
 	int64		encoded = itemptr_encode(itemptr);
 
 	tuplesort_putdatum(state->tuplesort, Int64GetDatum(encoded), false);
 	state->itups += 1;
-	return false;				/* never actually delete anything */
+	return IBDCR_KEEP;				/* never actually delete anything */
 }
 
 /*
@@ -3156,7 +3216,8 @@ validate_index_heapscan(Relation heapRelation,
 						 heapRelation,
 						 indexInfo->ii_Unique ?
 						 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-						 indexInfo);
+						 indexInfo,
+						 false);
 
 			state->tups_inserted += 1;
 		}
diff --git a/src/backend/catalog/indexing.c b/src/backend/catalog/indexing.c
index abc344a..6392f33 100644
--- a/src/backend/catalog/indexing.c
+++ b/src/backend/catalog/indexing.c
@@ -66,10 +66,15 @@ CatalogCloseIndexes(CatalogIndexState indstate)
  *
  * This should be called for each inserted or updated catalog tuple.
  *
+ * If the tuple was WARM updated, modified_attrs contains the set of columns
+ * changed by the update. We must not insert new index entries for indexes
+ * which do not refer to any of the modified columns.
+ *
  * This is effectively a cut-down version of ExecInsertIndexTuples.
  */
 static void
-CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
+CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple,
+		Bitmapset *modified_attrs, bool warm_update)
 {
 	int			i;
 	int			numIndexes;
@@ -79,12 +84,28 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 	IndexInfo **indexInfoArray;
 	Datum		values[INDEX_MAX_KEYS];
 	bool		isnull[INDEX_MAX_KEYS];
+	ItemPointerData root_tid;
 
-	/* HOT update does not require index inserts */
-	if (HeapTupleIsHeapOnly(heapTuple))
+	/*
+	 * A HOT update does not require any index inserts, but a WARM update may
+	 * require them for some indexes.
+	 */
+	if (HeapTupleIsHeapOnly(heapTuple) && !warm_update)
 		return;
 
 	/*
+	 * If we've done a WARM update, then we must index the TID of the root line
+	 * pointer and not the actual TID of the new tuple.
+	 */
+	if (warm_update)
+		ItemPointerSet(&root_tid,
+				ItemPointerGetBlockNumber(&(heapTuple->t_self)),
+				HeapTupleHeaderGetRootOffset(heapTuple->t_data));
+	else
+		ItemPointerCopy(&heapTuple->t_self, &root_tid);
+
+	/*
 	 * Get information from the state structure.  Fall out if nothing to do.
 	 */
 	numIndexes = indstate->ri_NumIndices;
@@ -112,6 +133,17 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 			continue;
 
 		/*
+		 * If we've done a WARM update, then we must not insert a new index tuple
+		 * if none of the index keys have changed. This is not just an
+		 * optimization, but a requirement for WARM to work correctly.
+		 */
+		if (warm_update)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
+		/*
 		 * Expressional and partial indexes on system catalogs are not
 		 * supported, nor exclusion constraints, nor deferred uniqueness
 		 */
@@ -136,11 +168,12 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 		index_insert(relationDescs[i],	/* index relation */
 					 values,	/* array of index Datums */
 					 isnull,	/* is-null flags */
-					 &(heapTuple->t_self),		/* tid of heap tuple */
+					 &root_tid,
 					 heapRelation,
 					 relationDescs[i]->rd_index->indisunique ?
 					 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-					 indexInfo);
+					 indexInfo,
+					 warm_update);
 	}
 
 	ExecDropSingleTupleTableSlot(slot);
@@ -168,7 +201,7 @@ CatalogTupleInsert(Relation heapRel, HeapTuple tup)
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 	CatalogCloseIndexes(indstate);
 
 	return oid;
@@ -190,7 +223,7 @@ CatalogTupleInsertWithInfo(Relation heapRel, HeapTuple tup,
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 
 	return oid;
 }
@@ -210,12 +243,14 @@ void
 CatalogTupleUpdate(Relation heapRel, ItemPointer otid, HeapTuple tup)
 {
 	CatalogIndexState indstate;
+	bool	warm_update;
+	Bitmapset	*modified_attrs;
 
 	indstate = CatalogOpenIndexes(heapRel);
 
-	simple_heap_update(heapRel, otid, tup);
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, modified_attrs, warm_update);
 	CatalogCloseIndexes(indstate);
 }
 
@@ -231,9 +266,12 @@ void
 CatalogTupleUpdateWithInfo(Relation heapRel, ItemPointer otid, HeapTuple tup,
 						   CatalogIndexState indstate)
 {
-	simple_heap_update(heapRel, otid, tup);
+	Bitmapset  *modified_attrs;
+	bool		warm_update;
+
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, modified_attrs, warm_update);
 }
 
 /*
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index d357c8b..66a39d0 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -530,6 +530,7 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_tuples_deleted(C.oid) AS n_tup_del,
             pg_stat_get_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_tuples_warm_updated(C.oid) AS n_tup_warm_upd,
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
@@ -560,7 +561,8 @@ CREATE VIEW pg_stat_xact_all_tables AS
             pg_stat_get_xact_tuples_inserted(C.oid) AS n_tup_ins,
             pg_stat_get_xact_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_xact_tuples_deleted(C.oid) AS n_tup_del,
-            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd
+            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_xact_tuples_warm_updated(C.oid) AS n_tup_warm_upd
     FROM pg_class C LEFT JOIN
          pg_index I ON C.oid = I.indrelid
          LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
diff --git a/src/backend/commands/constraint.c b/src/backend/commands/constraint.c
index e2544e5..330b661 100644
--- a/src/backend/commands/constraint.c
+++ b/src/backend/commands/constraint.c
@@ -40,6 +40,7 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	TriggerData *trigdata = castNode(TriggerData, fcinfo->context);
 	const char *funcname = "unique_key_recheck";
 	HeapTuple	new_row;
+	HeapTupleData heapTuple;
 	ItemPointerData tmptid;
 	Relation	indexRel;
 	IndexInfo  *indexInfo;
@@ -102,7 +103,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	 * removed.
 	 */
 	tmptid = new_row->t_self;
-	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL))
+	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL,
+				NULL, NULL, &heapTuple))
 	{
 		/*
 		 * All rows in the HOT chain are dead, so skip the check.
@@ -166,7 +168,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 		 */
 		index_insert(indexRel, values, isnull, &(new_row->t_self),
 					 trigdata->tg_relation, UNIQUE_CHECK_EXISTING,
-					 indexInfo);
+					 indexInfo,
+					 false);
 	}
 	else
 	{
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index 0158eda..d6ef4a8 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -2688,6 +2688,8 @@ CopyFrom(CopyState cstate)
 					if (resultRelInfo->ri_NumIndices > 0)
 						recheckIndexes = ExecInsertIndexTuples(slot,
 															&(tuple->t_self),
+															&(tuple->t_self),
+															NULL,
 															   estate,
 															   false,
 															   NULL,
@@ -2842,6 +2844,7 @@ CopyFromInsertBatch(CopyState cstate, EState *estate, CommandId mycid,
 			ExecStoreTuple(bufferedTuples[i], myslot, InvalidBuffer, false);
 			recheckIndexes =
 				ExecInsertIndexTuples(myslot, &(bufferedTuples[i]->t_self),
+									  &(bufferedTuples[i]->t_self), NULL,
 									  estate, false, NULL, NIL);
 			ExecARInsertTriggers(estate, resultRelInfo,
 								 bufferedTuples[i],
diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c
index 4861799..b62b0e9 100644
--- a/src/backend/commands/indexcmds.c
+++ b/src/backend/commands/indexcmds.c
@@ -694,7 +694,14 @@ DefineIndex(Oid relationId,
 	 * visible to other transactions before we start to build the index. That
 	 * will prevent them from making incompatible HOT updates.  The new index
 	 * will be marked not indisready and not indisvalid, so that no one else
-	 * tries to either insert into it or use it for queries.
+	 * tries to either insert into it or use it for queries. In addition,
+	 * WARM updates will be disallowed if an update modifies one of the
+	 * columns used by this new index. This is necessary to ensure that we
+	 * don't create WARM tuples which do not have a corresponding entry in
+	 * this index. Note that during the second phase we will index only those
+	 * heap tuples whose root line pointer is not already in the index, hence
+	 * it's important that all tuples in a given chain have the same value
+	 * for every indexed column (including those of this new index).
 	 *
 	 * We must commit our current transaction so that the index becomes
 	 * visible; then start another.  Note that all the data structures we just
@@ -742,7 +749,10 @@ DefineIndex(Oid relationId,
 	 * marked as "not-ready-for-inserts".  The index is consulted while
 	 * deciding HOT-safety though.  This arrangement ensures that no new HOT
 	 * chains can be created where the new tuple and the old tuple in the
-	 * chain have different index keys.
+	 * chain have different index keys. Also, the new index is consulted for
+	 * deciding whether a WARM update is possible, and WARM update is not done
+	 * if a column used by this index is being updated. This ensures that we
+	 * don't create WARM tuples which are not indexed by this index.
 	 *
 	 * We now take a new snapshot, and build the index using all tuples that
 	 * are visible in this snapshot.  We can be sure that any HOT updates to
@@ -777,7 +787,8 @@ DefineIndex(Oid relationId,
 	/*
 	 * Update the pg_index row to mark the index as ready for inserts. Once we
 	 * commit this transaction, any new transactions that open the table must
-	 * insert new entries into the index for insertions and non-HOT updates.
+	 * insert new entries into the index for insertions, non-HOT updates, and
+	 * WARM updates for which this index needs a new entry.
 	 */
 	index_set_state_flags(indexRelationId, INDEX_CREATE_SET_READY);
 
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 5b43a66..f52490f 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -104,6 +104,39 @@
  */
 #define PREFETCH_SIZE			((BlockNumber) 32)
 
+/*
+ * Structure to track WARM chains that can be converted into HOT chains during
+ * this run.
+ *
+ * To reduce the space requirement, we're using bitfields. But the way things
+ * are laid out, we're still wasting one byte per candidate chain.
+ */
+typedef struct LVWarmChain
+{
+	ItemPointerData	chain_tid;			/* root of the chain */
+
+	/*
+	 * 1 - if the chain contains only post-warm tuples
+	 * 0 - if the chain contains only pre-warm tuples
+	 */
+	uint8			is_postwarm_chain:2;
+
+	/* 1 - if this chain must remain a WARM chain */
+	uint8			keep_warm_chain:2;
+
+	/*
+	 * Number of CLEAR pointers to this root TID found so far - must never be
+	 * more than 2.
+	 */
+	uint8			num_clear_pointers:2;
+
+	/*
+	 * Number of WARM pointers to this root TID found so far - must never be
+	 * more than 1.
+	 */
+	uint8			num_warm_pointers:2;
+} LVWarmChain;
+
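[Reviewer note, not part of the patch.] The struct comment above mentions wasting a byte per candidate chain despite the bitfields. A standalone layout sketch shows why: the four 2-bit flags pack into one byte, but because `ItemPointerData` is built from `uint16`s the struct is padded back to 2-byte alignment. (The TID stand-in below is an assumption; exact sizes are compiler/ABI dependent, though common ABIs agree.)

```c
#include <stdint.h>

/* Layout sketch of the patch's LVWarmChain: a 6-byte TID plus four 2-bit
 * flags.  The flags share a single byte, giving a 7-byte payload that is
 * padded to 8 bytes by the struct's 2-byte alignment -- the wasted byte
 * the patch comment refers to. */
typedef struct
{
	uint16_t	ip_blkid_hi;	/* 6-byte stand-in for ItemPointerData */
	uint16_t	ip_blkid_lo;
	uint16_t	ip_posid;
	uint8_t		is_postwarm_chain:2;
	uint8_t		keep_warm_chain:2;
	uint8_t		num_clear_pointers:2;
	uint8_t		num_warm_pointers:2;
} LVWarmChain;
```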
 typedef struct LVRelStats
 {
 	/* hasindex = true means two-pass strategy; false means one-pass */
@@ -122,6 +155,14 @@ typedef struct LVRelStats
 	BlockNumber pages_removed;
 	double		tuples_deleted;
 	BlockNumber nonempty_pages; /* actually, last nonempty page + 1 */
+
+	/* List of candidate WARM chains that can be converted into HOT chains */
+	/* NB: this list is ordered by TID of the root pointers */
+	int				num_warm_chains;	/* current # of entries */
+	int				max_warm_chains;	/* # slots allocated in array */
+	LVWarmChain 	*warm_chains;		/* array of LVWarmChain */
+	double			num_non_convertible_warm_chains;
+
 	/* List of TIDs of tuples we intend to delete */
 	/* NB: this list is ordered by TID address */
 	int			num_dead_tuples;	/* current # of entries */
@@ -150,6 +191,7 @@ static void lazy_scan_heap(Relation onerel, int options,
 static void lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats);
 static bool lazy_check_needs_freeze(Buffer buf, bool *hastup);
 static void lazy_vacuum_index(Relation indrel,
+				  bool clear_warm,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats);
 static void lazy_cleanup_index(Relation indrel,
@@ -157,6 +199,10 @@ static void lazy_cleanup_index(Relation indrel,
 				   LVRelStats *vacrelstats);
 static int lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 				 int tupindex, LVRelStats *vacrelstats, Buffer *vmbuffer);
+static int lazy_warmclear_page(Relation onerel, BlockNumber blkno,
+				 Buffer buffer, int chainindex, LVRelStats *vacrelstats,
+				 Buffer *vmbuffer, bool check_all_visible);
+static void lazy_reset_warm_pointer_count(LVRelStats *vacrelstats);
 static bool should_attempt_truncation(LVRelStats *vacrelstats);
 static void lazy_truncate_heap(Relation onerel, LVRelStats *vacrelstats);
 static BlockNumber count_nondeletable_pages(Relation onerel,
@@ -164,8 +210,15 @@ static BlockNumber count_nondeletable_pages(Relation onerel,
 static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks);
 static void lazy_record_dead_tuple(LVRelStats *vacrelstats,
 					   ItemPointer itemptr);
-static bool lazy_tid_reaped(ItemPointer itemptr, void *state);
+static void lazy_record_warm_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr);
+static void lazy_record_clear_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr);
+static IndexBulkDeleteCallbackResult lazy_tid_reaped(ItemPointer itemptr, bool is_warm, void *state);
+static IndexBulkDeleteCallbackResult lazy_indexvac_phase1(ItemPointer itemptr, bool is_warm, void *state);
+static IndexBulkDeleteCallbackResult lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state);
 static int	vac_cmp_itemptr(const void *left, const void *right);
+static int vac_cmp_warm_chain(const void *left, const void *right);
 static bool heap_page_is_all_visible(Relation rel, Buffer buf,
 					 TransactionId *visibility_cutoff_xid, bool *all_frozen);
 
@@ -690,8 +743,10 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		 * If we are close to overrunning the available space for dead-tuple
 		 * TIDs, pause and do a cycle of vacuuming before we tackle this page.
 		 */
-		if ((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
-			vacrelstats->num_dead_tuples > 0)
+		if (((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
+			vacrelstats->num_dead_tuples > 0) ||
+			((vacrelstats->max_warm_chains - vacrelstats->num_warm_chains) < MaxHeapTuplesPerPage &&
+			 vacrelstats->num_warm_chains > 0))
 		{
 			const int	hvp_index[] = {
 				PROGRESS_VACUUM_PHASE,
@@ -721,6 +776,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			/* Remove index entries */
 			for (i = 0; i < nindexes; i++)
 				lazy_vacuum_index(Irel[i],
+								  (vacrelstats->num_warm_chains > 0),
 								  &indstats[i],
 								  vacrelstats);
 
@@ -743,6 +799,9 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			 * valid.
 			 */
 			vacrelstats->num_dead_tuples = 0;
+			vacrelstats->num_warm_chains = 0;
+			memset(vacrelstats->warm_chains, 0,
+					vacrelstats->max_warm_chains * sizeof (LVWarmChain));
 			vacrelstats->num_index_scans++;
 
 			/* Report that we are once again scanning the heap */
@@ -947,15 +1006,31 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 				continue;
 			}
 
+			ItemPointerSet(&(tuple.t_self), blkno, offnum);
+
 			/* Redirect items mustn't be touched */
 			if (ItemIdIsRedirected(itemid))
 			{
+				HeapCheckWarmChainStatus status = heap_check_warm_chain(page,
+						&tuple.t_self, false);
+				if (HCWC_IS_WARM_UPDATED(status))
+				{
+					/*
+					 * A chain which is either completely WARM or completely
+					 * CLEAR is a candidate for chain conversion. Remember the
+					 * chain and whether it has all WARM tuples or not.
+					 */
+					if (HCWC_IS_ALL_WARM(status))
+						lazy_record_warm_chain(vacrelstats, &tuple.t_self);
+					else if (HCWC_IS_ALL_CLEAR(status))
+						lazy_record_clear_chain(vacrelstats, &tuple.t_self);
+					else
+						vacrelstats->num_non_convertible_warm_chains++;
+				}
 				hastup = true;	/* this page won't be truncatable */
 				continue;
 			}
 
-			ItemPointerSet(&(tuple.t_self), blkno, offnum);
-
 			/*
 			 * DEAD item pointers are to be vacuumed normally; but we don't
 			 * count them in tups_vacuumed, else we'd be double-counting (at
@@ -975,6 +1050,26 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			tuple.t_len = ItemIdGetLength(itemid);
 			tuple.t_tableOid = RelationGetRelid(onerel);
 
+			if (!HeapTupleIsHeapOnly(&tuple))
+			{
+				HeapCheckWarmChainStatus status = heap_check_warm_chain(page,
+						&tuple.t_self, false);
+				if (HCWC_IS_WARM_UPDATED(status))
+				{
+					/*
+					 * A chain which is either completely WARM or completely
+					 * CLEAR is a candidate for chain conversion. Remember the
+					 * chain and whether it is all-WARM or all-CLEAR.
+					 */
+					if (HCWC_IS_ALL_WARM(status))
+						lazy_record_warm_chain(vacrelstats, &tuple.t_self);
+					else if (HCWC_IS_ALL_CLEAR(status))
+						lazy_record_clear_chain(vacrelstats, &tuple.t_self);
+					else
+						vacrelstats->num_non_convertible_warm_chains++;
+				}
+			}
+
 			tupgone = false;
 
 			switch (HeapTupleSatisfiesVacuum(&tuple, OldestXmin, buf))
@@ -1040,6 +1135,19 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 							break;
 						}
 
+						/*
+						 * If this tuple was ever WARM updated or is a WARM
+						 * tuple, there could be multiple index entries
+						 * pointing to the root of this chain. We can't do
+						 * index-only scans for such tuples without rechecking
+						 * the index keys. So mark the page as !all_visible.
+						 */
+						if (HeapTupleHeaderIsWarmUpdated(tuple.t_data))
+						{
+							all_visible = false;
+							break;
+						}
+
 						/* Track newest xmin on page. */
 						if (TransactionIdFollows(xmin, visibility_cutoff_xid))
 							visibility_cutoff_xid = xmin;
@@ -1282,7 +1390,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 
 	/* If any tuples need to be deleted, perform final vacuum cycle */
 	/* XXX put a threshold on min number of tuples here? */
-	if (vacrelstats->num_dead_tuples > 0)
+	if (vacrelstats->num_dead_tuples > 0 || vacrelstats->num_warm_chains > 0)
 	{
 		const int	hvp_index[] = {
 			PROGRESS_VACUUM_PHASE,
@@ -1300,6 +1408,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		/* Remove index entries */
 		for (i = 0; i < nindexes; i++)
 			lazy_vacuum_index(Irel[i],
+							  (vacrelstats->num_warm_chains > 0),
 							  &indstats[i],
 							  vacrelstats);
 
@@ -1371,7 +1480,10 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
  *
  *		This routine marks dead tuples as unused and compacts out free
  *		space on their pages.  Pages not having dead tuples recorded from
- *		lazy_scan_heap are not visited at all.
+ *		lazy_scan_heap are not visited at all. This routine also converts
+ *		candidate WARM chains to HOT chains by clearing WARM related flags. The
+ *		candidate WARM chains to HOT chains by clearing WARM-related flags. The
+ *		candidate chains are determined by the preceding index scans after
  *
  * Note: the reason for doing this as a second pass is we cannot remove
  * the tuples until we've removed their index entries, and we want to
@@ -1380,7 +1492,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 static void
 lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 {
-	int			tupindex;
+	int			tupindex, chainindex;
 	int			npages;
 	PGRUsage	ru0;
 	Buffer		vmbuffer = InvalidBuffer;
@@ -1389,33 +1501,69 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 	npages = 0;
 
 	tupindex = 0;
-	while (tupindex < vacrelstats->num_dead_tuples)
+	chainindex = 0;
+	while (tupindex < vacrelstats->num_dead_tuples ||
+		   chainindex < vacrelstats->num_warm_chains)
 	{
-		BlockNumber tblk;
+		BlockNumber tblk, chainblk, vacblk;
 		Buffer		buf;
 		Page		page;
 		Size		freespace;
 
 		vacuum_delay_point();
 
-		tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
-		buf = ReadBufferExtended(onerel, MAIN_FORKNUM, tblk, RBM_NORMAL,
+		tblk = chainblk = InvalidBlockNumber;
+		if (chainindex < vacrelstats->num_warm_chains)
+			chainblk =
+				ItemPointerGetBlockNumber(&(vacrelstats->warm_chains[chainindex].chain_tid));
+
+		if (tupindex < vacrelstats->num_dead_tuples)
+			tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
+
+		if (tblk == InvalidBlockNumber)
+			vacblk = chainblk;
+		else if (chainblk == InvalidBlockNumber)
+			vacblk = tblk;
+		else
+			vacblk = Min(chainblk, tblk);
+
+		Assert(vacblk != InvalidBlockNumber);
+
+		buf = ReadBufferExtended(onerel, MAIN_FORKNUM, vacblk, RBM_NORMAL,
 								 vac_strategy);
-		if (!ConditionalLockBufferForCleanup(buf))
+
+		if (vacblk == chainblk)
+			LockBufferForCleanup(buf);
+		else if (!ConditionalLockBufferForCleanup(buf))
 		{
 			ReleaseBuffer(buf);
 			++tupindex;
 			continue;
 		}
-		tupindex = lazy_vacuum_page(onerel, tblk, buf, tupindex, vacrelstats,
-									&vmbuffer);
+
+		/*
+		 * Convert WARM chains on this page. This should be done before
+		 * vacuuming the page to ensure that we can correctly set visibility
+		 * bits after clearing WARM chains.
+		 *
+		 * If we are going to vacuum this page then don't check for
+		 * all-visibility just yet.
+		*/
+		if (vacblk == chainblk)
+			chainindex = lazy_warmclear_page(onerel, chainblk, buf, chainindex,
+					vacrelstats, &vmbuffer, chainblk != tblk);
+
+		if (vacblk == tblk)
+			tupindex = lazy_vacuum_page(onerel, tblk, buf, tupindex, vacrelstats,
+					&vmbuffer);
 
 		/* Now that we've compacted the page, record its available space */
 		page = BufferGetPage(buf);
 		freespace = PageGetHeapFreeSpace(page);
 
 		UnlockReleaseBuffer(buf);
-		RecordPageWithFreeSpace(onerel, tblk, freespace);
+		RecordPageWithFreeSpace(onerel, vacblk, freespace);
 		npages++;
 	}
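[Reviewer note, not part of the patch.] The loop above merges two sorted work lists, dead-tuple blocks and WARM-chain blocks, always visiting the smaller block number next and using `InvalidBlockNumber` to mark an exhausted list. That selection step in isolation (sentinel value assumed to match PostgreSQL's `InvalidBlockNumber`):

```c
/* Pick the next heap block to visit, mirroring the tblk/chainblk/vacblk
 * logic in the reworked lazy_vacuum_heap(). */
#define InvalidBlockNumber 0xFFFFFFFFu

typedef unsigned int BlockNumber;

static BlockNumber next_block(BlockNumber tblk, BlockNumber chainblk)
{
	if (tblk == InvalidBlockNumber)
		return chainblk;		/* dead-tuple list exhausted */
	if (chainblk == InvalidBlockNumber)
		return tblk;			/* WARM-chain list exhausted */
	return tblk < chainblk ? tblk : chainblk;
}
```

When both lists name the same block, both `lazy_warmclear_page` and `lazy_vacuum_page` run on it under one cleanup lock, with chain conversion done first so visibility bits can be set correctly afterwards.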
 
@@ -1434,6 +1582,107 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 }
 
 /*
+ *	lazy_warmclear_page() -- clear various WARM bits on the tuples.
+ *
+ * Caller must hold pin and buffer cleanup lock on the buffer.
+ *
+ * chainindex is the index in vacrelstats->warm_chains of the first candidate
+ * chain on this page.  We assume the rest follow sequentially.
+ * The return value is the first chainindex after the chains of this page.
+ *
+ * If check_all_visible is set then we also check if the page has now become
+ * all visible and update visibility map.
+ */
+static int
+lazy_warmclear_page(Relation onerel, BlockNumber blkno, Buffer buffer,
+				 int chainindex, LVRelStats *vacrelstats, Buffer *vmbuffer,
+				 bool check_all_visible)
+{
+	Page			page = BufferGetPage(buffer);
+	OffsetNumber	cleared_offnums[MaxHeapTuplesPerPage];
+	int				num_cleared = 0;
+	TransactionId	visibility_cutoff_xid;
+	bool			all_frozen;
+
+	pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_WARMCLEARED, blkno);
+
+	START_CRIT_SECTION();
+
+	for (; chainindex < vacrelstats->num_warm_chains ; chainindex++)
+	{
+		BlockNumber tblk;
+		LVWarmChain	*chain;
+
+		chain = &vacrelstats->warm_chains[chainindex];
+
+		tblk = ItemPointerGetBlockNumber(&chain->chain_tid);
+		if (tblk != blkno)
+			break;				/* past end of tuples for this block */
+
+		/*
+		 * Since a heap page can have no more than MaxHeapTuplesPerPage
+		 * offnums and we process each offnum only once, an array of
+		 * MaxHeapTuplesPerPage entries is enough to hold all tuples cleared
+		 * on this page.
+		 */
+		if (!chain->keep_warm_chain)
+			num_cleared += heap_clear_warm_chain(page, &chain->chain_tid,
+					cleared_offnums + num_cleared);
+	}
+
+	/*
+	 * Mark buffer dirty before we write WAL.
+	 */
+	MarkBufferDirty(buffer);
+
+	/* XLOG stuff */
+	if (RelationNeedsWAL(onerel))
+	{
+		XLogRecPtr	recptr;
+
+		recptr = log_heap_warmclear(onerel, buffer,
+								cleared_offnums, num_cleared);
+		PageSetLSN(page, recptr);
+	}
+
+	END_CRIT_SECTION();
+
+	/* If not checking for all-visibility then we're done */
+	if (!check_all_visible)
+		return chainindex;
+
+	/*
+	 * The following code should match the corresponding code in
+	 * lazy_vacuum_page().
+	 */
+	if (heap_page_is_all_visible(onerel, buffer, &visibility_cutoff_xid,
+								 &all_frozen))
+		PageSetAllVisible(page);
+
+	/*
+	 * All the changes to the heap page have been done. If the all-visible
+	 * flag is now set, also set the VM all-visible bit (and, if possible, the
+	 * all-frozen bit) unless this has already been done previously.
+	 */
+	if (PageIsAllVisible(page))
+	{
+		uint8		vm_status = visibilitymap_get_status(onerel, blkno, vmbuffer);
+		uint8		flags = 0;
+
+		/* Set the VM all-visible and all-frozen bits, if not already set */
+		if ((vm_status & VISIBILITYMAP_ALL_VISIBLE) == 0)
+			flags |= VISIBILITYMAP_ALL_VISIBLE;
+		if ((vm_status & VISIBILITYMAP_ALL_FROZEN) == 0 && all_frozen)
+			flags |= VISIBILITYMAP_ALL_FROZEN;
+
+		Assert(BufferIsValid(*vmbuffer));
+		if (flags != 0)
+			visibilitymap_set(onerel, blkno, buffer, InvalidXLogRecPtr,
+							  *vmbuffer, visibility_cutoff_xid, flags);
+	}
+	return chainindex;
+}
+
+/*
  *	lazy_vacuum_page() -- free dead tuples on a page
  *					 and repair its fragmentation.
  *
@@ -1586,6 +1835,24 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
 	return false;
 }
 
+/*
+ * Reset counters tracking number of WARM and CLEAR pointers per candidate TID.
+ * These counters are maintained per index and cleared when the next index is
+ * picked up for cleanup.
+ *
+ * We don't touch the keep_warm_chain flag since once a chain is known to be
+ * non-convertible, we must remember that across all indexes.
+ */
+static void
+lazy_reset_warm_pointer_count(LVRelStats *vacrelstats)
+{
+	int i;
+	for (i = 0; i < vacrelstats->num_warm_chains; i++)
+	{
+		LVWarmChain *chain = &vacrelstats->warm_chains[i];
+		chain->num_clear_pointers = chain->num_warm_pointers = 0;
+	}
+}
 
 /*
  *	lazy_vacuum_index() -- vacuum one index relation.
@@ -1595,6 +1862,7 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
  */
 static void
 lazy_vacuum_index(Relation indrel,
+				  bool clear_warm,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats)
 {
@@ -1610,15 +1878,87 @@ lazy_vacuum_index(Relation indrel,
 	ivinfo.num_heap_tuples = vacrelstats->old_rel_tuples;
 	ivinfo.strategy = vac_strategy;
 
-	/* Do bulk deletion */
-	*stats = index_bulk_delete(&ivinfo, *stats,
-							   lazy_tid_reaped, (void *) vacrelstats);
+	/*
+	 * If told, convert WARM chains into HOT chains.
+	 *
+	 * We must have already collected candidate WARM chains, i.e. chains that
+	 * have either all tuples with the HEAP_WARM_TUPLE flag set or none.
+	 *
+	 * This works in two phases. In the first phase, we do a complete index
+	 * scan and collect information about index pointers to the candidate
+	 * chains, but we don't do conversion. To be precise, we count the number
+	 * of WARM and CLEAR index pointers to each candidate chain and use that
+	 * knowledge to arrive at a decision and do the actual conversion during
+	 * the second phase (we do, however, kill known-dead pointers in this phase).
+	 *
+	 * In the second phase, for each candidate chain we check if we have seen a
+	 * WARM index pointer. For such chains, we kill the CLEAR pointer and
+	 * convert the WARM pointer into a CLEAR pointer. The heap tuples are
+	 * cleared of WARM flags in the second heap scan. If we did not find any
+	 * WARM pointer to a WARM chain, that means the chain is reachable from
+	 * the CLEAR pointer (because, say, the WARM update did not add a new
+	 * entry for this index). In that case, we do nothing.  There is a third
+	 * case where we find two CLEAR pointers to a candidate chain. This can
+	 * happen because of aborted vacuums. We don't handle that case yet, but
+	 * it should be possible to apply the same recheck logic and find which of
+	 * the CLEAR pointers is redundant and should be removed.
+	 *
+	 * For CLEAR chains, we just kill the WARM pointer, if it exists, and keep
+	 * the CLEAR pointer.
+	 */
+	if (clear_warm)
+	{
+		/*
+		 * Before starting the index scan, reset the counters of WARM and CLEAR
+		 * pointers, probably carried forward from the previous index.
+		 */
+		lazy_reset_warm_pointer_count(vacrelstats);
+
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_indexvac_phase1, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to remove %d row versions, found "
+						"%.0f WARM pointers, %.0f CLEAR pointers, removed "
+						"%.0f WARM pointers, removed %.0f CLEAR pointers",
+						RelationGetRelationName(indrel),
+						vacrelstats->num_dead_tuples,
+						(*stats)->num_warm_pointers,
+						(*stats)->num_clear_pointers,
+						(*stats)->warm_pointers_removed,
+						(*stats)->clear_pointers_removed)));
+
+		(*stats)->num_warm_pointers = 0;
+		(*stats)->num_clear_pointers = 0;
+		(*stats)->warm_pointers_removed = 0;
+		(*stats)->clear_pointers_removed = 0;
+		(*stats)->pointers_cleared = 0;
+
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_indexvac_phase2, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to convert WARM pointers, found "
+						"%.0f WARM pointers, %.0f CLEAR pointers, removed "
+						"%.0f WARM pointers, removed %.0f CLEAR pointers, "
+						"cleared %.0f WARM pointers",
+						RelationGetRelationName(indrel),
+						(*stats)->num_warm_pointers,
+						(*stats)->num_clear_pointers,
+						(*stats)->warm_pointers_removed,
+						(*stats)->clear_pointers_removed,
+						(*stats)->pointers_cleared)));
+	}
+	else
+	{
+		/* Do bulk deletion */
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_tid_reaped, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to remove %d row versions",
+						RelationGetRelationName(indrel),
+						vacrelstats->num_dead_tuples),
+				 errdetail("%s.", pg_rusage_show(&ru0))));
+	}
 
-	ereport(elevel,
-			(errmsg("scanned index \"%s\" to remove %d row versions",
-					RelationGetRelationName(indrel),
-					vacrelstats->num_dead_tuples),
-			 errdetail("%s.", pg_rusage_show(&ru0))));
 }
 
 /*
@@ -1992,9 +2332,11 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 
 	if (vacrelstats->hasindex)
 	{
-		maxtuples = (vac_work_mem * 1024L) / sizeof(ItemPointerData);
+		maxtuples = (vac_work_mem * 1024L) / (sizeof(ItemPointerData) +
+				sizeof(LVWarmChain));
 		maxtuples = Min(maxtuples, INT_MAX);
-		maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));
+		maxtuples = Min(maxtuples, MaxAllocSize / (sizeof(ItemPointerData) +
+					sizeof(LVWarmChain)));
 
 		/* curious coding here to ensure the multiplication can't overflow */
 		if ((BlockNumber) (maxtuples / LAZY_ALLOC_TUPLES) > relblocks)
@@ -2012,6 +2354,57 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 	vacrelstats->max_dead_tuples = (int) maxtuples;
 	vacrelstats->dead_tuples = (ItemPointer)
 		palloc(maxtuples * sizeof(ItemPointerData));
+
+	/*
+	 * XXX Cheat for now and allocate the same size array for tracking warm
+	 * chains. maxtuples must have been already adjusted above to ensure we
+	 * don't cross vac_work_mem.
+	 */
+	vacrelstats->num_warm_chains = 0;
+	vacrelstats->max_warm_chains = (int) maxtuples;
+	vacrelstats->warm_chains = (LVWarmChain *)
+		palloc0(maxtuples * sizeof(LVWarmChain));
+
+}
+
+/*
+ * lazy_record_clear_chain - remember one CLEAR chain
+ */
+static void
+lazy_record_clear_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr)
+{
+	/*
+	 * The array shouldn't overflow under normal behavior, but perhaps it
+	 * could if we are given a really small maintenance_work_mem. In that
+	 * case, just forget the last few tuples (we'll get 'em next time).
+	 */
+	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	{
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 0;
+		vacrelstats->num_warm_chains++;
+	}
+}
+
+/*
+ * lazy_record_warm_chain - remember one WARM chain
+ */
+static void
+lazy_record_warm_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr)
+{
+	/*
+	 * The array shouldn't overflow under normal behavior, but perhaps it
+	 * could if we are given a really small maintenance_work_mem. In that
+	 * case, just forget the last few tuples (we'll get 'em next time).
+	 */
+	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	{
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 1;
+		vacrelstats->num_warm_chains++;
+	}
 }
 
 /*
@@ -2042,8 +2435,8 @@ lazy_record_dead_tuple(LVRelStats *vacrelstats,
  *
  *		Assumes dead_tuples array is in sorted order.
  */
-static bool
-lazy_tid_reaped(ItemPointer itemptr, void *state)
+static IndexBulkDeleteCallbackResult
+lazy_tid_reaped(ItemPointer itemptr, bool is_warm, void *state)
 {
 	LVRelStats *vacrelstats = (LVRelStats *) state;
 	ItemPointer res;
@@ -2054,7 +2447,193 @@ lazy_tid_reaped(ItemPointer itemptr, void *state)
 								sizeof(ItemPointerData),
 								vac_cmp_itemptr);
 
-	return (res != NULL);
+	return (res != NULL) ? IBDCR_DELETE : IBDCR_KEEP;
+}
+
+/*
+ *	lazy_indexvac_phase1() -- run first pass of index vacuum
+ *
+ *		This has the right signature to be an IndexBulkDeleteCallback.
+ */
+static IndexBulkDeleteCallbackResult
+lazy_indexvac_phase1(ItemPointer itemptr, bool is_warm, void *state)
+{
+	LVRelStats		*vacrelstats = (LVRelStats *) state;
+	ItemPointer		res;
+	LVWarmChain	*chain;
+
+	res = (ItemPointer) bsearch((void *) itemptr,
+								(void *) vacrelstats->dead_tuples,
+								vacrelstats->num_dead_tuples,
+								sizeof(ItemPointerData),
+								vac_cmp_itemptr);
+
+	if (res != NULL)
+		return IBDCR_DELETE;
+
+	chain = (LVWarmChain *) bsearch((void *) itemptr,
+								(void *) vacrelstats->warm_chains,
+								vacrelstats->num_warm_chains,
+								sizeof(LVWarmChain),
+								vac_cmp_warm_chain);
+	if (chain != NULL)
+	{
+		if (is_warm)
+			chain->num_warm_pointers++;
+		else
+			chain->num_clear_pointers++;
+	}
+	return IBDCR_KEEP;
+}
+
+/*
+ *	lazy_indexvac_phase2() -- run second pass of index vacuum
+ *
+ *		This has the right signature to be an IndexBulkDeleteCallback.
+ */
+static IndexBulkDeleteCallbackResult
+lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state)
+{
+	LVRelStats		*vacrelstats = (LVRelStats *) state;
+	LVWarmChain	*chain;
+
+	chain = (LVWarmChain *) bsearch((void *) itemptr,
+								(void *) vacrelstats->warm_chains,
+								vacrelstats->num_warm_chains,
+								sizeof(LVWarmChain),
+								vac_cmp_warm_chain);
+
+	if (chain != NULL && (chain->keep_warm_chain != 1))
+	{
+		/*
+		 * At no point can there be more than one WARM pointer to any chain,
+		 * nor more than two CLEAR pointers.
+		 */
+		Assert(chain->num_warm_pointers <= 1);
+		Assert(chain->num_clear_pointers <= 2);
+
+		if (chain->is_postwarm_chain == 1)
+		{
+			if (is_warm)
+			{
+				/*
+				 * A WARM pointer, pointing to a WARM chain.
+				 *
+				 * Clear the warm pointer (and delete the CLEAR pointer). We
+				 * may have already seen the CLEAR pointer in the scan and
+				 * deleted that or we may see it later in the scan. It doesn't
+				 * matter if we fail at any point because we won't clear up
+				 * WARM bits on the heap tuples until we have dealt with the
+				 * index pointers cleanly.
+				 */
+				return IBDCR_CLEAR_WARM;
+			}
+			else
+			{
+				/*
+				 * CLEAR pointer to a WARM chain.
+				 */
+				if (chain->num_warm_pointers > 0)
+				{
+					/*
+					 * If there exists a WARM pointer to the chain, we can
+					 * delete the CLEAR pointer and clear the WARM bits on the
+					 * heap tuples.
+					 */
+					return IBDCR_DELETE;
+				}
+				else if (chain->num_clear_pointers == 1)
+				{
+					/*
+					 * If this is the only pointer to a WARM chain, we must
+					 * keep the CLEAR pointer.
+					 *
+					 * The presence of the WARM chain indicates that the WARM
+					 * update must have committed. But this index was probably
+					 * not updated during that update and hence it contains
+					 * just the one original CLEAR pointer to the chain.
+					 * We should be able to clear the WARM bits on heap tuples
+					 * unless we later find another index which prevents the
+					 * cleanup.
+					 */
+					return IBDCR_KEEP;
+				}
+			}
+		}
+		else
+		{
+			/*
+			 * This is a CLEAR chain.
+			 */
+			if (is_warm)
+			{
+				/*
+				 * A WARM pointer to a CLEAR chain.
+				 *
+				 * This can happen when a WARM update is aborted. Later the HOT
+				 * chain is pruned leaving behind only CLEAR tuples in the
+				 * chain. But the WARM index pointer inserted in the index
+				 * remains and it must now be deleted before we clear WARM bits
+				 * from the heap tuple.
+				 */
+				return IBDCR_DELETE;
+			}
+
+			/*
+			 * CLEAR pointer to a CLEAR chain.
+			 *
+			 * If this is the only surviving CLEAR pointer, keep it and clear
+			 * the WARM bits from the heap tuples.
+			 */
+			if (chain->num_clear_pointers == 1)
+				return IBDCR_KEEP;
+
+			/*
+			 * If there is more than one CLEAR pointer to this chain, we could
+			 * apply the recheck logic and kill the redundant CLEAR pointer and
+			 * convert the chain. But that's not yet done.
+			 */
+		}
+
+		/*
+		 * For everything else, we must keep the WARM bits and also keep the
+		 * index pointers.
+		 */
+		chain->keep_warm_chain = 1;
+		return IBDCR_KEEP;
+	}
+	return IBDCR_KEEP;
+}
+
+/*
+ * Comparator routine for use with bsearch(). Similar to vac_cmp_itemptr,
+ * but the right-hand argument is an LVWarmChain struct pointer.
+ */
+static int
+vac_cmp_warm_chain(const void *left, const void *right)
+{
+	BlockNumber lblk,
+				rblk;
+	OffsetNumber loff,
+				roff;
+
+	lblk = ItemPointerGetBlockNumber((ItemPointer) left);
+	rblk = ItemPointerGetBlockNumber(&((LVWarmChain *) right)->chain_tid);
+
+	if (lblk < rblk)
+		return -1;
+	if (lblk > rblk)
+		return 1;
+
+	loff = ItemPointerGetOffsetNumber((ItemPointer) left);
+	roff = ItemPointerGetOffsetNumber(&((LVWarmChain *) right)->chain_tid);
+
+	if (loff < roff)
+		return -1;
+	if (loff > roff)
+		return 1;
+
+	return 0;
 }
 
 /*
@@ -2170,6 +2749,18 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
 						break;
 					}
 
+					/*
+					 * If this or any other tuple in the chain was ever WARM
+					 * updated, there could be multiple index entries pointing
+					 * to the root of this chain. We can't do index-only scans
+					 * for such tuples without rechecking the index keys. So
+					 * mark the page as !all_visible.
+					 */
+					if (HeapTupleHeaderIsWarmUpdated(tuple.t_data))
+					{
+						all_visible = false;
+					}
+
 					/* Track newest xmin on page. */
 					if (TransactionIdFollows(xmin, *visibility_cutoff_xid))
 						*visibility_cutoff_xid = xmin;
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index c3f1873..2143978 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -270,6 +270,8 @@ ExecCloseIndices(ResultRelInfo *resultRelInfo)
 List *
 ExecInsertIndexTuples(TupleTableSlot *slot,
 					  ItemPointer tupleid,
+					  ItemPointer root_tid,
+					  Bitmapset *modified_attrs,
 					  EState *estate,
 					  bool noDupErr,
 					  bool *specConflict,
@@ -324,6 +326,17 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 		if (!indexInfo->ii_ReadyForInserts)
 			continue;
 
+		/*
+		 * If modified_attrs is set, we only insert index entries for those
+		 * indexes whose columns have changed. All other indexes can use their
+		 * existing index pointers to look up the new tuple.
+		 */
+		if (modified_attrs)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
 		/* Check for partial index */
 		if (indexInfo->ii_Predicate != NIL)
 		{
@@ -387,10 +400,11 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 			index_insert(indexRelation, /* index relation */
 						 values,	/* array of index Datums */
 						 isnull,	/* null flags */
-						 tupleid,		/* tid of heap tuple */
+						 root_tid,		/* tid of heap or root tuple */
 						 heapRelation,	/* heap relation */
 						 checkUnique,	/* type of uniqueness check to do */
-						 indexInfo);	/* index AM may need this */
+						 indexInfo,	/* index AM may need this */
+						 (modified_attrs != NULL));	/* is it a WARM update? */
 
 		/*
 		 * If the index has an associated exclusion constraint, check that.
@@ -787,6 +801,9 @@ retry:
 		{
 			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
 				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
+			else
+				ItemPointerCopy(&tup->t_self, &ctid_wait);
+
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index f20d728..747e4ce 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -399,6 +399,8 @@ ExecSimpleRelationInsert(EState *estate, TupleTableSlot *slot)
 
 		if (resultRelInfo->ri_NumIndices > 0)
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &(tuple->t_self),
+												   NULL,
 												   estate, false, NULL,
 												   NIL);
 
@@ -445,6 +447,8 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 	if (!skip_tuple)
 	{
 		List	   *recheckIndexes = NIL;
+		bool		warm_update;
+		Bitmapset  *modified_attrs;
 
 		/* Check the constraints of the tuple */
 		if (rel->rd_att->constr)
@@ -455,13 +459,35 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 
 		/* OK, update the tuple and index entries for it */
 		simple_heap_update(rel, &searchslot->tts_tuple->t_self,
-						   slot->tts_tuple);
+						   slot->tts_tuple, &modified_attrs, &warm_update);
 
 		if (resultRelInfo->ri_NumIndices > 0 &&
-			!HeapTupleIsHeapOnly(slot->tts_tuple))
+			(!HeapTupleIsHeapOnly(slot->tts_tuple) || warm_update))
+		{
+			ItemPointerData root_tid;
+
+			/*
+			 * If we did a WARM update then we must index the tuple using its
+			 * root line pointer and not the tuple TID itself.
+			 */
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self,
+						&root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
+
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL,
 												   NIL);
+		}
 
 		/* AFTER ROW UPDATE Triggers */
 		ExecARUpdateTriggers(estate, resultRelInfo,
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index 19eb175..ef3653c 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -39,6 +39,7 @@
 
 #include "access/relscan.h"
 #include "access/transam.h"
+#include "access/valid.h"
 #include "executor/execdebug.h"
 #include "executor/nodeBitmapHeapscan.h"
 #include "pgstat.h"
@@ -395,11 +396,27 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 			OffsetNumber offnum = tbmres->offsets[curslot];
 			ItemPointerData tid;
 			HeapTupleData heapTuple;
+			bool recheck = false;
 
 			ItemPointerSet(&tid, page, offnum);
 			if (heap_hot_search_buffer(&tid, scan->rs_rd, buffer, snapshot,
-									   &heapTuple, NULL, true))
-				scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+									   &heapTuple, NULL, true, &recheck))
+			{
+				bool valid = true;
+
+				if (scan->rs_key)
+					HeapKeyTest(&heapTuple, RelationGetDescr(scan->rs_rd),
+							scan->rs_nkeys, scan->rs_key, valid);
+				if (valid)
+					scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+
+				/*
+				 * If the heap tuple needs a recheck because of a WARM update,
+				 * it's a lossy case.
+				 */
+				if (recheck)
+					tbmres->recheck = true;
+			}
 		}
 	}
 	else
diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c
index 5afd02e..6e48c2e 100644
--- a/src/backend/executor/nodeIndexscan.c
+++ b/src/backend/executor/nodeIndexscan.c
@@ -142,8 +142,8 @@ IndexNext(IndexScanState *node)
 					   false);	/* don't pfree */
 
 		/*
-		 * If the index was lossy, we have to recheck the index quals using
-		 * the fetched tuple.
+		 * If the index was lossy or the tuple was WARM, we have to recheck
+		 * the index quals using the fetched tuple.
 		 */
 		if (scandesc->xs_recheck)
 		{
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index 0b524e0..2ad4a2c 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -513,6 +513,7 @@ ExecInsert(ModifyTableState *mtstate,
 
 			/* insert index entries for tuple */
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												 &(tuple->t_self), NULL,
 												 estate, true, &specConflict,
 												   arbiterIndexes);
 
@@ -559,6 +560,7 @@ ExecInsert(ModifyTableState *mtstate,
 			/* insert index entries for tuple */
 			if (resultRelInfo->ri_NumIndices > 0)
 				recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+													   &(tuple->t_self), NULL,
 													   estate, false, NULL,
 													   arbiterIndexes);
 		}
@@ -892,6 +894,9 @@ ExecUpdate(ItemPointer tupleid,
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
 	List	   *recheckIndexes = NIL;
+	Bitmapset  *modified_attrs = NULL;
+	ItemPointerData	root_tid;
+	bool		warm_update;
 
 	/*
 	 * abort the operation if not running transactions
@@ -1008,7 +1013,7 @@ lreplace:;
 							 estate->es_output_cid,
 							 estate->es_crosscheck_snapshot,
 							 true /* wait for commit */ ,
-							 &hufd, &lockmode);
+							 &hufd, &lockmode, &modified_attrs, &warm_update);
 		switch (result)
 		{
 			case HeapTupleSelfUpdated:
@@ -1095,10 +1100,28 @@ lreplace:;
 		 * the t_self field.
 		 *
 		 * If it's a HOT update, we mustn't insert new index entries.
+		 *
+		 * If it's a WARM update, then we must insert new entries with TID
+		 * pointing to the root of the WARM chain.
 		 */
-		if (resultRelInfo->ri_NumIndices > 0 && !HeapTupleIsHeapOnly(tuple))
+		if (resultRelInfo->ri_NumIndices > 0 &&
+			(!HeapTupleIsHeapOnly(tuple) || warm_update))
+		{
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self, &root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL, NIL);
+		}
 	}
 
 	if (canSetTag)
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index 56a8bf2..52fe4ba 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -1888,7 +1888,7 @@ pgstat_count_heap_insert(Relation rel, PgStat_Counter n)
  * pgstat_count_heap_update - count a tuple update
  */
 void
-pgstat_count_heap_update(Relation rel, bool hot)
+pgstat_count_heap_update(Relation rel, bool hot, bool warm)
 {
 	PgStat_TableStatus *pgstat_info = rel->pgstat_info;
 
@@ -1906,6 +1906,8 @@ pgstat_count_heap_update(Relation rel, bool hot)
 		/* t_tuples_hot_updated is nontransactional, so just advance it */
 		if (hot)
 			pgstat_info->t_counts.t_tuples_hot_updated++;
+		else if (warm)
+			pgstat_info->t_counts.t_tuples_warm_updated++;
 	}
 }
 
@@ -4521,6 +4523,7 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->tuples_updated = 0;
 		result->tuples_deleted = 0;
 		result->tuples_hot_updated = 0;
+		result->tuples_warm_updated = 0;
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
 		result->changes_since_analyze = 0;
@@ -5630,6 +5633,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated = tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted = tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated = tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated = tabmsg->t_counts.t_tuples_warm_updated;
 			tabentry->n_live_tuples = tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples = tabmsg->t_counts.t_delta_dead_tuples;
 			tabentry->changes_since_analyze = tabmsg->t_counts.t_changed_tuples;
@@ -5657,6 +5661,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated += tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted += tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated += tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated += tabmsg->t_counts.t_tuples_warm_updated;
 			/* If table was truncated, first reset the live/dead counters */
 			if (tabmsg->t_counts.t_truncated)
 			{
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index 5c13d26..7a9b48a 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -347,7 +347,7 @@ DecodeStandbyOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 static void
 DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 {
-	uint8		info = XLogRecGetInfo(buf->record) & XLOG_HEAP_OPMASK;
+	uint8		info = XLogRecGetInfo(buf->record) & XLOG_HEAP2_OPMASK;
 	TransactionId xid = XLogRecGetXid(buf->record);
 	SnapBuild  *builder = ctx->snapshot_builder;
 
@@ -359,10 +359,6 @@ DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 
 	switch (info)
 	{
-		case XLOG_HEAP2_MULTI_INSERT:
-			if (SnapBuildProcessChange(builder, xid, buf->origptr))
-				DecodeMultiInsert(ctx, buf);
-			break;
 		case XLOG_HEAP2_NEW_CID:
 			{
 				xl_heap_new_cid *xlrec;
@@ -390,6 +386,7 @@ DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 		case XLOG_HEAP2_CLEANUP_INFO:
 		case XLOG_HEAP2_VISIBLE:
 		case XLOG_HEAP2_LOCK_UPDATED:
+		case XLOG_HEAP2_WARMCLEAR:
 			break;
 		default:
 			elog(ERROR, "unexpected RM_HEAP2_ID record type: %u", info);
@@ -418,6 +415,10 @@ DecodeHeapOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 			if (SnapBuildProcessChange(builder, xid, buf->origptr))
 				DecodeInsert(ctx, buf);
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			if (SnapBuildProcessChange(builder, xid, buf->origptr))
+				DecodeMultiInsert(ctx, buf);
+			break;
 
 			/*
 			 * Treat HOT update as normal updates. There is no useful
@@ -809,7 +810,7 @@ DecodeDelete(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 }
 
 /*
- * Decode XLOG_HEAP2_MULTI_INSERT_insert record into multiple tuplebufs.
+ * Decode XLOG_HEAP_MULTI_INSERT record into multiple tuplebufs.
  *
  * Currently MULTI_INSERT will always contain the full tuples.
  */
diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index fdf045a..8d23e92 100644
--- a/src/backend/storage/page/bufpage.c
+++ b/src/backend/storage/page/bufpage.c
@@ -1151,6 +1151,29 @@ PageIndexTupleOverwrite(Page page, OffsetNumber offnum,
 	return true;
 }
 
+/*
+ * PageIndexClearWarmTuples
+ *
+ * Clear the given WARM pointers by resetting the flags stored in the TID
+ * field. We assume there is nothing else in the TID flags other than the WARM
+ * information and clearing all flag bits is safe. If that changes, we must
+ * change this routine as well.
+ */
+void
+PageIndexClearWarmTuples(Page page, OffsetNumber *clearitemnos,
+						 uint16 nclearitems)
+{
+	int			i;
+	ItemId		itemid;
+	IndexTuple	itup;
+
+	for (i = 0; i < nclearitems; i++)
+	{
+		itemid = PageGetItemId(page, clearitemnos[i]);
+		itup = (IndexTuple) PageGetItem(page, itemid);
+		ItemPointerClearFlags(&itup->t_tid);
+	}
+}
 
 /*
  * Set checksum for a page in shared buffers.
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index dd2b924..713d731 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -146,6 +146,22 @@ pg_stat_get_tuples_hot_updated(PG_FUNCTION_ARGS)
 
 
 Datum
+pg_stat_get_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_StatTabEntry *tabentry;
+
+	if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+
+Datum
 pg_stat_get_live_tuples(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
@@ -1672,6 +1688,21 @@ pg_stat_get_xact_tuples_hot_updated(PG_FUNCTION_ARGS)
 }
 
 Datum
+pg_stat_get_xact_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_TableStatus *tabentry;
+
+	if ((tabentry = find_tabstat_entry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->t_counts.t_tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+Datum
 pg_stat_get_xact_blocks_fetched(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index bc22098..c7266d7 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -2339,6 +2339,7 @@ RelationDestroyRelation(Relation relation, bool remember_tupdesc)
 	list_free_deep(relation->rd_fkeylist);
 	list_free(relation->rd_indexlist);
 	bms_free(relation->rd_indexattr);
+	bms_free(relation->rd_exprindexattr);
 	bms_free(relation->rd_keyattr);
 	bms_free(relation->rd_pkattr);
 	bms_free(relation->rd_idattr);
@@ -4353,6 +4354,13 @@ RelationGetIndexList(Relation relation)
 		return list_copy(relation->rd_indexlist);
 
 	/*
+	 * If the index list was invalidated, we must also invalidate the index
+	 * attribute list (which should automatically invalidate other attributes
+	 * such as the primary key and replica identity).
+	 */
+	relation->rd_indexattr = NULL;
+
+	/*
 	 * We build the list we intend to return (in the caller's context) while
 	 * doing the scan.  After successfully completing the scan, we copy that
 	 * list into the relcache entry.  This avoids cache-context memory leakage
@@ -4836,15 +4844,20 @@ Bitmapset *
 RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 {
 	Bitmapset  *indexattrs;		/* indexed columns */
+	Bitmapset  *exprindexattrs;	/* indexed columns in expression/predicate
+									 indexes */
 	Bitmapset  *uindexattrs;	/* columns in unique indexes */
 	Bitmapset  *pkindexattrs;	/* columns in the primary index */
 	Bitmapset  *idindexattrs;	/* columns in the replica identity */
+	Bitmapset  *indxnotreadyattrs;	/* columns in not ready indexes */
 	List	   *indexoidlist;
 	List	   *newindexoidlist;
+	List	   *indexattrsList;
 	Oid			relpkindex;
 	Oid			relreplindex;
 	ListCell   *l;
 	MemoryContext oldcxt;
+	bool		supportswarm = true;	/* true if the table can be WARM updated */
 
 	/* Quick exit if we already computed the result. */
 	if (relation->rd_indexattr != NULL)
@@ -4859,6 +4872,10 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				return bms_copy(relation->rd_pkattr);
 			case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 				return bms_copy(relation->rd_idattr);
+			case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+				return bms_copy(relation->rd_exprindexattr);
+			case INDEX_ATTR_BITMAP_NOTREADY:
+				return bms_copy(relation->rd_indxnotreadyattr);
 			default:
 				elog(ERROR, "unknown attrKind %u", attrKind);
 		}
@@ -4899,9 +4916,12 @@ restart:
 	 * won't be returned at all by RelationGetIndexList.
 	 */
 	indexattrs = NULL;
+	exprindexattrs = NULL;
 	uindexattrs = NULL;
 	pkindexattrs = NULL;
 	idindexattrs = NULL;
+	indxnotreadyattrs = NULL;
+	indexattrsList = NIL;
 	foreach(l, indexoidlist)
 	{
 		Oid			indexOid = lfirst_oid(l);
@@ -4911,6 +4931,7 @@ restart:
 		bool		isKey;		/* candidate key */
 		bool		isPK;		/* primary key */
 		bool		isIDKey;	/* replica identity index */
+		Bitmapset	*thisindexattrs = NULL;
 
 		indexDesc = index_open(indexOid, AccessShareLock);
 
@@ -4935,9 +4956,16 @@ restart:
 
 			if (attrnum != 0)
 			{
+				thisindexattrs = bms_add_member(thisindexattrs,
+							   attrnum - FirstLowInvalidHeapAttributeNumber);
+
 				indexattrs = bms_add_member(indexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
 
+				if (!indexInfo->ii_ReadyForInserts)
+					indxnotreadyattrs = bms_add_member(indxnotreadyattrs,
+							   attrnum - FirstLowInvalidHeapAttributeNumber);
+
 				if (isKey)
 					uindexattrs = bms_add_member(uindexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
@@ -4953,10 +4981,31 @@ restart:
 		}
 
 		/* Collect all attributes used in expressions, too */
-		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &exprindexattrs);
 
 		/* Collect all attributes in the index predicate, too */
-		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &exprindexattrs);
+
+		/*
+		 * indexattrs should include attributes referenced in index expressions
+		 * and predicates too.
+		 */
+		indexattrs = bms_add_members(indexattrs, exprindexattrs);
+		thisindexattrs = bms_add_members(thisindexattrs, exprindexattrs);
+
+		if (!indexInfo->ii_ReadyForInserts)
+			indxnotreadyattrs = bms_add_members(indxnotreadyattrs,
+					exprindexattrs);
+
+		/*
+		 * Check if the index has an amrecheck method defined. If it does
+		 * not, the index cannot support WARM updates, so completely disable
+		 * WARM updates for the table.
+		 */
+		if (!indexDesc->rd_amroutine->amrecheck)
+			supportswarm = false;
+
+		indexattrsList = lappend(indexattrsList, thisindexattrs);
 
 		index_close(indexDesc, AccessShareLock);
 	}
@@ -4985,19 +5034,28 @@ restart:
 		bms_free(pkindexattrs);
 		bms_free(idindexattrs);
 		bms_free(indexattrs);
-
+		list_free_deep(indexattrsList);
 		goto restart;
 	}
 
+	/* Remember if the table can do WARM updates */
+	relation->rd_supportswarm = supportswarm;
+
 	/* Don't leak the old values of these bitmaps, if any */
 	bms_free(relation->rd_indexattr);
 	relation->rd_indexattr = NULL;
+	bms_free(relation->rd_exprindexattr);
+	relation->rd_exprindexattr = NULL;
 	bms_free(relation->rd_keyattr);
 	relation->rd_keyattr = NULL;
 	bms_free(relation->rd_pkattr);
 	relation->rd_pkattr = NULL;
 	bms_free(relation->rd_idattr);
 	relation->rd_idattr = NULL;
+	bms_free(relation->rd_indxnotreadyattr);
+	relation->rd_indxnotreadyattr = NULL;
+	list_free_deep(relation->rd_indexattrsList);
+	relation->rd_indexattrsList = NIL;
 
 	/*
 	 * Now save copies of the bitmaps in the relcache entry.  We intentionally
@@ -5010,7 +5068,21 @@ restart:
 	relation->rd_keyattr = bms_copy(uindexattrs);
 	relation->rd_pkattr = bms_copy(pkindexattrs);
 	relation->rd_idattr = bms_copy(idindexattrs);
-	relation->rd_indexattr = bms_copy(indexattrs);
+	relation->rd_exprindexattr = bms_copy(exprindexattrs);
+	relation->rd_indexattr = bms_copy(bms_union(indexattrs, exprindexattrs));
+	relation->rd_indxnotreadyattr = bms_copy(indxnotreadyattrs);
+
+	/*
+	 * Create a deep copy of the list, copying each bitmap in the
+	 * CurrentMemoryContext.
+	 */
+	foreach(l, indexattrsList)
+	{
+		Bitmapset *b = (Bitmapset *) lfirst(l);
+		relation->rd_indexattrsList = lappend(relation->rd_indexattrsList,
+				bms_copy(b));
+	}
+
 	MemoryContextSwitchTo(oldcxt);
 
 	/* We return our original working copy for caller to play with */
@@ -5024,6 +5096,10 @@ restart:
 			return bms_copy(relation->rd_pkattr);
 		case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 			return idindexattrs;
+		case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+			return exprindexattrs;
+		case INDEX_ATTR_BITMAP_NOTREADY:
+			return indxnotreadyattrs;
 		default:
 			elog(ERROR, "unknown attrKind %u", attrKind);
 			return NULL;
@@ -5031,6 +5107,34 @@ restart:
 }
 
 /*
+ * Get a list of bitmaps, where each bitmap contains a list of attributes used
+ * by one index.
+ *
+ * The actual information is computed in RelationGetIndexAttrBitmap, but since
+ * the only current consumer of this function calls it immediately after
+ * calling RelationGetIndexAttrBitmap, we should be fine. We don't expect any
+ * relcache invalidation to occur between these two calls and hence don't
+ * expect the cached information to change underneath us.
+ */
+List *
+RelationGetIndexAttrList(Relation relation)
+{
+	ListCell   *l;
+	List	   *indexattrsList = NIL;
+
+	/*
+	 * Create a deep copy of the list by copying bitmaps in the
+	 * CurrentMemoryContext.
+	 */
+	foreach(l, relation->rd_indexattrsList)
+	{
+		Bitmapset *b = (Bitmapset *) lfirst(l);
+		indexattrsList = lappend(indexattrsList, bms_copy(b));
+	}
+	return indexattrsList;
+}
+
+/*
  * RelationGetExclusionInfo -- get info about index's exclusion constraint
  *
  * This should be called only for an index that is known to have an
@@ -5636,6 +5740,7 @@ load_relcache_init_file(bool shared)
 		rel->rd_keyattr = NULL;
 		rel->rd_pkattr = NULL;
 		rel->rd_idattr = NULL;
+		rel->rd_indxnotreadyattr = NULL;
 		rel->rd_pubactions = NULL;
 		rel->rd_statvalid = false;
 		rel->rd_statlist = NIL;
diff --git a/src/backend/utils/time/combocid.c b/src/backend/utils/time/combocid.c
index baff998..6a2e2f2 100644
--- a/src/backend/utils/time/combocid.c
+++ b/src/backend/utils/time/combocid.c
@@ -106,7 +106,7 @@ HeapTupleHeaderGetCmin(HeapTupleHeader tup)
 {
 	CommandId	cid = HeapTupleHeaderGetRawCommandId(tup);
 
-	Assert(!(tup->t_infomask & HEAP_MOVED));
+	Assert(!(HeapTupleHeaderIsMoved(tup)));
 	Assert(TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXmin(tup)));
 
 	if (tup->t_infomask & HEAP_COMBOCID)
@@ -120,7 +120,7 @@ HeapTupleHeaderGetCmax(HeapTupleHeader tup)
 {
 	CommandId	cid = HeapTupleHeaderGetRawCommandId(tup);
 
-	Assert(!(tup->t_infomask & HEAP_MOVED));
+	Assert(!(HeapTupleHeaderIsMoved(tup)));
 
 	/*
 	 * Because GetUpdateXid() performs memory allocations if xmax is a
diff --git a/src/backend/utils/time/tqual.c b/src/backend/utils/time/tqual.c
index 519f3b6..e54d0df 100644
--- a/src/backend/utils/time/tqual.c
+++ b/src/backend/utils/time/tqual.c
@@ -186,7 +186,7 @@ HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer)
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -205,7 +205,7 @@ HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer)
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -377,7 +377,7 @@ HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -396,7 +396,7 @@ HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -471,7 +471,7 @@ HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid,
 			return HeapTupleInvisible;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -490,7 +490,7 @@ HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -753,7 +753,7 @@ HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -772,7 +772,7 @@ HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -974,7 +974,7 @@ HeapTupleSatisfiesMVCC(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -993,7 +993,7 @@ HeapTupleSatisfiesMVCC(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -1180,7 +1180,7 @@ HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin,
 		if (HeapTupleHeaderXminInvalid(tuple))
 			return HEAPTUPLE_DEAD;
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_OFF)
+		else if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -1198,7 +1198,7 @@ HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin,
 						InvalidTransactionId);
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
diff --git a/src/include/access/amapi.h b/src/include/access/amapi.h
index f919cf8..8b7af1e 100644
--- a/src/include/access/amapi.h
+++ b/src/include/access/amapi.h
@@ -13,6 +13,7 @@
 #define AMAPI_H
 
 #include "access/genam.h"
+#include "access/itup.h"
 
 /*
  * We don't wish to include planner header files here, since most of an index
@@ -74,6 +75,14 @@ typedef bool (*aminsert_function) (Relation indexRelation,
 											   Relation heapRelation,
 											   IndexUniqueCheck checkUnique,
 											   struct IndexInfo *indexInfo);
+/* insert this WARM tuple */
+typedef bool (*amwarminsert_function) (Relation indexRelation,
+											   Datum *values,
+											   bool *isnull,
+											   ItemPointer heap_tid,
+											   Relation heapRelation,
+											   IndexUniqueCheck checkUnique,
+											   struct IndexInfo *indexInfo);
 
 /* bulk delete */
 typedef IndexBulkDeleteResult *(*ambulkdelete_function) (IndexVacuumInfo *info,
@@ -152,6 +161,11 @@ typedef void (*aminitparallelscan_function) (void *target);
 /* (re)start parallel index scan */
 typedef void (*amparallelrescan_function) (IndexScanDesc scan);
 
+/* recheck that the index tuple and heap tuple match */
+typedef bool (*amrecheck_function) (Relation indexRel,
+		struct IndexInfo *indexInfo, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+
 /*
  * API struct for an index AM.  Note this must be stored in a single palloc'd
  * chunk of memory.
@@ -198,6 +212,7 @@ typedef struct IndexAmRoutine
 	ambuild_function ambuild;
 	ambuildempty_function ambuildempty;
 	aminsert_function aminsert;
+	amwarminsert_function amwarminsert;
 	ambulkdelete_function ambulkdelete;
 	amvacuumcleanup_function amvacuumcleanup;
 	amcanreturn_function amcanreturn;	/* can be NULL */
@@ -217,6 +232,9 @@ typedef struct IndexAmRoutine
 	amestimateparallelscan_function amestimateparallelscan;		/* can be NULL */
 	aminitparallelscan_function aminitparallelscan;		/* can be NULL */
 	amparallelrescan_function amparallelrescan; /* can be NULL */
+
+	/* interface function to support WARM */
+	amrecheck_function amrecheck;		/* can be NULL */
 } IndexAmRoutine;
 
 
diff --git a/src/include/access/genam.h b/src/include/access/genam.h
index f467b18..965be45 100644
--- a/src/include/access/genam.h
+++ b/src/include/access/genam.h
@@ -75,12 +75,29 @@ typedef struct IndexBulkDeleteResult
 	bool		estimated_count;	/* num_index_tuples is an estimate */
 	double		num_index_tuples;		/* tuples remaining */
 	double		tuples_removed; /* # removed during vacuum operation */
+	double		num_warm_pointers;	/* # WARM pointers found */
+	double		num_clear_pointers;	/* # CLEAR pointers found */
+	double		pointers_cleared;	/* # WARM pointers cleared */
+	double		warm_pointers_removed;	/* # WARM pointers removed */
+	double		clear_pointers_removed;	/* # CLEAR pointers removed */
 	BlockNumber pages_deleted;	/* # unused pages in index */
 	BlockNumber pages_free;		/* # pages available for reuse */
 } IndexBulkDeleteResult;
 
+/*
+ * IndexBulkDeleteCallback should return one of the following
+ */
+typedef enum IndexBulkDeleteCallbackResult
+{
+	IBDCR_KEEP,			/* index tuple should be preserved */
+	IBDCR_DELETE,		/* index tuple should be deleted */
+	IBDCR_CLEAR_WARM	/* index tuple should be cleared of WARM bit */
+} IndexBulkDeleteCallbackResult;
+
 /* Typedef for callback function to determine if a tuple is bulk-deletable */
-typedef bool (*IndexBulkDeleteCallback) (ItemPointer itemptr, void *state);
+typedef IndexBulkDeleteCallbackResult (*IndexBulkDeleteCallback) (
+										 ItemPointer itemptr,
+										 bool is_warm, void *state);
 
 /* struct definitions appear in relscan.h */
 typedef struct IndexScanDescData *IndexScanDesc;
@@ -135,7 +152,8 @@ extern bool index_insert(Relation indexRelation,
 			 ItemPointer heap_t_ctid,
 			 Relation heapRelation,
 			 IndexUniqueCheck checkUnique,
-			 struct IndexInfo *indexInfo);
+			 struct IndexInfo *indexInfo,
+			 bool warm_update);
 
 extern IndexScanDesc index_beginscan(Relation heapRelation,
 				Relation indexRelation,
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 5540e12..2217af9 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -72,6 +72,20 @@ typedef struct HeapUpdateFailureData
 	CommandId	cmax;
 } HeapUpdateFailureData;
 
+typedef int HeapCheckWarmChainStatus;
+
+#define HCWC_CLEAR_TUPLE		0x0001
+#define	HCWC_WARM_TUPLE			0x0002
+#define HCWC_WARM_UPDATED_TUPLE	0x0004
+
+#define HCWC_IS_MIXED(status) \
+	(((status) & (HCWC_CLEAR_TUPLE | HCWC_WARM_TUPLE)) == (HCWC_CLEAR_TUPLE | HCWC_WARM_TUPLE))
+#define HCWC_IS_ALL_WARM(status) \
+	(((status) & HCWC_CLEAR_TUPLE) == 0)
+#define HCWC_IS_ALL_CLEAR(status) \
+	(((status) & HCWC_WARM_TUPLE) == 0)
+#define HCWC_IS_WARM_UPDATED(status) \
+	(((status) & HCWC_WARM_UPDATED_TUPLE) != 0)
 
 /* ----------------
  *		function prototypes for heap access method
@@ -137,9 +151,10 @@ extern bool heap_fetch(Relation relation, Snapshot snapshot,
 		   Relation stats_relation);
 extern bool heap_hot_search_buffer(ItemPointer tid, Relation relation,
 					   Buffer buffer, Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call);
+					   bool *all_dead, bool first_call, bool *recheck);
 extern bool heap_hot_search(ItemPointer tid, Relation relation,
-				Snapshot snapshot, bool *all_dead);
+				Snapshot snapshot, bool *all_dead,
+				bool *recheck, Buffer *buffer, HeapTuple heapTuple);
 
 extern void heap_get_latest_tid(Relation relation, Snapshot snapshot,
 					ItemPointer tid);
@@ -161,7 +176,8 @@ extern void heap_abort_speculative(Relation relation, HeapTuple tuple);
 extern HTSU_Result heap_update(Relation relation, ItemPointer otid,
 			HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode);
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update);
 extern HTSU_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
 				CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
 				bool follow_update,
@@ -176,10 +192,16 @@ extern bool heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple);
 extern Oid	simple_heap_insert(Relation relation, HeapTuple tup);
 extern void simple_heap_delete(Relation relation, ItemPointer tid);
 extern void simple_heap_update(Relation relation, ItemPointer otid,
-				   HeapTuple tup);
+				   HeapTuple tup,
+				   Bitmapset **modified_attrs,
+				   bool *warm_update);
 
 extern void heap_sync(Relation relation);
 extern void heap_update_snapshot(HeapScanDesc scan, Snapshot snapshot);
+extern HeapCheckWarmChainStatus heap_check_warm_chain(Page dp,
+				   ItemPointer tid, bool stop_at_warm);
+extern int heap_clear_warm_chain(Page dp, ItemPointer tid,
+				   OffsetNumber *cleared_offnums);
 
 /* in heap/pruneheap.c */
 extern void heap_page_prune_opt(Relation relation, Buffer buffer);
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index e6019d5..66fd0ea 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -32,7 +32,7 @@
 #define XLOG_HEAP_INSERT		0x00
 #define XLOG_HEAP_DELETE		0x10
 #define XLOG_HEAP_UPDATE		0x20
-/* 0x030 is free, was XLOG_HEAP_MOVE */
+#define XLOG_HEAP_MULTI_INSERT	0x30
 #define XLOG_HEAP_HOT_UPDATE	0x40
 #define XLOG_HEAP_CONFIRM		0x50
 #define XLOG_HEAP_LOCK			0x60
@@ -47,18 +47,23 @@
 /*
  * We ran out of opcodes, so heapam.c now has a second RmgrId.  These opcodes
  * are associated with RM_HEAP2_ID, but are not logically different from
- * the ones above associated with RM_HEAP_ID.  XLOG_HEAP_OPMASK applies to
- * these, too.
+ * the ones above associated with RM_HEAP_ID.
+ *
+ * In PG 10, we moved XLOG_HEAP2_MULTI_INSERT to RM_HEAP_ID. That allows us to
+ * use the 0x80 bit in RM_HEAP2_ID, thus making room for another 8 possible
+ * opcodes in RM_HEAP2_ID.
  */
 #define XLOG_HEAP2_REWRITE		0x00
 #define XLOG_HEAP2_CLEAN		0x10
 #define XLOG_HEAP2_FREEZE_PAGE	0x20
 #define XLOG_HEAP2_CLEANUP_INFO 0x30
 #define XLOG_HEAP2_VISIBLE		0x40
-#define XLOG_HEAP2_MULTI_INSERT 0x50
+#define XLOG_HEAP2_WARMCLEAR	0x50
 #define XLOG_HEAP2_LOCK_UPDATED 0x60
 #define XLOG_HEAP2_NEW_CID		0x70
 
+#define XLOG_HEAP2_OPMASK		0x70
+
 /*
  * xl_heap_insert/xl_heap_multi_insert flag values, 8 bits are available.
  */
@@ -80,6 +85,7 @@
 #define XLH_UPDATE_CONTAINS_NEW_TUPLE			(1<<4)
 #define XLH_UPDATE_PREFIX_FROM_OLD				(1<<5)
 #define XLH_UPDATE_SUFFIX_FROM_OLD				(1<<6)
+#define XLH_UPDATE_WARM_UPDATE					(1<<7)
 
 /* convenience macro for checking whether any form of old tuple was logged */
 #define XLH_UPDATE_CONTAINS_OLD						\
@@ -225,6 +231,14 @@ typedef struct xl_heap_clean
 
 #define SizeOfHeapClean (offsetof(xl_heap_clean, ndead) + sizeof(uint16))
 
+typedef struct xl_heap_warmclear
+{
+	uint16		ncleared;
+	/* OFFSET NUMBERS are in the block reference 0 */
+} xl_heap_warmclear;
+
+#define SizeOfHeapWarmClear (offsetof(xl_heap_warmclear, ncleared) + sizeof(uint16))
+
 /*
  * Cleanup_info is required in some cases during a lazy VACUUM.
  * Used for reporting the results of HeapTupleHeaderAdvanceLatestRemovedXid()
@@ -388,6 +402,8 @@ extern XLogRecPtr log_heap_clean(Relation reln, Buffer buffer,
 			   OffsetNumber *nowdead, int ndead,
 			   OffsetNumber *nowunused, int nunused,
 			   TransactionId latestRemovedXid);
+extern XLogRecPtr log_heap_warmclear(Relation reln, Buffer buffer,
+			   OffsetNumber *cleared, int ncleared);
 extern XLogRecPtr log_heap_freeze(Relation reln, Buffer buffer,
 				TransactionId cutoff_xid, xl_heap_freeze_tuple *tuples,
 				int ntuples);
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 4d614b7..bcefba6 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -201,6 +201,21 @@ struct HeapTupleHeaderData
 										 * upgrade support */
 #define HEAP_MOVED (HEAP_MOVED_OFF | HEAP_MOVED_IN)
 
+/*
+ * A WARM chain usually consists of two parts, each of which is a HOT chain in
+ * itself, i.e. within each part all indexed columns have the same values; a
+ * WARM update separates the two parts. We need a mechanism to identify which
+ * part a tuple belongs to. We can't just check
+ * HeapTupleHeaderIsWarmUpdated(), because during a WARM update both the old
+ * and the new tuple are marked as WARM-updated.
+ *
+ * We need another infomask bit for this, so we reuse the bit that was earlier
+ * used by old-style VACUUM FULL. This is safe because the HEAP_WARM_TUPLE
+ * flag is always set along with HEAP_WARM_UPDATED. So if both HEAP_WARM_TUPLE
+ * and HEAP_WARM_UPDATED are set, we know the tuple belongs to the second part
+ * of the WARM chain.
+ */
+#define HEAP_WARM_TUPLE			0x4000
 #define HEAP_XACT_MASK			0xFFF0	/* visibility-related bits */
 
 /*
@@ -260,7 +275,11 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x0800 are available */
+#define HEAP_WARM_UPDATED		0x0800	/*
+										 * This or a prior version of this
+										 * tuple in the current HOT chain was
+										 * once WARM updated
+										 */
 #define HEAP_LATEST_TUPLE		0x1000	/*
 										 * This is the last tuple in chain and
 										 * ip_posid points to the root line
@@ -271,7 +290,7 @@ struct HeapTupleHeaderData
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF800	/* visibility-related bits */
 
 
 /*
@@ -396,7 +415,7 @@ struct HeapTupleHeaderData
 /* SetCmin is reasonably simple since we never need a combo CID */
 #define HeapTupleHeaderSetCmin(tup, cid) \
 do { \
-	Assert(!((tup)->t_infomask & HEAP_MOVED)); \
+	Assert(!HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_cid = (cid); \
 	(tup)->t_infomask &= ~HEAP_COMBOCID; \
 } while (0)
@@ -404,7 +423,7 @@ do { \
 /* SetCmax must be used after HeapTupleHeaderAdjustCmax; see combocid.c */
 #define HeapTupleHeaderSetCmax(tup, cid, iscombo) \
 do { \
-	Assert(!((tup)->t_infomask & HEAP_MOVED)); \
+	Assert(!HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_cid = (cid); \
 	if (iscombo) \
 		(tup)->t_infomask |= HEAP_COMBOCID; \
@@ -414,7 +433,7 @@ do { \
 
 #define HeapTupleHeaderGetXvac(tup) \
 ( \
-	((tup)->t_infomask & HEAP_MOVED) ? \
+	HeapTupleHeaderIsMoved(tup) ? \
 		(tup)->t_choice.t_heap.t_field3.t_xvac \
 	: \
 		InvalidTransactionId \
@@ -422,7 +441,7 @@ do { \
 
 #define HeapTupleHeaderSetXvac(tup, xid) \
 do { \
-	Assert((tup)->t_infomask & HEAP_MOVED); \
+	Assert(HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_xvac = (xid); \
 } while (0)
 
@@ -510,6 +529,21 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+#define HeapTupleHeaderSetWarmUpdated(tup) \
+do { \
+	(tup)->t_infomask2 |= HEAP_WARM_UPDATED; \
+} while (0)
+
+#define HeapTupleHeaderClearWarmUpdated(tup) \
+do { \
+	(tup)->t_infomask2 &= ~HEAP_WARM_UPDATED; \
+} while (0)
+
+#define HeapTupleHeaderIsWarmUpdated(tup) \
+( \
+  ((tup)->t_infomask2 & HEAP_WARM_UPDATED) != 0 \
+)
+
 /*
  * Mark this as the last tuple in the HOT chain. Before PG v10 we used to store
  * the TID of the tuple itself in t_ctid field to mark the end of the chain.
@@ -635,6 +669,58 @@ do { \
 )
 
 /*
+ * Macros to check if a tuple is a moved-off/in tuple left behind by VACUUM
+ * FULL from the pre-9.0 era. Such tuples must not have HEAP_WARM_TUPLE set.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderIsMovedOff(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED_OFF) \
+)
+
+#define HeapTupleHeaderIsMovedIn(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED_IN) \
+)
+
+#define HeapTupleHeaderIsMoved(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED) \
+)
+
+/*
+ * Check if tuple belongs to the second part of the WARM chain.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderIsWarm(tuple) \
+( \
+	HeapTupleHeaderIsWarmUpdated(tuple) && \
+	(((tuple)->t_infomask & HEAP_WARM_TUPLE) != 0) \
+)
+
+/*
+ * Mark tuple as a member of the second part of the chain. Must only be done on
+ * a tuple which is already marked as WARM-updated.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderSetWarm(tuple) \
+( \
+	AssertMacro(HeapTupleHeaderIsWarmUpdated(tuple)), \
+	(tuple)->t_infomask |= HEAP_WARM_TUPLE \
+)
+
+#define HeapTupleHeaderClearWarm(tuple) \
+( \
+	(tuple)->t_infomask &= ~HEAP_WARM_TUPLE \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
@@ -785,6 +871,24 @@ struct MinimalTupleData
 #define HeapTupleClearHeapOnly(tuple) \
 		HeapTupleHeaderClearHeapOnly((tuple)->t_data)
 
+#define HeapTupleIsWarmUpdated(tuple) \
+		HeapTupleHeaderIsWarmUpdated((tuple)->t_data)
+
+#define HeapTupleSetWarmUpdated(tuple) \
+		HeapTupleHeaderSetWarmUpdated((tuple)->t_data)
+
+#define HeapTupleClearWarmUpdated(tuple) \
+		HeapTupleHeaderClearWarmUpdated((tuple)->t_data)
+
+#define HeapTupleIsWarm(tuple) \
+		HeapTupleHeaderIsWarm((tuple)->t_data)
+
+#define HeapTupleSetWarm(tuple) \
+		HeapTupleHeaderSetWarm((tuple)->t_data)
+
+#define HeapTupleClearWarm(tuple) \
+		HeapTupleHeaderClearWarm((tuple)->t_data)
+
 #define HeapTupleGetOid(tuple) \
 		HeapTupleHeaderGetOid((tuple)->t_data)
 
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index f9304db..163180d 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -427,6 +427,12 @@ typedef BTScanOpaqueData *BTScanOpaque;
 #define SK_BT_NULLS_FIRST	(INDOPTION_NULLS_FIRST << SK_BT_INDOPTION_SHIFT)
 
 /*
+ * Flags overloaded on the t_tid.ip_posid field. They are managed by
+ * ItemPointerSetFlags and corresponding routines.
+ */
+#define BTREE_INDEX_WARM_POINTER	0x01
+
+/*
  * external entry points for btree, in nbtree.c
  */
 extern IndexBuildResult *btbuild(Relation heap, Relation index,
@@ -436,6 +442,10 @@ extern bool btinsert(Relation rel, Datum *values, bool *isnull,
 		 ItemPointer ht_ctid, Relation heapRel,
 		 IndexUniqueCheck checkUnique,
 		 struct IndexInfo *indexInfo);
+extern bool btwarminsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 struct IndexInfo *indexInfo);
 extern IndexScanDesc btbeginscan(Relation rel, int nkeys, int norderbys);
 extern Size btestimateparallelscan(void);
 extern void btinitparallelscan(void *target);
@@ -487,10 +497,12 @@ extern void _bt_pageinit(Page page, Size size);
 extern bool _bt_page_recyclable(Page page);
 extern void _bt_delitems_delete(Relation rel, Buffer buf,
 					OffsetNumber *itemnos, int nitems, Relation heapRel);
-extern void _bt_delitems_vacuum(Relation rel, Buffer buf,
-					OffsetNumber *itemnos, int nitems,
-					BlockNumber lastBlockVacuumed);
+extern void _bt_handleitems_vacuum(Relation rel, Buffer buf,
+					OffsetNumber *delitemnos, int ndelitems,
+					OffsetNumber *clearitemnos, int nclearitems);
 extern int	_bt_pagedel(Relation rel, Buffer buf);
+extern void	_bt_clear_items(Page page, OffsetNumber *clearitemnos,
+					uint16 nclearitems);
 
 /*
  * prototypes for functions in nbtsearch.c
@@ -537,6 +549,9 @@ extern bytea *btoptions(Datum reloptions, bool validate);
 extern bool btproperty(Oid index_oid, int attno,
 		   IndexAMProperty prop, const char *propname,
 		   bool *res, bool *isnull);
+extern bool btrecheck(Relation indexRel, struct IndexInfo *indexInfo,
+		IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * prototypes for functions in nbtvalidate.c
diff --git a/src/include/access/nbtxlog.h b/src/include/access/nbtxlog.h
index d6a3085..6a86628 100644
--- a/src/include/access/nbtxlog.h
+++ b/src/include/access/nbtxlog.h
@@ -142,7 +142,8 @@ typedef struct xl_btree_reuse_page
 /*
  * This is what we need to know about vacuum of individual leaf index tuples.
  * The WAL record can represent deletion of any number of index tuples on a
- * single index page when executed by VACUUM.
+ * single index page when executed by VACUUM. It also covers index tuples
+ * whose WARM bits are cleared by VACUUM.
  *
  * For MVCC scans, lastBlockVacuumed will be set to InvalidBlockNumber.
  * For a non-MVCC index scans there is an additional correctness requirement
@@ -165,11 +166,12 @@ typedef struct xl_btree_reuse_page
 typedef struct xl_btree_vacuum
 {
 	BlockNumber lastBlockVacuumed;
-
-	/* TARGET OFFSET NUMBERS FOLLOW */
+	uint16		ndelitems;
+	uint16		nclearitems;
+	/* ndelitems + nclearitems TARGET OFFSET NUMBERS FOLLOW */
 } xl_btree_vacuum;
 
-#define SizeOfBtreeVacuum	(offsetof(xl_btree_vacuum, lastBlockVacuumed) + sizeof(BlockNumber))
+#define SizeOfBtreeVacuum	(offsetof(xl_btree_vacuum, nclearitems) + sizeof(uint16))
 
 /*
  * This is what we need to know about marking an empty branch for deletion.
diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h
index 3fc726d..fa178d3 100644
--- a/src/include/access/relscan.h
+++ b/src/include/access/relscan.h
@@ -104,6 +104,9 @@ typedef struct IndexScanDescData
 	/* index access method's private state */
 	void	   *opaque;			/* access-method-specific info */
 
+	/* IndexInfo structure for this index */
+	struct IndexInfo  *indexInfo;
+
 	/*
 	 * In an index-only scan, a successful amgettuple call must fill either
 	 * xs_itup (and xs_itupdesc) or xs_hitup (and xs_hitupdesc) to provide the
@@ -119,7 +122,7 @@ typedef struct IndexScanDescData
 	HeapTupleData xs_ctup;		/* current heap tuple, if any */
 	Buffer		xs_cbuf;		/* current heap buffer in scan, if any */
 	/* NB: if xs_cbuf is not InvalidBuffer, we hold a pin on that buffer */
-	bool		xs_recheck;		/* T means scan keys must be rechecked */
+	bool		xs_recheck;		/* T means scan keys must be rechecked for each tuple */
 
 	/*
 	 * When fetching with an ordering operator, the values of the ORDER BY
diff --git a/src/include/catalog/index.h b/src/include/catalog/index.h
index 20bec90..f92ec29 100644
--- a/src/include/catalog/index.h
+++ b/src/include/catalog/index.h
@@ -89,6 +89,13 @@ extern void FormIndexDatum(IndexInfo *indexInfo,
 			   Datum *values,
 			   bool *isnull);
 
+extern void FormIndexPlainDatum(IndexInfo *indexInfo,
+			   Relation heapRel,
+			   HeapTuple heapTup,
+			   Datum *values,
+			   bool *isnull,
+			   bool *isavail);
+
 extern void index_build(Relation heapRelation,
 			Relation indexRelation,
 			IndexInfo *indexInfo,
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index 220ba7b..a6cb5c6 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -2785,6 +2785,8 @@ DATA(insert OID = 1933 (  pg_stat_get_tuples_deleted	PGNSP PGUID 12 1 0 0 0 f f
 DESCR("statistics: number of tuples deleted");
 DATA(insert OID = 1972 (  pg_stat_get_tuples_hot_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated");
+DATA(insert OID = 3373 (  pg_stat_get_tuples_warm_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated");
 DATA(insert OID = 2878 (  pg_stat_get_live_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_live_tuples _null_ _null_ _null_ ));
 DESCR("statistics: number of live tuples");
 DATA(insert OID = 2879 (  pg_stat_get_dead_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_dead_tuples _null_ _null_ _null_ ));
@@ -2937,6 +2939,8 @@ DATA(insert OID = 3042 (  pg_stat_get_xact_tuples_deleted		PGNSP PGUID 12 1 0 0
 DESCR("statistics: number of tuples deleted in current transaction");
 DATA(insert OID = 3043 (  pg_stat_get_xact_tuples_hot_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated in current transaction");
+DATA(insert OID = 3375 (  pg_stat_get_xact_tuples_warm_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated in current transaction");
 DATA(insert OID = 3044 (  pg_stat_get_xact_blocks_fetched		PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_fetched _null_ _null_ _null_ ));
 DESCR("statistics: number of blocks fetched in current transaction");
 DATA(insert OID = 3045 (  pg_stat_get_xact_blocks_hit			PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_hit _null_ _null_ _null_ ));
diff --git a/src/include/commands/progress.h b/src/include/commands/progress.h
index 9472ecc..b355b61 100644
--- a/src/include/commands/progress.h
+++ b/src/include/commands/progress.h
@@ -25,6 +25,7 @@
 #define PROGRESS_VACUUM_NUM_INDEX_VACUUMS		4
 #define PROGRESS_VACUUM_MAX_DEAD_TUPLES			5
 #define PROGRESS_VACUUM_NUM_DEAD_TUPLES			6
+#define PROGRESS_VACUUM_HEAP_BLKS_WARMCLEARED	7
 
 /* Phases of vacuum (as advertised via PROGRESS_VACUUM_PHASE) */
 #define PROGRESS_VACUUM_PHASE_SCAN_HEAP			1
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index d3849b9..7e1ec56 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -506,6 +506,7 @@ extern int	ExecCleanTargetListLength(List *targetlist);
 extern void ExecOpenIndices(ResultRelInfo *resultRelInfo, bool speculative);
 extern void ExecCloseIndices(ResultRelInfo *resultRelInfo);
 extern List *ExecInsertIndexTuples(TupleTableSlot *slot, ItemPointer tupleid,
+					  ItemPointer root_tid, Bitmapset *modified_attrs,
 					  EState *estate, bool noDupErr, bool *specConflict,
 					  List *arbiterIndexes);
 extern bool ExecCheckIndexConstraints(TupleTableSlot *slot, EState *estate,
diff --git a/src/include/executor/nodeIndexscan.h b/src/include/executor/nodeIndexscan.h
index ea3f3a5..ebeec74 100644
--- a/src/include/executor/nodeIndexscan.h
+++ b/src/include/executor/nodeIndexscan.h
@@ -41,5 +41,4 @@ extern void ExecIndexEvalRuntimeKeys(ExprContext *econtext,
 extern bool ExecIndexEvalArrayKeys(ExprContext *econtext,
 					   IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 extern bool ExecIndexAdvanceArrayKeys(IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
-
 #endif   /* NODEINDEXSCAN_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 11a6850..d2991db 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -132,6 +132,7 @@ typedef struct IndexInfo
 	NodeTag		type;
 	int			ii_NumIndexAttrs;
 	AttrNumber	ii_KeyAttrNumbers[INDEX_MAX_KEYS];
+	Bitmapset  *ii_indxattrs;	/* bitmap of all columns used in this index */
 	List	   *ii_Expressions; /* list of Expr */
 	List	   *ii_ExpressionsState;	/* list of ExprState */
 	List	   *ii_Predicate;	/* list of Expr */
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index e29397f..99bdc8b 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -105,6 +105,7 @@ typedef struct PgStat_TableCounts
 	PgStat_Counter t_tuples_updated;
 	PgStat_Counter t_tuples_deleted;
 	PgStat_Counter t_tuples_hot_updated;
+	PgStat_Counter t_tuples_warm_updated;
 	bool		t_truncated;
 
 	PgStat_Counter t_delta_live_tuples;
@@ -625,6 +626,7 @@ typedef struct PgStat_StatTabEntry
 	PgStat_Counter tuples_updated;
 	PgStat_Counter tuples_deleted;
 	PgStat_Counter tuples_hot_updated;
+	PgStat_Counter tuples_warm_updated;
 
 	PgStat_Counter n_live_tuples;
 	PgStat_Counter n_dead_tuples;
@@ -1285,7 +1287,7 @@ pgstat_report_wait_end(void)
 	(pgStatBlockWriteTime += (n))
 
 extern void pgstat_count_heap_insert(Relation rel, PgStat_Counter n);
-extern void pgstat_count_heap_update(Relation rel, bool hot);
+extern void pgstat_count_heap_update(Relation rel, bool hot, bool warm);
 extern void pgstat_count_heap_delete(Relation rel);
 extern void pgstat_count_truncate(Relation rel);
 extern void pgstat_update_heap_dead_tuples(Relation rel, int delta);
diff --git a/src/include/storage/bufpage.h b/src/include/storage/bufpage.h
index e956dc3..1852195 100644
--- a/src/include/storage/bufpage.h
+++ b/src/include/storage/bufpage.h
@@ -433,6 +433,8 @@ extern void PageIndexMultiDelete(Page page, OffsetNumber *itemnos, int nitems);
 extern void PageIndexTupleDeleteNoCompact(Page page, OffsetNumber offset);
 extern bool PageIndexTupleOverwrite(Page page, OffsetNumber offnum,
 						Item newtup, Size newsize);
+extern void PageIndexClearWarmTuples(Page page, OffsetNumber *clearitemnos,
+						uint16 nclearitems);
 extern char *PageSetChecksumCopy(Page page, BlockNumber blkno);
 extern void PageSetChecksumInplace(Page page, BlockNumber blkno);
 
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index ab875bb..4b173b5 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -142,9 +142,16 @@ typedef struct RelationData
 
 	/* data managed by RelationGetIndexAttrBitmap: */
 	Bitmapset  *rd_indexattr;	/* identifies columns used in indexes */
+	Bitmapset  *rd_exprindexattr; /* identifies columns used in expression or
+									 predicate indexes */
+	Bitmapset  *rd_indxnotreadyattr;	/* columns used by indexes not yet
+										   ready */
 	Bitmapset  *rd_keyattr;		/* cols that can be ref'd by foreign keys */
 	Bitmapset  *rd_pkattr;		/* cols included in primary key */
 	Bitmapset  *rd_idattr;		/* included in replica identity index */
+	List	   *rd_indexattrsList;	/* list of bitmaps, one per index,
+									   giving the attributes each index uses */
+	bool		rd_supportswarm;	/* true if the table can be WARM updated */
 
 	PublicationActions  *rd_pubactions;	/* publication actions */
 
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index 81af3ae..06c0183 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -51,11 +51,14 @@ typedef enum IndexAttrBitmapKind
 	INDEX_ATTR_BITMAP_ALL,
 	INDEX_ATTR_BITMAP_KEY,
 	INDEX_ATTR_BITMAP_PRIMARY_KEY,
-	INDEX_ATTR_BITMAP_IDENTITY_KEY
+	INDEX_ATTR_BITMAP_IDENTITY_KEY,
+	INDEX_ATTR_BITMAP_EXPR_PREDICATE,
+	INDEX_ATTR_BITMAP_NOTREADY
 } IndexAttrBitmapKind;
 
 extern Bitmapset *RelationGetIndexAttrBitmap(Relation relation,
 						   IndexAttrBitmapKind keyAttrs);
+extern List *RelationGetIndexAttrList(Relation relation);
 
 extern void RelationGetExclusionInfo(Relation indexRelation,
 						 Oid **operators,
diff --git a/src/test/regress/expected/alter_generic.out b/src/test/regress/expected/alter_generic.out
index ce581bb..85e4c70 100644
--- a/src/test/regress/expected/alter_generic.out
+++ b/src/test/regress/expected/alter_generic.out
@@ -161,15 +161,15 @@ ALTER SERVER alt_fserv1 RENAME TO alt_fserv3;   -- OK
 SELECT fdwname FROM pg_foreign_data_wrapper WHERE fdwname like 'alt_fdw%';
  fdwname  
 ----------
- alt_fdw2
  alt_fdw3
+ alt_fdw2
 (2 rows)
 
 SELECT srvname FROM pg_foreign_server WHERE srvname like 'alt_fserv%';
   srvname   
 ------------
- alt_fserv2
  alt_fserv3
+ alt_fserv2
 (2 rows)
 
 --
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index d706f42..f7dc4a4 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1756,6 +1756,7 @@ pg_stat_all_tables| SELECT c.oid AS relid,
     pg_stat_get_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_tuples_deleted(c.oid) AS n_tup_del,
     pg_stat_get_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_tuples_warm_updated(c.oid) AS n_tup_warm_upd,
     pg_stat_get_live_tuples(c.oid) AS n_live_tup,
     pg_stat_get_dead_tuples(c.oid) AS n_dead_tup,
     pg_stat_get_mod_since_analyze(c.oid) AS n_mod_since_analyze,
@@ -1903,6 +1904,7 @@ pg_stat_sys_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1946,6 +1948,7 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1983,7 +1986,8 @@ pg_stat_xact_all_tables| SELECT c.oid AS relid,
     pg_stat_get_xact_tuples_inserted(c.oid) AS n_tup_ins,
     pg_stat_get_xact_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_xact_tuples_deleted(c.oid) AS n_tup_del,
-    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd
+    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_xact_tuples_warm_updated(c.oid) AS n_tup_warm_upd
    FROM ((pg_class c
      LEFT JOIN pg_index i ON ((c.oid = i.indrelid)))
      LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
@@ -1999,7 +2003,8 @@ pg_stat_xact_sys_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname = ANY (ARRAY['pg_catalog'::name, 'information_schema'::name])) OR (pg_stat_xact_all_tables.schemaname ~ '^pg_toast'::text));
 pg_stat_xact_user_functions| SELECT p.oid AS funcid,
@@ -2021,7 +2026,8 @@ pg_stat_xact_user_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname <> ALL (ARRAY['pg_catalog'::name, 'information_schema'::name])) AND (pg_stat_xact_all_tables.schemaname !~ '^pg_toast'::text));
 pg_statio_all_indexes| SELECT c.oid AS relid,
diff --git a/src/test/regress/expected/warm.out b/src/test/regress/expected/warm.out
new file mode 100644
index 0000000..1ae2f40
--- /dev/null
+++ b/src/test/regress/expected/warm.out
@@ -0,0 +1,914 @@
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+-- This should be a HOT update as non-index key is updated, but the
+-- page won't have any free space, so probably a non-HOT update
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab1
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Check if index only scan works correctly
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab1
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx1 on updtst_tab1
+   Index Cond: (b = 140001)
+(2 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab1;
+------------------
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab2
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE a = 1;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab2
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE b = 701;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+VACUUM updtst_tab2;
+EXPLAIN (costs off) SELECT b FROM updtst_tab2 WHERE b = 701;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx2 on updtst_tab2
+   Index Cond: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab2 WHERE b = 701;
+  b  
+-----
+ 701
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab2;
+------------------
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 1;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 701;
+       QUERY PLAN        
+-------------------------
+ Seq Scan on updtst_tab3
+   Filter: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 701;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+  b   
+------
+ 1421
+(1 row)
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+SET enable_seqscan = false;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    98
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 2;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 702;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx3 on updtst_tab3
+   Index Cond: (b = 702)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 702;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+  b   
+------
+ 1422
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab3;
+------------------
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+explain select * from test_warm where lower(a) = 'test';
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on test_warm  (cost=4.18..12.65 rows=4 width=64)
+   Recheck Cond: (lower(a) = 'test'::text)
+   ->  Bitmap Index Scan on test_warmindx  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (lower(a) = 'test'::text)
+(4 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+select *, ctid from test_warm where a = 'test';
+ a | b | ctid 
+---+---+------
+(0 rows)
+
+select *, ctid from test_warm where a = 'TEST';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+                                   QUERY PLAN                                    
+---------------------------------------------------------------------------------
+ Index Scan using test_warmindx on test_warm  (cost=0.15..20.22 rows=4 width=64)
+   Index Cond: (lower(a) = 'test'::text)
+(2 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+DROP TABLE test_warm;
+--- Test with toast data types
+CREATE TABLE test_toast_warm (a int unique, b text, c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+-- insert a large enough value to cause index datum compression
+INSERT INTO test_toast_warm VALUES (1, repeat('a', 600), 100);
+INSERT INTO test_toast_warm VALUES (2, repeat('b', 2), 100);
+INSERT INTO test_toast_warm VALUES (3, repeat('c', 4), 100);
+INSERT INTO test_toast_warm VALUES (4, repeat('d', 63), 100);
+INSERT INTO test_toast_warm VALUES (5, repeat('e', 126), 100);
+INSERT INTO test_toast_warm VALUES (6, repeat('f', 127), 100);
+INSERT INTO test_toast_warm VALUES (7, repeat('g', 128), 100);
+INSERT INTO test_toast_warm VALUES (8, repeat('h', 3200), 100);
+UPDATE test_toast_warm SET b = repeat('q', 600) WHERE a = 1;
+UPDATE test_toast_warm SET b = repeat('r', 2) WHERE a = 2;
+UPDATE test_toast_warm SET b = repeat('s', 4) WHERE a = 3;
+UPDATE test_toast_warm SET b = repeat('t', 63) WHERE a = 4;
+UPDATE test_toast_warm SET b = repeat('u', 126) WHERE a = 5;
+UPDATE test_toast_warm SET b = repeat('v', 127) WHERE a = 6;
+UPDATE test_toast_warm SET b = repeat('w', 128) WHERE a = 7;
+UPDATE test_toast_warm SET b = repeat('x', 3200) WHERE a = 8;
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+                        QUERY PLAN                         
+-----------------------------------------------------------
+ Index Scan using test_toast_warm_a_key on test_toast_warm
+   Index Cond: (a = 1)
+(2 rows)
+
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+                                                                                                                                                                                                                                                                                                                      QUERY PLAN                                                                                                                                                                                                                                                                                                                      
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Index Scan using test_toast_warm_index on test_toast_warm
+   Index Cond: (b = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'::text)
+(2 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+                                                                                                                                                                                                                                                                                                                      QUERY PLAN                                                                                                                                                                                                                                                                                                                      
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Index Only Scan using test_toast_warm_index on test_toast_warm
+   Index Cond: (b = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'::text)
+(2 rows)
+
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+ a |                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+---+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 1 | qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+ a | b 
+---+---
+(0 rows)
+
+SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+ b 
+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+ a |                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+---+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 1 | qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+ a | b  
+---+----
+ 2 | rr
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+ a |  b   
+---+------
+ 3 | ssss
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+ a |                                b                                
+---+-----------------------------------------------------------------
+ 4 | ttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttt
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+ a |                                                               b                                                                
+---+--------------------------------------------------------------------------------------------------------------------------------
+ 5 | uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+ a |                                                                b                                                                
+---+---------------------------------------------------------------------------------------------------------------------------------
+ 6 | vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+ a |                                                                b                                                                 
+---+----------------------------------------------------------------------------------------------------------------------------------
+ 7 | wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+ a |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                b
+---+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 8 | xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+(1 row)
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                                    QUERY PLAN                                                                                                                                                                                                                                                                                                                    
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 'qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq'::text)
+(2 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                                    QUERY PLAN                                                                                                                                                                                                                                                                                                                    
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 'qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq'::text)
+(2 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+ a |                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+---+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 1 | qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+ a | b  
+---+----
+ 2 | rr
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+ a |  b   
+---+------
+ 3 | ssss
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+ a |                                b                                
+---+-----------------------------------------------------------------
+ 4 | ttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttt
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+ a |                                                               b                                                                
+---+--------------------------------------------------------------------------------------------------------------------------------
+ 5 | uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+ a |                                                                b                                                                
+---+---------------------------------------------------------------------------------------------------------------------------------
+ 6 | vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+ a |                                                                b                                                                 
+---+----------------------------------------------------------------------------------------------------------------------------------
+ 7 | wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+ a |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                b
+---+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 8 | xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+(1 row)
+
+DROP TABLE test_toast_warm;
+-- Test with numeric data type
+CREATE TABLE test_toast_warm (a int unique, b numeric(10,2), c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+INSERT INTO test_toast_warm VALUES (1, 10.2, 100);
+INSERT INTO test_toast_warm VALUES (2, 11.22, 100);
+INSERT INTO test_toast_warm VALUES (3, 12.222, 100);
+INSERT INTO test_toast_warm VALUES (4, 13.20, 100);
+INSERT INTO test_toast_warm VALUES (5, 14.201, 100);
+UPDATE test_toast_warm SET b = 100.2 WHERE a = 1;
+UPDATE test_toast_warm SET b = 101.22 WHERE a = 2;
+UPDATE test_toast_warm SET b = 102.222 WHERE a = 3;
+UPDATE test_toast_warm SET b = 103.20 WHERE a = 4;
+UPDATE test_toast_warm SET b = 104.201 WHERE a = 5;
+SELECT * FROM test_toast_warm;
+ a |   b    |  c  
+---+--------+-----
+ 1 | 100.20 | 100
+ 2 | 101.22 | 100
+ 3 | 102.22 | 100
+ 4 | 103.20 | 100
+ 5 | 104.20 | 100
+(5 rows)
+
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+                        QUERY PLAN                         
+-----------------------------------------------------------
+ Index Scan using test_toast_warm_a_key on test_toast_warm
+   Index Cond: (a = 1)
+(2 rows)
+
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+                    QUERY PLAN                    
+--------------------------------------------------
+ Bitmap Heap Scan on test_toast_warm
+   Recheck Cond: (b = 10.2)
+   ->  Bitmap Index Scan on test_toast_warm_index
+         Index Cond: (b = 10.2)
+(4 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+                    QUERY PLAN                    
+--------------------------------------------------
+ Bitmap Heap Scan on test_toast_warm
+   Recheck Cond: (b = 100.2)
+   ->  Bitmap Index Scan on test_toast_warm_index
+         Index Cond: (b = 100.2)
+(4 rows)
+
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+ a | b 
+---+---
+(0 rows)
+
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+ b 
+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+   b    
+--------
+ 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+ a |   b    
+---+--------
+ 2 | 101.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+ a |   b    
+---+--------
+ 3 | 102.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+ a |   b    
+---+--------
+ 4 | 103.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+ a |   b    
+---+--------
+ 5 | 104.20
+(1 row)
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+         QUERY PLAN          
+-----------------------------
+ Seq Scan on test_toast_warm
+   Filter: (a = 1)
+(2 rows)
+
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+         QUERY PLAN          
+-----------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 10.2)
+(2 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+         QUERY PLAN          
+-----------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 100.2)
+(2 rows)
+
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+ a | b 
+---+---
+(0 rows)
+
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+ b 
+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+   b    
+--------
+ 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+ a |   b    
+---+--------
+ 2 | 101.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+ a |   b    
+---+--------
+ 3 | 102.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+ a |   b    
+---+--------
+ 4 | 103.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+ a |   b    
+---+--------
+ 5 | 104.20
+(1 row)
+
+DROP TABLE test_toast_warm;
+-- Toasted heap attributes
+CREATE TABLE toasttest(descr text , cnt int DEFAULT 0, f1 text, f2 text);
+CREATE INDEX testindx1 ON toasttest(descr);
+CREATE INDEX testindx2 ON toasttest(f1);
+INSERT INTO toasttest(descr, f1, f2) VALUES('two-compressed', repeat('1234567890',1000), repeat('1234567890',1000));
+INSERT INTO toasttest(descr, f1, f2) VALUES('two-toasted', repeat('1234567890',20000), repeat('1234567890',50000));
+INSERT INTO toasttest(descr, f1, f2) VALUES('one-compressed,one-toasted', repeat('1234567890',1000), repeat('1234567890',50000));
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+ ctid  |                                                                                                substring                                                                                                 
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,1) | (two-compressed,0,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012
+ (0,2) | (two-toasted,0,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+ (0,3) | ("one-compressed,one-toasted",0,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+(3 rows)
+
+-- UPDATE f1 by doing string manipulation, but the updated value remains the
+-- same as the old value
+UPDATE toasttest SET cnt = cnt +1, f1 = trim(leading '-' from '-'||f1) RETURNING substring(toasttest::text, 1, 200);
+                                                                                                substring                                                                                                 
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (two-compressed,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012
+ (two-toasted,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+ ("one-compressed,one-toasted",1,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+(3 rows)
+
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+                  QUERY PLAN                   
+-----------------------------------------------
+ Seq Scan on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+ ctid  |                                                                                                substring                                                                                                 
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,4) | (two-compressed,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012
+ (0,5) | (two-toasted,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+ (0,6) | ("one-compressed,one-toasted",1,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+(3 rows)
+
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+                           QUERY PLAN                            
+-----------------------------------------------------------------
+ Index Scan using testindx2 on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+ ctid  |                                                                                                substring                                                                                                 
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,6) | ("one-compressed,one-toasted",1,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+ (0,4) | (two-compressed,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012
+ (0,5) | (two-toasted,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+(3 rows)
+
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+-- UPDATE f1 for real this time
+UPDATE toasttest SET cnt = cnt +1, f1 = '-'||f1 RETURNING substring(toasttest::text, 1, 200);
+                                                                                                substring                                                                                                 
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (two-toasted,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234
+ ("one-compressed,one-toasted",2,-12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567
+(3 rows)
+
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+                  QUERY PLAN                   
+-----------------------------------------------
+ Seq Scan on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+ ctid  |                                                                                                substring                                                                                                 
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,7) | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,8) | (two-toasted,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234
+ (0,9) | ("one-compressed,one-toasted",2,-12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567
+(3 rows)
+
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+                           QUERY PLAN                            
+-----------------------------------------------------------------
+ Index Scan using testindx2 on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+ ctid  |                                                                                                substring                                                                                                 
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,9) | ("one-compressed,one-toasted",2,-12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567
+ (0,7) | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,8) | (two-toasted,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234
+(3 rows)
+
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+-- UPDATE f1 from toasted to compressed
+UPDATE toasttest SET cnt = cnt +1, f1 = repeat('1234567890',1000) WHERE descr = 'two-toasted';
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+                  QUERY PLAN                   
+-----------------------------------------------
+ Seq Scan on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+  ctid  |                                                                                                substring                                                                                                 
+--------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,7)  | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,9)  | ("one-compressed,one-toasted",2,-12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567
+ (0,10) | (two-toasted,3,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+(3 rows)
+
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+                           QUERY PLAN                            
+-----------------------------------------------------------------
+ Index Scan using testindx2 on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+  ctid  |                                                                                                substring                                                                                                 
+--------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,9)  | ("one-compressed,one-toasted",2,-12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567
+ (0,7)  | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,10) | (two-toasted,3,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+(3 rows)
+
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+-- UPDATE f1 from compressed to toasted
+UPDATE toasttest SET cnt = cnt +1, f1 = repeat('1234567890',2000) WHERE descr = 'one-compressed,one-toasted';
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+                  QUERY PLAN                   
+-----------------------------------------------
+ Seq Scan on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+  ctid  |                                                                                                substring                                                                                                 
+--------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,7)  | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,10) | (two-toasted,3,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+ (0,11) | ("one-compressed,one-toasted",3,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+(3 rows)
+
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+                           QUERY PLAN                            
+-----------------------------------------------------------------
+ Index Scan using testindx2 on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+  ctid  |                                                                                                substring                                                                                                 
+--------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,7)  | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,10) | (two-toasted,3,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+ (0,11) | ("one-compressed,one-toasted",3,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+(3 rows)
+
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+DROP TABLE toasttest;
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index 9f95b01..cd99f88 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -42,6 +42,8 @@ test: create_type
 test: create_table
 test: create_function_2
 
+test: warm
+
 # ----------
 # Load huge amounts of data
 # We should split the data files into single files and then
diff --git a/src/test/regress/sql/warm.sql b/src/test/regress/sql/warm.sql
new file mode 100644
index 0000000..fb1f93e
--- /dev/null
+++ b/src/test/regress/sql/warm.sql
@@ -0,0 +1,344 @@
+
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+
+-- This should be a HOT update as non-index key is updated, but the
+-- page won't have any free space, so probably a non-HOT update
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Check if index only scan works correctly
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab1;
+
+------------------
+
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE a = 1;
+
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE b = 701;
+
+VACUUM updtst_tab2;
+EXPLAIN (costs off) SELECT b FROM updtst_tab2 WHERE b = 701;
+SELECT b FROM updtst_tab2 WHERE b = 701;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab2;
+------------------
+
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+SELECT * FROM updtst_tab3 WHERE a = 1;
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+SET enable_seqscan = false;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+SELECT * FROM updtst_tab3 WHERE a = 2;
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab3;
+------------------
+
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where a = 'test';
+select *, ctid from test_warm where a = 'TEST';
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+DROP TABLE test_warm;
+
+--- Test with toast data types
+
+CREATE TABLE test_toast_warm (a int unique, b text, c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+
+-- insert a large enough value to cause index datum compression
+INSERT INTO test_toast_warm VALUES (1, repeat('a', 600), 100);
+INSERT INTO test_toast_warm VALUES (2, repeat('b', 2), 100);
+INSERT INTO test_toast_warm VALUES (3, repeat('c', 4), 100);
+INSERT INTO test_toast_warm VALUES (4, repeat('d', 63), 100);
+INSERT INTO test_toast_warm VALUES (5, repeat('e', 126), 100);
+INSERT INTO test_toast_warm VALUES (6, repeat('f', 127), 100);
+INSERT INTO test_toast_warm VALUES (7, repeat('g', 128), 100);
+INSERT INTO test_toast_warm VALUES (8, repeat('h', 3200), 100);
+
+UPDATE test_toast_warm SET b = repeat('q', 600) WHERE a = 1;
+UPDATE test_toast_warm SET b = repeat('r', 2) WHERE a = 2;
+UPDATE test_toast_warm SET b = repeat('s', 4) WHERE a = 3;
+UPDATE test_toast_warm SET b = repeat('t', 63) WHERE a = 4;
+UPDATE test_toast_warm SET b = repeat('u', 126) WHERE a = 5;
+UPDATE test_toast_warm SET b = repeat('v', 127) WHERE a = 6;
+UPDATE test_toast_warm SET b = repeat('w', 128) WHERE a = 7;
+UPDATE test_toast_warm SET b = repeat('x', 3200) WHERE a = 8;
+
+
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+
+DROP TABLE test_toast_warm;
+
+-- Test with numeric data type
+
+CREATE TABLE test_toast_warm (a int unique, b numeric(10,2), c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+
+INSERT INTO test_toast_warm VALUES (1, 10.2, 100);
+INSERT INTO test_toast_warm VALUES (2, 11.22, 100);
+INSERT INTO test_toast_warm VALUES (3, 12.222, 100);
+INSERT INTO test_toast_warm VALUES (4, 13.20, 100);
+INSERT INTO test_toast_warm VALUES (5, 14.201, 100);
+
+UPDATE test_toast_warm SET b = 100.2 WHERE a = 1;
+UPDATE test_toast_warm SET b = 101.22 WHERE a = 2;
+UPDATE test_toast_warm SET b = 102.222 WHERE a = 3;
+UPDATE test_toast_warm SET b = 103.20 WHERE a = 4;
+UPDATE test_toast_warm SET b = 104.201 WHERE a = 5;
+
+SELECT * FROM test_toast_warm;
+
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+
+DROP TABLE test_toast_warm;
+
+-- Toasted heap attributes
+CREATE TABLE toasttest(descr text , cnt int DEFAULT 0, f1 text, f2 text);
+CREATE INDEX testindx1 ON toasttest(descr);
+CREATE INDEX testindx2 ON toasttest(f1);
+
+INSERT INTO toasttest(descr, f1, f2) VALUES('two-compressed', repeat('1234567890',1000), repeat('1234567890',1000));
+INSERT INTO toasttest(descr, f1, f2) VALUES('two-toasted', repeat('1234567890',20000), repeat('1234567890',50000));
+INSERT INTO toasttest(descr, f1, f2) VALUES('one-compressed,one-toasted', repeat('1234567890',1000), repeat('1234567890',50000));
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+
+-- UPDATE f1 by doing string manipulation, but the updated value remains the
+-- same as the old value
+UPDATE toasttest SET cnt = cnt +1, f1 = trim(leading '-' from '-'||f1) RETURNING substring(toasttest::text, 1, 200);
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+
+-- UPDATE f1 for real this time
+UPDATE toasttest SET cnt = cnt +1, f1 = '-'||f1 RETURNING substring(toasttest::text, 1, 200);
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+
+-- UPDATE f1 from toasted to compressed
+UPDATE toasttest SET cnt = cnt +1, f1 = repeat('1234567890',1000) WHERE descr = 'two-toasted';
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+
+-- UPDATE f1 from compressed to toasted
+UPDATE toasttest SET cnt = cnt +1, f1 = repeat('1234567890',2000) WHERE descr = 'one-compressed,one-toasted';
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+
+DROP TABLE toasttest;
-- 
2.9.3 (Apple Git-75)

Attachment: 0004-Provide-control-knobs-to-decide-when-to-do-heap-_v23.patch (application/octet-stream)
From 678c6784d81e241a465092620c44022e6ec721f6 Mon Sep 17 00:00:00 2001
From: Pavan Deolasee <pavan.deolasee@gmail.com>
Date: Wed, 29 Mar 2017 11:16:29 +0530
Subject: [PATCH 4/4] Provide control knobs to decide when to do heap and index
 WARM cleanup.

We provide two knobs to control WARM maintenance activity. A GUC,
autovacuum_warmcleanup_scale_factor, can be set to trigger WARM cleanup.
Similarly, a GUC, autovacuum_warmcleanup_index_scale_factor, can be set to
determine when to do index cleanup. Note that in the current design VACUUM
needs two index scans to remove a WARM index pointer. Hence we want to do that
work only when it makes sense (i.e. the index has a significant number of WARM
pointers).

In addition, the VACUUM command is enhanced to accept another option,
WARMCLEAN; if it is specified, only WARM cleanup is carried out.
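As a usage sketch, here is how these knobs would be exercised on a server built with this patch applied (the reloption names, the WARMCLEAN option, and the n_warm_chains column are introduced by this patch series and do not exist in stock PostgreSQL; the table name is just an example):

```sql
-- Per-table thresholds for triggering WARM cleanup (reloptions from this patch):
ALTER TABLE pgbench_accounts
    SET (autovacuum_warmcleanup_scale_factor = 0.1,
         autovacuum_warmcleanup_index_scale_factor = 0.2);

-- Manually request only the WARM cleanup pass:
VACUUM (WARMCLEAN) pgbench_accounts;

-- Monitor candidate WARM chains via the stats column added by this patch:
SELECT relname, n_warm_chains
  FROM pg_stat_all_tables
 WHERE relname = 'pgbench_accounts';
```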
---
 src/backend/access/common/reloptions.c |  22 +++
 src/backend/catalog/system_views.sql   |   1 +
 src/backend/commands/analyze.c         |  60 +++++--
 src/backend/commands/vacuum.c          |   2 +
 src/backend/commands/vacuumlazy.c      | 319 +++++++++++++++++++++++++--------
 src/backend/parser/gram.y              |  26 ++-
 src/backend/postmaster/autovacuum.c    |  58 +++++-
 src/backend/postmaster/pgstat.c        |  50 +++++-
 src/backend/utils/adt/pgstatfuncs.c    |  15 ++
 src/backend/utils/init/globals.c       |   3 +
 src/backend/utils/misc/guc.c           |  30 ++++
 src/include/catalog/pg_proc.h          |   2 +
 src/include/commands/vacuum.h          |   2 +
 src/include/foreign/fdwapi.h           |   3 +-
 src/include/miscadmin.h                |   1 +
 src/include/nodes/parsenodes.h         |   3 +-
 src/include/parser/kwlist.h            |   1 +
 src/include/pgstat.h                   |  11 +-
 src/include/postmaster/autovacuum.h    |   2 +
 src/include/utils/guc_tables.h         |   1 +
 src/include/utils/rel.h                |   2 +
 src/test/regress/expected/rules.out    |   3 +
 src/test/regress/expected/warm.out     |  58 ++++++
 src/test/regress/sql/warm.sql          |  46 +++++
 24 files changed, 611 insertions(+), 110 deletions(-)

diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index 72e1253..b856503 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -338,6 +338,24 @@ static relopt_real realRelOpts[] =
 	},
 	{
 		{
+			"autovacuum_warmcleanup_scale_factor",
+			"Number of WARM chains prior to WARM cleanup as a fraction of reltuples",
+			RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
+			ShareUpdateExclusiveLock
+		},
+		-1, 0.0, 100.0
+	},
+	{
+		{
+			"autovacuum_warmcleanup_index_scale_factor",
+			"Number of WARM pointers in an index prior to WARM cleanup as a fraction of total WARM chains",
+			RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
+			ShareUpdateExclusiveLock
+		},
+		-1, 0.0, 100.0
+	},
+	{
+		{
 			"autovacuum_analyze_scale_factor",
 			"Number of tuple inserts, updates or deletes prior to analyze as a fraction of reltuples",
 			RELOPT_KIND_HEAP,
@@ -1341,6 +1359,10 @@ default_reloptions(Datum reloptions, bool validate, relopt_kind kind)
 		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, vacuum_scale_factor)},
 		{"autovacuum_analyze_scale_factor", RELOPT_TYPE_REAL,
 		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, analyze_scale_factor)},
+		{"autovacuum_warmcleanup_scale_factor", RELOPT_TYPE_REAL,
+		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, warmcleanup_scale_factor)},
+		{"autovacuum_warmcleanup_index_scale_factor", RELOPT_TYPE_REAL,
+		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, warmcleanup_index_scale)},
 		{"user_catalog_table", RELOPT_TYPE_BOOL,
 		offsetof(StdRdOptions, user_catalog_table)},
 		{"parallel_workers", RELOPT_TYPE_INT,
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 66a39d0..2a4d782 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -533,6 +533,7 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_tuples_warm_updated(C.oid) AS n_tup_warm_upd,
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
+            pg_stat_get_warm_chains(C.oid) AS n_warm_chains,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
             pg_stat_get_last_vacuum_time(C.oid) as last_vacuum,
             pg_stat_get_last_autovacuum_time(C.oid) as last_autovacuum,
diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index 404acb2..6c4fc4e 100644
--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -93,7 +93,8 @@ static VacAttrStats *examine_attribute(Relation onerel, int attnum,
 				  Node *index_expr);
 static int acquire_sample_rows(Relation onerel, int elevel,
 					HeapTuple *rows, int targrows,
-					double *totalrows, double *totaldeadrows);
+					double *totalrows, double *totaldeadrows,
+					double *totalwarmchains);
 static int	compare_rows(const void *a, const void *b);
 static int acquire_inherited_sample_rows(Relation onerel, int elevel,
 							  HeapTuple *rows, int targrows,
@@ -320,7 +321,8 @@ do_analyze_rel(Relation onerel, int options, VacuumParams *params,
 	int			targrows,
 				numrows;
 	double		totalrows,
-				totaldeadrows;
+				totaldeadrows,
+				totalwarmchains;
 	HeapTuple  *rows;
 	PGRUsage	ru0;
 	TimestampTz starttime = 0;
@@ -501,7 +503,8 @@ do_analyze_rel(Relation onerel, int options, VacuumParams *params,
 	else
 		numrows = (*acquirefunc) (onerel, elevel,
 								  rows, targrows,
-								  &totalrows, &totaldeadrows);
+								  &totalrows, &totaldeadrows,
+								  &totalwarmchains);
 
 	/*
 	 * Compute the statistics.  Temporary results during the calculations for
@@ -631,7 +634,7 @@ do_analyze_rel(Relation onerel, int options, VacuumParams *params,
 	 */
 	if (!inh)
 		pgstat_report_analyze(onerel, totalrows, totaldeadrows,
-							  (va_cols == NIL));
+							  totalwarmchains, (va_cols == NIL));
 
 	/* If this isn't part of VACUUM ANALYZE, let index AMs do cleanup */
 	if (!(options & VACOPT_VACUUM))
@@ -991,12 +994,14 @@ examine_attribute(Relation onerel, int attnum, Node *index_expr)
 static int
 acquire_sample_rows(Relation onerel, int elevel,
 					HeapTuple *rows, int targrows,
-					double *totalrows, double *totaldeadrows)
+					double *totalrows, double *totaldeadrows,
+					double *totalwarmchains)
 {
 	int			numrows = 0;	/* # rows now in reservoir */
 	double		samplerows = 0; /* total # rows collected */
 	double		liverows = 0;	/* # live rows seen */
 	double		deadrows = 0;	/* # dead rows seen */
+	double		warmchains = 0;
 	double		rowstoskip = -1;	/* -1 means not set yet */
 	BlockNumber totalblocks;
 	TransactionId OldestXmin;
@@ -1023,9 +1028,14 @@ acquire_sample_rows(Relation onerel, int elevel,
 		Page		targpage;
 		OffsetNumber targoffset,
 					maxoffset;
+		bool		marked[MaxHeapTuplesPerPage];
+		OffsetNumber root_offsets[MaxHeapTuplesPerPage];
 
 		vacuum_delay_point();
 
+		/* Track which root line pointers are already counted. */
+		memset(marked, 0, sizeof (marked));
+
 		/*
 		 * We must maintain a pin on the target page's buffer to ensure that
 		 * the maxoffset value stays good (else concurrent VACUUM might delete
@@ -1041,6 +1051,9 @@ acquire_sample_rows(Relation onerel, int elevel,
 		targpage = BufferGetPage(targbuffer);
 		maxoffset = PageGetMaxOffsetNumber(targpage);
 
+		/* Get all root line pointers first */
+		heap_get_root_tuples(targpage, root_offsets);
+
 		/* Inner loop over all tuples on the selected page */
 		for (targoffset = FirstOffsetNumber; targoffset <= maxoffset; targoffset++)
 		{
@@ -1069,6 +1082,22 @@ acquire_sample_rows(Relation onerel, int elevel,
 			targtuple.t_data = (HeapTupleHeader) PageGetItem(targpage, itemid);
 			targtuple.t_len = ItemIdGetLength(itemid);
 
+			/*
+			 * If this is a WARM-updated tuple, check if we have already seen
+			 * the root line pointer. If not, count this as a WARM chain. This
+			 * ensures that we count every WARM chain just once, irrespective
+			 * of how many tuples exist in the chain.
+			 */
+			if (HeapTupleHeaderIsWarmUpdated(targtuple.t_data))
+			{
+				OffsetNumber root_offnum = root_offsets[targoffset];
+				if (!marked[root_offnum])
+				{
+					warmchains += 1;
+					marked[root_offnum] = true;
+				}
+			}
+
 			switch (HeapTupleSatisfiesVacuum(&targtuple,
 											 OldestXmin,
 											 targbuffer))
@@ -1200,18 +1229,24 @@ acquire_sample_rows(Relation onerel, int elevel,
 
 	/*
 	 * Estimate total numbers of rows in relation.  For live rows, use
-	 * vac_estimate_reltuples; for dead rows, we have no source of old
-	 * information, so we have to assume the density is the same in unseen
-	 * pages as in the pages we scanned.
+	 * vac_estimate_reltuples; for dead rows and WARM chains, we have no source
+	 * of old information, so we have to assume the density is the same in
+	 * unseen pages as in the pages we scanned.
 	 */
 	*totalrows = vac_estimate_reltuples(onerel, true,
 										totalblocks,
 										bs.m,
 										liverows);
 	if (bs.m > 0)
+	{
 		*totaldeadrows = floor((deadrows / bs.m) * totalblocks + 0.5);
+		*totalwarmchains = floor((warmchains / bs.m) * totalblocks + 0.5);
+	}
 	else
+	{
 		*totaldeadrows = 0.0;
+		*totalwarmchains = 0.0;
+	}
 
 	/*
 	 * Emit some interesting relation info
@@ -1219,11 +1254,13 @@ acquire_sample_rows(Relation onerel, int elevel,
 	ereport(elevel,
 			(errmsg("\"%s\": scanned %d of %u pages, "
 					"containing %.0f live rows and %.0f dead rows; "
-					"%d rows in sample, %.0f estimated total rows",
+					"%d rows in sample, %.0f estimated total rows; "
+					"%.0f warm chains",
 					RelationGetRelationName(onerel),
 					bs.m, totalblocks,
 					liverows, deadrows,
-					numrows, *totalrows)));
+					numrows, *totalrows,
+					*totalwarmchains)));
 
 	return numrows;
 }
@@ -1428,11 +1465,12 @@ acquire_inherited_sample_rows(Relation onerel, int elevel,
 				int			childrows;
 				double		trows,
 							tdrows;
+				double		twarmchains;
 
 				/* Fetch a random sample of the child's rows */
 				childrows = (*acquirefunc) (childrel, elevel,
 											rows + numrows, childtargrows,
-											&trows, &tdrows);
+											&trows, &tdrows, &twarmchains);
 
 				/* We may need to convert from child's rowtype to parent's */
 				if (childrows > 0 &&
diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index 9fbb0eb..52a7838 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -103,6 +103,7 @@ ExecVacuum(VacuumStmt *vacstmt, bool isTopLevel)
 		params.freeze_table_age = 0;
 		params.multixact_freeze_min_age = 0;
 		params.multixact_freeze_table_age = 0;
+		params.warmcleanup_index_scale = -1;
 	}
 	else
 	{
@@ -110,6 +111,7 @@ ExecVacuum(VacuumStmt *vacstmt, bool isTopLevel)
 		params.freeze_table_age = -1;
 		params.multixact_freeze_min_age = -1;
 		params.multixact_freeze_table_age = -1;
+		params.warmcleanup_index_scale = -1;
 	}
 
 	/* user-invoked vacuum is never "for wraparound" */
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index f52490f..d68b4fb 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -156,18 +156,23 @@ typedef struct LVRelStats
 	double		tuples_deleted;
 	BlockNumber nonempty_pages; /* actually, last nonempty page + 1 */
 
+	int			maxtuples;		/* maxtuples computed while allocating space */
+	Size		work_area_size;	/* working area size */
+	char		*work_area;		/* working area for storing dead tuples and
+								 * warm chains */
 	/* List of candidate WARM chains that can be converted into HOT chains */
-	/* NB: this list is ordered by TID of the root pointers */
+	/*
+	 * NB: this list grows from bottom to top and is ordered by TID of the root
+	 * pointers, with the lowest entry at the bottom
+	 */
 	int				num_warm_chains;	/* current # of entries */
-	int				max_warm_chains;	/* # slots allocated in array */
 	LVWarmChain 	*warm_chains;		/* array of LVWarmChain */
 	double			num_non_convertible_warm_chains;
-
 	/* List of TIDs of tuples we intend to delete */
 	/* NB: this list is ordered by TID address */
 	int			num_dead_tuples;	/* current # of entries */
-	int			max_dead_tuples;	/* # slots allocated in array */
 	ItemPointer dead_tuples;	/* array of ItemPointerData */
+
 	int			num_index_scans;
 	TransactionId latestRemovedXid;
 	bool		lock_waiter_detected;
@@ -187,11 +192,12 @@ static BufferAccessStrategy vac_strategy;
 /* non-export function prototypes */
 static void lazy_scan_heap(Relation onerel, int options,
 			   LVRelStats *vacrelstats, Relation *Irel, int nindexes,
-			   bool aggressive);
+			   bool aggressive, double warmcleanup_index_scale);
 static void lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats);
 static bool lazy_check_needs_freeze(Buffer buf, bool *hastup);
 static void lazy_vacuum_index(Relation indrel,
 				  bool clear_warm,
+				  double warmcleanup_index_scale,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats);
 static void lazy_cleanup_index(Relation indrel,
@@ -207,7 +213,8 @@ static bool should_attempt_truncation(LVRelStats *vacrelstats);
 static void lazy_truncate_heap(Relation onerel, LVRelStats *vacrelstats);
 static BlockNumber count_nondeletable_pages(Relation onerel,
 						 LVRelStats *vacrelstats);
-static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks);
+static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks,
+					   bool dowarmcleanup);
 static void lazy_record_dead_tuple(LVRelStats *vacrelstats,
 					   ItemPointer itemptr);
 static void lazy_record_warm_chain(LVRelStats *vacrelstats,
@@ -283,6 +290,9 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 						  &OldestXmin, &FreezeLimit, &xidFullScanLimit,
 						  &MultiXactCutoff, &mxactFullScanLimit);
 
+	/* Use default if the caller hasn't specified any value */
+	if (params->warmcleanup_index_scale == -1)
+		params->warmcleanup_index_scale = VacuumWarmCleanupIndexScale;
 	/*
 	 * We request an aggressive scan if the table's frozen Xid is now older
 	 * than or equal to the requested Xid full-table scan limit; or if the
@@ -309,7 +319,8 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 	vacrelstats->hasindex = (nindexes > 0);
 
 	/* Do the vacuuming */
-	lazy_scan_heap(onerel, options, vacrelstats, Irel, nindexes, aggressive);
+	lazy_scan_heap(onerel, options, vacrelstats, Irel, nindexes, aggressive,
+			params->warmcleanup_index_scale);
 
 	/* Done with indexes */
 	vac_close_indexes(nindexes, Irel, NoLock);
@@ -396,7 +407,8 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 	pgstat_report_vacuum(RelationGetRelid(onerel),
 						 onerel->rd_rel->relisshared,
 						 new_live_tuples,
-						 vacrelstats->new_dead_tuples);
+						 vacrelstats->new_dead_tuples,
+						 vacrelstats->num_non_convertible_warm_chains);
 	pgstat_progress_end_command();
 
 	/* and log the action if appropriate */
@@ -507,10 +519,19 @@ vacuum_log_cleanup_info(Relation rel, LVRelStats *vacrelstats)
  *		If there are no indexes then we can reclaim line pointers on the fly;
  *		dead line pointers need only be retained until all index pointers that
  *		reference them have been killed.
+ *
+ *		warmcleanup_index_scale specifies the number of WARM pointers in an
+ *		index as a fraction of total candidate WARM chains. If we find fewer
+ *		WARM pointers in an index than the specified fraction, then we don't
+ *		invoke cleanup for that index. If WARM cleanup is skipped for any one
+ *		index, the WARM chain can't be cleared in the heap and no further WARM
+ *		updates are possible to such chains. Such chains are also not
+ *		considered for WARM cleanup in other indexes.
  */
 static void
 lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
-			   Relation *Irel, int nindexes, bool aggressive)
+			   Relation *Irel, int nindexes, bool aggressive,
+			   double warmcleanup_index_scale)
 {
 	BlockNumber nblocks,
 				blkno;
@@ -536,6 +557,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		PROGRESS_VACUUM_MAX_DEAD_TUPLES
 	};
 	int64		initprog_val[3];
+	bool		dowarmcleanup = ((options & VACOPT_WARM_CLEANUP) != 0);
 
 	pg_rusage_init(&ru0);
 
@@ -558,13 +580,13 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 	vacrelstats->nonempty_pages = 0;
 	vacrelstats->latestRemovedXid = InvalidTransactionId;
 
-	lazy_space_alloc(vacrelstats, nblocks);
+	lazy_space_alloc(vacrelstats, nblocks, dowarmcleanup);
 	frozen = palloc(sizeof(xl_heap_freeze_tuple) * MaxHeapTuplesPerPage);
 
 	/* Report that we're scanning the heap, advertising total # of blocks */
 	initprog_val[0] = PROGRESS_VACUUM_PHASE_SCAN_HEAP;
 	initprog_val[1] = nblocks;
-	initprog_val[2] = vacrelstats->max_dead_tuples;
+	initprog_val[2] = vacrelstats->maxtuples;
 	pgstat_progress_update_multi_param(3, initprog_index, initprog_val);
 
 	/*
@@ -656,6 +678,11 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		bool		all_frozen = true;	/* provided all_visible is also true */
 		bool		has_dead_tuples;
 		TransactionId visibility_cutoff_xid = InvalidTransactionId;
+		char		*end_deads;
+		char		*end_warms;
+		Size		free_work_area;
+		int			avail_dead_tuples;
+		int			avail_warm_chains;
 
 		/* see note above about forcing scanning of last page */
 #define FORCE_CHECK_PAGE() \
@@ -740,13 +767,38 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		vacuum_delay_point();
 
 		/*
+		 * The dead tuples are stored starting from the beginning of the work
+		 * area, growing towards the end. The candidate WARM chains are stored
+		 * starting from the end of the work area, growing towards the
+		 * beginning. Once the gap between these two segments is too small to
+		 * accommodate potentially all tuples in the current page, we stop and
+		 * do one round of index cleanup.
+		 */
+		end_deads = (char *)(vacrelstats->dead_tuples + vacrelstats->num_dead_tuples);
+
+		/*
+		 * If we are not doing WARM cleanup, then the entire work area is used
+		 * by the dead tuples.
+		 */
+		if (vacrelstats->warm_chains)
+		{
+			end_warms = (char *)(vacrelstats->warm_chains - vacrelstats->num_warm_chains);
+			free_work_area = end_warms - end_deads;
+			avail_warm_chains = (free_work_area / sizeof (LVWarmChain));
+		}
+		else
+		{
+			avail_warm_chains = 0;	/* unused when not doing WARM cleanup */
+			free_work_area = vacrelstats->work_area + vacrelstats->work_area_size - end_deads;
+		}
+		avail_dead_tuples = (free_work_area / sizeof (ItemPointerData));
+
+		/*
 		 * If we are close to overrunning the available space for dead-tuple
 		 * TIDs, pause and do a cycle of vacuuming before we tackle this page.
 		 */
-		if (((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
-			vacrelstats->num_dead_tuples > 0) ||
-			((vacrelstats->max_warm_chains - vacrelstats->num_warm_chains) < MaxHeapTuplesPerPage &&
-			 vacrelstats->num_warm_chains > 0))
+		if ((avail_dead_tuples < MaxHeapTuplesPerPage && vacrelstats->num_dead_tuples > 0) ||
+			(avail_warm_chains < MaxHeapTuplesPerPage && vacrelstats->num_warm_chains > 0))
 		{
 			const int	hvp_index[] = {
 				PROGRESS_VACUUM_PHASE,
@@ -776,7 +828,8 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			/* Remove index entries */
 			for (i = 0; i < nindexes; i++)
 				lazy_vacuum_index(Irel[i],
-								  (vacrelstats->num_warm_chains > 0),
+								  dowarmcleanup && (vacrelstats->num_warm_chains > 0),
+								  warmcleanup_index_scale,
 								  &indstats[i],
 								  vacrelstats);
 
@@ -800,8 +853,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			 */
 			vacrelstats->num_dead_tuples = 0;
 			vacrelstats->num_warm_chains = 0;
-			memset(vacrelstats->warm_chains, 0,
-					vacrelstats->max_warm_chains * sizeof (LVWarmChain));
+			memset(vacrelstats->work_area, 0, vacrelstats->work_area_size);
 			vacrelstats->num_index_scans++;
 
 			/* Report that we are once again scanning the heap */
@@ -1408,7 +1460,8 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		/* Remove index entries */
 		for (i = 0; i < nindexes; i++)
 			lazy_vacuum_index(Irel[i],
-							  (vacrelstats->num_warm_chains > 0),
+							  dowarmcleanup && (vacrelstats->num_warm_chains > 0),
+							  warmcleanup_index_scale,
 							  &indstats[i],
 							  vacrelstats);
 
@@ -1513,9 +1566,12 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 		vacuum_delay_point();
 
 		tblk = chainblk = InvalidBlockNumber;
-		if (chainindex < vacrelstats->num_warm_chains)
-			chainblk =
-				ItemPointerGetBlockNumber(&(vacrelstats->warm_chains[chainindex].chain_tid));
+		if (vacrelstats->warm_chains &&
+			chainindex < vacrelstats->num_warm_chains)
+		{
+			LVWarmChain *chain = vacrelstats->warm_chains - (chainindex + 1);
+			chainblk = ItemPointerGetBlockNumber(&chain->chain_tid);
+		}
 
 		if (tupindex < vacrelstats->num_dead_tuples)
 			tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
@@ -1613,7 +1669,8 @@ lazy_warmclear_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 		BlockNumber tblk;
 		LVWarmChain	*chain;
 
-		chain = &vacrelstats->warm_chains[chainindex];
+		/* The warm chains are indexed from bottom */
+		chain = vacrelstats->warm_chains - (chainindex + 1);
 
 		tblk = ItemPointerGetBlockNumber(&chain->chain_tid);
 		if (tblk != blkno)
@@ -1847,9 +1904,11 @@ static void
 lazy_reset_warm_pointer_count(LVRelStats *vacrelstats)
 {
 	int i;
-	for (i = 0; i < vacrelstats->num_warm_chains; i++)
+
+	/* Start from the bottom and move upwards */
+	for (i = 1; i <= vacrelstats->num_warm_chains; i++)
 	{
-		LVWarmChain *chain = &vacrelstats->warm_chains[i];
+		LVWarmChain *chain = (vacrelstats->warm_chains - i);
 		chain->num_clear_pointers = chain->num_warm_pointers = 0;
 	}
 }
@@ -1863,6 +1922,7 @@ lazy_reset_warm_pointer_count(LVRelStats *vacrelstats)
 static void
 lazy_vacuum_index(Relation indrel,
 				  bool clear_warm,
+				  double warmcleanup_index_scale,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats)
 {
@@ -1927,25 +1987,57 @@ lazy_vacuum_index(Relation indrel,
 						(*stats)->warm_pointers_removed,
 						(*stats)->clear_pointers_removed)));
 
-		(*stats)->num_warm_pointers = 0;
-		(*stats)->num_clear_pointers = 0;
-		(*stats)->warm_pointers_removed = 0;
-		(*stats)->clear_pointers_removed = 0;
-		(*stats)->pointers_cleared = 0;
+		/*
+		 * If the number of WARM pointers found in the index are more than the
+		 * configured fraction of total candidate WARM chains, then do the
+		 * second index scan to clean up WARM chains.
+		 *
+		 * Otherwise we must set these WARM chains as non-convertible chains.
+		 */
+		if ((*stats)->num_warm_pointers >
+				((double)vacrelstats->num_warm_chains * warmcleanup_index_scale))
+		{
+			(*stats)->num_warm_pointers = 0;
+			(*stats)->num_clear_pointers = 0;
+			(*stats)->warm_pointers_removed = 0;
+			(*stats)->clear_pointers_removed = 0;
+			(*stats)->pointers_cleared = 0;
+
+			*stats = index_bulk_delete(&ivinfo, *stats,
+					lazy_indexvac_phase2, (void *) vacrelstats);
+			ereport(elevel,
+					(errmsg("scanned index \"%s\" to convert WARM pointers, found "
+							"%0.f WARM pointers, %0.f CLEAR pointers, removed "
+							"%0.f WARM pointers, removed %0.f CLEAR pointers, "
+							"cleared %0.f WARM pointers",
+							RelationGetRelationName(indrel),
+							(*stats)->num_warm_pointers,
+							(*stats)->num_clear_pointers,
+							(*stats)->warm_pointers_removed,
+							(*stats)->clear_pointers_removed,
+							(*stats)->pointers_cleared)));
+		}
+		else
+		{
+			int ii;
 
-		*stats = index_bulk_delete(&ivinfo, *stats,
-				lazy_indexvac_phase2, (void *) vacrelstats);
-		ereport(elevel,
-				(errmsg("scanned index \"%s\" to convert WARM pointers, found "
-						"%0.f WARM pointers, %0.f CLEAR pointers, removed "
-						"%0.f WARM pointers, removed %0.f CLEAR pointers, "
-						"cleared %0.f WARM pointers",
-						RelationGetRelationName(indrel),
-						(*stats)->num_warm_pointers,
-						(*stats)->num_clear_pointers,
-						(*stats)->warm_pointers_removed,
-						(*stats)->clear_pointers_removed,
-						(*stats)->pointers_cleared)));
+			/*
+			 * All chains skipped by this index are marked non-convertible.
+			 *
+			 * Start from bottom and move upwards.
+			 */
+			for (ii = 1; ii <= vacrelstats->num_warm_chains; ii++)
+			{
+				LVWarmChain *chain = vacrelstats->warm_chains - ii;
+				if (chain->num_warm_pointers > 0 ||
+					chain->num_clear_pointers > 1)
+				{
+					chain->keep_warm_chain = 1;
+					vacrelstats->num_non_convertible_warm_chains++;
+				}
+			}
+
+		}
 	}
 	else
 	{
@@ -2323,7 +2415,8 @@ count_nondeletable_pages(Relation onerel, LVRelStats *vacrelstats)
  * See the comments at the head of this file for rationale.
  */
 static void
-lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
+lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks,
+				 bool dowarmcleanup)
 {
 	long		maxtuples;
 	int			vac_work_mem = IsAutoVacuumWorkerProcess() &&
@@ -2332,11 +2425,16 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 
 	if (vacrelstats->hasindex)
 	{
+		/*
+		 * If we're not doing WARM cleanup then the entire memory is available
+		 * for tracking dead tuples. Otherwise it gets split between tracking
+		 * dead tuples and tracking WARM chains.
+		 */
 		maxtuples = (vac_work_mem * 1024L) / (sizeof(ItemPointerData) +
-				sizeof(LVWarmChain));
+				(dowarmcleanup ? sizeof(LVWarmChain) : 0));
 		maxtuples = Min(maxtuples, INT_MAX);
 		maxtuples = Min(maxtuples, MaxAllocSize / (sizeof(ItemPointerData) +
-					sizeof(LVWarmChain)));
+				(dowarmcleanup ? sizeof(LVWarmChain) : 0)));
 
 		/* curious coding here to ensure the multiplication can't overflow */
 		if ((BlockNumber) (maxtuples / LAZY_ALLOC_TUPLES) > relblocks)
@@ -2350,21 +2448,29 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 		maxtuples = MaxHeapTuplesPerPage;
 	}
 
-	vacrelstats->num_dead_tuples = 0;
-	vacrelstats->max_dead_tuples = (int) maxtuples;
-	vacrelstats->dead_tuples = (ItemPointer)
-		palloc(maxtuples * sizeof(ItemPointerData));
-
-	/*
-	 * XXX Cheat for now and allocate the same size array for tracking warm
-	 * chains. maxtuples must have been already adjusted above to ensure we
-	 * don't cross vac_work_mem.
+	/* Allocate the work area of the desired size and set up dead_tuples and
+	 * warm_chains to the start and the end of the area respectively. They grow
+	 * in opposite directions as dead tuples and warm chains are added. Note
+	 * that if we are not doing WARM cleanup then the entire area will only be
+	 * used for tracking dead tuples.
 	 */
-	vacrelstats->num_warm_chains = 0;
-	vacrelstats->max_warm_chains = (int) maxtuples;
-	vacrelstats->warm_chains = (LVWarmChain *)
-		palloc0(maxtuples * sizeof(LVWarmChain));
+	vacrelstats->work_area_size = maxtuples * (sizeof(ItemPointerData) +
+				(dowarmcleanup ? sizeof(LVWarmChain) : 0));
+	vacrelstats->work_area = (char *) palloc0(vacrelstats->work_area_size);
+	vacrelstats->num_dead_tuples = 0;
+	vacrelstats->dead_tuples = (ItemPointer)vacrelstats->work_area;
+	vacrelstats->maxtuples = maxtuples;
 
+	if (dowarmcleanup)
+	{
+		vacrelstats->num_warm_chains = 0;
+		vacrelstats->warm_chains = (LVWarmChain *)
+			(vacrelstats->work_area + vacrelstats->work_area_size);
+	}
+	else
+	{
+		vacrelstats->warm_chains = NULL;
+	}
 }
 
 /*
@@ -2374,17 +2480,38 @@ static void
 lazy_record_clear_chain(LVRelStats *vacrelstats,
 					   ItemPointer itemptr)
 {
+	char *end_deads, *end_warms;
+	Size free_work_area;
+
+	if (vacrelstats->warm_chains == NULL)
+	{
+		vacrelstats->num_non_convertible_warm_chains++;
+		return;
+	}
+
+	end_deads = (char *) (vacrelstats->dead_tuples +
+					vacrelstats->num_dead_tuples);
+	end_warms = (char *) (vacrelstats->warm_chains -
+					vacrelstats->num_warm_chains);
+	free_work_area = (end_warms - end_deads);
+
+	Assert(end_warms >= end_deads);
 	/*
 	 * The array shouldn't overflow under normal behavior, but perhaps it
 	 * could if we are given a really small maintenance_work_mem. In that
 	 * case, just forget the last few tuples (we'll get 'em next time).
 	 */
-	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	if (free_work_area >= sizeof(LVWarmChain))
 	{
-		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
-		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 0;
+		LVWarmChain *chain;
+
 		vacrelstats->num_warm_chains++;
+		chain = vacrelstats->warm_chains - vacrelstats->num_warm_chains;
+		chain->chain_tid = *itemptr;
+		chain->is_postwarm_chain = 0;
 	}
+	else
+		vacrelstats->num_non_convertible_warm_chains++;
 }
 
 /*
@@ -2394,17 +2521,39 @@ static void
 lazy_record_warm_chain(LVRelStats *vacrelstats,
 					   ItemPointer itemptr)
 {
+	char *end_deads, *end_warms;
+	Size free_work_area;
+
+	if (vacrelstats->warm_chains == NULL)
+	{
+		vacrelstats->num_non_convertible_warm_chains++;
+		return;
+	}
+
+	end_deads = (char *) (vacrelstats->dead_tuples +
+					vacrelstats->num_dead_tuples);
+	end_warms = (char *) (vacrelstats->warm_chains -
+					vacrelstats->num_warm_chains);
+	free_work_area = (end_warms - end_deads);
+
+	Assert(end_warms >= end_deads);
+
 	/*
 	 * The array shouldn't overflow under normal behavior, but perhaps it
 	 * could if we are given a really small maintenance_work_mem. In that
 	 * case, just forget the last few tuples (we'll get 'em next time).
 	 */
-	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	if (free_work_area >= sizeof(LVWarmChain))
 	{
-		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
-		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 1;
+		LVWarmChain *chain;
+
 		vacrelstats->num_warm_chains++;
+		chain = vacrelstats->warm_chains - vacrelstats->num_warm_chains;
+		chain->chain_tid = *itemptr;
+		chain->is_postwarm_chain = 1;
 	}
+	else
+		vacrelstats->num_non_convertible_warm_chains++;
 }
 
 /*
@@ -2414,12 +2563,20 @@ static void
 lazy_record_dead_tuple(LVRelStats *vacrelstats,
 					   ItemPointer itemptr)
 {
+	char *end_deads = (char *) (vacrelstats->dead_tuples +
+			vacrelstats->num_dead_tuples);
+	char *end_warms = vacrelstats->work_area + vacrelstats->work_area_size -
+			vacrelstats->num_warm_chains * sizeof(LVWarmChain);
+	Size freespace = (end_warms - end_deads);
+
+	Assert(end_warms >= end_deads);
+
 	/*
 	 * The array shouldn't overflow under normal behavior, but perhaps it
 	 * could if we are given a really small maintenance_work_mem. In that
 	 * case, just forget the last few tuples (we'll get 'em next time).
 	 */
-	if (vacrelstats->num_dead_tuples < vacrelstats->max_dead_tuples)
+	if (freespace >= sizeof(ItemPointerData))
 	{
 		vacrelstats->dead_tuples[vacrelstats->num_dead_tuples] = *itemptr;
 		vacrelstats->num_dead_tuples++;
@@ -2472,10 +2629,10 @@ lazy_indexvac_phase1(ItemPointer itemptr, bool is_warm, void *state)
 		return IBDCR_DELETE;
 
 	chain = (LVWarmChain *) bsearch((void *) itemptr,
-								(void *) vacrelstats->warm_chains,
-								vacrelstats->num_warm_chains,
-								sizeof(LVWarmChain),
-								vac_cmp_warm_chain);
+				(void *) (vacrelstats->warm_chains - vacrelstats->num_warm_chains),
+				vacrelstats->num_warm_chains,
+				sizeof(LVWarmChain),
+				vac_cmp_warm_chain);
 	if (chain != NULL)
 	{
 		if (is_warm)
@@ -2495,13 +2652,13 @@ static IndexBulkDeleteCallbackResult
 lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state)
 {
 	LVRelStats		*vacrelstats = (LVRelStats *) state;
-	LVWarmChain	*chain;
+	LVWarmChain		*chain;
 
 	chain = (LVWarmChain *) bsearch((void *) itemptr,
-								(void *) vacrelstats->warm_chains,
-								vacrelstats->num_warm_chains,
-								sizeof(LVWarmChain),
-								vac_cmp_warm_chain);
+				(void *) (vacrelstats->warm_chains - vacrelstats->num_warm_chains),
+				vacrelstats->num_warm_chains,
+				sizeof(LVWarmChain),
+				vac_cmp_warm_chain);
 
 	if (chain != NULL && (chain->keep_warm_chain != 1))
 	{
@@ -2600,6 +2757,7 @@ lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state)
 		 * index pointers.
 		 */
 		chain->keep_warm_chain = 1;
+		vacrelstats->num_non_convertible_warm_chains++;
 		return IBDCR_KEEP;
 	}
 	return IBDCR_KEEP;
@@ -2608,6 +2766,9 @@ lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state)
 /*
  * Comparator routines for use with qsort() and bsearch(). Similar to
  * vac_cmp_itemptr, but right hand argument is LVWarmChain struct pointer.
+ *
+ * The warm_chains array is kept sorted in descending TID order (it grows
+ * downward from the end of the work area), hence the return values are
+ * flipped.
  */
 static int
 vac_cmp_warm_chain(const void *left, const void *right)
@@ -2621,17 +2782,17 @@ vac_cmp_warm_chain(const void *left, const void *right)
 	rblk = ItemPointerGetBlockNumber(&((LVWarmChain *) right)->chain_tid);
 
 	if (lblk < rblk)
-		return -1;
-	if (lblk > rblk)
 		return 1;
+	if (lblk > rblk)
+		return -1;
 
 	loff = ItemPointerGetOffsetNumber((ItemPointer) left);
 	roff = ItemPointerGetOffsetNumber(&((LVWarmChain *) right)->chain_tid);
 
 	if (loff < roff)
-		return -1;
-	if (loff > roff)
 		return 1;
+	if (loff > roff)
+		return -1;
 
 	return 0;
 }
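To make the memory layout above easier to review: the patch carves a single allocation into a dead-tuple array growing up from the front and a WARM-chain array growing down from the back, refusing new entries once the two ends meet. The following is a standalone sketch of that scheme, not the patch itself; the type and function names (`ItemPtr`, `WarmChain`, `record_dead`, `record_warm`) are hypothetical stand-ins chosen for illustration.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the patch's ItemPointerData and LVWarmChain. */
typedef struct { unsigned short blk_hi, blk_lo, off; } ItemPtr;
typedef struct { ItemPtr tid; int postwarm; } WarmChain;

typedef struct
{
	char	   *area;			/* single allocated work area */
	size_t		area_size;
	int			num_dead;		/* entries growing from the front */
	int			num_warm;		/* entries growing from the back */
} WorkArea;

/* Bytes still free between the two growing ends. */
static size_t
free_space(WorkArea *w)
{
	char *end_deads = w->area + w->num_dead * sizeof(ItemPtr);
	char *end_warms = w->area + w->area_size - w->num_warm * sizeof(WarmChain);

	assert(end_warms >= end_deads);
	return (size_t) (end_warms - end_deads);
}

/* Append a dead-tuple TID at the front; returns 0 if the area is full. */
static int
record_dead(WorkArea *w, ItemPtr tid)
{
	if (free_space(w) < sizeof(ItemPtr))
		return 0;				/* full: forget it, as the patch does */
	((ItemPtr *) w->area)[w->num_dead++] = tid;
	return 1;
}

/* Append a WARM chain at the back, growing toward the front. */
static int
record_warm(WorkArea *w, ItemPtr tid, int postwarm)
{
	WarmChain  *chain;

	if (free_space(w) < sizeof(WarmChain))
		return 0;
	w->num_warm++;
	chain = (WarmChain *) (w->area + w->area_size) - w->num_warm;
	chain->tid = tid;
	chain->postwarm = postwarm;
	return 1;
}
```

Because the back array fills toward lower addresses, its elements end up in descending TID order in memory, which is why the patch flips the comparator used for `bsearch`.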
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9d53a29..1592220 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -433,7 +433,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	overlay_placing substr_from substr_for
 
 %type <boolean> opt_instead
-%type <boolean> opt_unique opt_concurrently opt_verbose opt_full
+%type <boolean> opt_unique opt_concurrently opt_verbose opt_full opt_warmclean
 %type <boolean> opt_freeze opt_default opt_recheck
 %type <defelt>	opt_binary opt_oids copy_delimiter
 
@@ -684,7 +684,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	VACUUM VALID VALIDATE VALIDATOR VALUE_P VALUES VARCHAR VARIADIC VARYING
 	VERBOSE VERSION_P VIEW VIEWS VOLATILE
 
-	WHEN WHERE WHITESPACE_P WINDOW WITH WITHIN WITHOUT WORK WRAPPER WRITE
+	WARMCLEAN WHEN WHERE WHITESPACE_P WINDOW WITH WITHIN WITHOUT WORK WRAPPER WRITE
 
 	XML_P XMLATTRIBUTES XMLCONCAT XMLELEMENT XMLEXISTS XMLFOREST XMLNAMESPACES
 	XMLPARSE XMLPI XMLROOT XMLSERIALIZE XMLTABLE
@@ -10059,7 +10059,7 @@ cluster_index_specification:
  *
  *****************************************************************************/
 
-VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
+VacuumStmt: VACUUM opt_full opt_freeze opt_verbose opt_warmclean
 				{
 					VacuumStmt *n = makeNode(VacuumStmt);
 					n->options = VACOPT_VACUUM;
@@ -10069,11 +10069,13 @@ VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
 						n->options |= VACOPT_FREEZE;
 					if ($4)
 						n->options |= VACOPT_VERBOSE;
+					if ($5)
+						n->options |= VACOPT_WARM_CLEANUP;
 					n->relation = NULL;
 					n->va_cols = NIL;
 					$$ = (Node *)n;
 				}
-			| VACUUM opt_full opt_freeze opt_verbose qualified_name
+			| VACUUM opt_full opt_freeze opt_verbose opt_warmclean qualified_name
 				{
 					VacuumStmt *n = makeNode(VacuumStmt);
 					n->options = VACOPT_VACUUM;
@@ -10083,13 +10085,15 @@ VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
 						n->options |= VACOPT_FREEZE;
 					if ($4)
 						n->options |= VACOPT_VERBOSE;
-					n->relation = $5;
+					if ($5)
+						n->options |= VACOPT_WARM_CLEANUP;
+					n->relation = $6;
 					n->va_cols = NIL;
 					$$ = (Node *)n;
 				}
-			| VACUUM opt_full opt_freeze opt_verbose AnalyzeStmt
+			| VACUUM opt_full opt_freeze opt_verbose opt_warmclean AnalyzeStmt
 				{
-					VacuumStmt *n = (VacuumStmt *) $5;
+					VacuumStmt *n = (VacuumStmt *) $6;
 					n->options |= VACOPT_VACUUM;
 					if ($2)
 						n->options |= VACOPT_FULL;
@@ -10097,6 +10101,8 @@ VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
 						n->options |= VACOPT_FREEZE;
 					if ($4)
 						n->options |= VACOPT_VERBOSE;
+					if ($5)
+						n->options |= VACOPT_WARM_CLEANUP;
 					$$ = (Node *)n;
 				}
 			| VACUUM '(' vacuum_option_list ')'
@@ -10129,6 +10135,7 @@ vacuum_option_elem:
 			| VERBOSE			{ $$ = VACOPT_VERBOSE; }
 			| FREEZE			{ $$ = VACOPT_FREEZE; }
 			| FULL				{ $$ = VACOPT_FULL; }
+			| WARMCLEAN			{ $$ = VACOPT_WARM_CLEANUP; }
 			| IDENT
 				{
 					if (strcmp($1, "disable_page_skipping") == 0)
@@ -10182,6 +10189,10 @@ opt_freeze: FREEZE									{ $$ = TRUE; }
 			| /*EMPTY*/								{ $$ = FALSE; }
 		;
 
+opt_warmclean: WARMCLEAN							{ $$ = TRUE; }
+			| /*EMPTY*/								{ $$ = FALSE; }
+		;
+
 opt_name_list:
 			'(' name_list ')'						{ $$ = $2; }
 			| /*EMPTY*/								{ $$ = NIL; }
@@ -14886,6 +14897,7 @@ type_func_name_keyword:
 			| SIMILAR
 			| TABLESAMPLE
 			| VERBOSE
+			| WARMCLEAN
 		;
 
 /* Reserved keyword --- these keywords are usable only as a ColLabel.
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 33ca749..91793e4 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -115,6 +115,8 @@ int			autovacuum_vac_thresh;
 double		autovacuum_vac_scale;
 int			autovacuum_anl_thresh;
 double		autovacuum_anl_scale;
+double		autovacuum_warmcleanup_scale;
+double		autovacuum_warmcleanup_index_scale;
 int			autovacuum_freeze_max_age;
 int			autovacuum_multixact_freeze_max_age;
 
@@ -307,7 +309,8 @@ static void relation_needs_vacanalyze(Oid relid, AutoVacOpts *relopts,
 						  Form_pg_class classForm,
 						  PgStat_StatTabEntry *tabentry,
 						  int effective_multixact_freeze_max_age,
-						  bool *dovacuum, bool *doanalyze, bool *wraparound);
+						  bool *dovacuum, bool *doanalyze, bool *wraparound,
+						  bool *dowarmcleanup);
 
 static void autovacuum_do_vac_analyze(autovac_table *tab,
 						  BufferAccessStrategy bstrategy);
@@ -2010,6 +2013,7 @@ do_autovacuum(void)
 		bool		dovacuum;
 		bool		doanalyze;
 		bool		wraparound;
+		bool		dowarmcleanup;
 
 		if (classForm->relkind != RELKIND_RELATION &&
 			classForm->relkind != RELKIND_MATVIEW)
@@ -2049,10 +2053,14 @@ do_autovacuum(void)
 		tabentry = get_pgstat_tabentry_relid(relid, classForm->relisshared,
 											 shared, dbentry);
 
-		/* Check if it needs vacuum or analyze */
+		/*
+		 * Check if it needs vacuum or analyze. For vacuum, also check if it
+		 * needs WARM cleanup.
+		 */
 		relation_needs_vacanalyze(relid, relopts, classForm, tabentry,
 								  effective_multixact_freeze_max_age,
-								  &dovacuum, &doanalyze, &wraparound);
+								  &dovacuum, &doanalyze, &wraparound,
+								  &dowarmcleanup);
 
 		/* Relations that need work are added to table_oids */
 		if (dovacuum || doanalyze)
@@ -2105,6 +2113,7 @@ do_autovacuum(void)
 		bool		dovacuum;
 		bool		doanalyze;
 		bool		wraparound;
+		bool		dowarmcleanup;
 
 		/*
 		 * We cannot safely process other backends' temp tables, so skip 'em.
@@ -2135,7 +2144,8 @@ do_autovacuum(void)
 
 		relation_needs_vacanalyze(relid, relopts, classForm, tabentry,
 								  effective_multixact_freeze_max_age,
-								  &dovacuum, &doanalyze, &wraparound);
+								  &dovacuum, &doanalyze, &wraparound,
+								  &dowarmcleanup);
 
 		/* ignore analyze for toast tables */
 		if (dovacuum)
@@ -2566,6 +2576,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 	HeapTuple	classTup;
 	bool		dovacuum;
 	bool		doanalyze;
+	bool		dowarmcleanup;
 	autovac_table *tab = NULL;
 	PgStat_StatTabEntry *tabentry;
 	PgStat_StatDBEntry *shared;
@@ -2607,7 +2618,8 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 
 	relation_needs_vacanalyze(relid, avopts, classForm, tabentry,
 							  effective_multixact_freeze_max_age,
-							  &dovacuum, &doanalyze, &wraparound);
+							  &dovacuum, &doanalyze, &wraparound,
+							  &dowarmcleanup);
 
 	/* ignore ANALYZE for toast tables */
 	if (classForm->relkind == RELKIND_TOASTVALUE)
@@ -2623,6 +2635,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		int			vac_cost_limit;
 		int			vac_cost_delay;
 		int			log_min_duration;
+		double		warmcleanup_index_scale;
 
 		/*
 		 * Calculate the vacuum cost parameters and the freeze ages.  If there
@@ -2669,19 +2682,26 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 			? avopts->multixact_freeze_table_age
 			: default_multixact_freeze_table_age;
 
+		warmcleanup_index_scale = (avopts &&
+								   avopts->warmcleanup_index_scale >= 0)
+			? avopts->warmcleanup_index_scale
+			: autovacuum_warmcleanup_index_scale;
+
 		tab = palloc(sizeof(autovac_table));
 		tab->at_relid = relid;
 		tab->at_sharedrel = classForm->relisshared;
 		tab->at_vacoptions = VACOPT_SKIPTOAST |
 			(dovacuum ? VACOPT_VACUUM : 0) |
 			(doanalyze ? VACOPT_ANALYZE : 0) |
-			(!wraparound ? VACOPT_NOWAIT : 0);
+			(!wraparound ? VACOPT_NOWAIT : 0) |
+			(dowarmcleanup ? VACOPT_WARM_CLEANUP : 0);
 		tab->at_params.freeze_min_age = freeze_min_age;
 		tab->at_params.freeze_table_age = freeze_table_age;
 		tab->at_params.multixact_freeze_min_age = multixact_freeze_min_age;
 		tab->at_params.multixact_freeze_table_age = multixact_freeze_table_age;
 		tab->at_params.is_wraparound = wraparound;
 		tab->at_params.log_min_duration = log_min_duration;
+		tab->at_params.warmcleanup_index_scale = warmcleanup_index_scale;
 		tab->at_vacuum_cost_limit = vac_cost_limit;
 		tab->at_vacuum_cost_delay = vac_cost_delay;
 		tab->at_relname = NULL;
@@ -2748,7 +2768,8 @@ relation_needs_vacanalyze(Oid relid,
  /* output params below */
 						  bool *dovacuum,
 						  bool *doanalyze,
-						  bool *wraparound)
+						  bool *wraparound,
+						  bool *dowarmcleanup)
 {
 	bool		force_vacuum;
 	bool		av_enabled;
@@ -2760,6 +2781,9 @@ relation_needs_vacanalyze(Oid relid,
 	float4		vac_scale_factor,
 				anl_scale_factor;
 
+	/* constant from reloptions or GUC variable */
+	float4		warmcleanup_scale_factor;
+
 	/* thresholds calculated from above constants */
 	float4		vacthresh,
 				anlthresh;
@@ -2768,6 +2792,9 @@ relation_needs_vacanalyze(Oid relid,
 	float4		vactuples,
 				anltuples;
 
+	/* number of WARM chains in the table */
+	float4		warmchains;
+
 	/* freeze parameters */
 	int			freeze_max_age;
 	int			multixact_freeze_max_age;
@@ -2800,6 +2827,11 @@ relation_needs_vacanalyze(Oid relid,
 		? relopts->analyze_threshold
 		: autovacuum_anl_thresh;
 
+	/* Use table specific value or the GUC value */
+	warmcleanup_scale_factor = (relopts && relopts->warmcleanup_scale_factor >= 0)
+		? relopts->warmcleanup_scale_factor
+		: autovacuum_warmcleanup_scale;
+
 	freeze_max_age = (relopts && relopts->freeze_max_age >= 0)
 		? Min(relopts->freeze_max_age, autovacuum_freeze_max_age)
 		: autovacuum_freeze_max_age;
@@ -2847,6 +2879,7 @@ relation_needs_vacanalyze(Oid relid,
 		reltuples = classForm->reltuples;
 		vactuples = tabentry->n_dead_tuples;
 		anltuples = tabentry->changes_since_analyze;
+		warmchains = tabentry->n_warm_chains;
 
 		vacthresh = (float4) vac_base_thresh + vac_scale_factor * reltuples;
 		anlthresh = (float4) anl_base_thresh + anl_scale_factor * reltuples;
@@ -2863,6 +2896,17 @@ relation_needs_vacanalyze(Oid relid,
 		/* Determine if this table needs vacuum or analyze. */
 		*dovacuum = force_vacuum || (vactuples > vacthresh);
 		*doanalyze = (anltuples > anlthresh);
+
+		/*
+	 * If the number of WARM chains in the table is more than the configured
+		 * fraction, then we also do a WARM cleanup. This only triggers at the
+		 * table level, but we then look at each index and do cleanup for the
+		 * index only if the WARM pointers in the index are more than
+		 * configured index-level scale factor. lazy_vacuum_index() later deals
+		 * with that.
+		 */
+		*dowarmcleanup = (*dovacuum &&
+						  (warmcleanup_scale_factor * reltuples < warmchains));
 	}
 	else
 	{
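The table-level trigger in `relation_needs_vacanalyze()` reduces to a single condition: WARM cleanup piggybacks on a vacuum that is already going to run, and fires only once the chain count exceeds the configured fraction of `reltuples`. A minimal sketch of that decision, with a hypothetical function name:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Decide whether a vacuum pass should also perform WARM cleanup, mirroring
 * the table-level check added to relation_needs_vacanalyze(): cleanup never
 * runs on its own, and only triggers once the number of WARM chains exceeds
 * scale_factor * reltuples.
 */
static bool
needs_warm_cleanup(bool dovacuum, double scale_factor,
				   double reltuples, double warmchains)
{
	return dovacuum && (scale_factor * reltuples < warmchains);
}
```

With the default `autovacuum_warmcleanup_scale_factor` of 0.1, a table of 1000 tuples would need more than 100 WARM chains before a triggered vacuum also does cleanup; per-index filtering is then applied separately via the index-level scale factor.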
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index 52fe4ba..f38ce8a 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -226,9 +226,11 @@ typedef struct TwoPhasePgStatRecord
 	PgStat_Counter tuples_inserted;		/* tuples inserted in xact */
 	PgStat_Counter tuples_updated;		/* tuples updated in xact */
 	PgStat_Counter tuples_deleted;		/* tuples deleted in xact */
+	PgStat_Counter tuples_warm_updated;	/* tuples warm updated in xact */
 	PgStat_Counter inserted_pre_trunc;	/* tuples inserted prior to truncate */
 	PgStat_Counter updated_pre_trunc;	/* tuples updated prior to truncate */
 	PgStat_Counter deleted_pre_trunc;	/* tuples deleted prior to truncate */
+	PgStat_Counter warm_updated_pre_trunc;	/* tuples warm updated prior to truncate */
 	Oid			t_id;			/* table's OID */
 	bool		t_shared;		/* is it a shared catalog? */
 	bool		t_truncated;	/* was the relation truncated? */
@@ -1367,7 +1369,8 @@ pgstat_report_autovac(Oid dboid)
  */
 void
 pgstat_report_vacuum(Oid tableoid, bool shared,
-					 PgStat_Counter livetuples, PgStat_Counter deadtuples)
+					 PgStat_Counter livetuples, PgStat_Counter deadtuples,
+					 PgStat_Counter warmchains)
 {
 	PgStat_MsgVacuum msg;
 
@@ -1381,6 +1384,7 @@ pgstat_report_vacuum(Oid tableoid, bool shared,
 	msg.m_vacuumtime = GetCurrentTimestamp();
 	msg.m_live_tuples = livetuples;
 	msg.m_dead_tuples = deadtuples;
+	msg.m_warm_chains = warmchains;
 	pgstat_send(&msg, sizeof(msg));
 }
 
@@ -1396,7 +1400,7 @@ pgstat_report_vacuum(Oid tableoid, bool shared,
 void
 pgstat_report_analyze(Relation rel,
 					  PgStat_Counter livetuples, PgStat_Counter deadtuples,
-					  bool resetcounter)
+					  PgStat_Counter warmchains, bool resetcounter)
 {
 	PgStat_MsgAnalyze msg;
 
@@ -1421,12 +1425,14 @@ pgstat_report_analyze(Relation rel,
 		{
 			livetuples -= trans->tuples_inserted - trans->tuples_deleted;
 			deadtuples -= trans->tuples_updated + trans->tuples_deleted;
+			warmchains -= trans->tuples_warm_updated;
 		}
 		/* count stuff inserted by already-aborted subxacts, too */
 		deadtuples -= rel->pgstat_info->t_counts.t_delta_dead_tuples;
 		/* Since ANALYZE's counts are estimates, we could have underflowed */
 		livetuples = Max(livetuples, 0);
 		deadtuples = Max(deadtuples, 0);
+		warmchains = Max(warmchains, 0);
 	}
 
 	pgstat_setheader(&msg.m_hdr, PGSTAT_MTYPE_ANALYZE);
@@ -1437,6 +1443,7 @@ pgstat_report_analyze(Relation rel,
 	msg.m_analyzetime = GetCurrentTimestamp();
 	msg.m_live_tuples = livetuples;
 	msg.m_dead_tuples = deadtuples;
+	msg.m_warm_chains = warmchains;
 	pgstat_send(&msg, sizeof(msg));
 }
 
@@ -1907,7 +1914,10 @@ pgstat_count_heap_update(Relation rel, bool hot, bool warm)
 		if (hot)
 			pgstat_info->t_counts.t_tuples_hot_updated++;
 		else if (warm)
+		{
+			pgstat_info->trans->tuples_warm_updated++;
 			pgstat_info->t_counts.t_tuples_warm_updated++;
+		}
 	}
 }
 
@@ -2070,6 +2080,12 @@ AtEOXact_PgStat(bool isCommit)
 				/* update and delete each create a dead tuple */
 				tabstat->t_counts.t_delta_dead_tuples +=
 					trans->tuples_updated + trans->tuples_deleted;
+				/*
+				 * Whether we commit or abort, a WARM update generates a WARM
+				 * chain which needs cleanup.
+				 */
+				tabstat->t_counts.t_delta_warm_chains +=
+					trans->tuples_warm_updated;
 				/* insert, update, delete each count as one change event */
 				tabstat->t_counts.t_changed_tuples +=
 					trans->tuples_inserted + trans->tuples_updated +
@@ -2080,6 +2096,12 @@ AtEOXact_PgStat(bool isCommit)
 				/* inserted tuples are dead, deleted tuples are unaffected */
 				tabstat->t_counts.t_delta_dead_tuples +=
 					trans->tuples_inserted + trans->tuples_updated;
+				/*
+				 * Whether we commit or abort, a WARM update generates a WARM
+				 * chain which needs cleanup.
+				 */
+				tabstat->t_counts.t_delta_warm_chains +=
+					trans->tuples_warm_updated;
 				/* an aborted xact generates no changed_tuple events */
 			}
 			tabstat->trans = NULL;
@@ -2136,12 +2158,16 @@ AtEOSubXact_PgStat(bool isCommit, int nestDepth)
 						trans->upper->tuples_inserted = trans->tuples_inserted;
 						trans->upper->tuples_updated = trans->tuples_updated;
 						trans->upper->tuples_deleted = trans->tuples_deleted;
+						trans->upper->tuples_warm_updated =
+							trans->tuples_warm_updated;
 					}
 					else
 					{
 						trans->upper->tuples_inserted += trans->tuples_inserted;
 						trans->upper->tuples_updated += trans->tuples_updated;
 						trans->upper->tuples_deleted += trans->tuples_deleted;
+						trans->upper->tuples_warm_updated +=
+							trans->tuples_warm_updated;
 					}
 					tabstat->trans = trans->upper;
 					pfree(trans);
@@ -2177,9 +2203,13 @@ AtEOSubXact_PgStat(bool isCommit, int nestDepth)
 				tabstat->t_counts.t_tuples_inserted += trans->tuples_inserted;
 				tabstat->t_counts.t_tuples_updated += trans->tuples_updated;
 				tabstat->t_counts.t_tuples_deleted += trans->tuples_deleted;
+				tabstat->t_counts.t_tuples_warm_updated +=
+					trans->tuples_warm_updated;
 				/* inserted tuples are dead, deleted tuples are unaffected */
 				tabstat->t_counts.t_delta_dead_tuples +=
 					trans->tuples_inserted + trans->tuples_updated;
+				tabstat->t_counts.t_delta_warm_chains +=
+					trans->tuples_warm_updated;
 				tabstat->trans = trans->upper;
 				pfree(trans);
 			}
@@ -2221,9 +2251,11 @@ AtPrepare_PgStat(void)
 			record.tuples_inserted = trans->tuples_inserted;
 			record.tuples_updated = trans->tuples_updated;
 			record.tuples_deleted = trans->tuples_deleted;
+			record.tuples_warm_updated = trans->tuples_warm_updated;
 			record.inserted_pre_trunc = trans->inserted_pre_trunc;
 			record.updated_pre_trunc = trans->updated_pre_trunc;
 			record.deleted_pre_trunc = trans->deleted_pre_trunc;
+			record.warm_updated_pre_trunc = trans->warm_updated_pre_trunc;
 			record.t_id = tabstat->t_id;
 			record.t_shared = tabstat->t_shared;
 			record.t_truncated = trans->truncated;
@@ -2298,11 +2330,14 @@ pgstat_twophase_postcommit(TransactionId xid, uint16 info,
 		/* forget live/dead stats seen by backend thus far */
 		pgstat_info->t_counts.t_delta_live_tuples = 0;
 		pgstat_info->t_counts.t_delta_dead_tuples = 0;
+		pgstat_info->t_counts.t_delta_warm_chains = 0;
 	}
 	pgstat_info->t_counts.t_delta_live_tuples +=
 		rec->tuples_inserted - rec->tuples_deleted;
 	pgstat_info->t_counts.t_delta_dead_tuples +=
 		rec->tuples_updated + rec->tuples_deleted;
+	pgstat_info->t_counts.t_delta_warm_chains +=
+		rec->tuples_warm_updated;
 	pgstat_info->t_counts.t_changed_tuples +=
 		rec->tuples_inserted + rec->tuples_updated +
 		rec->tuples_deleted;
@@ -2330,12 +2365,16 @@ pgstat_twophase_postabort(TransactionId xid, uint16 info,
 		rec->tuples_inserted = rec->inserted_pre_trunc;
 		rec->tuples_updated = rec->updated_pre_trunc;
 		rec->tuples_deleted = rec->deleted_pre_trunc;
+		rec->tuples_warm_updated = rec->warm_updated_pre_trunc;
 	}
 	pgstat_info->t_counts.t_tuples_inserted += rec->tuples_inserted;
 	pgstat_info->t_counts.t_tuples_updated += rec->tuples_updated;
 	pgstat_info->t_counts.t_tuples_deleted += rec->tuples_deleted;
+	pgstat_info->t_counts.t_tuples_warm_updated += rec->tuples_warm_updated;
 	pgstat_info->t_counts.t_delta_dead_tuples +=
 		rec->tuples_inserted + rec->tuples_updated;
+	pgstat_info->t_counts.t_delta_warm_chains +=
+		rec->tuples_warm_updated;
 }
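The `*_pre_trunc` handling above follows the existing two-phase-commit convention: counters as of an in-transaction truncate are stashed in the record, and `pgstat_twophase_postabort()` restores them because the truncate itself rolled back. A standalone sketch of that restore step, with a hypothetical cut-down record type:

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal stand-in for the patch's TwoPhasePgStatRecord (hypothetical). */
typedef struct
{
	long		tuples_warm_updated;	/* WARM updates in the prepared xact */
	long		warm_updated_pre_trunc;	/* value saved at truncate time */
	bool		t_truncated;			/* did the xact truncate the rel? */
} StatRecord;

/*
 * Mirrors the abort path: if the prepared transaction truncated the
 * relation, the pre-truncate counter is restored before being accounted,
 * since the truncate was rolled back along with everything after it.
 */
static long
aborted_warm_updates(StatRecord *rec)
{
	if (rec->t_truncated)
		rec->tuples_warm_updated = rec->warm_updated_pre_trunc;
	return rec->tuples_warm_updated;
}
```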
 
 
@@ -4526,6 +4565,7 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->tuples_warm_updated = 0;
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
+		result->n_warm_chains = 0;
 		result->changes_since_analyze = 0;
 		result->blocks_fetched = 0;
 		result->blocks_hit = 0;
@@ -5636,6 +5676,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_warm_updated = tabmsg->t_counts.t_tuples_warm_updated;
 			tabentry->n_live_tuples = tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples = tabmsg->t_counts.t_delta_dead_tuples;
+			tabentry->n_warm_chains = tabmsg->t_counts.t_delta_warm_chains;
 			tabentry->changes_since_analyze = tabmsg->t_counts.t_changed_tuples;
 			tabentry->blocks_fetched = tabmsg->t_counts.t_blocks_fetched;
 			tabentry->blocks_hit = tabmsg->t_counts.t_blocks_hit;
@@ -5667,9 +5708,11 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			{
 				tabentry->n_live_tuples = 0;
 				tabentry->n_dead_tuples = 0;
+				tabentry->n_warm_chains = 0;
 			}
 			tabentry->n_live_tuples += tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples += tabmsg->t_counts.t_delta_dead_tuples;
+			tabentry->n_warm_chains += tabmsg->t_counts.t_delta_warm_chains;
 			tabentry->changes_since_analyze += tabmsg->t_counts.t_changed_tuples;
 			tabentry->blocks_fetched += tabmsg->t_counts.t_blocks_fetched;
 			tabentry->blocks_hit += tabmsg->t_counts.t_blocks_hit;
@@ -5679,6 +5722,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 		tabentry->n_live_tuples = Max(tabentry->n_live_tuples, 0);
 		/* Likewise for n_dead_tuples */
 		tabentry->n_dead_tuples = Max(tabentry->n_dead_tuples, 0);
+		tabentry->n_warm_chains = Max(tabentry->n_warm_chains, 0);
 
 		/*
 		 * Add per-table stats to the per-database entry, too.
@@ -5904,6 +5948,7 @@ pgstat_recv_vacuum(PgStat_MsgVacuum *msg, int len)
 
 	tabentry->n_live_tuples = msg->m_live_tuples;
 	tabentry->n_dead_tuples = msg->m_dead_tuples;
+	tabentry->n_warm_chains = msg->m_warm_chains;
 
 	if (msg->m_autovacuum)
 	{
@@ -5938,6 +5983,7 @@ pgstat_recv_analyze(PgStat_MsgAnalyze *msg, int len)
 
 	tabentry->n_live_tuples = msg->m_live_tuples;
 	tabentry->n_dead_tuples = msg->m_dead_tuples;
+	tabentry->n_warm_chains = msg->m_warm_chains;
 
 	/*
 	 * If commanded, reset changes_since_analyze to zero.  This forgets any
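The subtransaction plumbing in `AtEOSubXact_PgStat()` and `AtEOXact_PgStat()` carries one asymmetry worth highlighting: a committed subxact merely hands its WARM counter to its parent, while an aborted one (or a top-level end, commit or abort) converts the counter into a `delta_warm_chains` contribution, because a WARM chain needs cleanup regardless of the updater's fate. A simplified sketch of that roll-up, with hypothetical struct and function names:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified per-(sub)transaction and per-table counters (hypothetical). */
typedef struct SubXactCounts
{
	long		warm_updated;			/* WARM updates in this (sub)xact */
	struct SubXactCounts *upper;		/* parent subtransaction, if any */
} SubXactCounts;

typedef struct
{
	long		tuples_warm_updated;	/* lifetime counter */
	long		delta_warm_chains;		/* chains awaiting cleanup */
} TableCounts;

/*
 * Roll up the WARM counter at (sub)transaction end. A committed subxact
 * defers accounting to its parent; any other ending flushes the count to
 * the table stats, including delta_warm_chains, since WARM chains need
 * cleanup whether or not the updating transaction survived.
 */
static void
end_xact(SubXactCounts *x, TableCounts *tab, bool commit)
{
	if (commit && x->upper != NULL)
		x->upper->warm_updated += x->warm_updated;
	else
	{
		tab->tuples_warm_updated += x->warm_updated;
		tab->delta_warm_chains += x->warm_updated;
	}
	x->warm_updated = 0;
}
```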
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 713d731..907e570 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -192,6 +192,21 @@ pg_stat_get_dead_tuples(PG_FUNCTION_ARGS)
 	PG_RETURN_INT64(result);
 }
 
+Datum
+pg_stat_get_warm_chains(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_StatTabEntry *tabentry;
+
+	if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->n_warm_chains);
+
+	PG_RETURN_INT64(result);
+}
+
 
 Datum
 pg_stat_get_mod_since_analyze(PG_FUNCTION_ARGS)
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 08b6030..81fec03 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,6 +130,7 @@ int			VacuumCostPageMiss = 10;
 int			VacuumCostPageDirty = 20;
 int			VacuumCostLimit = 200;
 int			VacuumCostDelay = 0;
+double		VacuumWarmCleanupScale;
 
 int			VacuumPageHit = 0;
 int			VacuumPageMiss = 0;
@@ -137,3 +138,5 @@ int			VacuumPageDirty = 0;
 
 int			VacuumCostBalance = 0;		/* working state for vacuum */
 bool		VacuumCostActive = false;
+
+double		VacuumWarmCleanupIndexScale = 0.2;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index e9d561b..96b8918 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -3016,6 +3016,36 @@ static struct config_real ConfigureNamesReal[] =
 	},
 
 	{
+		{"autovacuum_warmcleanup_scale_factor", PGC_SIGHUP, AUTOVACUUM,
+			gettext_noop("Number of WARM chains, as a fraction of reltuples, that triggers WARM cleanup."),
+			NULL
+		},
+		&autovacuum_warmcleanup_scale,
+		0.1, 0.0, 100.0,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"autovacuum_warmcleanup_index_scale_factor", PGC_SIGHUP, AUTOVACUUM,
+			gettext_noop("Number of WARM index pointers, as a fraction of total WARM chains, that triggers cleanup of an index."),
+			NULL
+		},
+		&autovacuum_warmcleanup_index_scale,
+		0.2, 0.0, 100.0,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"vacuum_warmcleanup_index_scale_factor", PGC_USERSET, WARM_CLEANUP,
+			gettext_noop("Number of WARM pointers in an index, as a fraction of total WARM chains, that triggers cleanup of that index."),
+			NULL
+		},
+		&VacuumWarmCleanupIndexScale,
+		0.2, 0.0, 100.0,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"checkpoint_completion_target", PGC_SIGHUP, WAL_CHECKPOINTS,
 			gettext_noop("Time spent flushing dirty buffers during checkpoint, as fraction of checkpoint interval."),
 			NULL
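A recurring pattern in the autovacuum side of the patch is resolving a per-table reloption against the GUC default, with a negative value meaning "unset" (see the `warmcleanup_scale_factor` and `warmcleanup_index_scale` lookups above). A tiny sketch of that convention, hypothetical function name:

```c
#include <assert.h>

/*
 * Resolve an effective scale factor: a per-table reloption overrides the
 * GUC default only when explicitly set; the patch encodes "unset" as a
 * negative value, so 0.0 remains a valid explicit setting.
 */
static double
effective_scale(double reloption, double guc_default)
{
	return (reloption >= 0.0) ? reloption : guc_default;
}
```

Note that comparing with `>= 0` rather than truthiness is what lets a table explicitly disable the threshold by setting the reloption to 0.0.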
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index a6cb5c6..957a184 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -2791,6 +2791,8 @@ DATA(insert OID = 2878 (  pg_stat_get_live_tuples	PGNSP PGUID 12 1 0 0 0 f f f f
 DESCR("statistics: number of live tuples");
 DATA(insert OID = 2879 (  pg_stat_get_dead_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_dead_tuples _null_ _null_ _null_ ));
 DESCR("statistics: number of dead tuples");
+DATA(insert OID = 3374 (  pg_stat_get_warm_chains	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_warm_chains _null_ _null_ _null_ ));
+DESCR("statistics: number of warm chains");
 DATA(insert OID = 3177 (  pg_stat_get_mod_since_analyze PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_mod_since_analyze _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples changed since last analyze");
 DATA(insert OID = 1934 (  pg_stat_get_blocks_fetched	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_blocks_fetched _null_ _null_ _null_ ));
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index 541c2fa..9914143 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -145,6 +145,8 @@ typedef struct VacuumParams
 	int			log_min_duration;		/* minimum execution threshold in ms
 										 * at which  verbose logs are
 										 * activated, -1 to use default */
+	double		warmcleanup_index_scale; /* Fraction of WARM pointers to cause
+										  * index WARM cleanup */
 } VacuumParams;
 
 /* GUC parameters */
diff --git a/src/include/foreign/fdwapi.h b/src/include/foreign/fdwapi.h
index 6ca44f7..2993b1a 100644
--- a/src/include/foreign/fdwapi.h
+++ b/src/include/foreign/fdwapi.h
@@ -134,7 +134,8 @@ typedef void (*ExplainDirectModify_function) (ForeignScanState *node,
 typedef int (*AcquireSampleRowsFunc) (Relation relation, int elevel,
 											   HeapTuple *rows, int targrows,
 												  double *totalrows,
-												  double *totaldeadrows);
+												  double *totaldeadrows,
+												  double *totalwarmchains);
 
 typedef bool (*AnalyzeForeignTable_function) (Relation relation,
 												 AcquireSampleRowsFunc *func,
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 4c607b2..901960a 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -255,6 +255,7 @@ extern int	VacuumPageDirty;
 extern int	VacuumCostBalance;
 extern bool VacuumCostActive;
 
+extern double VacuumWarmCleanupIndexScale;
 
 /* in tcop/postgres.c */
 
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3a71dd5..f842374 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3035,7 +3035,8 @@ typedef enum VacuumOption
 	VACOPT_FULL = 1 << 4,		/* FULL (non-concurrent) vacuum */
 	VACOPT_NOWAIT = 1 << 5,		/* don't wait to get lock (autovacuum only) */
 	VACOPT_SKIPTOAST = 1 << 6,	/* don't process the TOAST table, if any */
-	VACOPT_DISABLE_PAGE_SKIPPING = 1 << 7		/* don't skip any pages */
+	VACOPT_DISABLE_PAGE_SKIPPING = 1 << 7,		/* don't skip any pages */
+	VACOPT_WARM_CLEANUP = 1 << 8	/* do WARM cleanup */
 } VacuumOption;
 
 typedef struct VacuumStmt
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index cd21a78..7d9818b 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -433,6 +433,7 @@ PG_KEYWORD("version", VERSION_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("view", VIEW, UNRESERVED_KEYWORD)
 PG_KEYWORD("views", VIEWS, UNRESERVED_KEYWORD)
 PG_KEYWORD("volatile", VOLATILE, UNRESERVED_KEYWORD)
+PG_KEYWORD("warmclean", WARMCLEAN, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("when", WHEN, RESERVED_KEYWORD)
 PG_KEYWORD("where", WHERE, RESERVED_KEYWORD)
 PG_KEYWORD("whitespace", WHITESPACE_P, UNRESERVED_KEYWORD)
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 99bdc8b..883cbd4 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -110,6 +110,7 @@ typedef struct PgStat_TableCounts
 
 	PgStat_Counter t_delta_live_tuples;
 	PgStat_Counter t_delta_dead_tuples;
+	PgStat_Counter t_delta_warm_chains;
 	PgStat_Counter t_changed_tuples;
 
 	PgStat_Counter t_blocks_fetched;
@@ -167,11 +168,13 @@ typedef struct PgStat_TableXactStatus
 {
 	PgStat_Counter tuples_inserted;		/* tuples inserted in (sub)xact */
 	PgStat_Counter tuples_updated;		/* tuples updated in (sub)xact */
+	PgStat_Counter tuples_warm_updated;	/* tuples warm-updated in (sub)xact */
 	PgStat_Counter tuples_deleted;		/* tuples deleted in (sub)xact */
 	bool		truncated;		/* relation truncated in this (sub)xact */
 	PgStat_Counter inserted_pre_trunc;	/* tuples inserted prior to truncate */
 	PgStat_Counter updated_pre_trunc;	/* tuples updated prior to truncate */
 	PgStat_Counter deleted_pre_trunc;	/* tuples deleted prior to truncate */
+	PgStat_Counter warm_updated_pre_trunc;	/* tuples warm updated prior to truncate */
 	int			nest_level;		/* subtransaction nest level */
 	/* links to other structs for same relation: */
 	struct PgStat_TableXactStatus *upper;		/* next higher subxact if any */
@@ -370,6 +373,7 @@ typedef struct PgStat_MsgVacuum
 	TimestampTz m_vacuumtime;
 	PgStat_Counter m_live_tuples;
 	PgStat_Counter m_dead_tuples;
+	PgStat_Counter m_warm_chains;
 } PgStat_MsgVacuum;
 
 
@@ -388,6 +392,7 @@ typedef struct PgStat_MsgAnalyze
 	TimestampTz m_analyzetime;
 	PgStat_Counter m_live_tuples;
 	PgStat_Counter m_dead_tuples;
+	PgStat_Counter m_warm_chains;
 } PgStat_MsgAnalyze;
 
 
@@ -630,6 +635,7 @@ typedef struct PgStat_StatTabEntry
 
 	PgStat_Counter n_live_tuples;
 	PgStat_Counter n_dead_tuples;
+	PgStat_Counter n_warm_chains;
 	PgStat_Counter changes_since_analyze;
 
 	PgStat_Counter blocks_fetched;
@@ -1156,10 +1162,11 @@ extern void pgstat_reset_single_counter(Oid objectid, PgStat_Single_Reset_Type t
 
 extern void pgstat_report_autovac(Oid dboid);
 extern void pgstat_report_vacuum(Oid tableoid, bool shared,
-					 PgStat_Counter livetuples, PgStat_Counter deadtuples);
+					 PgStat_Counter livetuples, PgStat_Counter deadtuples,
+					 PgStat_Counter warmchains);
 extern void pgstat_report_analyze(Relation rel,
 					  PgStat_Counter livetuples, PgStat_Counter deadtuples,
-					  bool resetcounter);
+					  PgStat_Counter warmchains, bool resetcounter);
 
 extern void pgstat_report_recovery_conflict(int reason);
 extern void pgstat_report_deadlock(void);
diff --git a/src/include/postmaster/autovacuum.h b/src/include/postmaster/autovacuum.h
index 99d7f09..5ac9c8f 100644
--- a/src/include/postmaster/autovacuum.h
+++ b/src/include/postmaster/autovacuum.h
@@ -28,6 +28,8 @@ extern int	autovacuum_freeze_max_age;
 extern int	autovacuum_multixact_freeze_max_age;
 extern int	autovacuum_vac_cost_delay;
 extern int	autovacuum_vac_cost_limit;
+extern double autovacuum_warmcleanup_scale;
+extern double autovacuum_warmcleanup_index_scale;
 
 /* autovacuum launcher PID, only valid when worker is shutting down */
 extern int	AutovacuumLauncherPid;
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index 2da9115..cd4532b 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -68,6 +68,7 @@ enum config_group
 	WAL_SETTINGS,
 	WAL_CHECKPOINTS,
 	WAL_ARCHIVING,
+	WARM_CLEANUP,
 	REPLICATION,
 	REPLICATION_SENDING,
 	REPLICATION_MASTER,
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index 4b173b5..05b3542 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -278,6 +278,8 @@ typedef struct AutoVacOpts
 	int			log_min_duration;
 	float8		vacuum_scale_factor;
 	float8		analyze_scale_factor;
+	float8		warmcleanup_scale_factor;
+	float8		warmcleanup_index_scale;
 } AutoVacOpts;
 
 typedef struct StdRdOptions
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index f7dc4a4..d34aa68 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1759,6 +1759,7 @@ pg_stat_all_tables| SELECT c.oid AS relid,
     pg_stat_get_tuples_warm_updated(c.oid) AS n_tup_warm_upd,
     pg_stat_get_live_tuples(c.oid) AS n_live_tup,
     pg_stat_get_dead_tuples(c.oid) AS n_dead_tup,
+    pg_stat_get_warm_chains(c.oid) AS n_warm_chains,
     pg_stat_get_mod_since_analyze(c.oid) AS n_mod_since_analyze,
     pg_stat_get_last_vacuum_time(c.oid) AS last_vacuum,
     pg_stat_get_last_autovacuum_time(c.oid) AS last_autovacuum,
@@ -1907,6 +1908,7 @@ pg_stat_sys_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
+    pg_stat_all_tables.n_warm_chains,
     pg_stat_all_tables.n_mod_since_analyze,
     pg_stat_all_tables.last_vacuum,
     pg_stat_all_tables.last_autovacuum,
@@ -1951,6 +1953,7 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
+    pg_stat_all_tables.n_warm_chains,
     pg_stat_all_tables.n_mod_since_analyze,
     pg_stat_all_tables.last_vacuum,
     pg_stat_all_tables.last_autovacuum,
diff --git a/src/test/regress/expected/warm.out b/src/test/regress/expected/warm.out
index 1ae2f40..92f8136 100644
--- a/src/test/regress/expected/warm.out
+++ b/src/test/regress/expected/warm.out
@@ -745,6 +745,64 @@ SELECT a, b FROM test_toast_warm WHERE b = 104.20;
 (1 row)
 
 DROP TABLE test_toast_warm;
+-- Test VACUUM
+CREATE TABLE test_vacuum_warm (a int unique, b text, c int, d int);
+CREATE INDEX test_vacuum_warm_index1 ON test_vacuum_warm(b);
+CREATE INDEX test_vacuum_warm_index2 ON test_vacuum_warm(c);
+INSERT INTO test_vacuum_warm VALUES (1, 'a', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (2, 'b', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (3, 'c', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (4, 'd', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (5, 'e', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (6, 'f', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (7, 'g', 100, 200);
+UPDATE test_vacuum_warm SET b = 'u', c = 300 WHERE a = 1;
+UPDATE test_vacuum_warm SET b = 'v', c = 300 WHERE a = 2;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 3;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 4;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 5;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 6;
+-- a plain vacuum cannot clear WARM chains.
+SET enable_seqscan = false;
+SET enable_bitmapscan = false;
+SET seq_page_cost = 10000;
+VACUUM test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+                                        QUERY PLAN                                         
+-------------------------------------------------------------------------------------------
+ Index Only Scan using test_vacuum_warm_index1 on test_vacuum_warm (actual rows=1 loops=1)
+   Index Cond: (b = 'u'::text)
+   Heap Fetches: 1
+(3 rows)
+
+-- Now set vacuum_warmcleanup_index_scale_factor such that only
+-- test_vacuum_warm_index2 can be cleaned up.
+SET vacuum_warmcleanup_index_scale_factor=0.5;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+                                        QUERY PLAN                                         
+-------------------------------------------------------------------------------------------
+ Index Only Scan using test_vacuum_warm_index1 on test_vacuum_warm (actual rows=1 loops=1)
+   Index Cond: (b = 'u'::text)
+   Heap Fetches: 1
+(3 rows)
+
+-- All WARM chains cleaned up, so index-only scan should be used now without
+-- any heap fetches
+SET vacuum_warmcleanup_index_scale_factor=0;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect zero heap-fetches now
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+                                        QUERY PLAN                                         
+-------------------------------------------------------------------------------------------
+ Index Only Scan using test_vacuum_warm_index1 on test_vacuum_warm (actual rows=1 loops=1)
+   Index Cond: (b = 'u'::text)
+   Heap Fetches: 0
+(3 rows)
+
+DROP TABLE test_vacuum_warm;
 -- Toasted heap attributes
 CREATE TABLE toasttest(descr text , cnt int DEFAULT 0, f1 text, f2 text);
 CREATE INDEX testindx1 ON toasttest(descr);
diff --git a/src/test/regress/sql/warm.sql b/src/test/regress/sql/warm.sql
index fb1f93e..964bb6e 100644
--- a/src/test/regress/sql/warm.sql
+++ b/src/test/regress/sql/warm.sql
@@ -285,6 +285,52 @@ SELECT a, b FROM test_toast_warm WHERE b = 104.20;
 
 DROP TABLE test_toast_warm;
 
+-- Test VACUUM
+
+CREATE TABLE test_vacuum_warm (a int unique, b text, c int, d int);
+CREATE INDEX test_vacuum_warm_index1 ON test_vacuum_warm(b);
+CREATE INDEX test_vacuum_warm_index2 ON test_vacuum_warm(c);
+
+INSERT INTO test_vacuum_warm VALUES (1, 'a', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (2, 'b', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (3, 'c', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (4, 'd', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (5, 'e', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (6, 'f', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (7, 'g', 100, 200);
+
+UPDATE test_vacuum_warm SET b = 'u', c = 300 WHERE a = 1;
+UPDATE test_vacuum_warm SET b = 'v', c = 300 WHERE a = 2;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 3;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 4;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 5;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 6;
+
+-- a plain vacuum cannot clear WARM chains.
+SET enable_seqscan = false;
+SET enable_bitmapscan = false;
+SET seq_page_cost = 10000;
+VACUUM test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+
+-- Now set vacuum_warmcleanup_index_scale_factor such that only
+-- test_vacuum_warm_index2 can be cleaned up.
+SET vacuum_warmcleanup_index_scale_factor=0.5;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+
+
+-- All WARM chains cleaned up, so index-only scan should be used now without
+-- any heap fetches
+SET vacuum_warmcleanup_index_scale_factor=0;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect zero heap-fetches now
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+
+DROP TABLE test_vacuum_warm;
+
 -- Toasted heap attributes
 CREATE TABLE toasttest(descr text , cnt int DEFAULT 0, f1 text, f2 text);
 CREATE INDEX testindx1 ON toasttest(descr);
-- 
2.9.3 (Apple Git-75)

#208Amit Kapila
amit.kapila16@gmail.com
In reply to: Pavan Deolasee (#206)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Mar 30, 2017 at 4:07 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Wed, Mar 29, 2017 at 4:42 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Wed, Mar 29, 2017 at 1:10 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Wed, Mar 29, 2017 at 12:02 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Wed, Mar 29, 2017 at 11:52 AM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

Then during recheck, we pass already compressed values to
index_form_tuple(). But my point is, the following code will ensure that we
don't compress it again. My reading is that the first check for
!VARATT_IS_EXTENDED will return false if the value is already compressed.

You are right. I was confused by the previous check of VARATT_IS_EXTERNAL.

Ok, thanks.

TBH I couldn't find why the original index insertion code will always
supply
uncompressed values.

Just try inserting a large text-column value ('aaaaaa.....bbb'),
up to 2.5K. Then set a breakpoint in heap_prepare_insert and
index_form_tuple and debug both functions; you can find out that
even though we compress during insertion into the heap, the index will
compress the original value again.

Ok, tried that. AFAICS index_form_tuple gets compressed values.

How have you verified that? Have you checked that in
heap_prepare_insert it has called toast_insert_or_update() and then
returned a tuple different from what the input tup is? Basically, I
am easily able to see it and even the reason why the heap and index
tuples will be different. Let me try to explain,
toast_insert_or_update returns a new tuple which contains compressed
data and this tuple is inserted in heap where as slot still refers to
original tuple (uncompressed one) which is passed to heap_insert.
Now, ExecInsertIndexTuples and the calls under it like FormIndexDatum
will refer to the tuple in slot which is uncompressed and form the
values[] using uncompressed value. Try with a simple case as below:

Create table t_comp(c1 int, c2 text);
Create index idx_t_comp_c2 on t_comp(c2);
Create index idx_t_comp_c1 on t_comp(c1);

Insert into t_comp values (1, 'aaaa ...aaa');

Repeat 'a' in above line for 2700 times or so. You should notice what
I am explaining above.
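As an aside, the heap-side compression can also be observed without a debugger, using only stock functions: pg_column_size() reports the stored (possibly compressed) size, while octet_length() reports the decompressed length. A sketch against the t_comp table from the example above:

```sql
-- With a highly repetitive 2700-byte value, stored_bytes should come out
-- well below raw_bytes, confirming the heap holds a compressed datum even
-- though the slot handed to the index AM carried the uncompressed value.
SELECT pg_column_size(c2) AS stored_bytes,
       octet_length(c2)   AS raw_bytes
FROM t_comp;
```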

Yeah, probably you are right, but I am not sure if it is a good idea to
compare compressed values.

Again, I don't see a problem there.

I think with these new changes in btrecheck, it would appear to be much
costlier compared to what you had a few versions back. I am afraid
that it can impact performance for cases where there are few WARM
updates in a chain and many HOT updates, as it will run recheck for all
such updates.

INSERT INTO pgbench_accounts SELECT generate_series(:start, :end )::text ||
<2300 chars string>, (random()::bigint) % :scale, 0;

CREATE UNIQUE INDEX pgb_a_aid ON pgbench_accounts(aid);
CREATE INDEX pgb_a_filler1 ON pgbench_accounts(filler1);
CREATE INDEX pgb_a_filler2 ON pgbench_accounts(filler2);
CREATE INDEX pgb_a_filler3 ON pgbench_accounts(filler3);
CREATE INDEX pgb_a_filler4 ON pgbench_accounts(filler4);

-- Force a WARM update on one row
UPDATE pgbench_accounts SET filler1 = 'X' WHERE aid = '100' ||
repeat('abcdefghij', 20000);

Test:
-- Fetch the row using the fat index. Since the row contains a
BEGIN;
SELECT substring(aid, 1, 10) FROM pgbench_accounts WHERE aid = '100' ||
<2300 chars string> ORDER BY aid;
UPDATE pgbench_accounts SET abalance = abalance + 100 WHERE aid = '100' ||
<2300 chars string>;
END;

I did four 5-minute runs with master and WARM, and there is probably a 2-3%
regression.

So IIUC, in the above test you have one WARM update during initialization
and then all updates during the actual test are HOT. Won't the WARM chain
in such a case be converted to HOT by vacuum, after which all updates
from thereon will be HOT, with probably no rechecks?

(Results with 5 mins tests, txns is total for 5 mins, idx_scan is number of
scans on the fat index)
master:
txns idx_scan
414117 828233
411109 822217
411848 823695
408424 816847

WARM:
txns idx_scan
404139 808277
398880 797759
399949 799897
397927 795853

==========

I then also repeated the tests, but this time using compressible values. The
regression in this case is much higher, maybe 15% or more.

Sounds on higher side.

INSERT INTO pgbench_accounts SELECT generate_series(:start, :end )::text ||
repeat('abcdefghij', 20000), (random()::bigint) % :scale, 0;

-- Fetch the row using the fat index. Since the row contains a
BEGIN;
SELECT substring(aid, 1, 10) FROM pgbench_accounts WHERE aid = '100' ||
repeat('abcdefghij', 20000) ORDER BY aid;
UPDATE pgbench_accounts SET abalance = abalance + 100 WHERE aid = '100' ||
repeat('abcdefghij', 20000);
END;

(Results with 5 mins tests, txns is total for 5 mins, idx_scan is number of
scans on the fat index)
master:
txns idx_scan
56976 113953
56822 113645
56915 113831
56865 113731

WARM:
txns idx_scan
49044 98087
49020 98039
49007 98013
49006 98011

But TBH I believe this regression is coming from the changes to
heap_tuple_attr_equals where we are decompressing both old and new values
and then comparing them. For 200K bytes long values, that must be something.
Another reason why I think so is because I accidentally did one run which
did not use index scans and did not perform any WARM updates, but the
regression was kinda similar. So that makes me think that the regression is
coming from somewhere else and change in heap_tuple_attr_equals seems like a
good candidate.

I think we can fix that by comparing compressed values.

IIUC, by the time you are comparing tuple attrs to check for modified
columns, you don't have the compressed values for the new tuple.

I know you had
raised concerns, but Robert confirmed that (IIUC) it's not a problem today.

Yeah, but I am not sure if we can take Robert's statement as some sort
of endorsement for what the patch does.

We will figure out how to deal with it if we ever add support for different
compression algorithms or compression levels. And I also think this is a kinda
synthetic use case, and the fact that there is not much regression with
indexes as large as 2K bytes seems quite comforting to me.

I am not sure if we can consider it as completely synthetic because we
might see some similar cases for json datatypes. Can we once try to
see the impact when the same test runs from multiple clients? For
your information, I am also trying to set up some tests along with one
of my colleagues, and we will report the results once the tests are
complete.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#209Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Amit Kapila (#208)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Mar 30, 2017 at 5:27 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

How have you verified that? Have you checked that in
heap_prepare_insert it has called toast_insert_or_update() and then
returned a tuple different from what the input tup is? Basically, I
am easily able to see it and even the reason why the heap and index
tuples will be different. Let me try to explain,
toast_insert_or_update returns a new tuple which contains compressed
data and this tuple is inserted in the heap, whereas the slot still refers to
the original tuple (the uncompressed one) which is passed to heap_insert.
Now, ExecInsertIndexTuples and the calls under it like FormIndexDatum
will refer to the tuple in slot which is uncompressed and form the
values[] using uncompressed value.

Ah, yes. You're right. Not sure why I saw things differently. That doesn't
change anything though, because during recheck we'll get the compressed value
and not do anything with it. In the index we already have the compressed value and we
can compare them. Even if we decide to decompress everything and do the
comparison, that should be possible. So I don't see a problem as far as
correctness goes.

So IIUC, in the above test you have one WARM update during initialization
and then all updates during the actual test are HOT. Won't the WARM chain
in such a case be converted to HOT by vacuum, after which all updates
from thereon will be HOT, with probably no rechecks?

There is no AV (autovacuum). Just 1 tuple being HOT updated out of 100 tuples.
Confirmed by looking at pg_stat_user_tables. Also made sure that the tuple
doesn't get non-HOT updated in between, which would break the WARM chain.
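For reference, this kind of check maps onto the statistics columns the patch adds (n_tup_warm_upd and n_warm_chains appear in the pg_stat_all_tables changes earlier in the patch); a sketch, assuming a server with the WARM patch applied:

```sql
-- n_tup_warm_upd and n_warm_chains exist only with the WARM patch applied;
-- the remaining columns are stock. If the chain was never broken by a
-- non-HOT update, expect exactly one WARM update and the rest HOT.
SELECT n_tup_upd, n_tup_hot_upd, n_tup_warm_upd, n_warm_chains
FROM pg_stat_user_tables
WHERE relname = 'pgbench_accounts';
```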

I then also repeated the tests, but this time using compressible values. The
regression in this case is much higher, maybe 15% or more.

Sounds on higher side.

Yes, definitely. If we can't reduce that, we might want to provide a
table-level option to explicitly turn WARM off on such tables.

IIUC, by the time you are comparing tuple attrs to check for modified
columns, you don't have the compressed values for the new tuple.

I think it depends. If the value is not being modified, then we will get
both values as compressed. At least I confirmed with your example and
running an update which only changes c1. Don't know if that holds for all
cases.

I know you had
raised concerns, but Robert confirmed that (IIUC) it's not a problem today.

Yeah, but I am not sure if we can take Robert's statement as some sort
of endorsement for what the patch does.

Sure.

We will figure out how to deal with it if we ever add support for different
compression algorithms or compression levels. And I also think this is a kinda
synthetic use case, and the fact that there is not much regression with
indexes as large as 2K bytes seems quite comforting to me.

I am not sure if we can consider it as completely synthetic because we
might see some similar cases for json datatypes. Can we once try to
see the impact when the same test runs from multiple clients?

Ok. Might become hard to control HOT behaviour though. Or will need to do
mix of WARM/HOT updates. Will see if this is something easily doable by
setting high FF etc.

For
your information, I am also trying to set up some tests along with one
of my colleagues, and we will report the results once the tests are
complete.

That'll be extremely helpful, especially if its a something close to
real-world scenario. Thanks for doing that.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#210Robert Haas
robertmhaas@gmail.com
In reply to: Pavan Deolasee (#206)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Mar 30, 2017 at 6:37 AM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

I think we can fix that by comparing compressed values. I know you had
raised concerns, but Robert confirmed that (IIUC) it's not a problem today.

I'm not sure that's an entirely fair interpretation of what I said.
My point was that, while it may not be broken today, it might not be a
good idea to rely for correctness on it always being true.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#211Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Robert Haas (#210)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Mar 30, 2017 at 7:27 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Thu, Mar 30, 2017 at 6:37 AM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

I think we can fix that by comparing compressed values. I know you had
raised concerns, but Robert confirmed that (IIUC) it's not a problem today.

I'm not sure that's an entirely fair interpretation of what I said.
My point was that, while it may not be broken today, it might not be a
good idea to rely for correctness on it always being true.

I take that point. We have a choice of fixing it today or whenever we add
support for multiple compression techniques. We don't even know what that will
look like, or whether we will be able to look at compressed data and tell
whether two values were compressed in exactly the same way.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#212Amit Kapila
amit.kapila16@gmail.com
In reply to: Pavan Deolasee (#209)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Mar 30, 2017 at 5:55 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Thu, Mar 30, 2017 at 5:27 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

How have you verified that? Have you checked that in
heap_prepare_insert it has called toast_insert_or_update() and then
returned a tuple different from what the input tup is? Basically, I
am easily able to see it and even the reason why the heap and index
tuples will be different. Let me try to explain,
toast_insert_or_update returns a new tuple which contains compressed
data and this tuple is inserted in the heap, whereas the slot still refers to
the original tuple (the uncompressed one) which is passed to heap_insert.
Now, ExecInsertIndexTuples and the calls under it like FormIndexDatum
will refer to the tuple in slot which is uncompressed and form the
values[] using uncompressed value.

Ah, yes. You're right. Not sure why I saw things differently. That doesn't
change anything though, because during recheck we'll get the compressed value and not do
anything with it. In the index we already have the compressed value and we can
compare them. Even if we decide to decompress everything and do the
comparison, that should be possible.

I think we should not consider doing compression and decompression as
free at this point in the code, because we hold a buffer lock during
recheck. Buffer locks are meant to be short-term (it is even
mentioned in storage/buffer/README); doing all the
compression/decompression/detoast stuff under these locks doesn't
sound advisable to me. It can block many concurrent operations.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#213Robert Haas
robertmhaas@gmail.com
In reply to: Amit Kapila (#212)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Mar 30, 2017 at 10:08 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:

I think we should not consider doing compression and decompression as
free at this point in code, because we hold a buffer lock during
recheck. Buffer locks are meant for short-term locks (it is even
mentioned in storage/buffer/README), doing all the
compression/decompression/detoast stuff under these locks doesn't
sound advisable to me. It can block many concurrent operations.

Compression and decompression might cause performance problems, but
trying to access the TOAST table would be fatal; that probably would have
deadlock hazards among other problems.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#214Petr Jelinek
petr.jelinek@2ndquadrant.com
In reply to: Pavan Deolasee (#211)
Re: Patch: Write Amplification Reduction Method (WARM)

On 30/03/17 16:04, Pavan Deolasee wrote:

On Thu, Mar 30, 2017 at 7:27 PM, Robert Haas <robertmhaas@gmail.com
<mailto:robertmhaas@gmail.com>> wrote:

On Thu, Mar 30, 2017 at 6:37 AM, Pavan Deolasee
<pavan.deolasee@gmail.com <mailto:pavan.deolasee@gmail.com>> wrote:

I think we can fix that by comparing compressed values. I know you had
raised concerns, but Robert confirmed that (IIUC) it's not a problem today.

I'm not sure that's an entirely fair interpretation of what I said.
My point was that, while it may not be broken today, it might not be a
good idea to rely for correctness on it always being true.

I take that point. We have a choice of fixing it today or whenever we add
support for multiple compression techniques. We don't even know what that
will look like, or whether we will be able to look at compressed data
and tell whether two values were compressed in exactly the same way.

While reading this thread I am wondering if we could just not do WARM on
TOASTed and compressed values, given that we know there might be regressions there.
I mean I've seen the problem WARM tries to solve mostly on timestamp or
boolean values and sometimes counters so it would still be helpful to
quite a lot of people even if we didn't do TOAST and compressed values
in v1. It's not like not doing WARM sometimes is somehow terrible, we'll
just fall back to current behavior.

--
Petr Jelinek http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


#215Andres Freund
andres@anarazel.de
In reply to: Pavan Deolasee (#207)
Re: Patch: Write Amplification Reduction Method (WARM)

Hi,

On 2017-03-30 16:43:41 +0530, Pavan Deolasee wrote:

Looks like OID conflict to me.. Please try rebased set.

Pavan, Alvaro, everyone: I know you guys are working very hard on this,
but I think at this point it's too late to commit this for v10. This is a
patch that's affecting the on-disk format, in quite subtle
ways. Committing this just at the end of the development cycle / shortly
before feature freeze seems too dangerous to me.

Let's commit this just at the beginning of the cycle, so we have time to
shake out the bugs.

- Andres


#216Robert Haas
robertmhaas@gmail.com
In reply to: Andres Freund (#215)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Mar 30, 2017 at 11:41 AM, Andres Freund <andres@anarazel.de> wrote:

On 2017-03-30 16:43:41 +0530, Pavan Deolasee wrote:

Looks like OID conflict to me.. Please try rebased set.

Pavan, Alvaro, everyone: I know you guys are working very hard on this,
but I think at this point it's too late to commit this for v10. This is a
patch that's affecting the on-disk format, in quite subtle
ways. Committing this just at the end of the development cycle / shortly
before feature freeze seems too dangerous to me.

Let's commit this just at the beginning of the cycle, so we have time to
shake out the bugs.

+1, although I think it should also have substantially more review first.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#217Bruce Momjian
bruce@momjian.us
In reply to: Bruce Momjian (#145)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Mar 21, 2017 at 04:04:58PM -0400, Bruce Momjian wrote:

On Tue, Mar 21, 2017 at 04:56:16PM -0300, Alvaro Herrera wrote:

Bruce Momjian wrote:

On Tue, Mar 21, 2017 at 04:43:58PM -0300, Alvaro Herrera wrote:

Bruce Momjian wrote:

I don't think it makes sense to try and save bits and add complexity
when we have no idea if we will ever use them,

If we find ourselves in dire need of additional bits, there is a known
mechanism to get back 2 bits from old-style VACUUM FULL. I assume that
the reason nobody has bothered to write the code for that is that
there's no *that* much interest.

We have no way of tracking if users still have pages that used the bits
via pg_upgrade before they were removed.

Yes, that's exactly the code that needs to be written.

Yes, but once it is written it will take years before those bits can be
used on most installations.

Actually, the 2 bits from old-style VACUUM FULL could be reused if
one of the WARM bits is set when they are checked. The WARM bits
will all be zero on pre-9.0. The check would have to test both the
old-style VACUUM FULL bit and that a WARM bit is set.
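The old-style VACUUM FULL bits in question are HEAP_MOVED_OFF (0x4000) and HEAP_MOVED_IN (0x8000) in t_infomask, and they can be inspected with the stock pageinspect extension. A sketch (the WARM bit position is patch-specific, so it is not decoded here):

```sql
-- Decode the two old-style VACUUM FULL infomask bits for the tuples on
-- block 0 of a table. On pages written by 9.0 or later, both columns
-- should come out false for every tuple.
CREATE EXTENSION IF NOT EXISTS pageinspect;
SELECT lp,
       (t_infomask & 16384) <> 0 AS moved_off,  -- HEAP_MOVED_OFF (0x4000)
       (t_infomask & 32768) <> 0 AS moved_in    -- HEAP_MOVED_IN  (0x8000)
FROM heap_page_items(get_raw_page('pgbench_accounts', 0));
```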

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +


#218Dilip Kumar
dilipbalaut@gmail.com
In reply to: Amit Kapila (#208)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Mar 30, 2017 at 5:27 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:

I am not sure we can consider it completely synthetic, because we
might see some similar cases for json datatypes. Can we try to
see the impact when the same test runs from multiple clients? For
your information, I am also trying to set up some tests along with one
of my colleagues, and we will report the results once the tests are
complete.

We have done some testing; the test details and results are below.

Test:
I have derived this test from the test given by Pavan [1], with the
following differences.

- I have reduced the fill factor to 40 to ensure there is scope in the
page to store multiple WARM chains.
- WARM updated all the tuples.
- Executed a large select to force a lot of tuple rechecks within a
single query.
- Used a smaller tuple size (the aid field is around ~100 bytes) to
ensure tuples have sufficient space on a page to get WARM updated.

Results:
-----------
* I can see more than 15% regression in this case. This regression
is repeatable.
* If I increase the fill factor to 90, the regression reduces to 7%;
maybe only fewer tuples are getting WARM updated and the others are not,
because no space is left on the page after a few WARM updates.

Test Setup:
----------------
Machine Information:

Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz
RAM: 64GB

Config Change:
synchronous_commit=off

--- Setup.sql ---

DROP TABLE IF EXISTS pgbench_accounts;
CREATE TABLE pgbench_accounts (
aid text,
bid bigint,
abalance bigint,
filler1 text DEFAULT md5(random()::text),
filler2 text DEFAULT md5(random()::text),
filler3 text DEFAULT md5(random()::text),
filler4 text DEFAULT md5(random()::text),
filler5 text DEFAULT md5(random()::text),
filler6 text DEFAULT md5(random()::text),
filler7 text DEFAULT md5(random()::text),
filler8 text DEFAULT md5(random()::text),
filler9 text DEFAULT md5(random()::text),
filler10 text DEFAULT md5(random()::text),
filler11 text DEFAULT md5(random()::text),
filler12 text DEFAULT md5(random()::text)
) WITH (fillfactor=40);

\set scale 10
\set end 0
\set start (:end + 1)
\set end (:start + (:scale * 100))

INSERT INTO pgbench_accounts SELECT generate_series(:start, :end
)::text || repeat('a', 100), (random()::bigint) % :scale, 0;

CREATE UNIQUE INDEX pgb_a_aid ON pgbench_accounts(aid);
CREATE INDEX pgb_a_filler1 ON pgbench_accounts(filler1);
CREATE INDEX pgb_a_filler2 ON pgbench_accounts(filler2);
CREATE INDEX pgb_a_filler3 ON pgbench_accounts(filler3);
CREATE INDEX pgb_a_filler4 ON pgbench_accounts(filler4);

UPDATE pgbench_accounts SET filler1 = 'X'; --WARM update all the tuples

--- Test.sql ---
set enable_seqscan=off;
set enable_bitmapscan=off;
explain analyze select * FROM pgbench_accounts WHERE aid < '400' ||
repeat('a', 100) ORDER BY aid;

--- Script.sh ---
./psql -d postgres -f setup.sql
./pgbench -c1 -j1 -T300 -M prepared -f test.sql postgres

Patch:
tps = 3554.345313 (including connections establishing)
tps = 3554.880776 (excluding connections establishing)

Head:
tps = 4208.876829 (including connections establishing)
tps = 4209.440321 (excluding connections establishing)

*** After changing fill factor to 90 ***

Patch:
tps = 3794.414770 (including connections establishing)
tps = 3794.919592 (excluding connections establishing)

Head:
tps = 4206.445608 (including connections establishing)
tps = 4207.033559 (excluding connections establishing)

[1]: /messages/by-id/CABOikdMduu9wOhfvNzqVuNW4YdBgbgwv-A=HNFCL7R5Tmbx7JA@mail.gmail.com

I have done some profiling of the patch and noticed that extra time is
spent in the heap_check_warm_chain function.

Top 10 functions in perf results (with patch):
+    8.98%     1.04%  postgres  postgres            [.] varstr_cmp
+    7.24%     0.00%  postgres  [unknown]           [.] 0000000000000000
+    6.34%     0.36%  postgres  libc-2.17.so        [.] clock_gettime
+    6.34%     0.00%  postgres  [unknown]           [.] 0x0000000000030000
+    6.18%     6.15%  postgres  [vdso]              [.] __vdso_clock_gettime
+    5.72%     0.02%  postgres  [kernel.kallsyms]   [k] system_call_fastpath
+    4.08%     4.06%  postgres  libc-2.17.so        [.] __memcpy_ssse3_back
+    4.08%     4.06%  postgres  libc-2.17.so        [.] get_next_seq
+    3.92%     0.00%  postgres  [unknown]           [.] 0x6161616161616161
+    3.07%     3.05%  postgres  postgres            [.] heap_check_warm_chain
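To make the profile concrete, the sketch below (an illustration, not the patch's actual heap_check_warm_chain) shows why WARM rechecks are sensitive to key width: every tuple fetched from a WARM chain pays an extra index-key comparison against the heap value, which with ~100-byte text keys like the aid column above shows up as varstr_cmp and memcpy time.

```c
#include <string.h>

/*
 * Illustrative sketch only -- not the patch's heap_check_warm_chain.
 * During a WARM recheck, the scan key must be compared against the value
 * recomputed from each heap tuple in the chain; the comparison cost grows
 * with the key length, so wide text keys make the recheck expensive.
 */
int
count_chain_matches(const char *scan_key, const char *const chain_values[],
                    int nvalues)
{
    int matches = 0;

    for (int i = 0; i < nvalues; i++)
    {
        /* per-tuple key comparison: the extra work WARM adds to scans */
        if (strcmp(scan_key, chain_values[i]) == 0)
            matches++;
    }
    return matches;
}
```

This also explains why the regression shrinks at fillfactor 90: fewer WARM chains fit per page, so fewer tuples need the recheck.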

Thanks to Amit for helping in discussing the test ideas.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


#219Simon Riggs
simon@2ndquadrant.com
In reply to: Robert Haas (#216)
Re: Patch: Write Amplification Reduction Method (WARM)

On 30 March 2017 at 16:50, Robert Haas <robertmhaas@gmail.com> wrote:

On Thu, Mar 30, 2017 at 11:41 AM, Andres Freund <andres@anarazel.de> wrote:

On 2017-03-30 16:43:41 +0530, Pavan Deolasee wrote:

Looks like OID conflict to me.. Please try rebased set.

Pavan, Alvaro, everyone: I know you guys are working very hard on this,
but I think at this point it's too late to commit this for v10. This is
a patch that's affecting the on-disk format, in quite subtle
ways. Committing this just at the end of the development cycle / shortly
before feature freeze, seems too dangerous to me.

Let's commit this just at the beginning of the cycle, so we have time to
shake out the bugs.

+1, although I think it should also have substantially more review first.

So Andres says defer this, but Robert says "more review", which is
more than just deferral.

We have some risky things in this release such as Hash Indexes,
function changes. I perfectly understand that perception of risk is
affected significantly by whether you wrote something or not. Andres
and Robert did not write it and so they see problems. I confess that
those two mentioned changes make me very scared and I'm wondering
whether we should disable them. Fear is normal.

A risk perspective is a good one to take. What I think we should do is
strip out the areas of complexity, like TOAST to reduce the footprint
and minimize the risk. There is benefit in WARM, and PostgreSQL has
received public criticism around our performance in this area. This
is more important than just a nice few % points of performance.

The bottom line is that this is written by Pavan, the guy we've
trusted for a decade to write and support HOT. We all know he can and
will fix any problems that emerge because he has shown us many times
he can and does.

We also observe that people from the same company sometimes support
their colleagues when they should not. I see no reason to believe that
is influencing my comments here.

The question is not whether this is ready today, but will it be
trusted and safe to use by Sept. Given the RMT, I would say yes, it
can be.

So I say we should commit WARM in PG10, with some restrictions.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


#220Robert Haas
robertmhaas@gmail.com
In reply to: Simon Riggs (#219)
Re: Patch: Write Amplification Reduction Method (WARM)

On Fri, Mar 31, 2017 at 7:53 AM, Simon Riggs <simon@2ndquadrant.com> wrote:

So Andres says defer this, but Robert says "more review", which is
more than just deferral.

We have some risky things in this release such as Hash Indexes,
function changes. I perfectly understand that perception of risk is
affected significantly by whether you wrote something or not. Andres
and Robert did not write it and so they see problems.

While that's probably true, I don't think that's the only thing going on here:

1. Hash indexes were reviewed and reworked repeatedly until nobody
could find any more problems, including people like Jesper Pedersen
who do not work for EDB and who did extensive testing. Similarly with
the expression evaluation stuff, which got some review from Heikki and
even more from Tom. Now, several people who do not work for
2ndQuadrant have recently started looking at WARM and many of those
reviews have found problems and regressions. If we're to hold things
to the same standard, those things should be looked into and fixed
before there is any talk of committing anything. My concern is that
there seems to be (even with the patches already committed) a desire
to minimize the importance of the problems that have been found --
which I think is probably because fixing them would take time, and we
don't have much time left in this release cycle. We should regard the
time between feature freeze and release as a time to fix the things
that good review missed, not as a substitute for fixing things that
should have been (or actually were) found during review prior to commit.

2. WARM is a non-optional feature which touches the on-disk format.
There is nothing more dangerous than that. If hash indexes have bugs,
people can avoid those bugs by not using them; there are good reasons
to suppose that hash indexes have very few existing users. The
expression evaluation changes, IMHO, are much more dangerous because
everyone will be exposed to them, but they will not likely corrupt
your data because they don't touch the on-disk format. WARM is even a
little more dangerous than that; everyone is exposed to those bugs,
and in the worst case they could eat your data.

I agree that WARM could be a pretty great feature, but I think you're
underestimating the negative effects that could result from committing
it too soon.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#221Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Robert Haas (#220)
Re: Patch: Write Amplification Reduction Method (WARM)

On Fri, Mar 31, 2017 at 6:47 PM, Robert Haas <robertmhaas@gmail.com> wrote:

2. WARM is a non-optional feature which touches the on-disk format.
There is nothing more dangerous than that. If hash indexes have bugs,
people can avoid those bugs by not using them; there are good reasons
to suppose that hash indexes have very few existing users. The
expression evaluation changes, IMHO, are much more dangerous because
everyone will be exposed to them, but they will not likely corrupt
your data because they don't touch the on-disk format. WARM is even a
little more dangerous than that; everyone is exposed to those bugs,
and in the worst case they could eat your data.

Having worked on it for some time now, I can say that WARM uses pretty much
the same infrastructure that HOT uses for cleanup/pruning tuples from the
heap. So the risk of having a bug which can eat your data from the heap is
very low. Sure, it might mess up indexes, return duplicate keys, or not
return a row when it should have. I'm not saying these are not bad bugs, but
they are probably much less severe than someone removing live rows from the heap.

And we can make it a table-level property, keep it off by default, turn it
off on system tables in this release, and change the defaults only when we
get more confidence, assuming people use it by explicitly turning it on. Now
maybe that's not the right approach, and keeping it off by default will
mean it receives much less testing than we would like. So we can keep it on
in the beta cycle and then take a call. I went to great lengths to make it
work on system tables because during HOT development, Tom told me that it
had better work for everything or not at all. With WARM it does work for
system tables and there are no known bugs, but if we don't want to
risk system tables, we might want to turn it off (just prior to release,
maybe).

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#222Robert Haas
robertmhaas@gmail.com
In reply to: Petr Jelinek (#214)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Mar 30, 2017 at 10:49 AM, Petr Jelinek
<petr.jelinek@2ndquadrant.com> wrote:

While reading this thread I am thinking if we could just not do WARM on
TOAST and compressed values if we know there might be regressions there.
I mean I've seen the problem WARM tries to solve mostly on timestamp or
boolean values and sometimes counters so it would still be helpful to
quite a lot of people even if we didn't do TOAST and compressed values
in v1. It's not like not doing WARM sometimes is somehow terrible, we'll
just fall back to current behavior.

Good point.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#223Robert Haas
robertmhaas@gmail.com
In reply to: Pavan Deolasee (#221)
Re: Patch: Write Amplification Reduction Method (WARM)

On Fri, Mar 31, 2017 at 10:24 AM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

Having worked on it for some time now, I can say that WARM uses pretty much
the same infrastructure that HOT uses for cleanup/pruning tuples from the
heap. So the risk of having a bug which can eat your data from the heap is
very low. Sure, it might mess up indexes, return duplicate keys, or not
return a row when it should have. I'm not saying these are not bad bugs, but
they are probably much less severe than someone removing live rows from the heap.

Yes, that's true. If there's nothing wrong with the way pruning
works, then any other problem can be fixed by reindexing, I suppose.

And we can make it a table-level property, keep it off by default, turn it
off on system tables in this release, and change the defaults only when we
get more confidence, assuming people use it by explicitly turning it on. Now
maybe that's not the right approach, and keeping it off by default will mean
it receives much less testing than we would like. So we can keep it on in
the beta cycle and then take a call. I went to great lengths to make it work on
system tables because during HOT development, Tom told me that it had better
work for everything or not at all. With WARM it does work for
system tables and there are no known bugs, but if we don't want to risk system
tables, we might want to turn it off (just prior to release, maybe).

I'm not generally a huge fan of on-off switches for things like this,
but I know Simon likes them. I think the question is how much they
really insulate us from bugs. For the hash index patch, for example,
the only way to really get insulation from bugs added in this release
would be to ship both the old and the new code in separate index AMs
(hash, hash2). The code has been restructured so much in the process
of doing all of this that any other form of on-off switch would be
pretty hit-or-miss whether it actually provided any protection.

Now, I am less sure about this case, but my guess is that you can't
really have this be something that can be flipped on and off for a
table. Once a table has any WARM updates in it, the code that knows
how to cope with that has to be enabled, and it'll work as well or
poorly as it does. Now, I understand you to be suggesting a flag at
table-creation time that would, maybe, be immutable after that, but
even then - are we going to run completely unmodified 9.6 code for
tables where that's not enabled, and only go through any of the WARM
logic when it is enabled? Doesn't sound likely. The commits already
made from this patch series certainly affect everybody, and I can't
see us adding switches that bypass
ce96ce60ca2293f75f36c3661e4657a3c79ffd61 for example.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#224Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Robert Haas (#223)
Re: Patch: Write Amplification Reduction Method (WARM)

On Fri, Mar 31, 2017 at 11:16 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Fri, Mar 31, 2017 at 10:24 AM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

Having worked on it for some time now, I can say that WARM uses pretty much
the same infrastructure that HOT uses for cleanup/pruning tuples from the
heap. So the risk of having a bug which can eat your data from the heap is
very low. Sure, it might mess up indexes, return duplicate keys, or not
return a row when it should have. I'm not saying these are not bad bugs, but
they are probably much less severe than someone removing live rows from the heap.

Yes, that's true. If there's nothing wrong with the way pruning
works, then any other problem can be fixed by reindexing, I suppose.

Yeah, I think so.

I'm not generally a huge fan of on-off switches for things like this,
but I know Simon likes them. I think the question is how much they
really insulate us from bugs. For the hash index patch, for example,
the only way to really get insulation from bugs added in this release
would be to ship both the old and the new code in separate index AMs
(hash, hash2). The code has been restructured so much in the process
of doing all of this that any other form of on-off switch would be
pretty hit-or-miss whether it actually provided any protection.

Now, I am less sure about this case, but my guess is that you can't
really have this be something that can be flipped on and off for a
table. Once a table has any WARM updates in it, the code that knows
how to cope with that has to be enabled, and it'll work as well or
poorly as it does.

That's correct. Once enabled, we will need to handle the case of two index
pointers pointing to the same root. The only way to get rid of that is
probably to do a complete rewrite/reindex, I suppose. But I was mostly talking
about immutable flag at table creation time as rightly guessed.

Now, I understand you to be suggesting a flag at
table-creation time that would, maybe, be immutable after that, but
even then - are we going to run completely unmodified 9.6 code for
tables where that's not enabled, and only go through any of the WARM
logic when it is enabled? Doesn't sound likely. The commits already
made from this patch series certainly affect everybody, and I can't
see us adding switches that bypass
ce96ce60ca2293f75f36c3661e4657a3c79ffd61 for example.

I don't think I am going to claim that either. But probably only 5% of the
new code would then be involved. Which is a lot less and a lot more
manageable. Having said that, I think if we do this at all, we should only
do it based on our experiences in the beta cycle, as a last resort. Based
on my own experiences during HOT development, long running pgbench tests,
with several concurrent clients, subjected to multiple AV cycles and
periodic consistency checks, usually brings up issues related to heap
corruption. So my confidence level is relatively high on that part of the
code. That's not to suggest that there can't be any bugs.

Obviously there are other things, such as regressions in some workloads
or the additional work required by vacuum, etc. I think we should address
them, and I'm fairly certain we can do that. It may not happen immediately,
but if we provide the right knobs, maybe those who are affected can fall back
to the old behaviour or not use the new code at all while we improve things
for them. Some of these things I could have already implemented, but
without a clear understanding of whether the feature will get in or not,
it's hard to keep putting infinite effort into the patch. All
non-committers go through that dilemma all the time, I'm sure.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#225Jeff Janes
jeff.janes@gmail.com
In reply to: Pavan Deolasee (#207)
1 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Mar 30, 2017 at 4:13 AM, Pavan Deolasee <pavan.deolasee@gmail.com>
wrote:

On Thu, Mar 30, 2017 at 3:29 PM, Dilip Kumar <dilipbalaut@gmail.com>
wrote:

On Wed, Mar 29, 2017 at 11:51 AM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

Thanks. I think your patch of tracking interesting attributes seems ok too
after the performance issue was addressed. Even though we can still improve
that further, at least Mithun confirmed that there is no significant
regression anymore and in fact for one artificial case, patch does better
than even master.

I was trying to compile these patches on latest
head(f90d23d0c51895e0d7db7910538e85d3d38691f0) for some testing but I
was not able to compile it.

make[3]: *** [postgres.bki] Error 1

Looks like OID conflict to me.. Please try rebased set.

broken again on OID conflicts for 3373 to 3375 from the monitoring
permissions commit 25fff40798fc4.

After bumping those, I get these compiler warnings:

heapam.c: In function 'heap_delete':
heapam.c:3298: warning: 'root_offnum' may be used uninitialized in this
function
heapam.c: In function 'heap_update':
heapam.c:4311: warning: 'root_offnum' may be used uninitialized in this
function
heapam.c:4311: note: 'root_offnum' was declared here
heapam.c:3784: warning: 'root_offnum' may be used uninitialized in this
function
heapam.c: In function 'heap_lock_tuple':
heapam.c:5087: warning: 'root_offnum' may be used uninitialized in this
function
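
The warnings come from root_offnum being assigned only on some code paths. A minimal sketch of the pattern (hypothetical, not the actual heapam.c code) and the usual fix of initializing to InvalidOffsetNumber:

```c
/* Minimal sketch of the warning pattern; not the actual heapam.c code. */
typedef unsigned short OffsetNumber;
#define InvalidOffsetNumber ((OffsetNumber) 0)

OffsetNumber
pick_root_offnum(int found_root, OffsetNumber chain_root)
{
    /*
     * Without this explicit initialization, older gcc cannot prove that
     * root_offnum is assigned on every path before use and emits
     * "'root_offnum' may be used uninitialized in this function".
     */
    OffsetNumber root_offnum = InvalidOffsetNumber;

    if (found_root)
        root_offnum = chain_root;

    return root_offnum;
}
```

Whether the warning is a false positive depends on whether the unassigned path can actually be reached; initializing up front is harmless either way.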

And I get a regression test failure, attached.

Cheers,

Jeff

Attachments:

regression.diffs (application/octet-stream)
*** /home/jjanes/pgsql/git/src/test/regress/expected/warm.out	Fri Mar 31 11:53:15 2017
--- /home/jjanes/pgsql/git/src/test/regress/results/warm.out	Fri Mar 31 11:58:25 2017
***************
*** 786,792 ****
  -------------------------------------------------------------------------------------------
   Index Only Scan using test_vacuum_warm_index1 on test_vacuum_warm (actual rows=1 loops=1)
     Index Cond: (b = 'u'::text)
!    Heap Fetches: 1
  (3 rows)
  
  -- All WARM chains cleaned up, so index-only scan should be used now without
--- 786,792 ----
  -------------------------------------------------------------------------------------------
   Index Only Scan using test_vacuum_warm_index1 on test_vacuum_warm (actual rows=1 loops=1)
     Index Cond: (b = 'u'::text)
!    Heap Fetches: 0
  (3 rows)
  
  -- All WARM chains cleaned up, so index-only scan should be used now without

======================================================================

#226Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Jeff Janes (#225)
5 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Sat, Apr 1, 2017 at 12:39 AM, Jeff Janes <jeff.janes@gmail.com> wrote:

On Thu, Mar 30, 2017 at 4:13 AM, Pavan Deolasee <pavan.deolasee@gmail.com>
wrote:

On Thu, Mar 30, 2017 at 3:29 PM, Dilip Kumar <dilipbalaut@gmail.com>
wrote:

On Wed, Mar 29, 2017 at 11:51 AM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

Thanks. I think your patch of tracking interesting attributes seems ok too
after the performance issue was addressed. Even though we can still improve
that further, at least Mithun confirmed that there is no significant
regression anymore and in fact for one artificial case, patch does better
than even master.

I was trying to compile these patches on latest
head(f90d23d0c51895e0d7db7910538e85d3d38691f0) for some testing but I
was not able to compile it.

make[3]: *** [postgres.bki] Error 1

Looks like OID conflict to me.. Please try rebased set.

broken again on OID conflicts for 3373 to 3375 from the monitoring
permissions commit 25fff40798fc4.

Hi Jeff,

Thanks for trying. Much appreciated,

After bumping those, I get these compiler warnings:

heapam.c: In function 'heap_delete':
heapam.c:3298: warning: 'root_offnum' may be used uninitialized in this
function
heapam.c: In function 'heap_update':
heapam.c:4311: warning: 'root_offnum' may be used uninitialized in this
function
heapam.c:4311: note: 'root_offnum' was declared here
heapam.c:3784: warning: 'root_offnum' may be used uninitialized in this
function
heapam.c: In function 'heap_lock_tuple':
heapam.c:5087: warning: 'root_offnum' may be used uninitialized in this
function

Thanks. I don't see them with my LLVM compiler even at -O2. Anyway, I
inspected them; they all looked non-problematic, but they are fixed in the
attached version v24, along with some others I could see on another Linux machine.

And I get a regression test failure, attached.

Thanks again. It seems my last changes to disallow WARM updates when more
than 50% of the indexes are updated caused this regression. Having various
features in different branches and merging them right before sending out
the patchset was probably not the smartest thing to do. I've fixed the
regression simply by adding another index on that table and making changes
to the expected output.

BTW I still see 2 regression failures, but I see them on the master too,
so they are not related to the patch. Attached here.

Thanks,
Pavan

Attachments:

0004-Provide-control-knobs-to-decide-when-to-do-heap-_v24.patch (application/octet-stream)
From c75a37bfdfa47f821faaf53e8201dd09d401c7b8 Mon Sep 17 00:00:00 2001
From: Pavan Deolasee <pavan.deolasee@gmail.com>
Date: Wed, 29 Mar 2017 11:16:29 +0530
Subject: [PATCH 4/4] Provide control knobs to decide when to do heap and index
 WARM cleanup.

We provide two knobs to control maintenance activity on WARM. A GUC
autovacuum_warm_cleanup_scale_factor can be set to trigger WARM cleanup.
Similarly, a GUC autovacuum_warm_cleanup_index_scale_factor can be set to
determine when to do index cleanup. Note that in the current design VACUUM
needs two index scans to remove a WARM index pointer. Hence we want to do that
work only when it makes sense (i.e. the index has a significant number of WARM
pointers).

Similarly, the VACUUM command is enhanced to accept another parameter, WARMCLEAN;
if specified, only WARM cleanup will be carried out.
---
 src/backend/access/common/reloptions.c |  22 +++
 src/backend/catalog/system_views.sql   |   1 +
 src/backend/commands/analyze.c         |  60 +++++--
 src/backend/commands/vacuum.c          |   2 +
 src/backend/commands/vacuumlazy.c      | 320 +++++++++++++++++++++++++--------
 src/backend/parser/gram.y              |  26 ++-
 src/backend/postmaster/autovacuum.c    |  58 +++++-
 src/backend/postmaster/pgstat.c        |  50 +++++-
 src/backend/utils/adt/pgstatfuncs.c    |  15 ++
 src/backend/utils/init/globals.c       |   3 +
 src/backend/utils/misc/guc.c           |  30 ++++
 src/include/catalog/pg_proc.h          |   2 +
 src/include/commands/vacuum.h          |   2 +
 src/include/foreign/fdwapi.h           |   3 +-
 src/include/miscadmin.h                |   1 +
 src/include/nodes/parsenodes.h         |   3 +-
 src/include/parser/kwlist.h            |   1 +
 src/include/pgstat.h                   |  11 +-
 src/include/postmaster/autovacuum.h    |   2 +
 src/include/utils/guc_tables.h         |   1 +
 src/include/utils/rel.h                |   2 +
 src/test/regress/expected/rules.out    |   3 +
 src/test/regress/expected/warm.out     |  59 ++++++
 src/test/regress/sql/warm.sql          |  47 +++++
 24 files changed, 614 insertions(+), 110 deletions(-)

diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index de7507a..82823c8 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -338,6 +338,24 @@ static relopt_real realRelOpts[] =
 	},
 	{
 		{
+			"autovacuum_warmcleanup_scale_factor",
+			"Number of WARM chains prior to WARM cleanup as a fraction of reltuples",
+			RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
+			ShareUpdateExclusiveLock
+		},
+		-1, 0.0, 100.0
+	},
+	{
+		{
+			"autovacuum_warmcleanup_index_scale_factor",
+			"Number of WARM pointers in an index prior to WARM cleanup as a fraction of total WARM chains",
+			RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
+			ShareUpdateExclusiveLock
+		},
+		-1, 0.0, 100.0
+	},
+	{
+		{
 			"autovacuum_analyze_scale_factor",
 			"Number of tuple inserts, updates or deletes prior to analyze as a fraction of reltuples",
 			RELOPT_KIND_HEAP,
@@ -1339,6 +1357,10 @@ default_reloptions(Datum reloptions, bool validate, relopt_kind kind)
 		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, vacuum_scale_factor)},
 		{"autovacuum_analyze_scale_factor", RELOPT_TYPE_REAL,
 		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, analyze_scale_factor)},
+		{"autovacuum_warmcleanup_scale_factor", RELOPT_TYPE_REAL,
+		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, warmcleanup_scale_factor)},
+		{"autovacuum_warmcleanup_index_scale_factor", RELOPT_TYPE_REAL,
+		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, warmcleanup_index_scale)},
 		{"user_catalog_table", RELOPT_TYPE_BOOL,
 		offsetof(StdRdOptions, user_catalog_table)},
 		{"parallel_workers", RELOPT_TYPE_INT,
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 4ef964f..363fdf0 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -533,6 +533,7 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_tuples_warm_updated(C.oid) AS n_tup_warm_upd,
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
+            pg_stat_get_warm_chains(C.oid) AS n_warm_chains,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
             pg_stat_get_last_vacuum_time(C.oid) as last_vacuum,
             pg_stat_get_last_autovacuum_time(C.oid) as last_autovacuum,
diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index 404acb2..6c4fc4e 100644
--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -93,7 +93,8 @@ static VacAttrStats *examine_attribute(Relation onerel, int attnum,
 				  Node *index_expr);
 static int acquire_sample_rows(Relation onerel, int elevel,
 					HeapTuple *rows, int targrows,
-					double *totalrows, double *totaldeadrows);
+					double *totalrows, double *totaldeadrows,
+					double *totalwarmchains);
 static int	compare_rows(const void *a, const void *b);
 static int acquire_inherited_sample_rows(Relation onerel, int elevel,
 							  HeapTuple *rows, int targrows,
@@ -320,7 +321,8 @@ do_analyze_rel(Relation onerel, int options, VacuumParams *params,
 	int			targrows,
 				numrows;
 	double		totalrows,
-				totaldeadrows;
+				totaldeadrows,
+				totalwarmchains;
 	HeapTuple  *rows;
 	PGRUsage	ru0;
 	TimestampTz starttime = 0;
@@ -501,7 +503,8 @@ do_analyze_rel(Relation onerel, int options, VacuumParams *params,
 	else
 		numrows = (*acquirefunc) (onerel, elevel,
 								  rows, targrows,
-								  &totalrows, &totaldeadrows);
+								  &totalrows, &totaldeadrows,
+								  &totalwarmchains);
 
 	/*
 	 * Compute the statistics.  Temporary results during the calculations for
@@ -631,7 +634,7 @@ do_analyze_rel(Relation onerel, int options, VacuumParams *params,
 	 */
 	if (!inh)
 		pgstat_report_analyze(onerel, totalrows, totaldeadrows,
-							  (va_cols == NIL));
+							  totalwarmchains, (va_cols == NIL));
 
 	/* If this isn't part of VACUUM ANALYZE, let index AMs do cleanup */
 	if (!(options & VACOPT_VACUUM))
@@ -991,12 +994,14 @@ examine_attribute(Relation onerel, int attnum, Node *index_expr)
 static int
 acquire_sample_rows(Relation onerel, int elevel,
 					HeapTuple *rows, int targrows,
-					double *totalrows, double *totaldeadrows)
+					double *totalrows, double *totaldeadrows,
+					double *totalwarmchains)
 {
 	int			numrows = 0;	/* # rows now in reservoir */
 	double		samplerows = 0; /* total # rows collected */
 	double		liverows = 0;	/* # live rows seen */
 	double		deadrows = 0;	/* # dead rows seen */
+	double		warmchains = 0;
 	double		rowstoskip = -1;	/* -1 means not set yet */
 	BlockNumber totalblocks;
 	TransactionId OldestXmin;
@@ -1023,9 +1028,14 @@ acquire_sample_rows(Relation onerel, int elevel,
 		Page		targpage;
 		OffsetNumber targoffset,
 					maxoffset;
+		bool		marked[MaxHeapTuplesPerPage];
+		OffsetNumber root_offsets[MaxHeapTuplesPerPage];
 
 		vacuum_delay_point();
 
+		/* Track which root line pointers are already counted. */
+		memset(marked, 0, sizeof (marked));
+
 		/*
 		 * We must maintain a pin on the target page's buffer to ensure that
 		 * the maxoffset value stays good (else concurrent VACUUM might delete
@@ -1041,6 +1051,9 @@ acquire_sample_rows(Relation onerel, int elevel,
 		targpage = BufferGetPage(targbuffer);
 		maxoffset = PageGetMaxOffsetNumber(targpage);
 
+		/* Get all root line pointers first */
+		heap_get_root_tuples(targpage, root_offsets);
+
 		/* Inner loop over all tuples on the selected page */
 		for (targoffset = FirstOffsetNumber; targoffset <= maxoffset; targoffset++)
 		{
@@ -1069,6 +1082,22 @@ acquire_sample_rows(Relation onerel, int elevel,
 			targtuple.t_data = (HeapTupleHeader) PageGetItem(targpage, itemid);
 			targtuple.t_len = ItemIdGetLength(itemid);
 
+			/*
+			 * If this is a WARM-updated tuple, check if we have already seen
+			 * the root line pointer. If not, count this as a WARM chain. This
+			 * ensures that we count every WARM-chain just once, irrespective
+			 * of how many tuples exist in the chain.
+			 */
+			if (HeapTupleHeaderIsWarmUpdated(targtuple.t_data))
+			{
+				OffsetNumber root_offnum = root_offsets[targoffset];
+				if (!marked[root_offnum])
+				{
+					warmchains += 1;
+					marked[root_offnum] = true;
+				}
+			}
+
 			switch (HeapTupleSatisfiesVacuum(&targtuple,
 											 OldestXmin,
 											 targbuffer))
@@ -1200,18 +1229,24 @@ acquire_sample_rows(Relation onerel, int elevel,
 
 	/*
 	 * Estimate total numbers of rows in relation.  For live rows, use
-	 * vac_estimate_reltuples; for dead rows, we have no source of old
-	 * information, so we have to assume the density is the same in unseen
-	 * pages as in the pages we scanned.
+	 * vac_estimate_reltuples; for dead rows and WARM chains, we have no source
+	 * of old information, so we have to assume the density is the same in
+	 * unseen pages as in the pages we scanned.
 	 */
 	*totalrows = vac_estimate_reltuples(onerel, true,
 										totalblocks,
 										bs.m,
 										liverows);
 	if (bs.m > 0)
+	{
 		*totaldeadrows = floor((deadrows / bs.m) * totalblocks + 0.5);
+		*totalwarmchains = floor((warmchains / bs.m) * totalblocks + 0.5);
+	}
 	else
+	{
 		*totaldeadrows = 0.0;
+		*totalwarmchains = 0.0;
+	}
 
 	/*
 	 * Emit some interesting relation info
@@ -1219,11 +1254,13 @@ acquire_sample_rows(Relation onerel, int elevel,
 	ereport(elevel,
 			(errmsg("\"%s\": scanned %d of %u pages, "
 					"containing %.0f live rows and %.0f dead rows; "
-					"%d rows in sample, %.0f estimated total rows",
+					"%d rows in sample, %.0f estimated total rows; "
+					"%.0f warm chains",
 					RelationGetRelationName(onerel),
 					bs.m, totalblocks,
 					liverows, deadrows,
-					numrows, *totalrows)));
+					numrows, *totalrows,
+					*totalwarmchains)));
 
 	return numrows;
 }
@@ -1428,11 +1465,12 @@ acquire_inherited_sample_rows(Relation onerel, int elevel,
 				int			childrows;
 				double		trows,
 							tdrows;
+				double		twarmchains;
 
 				/* Fetch a random sample of the child's rows */
 				childrows = (*acquirefunc) (childrel, elevel,
 											rows + numrows, childtargrows,
-											&trows, &tdrows);
+											&trows, &tdrows, &twarmchains);
 
 				/* We may need to convert from child's rowtype to parent's */
 				if (childrows > 0 &&
diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index 9fbb0eb..52a7838 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -103,6 +103,7 @@ ExecVacuum(VacuumStmt *vacstmt, bool isTopLevel)
 		params.freeze_table_age = 0;
 		params.multixact_freeze_min_age = 0;
 		params.multixact_freeze_table_age = 0;
+		params.warmcleanup_index_scale = -1;
 	}
 	else
 	{
@@ -110,6 +111,7 @@ ExecVacuum(VacuumStmt *vacstmt, bool isTopLevel)
 		params.freeze_table_age = -1;
 		params.multixact_freeze_min_age = -1;
 		params.multixact_freeze_table_age = -1;
+		params.warmcleanup_index_scale = -1;
 	}
 
 	/* user-invoked vacuum is never "for wraparound" */
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index f52490f..2b0742c 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -156,18 +156,23 @@ typedef struct LVRelStats
 	double		tuples_deleted;
 	BlockNumber nonempty_pages; /* actually, last nonempty page + 1 */
 
+	int			maxtuples;		/* maxtuples computed while allocating space */
+	Size		work_area_size;	/* working area size */
+	char		*work_area;		/* working area for storing dead tuples and
+								 * warm chains */
 	/* List of candidate WARM chains that can be converted into HOT chains */
-	/* NB: this list is ordered by TID of the root pointers */
+	/*
+	 * NB: this array grows downwards from the end of the work area and is
+	 * ordered by TID of the root pointers, with the lowest TID nearest the end
+	 */
 	int				num_warm_chains;	/* current # of entries */
-	int				max_warm_chains;	/* # slots allocated in array */
 	LVWarmChain 	*warm_chains;		/* array of LVWarmChain */
 	double			num_non_convertible_warm_chains;
-
 	/* List of TIDs of tuples we intend to delete */
 	/* NB: this list is ordered by TID address */
 	int			num_dead_tuples;	/* current # of entries */
-	int			max_dead_tuples;	/* # slots allocated in array */
 	ItemPointer dead_tuples;	/* array of ItemPointerData */
+
 	int			num_index_scans;
 	TransactionId latestRemovedXid;
 	bool		lock_waiter_detected;
@@ -187,11 +192,12 @@ static BufferAccessStrategy vac_strategy;
 /* non-export function prototypes */
 static void lazy_scan_heap(Relation onerel, int options,
 			   LVRelStats *vacrelstats, Relation *Irel, int nindexes,
-			   bool aggressive);
+			   bool aggressive, double warmcleanup_index_scale);
 static void lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats);
 static bool lazy_check_needs_freeze(Buffer buf, bool *hastup);
 static void lazy_vacuum_index(Relation indrel,
 				  bool clear_warm,
+				  double warmcleanup_index_scale,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats);
 static void lazy_cleanup_index(Relation indrel,
@@ -207,7 +213,8 @@ static bool should_attempt_truncation(LVRelStats *vacrelstats);
 static void lazy_truncate_heap(Relation onerel, LVRelStats *vacrelstats);
 static BlockNumber count_nondeletable_pages(Relation onerel,
 						 LVRelStats *vacrelstats);
-static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks);
+static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks,
+					   bool dowarmcleanup);
 static void lazy_record_dead_tuple(LVRelStats *vacrelstats,
 					   ItemPointer itemptr);
 static void lazy_record_warm_chain(LVRelStats *vacrelstats,
@@ -283,6 +290,9 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 						  &OldestXmin, &FreezeLimit, &xidFullScanLimit,
 						  &MultiXactCutoff, &mxactFullScanLimit);
 
+	/* Use default if the caller hasn't specified any value */
+	if (params->warmcleanup_index_scale == -1)
+		params->warmcleanup_index_scale = VacuumWarmCleanupIndexScale;
 	/*
 	 * We request an aggressive scan if the table's frozen Xid is now older
 	 * than or equal to the requested Xid full-table scan limit; or if the
@@ -309,7 +319,8 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 	vacrelstats->hasindex = (nindexes > 0);
 
 	/* Do the vacuuming */
-	lazy_scan_heap(onerel, options, vacrelstats, Irel, nindexes, aggressive);
+	lazy_scan_heap(onerel, options, vacrelstats, Irel, nindexes, aggressive,
+			params->warmcleanup_index_scale);
 
 	/* Done with indexes */
 	vac_close_indexes(nindexes, Irel, NoLock);
@@ -396,7 +407,8 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 	pgstat_report_vacuum(RelationGetRelid(onerel),
 						 onerel->rd_rel->relisshared,
 						 new_live_tuples,
-						 vacrelstats->new_dead_tuples);
+						 vacrelstats->new_dead_tuples,
+						 vacrelstats->num_non_convertible_warm_chains);
 	pgstat_progress_end_command();
 
 	/* and log the action if appropriate */
@@ -507,10 +519,19 @@ vacuum_log_cleanup_info(Relation rel, LVRelStats *vacrelstats)
  *		If there are no indexes then we can reclaim line pointers on the fly;
  *		dead line pointers need only be retained until all index pointers that
  *		reference them have been killed.
+ *
+ *		warmcleanup_index_scale specifies a threshold on the number of WARM
+ *		pointers in an index, expressed as a fraction of the total candidate
+ *		WARM chains. If we find fewer WARM pointers in an index than the
+ *		specified fraction, then we skip WARM cleanup for that index. If
+ *		WARM cleanup is skipped for any one index, the affected WARM chains
+ *		can't be cleared in the heap, no further WARM updates are possible
+ *		to such chains, and they are not considered for cleanup in other indexes.
  */
 static void
 lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
-			   Relation *Irel, int nindexes, bool aggressive)
+			   Relation *Irel, int nindexes, bool aggressive,
+			   double warmcleanup_index_scale)
 {
 	BlockNumber nblocks,
 				blkno;
@@ -536,6 +557,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		PROGRESS_VACUUM_MAX_DEAD_TUPLES
 	};
 	int64		initprog_val[3];
+	bool		dowarmcleanup = ((options & VACOPT_WARM_CLEANUP) != 0);
 
 	pg_rusage_init(&ru0);
 
@@ -558,13 +580,13 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 	vacrelstats->nonempty_pages = 0;
 	vacrelstats->latestRemovedXid = InvalidTransactionId;
 
-	lazy_space_alloc(vacrelstats, nblocks);
+	lazy_space_alloc(vacrelstats, nblocks, dowarmcleanup);
 	frozen = palloc(sizeof(xl_heap_freeze_tuple) * MaxHeapTuplesPerPage);
 
 	/* Report that we're scanning the heap, advertising total # of blocks */
 	initprog_val[0] = PROGRESS_VACUUM_PHASE_SCAN_HEAP;
 	initprog_val[1] = nblocks;
-	initprog_val[2] = vacrelstats->max_dead_tuples;
+	initprog_val[2] = vacrelstats->maxtuples;
 	pgstat_progress_update_multi_param(3, initprog_index, initprog_val);
 
 	/*
@@ -656,6 +678,11 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		bool		all_frozen = true;	/* provided all_visible is also true */
 		bool		has_dead_tuples;
 		TransactionId visibility_cutoff_xid = InvalidTransactionId;
+		char		*end_deads;
+		char		*end_warms;
+		Size		free_work_area;
+		int			avail_dead_tuples;
+		int			avail_warm_chains;
 
 		/* see note above about forcing scanning of last page */
 #define FORCE_CHECK_PAGE() \
@@ -740,13 +767,39 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		vacuum_delay_point();
 
 		/*
+		 * The dead tuples are stored starting at the beginning of the work
+		 * area, growing towards higher addresses. The candidate WARM chains
+		 * are stored starting at the end of the work area, growing towards
+		 * lower addresses. Once the gap between the two segments is too small
+		 * to accommodate potentially all tuples in the current page, we stop
+		 * and do one round of index cleanup.
+		 */
+		end_deads = (char *)(vacrelstats->dead_tuples + vacrelstats->num_dead_tuples);
+
+		/*
+		 * If we are not doing WARM cleanup, then the entire work area is used
+		 * by the dead tuples.
+		 */
+		if (vacrelstats->warm_chains)
+		{
+			end_warms = (char *)(vacrelstats->warm_chains - vacrelstats->num_warm_chains);
+			free_work_area = end_warms - end_deads;
+			avail_warm_chains = (free_work_area / sizeof (LVWarmChain));
+		}
+		else
+		{
+			free_work_area = vacrelstats->work_area +
+				vacrelstats->work_area_size - end_deads;
+			avail_warm_chains = 0;
+		}
+		avail_dead_tuples = (free_work_area / sizeof (ItemPointerData));
+
+		/*
 		 * If we are close to overrunning the available space for dead-tuple
 		 * TIDs, pause and do a cycle of vacuuming before we tackle this page.
 		 */
-		if (((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
-			vacrelstats->num_dead_tuples > 0) ||
-			((vacrelstats->max_warm_chains - vacrelstats->num_warm_chains) < MaxHeapTuplesPerPage &&
-			 vacrelstats->num_warm_chains > 0))
+		if ((avail_dead_tuples < MaxHeapTuplesPerPage && vacrelstats->num_dead_tuples > 0) ||
+			(avail_warm_chains < MaxHeapTuplesPerPage && vacrelstats->num_warm_chains > 0))
 		{
 			const int	hvp_index[] = {
 				PROGRESS_VACUUM_PHASE,
@@ -776,7 +829,8 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			/* Remove index entries */
 			for (i = 0; i < nindexes; i++)
 				lazy_vacuum_index(Irel[i],
-								  (vacrelstats->num_warm_chains > 0),
+								  dowarmcleanup && (vacrelstats->num_warm_chains > 0),
+								  warmcleanup_index_scale,
 								  &indstats[i],
 								  vacrelstats);
 
@@ -800,8 +854,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			 */
 			vacrelstats->num_dead_tuples = 0;
 			vacrelstats->num_warm_chains = 0;
-			memset(vacrelstats->warm_chains, 0,
-					vacrelstats->max_warm_chains * sizeof (LVWarmChain));
+			memset(vacrelstats->work_area, 0, vacrelstats->work_area_size);
 			vacrelstats->num_index_scans++;
 
 			/* Report that we are once again scanning the heap */
@@ -1408,7 +1461,8 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		/* Remove index entries */
 		for (i = 0; i < nindexes; i++)
 			lazy_vacuum_index(Irel[i],
-							  (vacrelstats->num_warm_chains > 0),
+							  dowarmcleanup && (vacrelstats->num_warm_chains > 0),
+							  warmcleanup_index_scale,
 							  &indstats[i],
 							  vacrelstats);
 
@@ -1513,9 +1567,12 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 		vacuum_delay_point();
 
 		tblk = chainblk = InvalidBlockNumber;
-		if (chainindex < vacrelstats->num_warm_chains)
-			chainblk =
-				ItemPointerGetBlockNumber(&(vacrelstats->warm_chains[chainindex].chain_tid));
+		if (vacrelstats->warm_chains &&
+			chainindex < vacrelstats->num_warm_chains)
+		{
+			LVWarmChain *chain = vacrelstats->warm_chains - (chainindex + 1);
+			chainblk = ItemPointerGetBlockNumber(&chain->chain_tid);
+		}
 
 		if (tupindex < vacrelstats->num_dead_tuples)
 			tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
@@ -1613,7 +1670,8 @@ lazy_warmclear_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 		BlockNumber tblk;
 		LVWarmChain	*chain;
 
-		chain = &vacrelstats->warm_chains[chainindex];
+		/* The warm chains are indexed from bottom */
+		chain = vacrelstats->warm_chains - (chainindex + 1);
 
 		tblk = ItemPointerGetBlockNumber(&chain->chain_tid);
 		if (tblk != blkno)
@@ -1847,9 +1905,11 @@ static void
 lazy_reset_warm_pointer_count(LVRelStats *vacrelstats)
 {
 	int i;
-	for (i = 0; i < vacrelstats->num_warm_chains; i++)
+
+	/* Start from the bottom and move upwards */
+	for (i = 1; i <= vacrelstats->num_warm_chains; i++)
 	{
-		LVWarmChain *chain = &vacrelstats->warm_chains[i];
+		LVWarmChain *chain = (vacrelstats->warm_chains - i);
 		chain->num_clear_pointers = chain->num_warm_pointers = 0;
 	}
 }
@@ -1863,6 +1923,7 @@ lazy_reset_warm_pointer_count(LVRelStats *vacrelstats)
 static void
 lazy_vacuum_index(Relation indrel,
 				  bool clear_warm,
+				  double warmcleanup_index_scale,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats)
 {
@@ -1927,25 +1988,57 @@ lazy_vacuum_index(Relation indrel,
 						(*stats)->warm_pointers_removed,
 						(*stats)->clear_pointers_removed)));
 
-		(*stats)->num_warm_pointers = 0;
-		(*stats)->num_clear_pointers = 0;
-		(*stats)->warm_pointers_removed = 0;
-		(*stats)->clear_pointers_removed = 0;
-		(*stats)->pointers_cleared = 0;
+		/*
+		 * If the number of WARM pointers found in the index is greater than
+		 * the configured fraction of total candidate WARM chains, then do the
+		 * second index scan to clean up WARM chains.
+		 *
+		 * Otherwise we must mark these WARM chains as non-convertible.
+		 */
+		if ((*stats)->num_warm_pointers >
+				((double)vacrelstats->num_warm_chains * warmcleanup_index_scale))
+		{
+			(*stats)->num_warm_pointers = 0;
+			(*stats)->num_clear_pointers = 0;
+			(*stats)->warm_pointers_removed = 0;
+			(*stats)->clear_pointers_removed = 0;
+			(*stats)->pointers_cleared = 0;
+
+			*stats = index_bulk_delete(&ivinfo, *stats,
+					lazy_indexvac_phase2, (void *) vacrelstats);
+			ereport(elevel,
+					(errmsg("scanned index \"%s\" to convert WARM pointers, found "
+							"%.0f WARM pointers, %.0f CLEAR pointers, removed "
+							"%.0f WARM pointers, removed %.0f CLEAR pointers, "
+							"cleared %.0f WARM pointers",
+							RelationGetRelationName(indrel),
+							(*stats)->num_warm_pointers,
+							(*stats)->num_clear_pointers,
+							(*stats)->warm_pointers_removed,
+							(*stats)->clear_pointers_removed,
+							(*stats)->pointers_cleared)));
+		}
+		else
+		{
+			int ii;
 
-		*stats = index_bulk_delete(&ivinfo, *stats,
-				lazy_indexvac_phase2, (void *) vacrelstats);
-		ereport(elevel,
-				(errmsg("scanned index \"%s\" to convert WARM pointers, found "
-						"%0.f WARM pointers, %0.f CLEAR pointers, removed "
-						"%0.f WARM pointers, removed %0.f CLEAR pointers, "
-						"cleared %0.f WARM pointers",
-						RelationGetRelationName(indrel),
-						(*stats)->num_warm_pointers,
-						(*stats)->num_clear_pointers,
-						(*stats)->warm_pointers_removed,
-						(*stats)->clear_pointers_removed,
-						(*stats)->pointers_cleared)));
+			/*
+			 * All chains skipped by this index are marked non-convertible.
+			 *
+			 * Start from bottom and move upwards.
+			 */
+			for (ii = 1; ii <= vacrelstats->num_warm_chains; ii++)
+			{
+				LVWarmChain *chain = vacrelstats->warm_chains - ii;
+				if (chain->num_warm_pointers > 0 ||
+					chain->num_clear_pointers > 1)
+				{
+					chain->keep_warm_chain = 1;
+					vacrelstats->num_non_convertible_warm_chains++;
+				}
+			}
+
+		}
 	}
 	else
 	{
@@ -2323,7 +2416,8 @@ count_nondeletable_pages(Relation onerel, LVRelStats *vacrelstats)
  * See the comments at the head of this file for rationale.
  */
 static void
-lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
+lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks,
+				 bool dowarmcleanup)
 {
 	long		maxtuples;
 	int			vac_work_mem = IsAutoVacuumWorkerProcess() &&
@@ -2332,11 +2426,16 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 
 	if (vacrelstats->hasindex)
 	{
+		/*
+		 * If we're not doing WARM cleanup then the entire memory is available
+		 * for tracking dead tuples. Otherwise it gets split between tracking
+		 * dead tuples and tracking WARM chains.
+		 */
 		maxtuples = (vac_work_mem * 1024L) / (sizeof(ItemPointerData) +
-				sizeof(LVWarmChain));
+				(dowarmcleanup ? sizeof(LVWarmChain) : 0));
 		maxtuples = Min(maxtuples, INT_MAX);
 		maxtuples = Min(maxtuples, MaxAllocSize / (sizeof(ItemPointerData) +
-					sizeof(LVWarmChain)));
+				(dowarmcleanup ? sizeof(LVWarmChain) : 0)));
 
 		/* curious coding here to ensure the multiplication can't overflow */
 		if ((BlockNumber) (maxtuples / LAZY_ALLOC_TUPLES) > relblocks)
@@ -2350,21 +2449,29 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 		maxtuples = MaxHeapTuplesPerPage;
 	}
 
-	vacrelstats->num_dead_tuples = 0;
-	vacrelstats->max_dead_tuples = (int) maxtuples;
-	vacrelstats->dead_tuples = (ItemPointer)
-		palloc(maxtuples * sizeof(ItemPointerData));
-
-	/*
-	 * XXX Cheat for now and allocate the same size array for tracking warm
-	 * chains. maxtuples must have been already adjusted above to ensure we
-	 * don't cross vac_work_mem.
+	/*
+	 * Allocate a work area of the desired size and set dead_tuples and
+	 * warm_chains to its start and end respectively. They grow in opposite
+	 * directions as entries are added. If we are not doing WARM cleanup,
+	 * the entire area is used for tracking dead tuples.
 	 */
-	vacrelstats->num_warm_chains = 0;
-	vacrelstats->max_warm_chains = (int) maxtuples;
-	vacrelstats->warm_chains = (LVWarmChain *)
-		palloc0(maxtuples * sizeof(LVWarmChain));
+	vacrelstats->work_area_size = maxtuples * (sizeof(ItemPointerData) +
+				(dowarmcleanup ? sizeof(LVWarmChain) : 0));
+	vacrelstats->work_area = (char *) palloc0(vacrelstats->work_area_size);
+	vacrelstats->num_dead_tuples = 0;
+	vacrelstats->dead_tuples = (ItemPointer)vacrelstats->work_area;
+	vacrelstats->maxtuples = maxtuples;
 
+	if (dowarmcleanup)
+	{
+		vacrelstats->num_warm_chains = 0;
+		vacrelstats->warm_chains = (LVWarmChain *)
+			(vacrelstats->work_area + vacrelstats->work_area_size);
+	}
+	else
+	{
+		vacrelstats->warm_chains = NULL;
+	}
 }
 
 /*
@@ -2374,17 +2481,38 @@ static void
 lazy_record_clear_chain(LVRelStats *vacrelstats,
 					   ItemPointer itemptr)
 {
+	char *end_deads, *end_warms;
+	Size free_work_area;
+
+	if (vacrelstats->warm_chains == NULL)
+	{
+		vacrelstats->num_non_convertible_warm_chains++;
+		return;
+	}
+
+	end_deads = (char *) (vacrelstats->dead_tuples +
+					vacrelstats->num_dead_tuples);
+	end_warms = (char *) (vacrelstats->warm_chains -
+					vacrelstats->num_warm_chains);
+	free_work_area = (end_warms - end_deads);
+
+	Assert(end_warms >= end_deads);
 	/*
 	 * The array shouldn't overflow under normal behavior, but perhaps it
 	 * could if we are given a really small maintenance_work_mem. In that
 	 * case, just forget the last few tuples (we'll get 'em next time).
 	 */
-	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	if (free_work_area >= sizeof (LVWarmChain))
 	{
-		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
-		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 0;
+		LVWarmChain *chain;
+
 		vacrelstats->num_warm_chains++;
+		chain = vacrelstats->warm_chains - vacrelstats->num_warm_chains;
+		chain->chain_tid = *itemptr;
+		chain->is_postwarm_chain = 0;
 	}
+	else
+		vacrelstats->num_non_convertible_warm_chains++;
 }
 
 /*
@@ -2394,17 +2522,39 @@ static void
 lazy_record_warm_chain(LVRelStats *vacrelstats,
 					   ItemPointer itemptr)
 {
+	char *end_deads, *end_warms;
+	Size free_work_area;
+
+	if (vacrelstats->warm_chains == NULL)
+	{
+		vacrelstats->num_non_convertible_warm_chains++;
+		return;
+	}
+
+	end_deads = (char *) (vacrelstats->dead_tuples +
+					vacrelstats->num_dead_tuples);
+	end_warms = (char *) (vacrelstats->warm_chains -
+					vacrelstats->num_warm_chains);
+	free_work_area = (end_warms - end_deads);
+
+	Assert(end_warms >= end_deads);
+
 	/*
 	 * The array shouldn't overflow under normal behavior, but perhaps it
 	 * could if we are given a really small maintenance_work_mem. In that
 	 * case, just forget the last few tuples (we'll get 'em next time).
 	 */
-	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	if (free_work_area >= sizeof (LVWarmChain))
 	{
-		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
-		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 1;
+		LVWarmChain *chain;
+
 		vacrelstats->num_warm_chains++;
+		chain = vacrelstats->warm_chains - vacrelstats->num_warm_chains;
+		chain->chain_tid = *itemptr;
+		chain->is_postwarm_chain = 1;
 	}
+	else
+		vacrelstats->num_non_convertible_warm_chains++;
 }
 
 /*
@@ -2414,12 +2564,20 @@ static void
 lazy_record_dead_tuple(LVRelStats *vacrelstats,
 					   ItemPointer itemptr)
 {
+	char *end_deads = (char *) (vacrelstats->dead_tuples + vacrelstats->num_dead_tuples);
+	char *end_warms = vacrelstats->warm_chains ? (char *)
+			(vacrelstats->warm_chains - vacrelstats->num_warm_chains) :
+			vacrelstats->work_area + vacrelstats->work_area_size;
+	Size freespace = (end_warms - end_deads);
+
+	Assert(end_warms >= end_deads);
+
 	/*
 	 * The array shouldn't overflow under normal behavior, but perhaps it
 	 * could if we are given a really small maintenance_work_mem. In that
 	 * case, just forget the last few tuples (we'll get 'em next time).
 	 */
-	if (vacrelstats->num_dead_tuples < vacrelstats->max_dead_tuples)
+	if (freespace >= sizeof(ItemPointerData))
 	{
 		vacrelstats->dead_tuples[vacrelstats->num_dead_tuples] = *itemptr;
 		vacrelstats->num_dead_tuples++;
@@ -2472,10 +2630,10 @@ lazy_indexvac_phase1(ItemPointer itemptr, bool is_warm, void *state)
 		return IBDCR_DELETE;
 
 	chain = (LVWarmChain *) bsearch((void *) itemptr,
-								(void *) vacrelstats->warm_chains,
-								vacrelstats->num_warm_chains,
-								sizeof(LVWarmChain),
-								vac_cmp_warm_chain);
+				(void *) (vacrelstats->warm_chains - vacrelstats->num_warm_chains),
+				vacrelstats->num_warm_chains,
+				sizeof(LVWarmChain),
+				vac_cmp_warm_chain);
 	if (chain != NULL)
 	{
 		if (is_warm)
@@ -2495,13 +2653,13 @@ static IndexBulkDeleteCallbackResult
 lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state)
 {
 	LVRelStats		*vacrelstats = (LVRelStats *) state;
-	LVWarmChain	*chain;
+	LVWarmChain		*chain;
 
 	chain = (LVWarmChain *) bsearch((void *) itemptr,
-								(void *) vacrelstats->warm_chains,
-								vacrelstats->num_warm_chains,
-								sizeof(LVWarmChain),
-								vac_cmp_warm_chain);
+				(void *) (vacrelstats->warm_chains - vacrelstats->num_warm_chains),
+				vacrelstats->num_warm_chains,
+				sizeof(LVWarmChain),
+				vac_cmp_warm_chain);
 
 	if (chain != NULL && (chain->keep_warm_chain != 1))
 	{
@@ -2600,6 +2758,7 @@ lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state)
 		 * index pointers.
 		 */
 		chain->keep_warm_chain = 1;
+		vacrelstats->num_non_convertible_warm_chains++;
 		return IBDCR_KEEP;
 	}
 	return IBDCR_KEEP;
@@ -2608,6 +2767,9 @@ lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state)
 /*
  * Comparator routines for use with qsort() and bsearch(). Similar to
  * vac_cmp_itemptr, but right hand argument is LVWarmChain struct pointer.
+ *
+ * The warm_chains array is sorted in descending order hence the return values
+ * are flipped.
  */
 static int
 vac_cmp_warm_chain(const void *left, const void *right)
@@ -2621,17 +2783,17 @@ vac_cmp_warm_chain(const void *left, const void *right)
 	rblk = ItemPointerGetBlockNumber(&((LVWarmChain *) right)->chain_tid);
 
 	if (lblk < rblk)
-		return -1;
-	if (lblk > rblk)
 		return 1;
+	if (lblk > rblk)
+		return -1;
 
 	loff = ItemPointerGetOffsetNumber((ItemPointer) left);
 	roff = ItemPointerGetOffsetNumber(&((LVWarmChain *) right)->chain_tid);
 
 	if (loff < roff)
-		return -1;
-	if (loff > roff)
 		return 1;
+	if (loff > roff)
+		return -1;
 
 	return 0;
 }
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9d53a29..1592220 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -433,7 +433,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	overlay_placing substr_from substr_for
 
 %type <boolean> opt_instead
-%type <boolean> opt_unique opt_concurrently opt_verbose opt_full
+%type <boolean> opt_unique opt_concurrently opt_verbose opt_full opt_warmclean
 %type <boolean> opt_freeze opt_default opt_recheck
 %type <defelt>	opt_binary opt_oids copy_delimiter
 
@@ -684,7 +684,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	VACUUM VALID VALIDATE VALIDATOR VALUE_P VALUES VARCHAR VARIADIC VARYING
 	VERBOSE VERSION_P VIEW VIEWS VOLATILE
 
-	WHEN WHERE WHITESPACE_P WINDOW WITH WITHIN WITHOUT WORK WRAPPER WRITE
+	WARMCLEAN WHEN WHERE WHITESPACE_P WINDOW WITH WITHIN WITHOUT WORK WRAPPER WRITE
 
 	XML_P XMLATTRIBUTES XMLCONCAT XMLELEMENT XMLEXISTS XMLFOREST XMLNAMESPACES
 	XMLPARSE XMLPI XMLROOT XMLSERIALIZE XMLTABLE
@@ -10059,7 +10059,7 @@ cluster_index_specification:
  *
  *****************************************************************************/
 
-VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
+VacuumStmt: VACUUM opt_full opt_freeze opt_verbose opt_warmclean
 				{
 					VacuumStmt *n = makeNode(VacuumStmt);
 					n->options = VACOPT_VACUUM;
@@ -10069,11 +10069,13 @@ VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
 						n->options |= VACOPT_FREEZE;
 					if ($4)
 						n->options |= VACOPT_VERBOSE;
+					if ($5)
+						n->options |= VACOPT_WARM_CLEANUP;
 					n->relation = NULL;
 					n->va_cols = NIL;
 					$$ = (Node *)n;
 				}
-			| VACUUM opt_full opt_freeze opt_verbose qualified_name
+			| VACUUM opt_full opt_freeze opt_verbose opt_warmclean qualified_name
 				{
 					VacuumStmt *n = makeNode(VacuumStmt);
 					n->options = VACOPT_VACUUM;
@@ -10083,13 +10085,15 @@ VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
 						n->options |= VACOPT_FREEZE;
 					if ($4)
 						n->options |= VACOPT_VERBOSE;
-					n->relation = $5;
+					if ($5)
+						n->options |= VACOPT_WARM_CLEANUP;
+					n->relation = $6;
 					n->va_cols = NIL;
 					$$ = (Node *)n;
 				}
-			| VACUUM opt_full opt_freeze opt_verbose AnalyzeStmt
+			| VACUUM opt_full opt_freeze opt_verbose opt_warmclean AnalyzeStmt
 				{
-					VacuumStmt *n = (VacuumStmt *) $5;
+					VacuumStmt *n = (VacuumStmt *) $6;
 					n->options |= VACOPT_VACUUM;
 					if ($2)
 						n->options |= VACOPT_FULL;
@@ -10097,6 +10101,8 @@ VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
 						n->options |= VACOPT_FREEZE;
 					if ($4)
 						n->options |= VACOPT_VERBOSE;
+					if ($5)
+						n->options |= VACOPT_WARM_CLEANUP;
 					$$ = (Node *)n;
 				}
 			| VACUUM '(' vacuum_option_list ')'
@@ -10129,6 +10135,7 @@ vacuum_option_elem:
 			| VERBOSE			{ $$ = VACOPT_VERBOSE; }
 			| FREEZE			{ $$ = VACOPT_FREEZE; }
 			| FULL				{ $$ = VACOPT_FULL; }
+			| WARMCLEAN			{ $$ = VACOPT_WARM_CLEANUP; }
 			| IDENT
 				{
 					if (strcmp($1, "disable_page_skipping") == 0)
@@ -10182,6 +10189,10 @@ opt_freeze: FREEZE									{ $$ = TRUE; }
 			| /*EMPTY*/								{ $$ = FALSE; }
 		;
 
+opt_warmclean: WARMCLEAN							{ $$ = TRUE; }
+			| /*EMPTY*/								{ $$ = FALSE; }
+		;
+
 opt_name_list:
 			'(' name_list ')'						{ $$ = $2; }
 			| /*EMPTY*/								{ $$ = NIL; }
@@ -14886,6 +14897,7 @@ type_func_name_keyword:
 			| SIMILAR
 			| TABLESAMPLE
 			| VERBOSE
+			| WARMCLEAN
 		;
 
 /* Reserved keyword --- these keywords are usable only as a ColLabel.
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 33ca749..91793e4 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -115,6 +115,8 @@ int			autovacuum_vac_thresh;
 double		autovacuum_vac_scale;
 int			autovacuum_anl_thresh;
 double		autovacuum_anl_scale;
+double		autovacuum_warmcleanup_scale;
+double		autovacuum_warmcleanup_index_scale;
 int			autovacuum_freeze_max_age;
 int			autovacuum_multixact_freeze_max_age;
 
@@ -307,7 +309,8 @@ static void relation_needs_vacanalyze(Oid relid, AutoVacOpts *relopts,
 						  Form_pg_class classForm,
 						  PgStat_StatTabEntry *tabentry,
 						  int effective_multixact_freeze_max_age,
-						  bool *dovacuum, bool *doanalyze, bool *wraparound);
+						  bool *dovacuum, bool *doanalyze, bool *wraparound,
+						  bool *dowarmcleanup);
 
 static void autovacuum_do_vac_analyze(autovac_table *tab,
 						  BufferAccessStrategy bstrategy);
@@ -2010,6 +2013,7 @@ do_autovacuum(void)
 		bool		dovacuum;
 		bool		doanalyze;
 		bool		wraparound;
+		bool		dowarmcleanup;
 
 		if (classForm->relkind != RELKIND_RELATION &&
 			classForm->relkind != RELKIND_MATVIEW)
@@ -2049,10 +2053,14 @@ do_autovacuum(void)
 		tabentry = get_pgstat_tabentry_relid(relid, classForm->relisshared,
 											 shared, dbentry);
 
-		/* Check if it needs vacuum or analyze */
+		/*
+		 * Check if it needs vacuum or analyze. For vacuum, also check if it
+		 * needs WARM cleanup.
+		 */
 		relation_needs_vacanalyze(relid, relopts, classForm, tabentry,
 								  effective_multixact_freeze_max_age,
-								  &dovacuum, &doanalyze, &wraparound);
+								  &dovacuum, &doanalyze, &wraparound,
+								  &dowarmcleanup);
 
 		/* Relations that need work are added to table_oids */
 		if (dovacuum || doanalyze)
@@ -2105,6 +2113,7 @@ do_autovacuum(void)
 		bool		dovacuum;
 		bool		doanalyze;
 		bool		wraparound;
+		bool		dowarmcleanup;
 
 		/*
 		 * We cannot safely process other backends' temp tables, so skip 'em.
@@ -2135,7 +2144,8 @@ do_autovacuum(void)
 
 		relation_needs_vacanalyze(relid, relopts, classForm, tabentry,
 								  effective_multixact_freeze_max_age,
-								  &dovacuum, &doanalyze, &wraparound);
+								  &dovacuum, &doanalyze, &wraparound,
+								  &dowarmcleanup);
 
 		/* ignore analyze for toast tables */
 		if (dovacuum)
@@ -2566,6 +2576,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 	HeapTuple	classTup;
 	bool		dovacuum;
 	bool		doanalyze;
+	bool		dowarmcleanup;
 	autovac_table *tab = NULL;
 	PgStat_StatTabEntry *tabentry;
 	PgStat_StatDBEntry *shared;
@@ -2607,7 +2618,8 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 
 	relation_needs_vacanalyze(relid, avopts, classForm, tabentry,
 							  effective_multixact_freeze_max_age,
-							  &dovacuum, &doanalyze, &wraparound);
+							  &dovacuum, &doanalyze, &wraparound,
+							  &dowarmcleanup);
 
 	/* ignore ANALYZE for toast tables */
 	if (classForm->relkind == RELKIND_TOASTVALUE)
@@ -2623,6 +2635,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		int			vac_cost_limit;
 		int			vac_cost_delay;
 		int			log_min_duration;
+		double		warmcleanup_index_scale;
 
 		/*
 		 * Calculate the vacuum cost parameters and the freeze ages.  If there
@@ -2669,19 +2682,26 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 			? avopts->multixact_freeze_table_age
 			: default_multixact_freeze_table_age;
 
+		warmcleanup_index_scale = (avopts &&
+								   avopts->warmcleanup_index_scale >= 0)
+			? avopts->warmcleanup_index_scale
+			: autovacuum_warmcleanup_index_scale;
+
 		tab = palloc(sizeof(autovac_table));
 		tab->at_relid = relid;
 		tab->at_sharedrel = classForm->relisshared;
 		tab->at_vacoptions = VACOPT_SKIPTOAST |
 			(dovacuum ? VACOPT_VACUUM : 0) |
 			(doanalyze ? VACOPT_ANALYZE : 0) |
-			(!wraparound ? VACOPT_NOWAIT : 0);
+			(!wraparound ? VACOPT_NOWAIT : 0) |
+			(dowarmcleanup ? VACOPT_WARM_CLEANUP : 0);
 		tab->at_params.freeze_min_age = freeze_min_age;
 		tab->at_params.freeze_table_age = freeze_table_age;
 		tab->at_params.multixact_freeze_min_age = multixact_freeze_min_age;
 		tab->at_params.multixact_freeze_table_age = multixact_freeze_table_age;
 		tab->at_params.is_wraparound = wraparound;
 		tab->at_params.log_min_duration = log_min_duration;
+		tab->at_params.warmcleanup_index_scale = warmcleanup_index_scale;
 		tab->at_vacuum_cost_limit = vac_cost_limit;
 		tab->at_vacuum_cost_delay = vac_cost_delay;
 		tab->at_relname = NULL;
@@ -2748,7 +2768,8 @@ relation_needs_vacanalyze(Oid relid,
  /* output params below */
 						  bool *dovacuum,
 						  bool *doanalyze,
-						  bool *wraparound)
+						  bool *wraparound,
+						  bool *dowarmcleanup)
 {
 	bool		force_vacuum;
 	bool		av_enabled;
@@ -2760,6 +2781,9 @@ relation_needs_vacanalyze(Oid relid,
 	float4		vac_scale_factor,
 				anl_scale_factor;
 
+	/* constant from reloptions or GUC variable */
+	float4		warmcleanup_scale_factor;
+
 	/* thresholds calculated from above constants */
 	float4		vacthresh,
 				anlthresh;
@@ -2768,6 +2792,9 @@ relation_needs_vacanalyze(Oid relid,
 	float4		vactuples,
 				anltuples;
 
+	/* number of WARM chains in the table */
+	float4		warmchains;
+
 	/* freeze parameters */
 	int			freeze_max_age;
 	int			multixact_freeze_max_age;
@@ -2800,6 +2827,11 @@ relation_needs_vacanalyze(Oid relid,
 		? relopts->analyze_threshold
 		: autovacuum_anl_thresh;
 
+	/* Use table specific value or the GUC value */
+	warmcleanup_scale_factor = (relopts && relopts->warmcleanup_scale_factor >= 0)
+		? relopts->warmcleanup_scale_factor
+		: autovacuum_warmcleanup_scale;
+
 	freeze_max_age = (relopts && relopts->freeze_max_age >= 0)
 		? Min(relopts->freeze_max_age, autovacuum_freeze_max_age)
 		: autovacuum_freeze_max_age;
@@ -2847,6 +2879,7 @@ relation_needs_vacanalyze(Oid relid,
 		reltuples = classForm->reltuples;
 		vactuples = tabentry->n_dead_tuples;
 		anltuples = tabentry->changes_since_analyze;
+		warmchains = tabentry->n_warm_chains;
 
 		vacthresh = (float4) vac_base_thresh + vac_scale_factor * reltuples;
 		anlthresh = (float4) anl_base_thresh + anl_scale_factor * reltuples;
@@ -2863,6 +2896,17 @@ relation_needs_vacanalyze(Oid relid,
 		/* Determine if this table needs vacuum or analyze. */
 		*dovacuum = force_vacuum || (vactuples > vacthresh);
 		*doanalyze = (anltuples > anlthresh);
+
+		/*
+		 * If the number of WARM chains in the table exceeds the configured
+		 * fraction of reltuples, then we also do a WARM cleanup. This only
+		 * triggers at the table level; lazy_vacuum_index() later cleans up
+		 * each index only if its WARM pointers exceed the configured
+		 * index-level scale factor. Assign unconditionally so the output
+		 * parameter is always initialized.
+		 */
+		*dowarmcleanup = (*dovacuum &&
+						  (warmcleanup_scale_factor * reltuples < warmchains));
 	}
 	else
 	{
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index 52fe4ba..f38ce8a 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -226,9 +226,11 @@ typedef struct TwoPhasePgStatRecord
 	PgStat_Counter tuples_inserted;		/* tuples inserted in xact */
 	PgStat_Counter tuples_updated;		/* tuples updated in xact */
 	PgStat_Counter tuples_deleted;		/* tuples deleted in xact */
+	PgStat_Counter tuples_warm_updated;	/* tuples warm updated in xact */
 	PgStat_Counter inserted_pre_trunc;	/* tuples inserted prior to truncate */
 	PgStat_Counter updated_pre_trunc;	/* tuples updated prior to truncate */
 	PgStat_Counter deleted_pre_trunc;	/* tuples deleted prior to truncate */
+	PgStat_Counter warm_updated_pre_trunc;	/* tuples warm updated prior to truncate */
 	Oid			t_id;			/* table's OID */
 	bool		t_shared;		/* is it a shared catalog? */
 	bool		t_truncated;	/* was the relation truncated? */
@@ -1367,7 +1369,8 @@ pgstat_report_autovac(Oid dboid)
  */
 void
 pgstat_report_vacuum(Oid tableoid, bool shared,
-					 PgStat_Counter livetuples, PgStat_Counter deadtuples)
+					 PgStat_Counter livetuples, PgStat_Counter deadtuples,
+					 PgStat_Counter warmchains)
 {
 	PgStat_MsgVacuum msg;
 
@@ -1381,6 +1384,7 @@ pgstat_report_vacuum(Oid tableoid, bool shared,
 	msg.m_vacuumtime = GetCurrentTimestamp();
 	msg.m_live_tuples = livetuples;
 	msg.m_dead_tuples = deadtuples;
+	msg.m_warm_chains = warmchains;
 	pgstat_send(&msg, sizeof(msg));
 }
 
@@ -1396,7 +1400,7 @@ pgstat_report_vacuum(Oid tableoid, bool shared,
 void
 pgstat_report_analyze(Relation rel,
 					  PgStat_Counter livetuples, PgStat_Counter deadtuples,
-					  bool resetcounter)
+					  PgStat_Counter warmchains, bool resetcounter)
 {
 	PgStat_MsgAnalyze msg;
 
@@ -1421,12 +1425,14 @@ pgstat_report_analyze(Relation rel,
 		{
 			livetuples -= trans->tuples_inserted - trans->tuples_deleted;
 			deadtuples -= trans->tuples_updated + trans->tuples_deleted;
+			warmchains -= trans->tuples_warm_updated;
 		}
 		/* count stuff inserted by already-aborted subxacts, too */
 		deadtuples -= rel->pgstat_info->t_counts.t_delta_dead_tuples;
 		/* Since ANALYZE's counts are estimates, we could have underflowed */
 		livetuples = Max(livetuples, 0);
 		deadtuples = Max(deadtuples, 0);
+		warmchains = Max(warmchains, 0);
 	}
 
 	pgstat_setheader(&msg.m_hdr, PGSTAT_MTYPE_ANALYZE);
@@ -1437,6 +1443,7 @@ pgstat_report_analyze(Relation rel,
 	msg.m_analyzetime = GetCurrentTimestamp();
 	msg.m_live_tuples = livetuples;
 	msg.m_dead_tuples = deadtuples;
+	msg.m_warm_chains = warmchains;
 	pgstat_send(&msg, sizeof(msg));
 }
 
@@ -1907,7 +1914,10 @@ pgstat_count_heap_update(Relation rel, bool hot, bool warm)
 		if (hot)
 			pgstat_info->t_counts.t_tuples_hot_updated++;
 		else if (warm)
+		{
+			pgstat_info->trans->tuples_warm_updated++;
 			pgstat_info->t_counts.t_tuples_warm_updated++;
+		}
 	}
 }
 
@@ -2070,6 +2080,12 @@ AtEOXact_PgStat(bool isCommit)
 				/* update and delete each create a dead tuple */
 				tabstat->t_counts.t_delta_dead_tuples +=
 					trans->tuples_updated + trans->tuples_deleted;
+				/*
+				 * Whether the xact commits or aborts, a WARM update creates a
+				 * WARM chain that needs cleanup.
+				 */
+				tabstat->t_counts.t_delta_warm_chains +=
+					trans->tuples_warm_updated;
 				/* insert, update, delete each count as one change event */
 				tabstat->t_counts.t_changed_tuples +=
 					trans->tuples_inserted + trans->tuples_updated +
@@ -2080,6 +2096,12 @@ AtEOXact_PgStat(bool isCommit)
 				/* inserted tuples are dead, deleted tuples are unaffected */
 				tabstat->t_counts.t_delta_dead_tuples +=
 					trans->tuples_inserted + trans->tuples_updated;
+				/*
+				 * Whether the xact commits or aborts, a WARM update creates a
+				 * WARM chain that needs cleanup.
+				 */
+				tabstat->t_counts.t_delta_warm_chains +=
+					trans->tuples_warm_updated;
 				/* an aborted xact generates no changed_tuple events */
 			}
 			tabstat->trans = NULL;
@@ -2136,12 +2158,16 @@ AtEOSubXact_PgStat(bool isCommit, int nestDepth)
 						trans->upper->tuples_inserted = trans->tuples_inserted;
 						trans->upper->tuples_updated = trans->tuples_updated;
 						trans->upper->tuples_deleted = trans->tuples_deleted;
+						trans->upper->tuples_warm_updated =
+							trans->tuples_warm_updated;
 					}
 					else
 					{
 						trans->upper->tuples_inserted += trans->tuples_inserted;
 						trans->upper->tuples_updated += trans->tuples_updated;
 						trans->upper->tuples_deleted += trans->tuples_deleted;
+						trans->upper->tuples_warm_updated +=
+							trans->tuples_warm_updated;
 					}
 					tabstat->trans = trans->upper;
 					pfree(trans);
@@ -2177,9 +2203,13 @@ AtEOSubXact_PgStat(bool isCommit, int nestDepth)
 				tabstat->t_counts.t_tuples_inserted += trans->tuples_inserted;
 				tabstat->t_counts.t_tuples_updated += trans->tuples_updated;
 				tabstat->t_counts.t_tuples_deleted += trans->tuples_deleted;
+				tabstat->t_counts.t_tuples_warm_updated +=
+					trans->tuples_warm_updated;
 				/* inserted tuples are dead, deleted tuples are unaffected */
 				tabstat->t_counts.t_delta_dead_tuples +=
 					trans->tuples_inserted + trans->tuples_updated;
+				tabstat->t_counts.t_delta_warm_chains +=
+					trans->tuples_warm_updated;
 				tabstat->trans = trans->upper;
 				pfree(trans);
 			}
@@ -2221,9 +2251,11 @@ AtPrepare_PgStat(void)
 			record.tuples_inserted = trans->tuples_inserted;
 			record.tuples_updated = trans->tuples_updated;
 			record.tuples_deleted = trans->tuples_deleted;
+			record.tuples_warm_updated = trans->tuples_warm_updated;
 			record.inserted_pre_trunc = trans->inserted_pre_trunc;
 			record.updated_pre_trunc = trans->updated_pre_trunc;
 			record.deleted_pre_trunc = trans->deleted_pre_trunc;
+			record.warm_updated_pre_trunc = trans->warm_updated_pre_trunc;
 			record.t_id = tabstat->t_id;
 			record.t_shared = tabstat->t_shared;
 			record.t_truncated = trans->truncated;
@@ -2298,11 +2330,14 @@ pgstat_twophase_postcommit(TransactionId xid, uint16 info,
 		/* forget live/dead stats seen by backend thus far */
 		pgstat_info->t_counts.t_delta_live_tuples = 0;
 		pgstat_info->t_counts.t_delta_dead_tuples = 0;
+		pgstat_info->t_counts.t_delta_warm_chains = 0;
 	}
 	pgstat_info->t_counts.t_delta_live_tuples +=
 		rec->tuples_inserted - rec->tuples_deleted;
 	pgstat_info->t_counts.t_delta_dead_tuples +=
 		rec->tuples_updated + rec->tuples_deleted;
+	pgstat_info->t_counts.t_delta_warm_chains +=
+		rec->tuples_warm_updated;
 	pgstat_info->t_counts.t_changed_tuples +=
 		rec->tuples_inserted + rec->tuples_updated +
 		rec->tuples_deleted;
@@ -2330,12 +2365,16 @@ pgstat_twophase_postabort(TransactionId xid, uint16 info,
 		rec->tuples_inserted = rec->inserted_pre_trunc;
 		rec->tuples_updated = rec->updated_pre_trunc;
 		rec->tuples_deleted = rec->deleted_pre_trunc;
+		rec->tuples_warm_updated = rec->warm_updated_pre_trunc;
 	}
 	pgstat_info->t_counts.t_tuples_inserted += rec->tuples_inserted;
 	pgstat_info->t_counts.t_tuples_updated += rec->tuples_updated;
 	pgstat_info->t_counts.t_tuples_deleted += rec->tuples_deleted;
+	pgstat_info->t_counts.t_tuples_warm_updated += rec->tuples_warm_updated;
 	pgstat_info->t_counts.t_delta_dead_tuples +=
 		rec->tuples_inserted + rec->tuples_updated;
+	pgstat_info->t_counts.t_delta_warm_chains +=
+		rec->tuples_warm_updated;
 }
 
 
@@ -4526,6 +4565,7 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->tuples_warm_updated = 0;
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
+		result->n_warm_chains = 0;
 		result->changes_since_analyze = 0;
 		result->blocks_fetched = 0;
 		result->blocks_hit = 0;
@@ -5636,6 +5676,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_warm_updated = tabmsg->t_counts.t_tuples_warm_updated;
 			tabentry->n_live_tuples = tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples = tabmsg->t_counts.t_delta_dead_tuples;
+			tabentry->n_warm_chains = tabmsg->t_counts.t_delta_warm_chains;
 			tabentry->changes_since_analyze = tabmsg->t_counts.t_changed_tuples;
 			tabentry->blocks_fetched = tabmsg->t_counts.t_blocks_fetched;
 			tabentry->blocks_hit = tabmsg->t_counts.t_blocks_hit;
@@ -5667,9 +5708,11 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			{
 				tabentry->n_live_tuples = 0;
 				tabentry->n_dead_tuples = 0;
+				tabentry->n_warm_chains = 0;
 			}
 			tabentry->n_live_tuples += tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples += tabmsg->t_counts.t_delta_dead_tuples;
+			tabentry->n_warm_chains += tabmsg->t_counts.t_delta_warm_chains;
 			tabentry->changes_since_analyze += tabmsg->t_counts.t_changed_tuples;
 			tabentry->blocks_fetched += tabmsg->t_counts.t_blocks_fetched;
 			tabentry->blocks_hit += tabmsg->t_counts.t_blocks_hit;
@@ -5679,6 +5722,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 		tabentry->n_live_tuples = Max(tabentry->n_live_tuples, 0);
 		/* Likewise for n_dead_tuples */
 		tabentry->n_dead_tuples = Max(tabentry->n_dead_tuples, 0);
+		tabentry->n_warm_chains = Max(tabentry->n_warm_chains, 0);
 
 		/*
 		 * Add per-table stats to the per-database entry, too.
@@ -5904,6 +5948,7 @@ pgstat_recv_vacuum(PgStat_MsgVacuum *msg, int len)
 
 	tabentry->n_live_tuples = msg->m_live_tuples;
 	tabentry->n_dead_tuples = msg->m_dead_tuples;
+	tabentry->n_warm_chains = msg->m_warm_chains;
 
 	if (msg->m_autovacuum)
 	{
@@ -5938,6 +5983,7 @@ pgstat_recv_analyze(PgStat_MsgAnalyze *msg, int len)
 
 	tabentry->n_live_tuples = msg->m_live_tuples;
 	tabentry->n_dead_tuples = msg->m_dead_tuples;
+	tabentry->n_warm_chains = msg->m_warm_chains;
 
 	/*
 	 * If commanded, reset changes_since_analyze to zero.  This forgets any
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 227a87d..8804908 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -193,6 +193,21 @@ pg_stat_get_dead_tuples(PG_FUNCTION_ARGS)
 	PG_RETURN_INT64(result);
 }
 
+Datum
+pg_stat_get_warm_chains(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_StatTabEntry *tabentry;
+
+	if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->n_warm_chains);
+
+	PG_RETURN_INT64(result);
+}
+
 
 Datum
 pg_stat_get_mod_since_analyze(PG_FUNCTION_ARGS)
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 08b6030..81fec03 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,6 +130,7 @@ int			VacuumCostPageMiss = 10;
 int			VacuumCostPageDirty = 20;
 int			VacuumCostLimit = 200;
 int			VacuumCostDelay = 0;
+double		VacuumWarmCleanupScale;
 
 int			VacuumPageHit = 0;
 int			VacuumPageMiss = 0;
@@ -137,3 +138,5 @@ int			VacuumPageDirty = 0;
 
 int			VacuumCostBalance = 0;		/* working state for vacuum */
 bool		VacuumCostActive = false;
+
+double		VacuumWarmCleanupIndexScale = 1;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 8b5f064..ecf8028 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -3017,6 +3017,36 @@ static struct config_real ConfigureNamesReal[] =
 	},
 
 	{
+		{"autovacuum_warmcleanup_scale_factor", PGC_SIGHUP, AUTOVACUUM,
+			gettext_noop("Number of WARM chains, as a fraction of reltuples, that triggers WARM cleanup."),
+			NULL
+		},
+		&autovacuum_warmcleanup_scale,
+		0.1, 0.0, 100.0,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"autovacuum_warmcleanup_index_scale_factor", PGC_SIGHUP, AUTOVACUUM,
+			gettext_noop("Number of WARM index pointers, as a fraction of total WARM chains, that triggers index cleanup."),
+			NULL
+		},
+		&autovacuum_warmcleanup_index_scale,
+		0.2, 0.0, 100.0,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"vacuum_warmcleanup_index_scale_factor", PGC_USERSET, WARM_CLEANUP,
+			gettext_noop("Number of WARM index pointers, as a fraction of total WARM chains, that triggers index cleanup during VACUUM."),
+			NULL
+		},
+		&VacuumWarmCleanupIndexScale,
+		0.2, 0.0, 100.0,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"checkpoint_completion_target", PGC_SIGHUP, WAL_CHECKPOINTS,
 			gettext_noop("Time spent flushing dirty buffers during checkpoint, as fraction of checkpoint interval."),
 			NULL
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index 8585da4..c50a7b0 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -2791,6 +2791,8 @@ DATA(insert OID = 2878 (  pg_stat_get_live_tuples	PGNSP PGUID 12 1 0 0 0 f f f f
 DESCR("statistics: number of live tuples");
 DATA(insert OID = 2879 (  pg_stat_get_dead_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_dead_tuples _null_ _null_ _null_ ));
 DESCR("statistics: number of dead tuples");
+DATA(insert OID = 3403 (  pg_stat_get_warm_chains	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_warm_chains _null_ _null_ _null_ ));
+DESCR("statistics: number of warm chains");
 DATA(insert OID = 3177 (  pg_stat_get_mod_since_analyze PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_mod_since_analyze _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples changed since last analyze");
 DATA(insert OID = 1934 (  pg_stat_get_blocks_fetched	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_blocks_fetched _null_ _null_ _null_ ));
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index 541c2fa..9914143 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -145,6 +145,8 @@ typedef struct VacuumParams
 	int			log_min_duration;		/* minimum execution threshold in ms
 										 * at which  verbose logs are
 										 * activated, -1 to use default */
+	double		warmcleanup_index_scale; /* Fraction of WARM pointers to cause
+										  * index WARM cleanup */
 } VacuumParams;
 
 /* GUC parameters */
diff --git a/src/include/foreign/fdwapi.h b/src/include/foreign/fdwapi.h
index 6ca44f7..2993b1a 100644
--- a/src/include/foreign/fdwapi.h
+++ b/src/include/foreign/fdwapi.h
@@ -134,7 +134,8 @@ typedef void (*ExplainDirectModify_function) (ForeignScanState *node,
 typedef int (*AcquireSampleRowsFunc) (Relation relation, int elevel,
 											   HeapTuple *rows, int targrows,
 												  double *totalrows,
-												  double *totaldeadrows);
+												  double *totaldeadrows,
+												  double *totalwarmchains);
 
 typedef bool (*AnalyzeForeignTable_function) (Relation relation,
 												 AcquireSampleRowsFunc *func,
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 4c607b2..901960a 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -255,6 +255,7 @@ extern int	VacuumPageDirty;
 extern int	VacuumCostBalance;
 extern bool VacuumCostActive;
 
+extern double VacuumWarmCleanupIndexScale;
 
 /* in tcop/postgres.c */
 
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index 3a71dd5..f842374 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3035,7 +3035,8 @@ typedef enum VacuumOption
 	VACOPT_FULL = 1 << 4,		/* FULL (non-concurrent) vacuum */
 	VACOPT_NOWAIT = 1 << 5,		/* don't wait to get lock (autovacuum only) */
 	VACOPT_SKIPTOAST = 1 << 6,	/* don't process the TOAST table, if any */
-	VACOPT_DISABLE_PAGE_SKIPPING = 1 << 7		/* don't skip any pages */
+	VACOPT_DISABLE_PAGE_SKIPPING = 1 << 7,		/* don't skip any pages */
+	VACOPT_WARM_CLEANUP = 1 << 8	/* do WARM cleanup */
 } VacuumOption;
 
 typedef struct VacuumStmt
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index cd21a78..7d9818b 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -433,6 +433,7 @@ PG_KEYWORD("version", VERSION_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("view", VIEW, UNRESERVED_KEYWORD)
 PG_KEYWORD("views", VIEWS, UNRESERVED_KEYWORD)
 PG_KEYWORD("volatile", VOLATILE, UNRESERVED_KEYWORD)
+PG_KEYWORD("warmclean", WARMCLEAN, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("when", WHEN, RESERVED_KEYWORD)
 PG_KEYWORD("where", WHERE, RESERVED_KEYWORD)
 PG_KEYWORD("whitespace", WHITESPACE_P, UNRESERVED_KEYWORD)
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 99bdc8b..883cbd4 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -110,6 +110,7 @@ typedef struct PgStat_TableCounts
 
 	PgStat_Counter t_delta_live_tuples;
 	PgStat_Counter t_delta_dead_tuples;
+	PgStat_Counter t_delta_warm_chains;
 	PgStat_Counter t_changed_tuples;
 
 	PgStat_Counter t_blocks_fetched;
@@ -167,11 +168,13 @@ typedef struct PgStat_TableXactStatus
 {
 	PgStat_Counter tuples_inserted;		/* tuples inserted in (sub)xact */
 	PgStat_Counter tuples_updated;		/* tuples updated in (sub)xact */
+	PgStat_Counter tuples_warm_updated;	/* tuples warm-updated in (sub)xact */
 	PgStat_Counter tuples_deleted;		/* tuples deleted in (sub)xact */
 	bool		truncated;		/* relation truncated in this (sub)xact */
 	PgStat_Counter inserted_pre_trunc;	/* tuples inserted prior to truncate */
 	PgStat_Counter updated_pre_trunc;	/* tuples updated prior to truncate */
 	PgStat_Counter deleted_pre_trunc;	/* tuples deleted prior to truncate */
+	PgStat_Counter warm_updated_pre_trunc;	/* tuples warm updated prior to truncate */
 	int			nest_level;		/* subtransaction nest level */
 	/* links to other structs for same relation: */
 	struct PgStat_TableXactStatus *upper;		/* next higher subxact if any */
@@ -370,6 +373,7 @@ typedef struct PgStat_MsgVacuum
 	TimestampTz m_vacuumtime;
 	PgStat_Counter m_live_tuples;
 	PgStat_Counter m_dead_tuples;
+	PgStat_Counter m_warm_chains;
 } PgStat_MsgVacuum;
 
 
@@ -388,6 +392,7 @@ typedef struct PgStat_MsgAnalyze
 	TimestampTz m_analyzetime;
 	PgStat_Counter m_live_tuples;
 	PgStat_Counter m_dead_tuples;
+	PgStat_Counter m_warm_chains;
 } PgStat_MsgAnalyze;
 
 
@@ -630,6 +635,7 @@ typedef struct PgStat_StatTabEntry
 
 	PgStat_Counter n_live_tuples;
 	PgStat_Counter n_dead_tuples;
+	PgStat_Counter n_warm_chains;
 	PgStat_Counter changes_since_analyze;
 
 	PgStat_Counter blocks_fetched;
@@ -1156,10 +1162,11 @@ extern void pgstat_reset_single_counter(Oid objectid, PgStat_Single_Reset_Type t
 
 extern void pgstat_report_autovac(Oid dboid);
 extern void pgstat_report_vacuum(Oid tableoid, bool shared,
-					 PgStat_Counter livetuples, PgStat_Counter deadtuples);
+					 PgStat_Counter livetuples, PgStat_Counter deadtuples,
+					 PgStat_Counter warmchains);
 extern void pgstat_report_analyze(Relation rel,
 					  PgStat_Counter livetuples, PgStat_Counter deadtuples,
-					  bool resetcounter);
+					  PgStat_Counter warmchains, bool resetcounter);
 
 extern void pgstat_report_recovery_conflict(int reason);
 extern void pgstat_report_deadlock(void);
diff --git a/src/include/postmaster/autovacuum.h b/src/include/postmaster/autovacuum.h
index 99d7f09..5ac9c8f 100644
--- a/src/include/postmaster/autovacuum.h
+++ b/src/include/postmaster/autovacuum.h
@@ -28,6 +28,8 @@ extern int	autovacuum_freeze_max_age;
 extern int	autovacuum_multixact_freeze_max_age;
 extern int	autovacuum_vac_cost_delay;
 extern int	autovacuum_vac_cost_limit;
+extern double autovacuum_warmcleanup_scale;
+extern double autovacuum_warmcleanup_index_scale;
 
 /* autovacuum launcher PID, only valid when worker is shutting down */
 extern int	AutovacuumLauncherPid;
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index 2da9115..cd4532b 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -68,6 +68,7 @@ enum config_group
 	WAL_SETTINGS,
 	WAL_CHECKPOINTS,
 	WAL_ARCHIVING,
+	WARM_CLEANUP,
 	REPLICATION,
 	REPLICATION_SENDING,
 	REPLICATION_MASTER,
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index 4b173b5..05b3542 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -278,6 +278,8 @@ typedef struct AutoVacOpts
 	int			log_min_duration;
 	float8		vacuum_scale_factor;
 	float8		analyze_scale_factor;
+	float8		warmcleanup_scale_factor;
+	float8		warmcleanup_index_scale;
 } AutoVacOpts;
 
 typedef struct StdRdOptions
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index f7dc4a4..d34aa68 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1759,6 +1759,7 @@ pg_stat_all_tables| SELECT c.oid AS relid,
     pg_stat_get_tuples_warm_updated(c.oid) AS n_tup_warm_upd,
     pg_stat_get_live_tuples(c.oid) AS n_live_tup,
     pg_stat_get_dead_tuples(c.oid) AS n_dead_tup,
+    pg_stat_get_warm_chains(c.oid) AS n_warm_chains,
     pg_stat_get_mod_since_analyze(c.oid) AS n_mod_since_analyze,
     pg_stat_get_last_vacuum_time(c.oid) AS last_vacuum,
     pg_stat_get_last_autovacuum_time(c.oid) AS last_autovacuum,
@@ -1907,6 +1908,7 @@ pg_stat_sys_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
+    pg_stat_all_tables.n_warm_chains,
     pg_stat_all_tables.n_mod_since_analyze,
     pg_stat_all_tables.last_vacuum,
     pg_stat_all_tables.last_autovacuum,
@@ -1951,6 +1953,7 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
+    pg_stat_all_tables.n_warm_chains,
     pg_stat_all_tables.n_mod_since_analyze,
     pg_stat_all_tables.last_vacuum,
     pg_stat_all_tables.last_autovacuum,
diff --git a/src/test/regress/expected/warm.out b/src/test/regress/expected/warm.out
index 1ae2f40..b21a063 100644
--- a/src/test/regress/expected/warm.out
+++ b/src/test/regress/expected/warm.out
@@ -745,6 +745,65 @@ SELECT a, b FROM test_toast_warm WHERE b = 104.20;
 (1 row)
 
 DROP TABLE test_toast_warm;
+-- Test VACUUM
+CREATE TABLE test_vacuum_warm (a int unique, b text, c int, d int, e int);
+CREATE INDEX test_vacuum_warm_index1 ON test_vacuum_warm(b);
+CREATE INDEX test_vacuum_warm_index2 ON test_vacuum_warm(c);
+CREATE INDEX test_vacuum_warm_index3 ON test_vacuum_warm(d);
+INSERT INTO test_vacuum_warm VALUES (1, 'a', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (2, 'b', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (3, 'c', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (4, 'd', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (5, 'e', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (6, 'f', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (7, 'g', 100, 200);
+UPDATE test_vacuum_warm SET b = 'u', c = 300 WHERE a = 1;
+UPDATE test_vacuum_warm SET b = 'v', c = 300 WHERE a = 2;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 3;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 4;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 5;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 6;
+-- a plain vacuum cannot clear WARM chains.
+SET enable_seqscan = false;
+SET enable_bitmapscan = false;
+SET seq_page_cost = 10000;
+VACUUM test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+                                        QUERY PLAN                                         
+-------------------------------------------------------------------------------------------
+ Index Only Scan using test_vacuum_warm_index1 on test_vacuum_warm (actual rows=1 loops=1)
+   Index Cond: (b = 'u'::text)
+   Heap Fetches: 1
+(3 rows)
+
+-- Now set vacuum_warmcleanup_index_scale_factor such that only
+-- test_vacuum_warm_index2 can be cleaned up.
+SET vacuum_warmcleanup_index_scale_factor=0.5;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+                                        QUERY PLAN                                         
+-------------------------------------------------------------------------------------------
+ Index Only Scan using test_vacuum_warm_index1 on test_vacuum_warm (actual rows=1 loops=1)
+   Index Cond: (b = 'u'::text)
+   Heap Fetches: 1
+(3 rows)
+
+-- All WARM chains cleaned up, so index-only scan should be used now without
+-- any heap fetches
+SET vacuum_warmcleanup_index_scale_factor=0;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect zero heap-fetches now
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+                                        QUERY PLAN                                         
+-------------------------------------------------------------------------------------------
+ Index Only Scan using test_vacuum_warm_index1 on test_vacuum_warm (actual rows=1 loops=1)
+   Index Cond: (b = 'u'::text)
+   Heap Fetches: 0
+(3 rows)
+
+DROP TABLE test_vacuum_warm;
 -- Toasted heap attributes
 CREATE TABLE toasttest(descr text , cnt int DEFAULT 0, f1 text, f2 text);
 CREATE INDEX testindx1 ON toasttest(descr);
diff --git a/src/test/regress/sql/warm.sql b/src/test/regress/sql/warm.sql
index fb1f93e..9fee54d 100644
--- a/src/test/regress/sql/warm.sql
+++ b/src/test/regress/sql/warm.sql
@@ -285,6 +285,53 @@ SELECT a, b FROM test_toast_warm WHERE b = 104.20;
 
 DROP TABLE test_toast_warm;
 
+-- Test VACUUM
+
+CREATE TABLE test_vacuum_warm (a int unique, b text, c int, d int, e int);
+CREATE INDEX test_vacuum_warm_index1 ON test_vacuum_warm(b);
+CREATE INDEX test_vacuum_warm_index2 ON test_vacuum_warm(c);
+CREATE INDEX test_vacuum_warm_index3 ON test_vacuum_warm(d);
+
+INSERT INTO test_vacuum_warm VALUES (1, 'a', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (2, 'b', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (3, 'c', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (4, 'd', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (5, 'e', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (6, 'f', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (7, 'g', 100, 200);
+
+UPDATE test_vacuum_warm SET b = 'u', c = 300 WHERE a = 1;
+UPDATE test_vacuum_warm SET b = 'v', c = 300 WHERE a = 2;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 3;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 4;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 5;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 6;
+
+-- a plain vacuum cannot clear WARM chains.
+SET enable_seqscan = false;
+SET enable_bitmapscan = false;
+SET seq_page_cost = 10000;
+VACUUM test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+
+-- Now set vacuum_warmcleanup_index_scale_factor such that only
+-- test_vacuum_warm_index2 can be cleaned up.
+SET vacuum_warmcleanup_index_scale_factor=0.5;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+
+
+-- All WARM chains cleaned up, so index-only scan should be used now without
+-- any heap fetches
+SET vacuum_warmcleanup_index_scale_factor=0;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect zero heap-fetches now
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+
+DROP TABLE test_vacuum_warm;
+
 -- Toasted heap attributes
 CREATE TABLE toasttest(descr text , cnt int DEFAULT 0, f1 text, f2 text);
 CREATE INDEX testindx1 ON toasttest(descr);
-- 
2.9.3 (Apple Git-75)

Attachment: 0001-Track-root-line-pointer-v23_v24.patch (application/octet-stream)
From a26a3ee67e98755f9ca5f59bd711584a35496444 Mon Sep 17 00:00:00 2001
From: Pavan Deolasee <pavan.deolasee@gmail.com>
Date: Tue, 28 Feb 2017 10:34:30 +0530
Subject: [PATCH 1/4] Track root line pointer - v23

Store the root line pointer of the WARM chain in the t_ctid.ip_posid field of
the last tuple in the chain and mark the tuple header with HEAP_TUPLE_LATEST
flag to record that fact.
---
 src/backend/access/heap/heapam.c      | 209 ++++++++++++++++++++++++++++------
 src/backend/access/heap/hio.c         |  25 +++-
 src/backend/access/heap/pruneheap.c   | 126 ++++++++++++++++++--
 src/backend/access/heap/rewriteheap.c |  21 +++-
 src/backend/executor/execIndexing.c   |   3 +-
 src/backend/executor/execMain.c       |   4 +-
 src/include/access/heapam.h           |   1 +
 src/include/access/heapam_xlog.h      |   4 +-
 src/include/access/hio.h              |   4 +-
 src/include/access/htup_details.h     |  97 +++++++++++++++-
 10 files changed, 428 insertions(+), 66 deletions(-)

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 0c3e2b0..30262ef 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -94,7 +94,8 @@ static HeapTuple heap_prepare_insert(Relation relation, HeapTuple tup,
 					TransactionId xid, CommandId cid, int options);
 static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
-				HeapTuple newtup, HeapTuple old_key_tup,
+				HeapTuple newtup, OffsetNumber root_offnum,
+				HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
 static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
 							 Bitmapset *interesting_cols,
@@ -2264,13 +2265,13 @@ heap_get_latest_tid(Relation relation,
 		 */
 		if ((tp.t_data->t_infomask & HEAP_XMAX_INVALID) ||
 			HeapTupleHeaderIsOnlyLocked(tp.t_data) ||
-			ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid))
+			HeapTupleHeaderIsHeapLatest(tp.t_data, &ctid))
 		{
 			UnlockReleaseBuffer(buffer);
 			break;
 		}
 
-		ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tp.t_data, &ctid);
 		priorXmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		UnlockReleaseBuffer(buffer);
 	}							/* end of loop */
@@ -2401,6 +2402,7 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	bool		all_visible_cleared = false;
+	OffsetNumber	root_offnum;
 
 	/*
 	 * Fill in tuple header fields, assign an OID, and toast the tuple if
@@ -2439,8 +2441,13 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
-	RelationPutHeapTuple(relation, buffer, heaptup,
-						 (options & HEAP_INSERT_SPECULATIVE) != 0);
+	root_offnum = RelationPutHeapTuple(relation, buffer, heaptup,
+						 (options & HEAP_INSERT_SPECULATIVE) != 0,
+						 InvalidOffsetNumber);
+
+	/* We must not overwrite the speculative insertion token. */
+	if ((options & HEAP_INSERT_SPECULATIVE) == 0)
+		HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 	if (PageIsAllVisible(BufferGetPage(buffer)))
 	{
@@ -2668,6 +2675,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 	Size		saveFreeSpace;
 	bool		need_tuple_data = RelationIsLogicallyLogged(relation);
 	bool		need_cids = RelationIsAccessibleInLogicalDecoding(relation);
+	OffsetNumber	root_offnum;
 
 	needwal = !(options & HEAP_INSERT_SKIP_WAL) && RelationNeedsWAL(relation);
 	saveFreeSpace = RelationGetTargetPageFreeSpace(relation,
@@ -2738,7 +2746,12 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		 * RelationGetBufferForTuple has ensured that the first tuple fits.
 		 * Put that on the page, and then as many other tuples as fit.
 		 */
-		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false);
+		root_offnum = RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false,
+				InvalidOffsetNumber);
+
+		/* Mark this tuple as the latest and also set root offset. */
+		HeapTupleHeaderSetHeapLatest(heaptuples[ndone]->t_data, root_offnum);
+
 		for (nthispage = 1; ndone + nthispage < ntuples; nthispage++)
 		{
 			HeapTuple	heaptup = heaptuples[ndone + nthispage];
@@ -2746,7 +2759,10 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			if (PageGetHeapFreeSpace(page) < MAXALIGN(heaptup->t_len) + saveFreeSpace)
 				break;
 
-			RelationPutHeapTuple(relation, buffer, heaptup, false);
+			root_offnum = RelationPutHeapTuple(relation, buffer, heaptup, false,
+					InvalidOffsetNumber);
+			/* Mark each tuple as the latest and also set root offset. */
+			HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 			/*
 			 * We don't use heap_multi_insert for catalog tuples yet, but
@@ -3018,6 +3034,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	HeapTupleData tp;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	TransactionId new_xmax;
@@ -3028,6 +3045,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	bool		all_visible_cleared = false;
 	HeapTuple	old_key_tuple = NULL;	/* replica identity of the tuple */
 	bool		old_key_copied = false;
+	OffsetNumber	root_offnum;
 
 	Assert(ItemPointerIsValid(tid));
 
@@ -3069,7 +3087,8 @@ heap_delete(Relation relation, ItemPointer tid,
 		LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 	}
 
-	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid));
+	offnum = ItemPointerGetOffsetNumber(tid);
+	lp = PageGetItemId(page, offnum);
 	Assert(ItemIdIsNormal(lp));
 
 	tp.t_tableOid = RelationGetRelid(relation);
@@ -3199,7 +3218,17 @@ l1:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tp.t_data->t_ctid;
+
+		/*
+		 * If we're at the end of the chain, then just return the same TID back
+		 * to the caller. The caller uses that as a hint to know if we have hit
+		 * the end of the chain.
+		 */
+		if (!HeapTupleHeaderIsHeapLatest(tp.t_data, &tp.t_self))
+			HeapTupleHeaderGetNextTid(tp.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&tp.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tp.t_data);
@@ -3248,6 +3277,22 @@ l1:
 							  xid, LockTupleExclusive, true,
 							  &new_xmax, &new_infomask, &new_infomask2);
 
+	/*
+	 * heap_get_root_tuple() may call palloc, which is disallowed once we
+	 * enter the critical section. So check if the root offset is cached in the
+	 * tuple and, if not, fetch that information the hard way before entering
+	 * the critical section.
+	 *
+	 * Unless we are dealing with a pg-upgraded cluster, the root offset
+	 * information should usually be cached, so fetching it adds little
+	 * overhead. Moreover, once a tuple is updated, the information is
+	 * copied to the new version, so we do not keep paying this price
+	 * forever.
+	 */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&tp.t_self));
+
 	START_CRIT_SECTION();
 
 	/*
@@ -3275,8 +3320,10 @@ l1:
 	HeapTupleHeaderClearHotUpdated(tp.t_data);
 	HeapTupleHeaderSetXmax(tp.t_data, new_xmax);
 	HeapTupleHeaderSetCmax(tp.t_data, cid, iscombo);
-	/* Make sure there is no forward chain link in t_ctid */
-	tp.t_data->t_ctid = tp.t_self;
+
+	/* Mark this tuple as the latest tuple in the update chain. */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		HeapTupleHeaderSetHeapLatest(tp.t_data, root_offnum);
 
 	MarkBufferDirty(buffer);
 
@@ -3477,6 +3524,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		old_key_copied = false;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
+	OffsetNumber	root_offnum;
 	MultiXactStatus mxact_status;
 	Buffer		buffer,
 				newbuf,
@@ -3536,6 +3585,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 
 
 	block = ItemPointerGetBlockNumber(otid);
+	offnum = ItemPointerGetOffsetNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
@@ -3839,7 +3889,12 @@ l2:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(oldtup.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = oldtup.t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(oldtup.t_data, &oldtup.t_self))
+			HeapTupleHeaderGetNextTid(oldtup.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&oldtup.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(oldtup.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(oldtup.t_data);
@@ -3979,6 +4034,7 @@ l2:
 		uint16		infomask_lock_old_tuple,
 					infomask2_lock_old_tuple;
 		bool		cleared_all_frozen = false;
+		OffsetNumber	root_offnum;
 
 		/*
 		 * To prevent concurrent sessions from updating the tuple, we have to
@@ -4006,6 +4062,14 @@ l2:
 
 		Assert(HEAP_XMAX_IS_LOCKED_ONLY(infomask_lock_old_tuple));
 
+		/*
+		 * Fetch root offset before entering the critical section. We do this
+		 * only if the information is not already available.
+		 */
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&oldtup.t_self));
+
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
@@ -4020,7 +4084,8 @@ l2:
 		HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 		/* temporarily make it look not-updated, but locked */
-		oldtup.t_data->t_ctid = oldtup.t_self;
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			HeapTupleHeaderSetHeapLatest(oldtup.t_data, root_offnum);
 
 		/*
 		 * Clear all-frozen bit on visibility map if needed. We could
@@ -4179,6 +4244,10 @@ l2:
 										   bms_overlap(modified_attrs, id_attrs),
 										   &old_key_copied);
 
+	if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
@@ -4204,6 +4273,17 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+		/*
+		 * For HOT (or WARM) updated tuples, we store the offset of the root
+		 * line pointer of this chain in the ip_posid field of the new tuple.
+		 * Usually this information will be available in the corresponding
+		 * field of the old tuple. But for aborted updates or pg_upgraded
+		 * databases, we might be seeing the old-style CTID chains and hence
+		 * the information must be obtained the hard way (we should have done
+		 * that before entering the critical section above).
+		 */
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
 	else
 	{
@@ -4211,10 +4291,22 @@ l2:
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		root_offnum = InvalidOffsetNumber;
 	}
 
-	RelationPutHeapTuple(relation, newbuf, heaptup, false);		/* insert new tuple */
-
+	/* insert new tuple */
+	root_offnum = RelationPutHeapTuple(relation, newbuf, heaptup, false,
+									   root_offnum);
+	/*
+	 * Also mark both copies as latest and set the root offset information. If
+	 * we're doing a HOT/WARM update, then we copy the information from the
+	 * old tuple if available, or use the value computed above. For regular
+	 * updates, RelationPutHeapTuple must have returned the actual offset
+	 * number where the new version was inserted, and we store that value
+	 * since the update started a new HOT chain.
+	 */
+	HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
+	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
 	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
@@ -4227,7 +4319,7 @@ l2:
 	HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 	/* record address of new tuple in t_ctid of old one */
-	oldtup.t_data->t_ctid = heaptup->t_self;
+	HeapTupleHeaderSetNextTid(oldtup.t_data, &(heaptup->t_self));
 
 	/* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */
 	if (PageIsAllVisible(BufferGetPage(buffer)))
@@ -4266,6 +4358,7 @@ l2:
 
 		recptr = log_heap_update(relation, buffer,
 								 newbuf, &oldtup, heaptup,
+								 root_offnum,
 								 old_key_tuple,
 								 all_visible_cleared,
 								 all_visible_cleared_new);
@@ -4546,7 +4639,8 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	ItemId		lp;
 	Page		page;
 	Buffer		vmbuffer = InvalidBuffer;
-	BlockNumber block;
+	BlockNumber	block;
+	OffsetNumber	offnum;
 	TransactionId xid,
 				xmax;
 	uint16		old_infomask,
@@ -4555,9 +4649,11 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	bool		first_time = true;
 	bool		have_tuple_lock = false;
 	bool		cleared_all_frozen = false;
+	OffsetNumber	root_offnum;
 
 	*buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	block = ItemPointerGetBlockNumber(tid);
+	offnum = ItemPointerGetOffsetNumber(tid);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -4577,6 +4673,7 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	tuple->t_data = (HeapTupleHeader) PageGetItem(page, lp);
 	tuple->t_len = ItemIdGetLength(lp);
 	tuple->t_tableOid = RelationGetRelid(relation);
+	tuple->t_self = *tid;
 
 l3:
 	result = HeapTupleSatisfiesUpdate(tuple, cid, *buffer);
@@ -4604,7 +4701,11 @@ l3:
 		xwait = HeapTupleHeaderGetRawXmax(tuple->t_data);
 		infomask = tuple->t_data->t_infomask;
 		infomask2 = tuple->t_data->t_infomask2;
-		ItemPointerCopy(&tuple->t_data->t_ctid, &t_ctid);
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &t_ctid);
+		else
+			ItemPointerCopy(tid, &t_ctid);
 
 		LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
 
@@ -5042,7 +5143,12 @@ failed:
 		Assert(result == HeapTupleSelfUpdated || result == HeapTupleUpdated ||
 			   result == HeapTupleWouldBlock);
 		Assert(!(tuple->t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tuple->t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(tid, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tuple->t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tuple->t_data);
@@ -5090,6 +5196,10 @@ failed:
 							  GetCurrentTransactionId(), mode, false,
 							  &xid, &new_infomask, &new_infomask2);
 
+	if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&tuple->t_self));
+
 	START_CRIT_SECTION();
 
 	/*
@@ -5118,7 +5228,10 @@ failed:
 	 * the tuple as well.
 	 */
 	if (HEAP_XMAX_IS_LOCKED_ONLY(new_infomask))
-		tuple->t_data->t_ctid = *tid;
+	{
+		if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+			HeapTupleHeaderSetHeapLatest(tuple->t_data, root_offnum);
+	}
 
 	/* Clear only the all-frozen bit on visibility map if needed */
 	if (PageIsAllVisible(page) &&
@@ -5632,6 +5745,7 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 	bool		cleared_all_frozen = false;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber block;
+	OffsetNumber offnum;
 
 	ItemPointerCopy(tid, &tupid);
 
@@ -5640,6 +5754,8 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 		new_infomask = 0;
 		new_xmax = InvalidTransactionId;
 		block = ItemPointerGetBlockNumber(&tupid);
+		offnum = ItemPointerGetOffsetNumber(&tupid);
+
 		ItemPointerCopy(&tupid, &(mytup.t_self));
 
 		if (!heap_fetch(rel, SnapshotAny, &mytup, &buf, false, NULL))
@@ -5869,7 +5985,7 @@ l4:
 
 		/* if we find the end of update chain, we're done. */
 		if (mytup.t_data->t_infomask & HEAP_XMAX_INVALID ||
-			ItemPointerEquals(&mytup.t_self, &mytup.t_data->t_ctid) ||
+			HeapTupleHeaderIsHeapLatest(mytup.t_data, &mytup.t_self) ||
 			HeapTupleHeaderIsOnlyLocked(mytup.t_data))
 		{
 			result = HeapTupleMayBeUpdated;
@@ -5878,7 +5994,7 @@ l4:
 
 		/* tail recursion */
 		priorXmax = HeapTupleHeaderGetUpdateXid(mytup.t_data);
-		ItemPointerCopy(&(mytup.t_data->t_ctid), &tupid);
+		HeapTupleHeaderGetNextTid(mytup.t_data, &tupid);
 		UnlockReleaseBuffer(buf);
 		if (vmbuffer != InvalidBuffer)
 			ReleaseBuffer(vmbuffer);
@@ -5995,7 +6111,7 @@ heap_finish_speculative(Relation relation, HeapTuple tuple)
 	 * Replace the speculative insertion token with a real t_ctid, pointing to
 	 * itself like it does on regular tuples.
 	 */
-	htup->t_ctid = tuple->t_self;
+	HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 	/* XLOG stuff */
 	if (RelationNeedsWAL(relation))
@@ -6121,8 +6237,7 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	HeapTupleHeaderSetXmin(tp.t_data, InvalidTransactionId);
 
 	/* Clear the speculative insertion token too */
-	tp.t_data->t_ctid = tp.t_self;
-
+	HeapTupleHeaderSetHeapLatest(tp.t_data, ItemPointerGetOffsetNumber(tid));
 	MarkBufferDirty(buffer);
 
 	/*
@@ -7470,6 +7585,7 @@ log_heap_visible(RelFileNode rnode, Buffer heap_buffer, Buffer vm_buffer,
 static XLogRecPtr
 log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup, HeapTuple newtup,
+				OffsetNumber root_offnum,
 				HeapTuple old_key_tuple,
 				bool all_visible_cleared, bool new_all_visible_cleared)
 {
@@ -7590,6 +7706,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	xlrec.new_offnum = ItemPointerGetOffsetNumber(&newtup->t_self);
 	xlrec.new_xmax = HeapTupleHeaderGetRawXmax(newtup->t_data);
 
+	Assert(OffsetNumberIsValid(root_offnum));
+	xlrec.root_offnum = root_offnum;
+
 	bufflags = REGBUF_STANDARD;
 	if (init)
 		bufflags |= REGBUF_WILL_INIT;
@@ -8244,7 +8363,13 @@ heap_xlog_delete(XLogReaderState *record)
 			PageClearAllVisible(page);
 
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = target_tid;
+		if (!HeapTupleHeaderHasRootOffset(htup))
+		{
+			OffsetNumber	root_offnum;
+			root_offnum = heap_get_root_tuple(page, xlrec->offnum); 
+			HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+		}
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8334,7 +8459,8 @@ heap_xlog_insert(XLogReaderState *record)
 		htup->t_hoff = xlhdr.t_hoff;
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
-		htup->t_ctid = target_tid;
+
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->offnum);
 
 		if (PageAddItem(page, (Item) htup, newlen, xlrec->offnum,
 						true, true) == InvalidOffsetNumber)
@@ -8469,8 +8595,8 @@ heap_xlog_multi_insert(XLogReaderState *record)
 			htup->t_hoff = xlhdr->t_hoff;
 			HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 			HeapTupleHeaderSetCmin(htup, FirstCommandId);
-			ItemPointerSetBlockNumber(&htup->t_ctid, blkno);
-			ItemPointerSetOffsetNumber(&htup->t_ctid, offnum);
+
+			HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 			offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 			if (offnum == InvalidOffsetNumber)
@@ -8606,7 +8732,7 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
 		/* Set forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetNextTid(htup, &newtid);
 
 		/* Mark the page as a candidate for pruning */
 		PageSetPrunable(page, XLogRecGetXid(record));
@@ -8739,13 +8865,17 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
-		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = newtid;
 
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
 
+		/*
+		 * Make sure the tuple is marked as the latest and root offset
+		 * information is restored.
+		 */
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->root_offnum);
+
 		if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED)
 			PageClearAllVisible(page);
 
@@ -8808,6 +8938,9 @@ heap_xlog_confirm(XLogReaderState *record)
 		 */
 		ItemPointerSet(&htup->t_ctid, BufferGetBlockNumber(buffer), offnum);
 
+		/* For newly inserted tuple, set root offset to itself. */
+		HeapTupleHeaderSetHeapLatest(htup, offnum);
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8871,11 +9004,17 @@ heap_xlog_lock(XLogReaderState *record)
 		 */
 		if (HEAP_XMAX_IS_LOCKED_ONLY(htup->t_infomask))
 		{
+			ItemPointerData	target_tid;
+
+			ItemPointerSet(&target_tid, BufferGetBlockNumber(buffer), offnum);
 			HeapTupleHeaderClearHotUpdated(htup);
 			/* Make sure there is no forward chain link in t_ctid */
-			ItemPointerSet(&htup->t_ctid,
-						   BufferGetBlockNumber(buffer),
-						   offnum);
+			if (!HeapTupleHeaderHasRootOffset(htup))
+			{
+				OffsetNumber	root_offnum;
+				root_offnum = heap_get_root_tuple(page, offnum);
+				HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+			}
 		}
 		HeapTupleHeaderSetXmax(htup, xlrec->locking_xid);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index 6529fe3..8052519 100644
--- a/src/backend/access/heap/hio.c
+++ b/src/backend/access/heap/hio.c
@@ -31,12 +31,20 @@
  * !!! EREPORT(ERROR) IS DISALLOWED HERE !!!  Must PANIC on failure!!!
  *
  * Note - caller must hold BUFFER_LOCK_EXCLUSIVE on the buffer.
+ *
+ * The caller can optionally tell us to set the root offset to the given value.
+ * Otherwise, the root offset is set to the offset of the new location once it
+ * is known. The former is used while updating an existing tuple, where the
+ * caller tells us the root line pointer of the chain.  The latter is used
+ * when inserting a new row, so the root line pointer is set to the offset
+ * where this tuple is inserted.
  */
-void
+OffsetNumber
 RelationPutHeapTuple(Relation relation,
 					 Buffer buffer,
 					 HeapTuple tuple,
-					 bool token)
+					 bool token,
+					 OffsetNumber root_offnum)
 {
 	Page		pageHeader;
 	OffsetNumber offnum;
@@ -60,17 +68,24 @@ RelationPutHeapTuple(Relation relation,
 	ItemPointerSet(&(tuple->t_self), BufferGetBlockNumber(buffer), offnum);
 
 	/*
-	 * Insert the correct position into CTID of the stored tuple, too (unless
-	 * this is a speculative insertion, in which case the token is held in
-	 * CTID field instead)
+	 * Set block number and the root offset into CTID of the stored tuple, too
+	 * (unless this is a speculative insertion, in which case the token is held
+	 * in CTID field instead).
 	 */
 	if (!token)
 	{
 		ItemId		itemId = PageGetItemId(pageHeader, offnum);
 		Item		item = PageGetItem(pageHeader, itemId);
 
+		/* Copy t_ctid to set the correct block number. */
 		((HeapTupleHeader) item)->t_ctid = tuple->t_self;
+
+		if (!OffsetNumberIsValid(root_offnum))
+			root_offnum = offnum;
+		HeapTupleHeaderSetHeapLatest((HeapTupleHeader) item, root_offnum);
 	}
+
+	return root_offnum;
 }
 
 /*
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index d69a266..f54337c 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -55,6 +55,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
 
+static void heap_get_root_tuples_internal(Page page,
+				OffsetNumber target_offnum, OffsetNumber *root_offsets);
 
 /*
  * Optionally prune and repair fragmentation in the specified page.
@@ -553,6 +555,17 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
 		if (!HeapTupleHeaderIsHotUpdated(htup))
 			break;
 
+
+		/*
+		 * If the tuple was HOT-updated and the update was later
+		 * aborted, someone could mark this tuple as the last tuple
+		 * in the chain, without clearing the HOT-updated flag. So we must
+		 * check if this is the last tuple in the chain and stop following the
+		 * CTID, else we risk getting into an infinite recursion (though
+		 * prstate->marked[] currently protects against that).
+		 */
+		if (HeapTupleHeaderHasRootOffset(htup))
+			break;
 		/*
 		 * Advance to next chain member.
 		 */
@@ -726,27 +739,47 @@ heap_page_prune_execute(Buffer buffer,
 
 
 /*
- * For all items in this page, find their respective root line pointers.
- * If item k is part of a HOT-chain with root at item j, then we set
- * root_offsets[k - 1] = j.
+ * Either for all items in this page or for the given item, find their
+ * respective root line pointers.
+ *
+ * When target_offnum is a valid offset number, the caller is interested in
+ * just one item. In that case, the root line pointer is returned in
+ * root_offsets.
  *
- * The passed-in root_offsets array must have MaxHeapTuplesPerPage entries.
- * We zero out all unused entries.
+ * When target_offnum is InvalidOffsetNumber, the caller wants to know
+ * the root line pointers of all the items in this page. The root_offsets array
+ * must have MaxHeapTuplesPerPage entries in that case. If item k is part of a
+ * HOT-chain with root at item j, then we set root_offsets[k - 1] = j. We zero
+ * out all unused entries.
  *
  * The function must be called with at least share lock on the buffer, to
  * prevent concurrent prune operations.
  *
+ * This is not a cheap function since it must scan through all line pointers
+ * and tuples on the page in order to find the root line pointers. To minimize
+ * the cost, we break out early when target_offnum is specified and its root
+ * line pointer has been found.
+ *
  * Note: The information collected here is valid only as long as the caller
  * holds a pin on the buffer. Once pin is released, a tuple might be pruned
  * and reused by a completely unrelated tuple.
+ *
+ * Note: This function must not be called inside a critical section because it
+ * internally calls HeapTupleHeaderGetUpdateXid which somewhere down the stack
+ * may try to allocate heap memory. Memory allocation is disallowed in a
+ * critical section.
  */
-void
-heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+static void
+heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offsets)
 {
 	OffsetNumber offnum,
 				maxoff;
 
-	MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
+	if (OffsetNumberIsValid(target_offnum))
+		*root_offsets = InvalidOffsetNumber;
+	else
+		MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
 
 	maxoff = PageGetMaxOffsetNumber(page);
 	for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum = OffsetNumberNext(offnum))
@@ -774,9 +807,28 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 
 			/*
 			 * This is either a plain tuple or the root of a HOT-chain.
-			 * Remember it in the mapping.
+			 *
+			 * If the target_offnum is specified and if we found its mapping,
+			 * return.
 			 */
-			root_offsets[offnum - 1] = offnum;
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (target_offnum == offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember mapping for any other item. The
+				 * root_offsets array may not even have room for them, so be
+				 * careful not to write past the array.
+				 */
+			}
+			else
+			{
+				/* Remember it in the mapping. */
+				root_offsets[offnum - 1] = offnum;
+			}
 
 			/* If it's not the start of a HOT-chain, we're done with it */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
@@ -817,15 +869,65 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 				!TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(htup)))
 				break;
 
-			/* Remember the root line pointer for this item */
-			root_offsets[nextoffnum - 1] = offnum;
+			/*
+			 * If target_offnum is specified and we found its mapping, return.
+			 */
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (nextoffnum == target_offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember mapping for any other item. The
+				 * root_offsets array may not even has place for them. So be
+				 * careful about not writing past the array.
+				 */
+			}
+			else
+			{
+				/* Remember the root line pointer for this item. */
+				root_offsets[nextoffnum - 1] = offnum;
+			}
 
 			/* Advance to next chain member, if any */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				break;
 
+			/*
+			 * If the tuple was HOT-updated and the update was later aborted,
+			 * someone could mark this tuple as the last tuple in the chain
+			 * and store root offset in CTID, without clearing the HOT-updated
+			 * flag. So we must check if CTID is actually root offset and break
+			 * to avoid infinite recursion.
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
 		}
 	}
 }
+
+/*
+ * Get root line pointer for the given tuple.
+ */
+OffsetNumber
+heap_get_root_tuple(Page page, OffsetNumber target_offnum)
+{
+	OffsetNumber offnum = InvalidOffsetNumber;
+	heap_get_root_tuples_internal(page, target_offnum, &offnum);
+	return offnum;
+}
+
+/*
+ * Get root line pointers for all tuples in the page
+ */
+void
+heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+{
+	heap_get_root_tuples_internal(page, InvalidOffsetNumber,
+			root_offsets);
+}
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index d7f65a5..2d3ae9b 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -421,14 +421,18 @@ rewrite_heap_tuple(RewriteState state,
 	 */
 	if (!((old_tuple->t_data->t_infomask & HEAP_XMAX_INVALID) ||
 		  HeapTupleHeaderIsOnlyLocked(old_tuple->t_data)) &&
-		!(ItemPointerEquals(&(old_tuple->t_self),
-							&(old_tuple->t_data->t_ctid))))
+		!(HeapTupleHeaderIsHeapLatest(old_tuple->t_data, &old_tuple->t_self)))
 	{
 		OldToNewMapping mapping;
 
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
-		hashkey.tid = old_tuple->t_data->t_ctid;
+
+		/* 
+		 * We've already checked that this is not the last tuple in the chain,
+		 * so fetch the next TID in the chain.
+		 */
+		HeapTupleHeaderGetNextTid(old_tuple->t_data, &hashkey.tid);
 
 		mapping = (OldToNewMapping)
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -441,7 +445,7 @@ rewrite_heap_tuple(RewriteState state,
 			 * set the ctid of this tuple to point to the new location, and
 			 * insert it right away.
 			 */
-			new_tuple->t_data->t_ctid = mapping->new_tid;
+			HeapTupleHeaderSetNextTid(new_tuple->t_data, &mapping->new_tid);
 
 			/* We don't need the mapping entry anymore */
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -527,7 +531,7 @@ rewrite_heap_tuple(RewriteState state,
 				new_tuple = unresolved->tuple;
 				free_new = true;
 				old_tid = unresolved->old_tid;
-				new_tuple->t_data->t_ctid = new_tid;
+				HeapTupleHeaderSetNextTid(new_tuple->t_data, &new_tid);
 
 				/*
 				 * We don't need the hash entry anymore, but don't free its
@@ -733,7 +737,12 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		onpage_tup->t_ctid = tup->t_self;
+		/* 
+		 * Set t_ctid just to ensure that block number is copied correctly, but
+		 * then immediately mark the tuple as the latest.
+		 */
+		HeapTupleHeaderSetNextTid(onpage_tup, &tup->t_self);
+		HeapTupleHeaderSetHeapLatest(onpage_tup, newoff);
 	}
 
 	/* If heaptup is a private copy, release it. */
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 108060a..c3f1873 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -785,7 +785,8 @@ retry:
 			  DirtySnapshot.speculativeToken &&
 			  TransactionIdPrecedes(GetCurrentTransactionId(), xwait))))
 		{
-			ctid_wait = tup->t_data->t_ctid;
+			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
+				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index f2995f2..73e9c4a 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -2623,7 +2623,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		 * As above, it should be safe to examine xmax and t_ctid without the
 		 * buffer content lock, because they can't be changing.
 		 */
-		if (ItemPointerEquals(&tuple.t_self, &tuple.t_data->t_ctid))
+		if (HeapTupleHeaderIsHeapLatest(tuple.t_data, &tuple.t_self))
 		{
 			/* deleted, so forget about it */
 			ReleaseBuffer(buffer);
@@ -2631,7 +2631,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		}
 
 		/* updated, so look at the updated row */
-		tuple.t_self = tuple.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tuple.t_data, &tuple.t_self);
 		/* updated row should have xmin matching this xmax */
 		priorXmax = HeapTupleHeaderGetUpdateXid(tuple.t_data);
 		ReleaseBuffer(buffer);
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 7e85510..5540e12 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -190,6 +190,7 @@ extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
+extern OffsetNumber heap_get_root_tuple(Page page, OffsetNumber target_offnum);
 extern void heap_get_root_tuples(Page page, OffsetNumber *root_offsets);
 
 /* in heap/syncscan.c */
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index b285f17..e6019d5 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -193,6 +193,8 @@ typedef struct xl_heap_update
 	uint8		flags;
 	TransactionId new_xmax;		/* xmax of the new tuple */
 	OffsetNumber new_offnum;	/* new tuple's offset */
+	OffsetNumber root_offnum;	/* offset of the root line pointer in case of
+								   HOT or WARM update */
 
 	/*
 	 * If XLOG_HEAP_CONTAINS_OLD_TUPLE or XLOG_HEAP_CONTAINS_OLD_KEY flags are
@@ -200,7 +202,7 @@ typedef struct xl_heap_update
 	 */
 } xl_heap_update;
 
-#define SizeOfHeapUpdate	(offsetof(xl_heap_update, new_offnum) + sizeof(OffsetNumber))
+#define SizeOfHeapUpdate	(offsetof(xl_heap_update, root_offnum) + sizeof(OffsetNumber))
 
 /*
  * This is what we need to know about vacuum page cleanup/redirect
diff --git a/src/include/access/hio.h b/src/include/access/hio.h
index 2824f23..921cb37 100644
--- a/src/include/access/hio.h
+++ b/src/include/access/hio.h
@@ -35,8 +35,8 @@ typedef struct BulkInsertStateData
 }	BulkInsertStateData;
 
 
-extern void RelationPutHeapTuple(Relation relation, Buffer buffer,
-					 HeapTuple tuple, bool token);
+extern OffsetNumber RelationPutHeapTuple(Relation relation, Buffer buffer,
+					 HeapTuple tuple, bool token, OffsetNumber root_offnum);
 extern Buffer RelationGetBufferForTuple(Relation relation, Size len,
 						  Buffer otherBuffer, int options,
 						  BulkInsertState bistate,
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 7b6285d..24433c7 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,13 +260,19 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x1800 are available */
+/* bit 0x0800 is available */
+#define HEAP_LATEST_TUPLE		0x1000	/*
+										 * This is the last tuple in chain and
+										 * ip_posid points to the root line
+										 * pointer
+										 */
 #define HEAP_KEYS_UPDATED		0x2000	/* tuple was updated and key cols
 										 * modified, or tuple deleted */
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xE000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+
 
 /*
  * HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins.  It is
@@ -504,6 +510,43 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+/*
+ * Mark this as the last tuple in the HOT chain. Before PG v10 we used to store
+ * the TID of the tuple itself in the t_ctid field to mark the end of the
+ * chain. But starting with PG v10, we use a special flag HEAP_LATEST_TUPLE to
+ * identify the last tuple and store the root line pointer of the HOT chain in
+ * the t_ctid field instead.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderSetHeapLatest(tup, offnum) \
+do { \
+	AssertMacro(OffsetNumberIsValid(offnum)); \
+	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE; \
+	ItemPointerSetOffsetNumber(&(tup)->t_ctid, (offnum)); \
+} while (0)
+
+#define HeapTupleHeaderClearHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 &= ~HEAP_LATEST_TUPLE \
+)
+
+/*
+ * Starting from PostgreSQL 10, the latest tuple in an update chain has
+ * HEAP_LATEST_TUPLE set; but tuples upgraded from earlier versions do not.
+ * For those, we determine whether a tuple is latest by testing that its t_ctid
+ * points to itself.
+ *
+ * Note: beware of multiple evaluations of "tup" and "tid" arguments.
+ */
+#define HeapTupleHeaderIsHeapLatest(tup, tid) \
+( \
+  (((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0) || \
+  ((ItemPointerGetBlockNumber(&(tup)->t_ctid) == ItemPointerGetBlockNumber(tid)) && \
+   (ItemPointerGetOffsetNumber(&(tup)->t_ctid) == ItemPointerGetOffsetNumber(tid))) \
+)
+
+
 #define HeapTupleHeaderSetHeapOnly(tup) \
 ( \
   (tup)->t_infomask2 |= HEAP_ONLY_TUPLE \
@@ -542,6 +585,56 @@ do { \
 
 
 /*
+ * Set the t_ctid chain and also clear the HEAP_LATEST_TUPLE flag since we
+ * now have a new tuple in the chain and this is no longer the last tuple of
+ * the chain.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderSetNextTid(tup, tid) \
+do { \
+		ItemPointerCopy((tid), &((tup)->t_ctid)); \
+		HeapTupleHeaderClearHeapLatest((tup)); \
+} while (0)
+
+/*
+ * Get TID of next tuple in the update chain. Caller must have checked that
+ * we are not already at the end of the chain because in that case t_ctid may
+ * actually store the root line pointer of the HOT chain.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderGetNextTid(tup, next_ctid) \
+do { \
+	AssertMacro(!((tup)->t_infomask2 & HEAP_LATEST_TUPLE)); \
+	ItemPointerCopy(&(tup)->t_ctid, (next_ctid)); \
+} while (0)
+
+/*
+ * Get the root line pointer of the HOT chain. The caller should have confirmed
+ * that the root offset is cached before calling this macro.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderGetRootOffset(tup) \
+( \
+	AssertMacro(((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0), \
+	ItemPointerGetOffsetNumber(&(tup)->t_ctid) \
+)
+
+/*
+ * Return whether the tuple has a cached root offset.  We don't use
+ * HeapTupleHeaderIsHeapLatest because that one also considers the case of
+ * t_ctid pointing to itself, for tuples migrated from pre v10 clusters. Here
+ * we are interested only in tuples that are marked with the
+ * HEAP_LATEST_TUPLE flag.
+ */
+#define HeapTupleHeaderHasRootOffset(tup) \
+( \
+	((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0 \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
-- 
2.9.3 (Apple Git-75)

0002-Free-3-bits-in-ip_posid-field-of-the-ItemPointer_v24.patchapplication/octet-stream; name=0002-Free-3-bits-in-ip_posid-field-of-the-ItemPointer_v24.patchDownload
From 34a7d7ba3408db1b55643fa3c44d3a9f2e461a37 Mon Sep 17 00:00:00 2001
From: Pavan Deolasee <pavan.deolasee@gmail.com>
Date: Wed, 29 Mar 2017 10:44:01 +0530
Subject: [PATCH 2/4] Free 3-bits in ip_posid field of the ItemPointerData.

We can use those bits to store other information. Right now only index
access methods use them, to store the WARM/CLEAR property of an index pointer.
---
 src/include/access/ginblock.h     |  3 ++-
 src/include/access/htup_details.h |  2 +-
 src/include/storage/itemptr.h     | 30 +++++++++++++++++++++++++++---
 src/include/storage/off.h         | 11 ++++++++++-
 4 files changed, 40 insertions(+), 6 deletions(-)

diff --git a/src/include/access/ginblock.h b/src/include/access/ginblock.h
index 438912c..316ab65 100644
--- a/src/include/access/ginblock.h
+++ b/src/include/access/ginblock.h
@@ -135,7 +135,8 @@ typedef struct GinMetaPageData
 	(ItemPointerGetBlockNumberNoCheck(pointer))
 
 #define GinItemPointerGetOffsetNumber(pointer) \
-	(ItemPointerGetOffsetNumberNoCheck(pointer))
+	(ItemPointerGetOffsetNumberNoCheck(pointer) | \
+	 (ItemPointerGetFlags(pointer) << OffsetNumberBits))
 
 #define GinItemPointerSetBlockNumber(pointer, blkno) \
 	(ItemPointerSetBlockNumber((pointer), (blkno)))
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 24433c7..4d614b7 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -288,7 +288,7 @@ struct HeapTupleHeaderData
  * than MaxOffsetNumber, so that it can be distinguished from a valid
  * offset number in a regular item pointer.
  */
-#define SpecTokenOffsetNumber		0xfffe
+#define SpecTokenOffsetNumber		OffsetNumberPrev(OffsetNumberMask)
 
 /*
  * HeapTupleHeader accessor macros
diff --git a/src/include/storage/itemptr.h b/src/include/storage/itemptr.h
index c21d2ad..74eed4e 100644
--- a/src/include/storage/itemptr.h
+++ b/src/include/storage/itemptr.h
@@ -57,7 +57,7 @@ typedef ItemPointerData *ItemPointer;
  *		True iff the disk item pointer is not NULL.
  */
 #define ItemPointerIsValid(pointer) \
-	((bool) (PointerIsValid(pointer) && ((pointer)->ip_posid != 0)))
+	((bool) (PointerIsValid(pointer) && (((pointer)->ip_posid & OffsetNumberMask) != 0)))
 
 /*
  * ItemPointerGetBlockNumberNoCheck
@@ -84,7 +84,7 @@ typedef ItemPointerData *ItemPointer;
  */
 #define ItemPointerGetOffsetNumberNoCheck(pointer) \
 ( \
-	(pointer)->ip_posid \
+	((pointer)->ip_posid & OffsetNumberMask) \
 )
 
 /*
@@ -98,6 +98,30 @@ typedef ItemPointerData *ItemPointer;
 )
 
 /*
+ * Get the flags stored in high order bits in the OffsetNumber.
+ */
+#define ItemPointerGetFlags(pointer) \
+( \
+	((pointer)->ip_posid & ~OffsetNumberMask) >> OffsetNumberBits \
+)
+
+/*
+ * Set the flag bits. We first left-shift since flags are defined starting 0x01
+ */
+#define ItemPointerSetFlags(pointer, flags) \
+( \
+	((pointer)->ip_posid |= ((flags) << OffsetNumberBits)) \
+)
+
+/*
+ * Clear all flags.
+ */
+#define ItemPointerClearFlags(pointer) \
+( \
+	((pointer)->ip_posid &= OffsetNumberMask) \
+)
+
+/*
  * ItemPointerSet
  *		Sets a disk item pointer to the specified block and offset.
  */
@@ -105,7 +129,7 @@ typedef ItemPointerData *ItemPointer;
 ( \
 	AssertMacro(PointerIsValid(pointer)), \
 	BlockIdSet(&((pointer)->ip_blkid), blockNumber), \
-	(pointer)->ip_posid = offNum \
+	(pointer)->ip_posid = (offNum) \
 )
 
 /*
diff --git a/src/include/storage/off.h b/src/include/storage/off.h
index fe8638f..f058fe1 100644
--- a/src/include/storage/off.h
+++ b/src/include/storage/off.h
@@ -26,7 +26,16 @@ typedef uint16 OffsetNumber;
 #define InvalidOffsetNumber		((OffsetNumber) 0)
 #define FirstOffsetNumber		((OffsetNumber) 1)
 #define MaxOffsetNumber			((OffsetNumber) (BLCKSZ / sizeof(ItemIdData)))
-#define OffsetNumberMask		(0xffff)		/* valid uint16 bits */
+
+/*
+ * The biggest BLCKSZ we support is 32kB, and each ItemId takes 6 bytes.
+ * That limits the number of line pointers in a page to 32kB/6B = 5461.
+ * Therefore, 13 bits in OffsetNumber are enough to represent all valid
+ * on-disk line pointers.  Hence, we can reserve the high-order bits in
+ * OffsetNumber for other purposes.
+ */
+#define OffsetNumberBits		13
+#define OffsetNumberMask		((((uint16) 1) << OffsetNumberBits) - 1)
 
 /* ----------------
  *		support macros
-- 
2.9.3 (Apple Git-75)

0003-Main-WARM-patch_v24.patchapplication/octet-stream; name=0003-Main-WARM-patch_v24.patchDownload
From ba60f568a0e54d1fad025a830272b8397fb216f2 Mon Sep 17 00:00:00 2001
From: Pavan Deolasee <pavan.deolasee@gmail.com>
Date: Sun, 26 Mar 2017 15:03:45 +0530
Subject: [PATCH 3/4] Main WARM patch.

We perform a WARM update if the update modifies at least one index but
not all of them, and there is enough free space in the heap block to
hold the new version of the tuple.

The update works much like a HOT update, but each index whose key values
have changed must receive another index entry, pointing to the same root
of the HOT chain. Chains which may have more than one index pointer in
at least one index are called WARM chains. Since there are now two index
pointers to the same chain, we must do a recheck to confirm whether a
given index pointer should see the tuple. HOT pruning and the other
techniques remain the same.

WARM chains must subsequently be cleaned up by removing the additional
index pointers. Once cleaned up, they can be WARM updated again and
index-only scans will work.

To avoid wasteful work, we only do a WARM update if fewer than 50% of
the indexes need updates. Above that threshold a WARM update probably
does not make sense, because most indexes will receive an update anyway
and the cleanup cost will be high.
---
 contrib/bloom/blutils.c                     |   1 +
 contrib/bloom/blvacuum.c                    |   2 +-
 src/backend/access/brin/brin.c              |   1 +
 src/backend/access/gin/ginvacuum.c          |   3 +-
 src/backend/access/gist/gist.c              |   1 +
 src/backend/access/gist/gistvacuum.c        |   3 +-
 src/backend/access/hash/hash.c              |  18 +-
 src/backend/access/hash/hashsearch.c        |   5 +
 src/backend/access/heap/README.WARM         | 308 ++++++++++
 src/backend/access/heap/heapam.c            | 634 +++++++++++++++++--
 src/backend/access/heap/pruneheap.c         |   9 +-
 src/backend/access/heap/rewriteheap.c       |  12 +-
 src/backend/access/heap/tuptoaster.c        |   3 +-
 src/backend/access/index/genam.c            |   2 +
 src/backend/access/index/indexam.c          |  95 ++-
 src/backend/access/nbtree/nbtinsert.c       | 228 ++++---
 src/backend/access/nbtree/nbtpage.c         |  56 +-
 src/backend/access/nbtree/nbtree.c          |  76 ++-
 src/backend/access/nbtree/nbtutils.c        |  93 +++
 src/backend/access/nbtree/nbtxlog.c         |  27 +-
 src/backend/access/rmgrdesc/heapdesc.c      |  26 +-
 src/backend/access/rmgrdesc/nbtdesc.c       |   4 +-
 src/backend/access/spgist/spgutils.c        |   1 +
 src/backend/access/spgist/spgvacuum.c       |  12 +-
 src/backend/catalog/index.c                 |  71 ++-
 src/backend/catalog/indexing.c              |  60 +-
 src/backend/catalog/system_views.sql        |   4 +-
 src/backend/commands/constraint.c           |   7 +-
 src/backend/commands/copy.c                 |   3 +
 src/backend/commands/indexcmds.c            |  17 +-
 src/backend/commands/vacuumlazy.c           | 649 +++++++++++++++++++-
 src/backend/executor/execIndexing.c         |  21 +-
 src/backend/executor/execReplication.c      |  30 +-
 src/backend/executor/nodeBitmapHeapscan.c   |  21 +-
 src/backend/executor/nodeIndexscan.c        |   4 +-
 src/backend/executor/nodeModifyTable.c      |  27 +-
 src/backend/postmaster/pgstat.c             |   7 +-
 src/backend/replication/logical/decode.c    |  13 +-
 src/backend/storage/page/bufpage.c          |  23 +
 src/backend/utils/adt/pgstatfuncs.c         |  31 +
 src/backend/utils/cache/relcache.c          | 113 +++-
 src/backend/utils/time/combocid.c           |   4 +-
 src/backend/utils/time/tqual.c              |  24 +-
 src/include/access/amapi.h                  |  18 +
 src/include/access/genam.h                  |  22 +-
 src/include/access/heapam.h                 |  30 +-
 src/include/access/heapam_xlog.h            |  24 +-
 src/include/access/htup_details.h           | 116 +++-
 src/include/access/nbtree.h                 |  21 +-
 src/include/access/nbtxlog.h                |  10 +-
 src/include/access/relscan.h                |   5 +-
 src/include/catalog/index.h                 |   7 +
 src/include/catalog/pg_proc.h               |   4 +
 src/include/commands/progress.h             |   1 +
 src/include/executor/executor.h             |   1 +
 src/include/executor/nodeIndexscan.h        |   1 -
 src/include/nodes/execnodes.h               |   1 +
 src/include/pgstat.h                        |   4 +-
 src/include/storage/bufpage.h               |   2 +
 src/include/utils/rel.h                     |   7 +
 src/include/utils/relcache.h                |   5 +-
 src/test/regress/expected/alter_generic.out |   4 +-
 src/test/regress/expected/rules.out         |  12 +-
 src/test/regress/expected/warm.out          | 914 ++++++++++++++++++++++++++++
 src/test/regress/parallel_schedule          |   2 +
 src/test/regress/sql/warm.sql               | 344 +++++++++++
 66 files changed, 3963 insertions(+), 341 deletions(-)
 create mode 100644 src/backend/access/heap/README.WARM
 create mode 100644 src/test/regress/expected/warm.out
 create mode 100644 src/test/regress/sql/warm.sql

diff --git a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c
index f2eda67..b356e2b 100644
--- a/contrib/bloom/blutils.c
+++ b/contrib/bloom/blutils.c
@@ -142,6 +142,7 @@ blhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/contrib/bloom/blvacuum.c b/contrib/bloom/blvacuum.c
index 04abd0f..ff50361 100644
--- a/contrib/bloom/blvacuum.c
+++ b/contrib/bloom/blvacuum.c
@@ -88,7 +88,7 @@ blbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 		while (itup < itupEnd)
 		{
 			/* Do we have to delete this tuple? */
-			if (callback(&itup->heapPtr, callback_state))
+			if (callback(&itup->heapPtr, false, callback_state) == IBDCR_DELETE)
 			{
 				/* Yes; adjust count of tuples that will be left on page */
 				BloomPageGetOpaque(page)->maxoff--;
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index b22563b..b4a1465 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -116,6 +116,7 @@ brinhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/gin/ginvacuum.c b/src/backend/access/gin/ginvacuum.c
index 26c077a..46ed4fe 100644
--- a/src/backend/access/gin/ginvacuum.c
+++ b/src/backend/access/gin/ginvacuum.c
@@ -56,7 +56,8 @@ ginVacuumItemPointers(GinVacuumState *gvs, ItemPointerData *items,
 	 */
 	for (i = 0; i < nitem; i++)
 	{
-		if (gvs->callback(items + i, gvs->callback_state))
+		if (gvs->callback(items + i, false, gvs->callback_state) ==
+				IBDCR_DELETE)
 		{
 			gvs->result->tuples_removed += 1;
 			if (!tmpitems)
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c
index 6593771..843389b 100644
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -94,6 +94,7 @@ gisthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/gist/gistvacuum.c b/src/backend/access/gist/gistvacuum.c
index 77d9d12..0955db6 100644
--- a/src/backend/access/gist/gistvacuum.c
+++ b/src/backend/access/gist/gistvacuum.c
@@ -202,7 +202,8 @@ gistbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 				iid = PageGetItemId(page, i);
 				idxtuple = (IndexTuple) PageGetItem(page, iid);
 
-				if (callback(&(idxtuple->t_tid), callback_state))
+				if (callback(&(idxtuple->t_tid), false, callback_state) ==
+						IBDCR_DELETE)
 					todelete[ntodelete++] = i;
 				else
 					stats->num_index_tuples += 1;
diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c
index 34cc08f..ad56d6d 100644
--- a/src/backend/access/hash/hash.c
+++ b/src/backend/access/hash/hash.c
@@ -75,6 +75,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->ambuild = hashbuild;
 	amroutine->ambuildempty = hashbuildempty;
 	amroutine->aminsert = hashinsert;
+	amroutine->amwarminsert = NULL;
 	amroutine->ambulkdelete = hashbulkdelete;
 	amroutine->amvacuumcleanup = hashvacuumcleanup;
 	amroutine->amcanreturn = NULL;
@@ -92,6 +93,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -807,6 +809,7 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 			IndexTuple	itup;
 			Bucket		bucket;
 			bool		kill_tuple = false;
+			IndexBulkDeleteCallbackResult	result;
 
 			itup = (IndexTuple) PageGetItem(page,
 											PageGetItemId(page, offno));
@@ -816,13 +819,18 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 			 * To remove the dead tuples, we strictly want to rely on results
 			 * of callback function.  refer btvacuumpage for detailed reason.
 			 */
-			if (callback && callback(htup, callback_state))
+			if (callback)
 			{
-				kill_tuple = true;
-				if (tuples_removed)
-					*tuples_removed += 1;
+				result = callback(htup, false, callback_state);
+				if (result == IBDCR_DELETE)
+				{
+					kill_tuple = true;
+					if (tuples_removed)
+						*tuples_removed += 1;
+				}
 			}
-			else if (split_cleanup)
+
+			if (!kill_tuple && split_cleanup)
 			{
 				/* delete the tuples that are moved by split. */
 				bucket = _hash_hashkey2bucket(_hash_get_indextuple_hashkey(itup),
diff --git a/src/backend/access/hash/hashsearch.c b/src/backend/access/hash/hashsearch.c
index 2d92049..330ccc5 100644
--- a/src/backend/access/hash/hashsearch.c
+++ b/src/backend/access/hash/hashsearch.c
@@ -59,6 +59,8 @@ _hash_next(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
 	return true;
 }
 
@@ -367,6 +369,9 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
+
 	return true;
 }
 
diff --git a/src/backend/access/heap/README.WARM b/src/backend/access/heap/README.WARM
new file mode 100644
index 0000000..7c93a70
--- /dev/null
+++ b/src/backend/access/heap/README.WARM
@@ -0,0 +1,308 @@
+src/backend/access/heap/README.WARM
+
+Write Amplification Reduction Method (WARM)
+===========================================
+
+The Heap Only Tuple (HOT) feature greatly reduced redundant index
+entries and allowed re-use of the dead space occupied by previously
+updated or deleted tuples (see src/backend/access/heap/README.HOT).
+
+One of the necessary conditions for a HOT update is that it must not
+change any column used by any of the indexes on the table. This
+condition is sometimes hard to meet, especially for complex workloads
+with several indexes on large yet frequently updated tables. Worse,
+even when only one or two indexed columns are updated, the resulting
+non-HOT update still inserts a new entry in every index on the table,
+irrespective of whether the key pertaining to a given index changed or
+not.
+
+WARM is a technique devised to address these problems.
+
+
+Update Chains With Multiple Index Entries Pointing to the Root
+--------------------------------------------------------------
+
+When a non-HOT update is caused by an index key change, a new index
+entry must be inserted for the changed index. But if the index key
+hasn't changed for other indexes, we don't really need to insert a new
+entry. Even though the existing index entry is pointing to the old
+tuple, the new tuple is reachable via the t_ctid chain. To keep things
+simple, a WARM update requires that the heap block have enough free
+space to store the new version of the tuple, the same requirement as
+for HOT updates.
+
+In WARM, we ensure that every index entry always points to the root of
+the WARM chain. In fact, a WARM chain looks exactly like a HOT chain
+except for the fact that there could be multiple index entries pointing
+to the root of the chain. So when a WARM update inserts a new entry in
+an index for the updated tuple, the new entry is made to point to the
+root of the WARM chain.
+
+For example, consider a table with two columns and one index on each
+column. When a tuple is first inserted into the table, each index has
+exactly one entry pointing to the tuple.
+
+	lp [1]
+	[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's entry (aaaa) also points to 1
+
+Now if the tuple's second column is updated and there is room on the
+page, we perform a WARM update: Index1 gets no new entry, and
+Index2's new entry still points to the root line pointer of the
+chain.
+
+	lp [1]  [2]
+	[1111, aaaa]->[1111, bbbb]
+
+	Index1's entry (1111) points to 1
+	Index2's old entry (aaaa) points to 1
+	Index2's new entry (bbbb) also points to 1
+
+"An update chain which has more than one index entry pointing to its
+root line pointer is called a WARM chain, and the action that creates
+a WARM chain is called a WARM update."
+
+Since all index entries always point to the root of the WARM chain,
+even when there is more than one of them, WARM chains can be pruned
+and dead tuples can be removed without any need for corresponding
+index cleanup.
+
+While this solves the problem of pruning dead tuples from a HOT/WARM
+chain, it also opens up a new technical challenge because now we have a
+situation where a heap tuple is reachable from multiple index entries,
+each having a different index key. While MVCC still ensures that only
+valid tuples are returned, a tuple with a wrong index key may be
+returned because of wrong index entries. In the above example, tuple
+[1111, bbbb] is reachable from both keys (aaaa) as well as (bbbb). For
+this reason, tuples returned from a WARM chain must always be rechecked
+for index key-match.
+
+Recheck Index Key Against Heap Tuple
+------------------------------------
+
+Since every Index AM has its own notion of index tuples, each Index AM
+must implement its own method to recheck heap tuples. For example, a
+hash index stores the hash value of the column, so the recheck routine
+for the hash AM must first compute the hash value of the heap attribute
+and then compare it against the value stored in the index tuple.
+
+The patch currently implements recheck routines for hash and btree
+indexes. If a table has an index whose AM does not provide a recheck
+routine, WARM updates are disabled on that table.
+
+Problem With Duplicate (key, ctid) Index Entries
+------------------------------------------------
+
+The index-key recheck logic works only as long as no two index entries
+with the same key point to the same WARM chain. Otherwise, the same
+valid tuple becomes reachable via multiple index entries while still
+satisfying the index key checks. In the above example, if the tuple
+[1111, bbbb] is again updated to [1111, aaaa] and we insert a new index
+entry (aaaa) pointing to the root line pointer, we end up with the
+following structure:
+
+	lp [1]  [2]  [3]
+	[1111, aaaa]->[1111, bbbb]->[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's oldest entry (aaaa) points to 1
+	Index2's old entry (bbbb) also points to 1
+	Index2's new entry (aaaa) also points to 1
+
+We must solve this problem to ensure that the same tuple is not
+reachable via multiple index pointers. There are a couple of ways to
+address this issue:
+
+1. Do not allow WARM update to a tuple from a WARM chain. This
+guarantees that there can never be duplicate index entries to the same
+root line pointer because we must have checked for old and new index
+keys while doing the first WARM update.
+
+2. Do not allow duplicate (key, ctid) index pointers. In the above
+example, since (aaaa, 1) already exists in the index, we must not insert
+a duplicate index entry.
+
+The patch currently implements option 1, i.e. it does not allow WARM
+updates to a tuple that already belongs to a WARM chain. HOT updates
+are fine because they do not add a new index entry.
+
+Even with this restriction, WARM is a significant improvement because
+the number of regular (non-HOT) UPDATEs is roughly halved.
+
+Expression and Partial Indexes
+------------------------------
+
+Expressions may evaluate to the same value even if the underlying column
+values have changed. A simple example is an index on "lower(col)", which
+returns the same value if the new heap value differs only in case. So
+for expression indexes we cannot rely solely on the heap column check to
+decide whether or not to insert a new index entry. Similarly, for
+partial indexes, the predicate expression must be evaluated to decide
+whether or not a new index entry is needed when columns referenced in
+the predicate change.
+
+(None of these things are currently implemented and we squarely disallow
+WARM update if a column from expression indexes or predicate has
+changed).
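The lower(col) case can be illustrated with a toy comparison, hypothetical and outside the patch, showing why the heap column check alone is not enough:

```c
#include <ctype.h>

/*
 * Illustration only: the heap value changes from "Foo" to "FOO", so a
 * plain column comparison reports a change, but an index on lower(col)
 * would compute the identical key "foo" for both versions. str_lower is
 * a made-up helper, not patch code.
 */
static void
str_lower(const char *src, char *dst)
{
	for (; *src; src++)
		*dst++ = (char) tolower((unsigned char) *src);
	*dst = '\0';
}
```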
+
+
+Efficiently Finding the Root Line Pointer
+-----------------------------------------
+
+During a WARM update, we must be able to find the root line pointer of
+the tuple being updated. The t_ctid field in the heap tuple header is
+normally used to find the next tuple in the update chain, but the tuple
+being updated must be the last tuple in the chain, and in that case
+t_ctid simply points to the tuple itself. So we can reuse t_ctid to
+store additional information in the last tuple of the update chain,
+provided the fact that it is the last tuple is recorded elsewhere.
+
+We now utilize another bit from t_infomask2 to explicitly identify that
+this is the last tuple in the update chain.
+
+HEAP_LATEST_TUPLE - When this bit is set, the tuple is the last tuple in
+the update chain. The OffsetNumber part of t_ctid points to the root
+line pointer of the chain when HEAP_LATEST_TUPLE flag is set.
+
+If the UPDATE operation is aborted, the last tuple in the update chain
+becomes dead, and the root line pointer information stored in the tuple
+that remains the last valid tuple in the chain is lost. In such rare
+cases, the root line pointer must be found the hard way, by scanning the
+entire heap page.
+
+Tracking WARM Chains
+--------------------
+
+When a tuple is WARM updated, the old tuple, the new tuple, and every
+subsequent tuple in the chain are marked with a special HEAP_WARM_UPDATED
+flag. We use the last remaining bit in t_infomask2 to store this information.
+
+When a tuple is returned from a WARM chain, the caller must do additional
+checks to ensure that the tuple matches the index key. Even if the tuple
+precedes the WARM update in the chain, it must still be rechecked for an
+index key match (this covers the case where an old tuple is returned via
+the new index key). So we must follow the update chain to the end every
+time to check whether this is a WARM chain.
+
+Converting WARM chains back to HOT chains (VACUUM ?)
+----------------------------------------------------
+
+The current implementation of WARM allows only one WARM update per
+chain. This simplifies the design and addresses certain issues around
+duplicate key scans. But it also implies that the benefit of WARM can be
+no more than 50%, which is still significant; if we could convert WARM
+chains back to normal HOT chains, we could do far more WARM updates.
+
+A distinct property of a WARM chain is that at least one index has more
+than one live index entry pointing to the root of the chain. In other
+words, if we can remove the duplicate entry from every index, or
+conclusively prove that there are no duplicate index entries for the
+root line pointer, the chain can again be marked as HOT.
+
+Here is one idea:
+
+A WARM chain has two parts, separated by the tuple that caused the WARM
+update. All tuples within each part have matching index keys, but
+certain index keys may not match between the two parts. Let's say we
+mark heap tuples in the second part with a special HEAP_WARM_TUPLE flag.
+Similarly, the new index entries created by the first WARM update are
+marked with an INDEX_WARM_POINTER flag.
+
+So a WARM chain has two distinct parts: a first part where none of the
+tuples have the HEAP_WARM_TUPLE flag set, and a second part where every
+tuple has the flag set. Each part satisfies the HOT property on its own,
+i.e. all of its tuples have the same values for the indexed columns, but
+the two parts are separated by the WARM update, which breaks the HOT
+property for one or more indexes.
+
+Heap chain: [1] [2] [3] [4]
+			[aaaa, 1111] -> [aaaa, 1111] -> [bbbb, 1111]W -> [bbbb, 1111]W
+
+Index1: 	(aaaa) points to 1 (satisfies only tuples without W)
+			(bbbb)W points to 1 (satisfies only tuples marked with W)
+
+Index2:		(1111) points to 1 (satisfies tuples with and without W)
+
+
+For an index with both pointers, a heap tuple without the
+HEAP_WARM_TUPLE flag is reachable only via the index pointer without
+INDEX_WARM_POINTER, and a tuple with the HEAP_WARM_TUPLE flag only via
+the pointer with INDEX_WARM_POINTER. But for an index that did not
+create a new entry, tuples both with and without the HEAP_WARM_TUPLE
+flag are reachable from the original index pointer, which does not have
+the INDEX_WARM_POINTER flag (such an index has no INDEX_WARM_POINTER
+entry at all).
+
+During the first heap scan of VACUUM, we look for tuples with
+HEAP_WARM_UPDATED set.  If all or none of the live tuples in the chain
+are marked with the HEAP_WARM_TUPLE flag, the chain is a candidate for
+HOT conversion. We remember the root line pointer and whether the tuples
+in the chain had the HEAP_WARM_TUPLE flag set.
+
+If we have a WARM chain with HEAP_WARM_TUPLE set, our goal is to remove
+the index pointers without the INDEX_WARM_POINTER flag, and vice versa.
+But there is a catch. For Index2 above, there is only one pointer and it
+does not have the INDEX_WARM_POINTER flag set. Since all heap tuples are
+reachable only via this pointer, it must not be removed. In other words,
+we should remove an index pointer without INDEX_WARM_POINTER iff another
+index pointer with INDEX_WARM_POINTER exists. Since index vacuum may
+visit these pointers in any order, we need another index pass to detect
+redundant index pointers, which can safely be removed because all live
+tuples are reachable via the other index pointer. So in the first index
+pass we check which WARM candidates have two index pointers. In the
+second pass, we remove the redundant pointer and clear the
+INDEX_WARM_POINTER flag on the surviving index pointer. Note that all
+index pointers to dead tuples, whether CLEAR or WARM, are removed during
+the first index scan itself.
+
+During the second heap scan, we fix the WARM chains by clearing the
+HEAP_WARM_UPDATED and HEAP_WARM_TUPLE flags on the tuples.
+
+There are some remaining problems around aborted vacuums. For example,
+if vacuum aborts after clearing an INDEX_WARM_POINTER flag but before
+removing the other index pointer, we end up with two index pointers,
+neither of which has INDEX_WARM_POINTER set.  But since the
+HEAP_WARM_UPDATED flag on the heap tuple is still set, further WARM
+updates to the chain remain blocked. We would need some special handling
+for the case of multiple index pointers where none has the
+INDEX_WARM_POINTER flag set. We can either leave such WARM chains alone
+and let them die with a subsequent non-WARM update, or apply the
+heap-recheck logic during index vacuum to find the dead pointer. Given
+that vacuum aborts are not common, I am inclined to leave this case
+unhandled. We must still check for the presence of multiple index
+pointers without INDEX_WARM_POINTER flags, and ensure that we neither
+accidentally remove either of those pointers nor clear such WARM chains.
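The per-index decision in the second index pass can be modelled as below. This is a toy sketch with invented names (decide_cleanup, WarmCleanupAction), not patch code; it only captures the invariant that a pointer may be removed iff the other pointer exists, so the live tuples always stay reachable:

```c
typedef enum
{
	REMOVE_NONE,
	REMOVE_CLEAR_POINTER,
	REMOVE_WARM_POINTER
} WarmCleanupAction;

/*
 * heap_tuples_are_warm: do the surviving heap tuples carry
 * HEAP_WARM_TUPLE?  has_clear_ptr / has_warm_ptr: does this index have
 * a pointer without / with INDEX_WARM_POINTER to the chain's root?
 */
static WarmCleanupAction
decide_cleanup(int heap_tuples_are_warm, int has_clear_ptr, int has_warm_ptr)
{
	if (has_clear_ptr && has_warm_ptr)
		return heap_tuples_are_warm ? REMOVE_CLEAR_POINTER
									: REMOVE_WARM_POINTER;

	/* Only one pointer: it is the sole access path, so keep it */
	return REMOVE_NONE;
}
```

For Index2 in the example above (a single CLEAR pointer), the function keeps the pointer even though the heap tuples are WARM.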
+
+CREATE INDEX CONCURRENTLY
+-------------------------
+
+Currently CREATE INDEX CONCURRENTLY (CIC) is implemented as a 3-phase
+process.  In the first phase, we create a catalog entry for the new
+index so that it is visible to all other backends, but still don't use
+it for either reads or writes.  We do, however, ensure that no new
+broken HOT chains are created by new transactions. In the second phase,
+we build the new index using an MVCC snapshot and then make the index
+available for inserts. We then do another pass over the table and insert
+any missing tuples, each time indexing only the root line pointer. See
+README.HOT for details about how HOT impacts CIC and how the various
+challenges are tackled.
+
+WARM poses another challenge because it allows creation of HOT chains
+even when an index key changes. Since the index is not ready for
+insertion until the second phase is over, we might end up with a
+situation where the HOT chain has tuples with different index column
+values, yet only one of those values is indexed by the new index. Note
+that during the third phase, we only index tuples whose root line
+pointer is missing from the index. But we can't easily check whether an
+existing index tuple actually indexes the heap tuple visible to the new
+MVCC snapshot; finding that out would require querying the index again
+for every tuple in the chain, especially if it's a WARM tuple, i.e.
+repeated index access. Another option would be to return the index keys
+along with the heap TIDs when the index is scanned to collect all
+indexed TIDs during the third phase. We could then compare the heap
+tuple against the already indexed key and decide whether or not to index
+the new tuple.
+
+We solve this problem more simply by disallowing WARM updates until the
+index is ready for insertion. We don't need to disallow WARM wholesale:
+only updates that change the columns of the new index are disallowed
+from being WARM updates.
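Taken together, the eligibility rules from the sections above (no second WARM update on a chain, no expression/predicate columns modified, not all index columns modified, no not-ready index columns modified) reduce to a few bitmap tests, mirroring the checks heap_update performs. The sketch below is a standalone model using plain unsigned bitmasks in place of Bitmapsets; warm_allowed is an invented name:

```c
/*
 * modified: attributes changed by this update.
 * expr_attrs: attributes used in expression/partial index predicates.
 * notready_attrs: attributes of indexes not yet ready for insert (CIC).
 * hot_attrs: all indexed attributes.
 */
static int
warm_allowed(unsigned modified, unsigned expr_attrs,
			 unsigned notready_attrs, unsigned hot_attrs,
			 int chain_already_warm)
{
	if (chain_already_warm)
		return 0;				/* one WARM update per chain */
	if (modified & expr_attrs)
		return 0;				/* expression/predicate column changed */
	if (modified & notready_attrs)
		return 0;				/* column of a not-ready index changed */
	if ((hot_attrs & modified) == hot_attrs)
		return 0;				/* every index gets a new entry anyway */
	return 1;
}
```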
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 30262ef..f54955a 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -1974,6 +1974,212 @@ heap_fetch(Relation relation,
 }
 
 /*
+ * Check status of a (possibly) WARM chain.
+ *
+ * This function looks at a HOT/WARM chain starting at tid and returns a
+ * bitmask of information. We only follow the chain as long as it's known to
+ * be a valid HOT chain. Information returned by the function consists of:
+ *
+ *  HCWC_WARM_UPDATED_TUPLE - a tuple with HEAP_WARM_UPDATED is found somewhere
+ *  						  in the chain. Note that when a tuple is WARM
+ *  						  updated, both old and new versions are marked
+ *  						  with this flag. So presence of this flag
+ *  						  indicates that a WARM update was performed on
+ *  						  this chain, but the update may have either
+ *  						  committed or aborted.
+ *
+ *  HCWC_WARM_TUPLE  - a tuple with HEAP_WARM_TUPLE is found somewhere in
+ *					  the chain. This flag is set only on the new version of
+ *					  the tuple while performing WARM update.
+ *
+ *  HCWC_CLEAR_TUPLE - a tuple without HEAP_WARM_TUPLE is found somewhere in
+ *  					 the chain. This implies that the WARM update either
+ *  					 aborted or is recent enough that the old tuple has
+ *  					 not yet been pruned away by the chain pruning logic.
+ *
+ *	If stop_at_warm is true, we stop when the first HEAP_WARM_UPDATED tuple is
+ *	found and return information collected so far.
+ */
+HeapCheckWarmChainStatus
+heap_check_warm_chain(Page dp, ItemPointer tid, bool stop_at_warm)
+{
+	TransactionId				prev_xmax = InvalidTransactionId;
+	OffsetNumber				offnum;
+	HeapTupleData				heapTuple;
+	HeapCheckWarmChainStatus	status = 0;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+		{
+			if (ItemIdIsRedirected(lp))
+			{
+				/* Follow the redirect */
+				offnum = ItemIdGetRedirect(lp);
+				continue;
+			}
+			/* else must be end of chain */
+			break;
+		}
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		if (HeapTupleHeaderIsWarmUpdated(heapTuple.t_data))
+		{
+			/* We found a WARM_UPDATED tuple */
+			status |= HCWC_WARM_UPDATED_TUPLE;
+
+			/*
+			 * If we've been told to stop at the first WARM_UPDATED tuple, just
+			 * return whatever information collected so far.
+			 */
+			if (stop_at_warm)
+				return status;
+
+			/*
+			 * Remember whether it's a CLEAR or a WARM tuple.
+			 */
+			if (HeapTupleHeaderIsWarm(heapTuple.t_data))
+				status |= HCWC_WARM_TUPLE;
+			else
+				status |= HCWC_CLEAR_TUPLE;
+		}
+		else
+			/* Must be a regular, non-WARM tuple */
+			status |= HCWC_CLEAR_TUPLE;
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
+		/*
+		 * It can't be a HOT chain if the tuple contains root line pointer
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	/* Return whatever status information we collected */
+	return status;
+}
+
+/*
+ * Scan through the WARM chain starting at tid and reset all WARM related
+ * flags. At the end, the chain will have all characteristics of a regular HOT
+ * chain.
+ *
+ * Return the number of cleared offnums. Cleared offnums are returned in the
+ * passed-in cleared_offnums array. The caller must ensure that the array is
+ * large enough to hold the maximum number of offnums that can be cleared by
+ * this invocation of heap_clear_warm_chain().
+ */
+int
+heap_clear_warm_chain(Page dp, ItemPointer tid, OffsetNumber *cleared_offnums)
+{
+	TransactionId				prev_xmax = InvalidTransactionId;
+	OffsetNumber				offnum;
+	HeapTupleData				heapTuple;
+	int							num_cleared = 0;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+		{
+			if (ItemIdIsRedirected(lp))
+			{
+				/* Follow the redirect */
+				offnum = ItemIdGetRedirect(lp);
+				continue;
+			}
+			/* else must be end of chain */
+			break;
+		}
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		/*
+		 * Clear WARM_UPDATED and WARM flags.
+		 */
+		if (HeapTupleHeaderIsWarmUpdated(heapTuple.t_data))
+		{
+			HeapTupleHeaderClearWarmUpdated(heapTuple.t_data);
+			HeapTupleHeaderClearWarm(heapTuple.t_data);
+			cleared_offnums[num_cleared++] = offnum;
+		}
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
+		/*
+		 * It can't be a HOT chain if the tuple contains root line pointer
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	return num_cleared;
+}
+
+/*
  *	heap_hot_search_buffer	- search HOT chain for tuple satisfying snapshot
  *
  * On entry, *tid is the TID of a tuple (either a simple tuple, or the root
@@ -1993,11 +2199,14 @@ heap_fetch(Relation relation,
  * Unlike heap_fetch, the caller must already have pin and (at least) share
  * lock on the buffer; it is still pinned/locked at exit.  Also unlike
  * heap_fetch, we do not report any pgstats count; caller may do so if wanted.
+ *
+ * recheck should be set false on entry by caller, will be set true on exit
+ * if a WARM tuple is encountered.
  */
 bool
 heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 					   Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call)
+					   bool *all_dead, bool first_call, bool *recheck)
 {
 	Page		dp = (Page) BufferGetPage(buffer);
 	TransactionId prev_xmax = InvalidTransactionId;
@@ -2051,9 +2260,12 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		ItemPointerSetOffsetNumber(&heapTuple->t_self, offnum);
 
 		/*
-		 * Shouldn't see a HEAP_ONLY tuple at chain start.
+		 * Shouldn't see a HEAP_ONLY tuple at chain start, unless we are
+		 * dealing with a WARM updated tuple, in which case deferred triggers
+		 * may request to fetch a WARM tuple from the middle of a chain.
 		 */
-		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple))
+		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple) &&
+				!HeapTupleIsWarmUpdated(heapTuple))
 			break;
 
 		/*
@@ -2066,6 +2278,20 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 			break;
 
 		/*
+		 * Check if there exists a WARM tuple somewhere down the chain and set
+		 * recheck to TRUE.
+		 *
+		 * XXX This is not very efficient right now, and we should look for
+		 * possible improvements here.
+		 */
+		if (recheck && *recheck == false)
+		{
+			HeapCheckWarmChainStatus status;
+			status = heap_check_warm_chain(dp, &heapTuple->t_self, true);
+			*recheck = HCWC_IS_WARM_UPDATED(status);
+		}
+
+		/*
 		 * When first_call is true (and thus, skip is initially false) we'll
 		 * return the first tuple we find.  But on later passes, heapTuple
 		 * will initially be pointing to the tuple we returned last time.
@@ -2114,7 +2340,8 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		 * Check to see if HOT chain continues past this tuple; if so fetch
 		 * the next offnum and loop around.
 		 */
-		if (HeapTupleIsHotUpdated(heapTuple))
+		if (HeapTupleIsHotUpdated(heapTuple) &&
+			!HeapTupleHeaderHasRootOffset(heapTuple->t_data))
 		{
 			Assert(ItemPointerGetBlockNumber(&heapTuple->t_data->t_ctid) ==
 				   ItemPointerGetBlockNumber(tid));
@@ -2138,18 +2365,41 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
  */
 bool
 heap_hot_search(ItemPointer tid, Relation relation, Snapshot snapshot,
-				bool *all_dead)
+				bool *all_dead, bool *recheck, Buffer *cbuffer,
+				HeapTuple heapTuple)
 {
 	bool		result;
 	Buffer		buffer;
-	HeapTupleData heapTuple;
+	ItemPointerData ret_tid = *tid;
 
 	buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	LockBuffer(buffer, BUFFER_LOCK_SHARE);
-	result = heap_hot_search_buffer(tid, relation, buffer, snapshot,
-									&heapTuple, all_dead, true);
-	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-	ReleaseBuffer(buffer);
+	result = heap_hot_search_buffer(&ret_tid, relation, buffer, snapshot,
+									heapTuple, all_dead, true, recheck);
+
+	/*
+	 * If we are returning a potential candidate tuple from this chain and the
+	 * caller has requested the "recheck" hint, keep the buffer locked and
+	 * pinned. The caller must release the lock and pin on the buffer in all
+	 * such cases.
+	 */
+	if (!result || !recheck || !(*recheck))
+	{
+		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+		ReleaseBuffer(buffer);
+	}
+
+	/*
+	 * Set the caller supplied tid with the actual location of the tuple being
+	 * returned.
+	 */
+	if (result)
+	{
+		*tid = ret_tid;
+		if (cbuffer)
+			*cbuffer = buffer;
+	}
+
 	return result;
 }
 
@@ -2792,7 +3042,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		{
 			XLogRecPtr	recptr;
 			xl_heap_multi_insert *xlrec;
-			uint8		info = XLOG_HEAP2_MULTI_INSERT;
+			uint8		info = XLOG_HEAP_MULTI_INSERT;
 			char	   *tupledata;
 			int			totaldatalen;
 			char	   *scratchptr = scratch;
@@ -2889,7 +3139,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			/* filtering by origin on a row level is much more efficient */
 			XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN);
 
-			recptr = XLogInsert(RM_HEAP2_ID, info);
+			recptr = XLogInsert(RM_HEAP_ID, info);
 
 			PageSetLSN(page, recptr);
 		}
@@ -3045,7 +3295,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	bool		all_visible_cleared = false;
 	HeapTuple	old_key_tuple = NULL;	/* replica identity of the tuple */
 	bool		old_key_copied = false;
-	OffsetNumber	root_offnum;
+	OffsetNumber	root_offnum = InvalidOffsetNumber;
 
 	Assert(ItemPointerIsValid(tid));
 
@@ -3278,7 +3528,7 @@ l1:
 							  &new_xmax, &new_infomask, &new_infomask2);
 
 	/*
-	 * heap_get_root_tuple_one() may call palloc, which is disallowed once we
+	 * heap_get_root_tuple() may call palloc, which is disallowed once we
 	 * enter the critical section. So check if the root offset is cached in the
 	 * tuple and if not, fetch that information hard way before entering the
 	 * critical section.
@@ -3313,7 +3563,9 @@ l1:
 	}
 
 	/* store transaction information of xact deleting the tuple */
-	tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	tp.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(tp.t_data))
+		tp.t_data->t_infomask &= ~HEAP_MOVED;
 	tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 	tp.t_data->t_infomask |= new_infomask;
 	tp.t_data->t_infomask2 |= new_infomask2;
@@ -3508,15 +3760,19 @@ simple_heap_delete(Relation relation, ItemPointer tid)
 HTSU_Result
 heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode)
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update)
 {
 	HTSU_Result result;
 	TransactionId xid = GetCurrentTransactionId();
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *exprindx_attrs;
 	Bitmapset  *interesting_attrs;
 	Bitmapset  *modified_attrs;
+	Bitmapset  *notready_attrs;
+	List	   *indexattrsList;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3524,8 +3780,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		old_key_copied = false;
 	Page		page;
 	BlockNumber block;
-	OffsetNumber	offnum;
-	OffsetNumber	root_offnum;
+	OffsetNumber	root_offnum = InvalidOffsetNumber;
 	MultiXactStatus mxact_status;
 	Buffer		buffer,
 				newbuf,
@@ -3537,6 +3792,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		have_tuple_lock = false;
 	bool		iscombo;
 	bool		use_hot_update = false;
+	bool		use_warm_update = false;
 	bool		hot_attrs_checked = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
@@ -3562,6 +3818,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot update tuples during a parallel operation")));
 
+	/* Assume no-warm update */
+	if (warm_update)
+		*warm_update = false;
+
 	/*
 	 * Fetch the list of attributes to be checked for various operations.
 	 *
@@ -3582,10 +3842,14 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	exprindx_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_EXPR_PREDICATE);
+	notready_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_NOTREADY);
 
+	indexattrsList = RelationGetIndexAttrList(relation);
 
 	block = ItemPointerGetBlockNumber(otid);
-	offnum = ItemPointerGetOffsetNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
@@ -3605,8 +3869,11 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 		interesting_attrs = bms_add_members(interesting_attrs, hot_attrs);
 		hot_attrs_checked = true;
 	}
+
 	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, exprindx_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, notready_attrs);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -3653,6 +3920,9 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
 												  &oldtup, newtup);
 
+	if (modified_attrsp)
+		*modified_attrsp = bms_copy(modified_attrs);
+
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
 	 * This allows for more concurrency when we are running simultaneously
@@ -3908,8 +4178,10 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(exprindx_attrs);
 		bms_free(modified_attrs);
 		bms_free(interesting_attrs);
+		bms_free(notready_attrs);
 		return result;
 	}
 
@@ -4034,7 +4306,6 @@ l2:
 		uint16		infomask_lock_old_tuple,
 					infomask2_lock_old_tuple;
 		bool		cleared_all_frozen = false;
-		OffsetNumber	root_offnum;
 
 		/*
 		 * To prevent concurrent sessions from updating the tuple, we have to
@@ -4073,7 +4344,9 @@ l2:
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
-		oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		oldtup.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(oldtup.t_data))
+			oldtup.t_data->t_infomask &= ~HEAP_MOVED;
 		oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		HeapTupleClearHotUpdated(&oldtup);
 		/* ... and store info about transaction updating this tuple */
@@ -4227,6 +4500,60 @@ l2:
 		 */
 		if (hot_attrs_checked && !bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
+		else
+		{
+			/*
+			 * If there has been no WARM update on this chain yet, let this
+			 * update be a WARM update. We must not do another WARM update
+			 * even if the previous WARM update ultimately aborted. That's why
+			 * we look at the HEAP_WARM_UPDATED flag.
+			 *
+			 * We don't do WARM updates if one of the columns used in index
+			 * expressions is being modified. Since expressions may evaluate to
+			 * the same value, even when heap values change, we don't have a
+			 * good way to deal with duplicate key scans when expressions are
+			 * used in the index.
+			 *
+			 * We check if the HOT attrs are a subset of the modified
+			 * attributes. Since the HOT attrs include all index attributes,
+			 * this lets us avoid a WARM update when all index attributes are
+			 * being updated. A WARM update is pointless in that case because
+			 * every index will receive a new entry anyway.
+			 *
+			 * We also disable WARM temporarily if we are modifying a column
+			 * used by a new index that's being added. We can't insert new
+			 * entries into such indexes yet, and hence must not allow
+			 * creating WARM chains that are broken with respect to the new
+			 * index being added.
+			 */
+			if (relation->rd_supportswarm &&
+				!HeapTupleIsWarmUpdated(&oldtup) &&
+				!bms_overlap(modified_attrs, exprindx_attrs) &&
+				!bms_is_subset(hot_attrs, modified_attrs) &&
+				!bms_overlap(notready_attrs, modified_attrs))
+			{
+				int num_indexes, num_updating_indexes;
+				ListCell *l;
+
+				/*
+				 * Everything else is Ok. Now check if the update will require
+				 * less than or equal to 50% index updates. Anything above
+				 * that, we can just do a regular update and save on WARM
+				 * cleanup cost.
+				 */
+				num_indexes = list_length(indexattrsList);
+				num_updating_indexes = 0;
+				foreach (l, indexattrsList)
+				{
+					Bitmapset  *b = (Bitmapset *) lfirst(l);
+					if (bms_overlap(b, modified_attrs))
+						num_updating_indexes++;
+				}
+
+				if ((double)num_updating_indexes/num_indexes <= 0.5)
+					use_warm_update = true;
+			}
+		}
 	}
 	else
 	{
@@ -4273,6 +4600,32 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+
+		/*
+		 * Even if we are doing a HOT update, we must carry forward the WARM
+		 * flag because we may have already inserted another index entry
+		 * pointing to our root and a third entry may create duplicates.
+		 *
+		 * Note: If we ever have a mechanism to avoid duplicate <key, TID> in
+		 * indexes, we could look at relaxing this restriction and allow even
+		 * more WARM updates.
+		 */
+		if (HeapTupleIsWarmUpdated(&oldtup))
+		{
+			HeapTupleSetWarmUpdated(heaptup);
+			HeapTupleSetWarmUpdated(newtup);
+		}
+
+		/*
+		 * If the old tuple is a WARM tuple then mark the new tuple as a WARM
+		 * tuple as well.
+		 */
+		if (HeapTupleIsWarm(&oldtup))
+		{
+			HeapTupleSetWarm(heaptup);
+			HeapTupleSetWarm(newtup);
+		}
+
 		/*
 		 * For HOT (or WARM) updated tuples, we store the offset of the root
 		 * line pointer of this chain in the ip_posid field of the new tuple.
@@ -4285,12 +4638,45 @@ l2:
 		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
 			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
+	else if (use_warm_update)
+	{
+		/* Mark the old tuple as HOT-updated */
+		HeapTupleSetHotUpdated(&oldtup);
+		HeapTupleSetWarmUpdated(&oldtup);
+
+		/* And mark the new tuple as heap-only */
+		HeapTupleSetHeapOnly(heaptup);
+		/* Mark the new tuple as WARM tuple */
+		HeapTupleSetWarmUpdated(heaptup);
+		/* This update also starts the WARM chain */
+		HeapTupleSetWarm(heaptup);
+		Assert(!HeapTupleIsWarm(&oldtup));
+
+		/* Mark the caller's copy too, in case different from heaptup */
+		HeapTupleSetHeapOnly(newtup);
+		HeapTupleSetWarmUpdated(newtup);
+		HeapTupleSetWarm(newtup);
+
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
+		/* Let the caller know we did a WARM update */
+		if (warm_update)
+			*warm_update = true;
+	}
 	else
 	{
 		/* Make sure tuples are correctly marked as not-HOT */
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		HeapTupleClearWarmUpdated(heaptup);
+		HeapTupleClearWarmUpdated(newtup);
+		HeapTupleClearWarm(heaptup);
+		HeapTupleClearWarm(newtup);
 		root_offnum = InvalidOffsetNumber;
 	}
 
@@ -4309,7 +4695,9 @@ l2:
 	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
-	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	oldtup.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(oldtup.t_data))
+		oldtup.t_data->t_infomask &= ~HEAP_MOVED;
 	oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 	/* ... and store info about transaction updating this tuple */
 	Assert(TransactionIdIsValid(xmax_old_tuple));
@@ -4400,7 +4788,10 @@ l2:
 	if (have_tuple_lock)
 		UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode);
 
-	pgstat_count_heap_update(relation, use_hot_update);
+	/*
+	 * Count HOT and WARM updates separately
+	 */
+	pgstat_count_heap_update(relation, use_hot_update, use_warm_update);
 
 	/*
 	 * If heaptup is a private copy, release it.  Don't forget to copy t_self
@@ -4420,6 +4811,8 @@ l2:
 	bms_free(id_attrs);
 	bms_free(modified_attrs);
 	bms_free(interesting_attrs);
+	bms_free(exprindx_attrs);
+	bms_free(notready_attrs);
 
 	return HeapTupleMayBeUpdated;
 }
@@ -4496,9 +4889,47 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 	}
 	else
 	{
+		bool res;
+		bool value1_free = false, value2_free = false;
+
 		Assert(attrnum <= tupdesc->natts);
 		att = tupdesc->attrs[attrnum - 1];
-		return datumIsEqual(value1, value2, att->attbyval, att->attlen);
+
+		/*
+		 * Fetch untoasted values before doing the comparison.
+		 *
+		 * It's OK for HOT to declare certain values non-equal even if they
+		 * are physically equal; at worst, this causes some potential HOT
+		 * updates to be done in a non-HOT manner. But WARM
+		 * relies on index recheck to decide which index pointer should return
+		 * which row in a WARM chain. For this it's necessary that if old and
+		 * new heap values are declared unequal here, they better produce
+		 * different index values too. We are not so much bothered about
+		 * logical equality since recheck also uses datumIsEqual, but if
+		 * datumIsEqual returns false here, it should return false during index
+		 * recheck too. So we must detoast heap values and then do the
+		 * comparison. As a bonus, it might result in a HOT update which may
+		 * have been ignored earlier.
+		 */
+		if ((att->attlen == -1) && VARATT_IS_EXTENDED(value1))
+		{
+			value1 = PointerGetDatum(heap_tuple_untoast_attr((struct varlena *)
+					DatumGetPointer(value1)));
+			value1_free = true;
+		}
+
+		if ((att->attlen == -1) && VARATT_IS_EXTENDED(value2))
+		{
+			value2 = PointerGetDatum(heap_tuple_untoast_attr((struct varlena *)
+					DatumGetPointer(value2)));
+			value2_free = true;
+		}
+		res = datumIsEqual(value1, value2, att->attbyval, att->attlen);
+		if (value1_free)
+			pfree(DatumGetPointer(value1));
+		if (value2_free)
+			pfree(DatumGetPointer(value2));
+		return res;
 	}
 }
 
@@ -4540,7 +4971,8 @@ HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
  * via ereport().
  */
 void
-simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
+simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup,
+		Bitmapset **modified_attrs, bool *warm_update)
 {
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
@@ -4549,7 +4981,7 @@ simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
 	result = heap_update(relation, otid, tup,
 						 GetCurrentCommandId(true), InvalidSnapshot,
 						 true /* wait for commit */ ,
-						 &hufd, &lockmode);
+						 &hufd, &lockmode, modified_attrs, warm_update);
 	switch (result)
 	{
 		case HeapTupleSelfUpdated:
@@ -4640,7 +5072,6 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	Page		page;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber	block;
-	OffsetNumber	offnum;
 	TransactionId xid,
 				xmax;
 	uint16		old_infomask,
@@ -4649,11 +5080,10 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	bool		first_time = true;
 	bool		have_tuple_lock = false;
 	bool		cleared_all_frozen = false;
-	OffsetNumber	root_offnum;
+	OffsetNumber	root_offnum = InvalidOffsetNumber;
 
 	*buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	block = ItemPointerGetBlockNumber(tid);
-	offnum = ItemPointerGetOffsetNumber(tid);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -5745,7 +6175,6 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 	bool		cleared_all_frozen = false;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber block;
-	OffsetNumber offnum;
 
 	ItemPointerCopy(tid, &tupid);
 
@@ -5754,7 +6183,6 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 		new_infomask = 0;
 		new_xmax = InvalidTransactionId;
 		block = ItemPointerGetBlockNumber(&tupid);
-		offnum = ItemPointerGetOffsetNumber(&tupid);
 
 		ItemPointerCopy(&tupid, &(mytup.t_self));
 
@@ -6226,7 +6654,9 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	PageSetPrunable(page, RecentGlobalXmin);
 
 	/* store transaction information of xact deleting the tuple */
-	tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	tp.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(tp.t_data))
+		tp.t_data->t_infomask &= ~HEAP_MOVED;
 	tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 
 	/*
@@ -6800,7 +7230,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid,
 	 * Old-style VACUUM FULL is gone, but we have to keep this code as long as
 	 * we support having MOVED_OFF/MOVED_IN tuples in the database.
 	 */
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 
@@ -6819,7 +7249,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid,
 			 * have failed; whereas a non-dead MOVED_IN tuple must mean the
 			 * xvac transaction succeeded.
 			 */
-			if (tuple->t_infomask & HEAP_MOVED_OFF)
+			if (HeapTupleHeaderIsMovedOff(tuple))
 				frz->frzflags |= XLH_INVALID_XVAC;
 			else
 				frz->frzflags |= XLH_FREEZE_XVAC;
@@ -7289,7 +7719,7 @@ heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple)
 			return true;
 	}
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 		if (TransactionIdIsNormal(xid))
@@ -7372,7 +7802,7 @@ heap_tuple_needs_freeze(HeapTupleHeader tuple, TransactionId cutoff_xid,
 			return true;
 	}
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 		if (TransactionIdIsNormal(xid) &&
@@ -7398,7 +7828,7 @@ HeapTupleHeaderAdvanceLatestRemovedXid(HeapTupleHeader tuple,
 	TransactionId xmax = HeapTupleHeaderGetUpdateXid(tuple);
 	TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		if (TransactionIdPrecedes(*latestRemovedXid, xvac))
 			*latestRemovedXid = xvac;
@@ -7447,6 +7877,36 @@ log_heap_cleanup_info(RelFileNode rnode, TransactionId latestRemovedXid)
 }
 
 /*
+ * Perform XLogInsert for a heap-warm-clear operation.  Caller must already
+ * have modified the buffer and marked it dirty.
+ */
+XLogRecPtr
+log_heap_warmclear(Relation reln, Buffer buffer,
+			   OffsetNumber *cleared, int ncleared)
+{
+	xl_heap_warmclear	xlrec;
+	XLogRecPtr			recptr;
+
+	/* Caller should not call me on a non-WAL-logged relation */
+	Assert(RelationNeedsWAL(reln));
+
+	xlrec.ncleared = ncleared;
+
+	XLogBeginInsert();
+	XLogRegisterData((char *) &xlrec, SizeOfHeapWarmClear);
+
+	XLogRegisterBuffer(0, buffer, REGBUF_STANDARD);
+
+	if (ncleared > 0)
+		XLogRegisterBufData(0, (char *) cleared,
+							ncleared * sizeof(OffsetNumber));
+
+	recptr = XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_WARMCLEAR);
+
+	return recptr;
+}
+
+/*
  * Perform XLogInsert for a heap-clean operation.  Caller must already
  * have modified the buffer and marked it dirty.
  *
@@ -7601,6 +8061,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	bool		need_tuple_data = RelationIsLogicallyLogged(reln);
 	bool		init;
 	int			bufflags;
+	bool		warm_update = false;
 
 	/* Caller should not call me on a non-WAL-logged relation */
 	Assert(RelationNeedsWAL(reln));
@@ -7612,6 +8073,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	else
 		info = XLOG_HEAP_UPDATE;
 
+	if (HeapTupleIsWarmUpdated(newtup))
+		warm_update = true;
+
 	/*
 	 * If the old and new tuple are on the same page, we only need to log the
 	 * parts of the new tuple that were changed.  That saves on the amount of
@@ -7685,6 +8149,8 @@ log_heap_update(Relation reln, Buffer oldbuf,
 				xlrec.flags |= XLH_UPDATE_CONTAINS_OLD_KEY;
 		}
 	}
+	if (warm_update)
+		xlrec.flags |= XLH_UPDATE_WARM_UPDATE;
 
 	/* If new tuple is the single and first tuple on page... */
 	if (ItemPointerGetOffsetNumber(&(newtup->t_self)) == FirstOffsetNumber &&
@@ -8099,6 +8565,60 @@ heap_xlog_clean(XLogReaderState *record)
 		XLogRecordPageWithFreeSpace(rnode, blkno, freespace);
 }
 
+
+/*
+ * Handles the XLOG_HEAP2_WARMCLEAR record type
+ */
+static void
+heap_xlog_warmclear(XLogReaderState *record)
+{
+	XLogRecPtr	lsn = record->EndRecPtr;
+	xl_heap_warmclear	*xlrec = (xl_heap_warmclear *) XLogRecGetData(record);
+	Buffer		buffer;
+	RelFileNode rnode;
+	BlockNumber blkno;
+	XLogRedoAction action;
+
+	XLogRecGetBlockTag(record, 0, &rnode, NULL, &blkno);
+
+	/*
+	 * If we have a full-page image, restore it (using a cleanup lock) and
+	 * we're done.
+	 */
+	action = XLogReadBufferForRedoExtended(record, 0, RBM_NORMAL, true,
+										   &buffer);
+	if (action == BLK_NEEDS_REDO)
+	{
+		Page		page = (Page) BufferGetPage(buffer);
+		OffsetNumber *cleared;
+		int			ncleared;
+		Size		datalen;
+		int			i;
+
+		cleared = (OffsetNumber *) XLogRecGetBlockData(record, 0, &datalen);
+
+		ncleared = xlrec->ncleared;
+
+		for (i = 0; i < ncleared; i++)
+		{
+			ItemId			lp;
+			OffsetNumber	offnum = cleared[i];
+			HeapTupleData	heapTuple;
+
+			lp = PageGetItemId(page, offnum);
+			heapTuple.t_data = (HeapTupleHeader) PageGetItem(page, lp);
+
+			HeapTupleHeaderClearWarmUpdated(heapTuple.t_data);
+			HeapTupleHeaderClearWarm(heapTuple.t_data);
+		}
+
+		PageSetLSN(page, lsn);
+		MarkBufferDirty(buffer);
+	}
+	if (BufferIsValid(buffer))
+		UnlockReleaseBuffer(buffer);
+}
+
 /*
  * Replay XLOG_HEAP2_VISIBLE record.
  *
@@ -8345,7 +8865,9 @@ heap_xlog_delete(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		HeapTupleHeaderClearHotUpdated(htup);
 		fix_infomask_from_infobits(xlrec->infobits_set,
@@ -8366,7 +8888,7 @@ heap_xlog_delete(XLogReaderState *record)
 		if (!HeapTupleHeaderHasRootOffset(htup))
 		{
 			OffsetNumber	root_offnum;
-			root_offnum = heap_get_root_tuple(page, xlrec->offnum); 
+			root_offnum = heap_get_root_tuple(page, xlrec->offnum);
 			HeapTupleHeaderSetHeapLatest(htup, root_offnum);
 		}
 
@@ -8662,16 +9184,22 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 	Size		freespace = 0;
 	XLogRedoAction oldaction;
 	XLogRedoAction newaction;
+	bool		warm_update = false;
 
 	/* initialize to keep the compiler quiet */
 	oldtup.t_data = NULL;
 	oldtup.t_len = 0;
 
+	if (xlrec->flags & XLH_UPDATE_WARM_UPDATE)
+		warm_update = true;
+
 	XLogRecGetBlockTag(record, 0, &rnode, NULL, &newblk);
 	if (XLogRecGetBlockTag(record, 1, NULL, NULL, &oldblk))
 	{
 		/* HOT updates are never done across pages */
 		Assert(!hot_update);
+		/* WARM updates are never done across pages */
+		Assert(!warm_update);
 	}
 	else
 		oldblk = newblk;
@@ -8731,6 +9259,11 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 								   &htup->t_infomask2);
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
+
+		/* Mark the old tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetWarmUpdated(htup);
+
 		/* Set forward chain link in t_ctid */
 		HeapTupleHeaderSetNextTid(htup, &newtid);
 
@@ -8866,6 +9399,10 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
 
+		/* Mark the new tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetWarmUpdated(htup);
+
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
@@ -8993,7 +9530,9 @@ heap_xlog_lock(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		fix_infomask_from_infobits(xlrec->infobits_set, &htup->t_infomask,
 								   &htup->t_infomask2);
@@ -9072,7 +9611,9 @@ heap_xlog_lock_updated(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		fix_infomask_from_infobits(xlrec->infobits_set, &htup->t_infomask,
 								   &htup->t_infomask2);
@@ -9141,6 +9682,9 @@ heap_redo(XLogReaderState *record)
 		case XLOG_HEAP_INSERT:
 			heap_xlog_insert(record);
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			heap_xlog_multi_insert(record);
+			break;
 		case XLOG_HEAP_DELETE:
 			heap_xlog_delete(record);
 			break;
@@ -9169,7 +9713,7 @@ heap2_redo(XLogReaderState *record)
 {
 	uint8		info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
 
-	switch (info & XLOG_HEAP_OPMASK)
+	switch (info & XLOG_HEAP2_OPMASK)
 	{
 		case XLOG_HEAP2_CLEAN:
 			heap_xlog_clean(record);
@@ -9183,9 +9727,6 @@ heap2_redo(XLogReaderState *record)
 		case XLOG_HEAP2_VISIBLE:
 			heap_xlog_visible(record);
 			break;
-		case XLOG_HEAP2_MULTI_INSERT:
-			heap_xlog_multi_insert(record);
-			break;
 		case XLOG_HEAP2_LOCK_UPDATED:
 			heap_xlog_lock_updated(record);
 			break;
@@ -9199,6 +9740,9 @@ heap2_redo(XLogReaderState *record)
 		case XLOG_HEAP2_REWRITE:
 			heap_xlog_logical_rewrite(record);
 			break;
+		case XLOG_HEAP2_WARMCLEAR:
+			heap_xlog_warmclear(record);
+			break;
 		default:
 			elog(PANIC, "heap2_redo: unknown op code %u", info);
 	}
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index f54337c..6a3baff 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -834,6 +834,13 @@ heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				continue;
 
+			/*
+			 * If the tuple has a root line pointer, it must be the end of
+			 * the chain.
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			/* Set up to scan the HOT-chain */
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
@@ -928,6 +935,6 @@ heap_get_root_tuple(Page page, OffsetNumber target_offnum)
 void
 heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 {
-	return heap_get_root_tuples_internal(page, InvalidOffsetNumber,
+	heap_get_root_tuples_internal(page, InvalidOffsetNumber,
 			root_offsets);
 }
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index 2d3ae9b..bd469ee 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -404,6 +404,14 @@ rewrite_heap_tuple(RewriteState state,
 		old_tuple->t_data->t_infomask & HEAP_XACT_MASK;
 
 	/*
+	 * We must clear the HEAP_WARM_TUPLE flag if the HEAP_WARM_UPDATED flag
+	 * was cleared above.
+	 */
+	if (HeapTupleHeaderIsWarmUpdated(old_tuple->t_data))
+		HeapTupleHeaderClearWarm(new_tuple->t_data);
+
+
+	/*
 	 * While we have our hands on the tuple, we may as well freeze any
 	 * eligible xmin or xmax, so that future VACUUM effort can be saved.
 	 */
@@ -428,7 +436,7 @@ rewrite_heap_tuple(RewriteState state,
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
 
-		/* 
+		/*
 		 * We've already checked that this is not the last tuple in the chain,
 		 * so fetch the next TID in the chain.
 		 */
@@ -737,7 +745,7 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		/* 
+		/*
 		 * Set t_ctid just to ensure that block number is copied correctly, but
 		 * then immediately mark the tuple as the latest.
 		 */
diff --git a/src/backend/access/heap/tuptoaster.c b/src/backend/access/heap/tuptoaster.c
index aa5a45d..bab48fd 100644
--- a/src/backend/access/heap/tuptoaster.c
+++ b/src/backend/access/heap/tuptoaster.c
@@ -1688,7 +1688,8 @@ toast_save_datum(Relation rel, Datum value,
 							 toastrel,
 							 toastidxs[i]->rd_index->indisunique ?
 							 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-							 NULL);
+							 NULL,
+							 false);
 		}
 
 		/*
diff --git a/src/backend/access/index/genam.c b/src/backend/access/index/genam.c
index a91fda7..d523c8f 100644
--- a/src/backend/access/index/genam.c
+++ b/src/backend/access/index/genam.c
@@ -127,6 +127,8 @@ RelationGetIndexScan(Relation indexRelation, int nkeys, int norderbys)
 	scan->xs_cbuf = InvalidBuffer;
 	scan->xs_continue_hot = false;
 
+	scan->indexInfo = NULL;
+
 	return scan;
 }
 
diff --git a/src/backend/access/index/indexam.c b/src/backend/access/index/indexam.c
index cc5ac8b..d048714 100644
--- a/src/backend/access/index/indexam.c
+++ b/src/backend/access/index/indexam.c
@@ -197,7 +197,8 @@ index_insert(Relation indexRelation,
 			 ItemPointer heap_t_ctid,
 			 Relation heapRelation,
 			 IndexUniqueCheck checkUnique,
-			 IndexInfo *indexInfo)
+			 IndexInfo *indexInfo,
+			 bool warm_update)
 {
 	RELATION_CHECKS;
 	CHECK_REL_PROCEDURE(aminsert);
@@ -207,6 +208,12 @@ index_insert(Relation indexRelation,
 									   (HeapTuple) NULL,
 									   InvalidBuffer);
 
+	if (warm_update)
+	{
+		Assert(indexRelation->rd_amroutine->amwarminsert != NULL);
+		return indexRelation->rd_amroutine->amwarminsert(indexRelation, values,
+				isnull, heap_t_ctid, heapRelation, checkUnique, indexInfo);
+	}
 	return indexRelation->rd_amroutine->aminsert(indexRelation, values, isnull,
 												 heap_t_ctid, heapRelation,
 												 checkUnique, indexInfo);
@@ -291,6 +298,25 @@ index_beginscan_internal(Relation indexRelation,
 	scan->parallel_scan = pscan;
 	scan->xs_temp_snap = temp_snap;
 
+	/*
+	 * If the index supports recheck, make sure that the index tuple is saved
+	 * during index scans. Also build and cache the IndexInfo, which is used
+	 * by the amrecheck routine.
+	 *
+	 * XXX Ideally, we should look at all indexes on the table and check if
+	 * WARM is at all supported on the base table. If WARM is not supported
+	 * then we don't need to do any recheck. RelationGetIndexAttrBitmap() does
+	 * do that and sets rd_supportswarm after looking at all indexes. But we
+	 * don't know if the function was called earlier in the session when we're
+	 * here. We can't call it now because of the risk of causing a
+	 * deadlock.
+	 */
+	if (indexRelation->rd_amroutine->amrecheck)
+	{
+		scan->xs_want_itup = true;
+		scan->indexInfo = BuildIndexInfo(indexRelation);
+	}
+
 	return scan;
 }
 
@@ -358,6 +384,10 @@ index_endscan(IndexScanDesc scan)
 	if (scan->xs_temp_snap)
 		UnregisterSnapshot(scan->xs_snapshot);
 
+	/* Free cached IndexInfo, if any */
+	if (scan->indexInfo)
+		pfree(scan->indexInfo);
+
 	/* Release the scan data structure itself */
 	IndexScanEnd(scan);
 }
@@ -535,7 +565,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
 	/*
 	 * The AM's amgettuple proc finds the next index entry matching the scan
 	 * keys, and puts the TID into scan->xs_ctup.t_self.  It should also set
-	 * scan->xs_recheck and possibly scan->xs_itup/scan->xs_hitup, though we
+	 * scan->xs_tuple_recheck and possibly scan->xs_itup/scan->xs_hitup, though we
 	 * pay no attention to those fields here.
 	 */
 	found = scan->indexRelation->rd_amroutine->amgettuple(scan, direction);
@@ -574,7 +604,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
  * dropped in a future index_getnext_tid, index_fetch_heap or index_endscan
  * call).
  *
- * Note: caller must check scan->xs_recheck, and perform rechecking of the
+ * Note: caller must check scan->xs_tuple_recheck, and perform rechecking of the
  * scan keys if required.  We do not do that here because we don't have
  * enough information to do it efficiently in the general case.
  * ----------------
@@ -585,6 +615,7 @@ index_fetch_heap(IndexScanDesc scan)
 	ItemPointer tid = &scan->xs_ctup.t_self;
 	bool		all_dead = false;
 	bool		got_heap_tuple;
+	bool		tuple_recheck;
 
 	/* We can skip the buffer-switching logic if we're in mid-HOT chain. */
 	if (!scan->xs_continue_hot)
@@ -603,6 +634,8 @@ index_fetch_heap(IndexScanDesc scan)
 			heap_page_prune_opt(scan->heapRelation, scan->xs_cbuf);
 	}
 
+	tuple_recheck = false;
+
 	/* Obtain share-lock on the buffer so we can examine visibility */
 	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_SHARE);
 	got_heap_tuple = heap_hot_search_buffer(tid, scan->heapRelation,
@@ -610,32 +643,60 @@ index_fetch_heap(IndexScanDesc scan)
 											scan->xs_snapshot,
 											&scan->xs_ctup,
 											&all_dead,
-											!scan->xs_continue_hot);
-	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+											!scan->xs_continue_hot,
+											&tuple_recheck);
 
 	if (got_heap_tuple)
 	{
+		bool res = true;
+
+		/*
+		 * OK, we got a tuple which satisfies the snapshot, but if it's part
+		 * of a WARM chain, we must do additional checks to ensure that we
+		 * are indeed returning the correct tuple. Note that if the index AM
+		 * does not implement the amrecheck method, we skip these additional
+		 * checks, since WARM must have been disabled on such tables.
+		 */
+		if (tuple_recheck && scan->xs_itup &&
+			scan->indexRelation->rd_amroutine->amrecheck)
+		{
+			res = scan->indexRelation->rd_amroutine->amrecheck(
+						scan->indexRelation,
+						scan->indexInfo,
+						scan->xs_itup,
+						scan->heapRelation,
+						&scan->xs_ctup);
+		}
+
+		LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+
 		/*
 		 * Only in a non-MVCC snapshot can more than one member of the HOT
 		 * chain be visible.
 		 */
 		scan->xs_continue_hot = !IsMVCCSnapshot(scan->xs_snapshot);
 		pgstat_count_heap_fetch(scan->indexRelation);
-		return &scan->xs_ctup;
+
+		if (res)
+			return &scan->xs_ctup;
 	}
+	else
+	{
+		LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
 
-	/* We've reached the end of the HOT chain. */
-	scan->xs_continue_hot = false;
+		/* We've reached the end of the HOT chain. */
+		scan->xs_continue_hot = false;
 
-	/*
-	 * If we scanned a whole HOT chain and found only dead tuples, tell index
-	 * AM to kill its entry for that TID (this will take effect in the next
-	 * amgettuple call, in index_getnext_tid).  We do not do this when in
-	 * recovery because it may violate MVCC to do so.  See comments in
-	 * RelationGetIndexScan().
-	 */
-	if (!scan->xactStartedInRecovery)
-		scan->kill_prior_tuple = all_dead;
+		/*
+		 * If we scanned a whole HOT chain and found only dead tuples, tell index
+		 * AM to kill its entry for that TID (this will take effect in the next
+		 * amgettuple call, in index_getnext_tid).  We do not do this when in
+		 * recovery because it may violate MVCC to do so.  See comments in
+		 * RelationGetIndexScan().
+		 */
+		if (!scan->xactStartedInRecovery)
+			scan->kill_prior_tuple = all_dead;
+	}
 
 	return NULL;
 }
diff --git a/src/backend/access/nbtree/nbtinsert.c b/src/backend/access/nbtree/nbtinsert.c
index 6dca810..463d4bf 100644
--- a/src/backend/access/nbtree/nbtinsert.c
+++ b/src/backend/access/nbtree/nbtinsert.c
@@ -20,6 +20,7 @@
 #include "access/nbtxlog.h"
 #include "access/transam.h"
 #include "access/xloginsert.h"
+#include "catalog/index.h"
 #include "miscadmin.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
@@ -250,6 +251,10 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 	BTPageOpaque opaque;
 	Buffer		nbuf = InvalidBuffer;
 	bool		found = false;
+	Buffer		buffer;
+	HeapTupleData	heapTuple;
+	bool		recheck = false;
+	IndexInfo	*indexInfo = BuildIndexInfo(rel);
 
 	/* Assume unique until we find a duplicate */
 	*is_unique = true;
@@ -309,6 +314,8 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				curitup = (IndexTuple) PageGetItem(page, curitemid);
 				htid = curitup->t_tid;
 
+				recheck = false;
+
 				/*
 				 * If we are doing a recheck, we expect to find the tuple we
 				 * are rechecking.  It's not a duplicate, but we have to keep
@@ -326,112 +333,153 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				 * have just a single index entry for the entire chain.
 				 */
 				else if (heap_hot_search(&htid, heapRel, &SnapshotDirty,
-										 &all_dead))
+							&all_dead, &recheck, &buffer,
+							&heapTuple))
 				{
 					TransactionId xwait;
+					bool result = true;
 
 					/*
-					 * It is a duplicate. If we are only doing a partial
-					 * check, then don't bother checking if the tuple is being
-					 * updated in another transaction. Just return the fact
-					 * that it is a potential conflict and leave the full
-					 * check till later.
+					 * If the tuple was WARM updated, we may again see our own
+					 * tuple. Since WARM updates don't create new index
+					 * entries, our own tuple is only reachable via the old
+					 * index pointer.
 					 */
-					if (checkUnique == UNIQUE_CHECK_PARTIAL)
+					if (checkUnique == UNIQUE_CHECK_EXISTING &&
+							ItemPointerCompare(&htid, &itup->t_tid) == 0)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						*is_unique = false;
-						return InvalidTransactionId;
+						found = true;
+						result = false;
+						if (recheck)
+							UnlockReleaseBuffer(buffer);
 					}
-
-					/*
-					 * If this tuple is being updated by other transaction
-					 * then we have to wait for its commit/abort.
-					 */
-					xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
-						SnapshotDirty.xmin : SnapshotDirty.xmax;
-
-					if (TransactionIdIsValid(xwait))
+					else if (recheck)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						/* Tell _bt_doinsert to wait... */
-						*speculativeToken = SnapshotDirty.speculativeToken;
-						return xwait;
+						result = btrecheck(rel, indexInfo, curitup, heapRel, &heapTuple);
+						UnlockReleaseBuffer(buffer);
 					}
 
-					/*
-					 * Otherwise we have a definite conflict.  But before
-					 * complaining, look to see if the tuple we want to insert
-					 * is itself now committed dead --- if so, don't complain.
-					 * This is a waste of time in normal scenarios but we must
-					 * do it to support CREATE INDEX CONCURRENTLY.
-					 *
-					 * We must follow HOT-chains here because during
-					 * concurrent index build, we insert the root TID though
-					 * the actual tuple may be somewhere in the HOT-chain.
-					 * While following the chain we might not stop at the
-					 * exact tuple which triggered the insert, but that's OK
-					 * because if we find a live tuple anywhere in this chain,
-					 * we have a unique key conflict.  The other live tuple is
-					 * not part of this chain because it had a different index
-					 * entry.
-					 */
-					htid = itup->t_tid;
-					if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL))
-					{
-						/* Normal case --- it's still live */
-					}
-					else
+					if (result)
 					{
 						/*
-						 * It's been deleted, so no error, and no need to
-						 * continue searching
+						 * It is a duplicate. If we are only doing a partial
+						 * check, then don't bother checking if the tuple is being
+						 * updated in another transaction. Just return the fact
+						 * that it is a potential conflict and leave the full
+						 * check till later.
 						 */
-						break;
-					}
+						if (checkUnique == UNIQUE_CHECK_PARTIAL)
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							*is_unique = false;
+							return InvalidTransactionId;
+						}
 
-					/*
-					 * Check for a conflict-in as we would if we were going to
-					 * write to this page.  We aren't actually going to write,
-					 * but we want a chance to report SSI conflicts that would
-					 * otherwise be masked by this unique constraint
-					 * violation.
-					 */
-					CheckForSerializableConflictIn(rel, NULL, buf);
+						/*
+						 * If this tuple is being updated by other transaction
+						 * then we have to wait for its commit/abort.
+						 */
+						xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
+							SnapshotDirty.xmin : SnapshotDirty.xmax;
+
+						if (TransactionIdIsValid(xwait))
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							/* Tell _bt_doinsert to wait... */
+							*speculativeToken = SnapshotDirty.speculativeToken;
+							return xwait;
+						}
 
-					/*
-					 * This is a definite conflict.  Break the tuple down into
-					 * datums and report the error.  But first, make sure we
-					 * release the buffer locks we're holding ---
-					 * BuildIndexValueDescription could make catalog accesses,
-					 * which in the worst case might touch this same index and
-					 * cause deadlocks.
-					 */
-					if (nbuf != InvalidBuffer)
-						_bt_relbuf(rel, nbuf);
-					_bt_relbuf(rel, buf);
+						/*
+						 * Otherwise we have a definite conflict.  But before
+						 * complaining, look to see if the tuple we want to insert
+						 * is itself now committed dead --- if so, don't complain.
+						 * This is a waste of time in normal scenarios but we must
+						 * do it to support CREATE INDEX CONCURRENTLY.
+						 *
+						 * We must follow HOT-chains here because during
+						 * concurrent index build, we insert the root TID though
+						 * the actual tuple may be somewhere in the HOT-chain.
+						 * While following the chain we might not stop at the
+						 * exact tuple which triggered the insert, but that's OK
+						 * because if we find a live tuple anywhere in this chain,
+						 * we have a unique key conflict.  The other live tuple is
+						 * not part of this chain because it had a different index
+						 * entry.
+						 */
+						recheck = false;
+						ItemPointerCopy(&itup->t_tid, &htid);
+						if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL,
+									&recheck, &buffer, &heapTuple))
+						{
+							bool result = true;
+							if (recheck)
+							{
+								/*
+								 * Recheck if the tuple actually satisfies the
+								 * index key. Otherwise, we might be following
+								 * a wrong index pointer and mustn't entertain
+								 * this tuple.
+								 */
+								result = btrecheck(rel, indexInfo, itup, heapRel, &heapTuple);
+								UnlockReleaseBuffer(buffer);
+							}
+							if (!result)
+								break;
+							/* Normal case --- it's still live */
+						}
+						else
+						{
+							/*
+							 * It's been deleted, so no error, and no need to
+							 * continue searching.
+							 */
+							break;
+						}
 
-					{
-						Datum		values[INDEX_MAX_KEYS];
-						bool		isnull[INDEX_MAX_KEYS];
-						char	   *key_desc;
-
-						index_deform_tuple(itup, RelationGetDescr(rel),
-										   values, isnull);
-
-						key_desc = BuildIndexValueDescription(rel, values,
-															  isnull);
-
-						ereport(ERROR,
-								(errcode(ERRCODE_UNIQUE_VIOLATION),
-								 errmsg("duplicate key value violates unique constraint \"%s\"",
-										RelationGetRelationName(rel)),
-							   key_desc ? errdetail("Key %s already exists.",
-													key_desc) : 0,
-								 errtableconstraint(heapRel,
-											 RelationGetRelationName(rel))));
+						/*
+						 * Check for a conflict-in as we would if we were going to
+						 * write to this page.  We aren't actually going to write,
+						 * but we want a chance to report SSI conflicts that would
+						 * otherwise be masked by this unique constraint
+						 * violation.
+						 */
+						CheckForSerializableConflictIn(rel, NULL, buf);
+
+						/*
+						 * This is a definite conflict.  Break the tuple down into
+						 * datums and report the error.  But first, make sure we
+						 * release the buffer locks we're holding ---
+						 * BuildIndexValueDescription could make catalog accesses,
+						 * which in the worst case might touch this same index and
+						 * cause deadlocks.
+						 */
+						if (nbuf != InvalidBuffer)
+							_bt_relbuf(rel, nbuf);
+						_bt_relbuf(rel, buf);
+
+						{
+							Datum		values[INDEX_MAX_KEYS];
+							bool		isnull[INDEX_MAX_KEYS];
+							char	   *key_desc;
+
+							index_deform_tuple(itup, RelationGetDescr(rel),
+									values, isnull);
+
+							key_desc = BuildIndexValueDescription(rel, values,
+									isnull);
+
+							ereport(ERROR,
+									(errcode(ERRCODE_UNIQUE_VIOLATION),
+									 errmsg("duplicate key value violates unique constraint \"%s\"",
+										 RelationGetRelationName(rel)),
+									 key_desc ? errdetail("Key %s already exists.",
+										 key_desc) : 0,
+									 errtableconstraint(heapRel,
+										 RelationGetRelationName(rel))));
+						}
 					}
 				}
 				else if (all_dead)
diff --git a/src/backend/access/nbtree/nbtpage.c b/src/backend/access/nbtree/nbtpage.c
index f815fd4..061c8d4 100644
--- a/src/backend/access/nbtree/nbtpage.c
+++ b/src/backend/access/nbtree/nbtpage.c
@@ -766,29 +766,20 @@ _bt_page_recyclable(Page page)
 }
 
 /*
- * Delete item(s) from a btree page during VACUUM.
+ * Delete item(s) and clear WARM item(s) on a btree page during VACUUM.
  *
  * This must only be used for deleting leaf items.  Deleting an item on a
  * non-leaf page has to be done as part of an atomic action that includes
- * deleting the page it points to.
+ * deleting the page it points to. We don't ever clear pointers on a non-leaf
+ * page.
  *
  * This routine assumes that the caller has pinned and locked the buffer.
  * Also, the given itemnos *must* appear in increasing order in the array.
- *
- * We record VACUUMs and b-tree deletes differently in WAL. InHotStandby
- * we need to be able to pin all of the blocks in the btree in physical
- * order when replaying the effects of a VACUUM, just as we do for the
- * original VACUUM itself. lastBlockVacuumed allows us to tell whether an
- * intermediate range of blocks has had no changes at all by VACUUM,
- * and so must be scanned anyway during replay. We always write a WAL record
- * for the last block in the index, whether or not it contained any items
- * to be removed. This allows us to scan right up to end of index to
- * ensure correct locking.
  */
 void
-_bt_delitems_vacuum(Relation rel, Buffer buf,
-					OffsetNumber *itemnos, int nitems,
-					BlockNumber lastBlockVacuumed)
+_bt_handleitems_vacuum(Relation rel, Buffer buf,
+					OffsetNumber *delitemnos, int ndelitems,
+					OffsetNumber *clearitemnos, int nclearitems)
 {
 	Page		page = BufferGetPage(buf);
 	BTPageOpaque opaque;
@@ -796,9 +787,20 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 	/* No ereport(ERROR) until changes are logged */
 	START_CRIT_SECTION();
 
+	/*
+	 * Clear the WARM pointers.
+	 *
+	 * We must do this before dealing with the dead items because
+	 * PageIndexMultiDelete may move items around to compactify the array and
+	 * hence offnums recorded earlier won't make any sense after
+	 * PageIndexMultiDelete is called.
+	 */
+	if (nclearitems > 0)
+		_bt_clear_items(page, clearitemnos, nclearitems);
+
 	/* Fix the page */
-	if (nitems > 0)
-		PageIndexMultiDelete(page, itemnos, nitems);
+	if (ndelitems > 0)
+		PageIndexMultiDelete(page, delitemnos, ndelitems);
 
 	/*
 	 * We can clear the vacuum cycle ID since this page has certainly been
@@ -824,7 +826,8 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 		XLogRecPtr	recptr;
 		xl_btree_vacuum xlrec_vacuum;
 
-		xlrec_vacuum.lastBlockVacuumed = lastBlockVacuumed;
+		xlrec_vacuum.ndelitems = ndelitems;
+		xlrec_vacuum.nclearitems = nclearitems;
 
 		XLogBeginInsert();
 		XLogRegisterBuffer(0, buf, REGBUF_STANDARD);
@@ -835,8 +838,11 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 		 * is.  When XLogInsert stores the whole buffer, the offsets array
 		 * need not be stored too.
 		 */
-		if (nitems > 0)
-			XLogRegisterBufData(0, (char *) itemnos, nitems * sizeof(OffsetNumber));
+		if (ndelitems > 0)
+			XLogRegisterBufData(0, (char *) delitemnos, ndelitems * sizeof(OffsetNumber));
+
+		if (nclearitems > 0)
+			XLogRegisterBufData(0, (char *) clearitemnos, nclearitems * sizeof(OffsetNumber));
 
 		recptr = XLogInsert(RM_BTREE_ID, XLOG_BTREE_VACUUM);
 
@@ -1882,3 +1888,13 @@ _bt_unlink_halfdead_page(Relation rel, Buffer leafbuf, bool *rightsib_empty)
 
 	return true;
 }
+
+/*
+ * Currently just a wrapper around PageIndexClearWarmTuples, but in theory each
+ * index may have its own way to handle WARM tuples.
+ */
+void
+_bt_clear_items(Page page, OffsetNumber *clearitemnos, uint16 nclearitems)
+{
+	PageIndexClearWarmTuples(page, clearitemnos, nclearitems);
+}
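For readers following the WAL changes above: both offset arrays are registered back-to-back via XLogRegisterBufData in a single XLOG_BTREE_VACUUM record, with ndelitems and nclearitems recording the split. The following standalone C sketch (the types and names are illustrative stand-ins, not PostgreSQL's) shows how replay recovers the clear-WARM portion of such a combined array:

```c
#include <assert.h>
#include <stdint.h>

typedef uint16_t OffsetNumber;

/* Illustrative mirror of the counts carried in xl_btree_vacuum */
typedef struct
{
    uint16_t ndelitems;
    uint16_t nclearitems;
} VacuumCounts;

/*
 * Split the single offsets array registered with the WAL record back into
 * its two portions, as btree_xlog_vacuum does: the first ndelitems entries
 * are deletions, the remaining nclearitems entries are WARM clears.
 */
static const OffsetNumber *
vacuum_clear_portion(const OffsetNumber *offnums, VacuumCounts counts)
{
    return offnums + counts.ndelitems;
}
```

Note that, as in the patch, the clears must be applied before PageIndexMultiDelete compacts the item array, or the recorded offsets would no longer be valid.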
diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c
index 775f2ff..6d558af 100644
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -146,6 +146,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->ambuild = btbuild;
 	amroutine->ambuildempty = btbuildempty;
 	amroutine->aminsert = btinsert;
+	amroutine->amwarminsert = btwarminsert;
 	amroutine->ambulkdelete = btbulkdelete;
 	amroutine->amvacuumcleanup = btvacuumcleanup;
 	amroutine->amcanreturn = btcanreturn;
@@ -163,6 +164,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = btestimateparallelscan;
 	amroutine->aminitparallelscan = btinitparallelscan;
 	amroutine->amparallelrescan = btparallelrescan;
+	amroutine->amrecheck = btrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -315,11 +317,12 @@ btbuildempty(Relation index)
  *		Descend the tree recursively, find the appropriate location for our
  *		new tuple, and put it there.
  */
-bool
-btinsert(Relation rel, Datum *values, bool *isnull,
+static bool
+btinsert_internal(Relation rel, Datum *values, bool *isnull,
 		 ItemPointer ht_ctid, Relation heapRel,
 		 IndexUniqueCheck checkUnique,
-		 IndexInfo *indexInfo)
+		 IndexInfo *indexInfo,
+		 bool warm_update)
 {
 	bool		result;
 	IndexTuple	itup;
@@ -328,6 +331,11 @@ btinsert(Relation rel, Datum *values, bool *isnull,
 	itup = index_form_tuple(RelationGetDescr(rel), values, isnull);
 	itup->t_tid = *ht_ctid;
 
+	if (warm_update)
+		ItemPointerSetFlags(&itup->t_tid, BTREE_INDEX_WARM_POINTER);
+	else
+		ItemPointerClearFlags(&itup->t_tid);
+
 	result = _bt_doinsert(rel, itup, checkUnique, heapRel);
 
 	pfree(itup);
@@ -335,6 +343,26 @@ btinsert(Relation rel, Datum *values, bool *isnull,
 	return result;
 }
 
+bool
+btinsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 IndexInfo *indexInfo)
+{
+	return btinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, false);
+}
+
+bool
+btwarminsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 IndexInfo *indexInfo)
+{
+	return btinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, true);
+}
+
 /*
  *	btgettuple() -- Get the next tuple in the scan.
  */
@@ -1103,7 +1131,7 @@ btvacuumscan(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 								 RBM_NORMAL, info->strategy);
 		LockBufferForCleanup(buf);
 		_bt_checkpage(rel, buf);
-		_bt_delitems_vacuum(rel, buf, NULL, 0, vstate.lastBlockVacuumed);
+		_bt_handleitems_vacuum(rel, buf, NULL, 0, NULL, 0);
 		_bt_relbuf(rel, buf);
 	}
 
@@ -1201,6 +1229,8 @@ restart:
 	{
 		OffsetNumber deletable[MaxOffsetNumber];
 		int			ndeletable;
+		OffsetNumber clearwarm[MaxOffsetNumber];
+		int			nclearwarm;
 		OffsetNumber offnum,
 					minoff,
 					maxoff;
@@ -1239,7 +1269,7 @@ restart:
 		 * Scan over all items to see which ones need deleted according to the
 		 * callback function.
 		 */
-		ndeletable = 0;
+		ndeletable = nclearwarm = 0;
 		minoff = P_FIRSTDATAKEY(opaque);
 		maxoff = PageGetMaxOffsetNumber(page);
 		if (callback)
@@ -1250,6 +1280,9 @@ restart:
 			{
 				IndexTuple	itup;
 				ItemPointer htup;
+				int			flags;
+				bool		is_warm = false;
+				IndexBulkDeleteCallbackResult	result;
 
 				itup = (IndexTuple) PageGetItem(page,
 												PageGetItemId(page, offnum));
@@ -1276,16 +1309,36 @@ restart:
 				 * applies to *any* type of index that marks index tuples as
 				 * killed.
 				 */
-				if (callback(htup, callback_state))
+				flags = ItemPointerGetFlags(&itup->t_tid);
+				is_warm = ((flags & BTREE_INDEX_WARM_POINTER) != 0);
+
+				if (is_warm)
+					stats->num_warm_pointers++;
+				else
+					stats->num_clear_pointers++;
+
+				result = callback(htup, is_warm, callback_state);
+				if (result == IBDCR_DELETE)
+				{
+					if (is_warm)
+						stats->warm_pointers_removed++;
+					else
+						stats->clear_pointers_removed++;
 					deletable[ndeletable++] = offnum;
+				}
+				else if (result == IBDCR_CLEAR_WARM)
+				{
+					clearwarm[nclearwarm++] = offnum;
+				}
 			}
 		}
 
 		/*
-		 * Apply any needed deletes.  We issue just one _bt_delitems_vacuum()
-		 * call per page, so as to minimize WAL traffic.
+		 * Apply any needed deletes and clearing.  We issue just one
+		 * _bt_handleitems_vacuum() call per page, so as to minimize WAL
+		 * traffic.
 		 */
-		if (ndeletable > 0)
+		if (ndeletable > 0 || nclearwarm > 0)
 		{
 			/*
 			 * Notice that the issued XLOG_BTREE_VACUUM WAL record includes
@@ -1301,8 +1354,8 @@ restart:
 			 * doesn't seem worth the amount of bookkeeping it'd take to avoid
 			 * that.
 			 */
-			_bt_delitems_vacuum(rel, buf, deletable, ndeletable,
-								vstate->lastBlockVacuumed);
+			_bt_handleitems_vacuum(rel, buf, deletable, ndeletable,
+								clearwarm, nclearwarm);
 
 			/*
 			 * Remember highest leaf page number we've issued a
@@ -1312,6 +1365,7 @@ restart:
 				vstate->lastBlockVacuumed = blkno;
 
 			stats->tuples_removed += ndeletable;
+			stats->pointers_cleared += nclearwarm;
 			/* must recompute maxoff */
 			maxoff = PageGetMaxOffsetNumber(page);
 		}
diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c
index 5b259a3..2765809 100644
--- a/src/backend/access/nbtree/nbtutils.c
+++ b/src/backend/access/nbtree/nbtutils.c
@@ -20,11 +20,14 @@
 #include "access/nbtree.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "access/tuptoaster.h"
+#include "catalog/index.h"
 #include "miscadmin.h"
 #include "utils/array.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 
 typedef struct BTSortArrayContext
@@ -2069,3 +2072,93 @@ btproperty(Oid index_oid, int attno,
 			return false;		/* punt to generic code */
 	}
 }
+
+/*
+ * Check if the index tuple's key matches the one computed from the given heap
+ * tuple's attributes
+ */
+bool
+btrecheck(Relation indexRel, IndexInfo *indexInfo, IndexTuple indexTuple1,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	bool		isavail[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+	IndexTuple	indexTuple2;
+
+	/*
+	 * Get the index values, except for expression attributes. Since WARM is
+	 * not used when a column used by expressions in an index is modified, we
+	 * can safely assume that those index attributes are never changed by a
+	 * WARM update.
+	 *
+	 * We cannot use FormIndexDatum here because that requires access to
+	 * executor state and we don't have that here.
+	 */
+	FormIndexPlainDatum(indexInfo, heapRel, heapTuple, values, isnull, isavail);
+
+	/*
+	 * Form an index tuple using the heap values first. This allows us to
+	 * fetch index attributes from both the current index tuple and the one
+	 * formed from the heap values, and then do a binary comparison using
+	 * datumIsEqual().
+	 *
+	 * This takes care of doing the right comparison for compressed index
+	 * attributes (we just compare the compressed versions in both tuples) and
+	 * also ensures that we correctly detoast heap values, if need be.
+	 */
+	indexTuple2 = index_form_tuple(RelationGetDescr(indexRel), values, isnull);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue1;
+		bool	indxisnull1;
+		Datum	indxvalue2;
+		bool	indxisnull2;
+
+		/* No need to compare if the attribute value is not available */
+		if (!isavail[i - 1])
+			continue;
+
+		indxvalue1 = index_getattr(indexTuple1, i, indexRel->rd_att,
+								   &indxisnull1);
+		indxvalue2 = index_getattr(indexTuple2, i, indexRel->rd_att,
+								   &indxisnull2);
+
+		/*
+		 * If both are NULL, then they are equal
+		 */
+		if (indxisnull1 && indxisnull2)
+			continue;
+
+		/*
+		 * If just one is NULL, then they are not equal
+		 */
+		if (indxisnull1 || indxisnull2)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now just do a raw memory comparison. If the index tuple was formed
+		 * using this heap tuple, the computed index values must match.
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(indxvalue1, indxvalue2, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	pfree(indexTuple2);
+
+	return equal;
+}
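btrecheck's per-attribute comparison above reduces to the NULL handling plus datumIsEqual's raw comparison. A minimal standalone sketch of that decision, using memcmp in place of datumIsEqual (the function name and signature here are illustrative, not PostgreSQL APIs):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * NULL-aware binary equality, mirroring the per-attribute logic in
 * btrecheck: two NULLs compare equal, exactly one NULL compares unequal,
 * otherwise fall through to a raw memory comparison.
 */
static bool
attrs_binary_equal(const void *a, bool a_null,
                   const void *b, bool b_null, size_t len)
{
    if (a_null && b_null)
        return true;                /* both NULL => equal */
    if (a_null || b_null)
        return false;               /* exactly one NULL => not equal */
    return memcmp(a, b, len) == 0;  /* raw memory comparison */
}
```

Because both tuples are formed through index_form_tuple, compressed or toasted values end up in the same representation on both sides, which is what makes a plain binary comparison safe here.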
diff --git a/src/backend/access/nbtree/nbtxlog.c b/src/backend/access/nbtree/nbtxlog.c
index ac60db0..ef24738 100644
--- a/src/backend/access/nbtree/nbtxlog.c
+++ b/src/backend/access/nbtree/nbtxlog.c
@@ -390,8 +390,8 @@ btree_xlog_vacuum(XLogReaderState *record)
 	Buffer		buffer;
 	Page		page;
 	BTPageOpaque opaque;
-#ifdef UNUSED
 	xl_btree_vacuum *xlrec = (xl_btree_vacuum *) XLogRecGetData(record);
+#ifdef UNUSED
 
 	/*
 	 * This section of code is thought to be no longer needed, after analysis
@@ -482,19 +482,30 @@ btree_xlog_vacuum(XLogReaderState *record)
 
 		if (len > 0)
 		{
-			OffsetNumber *unused;
-			OffsetNumber *unend;
+			OffsetNumber *offnums = (OffsetNumber *) ptr;
 
-			unused = (OffsetNumber *) ptr;
-			unend = (OffsetNumber *) ((char *) ptr + len);
+			/*
+			 * Clear the WARM pointers.
+			 *
+			 * We must do this before dealing with the dead items because
+			 * PageIndexMultiDelete may move items around to compactify the
+			 * array and hence offnums recorded earlier won't make any sense
+			 * after PageIndexMultiDelete is called.
+			 */
+			if (xlrec->nclearitems > 0)
+				_bt_clear_items(page, offnums + xlrec->ndelitems,
+						xlrec->nclearitems);
 
-			if ((unend - unused) > 0)
-				PageIndexMultiDelete(page, unused, unend - unused);
+			/*
+			 * And handle the deleted items too
+			 */
+			if (xlrec->ndelitems > 0)
+				PageIndexMultiDelete(page, offnums, xlrec->ndelitems);
 		}
 
 		/*
 		 * Mark the page as not containing any LP_DEAD items --- see comments
-		 * in _bt_delitems_vacuum().
+		 * in _bt_handleitems_vacuum().
 		 */
 		opaque = (BTPageOpaque) PageGetSpecialPointer(page);
 		opaque->btpo_flags &= ~BTP_HAS_GARBAGE;
diff --git a/src/backend/access/rmgrdesc/heapdesc.c b/src/backend/access/rmgrdesc/heapdesc.c
index 44d2d63..d373e61 100644
--- a/src/backend/access/rmgrdesc/heapdesc.c
+++ b/src/backend/access/rmgrdesc/heapdesc.c
@@ -44,6 +44,12 @@ heap_desc(StringInfo buf, XLogReaderState *record)
 
 		appendStringInfo(buf, "off %u", xlrec->offnum);
 	}
+	else if (info == XLOG_HEAP_MULTI_INSERT)
+	{
+		xl_heap_multi_insert *xlrec = (xl_heap_multi_insert *) rec;
+
+		appendStringInfo(buf, "%d tuples", xlrec->ntuples);
+	}
 	else if (info == XLOG_HEAP_DELETE)
 	{
 		xl_heap_delete *xlrec = (xl_heap_delete *) rec;
@@ -102,7 +108,7 @@ heap2_desc(StringInfo buf, XLogReaderState *record)
 	char	   *rec = XLogRecGetData(record);
 	uint8		info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
 
-	info &= XLOG_HEAP_OPMASK;
+	info &= XLOG_HEAP2_OPMASK;
 	if (info == XLOG_HEAP2_CLEAN)
 	{
 		xl_heap_clean *xlrec = (xl_heap_clean *) rec;
@@ -129,12 +135,6 @@ heap2_desc(StringInfo buf, XLogReaderState *record)
 		appendStringInfo(buf, "cutoff xid %u flags %d",
 						 xlrec->cutoff_xid, xlrec->flags);
 	}
-	else if (info == XLOG_HEAP2_MULTI_INSERT)
-	{
-		xl_heap_multi_insert *xlrec = (xl_heap_multi_insert *) rec;
-
-		appendStringInfo(buf, "%d tuples", xlrec->ntuples);
-	}
 	else if (info == XLOG_HEAP2_LOCK_UPDATED)
 	{
 		xl_heap_lock_updated *xlrec = (xl_heap_lock_updated *) rec;
@@ -171,6 +171,12 @@ heap_identify(uint8 info)
 		case XLOG_HEAP_INSERT | XLOG_HEAP_INIT_PAGE:
 			id = "INSERT+INIT";
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			id = "MULTI_INSERT";
+			break;
+		case XLOG_HEAP_MULTI_INSERT | XLOG_HEAP_INIT_PAGE:
+			id = "MULTI_INSERT+INIT";
+			break;
 		case XLOG_HEAP_DELETE:
 			id = "DELETE";
 			break;
@@ -219,12 +225,6 @@ heap2_identify(uint8 info)
 		case XLOG_HEAP2_VISIBLE:
 			id = "VISIBLE";
 			break;
-		case XLOG_HEAP2_MULTI_INSERT:
-			id = "MULTI_INSERT";
-			break;
-		case XLOG_HEAP2_MULTI_INSERT | XLOG_HEAP_INIT_PAGE:
-			id = "MULTI_INSERT+INIT";
-			break;
 		case XLOG_HEAP2_LOCK_UPDATED:
 			id = "LOCK_UPDATED";
 			break;
diff --git a/src/backend/access/rmgrdesc/nbtdesc.c b/src/backend/access/rmgrdesc/nbtdesc.c
index fbde9d6..6b2c5d6 100644
--- a/src/backend/access/rmgrdesc/nbtdesc.c
+++ b/src/backend/access/rmgrdesc/nbtdesc.c
@@ -48,8 +48,8 @@ btree_desc(StringInfo buf, XLogReaderState *record)
 			{
 				xl_btree_vacuum *xlrec = (xl_btree_vacuum *) rec;
 
-				appendStringInfo(buf, "lastBlockVacuumed %u",
-								 xlrec->lastBlockVacuumed);
+				appendStringInfo(buf, "ndelitems %u, nclearitems %u",
+								 xlrec->ndelitems, xlrec->nclearitems);
 				break;
 			}
 		case XLOG_BTREE_DELETE:
diff --git a/src/backend/access/spgist/spgutils.c b/src/backend/access/spgist/spgutils.c
index e57ac49..59ef7f3 100644
--- a/src/backend/access/spgist/spgutils.c
+++ b/src/backend/access/spgist/spgutils.c
@@ -72,6 +72,7 @@ spghandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/spgist/spgvacuum.c b/src/backend/access/spgist/spgvacuum.c
index cce9b3f..711d351 100644
--- a/src/backend/access/spgist/spgvacuum.c
+++ b/src/backend/access/spgist/spgvacuum.c
@@ -155,7 +155,8 @@ vacuumLeafPage(spgBulkDeleteState *bds, Relation index, Buffer buffer,
 		{
 			Assert(ItemPointerIsValid(&lt->heapPtr));
 
-			if (bds->callback(&lt->heapPtr, bds->callback_state))
+			if (bds->callback(&lt->heapPtr, false, bds->callback_state) ==
+					IBDCR_DELETE)
 			{
 				bds->stats->tuples_removed += 1;
 				deletable[i] = true;
@@ -425,7 +426,8 @@ vacuumLeafRoot(spgBulkDeleteState *bds, Relation index, Buffer buffer)
 		{
 			Assert(ItemPointerIsValid(&lt->heapPtr));
 
-			if (bds->callback(&lt->heapPtr, bds->callback_state))
+			if (bds->callback(&lt->heapPtr, false, bds->callback_state) ==
+					IBDCR_DELETE)
 			{
 				bds->stats->tuples_removed += 1;
 				toDelete[xlrec.nDelete] = i;
@@ -902,10 +904,10 @@ spgbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 }
 
 /* Dummy callback to delete no tuples during spgvacuumcleanup */
-static bool
-dummy_callback(ItemPointer itemptr, void *state)
+static IndexBulkDeleteCallbackResult
+dummy_callback(ItemPointer itemptr, bool is_warm, void *state)
 {
-	return false;
+	return IBDCR_KEEP;
 }
 
 /*
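The bulk-delete callback is widened from a bool ("delete this tuple?") to a three-way result so VACUUM can also ask an index AM to clear a WARM pointer without removing it. A standalone sketch of dispatching on such a result (the callback body is illustrative, not the patch's actual heap-side test):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Three-way bulk-delete callback result, mirroring the IBDCR_* values
 * used by this patch.
 */
typedef enum
{
    IBDCR_KEEP,             /* leave the index tuple alone */
    IBDCR_DELETE,           /* remove the index tuple */
    IBDCR_CLEAR_WARM        /* keep the tuple, but clear its WARM flag */
} IndexBulkDeleteCallbackResult;

/*
 * Illustrative callback: delete pointers to dead tuples; for surviving
 * WARM pointers, request that the WARM flag be cleared instead.
 */
static IndexBulkDeleteCallbackResult
example_callback(bool tuple_dead, bool is_warm)
{
    if (tuple_dead)
        return IBDCR_DELETE;
    if (is_warm)
        return IBDCR_CLEAR_WARM;
    return IBDCR_KEEP;
}
```

AMs that never see WARM pointers (like SP-GiST above) simply pass is_warm = false and treat anything other than IBDCR_DELETE as "keep".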
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 1eb163f..2c27661 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -54,6 +54,7 @@
 #include "nodes/makefuncs.h"
 #include "nodes/nodeFuncs.h"
 #include "optimizer/clauses.h"
+#include "optimizer/var.h"
 #include "parser/parser.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
@@ -114,7 +115,7 @@ static void IndexCheckExclusion(Relation heapRelation,
 					IndexInfo *indexInfo);
 static inline int64 itemptr_encode(ItemPointer itemptr);
 static inline void itemptr_decode(ItemPointer itemptr, int64 encoded);
-static bool validate_index_callback(ItemPointer itemptr, void *opaque);
+static IndexBulkDeleteCallbackResult validate_index_callback(ItemPointer itemptr, bool is_warm, void *opaque);
 static void validate_index_heapscan(Relation heapRelation,
 						Relation indexRelation,
 						IndexInfo *indexInfo,
@@ -1691,6 +1692,20 @@ BuildIndexInfo(Relation index)
 	ii->ii_AmCache = NULL;
 	ii->ii_Context = CurrentMemoryContext;
 
+	/* build a bitmap of all table attributes referred by this index */
+	for (i = 0; i < ii->ii_NumIndexAttrs; i++)
+	{
+		AttrNumber attr = ii->ii_KeyAttrNumbers[i];
+		ii->ii_indxattrs = bms_add_member(ii->ii_indxattrs, attr -
+				FirstLowInvalidHeapAttributeNumber);
+	}
+
+	/* Collect all attributes used in expressions, too */
+	pull_varattnos((Node *) ii->ii_Expressions, 1, &ii->ii_indxattrs);
+
+	/* Collect all attributes in the index predicate, too */
+	pull_varattnos((Node *) ii->ii_Predicate, 1, &ii->ii_indxattrs);
+
 	return ii;
 }
 
@@ -1815,6 +1830,51 @@ FormIndexDatum(IndexInfo *indexInfo,
 		elog(ERROR, "wrong number of index expressions");
 }
 
+/*
+ * This is the same as FormIndexDatum, except that we don't compute any
+ * expression attributes, so it can be used when executor interfaces are not
+ * available. If the i'th attribute is available, isavail[i] is set to true;
+ * otherwise it is set to false. The caller must always check whether an
+ * attribute value is available before using it.
+ */
+void
+FormIndexPlainDatum(IndexInfo *indexInfo,
+			   Relation heapRel,
+			   HeapTuple heapTup,
+			   Datum *values,
+			   bool *isnull,
+			   bool *isavail)
+{
+	int			i;
+
+	for (i = 0; i < indexInfo->ii_NumIndexAttrs; i++)
+	{
+		int			keycol = indexInfo->ii_KeyAttrNumbers[i];
+		Datum		iDatum;
+		bool		isNull;
+
+		if (keycol != 0)
+		{
+			/*
+			 * Plain index column; get the value we need directly from the
+			 * heap tuple.
+			 */
+			iDatum = heap_getattr(heapTup, keycol, RelationGetDescr(heapRel), &isNull);
+			values[i] = iDatum;
+			isnull[i] = isNull;
+			isavail[i] = true;
+		}
+		else
+		{
+			/*
+			 * This is an expression attribute which can't be computed by us.
+			 * So just inform the caller about it.
+			 */
+			isavail[i] = false;
+			isnull[i] = true;
+		}
+	}
+}
 
 /*
  * index_update_stats --- update pg_class entry after CREATE INDEX or REINDEX
@@ -2929,15 +2989,15 @@ itemptr_decode(ItemPointer itemptr, int64 encoded)
 /*
  * validate_index_callback - bulkdelete callback to collect the index TIDs
  */
-static bool
-validate_index_callback(ItemPointer itemptr, void *opaque)
+static IndexBulkDeleteCallbackResult
+validate_index_callback(ItemPointer itemptr, bool is_warm, void *opaque)
 {
 	v_i_state  *state = (v_i_state *) opaque;
 	int64		encoded = itemptr_encode(itemptr);
 
 	tuplesort_putdatum(state->tuplesort, Int64GetDatum(encoded), false);
 	state->itups += 1;
-	return false;				/* never actually delete anything */
+	return IBDCR_KEEP;				/* never actually delete anything */
 }
 
 /*
@@ -3156,7 +3216,8 @@ validate_index_heapscan(Relation heapRelation,
 						 heapRelation,
 						 indexInfo->ii_Unique ?
 						 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-						 indexInfo);
+						 indexInfo,
+						 false);
 
 			state->tups_inserted += 1;
 		}
diff --git a/src/backend/catalog/indexing.c b/src/backend/catalog/indexing.c
index abc344a..6392f33 100644
--- a/src/backend/catalog/indexing.c
+++ b/src/backend/catalog/indexing.c
@@ -66,10 +66,15 @@ CatalogCloseIndexes(CatalogIndexState indstate)
  *
  * This should be called for each inserted or updated catalog tuple.
  *
+ * If the tuple was WARM updated, modified_attrs contains the set of columns
+ * changed by the update. We must not insert new index entries for indexes
+ * which do not refer to any of the modified columns.
+ *
  * This is effectively a cut-down version of ExecInsertIndexTuples.
  */
 static void
-CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
+CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple,
+		Bitmapset *modified_attrs, bool warm_update)
 {
 	int			i;
 	int			numIndexes;
@@ -79,12 +84,28 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 	IndexInfo **indexInfoArray;
 	Datum		values[INDEX_MAX_KEYS];
 	bool		isnull[INDEX_MAX_KEYS];
+	ItemPointerData root_tid;
 
-	/* HOT update does not require index inserts */
-	if (HeapTupleIsHeapOnly(heapTuple))
+	/*
+	 * A HOT update does not require any index inserts, but a WARM update may
+	 * still require inserts into some indexes.
+	 */
+	if (HeapTupleIsHeapOnly(heapTuple) && !warm_update)
 		return;
 
 	/*
+	 * If we've done a WARM update, then we must index the TID of the root line
+	 * pointer and not the actual TID of the new tuple.
+	 */
+	if (warm_update)
+		ItemPointerSet(&root_tid,
+				ItemPointerGetBlockNumber(&(heapTuple->t_self)),
+				HeapTupleHeaderGetRootOffset(heapTuple->t_data));
+	else
+		ItemPointerCopy(&heapTuple->t_self, &root_tid);
+
+	/*
 	 * Get information from the state structure.  Fall out if nothing to do.
 	 */
 	numIndexes = indstate->ri_NumIndices;
@@ -112,6 +133,17 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 			continue;
 
 		/*
+		 * If we've done a WARM update, we must not insert a new index tuple
+		 * if none of the index keys have changed. This is not just an
+		 * optimization, but a requirement for WARM to work correctly.
+		 */
+		if (warm_update)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
+		/*
 		 * Expressional and partial indexes on system catalogs are not
 		 * supported, nor exclusion constraints, nor deferred uniqueness
 		 */
@@ -136,11 +168,12 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 		index_insert(relationDescs[i],	/* index relation */
 					 values,	/* array of index Datums */
 					 isnull,	/* is-null flags */
-					 &(heapTuple->t_self),		/* tid of heap tuple */
+					 &root_tid,
 					 heapRelation,
 					 relationDescs[i]->rd_index->indisunique ?
 					 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-					 indexInfo);
+					 indexInfo,
+					 warm_update);
 	}
 
 	ExecDropSingleTupleTableSlot(slot);
@@ -168,7 +201,7 @@ CatalogTupleInsert(Relation heapRel, HeapTuple tup)
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 	CatalogCloseIndexes(indstate);
 
 	return oid;
@@ -190,7 +223,7 @@ CatalogTupleInsertWithInfo(Relation heapRel, HeapTuple tup,
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 
 	return oid;
 }
@@ -210,12 +243,14 @@ void
 CatalogTupleUpdate(Relation heapRel, ItemPointer otid, HeapTuple tup)
 {
 	CatalogIndexState indstate;
+	bool	warm_update;
+	Bitmapset	*modified_attrs;
 
 	indstate = CatalogOpenIndexes(heapRel);
 
-	simple_heap_update(heapRel, otid, tup);
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, modified_attrs, warm_update);
 	CatalogCloseIndexes(indstate);
 }
 
@@ -231,9 +266,12 @@ void
 CatalogTupleUpdateWithInfo(Relation heapRel, ItemPointer otid, HeapTuple tup,
 						   CatalogIndexState indstate)
 {
-	simple_heap_update(heapRel, otid, tup);
+	Bitmapset  *modified_attrs;
+	bool		warm_update;
+
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, modified_attrs, warm_update);
 }
 
 /*
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 0217f39..4ef964f 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -530,6 +530,7 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_tuples_deleted(C.oid) AS n_tup_del,
             pg_stat_get_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_tuples_warm_updated(C.oid) AS n_tup_warm_upd,
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
@@ -560,7 +561,8 @@ CREATE VIEW pg_stat_xact_all_tables AS
             pg_stat_get_xact_tuples_inserted(C.oid) AS n_tup_ins,
             pg_stat_get_xact_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_xact_tuples_deleted(C.oid) AS n_tup_del,
-            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd
+            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_xact_tuples_warm_updated(C.oid) AS n_tup_warm_upd
     FROM pg_class C LEFT JOIN
          pg_index I ON C.oid = I.indrelid
          LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
diff --git a/src/backend/commands/constraint.c b/src/backend/commands/constraint.c
index e2544e5..330b661 100644
--- a/src/backend/commands/constraint.c
+++ b/src/backend/commands/constraint.c
@@ -40,6 +40,7 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	TriggerData *trigdata = castNode(TriggerData, fcinfo->context);
 	const char *funcname = "unique_key_recheck";
 	HeapTuple	new_row;
+	HeapTupleData heapTuple;
 	ItemPointerData tmptid;
 	Relation	indexRel;
 	IndexInfo  *indexInfo;
@@ -102,7 +103,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	 * removed.
 	 */
 	tmptid = new_row->t_self;
-	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL))
+	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL,
+				NULL, NULL, &heapTuple))
 	{
 		/*
 		 * All rows in the HOT chain are dead, so skip the check.
@@ -166,7 +168,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 		 */
 		index_insert(indexRel, values, isnull, &(new_row->t_self),
 					 trigdata->tg_relation, UNIQUE_CHECK_EXISTING,
-					 indexInfo);
+					 indexInfo,
+					 false);
 	}
 	else
 	{
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index 0158eda..d6ef4a8 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -2688,6 +2688,8 @@ CopyFrom(CopyState cstate)
 					if (resultRelInfo->ri_NumIndices > 0)
 						recheckIndexes = ExecInsertIndexTuples(slot,
 															&(tuple->t_self),
+															&(tuple->t_self),
+															NULL,
 															   estate,
 															   false,
 															   NULL,
@@ -2842,6 +2844,7 @@ CopyFromInsertBatch(CopyState cstate, EState *estate, CommandId mycid,
 			ExecStoreTuple(bufferedTuples[i], myslot, InvalidBuffer, false);
 			recheckIndexes =
 				ExecInsertIndexTuples(myslot, &(bufferedTuples[i]->t_self),
+									  &(bufferedTuples[i]->t_self), NULL,
 									  estate, false, NULL, NIL);
 			ExecARInsertTriggers(estate, resultRelInfo,
 								 bufferedTuples[i],
diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c
index 4861799..b62b0e9 100644
--- a/src/backend/commands/indexcmds.c
+++ b/src/backend/commands/indexcmds.c
@@ -694,7 +694,14 @@ DefineIndex(Oid relationId,
 	 * visible to other transactions before we start to build the index. That
 	 * will prevent them from making incompatible HOT updates.  The new index
 	 * will be marked not indisready and not indisvalid, so that no one else
-	 * tries to either insert into it or use it for queries.
+	 * tries to either insert into it or use it for queries. In addition to
+	 * that, WARM updates will be disallowed if an update modifies one of the
+	 * columns used by this new index. This is necessary to ensure that we
+	 * don't create WARM tuples which do not have a corresponding entry in
+	 * this index. Note that during the second phase, we will index only those
+	 * heap tuples whose root line pointer is not already in the index, hence
+	 * it's important that all tuples in a given chain have the same value for
+	 * every indexed column (including the columns of this new index).
 	 *
 	 * We must commit our current transaction so that the index becomes
 	 * visible; then start another.  Note that all the data structures we just
@@ -742,7 +749,10 @@ DefineIndex(Oid relationId,
 	 * marked as "not-ready-for-inserts".  The index is consulted while
 	 * deciding HOT-safety though.  This arrangement ensures that no new HOT
 	 * chains can be created where the new tuple and the old tuple in the
-	 * chain have different index keys.
+	 * chain have different index keys. Also, the new index is consulted when
+	 * deciding whether a WARM update is possible, and a WARM update is not
+	 * done if a column used by this index is being updated. This ensures that
+	 * we don't create WARM tuples which are not indexed by this index.
 	 *
 	 * We now take a new snapshot, and build the index using all tuples that
 	 * are visible in this snapshot.  We can be sure that any HOT updates to
@@ -777,7 +787,8 @@ DefineIndex(Oid relationId,
 	/*
 	 * Update the pg_index row to mark the index as ready for inserts. Once we
 	 * commit this transaction, any new transactions that open the table must
-	 * insert new entries into the index for insertions and non-HOT updates.
+	 * insert new entries into the index for insertions and for non-HOT or
+	 * WARM updates where this index needs a new entry.
 	 */
 	index_set_state_flags(indexRelationId, INDEX_CREATE_SET_READY);
 
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 5b43a66..f52490f 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -104,6 +104,39 @@
  */
 #define PREFETCH_SIZE			((BlockNumber) 32)
 
+/*
+ * Structure to track WARM chains that can be converted into HOT chains during
+ * this run.
+ *
+ * To reduce the space requirement, we're using bitfields. But the way things
+ * are laid out, we're still wasting 1 byte per candidate chain.
+ */
+typedef struct LVWarmChain
+{
+	ItemPointerData	chain_tid;			/* root of the chain */
+
+	/*
+	 * 1 - if the chain contains only post-warm tuples
+	 * 0 - if the chain contains only pre-warm tuples
+	 */
+	uint8			is_postwarm_chain:2;
+
+	/* 1 - if this chain must remain a WARM chain */
+	uint8			keep_warm_chain:2;
+
+	/*
+	 * Number of CLEAR pointers to this root TID found so far - must never be
+	 * more than 2.
+	 */
+	uint8			num_clear_pointers:2;
+
+	/*
+	 * Number of WARM pointers to this root TID found so far - must never be
+	 * more than 1.
+	 */
+	uint8			num_warm_pointers:2;
+} LVWarmChain;
+
 typedef struct LVRelStats
 {
 	/* hasindex = true means two-pass strategy; false means one-pass */
@@ -122,6 +155,14 @@ typedef struct LVRelStats
 	BlockNumber pages_removed;
 	double		tuples_deleted;
 	BlockNumber nonempty_pages; /* actually, last nonempty page + 1 */
+
+	/* List of candidate WARM chains that can be converted into HOT chains */
+	/* NB: this list is ordered by TID of the root pointers */
+	int				num_warm_chains;	/* current # of entries */
+	int				max_warm_chains;	/* # slots allocated in array */
+	LVWarmChain 	*warm_chains;		/* array of LVWarmChain */
+	double			num_non_convertible_warm_chains;
+
 	/* List of TIDs of tuples we intend to delete */
 	/* NB: this list is ordered by TID address */
 	int			num_dead_tuples;	/* current # of entries */
@@ -150,6 +191,7 @@ static void lazy_scan_heap(Relation onerel, int options,
 static void lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats);
 static bool lazy_check_needs_freeze(Buffer buf, bool *hastup);
 static void lazy_vacuum_index(Relation indrel,
+				  bool clear_warm,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats);
 static void lazy_cleanup_index(Relation indrel,
@@ -157,6 +199,10 @@ static void lazy_cleanup_index(Relation indrel,
 				   LVRelStats *vacrelstats);
 static int lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 				 int tupindex, LVRelStats *vacrelstats, Buffer *vmbuffer);
+static int lazy_warmclear_page(Relation onerel, BlockNumber blkno,
+				 Buffer buffer, int chainindex, LVRelStats *vacrelstats,
+				 Buffer *vmbuffer, bool check_all_visible);
+static void lazy_reset_warm_pointer_count(LVRelStats *vacrelstats);
 static bool should_attempt_truncation(LVRelStats *vacrelstats);
 static void lazy_truncate_heap(Relation onerel, LVRelStats *vacrelstats);
 static BlockNumber count_nondeletable_pages(Relation onerel,
@@ -164,8 +210,15 @@ static BlockNumber count_nondeletable_pages(Relation onerel,
 static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks);
 static void lazy_record_dead_tuple(LVRelStats *vacrelstats,
 					   ItemPointer itemptr);
-static bool lazy_tid_reaped(ItemPointer itemptr, void *state);
+static void lazy_record_warm_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr);
+static void lazy_record_clear_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr);
+static IndexBulkDeleteCallbackResult lazy_tid_reaped(ItemPointer itemptr, bool is_warm, void *state);
+static IndexBulkDeleteCallbackResult lazy_indexvac_phase1(ItemPointer itemptr, bool is_warm, void *state);
+static IndexBulkDeleteCallbackResult lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state);
 static int	vac_cmp_itemptr(const void *left, const void *right);
+static int vac_cmp_warm_chain(const void *left, const void *right);
 static bool heap_page_is_all_visible(Relation rel, Buffer buf,
 					 TransactionId *visibility_cutoff_xid, bool *all_frozen);
 
@@ -690,8 +743,10 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		 * If we are close to overrunning the available space for dead-tuple
 		 * TIDs, pause and do a cycle of vacuuming before we tackle this page.
 		 */
-		if ((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
-			vacrelstats->num_dead_tuples > 0)
+		if (((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
+			vacrelstats->num_dead_tuples > 0) ||
+			((vacrelstats->max_warm_chains - vacrelstats->num_warm_chains) < MaxHeapTuplesPerPage &&
+			 vacrelstats->num_warm_chains > 0))
 		{
 			const int	hvp_index[] = {
 				PROGRESS_VACUUM_PHASE,
@@ -721,6 +776,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			/* Remove index entries */
 			for (i = 0; i < nindexes; i++)
 				lazy_vacuum_index(Irel[i],
+								  (vacrelstats->num_warm_chains > 0),
 								  &indstats[i],
 								  vacrelstats);
 
@@ -743,6 +799,9 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			 * valid.
 			 */
 			vacrelstats->num_dead_tuples = 0;
+			vacrelstats->num_warm_chains = 0;
+			memset(vacrelstats->warm_chains, 0,
+					vacrelstats->max_warm_chains * sizeof (LVWarmChain));
 			vacrelstats->num_index_scans++;
 
 			/* Report that we are once again scanning the heap */
@@ -947,15 +1006,31 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 				continue;
 			}
 
+			ItemPointerSet(&(tuple.t_self), blkno, offnum);
+
 			/* Redirect items mustn't be touched */
 			if (ItemIdIsRedirected(itemid))
 			{
+				HeapCheckWarmChainStatus status = heap_check_warm_chain(page,
+						&tuple.t_self, false);
+				if (HCWC_IS_WARM_UPDATED(status))
+				{
+					/*
+					 * A chain which is either complete WARM or CLEAR is a
+					 * candidate for chain conversion. Remember the chain and
+					 * whether the chain has all WARM tuples or not.
+					 */
+					if (HCWC_IS_ALL_WARM(status))
+						lazy_record_warm_chain(vacrelstats, &tuple.t_self);
+					else if (HCWC_IS_ALL_CLEAR(status))
+						lazy_record_clear_chain(vacrelstats, &tuple.t_self);
+					else
+						vacrelstats->num_non_convertible_warm_chains++;
+				}
 				hastup = true;	/* this page won't be truncatable */
 				continue;
 			}
 
-			ItemPointerSet(&(tuple.t_self), blkno, offnum);
-
 			/*
 			 * DEAD item pointers are to be vacuumed normally; but we don't
 			 * count them in tups_vacuumed, else we'd be double-counting (at
@@ -975,6 +1050,26 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			tuple.t_len = ItemIdGetLength(itemid);
 			tuple.t_tableOid = RelationGetRelid(onerel);
 
+			if (!HeapTupleIsHeapOnly(&tuple))
+			{
+				HeapCheckWarmChainStatus status = heap_check_warm_chain(page,
+						&tuple.t_self, false);
+				if (HCWC_IS_WARM_UPDATED(status))
+				{
+					/*
+					 * A chain which is either completely WARM or completely
+					 * CLEAR is a candidate for conversion. Remember the chain
+					 * and whether it has all WARM tuples or not.
+					 */
+					if (HCWC_IS_ALL_WARM(status))
+						lazy_record_warm_chain(vacrelstats, &tuple.t_self);
+					else if (HCWC_IS_ALL_CLEAR(status))
+						lazy_record_clear_chain(vacrelstats, &tuple.t_self);
+					else
+						vacrelstats->num_non_convertible_warm_chains++;
+				}
+			}
+
 			tupgone = false;
 
 			switch (HeapTupleSatisfiesVacuum(&tuple, OldestXmin, buf))
@@ -1040,6 +1135,19 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 							break;
 						}
 
+						/*
+						 * If this tuple was ever WARM updated or is a WARM
+						 * tuple, there could be multiple index entries
+						 * pointing to the root of this chain. We can't do
+						 * index-only scans for such tuples without verifying
+						 * the index keys. So mark the page as !all_visible.
+						 */
+						if (HeapTupleHeaderIsWarmUpdated(tuple.t_data))
+						{
+							all_visible = false;
+							break;
+						}
+
 						/* Track newest xmin on page. */
 						if (TransactionIdFollows(xmin, visibility_cutoff_xid))
 							visibility_cutoff_xid = xmin;
@@ -1282,7 +1390,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 
 	/* If any tuples need to be deleted, perform final vacuum cycle */
 	/* XXX put a threshold on min number of tuples here? */
-	if (vacrelstats->num_dead_tuples > 0)
+	if (vacrelstats->num_dead_tuples > 0 || vacrelstats->num_warm_chains > 0)
 	{
 		const int	hvp_index[] = {
 			PROGRESS_VACUUM_PHASE,
@@ -1300,6 +1408,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		/* Remove index entries */
 		for (i = 0; i < nindexes; i++)
 			lazy_vacuum_index(Irel[i],
+							  (vacrelstats->num_warm_chains > 0),
 							  &indstats[i],
 							  vacrelstats);
 
@@ -1371,7 +1480,10 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
  *
  *		This routine marks dead tuples as unused and compacts out free
  *		space on their pages.  Pages not having dead tuples recorded from
- *		lazy_scan_heap are not visited at all.
+ *		lazy_scan_heap are not visited at all. This routine also converts
+ *		candidate WARM chains to HOT chains by clearing WARM-related flags. The
+ *		candidate chains are determined by the preceding index scans after
+ *		looking at the data collected by the first heap scan.
  *
  * Note: the reason for doing this as a second pass is we cannot remove
  * the tuples until we've removed their index entries, and we want to
@@ -1380,7 +1492,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 static void
 lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 {
-	int			tupindex;
+	int			tupindex, chainindex;
 	int			npages;
 	PGRUsage	ru0;
 	Buffer		vmbuffer = InvalidBuffer;
@@ -1389,33 +1501,69 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 	npages = 0;
 
 	tupindex = 0;
-	while (tupindex < vacrelstats->num_dead_tuples)
+	chainindex = 0;
+	while (tupindex < vacrelstats->num_dead_tuples ||
+		   chainindex < vacrelstats->num_warm_chains)
 	{
-		BlockNumber tblk;
+		BlockNumber tblk, chainblk, vacblk;
 		Buffer		buf;
 		Page		page;
 		Size		freespace;
 
 		vacuum_delay_point();
 
-		tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
-		buf = ReadBufferExtended(onerel, MAIN_FORKNUM, tblk, RBM_NORMAL,
+		tblk = chainblk = InvalidBlockNumber;
+		if (chainindex < vacrelstats->num_warm_chains)
+			chainblk =
+				ItemPointerGetBlockNumber(&(vacrelstats->warm_chains[chainindex].chain_tid));
+
+		if (tupindex < vacrelstats->num_dead_tuples)
+			tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
+
+		if (tblk == InvalidBlockNumber)
+			vacblk = chainblk;
+		else if (chainblk == InvalidBlockNumber)
+			vacblk = tblk;
+		else
+			vacblk = Min(chainblk, tblk);
+
+		Assert(vacblk != InvalidBlockNumber);
+
+		buf = ReadBufferExtended(onerel, MAIN_FORKNUM, vacblk, RBM_NORMAL,
 								 vac_strategy);
-		if (!ConditionalLockBufferForCleanup(buf))
+
+		if (vacblk == chainblk)
+			LockBufferForCleanup(buf);
+		else if (!ConditionalLockBufferForCleanup(buf))
 		{
 			ReleaseBuffer(buf);
 			++tupindex;
 			continue;
 		}
-		tupindex = lazy_vacuum_page(onerel, tblk, buf, tupindex, vacrelstats,
-									&vmbuffer);
+
+		/*
+		 * Convert WARM chains on this page. This should be done before
+		 * vacuuming the page to ensure that we can correctly set visibility
+		 * bits after clearing WARM chains.
+		 *
+		 * If we are going to vacuum this page then don't check for
+		 * all-visibility just yet.
+		 */
+		if (vacblk == chainblk)
+			chainindex = lazy_warmclear_page(onerel, chainblk, buf, chainindex,
+					vacrelstats, &vmbuffer, chainblk != tblk);
+
+		if (vacblk == tblk)
+			tupindex = lazy_vacuum_page(onerel, tblk, buf, tupindex, vacrelstats,
+					&vmbuffer);
 
 		/* Now that we've compacted the page, record its available space */
 		page = BufferGetPage(buf);
 		freespace = PageGetHeapFreeSpace(page);
 
 		UnlockReleaseBuffer(buf);
-		RecordPageWithFreeSpace(onerel, tblk, freespace);
+		RecordPageWithFreeSpace(onerel, vacblk, freespace);
 		npages++;
 	}
 
@@ -1434,6 +1582,107 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 }
 
 /*
+ *	lazy_warmclear_page() -- clear various WARM bits on the tuples.
+ *
+ * Caller must hold pin and buffer cleanup lock on the buffer.
+ *
+ * chainindex is the index in vacrelstats->warm_chains of the first candidate
+ * chain on this page.  We assume the rest follow sequentially.
+ * The return value is the first chainindex after the chains of this page.
+ *
+ * If check_all_visible is set then we also check if the page has now become
+ * all visible and update visibility map.
+ */
+static int
+lazy_warmclear_page(Relation onerel, BlockNumber blkno, Buffer buffer,
+				 int chainindex, LVRelStats *vacrelstats, Buffer *vmbuffer,
+				 bool check_all_visible)
+{
+	Page			page = BufferGetPage(buffer);
+	OffsetNumber	cleared_offnums[MaxHeapTuplesPerPage];
+	int				num_cleared = 0;
+	TransactionId	visibility_cutoff_xid;
+	bool			all_frozen;
+
+	pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_WARMCLEARED, blkno);
+
+	START_CRIT_SECTION();
+
+	for (; chainindex < vacrelstats->num_warm_chains ; chainindex++)
+	{
+		BlockNumber tblk;
+		LVWarmChain	*chain;
+
+		chain = &vacrelstats->warm_chains[chainindex];
+
+		tblk = ItemPointerGetBlockNumber(&chain->chain_tid);
+		if (tblk != blkno)
+			break;				/* past end of tuples for this block */
+
+		/*
+		 * Since a heap page can have no more than MaxHeapTuplesPerPage
+		 * offnums and we process each offnum only once, a
+		 * MaxHeapTuplesPerPage-sized array is enough to hold all tuples
+		 * cleared on this page.
+		 */
+		if (!chain->keep_warm_chain)
+			num_cleared += heap_clear_warm_chain(page, &chain->chain_tid,
+					cleared_offnums + num_cleared);
+	}
+
+	/*
+	 * Mark buffer dirty before we write WAL.
+	 */
+	MarkBufferDirty(buffer);
+
+	/* XLOG stuff */
+	if (RelationNeedsWAL(onerel))
+	{
+		XLogRecPtr	recptr;
+
+		recptr = log_heap_warmclear(onerel, buffer,
+								cleared_offnums, num_cleared);
+		PageSetLSN(page, recptr);
+	}
+
+	END_CRIT_SECTION();
+
+	/* If not checking for all-visibility then we're done */
+	if (!check_all_visible)
+		return chainindex;
+
+	/*
+	 * The following code should match the corresponding code in
+	 * lazy_vacuum_page.
+	 */
+	if (heap_page_is_all_visible(onerel, buffer, &visibility_cutoff_xid,
+								 &all_frozen))
+		PageSetAllVisible(page);
+
+	/*
+	 * All the changes to the heap page have been done. If the all-visible
+	 * flag is now set, also set the VM all-visible bit (and, if possible, the
+	 * all-frozen bit) unless this has already been done previously.
+	 */
+	if (PageIsAllVisible(page))
+	{
+		uint8		vm_status = visibilitymap_get_status(onerel, blkno, vmbuffer);
+		uint8		flags = 0;
+
+		/* Set the VM all-frozen bit to flag, if needed */
+		if ((vm_status & VISIBILITYMAP_ALL_VISIBLE) == 0)
+			flags |= VISIBILITYMAP_ALL_VISIBLE;
+		if ((vm_status & VISIBILITYMAP_ALL_FROZEN) == 0 && all_frozen)
+			flags |= VISIBILITYMAP_ALL_FROZEN;
+
+		Assert(BufferIsValid(*vmbuffer));
+		if (flags != 0)
+			visibilitymap_set(onerel, blkno, buffer, InvalidXLogRecPtr,
+							  *vmbuffer, visibility_cutoff_xid, flags);
+	}
+	return chainindex;
+}
+
+/*
  *	lazy_vacuum_page() -- free dead tuples on a page
  *					 and repair its fragmentation.
  *
@@ -1586,6 +1835,24 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
 	return false;
 }
 
+/*
+ * Reset counters tracking number of WARM and CLEAR pointers per candidate TID.
+ * These counters are maintained per index and cleared when the next index is
+ * picked up for cleanup.
+ *
+ * We don't touch keep_warm_chain since, once a chain is known to be
+ * non-convertible, we must remember that across all indexes.
+ */
+static void
+lazy_reset_warm_pointer_count(LVRelStats *vacrelstats)
+{
+	int i;
+	for (i = 0; i < vacrelstats->num_warm_chains; i++)
+	{
+		LVWarmChain *chain = &vacrelstats->warm_chains[i];
+		chain->num_clear_pointers = chain->num_warm_pointers = 0;
+	}
+}
 
 /*
  *	lazy_vacuum_index() -- vacuum one index relation.
@@ -1595,6 +1862,7 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
  */
 static void
 lazy_vacuum_index(Relation indrel,
+				  bool clear_warm,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats)
 {
@@ -1610,15 +1878,87 @@ lazy_vacuum_index(Relation indrel,
 	ivinfo.num_heap_tuples = vacrelstats->old_rel_tuples;
 	ivinfo.strategy = vac_strategy;
 
-	/* Do bulk deletion */
-	*stats = index_bulk_delete(&ivinfo, *stats,
-							   lazy_tid_reaped, (void *) vacrelstats);
+	/*
+	 * If told, convert WARM chains into HOT chains.
+	 *
+	 * We must have already collected the candidate WARM chains, i.e. chains
+	 * that have either all tuples with the HEAP_WARM_TUPLE flag set or none.
+	 *
+	 * This works in two phases. In the first phase, we do a complete index
+	 * scan and collect information about index pointers to the candidate
+	 * chains, but we don't do any conversion. To be precise, we count the
+	 * number of WARM and CLEAR index pointers to each candidate chain and use
+	 * that knowledge to arrive at a decision, doing the actual conversion
+	 * during the second phase (we do kill known-dead pointers in this phase,
+	 * though).
+	 *
+	 * In the second phase, for each candidate chain we check if we have seen a
+	 * WARM index pointer. For such chains, we kill the CLEAR pointer and
+	 * convert the WARM pointer into a CLEAR pointer. The heap tuples are
+	 * cleared of WARM flags in the second heap scan. If we did not find any
+	 * WARM pointer to a WARM chain, that means the chain is reachable from
+	 * the CLEAR pointer (because, say, the WARM update did not add a new
+	 * entry for this index). In that case, we do nothing.  There is a third
+	 * case where we find two CLEAR pointers to a candidate chain. This can happen
+	 * because of aborted vacuums. We don't handle that case yet, but it should
+	 * be possible to apply the same recheck logic and find which of the clear
+	 * pointers is redundant and should be removed.
+	 *
+	 * For CLEAR chains, we just kill the WARM pointer, if it exists, and keep
+	 * the CLEAR pointer.
+	 */
+	if (clear_warm)
+	{
+		/*
+		 * Before starting the index scan, reset the counters of WARM and CLEAR
+		 * pointers, probably carried forward from the previous index.
+		 */
+		lazy_reset_warm_pointer_count(vacrelstats);
+
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_indexvac_phase1, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to remove %d row versions, found "
+						"%.0f warm pointers, %.0f clear pointers, removed "
+						"%.0f warm pointers, removed %.0f clear pointers",
+						RelationGetRelationName(indrel),
+						vacrelstats->num_dead_tuples,
+						(*stats)->num_warm_pointers,
+						(*stats)->num_clear_pointers,
+						(*stats)->warm_pointers_removed,
+						(*stats)->clear_pointers_removed)));
+
+		(*stats)->num_warm_pointers = 0;
+		(*stats)->num_clear_pointers = 0;
+		(*stats)->warm_pointers_removed = 0;
+		(*stats)->clear_pointers_removed = 0;
+		(*stats)->pointers_cleared = 0;
+
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_indexvac_phase2, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to convert WARM pointers, found "
+						"%.0f WARM pointers, %.0f CLEAR pointers, removed "
+						"%.0f WARM pointers, removed %.0f CLEAR pointers, "
+						"cleared %.0f WARM pointers",
+						RelationGetRelationName(indrel),
+						(*stats)->num_warm_pointers,
+						(*stats)->num_clear_pointers,
+						(*stats)->warm_pointers_removed,
+						(*stats)->clear_pointers_removed,
+						(*stats)->pointers_cleared)));
+	}
+	else
+	{
+		/* Do bulk deletion */
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_tid_reaped, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to remove %d row versions",
+						RelationGetRelationName(indrel),
+						vacrelstats->num_dead_tuples),
+				 errdetail("%s.", pg_rusage_show(&ru0))));
+	}
 
-	ereport(elevel,
-			(errmsg("scanned index \"%s\" to remove %d row versions",
-					RelationGetRelationName(indrel),
-					vacrelstats->num_dead_tuples),
-			 errdetail("%s.", pg_rusage_show(&ru0))));
 }
 
 /*
@@ -1992,9 +2332,11 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 
 	if (vacrelstats->hasindex)
 	{
-		maxtuples = (vac_work_mem * 1024L) / sizeof(ItemPointerData);
+		maxtuples = (vac_work_mem * 1024L) / (sizeof(ItemPointerData) +
+				sizeof(LVWarmChain));
 		maxtuples = Min(maxtuples, INT_MAX);
-		maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));
+		maxtuples = Min(maxtuples, MaxAllocSize / (sizeof(ItemPointerData) +
+					sizeof(LVWarmChain)));
 
 		/* curious coding here to ensure the multiplication can't overflow */
 		if ((BlockNumber) (maxtuples / LAZY_ALLOC_TUPLES) > relblocks)
@@ -2012,6 +2354,57 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 	vacrelstats->max_dead_tuples = (int) maxtuples;
 	vacrelstats->dead_tuples = (ItemPointer)
 		palloc(maxtuples * sizeof(ItemPointerData));
+
+	/*
+	 * XXX Cheat for now and allocate a same-sized array for tracking warm
+	 * chains. maxtuples must already have been adjusted above to ensure we
+	 * don't exceed vac_work_mem.
+	 */
+	vacrelstats->num_warm_chains = 0;
+	vacrelstats->max_warm_chains = (int) maxtuples;
+	vacrelstats->warm_chains = (LVWarmChain *)
+		palloc0(maxtuples * sizeof(LVWarmChain));
+
+}
+
+/*
+ * lazy_record_clear_chain - remember one CLEAR chain
+ */
+static void
+lazy_record_clear_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr)
+{
+	/*
+	 * The array shouldn't overflow under normal behavior, but perhaps it
+	 * could if we are given a really small maintenance_work_mem. In that
+	 * case, just forget the last few tuples (we'll get 'em next time).
+	 */
+	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	{
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 0;
+		vacrelstats->num_warm_chains++;
+	}
+}
+
+/*
+ * lazy_record_warm_chain - remember one WARM chain
+ */
+static void
+lazy_record_warm_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr)
+{
+	/*
+	 * The array shouldn't overflow under normal behavior, but perhaps it
+	 * could if we are given a really small maintenance_work_mem. In that
+	 * case, just forget the last few tuples (we'll get 'em next time).
+	 */
+	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	{
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 1;
+		vacrelstats->num_warm_chains++;
+	}
 }
 
 /*
@@ -2042,8 +2435,8 @@ lazy_record_dead_tuple(LVRelStats *vacrelstats,
  *
  *		Assumes dead_tuples array is in sorted order.
  */
-static bool
-lazy_tid_reaped(ItemPointer itemptr, void *state)
+static IndexBulkDeleteCallbackResult
+lazy_tid_reaped(ItemPointer itemptr, bool is_warm, void *state)
 {
 	LVRelStats *vacrelstats = (LVRelStats *) state;
 	ItemPointer res;
@@ -2054,7 +2447,193 @@ lazy_tid_reaped(ItemPointer itemptr, void *state)
 								sizeof(ItemPointerData),
 								vac_cmp_itemptr);
 
-	return (res != NULL);
+	return (res != NULL) ? IBDCR_DELETE : IBDCR_KEEP;
+}
+
+/*
+ *	lazy_indexvac_phase1() -- run first pass of index vacuum
+ *
+ *		This has the right signature to be an IndexBulkDeleteCallback.
+ */
+static IndexBulkDeleteCallbackResult
+lazy_indexvac_phase1(ItemPointer itemptr, bool is_warm, void *state)
+{
+	LVRelStats		*vacrelstats = (LVRelStats *) state;
+	ItemPointer		res;
+	LVWarmChain	*chain;
+
+	res = (ItemPointer) bsearch((void *) itemptr,
+								(void *) vacrelstats->dead_tuples,
+								vacrelstats->num_dead_tuples,
+								sizeof(ItemPointerData),
+								vac_cmp_itemptr);
+
+	if (res != NULL)
+		return IBDCR_DELETE;
+
+	chain = (LVWarmChain *) bsearch((void *) itemptr,
+								(void *) vacrelstats->warm_chains,
+								vacrelstats->num_warm_chains,
+								sizeof(LVWarmChain),
+								vac_cmp_warm_chain);
+	if (chain != NULL)
+	{
+		if (is_warm)
+			chain->num_warm_pointers++;
+		else
+			chain->num_clear_pointers++;
+	}
+	return IBDCR_KEEP;
+}
+
+/*
+ *	lazy_indexvac_phase2() -- run second pass of index vacuum
+ *
+ *		This has the right signature to be an IndexBulkDeleteCallback.
+ */
+static IndexBulkDeleteCallbackResult
+lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state)
+{
+	LVRelStats		*vacrelstats = (LVRelStats *) state;
+	LVWarmChain	*chain;
+
+	chain = (LVWarmChain *) bsearch((void *) itemptr,
+								(void *) vacrelstats->warm_chains,
+								vacrelstats->num_warm_chains,
+								sizeof(LVWarmChain),
+								vac_cmp_warm_chain);
+
+	if (chain != NULL && (chain->keep_warm_chain != 1))
+	{
+		/*
+		 * At no point can we have more than 1 WARM pointer to any chain, nor
+		 * more than 2 CLEAR pointers.
+		 */
+		Assert(chain->num_warm_pointers <= 1);
+		Assert(chain->num_clear_pointers <= 2);
+
+		if (chain->is_postwarm_chain == 1)
+		{
+			if (is_warm)
+			{
+				/*
+				 * A WARM pointer, pointing to a WARM chain.
+				 *
+				 * Clear the warm pointer (and delete the CLEAR pointer). We
+				 * may have already seen the CLEAR pointer in the scan and
+				 * deleted that or we may see it later in the scan. It doesn't
+				 * matter if we fail at any point because we won't clear up
+				 * WARM bits on the heap tuples until we have dealt with the
+				 * index pointers cleanly.
+				 */
+				return IBDCR_CLEAR_WARM;
+			}
+			else
+			{
+				/*
+				 * CLEAR pointer to a WARM chain.
+				 */
+				if (chain->num_warm_pointers > 0)
+				{
+					/*
+					 * If there exists a WARM pointer to the chain, we can
+					 * delete the CLEAR pointer and clear the WARM bits on the
+					 * heap tuples.
+					 */
+					return IBDCR_DELETE;
+				}
+				else if (chain->num_clear_pointers == 1)
+				{
+					/*
+					 * If this is the only pointer to a WARM chain, we must
+					 * keep the CLEAR pointer.
+					 *
+					 * The presence of a WARM chain indicates that the WARM
+					 * update must have committed. But during the update
+					 * this index was probably not updated and hence it
+					 * contains just one, original CLEAR pointer to the chain.
+					 * We should be able to clear the WARM bits on heap tuples
+					 * unless we later find another index which prevents the
+					 * cleanup.
+					 */
+					return IBDCR_KEEP;
+				}
+			}
+		}
+		else
+		{
+			/*
+			 * This is a CLEAR chain.
+			 */
+			if (is_warm)
+			{
+				/*
+				 * A WARM pointer to a CLEAR chain.
+				 *
+				 * This can happen when a WARM update is aborted. Later the HOT
+				 * chain is pruned leaving behind only CLEAR tuples in the
+				 * chain. But the WARM index pointer inserted in the index
+				 * remains and it must now be deleted before we clear WARM bits
+				 * from the heap tuple.
+				 */
+				return IBDCR_DELETE;
+			}
+
+			/*
+			 * CLEAR pointer to a CLEAR chain.
+			 *
+			 * If this is the only surviving CLEAR pointer, keep it and clear
+			 * the WARM bits from the heap tuples.
+			 */
+			if (chain->num_clear_pointers == 1)
+				return IBDCR_KEEP;
+
+			/*
+			 * If there is more than one CLEAR pointer to this chain, we could
+			 * apply the recheck logic, kill the redundant CLEAR pointer and
+			 * convert the chain. But that's not done yet.
+			 */
+		}
+
+		/*
+		 * For everything else, we must keep the WARM bits and also keep the
+		 * index pointers.
+		 */
+		chain->keep_warm_chain = 1;
+		return IBDCR_KEEP;
+	}
+	return IBDCR_KEEP;
+}
+
+/*
+ * Comparator routine for use with qsort() and bsearch(). Similar to
+ * vac_cmp_itemptr, but the right-hand argument is an LVWarmChain pointer.
+ */
+static int
+vac_cmp_warm_chain(const void *left, const void *right)
+{
+	BlockNumber lblk,
+				rblk;
+	OffsetNumber loff,
+				roff;
+
+	lblk = ItemPointerGetBlockNumber((ItemPointer) left);
+	rblk = ItemPointerGetBlockNumber(&((LVWarmChain *) right)->chain_tid);
+
+	if (lblk < rblk)
+		return -1;
+	if (lblk > rblk)
+		return 1;
+
+	loff = ItemPointerGetOffsetNumber((ItemPointer) left);
+	roff = ItemPointerGetOffsetNumber(&((LVWarmChain *) right)->chain_tid);
+
+	if (loff < roff)
+		return -1;
+	if (loff > roff)
+		return 1;
+
+	return 0;
 }
 
 /*
@@ -2170,6 +2749,18 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
 						break;
 					}
 
+					/*
+					 * If this or any other tuple in the chain was ever WARM
+					 * updated, there could be multiple index entries pointing
+					 * to the root of this chain. We can't do index-only scans
+					 * for such tuples without verifying the index keys. So
+					 * mark the page as !all_visible.
+					 */
+					if (HeapTupleHeaderIsWarmUpdated(tuple.t_data))
+					{
+						all_visible = false;
+					}
+
 					/* Track newest xmin on page. */
 					if (TransactionIdFollows(xmin, *visibility_cutoff_xid))
 						*visibility_cutoff_xid = xmin;
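Outside the patch itself, the decision table that lazy_indexvac_phase2 implements can be illustrated as a small standalone sketch. This is not the patch's code: the enum and struct names (Decision, ChainInfo) are hypothetical stand-ins for the IBDCR_* results and LVWarmChain, and it only models the per-pointer outcome, not keep_warm_chain bookkeeping:

```c
#include <assert.h>
#include <stdbool.h>

/* Outcomes mirroring the patch's IBDCR_* results (names assumed). */
typedef enum { KEEP, DELETE, CLEAR_WARM } Decision;

/* Per-chain counts gathered during the first index pass. */
typedef struct
{
	bool is_postwarm_chain;		/* chain contains only post-WARM tuples */
	int  num_warm_pointers;		/* WARM index pointers seen (0 or 1) */
	int  num_clear_pointers;	/* CLEAR index pointers seen (0..2) */
} ChainInfo;

/*
 * Decide what to do with one index pointer during the second pass:
 * for a WARM chain, keep exactly one pointer and make sure it ends up
 * CLEAR; for a CLEAR chain, drop any leftover WARM pointer that an
 * aborted WARM update left behind.
 */
static Decision
decide(const ChainInfo *chain, bool pointer_is_warm)
{
	if (chain->is_postwarm_chain)
	{
		if (pointer_is_warm)
			return CLEAR_WARM;			/* becomes the surviving pointer */
		if (chain->num_warm_pointers > 0)
			return DELETE;				/* the WARM pointer survives instead */
		if (chain->num_clear_pointers == 1)
			return KEEP;				/* only pointer to the chain */
	}
	else
	{
		if (pointer_is_warm)
			return DELETE;				/* aborted WARM update residue */
		if (chain->num_clear_pointers == 1)
			return KEEP;
	}
	/* Two CLEAR pointers etc.: not handled yet, keep everything. */
	return KEEP;
}
```

For example, a WARM chain with one WARM and one CLEAR pointer keeps the WARM pointer (cleared) and deletes the CLEAR one, matching the comment block in lazy_vacuum_index.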
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index c3f1873..2143978 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -270,6 +270,8 @@ ExecCloseIndices(ResultRelInfo *resultRelInfo)
 List *
 ExecInsertIndexTuples(TupleTableSlot *slot,
 					  ItemPointer tupleid,
+					  ItemPointer root_tid,
+					  Bitmapset *modified_attrs,
 					  EState *estate,
 					  bool noDupErr,
 					  bool *specConflict,
@@ -324,6 +326,17 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 		if (!indexInfo->ii_ReadyForInserts)
 			continue;
 
+		/*
+		 * If modified_attrs is set, we only insert index entries for those
+		 * indexes whose columns have changed. All other indexes can use their
+		 * existing index pointers to look up the new tuple.
+		 */
+		if (modified_attrs)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
 		/* Check for partial index */
 		if (indexInfo->ii_Predicate != NIL)
 		{
@@ -387,10 +400,11 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 			index_insert(indexRelation, /* index relation */
 						 values,	/* array of index Datums */
 						 isnull,	/* null flags */
-						 tupleid,		/* tid of heap tuple */
+						 root_tid,		/* tid of heap or root tuple */
 						 heapRelation,	/* heap relation */
 						 checkUnique,	/* type of uniqueness check to do */
-						 indexInfo);	/* index AM may need this */
+						 indexInfo,	/* index AM may need this */
+						 (modified_attrs != NULL));	/* is it a WARM update? */
 
 		/*
 		 * If the index has an associated exclusion constraint, check that.
@@ -787,6 +801,9 @@ retry:
 		{
 			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
 				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
+			else
+				ItemPointerCopy(&tup->t_self, &ctid_wait);
+
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index f20d728..747e4ce 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -399,6 +399,8 @@ ExecSimpleRelationInsert(EState *estate, TupleTableSlot *slot)
 
 		if (resultRelInfo->ri_NumIndices > 0)
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &(tuple->t_self),
+												   NULL,
 												   estate, false, NULL,
 												   NIL);
 
@@ -445,6 +447,8 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 	if (!skip_tuple)
 	{
 		List	   *recheckIndexes = NIL;
+		bool		warm_update;
+		Bitmapset  *modified_attrs;
 
 		/* Check the constraints of the tuple */
 		if (rel->rd_att->constr)
@@ -455,13 +459,35 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 
 		/* OK, update the tuple and index entries for it */
 		simple_heap_update(rel, &searchslot->tts_tuple->t_self,
-						   slot->tts_tuple);
+						   slot->tts_tuple, &modified_attrs, &warm_update);
 
 		if (resultRelInfo->ri_NumIndices > 0 &&
-			!HeapTupleIsHeapOnly(slot->tts_tuple))
+			(!HeapTupleIsHeapOnly(slot->tts_tuple) || warm_update))
+		{
+			ItemPointerData root_tid;
+
+			/*
+			 * If we did a WARM update then we must index the tuple using its
+			 * root line pointer and not the tuple TID itself.
+			 */
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self,
+						&root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
+
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL,
 												   NIL);
+		}
 
 		/* AFTER ROW UPDATE Triggers */
 		ExecARUpdateTriggers(estate, resultRelInfo,
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index 19eb175..ef3653c 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -39,6 +39,7 @@
 
 #include "access/relscan.h"
 #include "access/transam.h"
+#include "access/valid.h"
 #include "executor/execdebug.h"
 #include "executor/nodeBitmapHeapscan.h"
 #include "pgstat.h"
@@ -395,11 +396,27 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 			OffsetNumber offnum = tbmres->offsets[curslot];
 			ItemPointerData tid;
 			HeapTupleData heapTuple;
+			bool recheck = false;
 
 			ItemPointerSet(&tid, page, offnum);
 			if (heap_hot_search_buffer(&tid, scan->rs_rd, buffer, snapshot,
-									   &heapTuple, NULL, true))
-				scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+									   &heapTuple, NULL, true, &recheck))
+			{
+				bool valid = true;
+
+				if (scan->rs_key)
+					HeapKeyTest(&heapTuple, RelationGetDescr(scan->rs_rd),
+							scan->rs_nkeys, scan->rs_key, valid);
+				if (valid)
+					scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+
+				/*
+				 * If the heap tuple needs a recheck because of a WARM update,
+				 * treat this as a lossy page so the quals are rechecked.
+				 */
+				if (recheck)
+					tbmres->recheck = true;
+			}
 		}
 	}
 	else
diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c
index 5afd02e..6e48c2e 100644
--- a/src/backend/executor/nodeIndexscan.c
+++ b/src/backend/executor/nodeIndexscan.c
@@ -142,8 +142,8 @@ IndexNext(IndexScanState *node)
 					   false);	/* don't pfree */
 
 		/*
-		 * If the index was lossy, we have to recheck the index quals using
-		 * the fetched tuple.
+		 * If the index was lossy or the tuple was WARM, we have to recheck
+		 * the index quals using the fetched tuple.
 		 */
 		if (scandesc->xs_recheck)
 		{
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index 0b524e0..2ad4a2c 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -513,6 +513,7 @@ ExecInsert(ModifyTableState *mtstate,
 
 			/* insert index entries for tuple */
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												 &(tuple->t_self), NULL,
 												 estate, true, &specConflict,
 												   arbiterIndexes);
 
@@ -559,6 +560,7 @@ ExecInsert(ModifyTableState *mtstate,
 			/* insert index entries for tuple */
 			if (resultRelInfo->ri_NumIndices > 0)
 				recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+													   &(tuple->t_self), NULL,
 													   estate, false, NULL,
 													   arbiterIndexes);
 		}
@@ -892,6 +894,9 @@ ExecUpdate(ItemPointer tupleid,
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
 	List	   *recheckIndexes = NIL;
+	Bitmapset  *modified_attrs = NULL;
+	ItemPointerData	root_tid;
+	bool		warm_update;
 
 	/*
 	 * abort the operation if not running transactions
@@ -1008,7 +1013,7 @@ lreplace:;
 							 estate->es_output_cid,
 							 estate->es_crosscheck_snapshot,
 							 true /* wait for commit */ ,
-							 &hufd, &lockmode);
+							 &hufd, &lockmode, &modified_attrs, &warm_update);
 		switch (result)
 		{
 			case HeapTupleSelfUpdated:
@@ -1095,10 +1100,28 @@ lreplace:;
 		 * the t_self field.
 		 *
 		 * If it's a HOT update, we mustn't insert new index entries.
+		 *
+		 * If it's a WARM update, then we must insert new entries with TID
+		 * pointing to the root of the WARM chain.
 		 */
-		if (resultRelInfo->ri_NumIndices > 0 && !HeapTupleIsHeapOnly(tuple))
+		if (resultRelInfo->ri_NumIndices > 0 &&
+			(!HeapTupleIsHeapOnly(tuple) || warm_update))
+		{
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self, &root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL, NIL);
+		}
 	}
 
 	if (canSetTag)
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index 56a8bf2..52fe4ba 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -1888,7 +1888,7 @@ pgstat_count_heap_insert(Relation rel, PgStat_Counter n)
  * pgstat_count_heap_update - count a tuple update
  */
 void
-pgstat_count_heap_update(Relation rel, bool hot)
+pgstat_count_heap_update(Relation rel, bool hot, bool warm)
 {
 	PgStat_TableStatus *pgstat_info = rel->pgstat_info;
 
@@ -1906,6 +1906,8 @@ pgstat_count_heap_update(Relation rel, bool hot)
 		/* t_tuples_hot_updated is nontransactional, so just advance it */
 		if (hot)
 			pgstat_info->t_counts.t_tuples_hot_updated++;
+		else if (warm)
+			pgstat_info->t_counts.t_tuples_warm_updated++;
 	}
 }
 
@@ -4521,6 +4523,7 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->tuples_updated = 0;
 		result->tuples_deleted = 0;
 		result->tuples_hot_updated = 0;
+		result->tuples_warm_updated = 0;
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
 		result->changes_since_analyze = 0;
@@ -5630,6 +5633,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated = tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted = tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated = tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated = tabmsg->t_counts.t_tuples_warm_updated;
 			tabentry->n_live_tuples = tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples = tabmsg->t_counts.t_delta_dead_tuples;
 			tabentry->changes_since_analyze = tabmsg->t_counts.t_changed_tuples;
@@ -5657,6 +5661,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated += tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted += tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated += tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated += tabmsg->t_counts.t_tuples_warm_updated;
 			/* If table was truncated, first reset the live/dead counters */
 			if (tabmsg->t_counts.t_truncated)
 			{
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index 5c13d26..7a9b48a 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -347,7 +347,7 @@ DecodeStandbyOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 static void
 DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 {
-	uint8		info = XLogRecGetInfo(buf->record) & XLOG_HEAP_OPMASK;
+	uint8		info = XLogRecGetInfo(buf->record) & XLOG_HEAP2_OPMASK;
 	TransactionId xid = XLogRecGetXid(buf->record);
 	SnapBuild  *builder = ctx->snapshot_builder;
 
@@ -359,10 +359,6 @@ DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 
 	switch (info)
 	{
-		case XLOG_HEAP2_MULTI_INSERT:
-			if (SnapBuildProcessChange(builder, xid, buf->origptr))
-				DecodeMultiInsert(ctx, buf);
-			break;
 		case XLOG_HEAP2_NEW_CID:
 			{
 				xl_heap_new_cid *xlrec;
@@ -390,6 +386,7 @@ DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 		case XLOG_HEAP2_CLEANUP_INFO:
 		case XLOG_HEAP2_VISIBLE:
 		case XLOG_HEAP2_LOCK_UPDATED:
+		case XLOG_HEAP2_WARMCLEAR:
 			break;
 		default:
 			elog(ERROR, "unexpected RM_HEAP2_ID record type: %u", info);
@@ -418,6 +415,10 @@ DecodeHeapOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 			if (SnapBuildProcessChange(builder, xid, buf->origptr))
 				DecodeInsert(ctx, buf);
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			if (SnapBuildProcessChange(builder, xid, buf->origptr))
+				DecodeMultiInsert(ctx, buf);
+			break;
 
 			/*
 			 * Treat HOT update as normal updates. There is no useful
@@ -809,7 +810,7 @@ DecodeDelete(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 }
 
 /*
- * Decode XLOG_HEAP2_MULTI_INSERT_insert record into multiple tuplebufs.
+ * Decode XLOG_HEAP_MULTI_INSERT record into multiple tuplebufs.
  *
  * Currently MULTI_INSERT will always contain the full tuples.
  */
diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index fdf045a..8d23e92 100644
--- a/src/backend/storage/page/bufpage.c
+++ b/src/backend/storage/page/bufpage.c
@@ -1151,6 +1151,29 @@ PageIndexTupleOverwrite(Page page, OffsetNumber offnum,
 	return true;
 }
 
+/*
+ * PageIndexClearWarmTuples
+ *
+ * Clear the given WARM pointers by resetting the flags stored in the TID
+ * field. We assume the TID flags carry nothing other than the WARM
+ * information, so clearing all flag bits is safe. If that ever changes, this
+ * routine must be updated as well.
+ */
+void
+PageIndexClearWarmTuples(Page page, OffsetNumber *clearitemnos,
+						 uint16 nclearitems)
+{
+	int			i;
+	ItemId		itemid;
+	IndexTuple	itup;
+
+	for (i = 0; i < nclearitems; i++)
+	{
+		itemid = PageGetItemId(page, clearitemnos[i]);
+		itup = (IndexTuple) PageGetItem(page, itemid);
+		ItemPointerClearFlags(&itup->t_tid);
+	}
+}
 
 /*
  * Set checksum for a page in shared buffers.
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index e0cae1b..227a87d 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -147,6 +147,22 @@ pg_stat_get_tuples_hot_updated(PG_FUNCTION_ARGS)
 
 
 Datum
+pg_stat_get_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_StatTabEntry *tabentry;
+
+	if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+
+Datum
 pg_stat_get_live_tuples(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
@@ -1674,6 +1690,21 @@ pg_stat_get_xact_tuples_hot_updated(PG_FUNCTION_ARGS)
 }
 
 Datum
+pg_stat_get_xact_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_TableStatus *tabentry;
+
+	if ((tabentry = find_tabstat_entry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->t_counts.t_tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+Datum
 pg_stat_get_xact_blocks_fetched(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index bc22098..c7266d7 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -2339,6 +2339,7 @@ RelationDestroyRelation(Relation relation, bool remember_tupdesc)
 	list_free_deep(relation->rd_fkeylist);
 	list_free(relation->rd_indexlist);
 	bms_free(relation->rd_indexattr);
+	bms_free(relation->rd_exprindexattr);
 	bms_free(relation->rd_keyattr);
 	bms_free(relation->rd_pkattr);
 	bms_free(relation->rd_idattr);
@@ -4353,6 +4354,13 @@ RelationGetIndexList(Relation relation)
 		return list_copy(relation->rd_indexlist);
 
 	/*
+	 * If the index list was invalidated, we had better also invalidate the
+	 * index attribute bitmaps (which should automatically invalidate derived
+	 * attribute sets such as the primary key and replica identity).
+	 */
+	relation->rd_indexattr = NULL;
+
+	/*
 	 * We build the list we intend to return (in the caller's context) while
 	 * doing the scan.  After successfully completing the scan, we copy that
 	 * list into the relcache entry.  This avoids cache-context memory leakage
@@ -4836,15 +4844,20 @@ Bitmapset *
 RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 {
 	Bitmapset  *indexattrs;		/* indexed columns */
+	Bitmapset  *exprindexattrs;	/* indexed columns in expression/predicate
+									 indexes */
 	Bitmapset  *uindexattrs;	/* columns in unique indexes */
 	Bitmapset  *pkindexattrs;	/* columns in the primary index */
 	Bitmapset  *idindexattrs;	/* columns in the replica identity */
+	Bitmapset  *indxnotreadyattrs;	/* columns in not ready indexes */
 	List	   *indexoidlist;
 	List	   *newindexoidlist;
+	List	   *indexattrsList;
 	Oid			relpkindex;
 	Oid			relreplindex;
 	ListCell   *l;
 	MemoryContext oldcxt;
+	bool		supportswarm = true;	/* true if the table can be WARM updated */
 
 	/* Quick exit if we already computed the result. */
 	if (relation->rd_indexattr != NULL)
@@ -4859,6 +4872,10 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				return bms_copy(relation->rd_pkattr);
 			case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 				return bms_copy(relation->rd_idattr);
+			case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+				return bms_copy(relation->rd_exprindexattr);
+			case INDEX_ATTR_BITMAP_NOTREADY:
+				return bms_copy(relation->rd_indxnotreadyattr);
 			default:
 				elog(ERROR, "unknown attrKind %u", attrKind);
 		}
@@ -4899,9 +4916,12 @@ restart:
 	 * won't be returned at all by RelationGetIndexList.
 	 */
 	indexattrs = NULL;
+	exprindexattrs = NULL;
 	uindexattrs = NULL;
 	pkindexattrs = NULL;
 	idindexattrs = NULL;
+	indxnotreadyattrs = NULL;
+	indexattrsList = NIL;
 	foreach(l, indexoidlist)
 	{
 		Oid			indexOid = lfirst_oid(l);
@@ -4911,6 +4931,7 @@ restart:
 		bool		isKey;		/* candidate key */
 		bool		isPK;		/* primary key */
 		bool		isIDKey;	/* replica identity index */
+		Bitmapset	*thisindexattrs = NULL;
 
 		indexDesc = index_open(indexOid, AccessShareLock);
 
@@ -4935,9 +4956,16 @@ restart:
 
 			if (attrnum != 0)
 			{
+				thisindexattrs = bms_add_member(thisindexattrs,
+							   attrnum - FirstLowInvalidHeapAttributeNumber);
+
 				indexattrs = bms_add_member(indexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
 
+				if (!indexInfo->ii_ReadyForInserts)
+					indxnotreadyattrs = bms_add_member(indxnotreadyattrs,
+							   attrnum - FirstLowInvalidHeapAttributeNumber);
+
 				if (isKey)
 					uindexattrs = bms_add_member(uindexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
@@ -4953,10 +4981,31 @@ restart:
 		}
 
 		/* Collect all attributes used in expressions, too */
-		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &exprindexattrs);
 
 		/* Collect all attributes in the index predicate, too */
-		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &exprindexattrs);
+
+		/*
+		 * indexattrs should include attributes referenced in index expressions
+		 * and predicates too.
+		 */
+		indexattrs = bms_add_members(indexattrs, exprindexattrs);
+		thisindexattrs = bms_add_members(thisindexattrs, exprindexattrs);
+
+		if (!indexInfo->ii_ReadyForInserts)
+			indxnotreadyattrs = bms_add_members(indxnotreadyattrs,
+					exprindexattrs);
+
+		/*
+		 * Check if the index AM provides an amrecheck method. If it does not,
+		 * the index cannot support WARM, so completely disable WARM updates
+		 * on such tables.
+		 */
+		if (!indexDesc->rd_amroutine->amrecheck)
+			supportswarm = false;
+
+		indexattrsList = lappend(indexattrsList, thisindexattrs);
 
 		index_close(indexDesc, AccessShareLock);
 	}
@@ -4985,19 +5034,28 @@ restart:
 		bms_free(pkindexattrs);
 		bms_free(idindexattrs);
 		bms_free(indexattrs);
-
+		list_free_deep(indexattrsList);
 		goto restart;
 	}
 
+	/* Remember if the table can do WARM updates */
+	relation->rd_supportswarm = supportswarm;
+
 	/* Don't leak the old values of these bitmaps, if any */
 	bms_free(relation->rd_indexattr);
 	relation->rd_indexattr = NULL;
+	bms_free(relation->rd_exprindexattr);
+	relation->rd_exprindexattr = NULL;
 	bms_free(relation->rd_keyattr);
 	relation->rd_keyattr = NULL;
 	bms_free(relation->rd_pkattr);
 	relation->rd_pkattr = NULL;
 	bms_free(relation->rd_idattr);
 	relation->rd_idattr = NULL;
+	bms_free(relation->rd_indxnotreadyattr);
+	relation->rd_indxnotreadyattr = NULL;
+	list_free_deep(relation->rd_indexattrsList);
+	relation->rd_indexattrsList = NIL;
 
 	/*
 	 * Now save copies of the bitmaps in the relcache entry.  We intentionally
@@ -5010,7 +5068,21 @@ restart:
 	relation->rd_keyattr = bms_copy(uindexattrs);
 	relation->rd_pkattr = bms_copy(pkindexattrs);
 	relation->rd_idattr = bms_copy(idindexattrs);
-	relation->rd_indexattr = bms_copy(indexattrs);
+	relation->rd_exprindexattr = bms_copy(exprindexattrs);
+	relation->rd_indexattr = bms_copy(bms_union(indexattrs, exprindexattrs));
+	relation->rd_indxnotreadyattr = bms_copy(indxnotreadyattrs);
+
+	/*
+	 * Create a deep copy of the list, copying each bitmap in the
+	 * CurrentMemoryContext.
+	 */
+	foreach(l, indexattrsList)
+	{
+		Bitmapset *b = (Bitmapset *) lfirst(l);
+		relation->rd_indexattrsList = lappend(relation->rd_indexattrsList,
+				bms_copy(b));
+	}
+
 	MemoryContextSwitchTo(oldcxt);
 
 	/* We return our original working copy for caller to play with */
@@ -5024,6 +5096,10 @@ restart:
 			return bms_copy(relation->rd_pkattr);
 		case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 			return idindexattrs;
+		case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+			return exprindexattrs;
+		case INDEX_ATTR_BITMAP_NOTREADY:
+			return indxnotreadyattrs;
 		default:
 			elog(ERROR, "unknown attrKind %u", attrKind);
 			return NULL;
@@ -5031,6 +5107,34 @@ restart:
 }
 
 /*
+ * Get a list of bitmaps, where each bitmap contains the set of attributes
+ * used by one index.
+ *
+ * The actual information is computed in RelationGetIndexAttrBitmap, but since
+ * the only current consumer calls this function immediately after calling
+ * RelationGetIndexAttrBitmap, we should be fine. We don't expect any relcache
+ * invalidation to arrive between the two calls, and hence don't expect the
+ * cached information to change underneath us.
+ */
+List *
+RelationGetIndexAttrList(Relation relation)
+{
+	ListCell   *l;
+	List	   *indexattrsList = NIL;
+
+	/*
+	 * Create a deep copy of the list by copying bitmaps in the
+	 * CurrentMemoryContext.
+	 */
+	foreach(l, relation->rd_indexattrsList)
+	{
+		Bitmapset *b = (Bitmapset *) lfirst(l);
+		indexattrsList = lappend(indexattrsList, bms_copy(b));
+	}
+	return indexattrsList;
+}
+
+/*
  * RelationGetExclusionInfo -- get info about index's exclusion constraint
  *
  * This should be called only for an index that is known to have an
@@ -5636,6 +5740,7 @@ load_relcache_init_file(bool shared)
 		rel->rd_keyattr = NULL;
 		rel->rd_pkattr = NULL;
 		rel->rd_idattr = NULL;
+		rel->rd_indxnotreadyattr = NULL;
 		rel->rd_pubactions = NULL;
 		rel->rd_statvalid = false;
 		rel->rd_statlist = NIL;
diff --git a/src/backend/utils/time/combocid.c b/src/backend/utils/time/combocid.c
index baff998..6a2e2f2 100644
--- a/src/backend/utils/time/combocid.c
+++ b/src/backend/utils/time/combocid.c
@@ -106,7 +106,7 @@ HeapTupleHeaderGetCmin(HeapTupleHeader tup)
 {
 	CommandId	cid = HeapTupleHeaderGetRawCommandId(tup);
 
-	Assert(!(tup->t_infomask & HEAP_MOVED));
+	Assert(!(HeapTupleHeaderIsMoved(tup)));
 	Assert(TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXmin(tup)));
 
 	if (tup->t_infomask & HEAP_COMBOCID)
@@ -120,7 +120,7 @@ HeapTupleHeaderGetCmax(HeapTupleHeader tup)
 {
 	CommandId	cid = HeapTupleHeaderGetRawCommandId(tup);
 
-	Assert(!(tup->t_infomask & HEAP_MOVED));
+	Assert(!(HeapTupleHeaderIsMoved(tup)));
 
 	/*
 	 * Because GetUpdateXid() performs memory allocations if xmax is a
diff --git a/src/backend/utils/time/tqual.c b/src/backend/utils/time/tqual.c
index 519f3b6..e54d0df 100644
--- a/src/backend/utils/time/tqual.c
+++ b/src/backend/utils/time/tqual.c
@@ -186,7 +186,7 @@ HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer)
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -205,7 +205,7 @@ HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer)
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -377,7 +377,7 @@ HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -396,7 +396,7 @@ HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -471,7 +471,7 @@ HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid,
 			return HeapTupleInvisible;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -490,7 +490,7 @@ HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -753,7 +753,7 @@ HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -772,7 +772,7 @@ HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -974,7 +974,7 @@ HeapTupleSatisfiesMVCC(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -993,7 +993,7 @@ HeapTupleSatisfiesMVCC(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -1180,7 +1180,7 @@ HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin,
 		if (HeapTupleHeaderXminInvalid(tuple))
 			return HEAPTUPLE_DEAD;
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_OFF)
+		else if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -1198,7 +1198,7 @@ HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin,
 						InvalidTransactionId);
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
diff --git a/src/include/access/amapi.h b/src/include/access/amapi.h
index f919cf8..8b7af1e 100644
--- a/src/include/access/amapi.h
+++ b/src/include/access/amapi.h
@@ -13,6 +13,7 @@
 #define AMAPI_H
 
 #include "access/genam.h"
+#include "access/itup.h"
 
 /*
  * We don't wish to include planner header files here, since most of an index
@@ -74,6 +75,14 @@ typedef bool (*aminsert_function) (Relation indexRelation,
 											   Relation heapRelation,
 											   IndexUniqueCheck checkUnique,
 											   struct IndexInfo *indexInfo);
+/* insert this WARM tuple */
+typedef bool (*amwarminsert_function) (Relation indexRelation,
+											   Datum *values,
+											   bool *isnull,
+											   ItemPointer heap_tid,
+											   Relation heapRelation,
+											   IndexUniqueCheck checkUnique,
+											   struct IndexInfo *indexInfo);
 
 /* bulk delete */
 typedef IndexBulkDeleteResult *(*ambulkdelete_function) (IndexVacuumInfo *info,
@@ -152,6 +161,11 @@ typedef void (*aminitparallelscan_function) (void *target);
 /* (re)start parallel index scan */
 typedef void (*amparallelrescan_function) (IndexScanDesc scan);
 
+/* recheck index tuple and heap tuple match */
+typedef bool (*amrecheck_function) (Relation indexRel,
+		struct IndexInfo *indexInfo, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+
 /*
  * API struct for an index AM.  Note this must be stored in a single palloc'd
  * chunk of memory.
@@ -198,6 +212,7 @@ typedef struct IndexAmRoutine
 	ambuild_function ambuild;
 	ambuildempty_function ambuildempty;
 	aminsert_function aminsert;
+	amwarminsert_function amwarminsert;
 	ambulkdelete_function ambulkdelete;
 	amvacuumcleanup_function amvacuumcleanup;
 	amcanreturn_function amcanreturn;	/* can be NULL */
@@ -217,6 +232,9 @@ typedef struct IndexAmRoutine
 	amestimateparallelscan_function amestimateparallelscan;		/* can be NULL */
 	aminitparallelscan_function aminitparallelscan;		/* can be NULL */
 	amparallelrescan_function amparallelrescan; /* can be NULL */
+
+	/* interface function to support WARM */
+	amrecheck_function amrecheck;		/* can be NULL */
 } IndexAmRoutine;
 
 
diff --git a/src/include/access/genam.h b/src/include/access/genam.h
index f467b18..965be45 100644
--- a/src/include/access/genam.h
+++ b/src/include/access/genam.h
@@ -75,12 +75,29 @@ typedef struct IndexBulkDeleteResult
 	bool		estimated_count;	/* num_index_tuples is an estimate */
 	double		num_index_tuples;		/* tuples remaining */
 	double		tuples_removed; /* # removed during vacuum operation */
+	double		num_warm_pointers;	/* # WARM pointers found */
+	double		num_clear_pointers;	/* # CLEAR pointers found */
+	double		pointers_cleared;	/* # WARM pointers cleared */
+	double		warm_pointers_removed;	/* # WARM pointers removed */
+	double		clear_pointers_removed;	/* # CLEAR pointers removed */
 	BlockNumber pages_deleted;	/* # unused pages in index */
 	BlockNumber pages_free;		/* # pages available for reuse */
 } IndexBulkDeleteResult;
 
+/*
+ * IndexBulkDeleteCallback should return one of the following
+ */
+typedef enum IndexBulkDeleteCallbackResult
+{
+	IBDCR_KEEP,			/* index tuple should be preserved */
+	IBDCR_DELETE,		/* index tuple should be deleted */
+	IBDCR_CLEAR_WARM	/* index tuple should be cleared of WARM bit */
+} IndexBulkDeleteCallbackResult;
+
 /* Typedef for callback function to determine if a tuple is bulk-deletable */
-typedef bool (*IndexBulkDeleteCallback) (ItemPointer itemptr, void *state);
+typedef IndexBulkDeleteCallbackResult (*IndexBulkDeleteCallback) (
+										 ItemPointer itemptr,
+										 bool is_warm, void *state);
 
 /* struct definitions appear in relscan.h */
 typedef struct IndexScanDescData *IndexScanDesc;
@@ -135,7 +152,8 @@ extern bool index_insert(Relation indexRelation,
 			 ItemPointer heap_t_ctid,
 			 Relation heapRelation,
 			 IndexUniqueCheck checkUnique,
-			 struct IndexInfo *indexInfo);
+			 struct IndexInfo *indexInfo,
+			 bool warm_update);
 
 extern IndexScanDesc index_beginscan(Relation heapRelation,
 				Relation indexRelation,
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 5540e12..2217af9 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -72,6 +72,20 @@ typedef struct HeapUpdateFailureData
 	CommandId	cmax;
 } HeapUpdateFailureData;
 
+typedef int HeapCheckWarmChainStatus;
+
+#define HCWC_CLEAR_TUPLE		0x0001
+#define	HCWC_WARM_TUPLE			0x0002
+#define HCWC_WARM_UPDATED_TUPLE	0x0004
+
+#define HCWC_IS_MIXED(status) \
+	(((status) & (HCWC_CLEAR_TUPLE | HCWC_WARM_TUPLE)) != 0)
+#define HCWC_IS_ALL_WARM(status) \
+	(((status) & HCWC_CLEAR_TUPLE) == 0)
+#define HCWC_IS_ALL_CLEAR(status) \
+	(((status) & HCWC_WARM_TUPLE) == 0)
+#define HCWC_IS_WARM_UPDATED(status) \
+	(((status) & HCWC_WARM_UPDATED_TUPLE) != 0)
 
 /* ----------------
  *		function prototypes for heap access method
@@ -137,9 +151,10 @@ extern bool heap_fetch(Relation relation, Snapshot snapshot,
 		   Relation stats_relation);
 extern bool heap_hot_search_buffer(ItemPointer tid, Relation relation,
 					   Buffer buffer, Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call);
+					   bool *all_dead, bool first_call, bool *recheck);
 extern bool heap_hot_search(ItemPointer tid, Relation relation,
-				Snapshot snapshot, bool *all_dead);
+				Snapshot snapshot, bool *all_dead,
+				bool *recheck, Buffer *buffer, HeapTuple heapTuple);
 
 extern void heap_get_latest_tid(Relation relation, Snapshot snapshot,
 					ItemPointer tid);
@@ -161,7 +176,8 @@ extern void heap_abort_speculative(Relation relation, HeapTuple tuple);
 extern HTSU_Result heap_update(Relation relation, ItemPointer otid,
 			HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode);
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update);
 extern HTSU_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
 				CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
 				bool follow_update,
@@ -176,10 +192,16 @@ extern bool heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple);
 extern Oid	simple_heap_insert(Relation relation, HeapTuple tup);
 extern void simple_heap_delete(Relation relation, ItemPointer tid);
 extern void simple_heap_update(Relation relation, ItemPointer otid,
-				   HeapTuple tup);
+				   HeapTuple tup,
+				   Bitmapset **modified_attrs,
+				   bool *warm_update);
 
 extern void heap_sync(Relation relation);
 extern void heap_update_snapshot(HeapScanDesc scan, Snapshot snapshot);
+extern HeapCheckWarmChainStatus heap_check_warm_chain(Page dp,
+				   ItemPointer tid, bool stop_at_warm);
+extern int heap_clear_warm_chain(Page dp, ItemPointer tid,
+				   OffsetNumber *cleared_offnums);
 
 /* in heap/pruneheap.c */
 extern void heap_page_prune_opt(Relation relation, Buffer buffer);
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index e6019d5..66fd0ea 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -32,7 +32,7 @@
 #define XLOG_HEAP_INSERT		0x00
 #define XLOG_HEAP_DELETE		0x10
 #define XLOG_HEAP_UPDATE		0x20
-/* 0x030 is free, was XLOG_HEAP_MOVE */
+#define XLOG_HEAP_MULTI_INSERT	0x30
 #define XLOG_HEAP_HOT_UPDATE	0x40
 #define XLOG_HEAP_CONFIRM		0x50
 #define XLOG_HEAP_LOCK			0x60
@@ -47,18 +47,23 @@
 /*
  * We ran out of opcodes, so heapam.c now has a second RmgrId.  These opcodes
  * are associated with RM_HEAP2_ID, but are not logically different from
- * the ones above associated with RM_HEAP_ID.  XLOG_HEAP_OPMASK applies to
- * these, too.
+ * the ones above associated with RM_HEAP_ID.
+ *
+ * In PG 10, we moved XLOG_HEAP2_MULTI_INSERT to RM_HEAP_ID. That allows us
+ * to use the 0x80 bit in RM_HEAP2_ID, potentially making room for another 8
+ * opcodes in RM_HEAP2_ID.
  */
 #define XLOG_HEAP2_REWRITE		0x00
 #define XLOG_HEAP2_CLEAN		0x10
 #define XLOG_HEAP2_FREEZE_PAGE	0x20
 #define XLOG_HEAP2_CLEANUP_INFO 0x30
 #define XLOG_HEAP2_VISIBLE		0x40
-#define XLOG_HEAP2_MULTI_INSERT 0x50
+#define XLOG_HEAP2_WARMCLEAR	0x50
 #define XLOG_HEAP2_LOCK_UPDATED 0x60
 #define XLOG_HEAP2_NEW_CID		0x70
 
+#define XLOG_HEAP2_OPMASK		0x70
+
 /*
  * xl_heap_insert/xl_heap_multi_insert flag values, 8 bits are available.
  */
@@ -80,6 +85,7 @@
 #define XLH_UPDATE_CONTAINS_NEW_TUPLE			(1<<4)
 #define XLH_UPDATE_PREFIX_FROM_OLD				(1<<5)
 #define XLH_UPDATE_SUFFIX_FROM_OLD				(1<<6)
+#define XLH_UPDATE_WARM_UPDATE					(1<<7)
 
 /* convenience macro for checking whether any form of old tuple was logged */
 #define XLH_UPDATE_CONTAINS_OLD						\
@@ -225,6 +231,14 @@ typedef struct xl_heap_clean
 
 #define SizeOfHeapClean (offsetof(xl_heap_clean, ndead) + sizeof(uint16))
 
+typedef struct xl_heap_warmclear
+{
+	uint16		ncleared;
+	/* OFFSET NUMBERS are in the block reference 0 */
+} xl_heap_warmclear;
+
+#define SizeOfHeapWarmClear (offsetof(xl_heap_warmclear, ncleared) + sizeof(uint16))
+
 /*
  * Cleanup_info is required in some cases during a lazy VACUUM.
  * Used for reporting the results of HeapTupleHeaderAdvanceLatestRemovedXid()
@@ -388,6 +402,8 @@ extern XLogRecPtr log_heap_clean(Relation reln, Buffer buffer,
 			   OffsetNumber *nowdead, int ndead,
 			   OffsetNumber *nowunused, int nunused,
 			   TransactionId latestRemovedXid);
+extern XLogRecPtr log_heap_warmclear(Relation reln, Buffer buffer,
+			   OffsetNumber *cleared, int ncleared);
 extern XLogRecPtr log_heap_freeze(Relation reln, Buffer buffer,
 				TransactionId cutoff_xid, xl_heap_freeze_tuple *tuples,
 				int ntuples);
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 4d614b7..bcefba6 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -201,6 +201,21 @@ struct HeapTupleHeaderData
 										 * upgrade support */
 #define HEAP_MOVED (HEAP_MOVED_OFF | HEAP_MOVED_IN)
 
+/*
+ * A WARM chain usually consists of two parts, each of which is a HOT chain
+ * in itself, i.e. all indexed columns have the same values within that part;
+ * a WARM update is what separates the two parts. We need a mechanism to
+ * identify which part a tuple belongs to. HeapTupleHeaderIsWarmUpdated()
+ * alone is not enough, because during a WARM update both the old and the new
+ * tuple are marked as WARM-updated.
+ *
+ * We need another infomask bit for this, so we reuse the bit that was
+ * earlier used by old-style VACUUM FULL. This is safe because the
+ * HEAP_WARM_TUPLE flag is always set along with HEAP_WARM_UPDATED. So if
+ * both HEAP_WARM_TUPLE and HEAP_WARM_UPDATED are set, we know the tuple
+ * belongs to the second part of the WARM chain.
+ */
+#define HEAP_WARM_TUPLE			0x4000
 #define HEAP_XACT_MASK			0xFFF0	/* visibility-related bits */
 
 /*
@@ -260,7 +275,11 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x0800 are available */
+#define HEAP_WARM_UPDATED		0x0800	/*
+										 * This or a prior version of this
+										 * tuple in the current HOT chain was
+										 * once WARM updated
+										 */
 #define HEAP_LATEST_TUPLE		0x1000	/*
 										 * This is the last tuple in chain and
 										 * ip_posid points to the root line
@@ -271,7 +290,7 @@ struct HeapTupleHeaderData
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF800	/* visibility-related bits */
 
 
 /*
@@ -396,7 +415,7 @@ struct HeapTupleHeaderData
 /* SetCmin is reasonably simple since we never need a combo CID */
 #define HeapTupleHeaderSetCmin(tup, cid) \
 do { \
-	Assert(!((tup)->t_infomask & HEAP_MOVED)); \
+	Assert(!HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_cid = (cid); \
 	(tup)->t_infomask &= ~HEAP_COMBOCID; \
 } while (0)
@@ -404,7 +423,7 @@ do { \
 /* SetCmax must be used after HeapTupleHeaderAdjustCmax; see combocid.c */
 #define HeapTupleHeaderSetCmax(tup, cid, iscombo) \
 do { \
-	Assert(!((tup)->t_infomask & HEAP_MOVED)); \
+	Assert(!HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_cid = (cid); \
 	if (iscombo) \
 		(tup)->t_infomask |= HEAP_COMBOCID; \
@@ -414,7 +433,7 @@ do { \
 
 #define HeapTupleHeaderGetXvac(tup) \
 ( \
-	((tup)->t_infomask & HEAP_MOVED) ? \
+	HeapTupleHeaderIsMoved(tup) ? \
 		(tup)->t_choice.t_heap.t_field3.t_xvac \
 	: \
 		InvalidTransactionId \
@@ -422,7 +441,7 @@ do { \
 
 #define HeapTupleHeaderSetXvac(tup, xid) \
 do { \
-	Assert((tup)->t_infomask & HEAP_MOVED); \
+	Assert(HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_xvac = (xid); \
 } while (0)
 
@@ -510,6 +529,21 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+#define HeapTupleHeaderSetWarmUpdated(tup) \
+do { \
+	(tup)->t_infomask2 |= HEAP_WARM_UPDATED; \
+} while (0)
+
+#define HeapTupleHeaderClearWarmUpdated(tup) \
+do { \
+	(tup)->t_infomask2 &= ~HEAP_WARM_UPDATED; \
+} while (0)
+
+#define HeapTupleHeaderIsWarmUpdated(tup) \
+( \
+  ((tup)->t_infomask2 & HEAP_WARM_UPDATED) != 0 \
+)
+
 /*
  * Mark this as the last tuple in the HOT chain. Before PG v10 we used to store
  * the TID of the tuple itself in t_ctid field to mark the end of the chain.
@@ -635,6 +669,58 @@ do { \
 )
 
 /*
+ * Macros to check if a tuple is a moved-off/in tuple from old-style VACUUM
+ * FULL of the pre-9.0 era. Such a tuple is never WARM-updated, so when
+ * HEAP_WARM_UPDATED is set, the overlapping bit means HEAP_WARM_TUPLE rather
+ * than HEAP_MOVED.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderIsMovedOff(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED_OFF) \
+)
+
+#define HeapTupleHeaderIsMovedIn(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED_IN) \
+)
+
+#define HeapTupleHeaderIsMoved(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED) \
+)
+
+/*
+ * Check if tuple belongs to the second part of the WARM chain.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderIsWarm(tuple) \
+( \
+	HeapTupleHeaderIsWarmUpdated(tuple) && \
+	(((tuple)->t_infomask & HEAP_WARM_TUPLE) != 0) \
+)
+
+/*
+ * Mark the tuple as a member of the second part of the chain. This must only
+ * be done on a tuple which is already marked as WARM-updated.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderSetWarm(tuple) \
+( \
+	AssertMacro(HeapTupleHeaderIsWarmUpdated(tuple)), \
+	(tuple)->t_infomask |= HEAP_WARM_TUPLE \
+)
+
+#define HeapTupleHeaderClearWarm(tuple) \
+( \
+	(tuple)->t_infomask &= ~HEAP_WARM_TUPLE \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
@@ -785,6 +871,24 @@ struct MinimalTupleData
 #define HeapTupleClearHeapOnly(tuple) \
 		HeapTupleHeaderClearHeapOnly((tuple)->t_data)
 
+#define HeapTupleIsWarmUpdated(tuple) \
+		HeapTupleHeaderIsWarmUpdated((tuple)->t_data)
+
+#define HeapTupleSetWarmUpdated(tuple) \
+		HeapTupleHeaderSetWarmUpdated((tuple)->t_data)
+
+#define HeapTupleClearWarmUpdated(tuple) \
+		HeapTupleHeaderClearWarmUpdated((tuple)->t_data)
+
+#define HeapTupleIsWarm(tuple) \
+		HeapTupleHeaderIsWarm((tuple)->t_data)
+
+#define HeapTupleSetWarm(tuple) \
+		HeapTupleHeaderSetWarm((tuple)->t_data)
+
+#define HeapTupleClearWarm(tuple) \
+		HeapTupleHeaderClearWarm((tuple)->t_data)
+
 #define HeapTupleGetOid(tuple) \
 		HeapTupleHeaderGetOid((tuple)->t_data)
 
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index f9304db..163180d 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -427,6 +427,12 @@ typedef BTScanOpaqueData *BTScanOpaque;
 #define SK_BT_NULLS_FIRST	(INDOPTION_NULLS_FIRST << SK_BT_INDOPTION_SHIFT)
 
 /*
+ * Flags overloaded on the t_tid.ip_posid field. They are managed by
+ * ItemPointerSetFlags and corresponding routines.
+ */
+#define BTREE_INDEX_WARM_POINTER	0x01
+
+/*
  * external entry points for btree, in nbtree.c
  */
 extern IndexBuildResult *btbuild(Relation heap, Relation index,
@@ -436,6 +442,10 @@ extern bool btinsert(Relation rel, Datum *values, bool *isnull,
 		 ItemPointer ht_ctid, Relation heapRel,
 		 IndexUniqueCheck checkUnique,
 		 struct IndexInfo *indexInfo);
+extern bool btwarminsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 struct IndexInfo *indexInfo);
 extern IndexScanDesc btbeginscan(Relation rel, int nkeys, int norderbys);
 extern Size btestimateparallelscan(void);
 extern void btinitparallelscan(void *target);
@@ -487,10 +497,12 @@ extern void _bt_pageinit(Page page, Size size);
 extern bool _bt_page_recyclable(Page page);
 extern void _bt_delitems_delete(Relation rel, Buffer buf,
 					OffsetNumber *itemnos, int nitems, Relation heapRel);
-extern void _bt_delitems_vacuum(Relation rel, Buffer buf,
-					OffsetNumber *itemnos, int nitems,
-					BlockNumber lastBlockVacuumed);
+extern void _bt_handleitems_vacuum(Relation rel, Buffer buf,
+					OffsetNumber *delitemnos, int ndelitems,
+					OffsetNumber *clearitemnos, int nclearitems);
 extern int	_bt_pagedel(Relation rel, Buffer buf);
+extern void	_bt_clear_items(Page page, OffsetNumber *clearitemnos,
+					uint16 nclearitems);
 
 /*
  * prototypes for functions in nbtsearch.c
@@ -537,6 +549,9 @@ extern bytea *btoptions(Datum reloptions, bool validate);
 extern bool btproperty(Oid index_oid, int attno,
 		   IndexAMProperty prop, const char *propname,
 		   bool *res, bool *isnull);
+extern bool btrecheck(Relation indexRel, struct IndexInfo *indexInfo,
+		IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * prototypes for functions in nbtvalidate.c
diff --git a/src/include/access/nbtxlog.h b/src/include/access/nbtxlog.h
index d6a3085..6a86628 100644
--- a/src/include/access/nbtxlog.h
+++ b/src/include/access/nbtxlog.h
@@ -142,7 +142,8 @@ typedef struct xl_btree_reuse_page
 /*
  * This is what we need to know about vacuum of individual leaf index tuples.
  * The WAL record can represent deletion of any number of index tuples on a
- * single index page when executed by VACUUM.
+ * single index page when executed by VACUUM. It also covers tuples whose
+ * WARM bits are cleared by VACUUM.
  *
  * For MVCC scans, lastBlockVacuumed will be set to InvalidBlockNumber.
  * For a non-MVCC index scans there is an additional correctness requirement
@@ -165,11 +166,12 @@ typedef struct xl_btree_reuse_page
 typedef struct xl_btree_vacuum
 {
 	BlockNumber lastBlockVacuumed;
-
-	/* TARGET OFFSET NUMBERS FOLLOW */
+	uint16		ndelitems;
+	uint16		nclearitems;
+	/* ndelitems + nclearitems TARGET OFFSET NUMBERS FOLLOW */
 } xl_btree_vacuum;
 
-#define SizeOfBtreeVacuum	(offsetof(xl_btree_vacuum, lastBlockVacuumed) + sizeof(BlockNumber))
+#define SizeOfBtreeVacuum	(offsetof(xl_btree_vacuum, nclearitems) + sizeof(uint16))
 
 /*
  * This is what we need to know about marking an empty branch for deletion.
diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h
index 3fc726d..fa178d3 100644
--- a/src/include/access/relscan.h
+++ b/src/include/access/relscan.h
@@ -104,6 +104,9 @@ typedef struct IndexScanDescData
 	/* index access method's private state */
 	void	   *opaque;			/* access-method-specific info */
 
+	/* IndexInfo structure for this index */
+	struct IndexInfo  *indexInfo;
+
 	/*
 	 * In an index-only scan, a successful amgettuple call must fill either
 	 * xs_itup (and xs_itupdesc) or xs_hitup (and xs_hitupdesc) to provide the
@@ -119,7 +122,7 @@ typedef struct IndexScanDescData
 	HeapTupleData xs_ctup;		/* current heap tuple, if any */
 	Buffer		xs_cbuf;		/* current heap buffer in scan, if any */
 	/* NB: if xs_cbuf is not InvalidBuffer, we hold a pin on that buffer */
-	bool		xs_recheck;		/* T means scan keys must be rechecked */
+	bool		xs_recheck;		/* T means scan keys must be rechecked for each tuple */
 
 	/*
 	 * When fetching with an ordering operator, the values of the ORDER BY
diff --git a/src/include/catalog/index.h b/src/include/catalog/index.h
index 20bec90..f92ec29 100644
--- a/src/include/catalog/index.h
+++ b/src/include/catalog/index.h
@@ -89,6 +89,13 @@ extern void FormIndexDatum(IndexInfo *indexInfo,
 			   Datum *values,
 			   bool *isnull);
 
+extern void FormIndexPlainDatum(IndexInfo *indexInfo,
+			   Relation heapRel,
+			   HeapTuple heapTup,
+			   Datum *values,
+			   bool *isnull,
+			   bool *isavail);
+
 extern void index_build(Relation heapRelation,
 			Relation indexRelation,
 			IndexInfo *indexInfo,
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index 1132a60..8585da4 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -2785,6 +2785,8 @@ DATA(insert OID = 1933 (  pg_stat_get_tuples_deleted	PGNSP PGUID 12 1 0 0 0 f f
 DESCR("statistics: number of tuples deleted");
 DATA(insert OID = 1972 (  pg_stat_get_tuples_hot_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated");
+DATA(insert OID = 3402 (  pg_stat_get_tuples_warm_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated");
 DATA(insert OID = 2878 (  pg_stat_get_live_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_live_tuples _null_ _null_ _null_ ));
 DESCR("statistics: number of live tuples");
 DATA(insert OID = 2879 (  pg_stat_get_dead_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_dead_tuples _null_ _null_ _null_ ));
@@ -2937,6 +2939,8 @@ DATA(insert OID = 3042 (  pg_stat_get_xact_tuples_deleted		PGNSP PGUID 12 1 0 0
 DESCR("statistics: number of tuples deleted in current transaction");
 DATA(insert OID = 3043 (  pg_stat_get_xact_tuples_hot_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated in current transaction");
+DATA(insert OID = 3405 (  pg_stat_get_xact_tuples_warm_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated in current transaction");
 DATA(insert OID = 3044 (  pg_stat_get_xact_blocks_fetched		PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_fetched _null_ _null_ _null_ ));
 DESCR("statistics: number of blocks fetched in current transaction");
 DATA(insert OID = 3045 (  pg_stat_get_xact_blocks_hit			PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_hit _null_ _null_ _null_ ));
diff --git a/src/include/commands/progress.h b/src/include/commands/progress.h
index 9472ecc..b355b61 100644
--- a/src/include/commands/progress.h
+++ b/src/include/commands/progress.h
@@ -25,6 +25,7 @@
 #define PROGRESS_VACUUM_NUM_INDEX_VACUUMS		4
 #define PROGRESS_VACUUM_MAX_DEAD_TUPLES			5
 #define PROGRESS_VACUUM_NUM_DEAD_TUPLES			6
+#define PROGRESS_VACUUM_HEAP_BLKS_WARMCLEARED	7
 
 /* Phases of vacuum (as advertised via PROGRESS_VACUUM_PHASE) */
 #define PROGRESS_VACUUM_PHASE_SCAN_HEAP			1
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index d3849b9..7e1ec56 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -506,6 +506,7 @@ extern int	ExecCleanTargetListLength(List *targetlist);
 extern void ExecOpenIndices(ResultRelInfo *resultRelInfo, bool speculative);
 extern void ExecCloseIndices(ResultRelInfo *resultRelInfo);
 extern List *ExecInsertIndexTuples(TupleTableSlot *slot, ItemPointer tupleid,
+					  ItemPointer root_tid, Bitmapset *modified_attrs,
 					  EState *estate, bool noDupErr, bool *specConflict,
 					  List *arbiterIndexes);
 extern bool ExecCheckIndexConstraints(TupleTableSlot *slot, EState *estate,
diff --git a/src/include/executor/nodeIndexscan.h b/src/include/executor/nodeIndexscan.h
index ea3f3a5..ebeec74 100644
--- a/src/include/executor/nodeIndexscan.h
+++ b/src/include/executor/nodeIndexscan.h
@@ -41,5 +41,4 @@ extern void ExecIndexEvalRuntimeKeys(ExprContext *econtext,
 extern bool ExecIndexEvalArrayKeys(ExprContext *econtext,
 					   IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 extern bool ExecIndexAdvanceArrayKeys(IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
-
 #endif   /* NODEINDEXSCAN_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 11a6850..d2991db 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -132,6 +132,7 @@ typedef struct IndexInfo
 	NodeTag		type;
 	int			ii_NumIndexAttrs;
 	AttrNumber	ii_KeyAttrNumbers[INDEX_MAX_KEYS];
+	Bitmapset  *ii_indxattrs;	/* bitmap of all columns used in this index */
 	List	   *ii_Expressions; /* list of Expr */
 	List	   *ii_ExpressionsState;	/* list of ExprState */
 	List	   *ii_Predicate;	/* list of Expr */
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index e29397f..99bdc8b 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -105,6 +105,7 @@ typedef struct PgStat_TableCounts
 	PgStat_Counter t_tuples_updated;
 	PgStat_Counter t_tuples_deleted;
 	PgStat_Counter t_tuples_hot_updated;
+	PgStat_Counter t_tuples_warm_updated;
 	bool		t_truncated;
 
 	PgStat_Counter t_delta_live_tuples;
@@ -625,6 +626,7 @@ typedef struct PgStat_StatTabEntry
 	PgStat_Counter tuples_updated;
 	PgStat_Counter tuples_deleted;
 	PgStat_Counter tuples_hot_updated;
+	PgStat_Counter tuples_warm_updated;
 
 	PgStat_Counter n_live_tuples;
 	PgStat_Counter n_dead_tuples;
@@ -1285,7 +1287,7 @@ pgstat_report_wait_end(void)
 	(pgStatBlockWriteTime += (n))
 
 extern void pgstat_count_heap_insert(Relation rel, PgStat_Counter n);
-extern void pgstat_count_heap_update(Relation rel, bool hot);
+extern void pgstat_count_heap_update(Relation rel, bool hot, bool warm);
 extern void pgstat_count_heap_delete(Relation rel);
 extern void pgstat_count_truncate(Relation rel);
 extern void pgstat_update_heap_dead_tuples(Relation rel, int delta);
diff --git a/src/include/storage/bufpage.h b/src/include/storage/bufpage.h
index e956dc3..1852195 100644
--- a/src/include/storage/bufpage.h
+++ b/src/include/storage/bufpage.h
@@ -433,6 +433,8 @@ extern void PageIndexMultiDelete(Page page, OffsetNumber *itemnos, int nitems);
 extern void PageIndexTupleDeleteNoCompact(Page page, OffsetNumber offset);
 extern bool PageIndexTupleOverwrite(Page page, OffsetNumber offnum,
 						Item newtup, Size newsize);
+extern void PageIndexClearWarmTuples(Page page, OffsetNumber *clearitemnos,
+						uint16 nclearitems);
 extern char *PageSetChecksumCopy(Page page, BlockNumber blkno);
 extern void PageSetChecksumInplace(Page page, BlockNumber blkno);
 
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index ab875bb..4b173b5 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -142,9 +142,16 @@ typedef struct RelationData
 
 	/* data managed by RelationGetIndexAttrBitmap: */
 	Bitmapset  *rd_indexattr;	/* identifies columns used in indexes */
+	Bitmapset  *rd_exprindexattr; /* identifies columns used in expression or
+									 predicate indexes */
+	Bitmapset  *rd_indxnotreadyattr;	/* columns used by indexes not yet
+										   ready */
 	Bitmapset  *rd_keyattr;		/* cols that can be ref'd by foreign keys */
 	Bitmapset  *rd_pkattr;		/* cols included in primary key */
 	Bitmapset  *rd_idattr;		/* included in replica identity index */
+	List	   *rd_indexattrsList;	/* list of bitmaps, one per index,
+									   describing the attributes used by
+									   that index */
+	bool		rd_supportswarm;	/* true if the table can be WARM updated */
 
 	PublicationActions  *rd_pubactions;	/* publication actions */
 
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index 81af3ae..06c0183 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -51,11 +51,14 @@ typedef enum IndexAttrBitmapKind
 	INDEX_ATTR_BITMAP_ALL,
 	INDEX_ATTR_BITMAP_KEY,
 	INDEX_ATTR_BITMAP_PRIMARY_KEY,
-	INDEX_ATTR_BITMAP_IDENTITY_KEY
+	INDEX_ATTR_BITMAP_IDENTITY_KEY,
+	INDEX_ATTR_BITMAP_EXPR_PREDICATE,
+	INDEX_ATTR_BITMAP_NOTREADY
 } IndexAttrBitmapKind;
 
 extern Bitmapset *RelationGetIndexAttrBitmap(Relation relation,
 						   IndexAttrBitmapKind keyAttrs);
+extern List *RelationGetIndexAttrList(Relation relation);
 
 extern void RelationGetExclusionInfo(Relation indexRelation,
 						 Oid **operators,
diff --git a/src/test/regress/expected/alter_generic.out b/src/test/regress/expected/alter_generic.out
index ce581bb..85e4c70 100644
--- a/src/test/regress/expected/alter_generic.out
+++ b/src/test/regress/expected/alter_generic.out
@@ -161,15 +161,15 @@ ALTER SERVER alt_fserv1 RENAME TO alt_fserv3;   -- OK
 SELECT fdwname FROM pg_foreign_data_wrapper WHERE fdwname like 'alt_fdw%';
  fdwname  
 ----------
- alt_fdw2
  alt_fdw3
+ alt_fdw2
 (2 rows)
 
 SELECT srvname FROM pg_foreign_server WHERE srvname like 'alt_fserv%';
   srvname   
 ------------
- alt_fserv2
  alt_fserv3
+ alt_fserv2
 (2 rows)
 
 --
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index d706f42..f7dc4a4 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1756,6 +1756,7 @@ pg_stat_all_tables| SELECT c.oid AS relid,
     pg_stat_get_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_tuples_deleted(c.oid) AS n_tup_del,
     pg_stat_get_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_tuples_warm_updated(c.oid) AS n_tup_warm_upd,
     pg_stat_get_live_tuples(c.oid) AS n_live_tup,
     pg_stat_get_dead_tuples(c.oid) AS n_dead_tup,
     pg_stat_get_mod_since_analyze(c.oid) AS n_mod_since_analyze,
@@ -1903,6 +1904,7 @@ pg_stat_sys_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1946,6 +1948,7 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1983,7 +1986,8 @@ pg_stat_xact_all_tables| SELECT c.oid AS relid,
     pg_stat_get_xact_tuples_inserted(c.oid) AS n_tup_ins,
     pg_stat_get_xact_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_xact_tuples_deleted(c.oid) AS n_tup_del,
-    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd
+    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_xact_tuples_warm_updated(c.oid) AS n_tup_warm_upd
    FROM ((pg_class c
      LEFT JOIN pg_index i ON ((c.oid = i.indrelid)))
      LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
@@ -1999,7 +2003,8 @@ pg_stat_xact_sys_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname = ANY (ARRAY['pg_catalog'::name, 'information_schema'::name])) OR (pg_stat_xact_all_tables.schemaname ~ '^pg_toast'::text));
 pg_stat_xact_user_functions| SELECT p.oid AS funcid,
@@ -2021,7 +2026,8 @@ pg_stat_xact_user_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname <> ALL (ARRAY['pg_catalog'::name, 'information_schema'::name])) AND (pg_stat_xact_all_tables.schemaname !~ '^pg_toast'::text));
 pg_statio_all_indexes| SELECT c.oid AS relid,
diff --git a/src/test/regress/expected/warm.out b/src/test/regress/expected/warm.out
new file mode 100644
index 0000000..1ae2f40
--- /dev/null
+++ b/src/test/regress/expected/warm.out
@@ -0,0 +1,914 @@
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+-- This would normally be a HOT update since only a non-indexed column is
+-- updated, but the page won't have any free space, so it will probably be a
+-- non-HOT update
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab1
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Check if index only scan works correctly
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab1
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx1 on updtst_tab1
+   Index Cond: (b = 140001)
+(2 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab1;
+------------------
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab2
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE a = 1;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab2
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE b = 701;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+VACUUM updtst_tab2;
+EXPLAIN (costs off) SELECT b FROM updtst_tab2 WHERE b = 701;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx2 on updtst_tab2
+   Index Cond: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab2 WHERE b = 701;
+  b  
+-----
+ 701
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab2;
+------------------
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 1;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 701;
+       QUERY PLAN        
+-------------------------
+ Seq Scan on updtst_tab3
+   Filter: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 701;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+  b   
+------
+ 1421
+(1 row)
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+SET enable_seqscan = false;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    98
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 2;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 702;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx3 on updtst_tab3
+   Index Cond: (b = 702)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 702;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+  b   
+------
+ 1422
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab3;
+------------------
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+explain select * from test_warm where lower(a) = 'test';
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on test_warm  (cost=4.18..12.65 rows=4 width=64)
+   Recheck Cond: (lower(a) = 'test'::text)
+   ->  Bitmap Index Scan on test_warmindx  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (lower(a) = 'test'::text)
+(4 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+select *, ctid from test_warm where a = 'test';
+ a | b | ctid 
+---+---+------
+(0 rows)
+
+select *, ctid from test_warm where a = 'TEST';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+                                   QUERY PLAN                                    
+---------------------------------------------------------------------------------
+ Index Scan using test_warmindx on test_warm  (cost=0.15..20.22 rows=4 width=64)
+   Index Cond: (lower(a) = 'test'::text)
+(2 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+DROP TABLE test_warm;
+--- Test with toast data types
+CREATE TABLE test_toast_warm (a int unique, b text, c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+-- insert a large enough value to cause index datum compression
+INSERT INTO test_toast_warm VALUES (1, repeat('a', 600), 100);
+INSERT INTO test_toast_warm VALUES (2, repeat('b', 2), 100);
+INSERT INTO test_toast_warm VALUES (3, repeat('c', 4), 100);
+INSERT INTO test_toast_warm VALUES (4, repeat('d', 63), 100);
+INSERT INTO test_toast_warm VALUES (5, repeat('e', 126), 100);
+INSERT INTO test_toast_warm VALUES (6, repeat('f', 127), 100);
+INSERT INTO test_toast_warm VALUES (7, repeat('g', 128), 100);
+INSERT INTO test_toast_warm VALUES (8, repeat('h', 3200), 100);
+UPDATE test_toast_warm SET b = repeat('q', 600) WHERE a = 1;
+UPDATE test_toast_warm SET b = repeat('r', 2) WHERE a = 2;
+UPDATE test_toast_warm SET b = repeat('s', 4) WHERE a = 3;
+UPDATE test_toast_warm SET b = repeat('t', 63) WHERE a = 4;
+UPDATE test_toast_warm SET b = repeat('u', 126) WHERE a = 5;
+UPDATE test_toast_warm SET b = repeat('v', 127) WHERE a = 6;
+UPDATE test_toast_warm SET b = repeat('w', 128) WHERE a = 7;
+UPDATE test_toast_warm SET b = repeat('x', 3200) WHERE a = 8;
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+                        QUERY PLAN                         
+-----------------------------------------------------------
+ Index Scan using test_toast_warm_a_key on test_toast_warm
+   Index Cond: (a = 1)
+(2 rows)
+
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+                                                                                                                                                                                                                                                                                                                      QUERY PLAN                                                                                                                                                                                                                                                                                                                      
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Index Scan using test_toast_warm_index on test_toast_warm
+   Index Cond: (b = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'::text)
+(2 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+                                                                                                                                                                                                                                                                                                                      QUERY PLAN                                                                                                                                                                                                                                                                                                                      
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Index Only Scan using test_toast_warm_index on test_toast_warm
+   Index Cond: (b = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'::text)
+(2 rows)
+
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+ a |                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+---+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 1 | qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+ a | b 
+---+---
+(0 rows)
+
+SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+ b 
+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+ a |                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+---+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 1 | qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+ a | b  
+---+----
+ 2 | rr
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+ a |  b   
+---+------
+ 3 | ssss
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+ a |                                b                                
+---+-----------------------------------------------------------------
+ 4 | ttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttt
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+ a |                                                               b                                                                
+---+--------------------------------------------------------------------------------------------------------------------------------
+ 5 | uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+ a |                                                                b                                                                
+---+---------------------------------------------------------------------------------------------------------------------------------
+ 6 | vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+ a |                                                                b                                                                 
+---+----------------------------------------------------------------------------------------------------------------------------------
+ 7 | wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+ a |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                b
+---+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 8 | xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+(1 row)
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                                    QUERY PLAN                                                                                                                                                                                                                                                                                                                    
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 'qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq'::text)
+(2 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                                    QUERY PLAN                                                                                                                                                                                                                                                                                                                    
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 'qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq'::text)
+(2 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+ a |                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+---+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 1 | qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+ a | b  
+---+----
+ 2 | rr
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+ a |  b   
+---+------
+ 3 | ssss
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+ a |                                b                                
+---+-----------------------------------------------------------------
+ 4 | ttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttt
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+ a |                                                               b                                                                
+---+--------------------------------------------------------------------------------------------------------------------------------
+ 5 | uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+ a |                                                                b                                                                
+---+---------------------------------------------------------------------------------------------------------------------------------
+ 6 | vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+ a |                                                                b                                                                 
+---+----------------------------------------------------------------------------------------------------------------------------------
+ 7 | wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+ a |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                b
+---+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 8 | xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+(1 row)
+
+DROP TABLE test_toast_warm;
+-- Test with numeric data type
+CREATE TABLE test_toast_warm (a int unique, b numeric(10,2), c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+INSERT INTO test_toast_warm VALUES (1, 10.2, 100);
+INSERT INTO test_toast_warm VALUES (2, 11.22, 100);
+INSERT INTO test_toast_warm VALUES (3, 12.222, 100);
+INSERT INTO test_toast_warm VALUES (4, 13.20, 100);
+INSERT INTO test_toast_warm VALUES (5, 14.201, 100);
+UPDATE test_toast_warm SET b = 100.2 WHERE a = 1;
+UPDATE test_toast_warm SET b = 101.22 WHERE a = 2;
+UPDATE test_toast_warm SET b = 102.222 WHERE a = 3;
+UPDATE test_toast_warm SET b = 103.20 WHERE a = 4;
+UPDATE test_toast_warm SET b = 104.201 WHERE a = 5;
+SELECT * FROM test_toast_warm;
+ a |   b    |  c  
+---+--------+-----
+ 1 | 100.20 | 100
+ 2 | 101.22 | 100
+ 3 | 102.22 | 100
+ 4 | 103.20 | 100
+ 5 | 104.20 | 100
+(5 rows)
+
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+                        QUERY PLAN                         
+-----------------------------------------------------------
+ Index Scan using test_toast_warm_a_key on test_toast_warm
+   Index Cond: (a = 1)
+(2 rows)
+
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+                    QUERY PLAN                    
+--------------------------------------------------
+ Bitmap Heap Scan on test_toast_warm
+   Recheck Cond: (b = 10.2)
+   ->  Bitmap Index Scan on test_toast_warm_index
+         Index Cond: (b = 10.2)
+(4 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+                    QUERY PLAN                    
+--------------------------------------------------
+ Bitmap Heap Scan on test_toast_warm
+   Recheck Cond: (b = 100.2)
+   ->  Bitmap Index Scan on test_toast_warm_index
+         Index Cond: (b = 100.2)
+(4 rows)
+
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+ a | b 
+---+---
+(0 rows)
+
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+ b 
+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+   b    
+--------
+ 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+ a |   b    
+---+--------
+ 2 | 101.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+ a |   b    
+---+--------
+ 3 | 102.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+ a |   b    
+---+--------
+ 4 | 103.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+ a |   b    
+---+--------
+ 5 | 104.20
+(1 row)
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+         QUERY PLAN          
+-----------------------------
+ Seq Scan on test_toast_warm
+   Filter: (a = 1)
+(2 rows)
+
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+         QUERY PLAN          
+-----------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 10.2)
+(2 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+         QUERY PLAN          
+-----------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 100.2)
+(2 rows)
+
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+ a | b 
+---+---
+(0 rows)
+
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+ b 
+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+   b    
+--------
+ 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+ a |   b    
+---+--------
+ 2 | 101.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+ a |   b    
+---+--------
+ 3 | 102.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+ a |   b    
+---+--------
+ 4 | 103.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+ a |   b    
+---+--------
+ 5 | 104.20
+(1 row)
+
+DROP TABLE test_toast_warm;
+-- Toasted heap attributes
+CREATE TABLE toasttest(descr text , cnt int DEFAULT 0, f1 text, f2 text);
+CREATE INDEX testindx1 ON toasttest(descr);
+CREATE INDEX testindx2 ON toasttest(f1);
+INSERT INTO toasttest(descr, f1, f2) VALUES('two-compressed', repeat('1234567890',1000), repeat('1234567890',1000));
+INSERT INTO toasttest(descr, f1, f2) VALUES('two-toasted', repeat('1234567890',20000), repeat('1234567890',50000));
+INSERT INTO toasttest(descr, f1, f2) VALUES('one-compressed,one-toasted', repeat('1234567890',1000), repeat('1234567890',50000));
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+ ctid  |                                                                                                substring                                                                                                 
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,1) | (two-compressed,0,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012
+ (0,2) | (two-toasted,0,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+ (0,3) | ("one-compressed,one-toasted",0,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+(3 rows)
+
+-- UPDATE f1 by doing string manipulation, but the updated value remains the
+-- same as the old value
+UPDATE toasttest SET cnt = cnt +1, f1 = trim(leading '-' from '-'||f1) RETURNING substring(toasttest::text, 1, 200);
+                                                                                                substring                                                                                                 
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (two-compressed,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012
+ (two-toasted,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+ ("one-compressed,one-toasted",1,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+(3 rows)
+
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+                  QUERY PLAN                   
+-----------------------------------------------
+ Seq Scan on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+ ctid  |                                                                                                substring                                                                                                 
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,4) | (two-compressed,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012
+ (0,5) | (two-toasted,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+ (0,6) | ("one-compressed,one-toasted",1,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+(3 rows)
+
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+                           QUERY PLAN                            
+-----------------------------------------------------------------
+ Index Scan using testindx2 on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+ ctid  |                                                                                                substring                                                                                                 
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,6) | ("one-compressed,one-toasted",1,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+ (0,4) | (two-compressed,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012
+ (0,5) | (two-toasted,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+(3 rows)
+
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+-- UPDATE f1 for real this time
+UPDATE toasttest SET cnt = cnt +1, f1 = '-'||f1 RETURNING substring(toasttest::text, 1, 200);
+                                                                                                substring                                                                                                 
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (two-toasted,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234
+ ("one-compressed,one-toasted",2,-12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567
+(3 rows)
+
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+                  QUERY PLAN                   
+-----------------------------------------------
+ Seq Scan on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+ ctid  |                                                                                                substring                                                                                                 
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,7) | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,8) | (two-toasted,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234
+ (0,9) | ("one-compressed,one-toasted",2,-12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567
+(3 rows)
+
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+                           QUERY PLAN                            
+-----------------------------------------------------------------
+ Index Scan using testindx2 on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+ ctid  |                                                                                                substring                                                                                                 
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,9) | ("one-compressed,one-toasted",2,-12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567
+ (0,7) | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,8) | (two-toasted,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234
+(3 rows)
+
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+-- UPDATE f1 from toasted to compressed
+UPDATE toasttest SET cnt = cnt +1, f1 = repeat('1234567890',1000) WHERE descr = 'two-toasted';
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+                  QUERY PLAN                   
+-----------------------------------------------
+ Seq Scan on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+  ctid  |                                                                                                substring                                                                                                 
+--------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,7)  | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,9)  | ("one-compressed,one-toasted",2,-12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567
+ (0,10) | (two-toasted,3,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+(3 rows)
+
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+                           QUERY PLAN                            
+-----------------------------------------------------------------
+ Index Scan using testindx2 on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+  ctid  |                                                                                                substring                                                                                                 
+--------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,9)  | ("one-compressed,one-toasted",2,-12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567
+ (0,7)  | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,10) | (two-toasted,3,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+(3 rows)
+
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+-- UPDATE f1 from compressed to toasted
+UPDATE toasttest SET cnt = cnt +1, f1 = repeat('1234567890',2000) WHERE descr = 'one-compressed,one-toasted';
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+                  QUERY PLAN                   
+-----------------------------------------------
+ Seq Scan on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+  ctid  |                                                                                                substring                                                                                                 
+--------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,7)  | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,10) | (two-toasted,3,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+ (0,11) | ("one-compressed,one-toasted",3,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+(3 rows)
+
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+                           QUERY PLAN                            
+-----------------------------------------------------------------
+ Index Scan using testindx2 on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+  ctid  |                                                                                                substring                                                                                                 
+--------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,7)  | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,10) | (two-toasted,3,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+ (0,11) | ("one-compressed,one-toasted",3,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+(3 rows)
+
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+DROP TABLE toasttest;
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index 9f95b01..cd99f88 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -42,6 +42,8 @@ test: create_type
 test: create_table
 test: create_function_2
 
+test: warm
+
 # ----------
 # Load huge amounts of data
 # We should split the data files into single files and then
diff --git a/src/test/regress/sql/warm.sql b/src/test/regress/sql/warm.sql
new file mode 100644
index 0000000..fb1f93e
--- /dev/null
+++ b/src/test/regress/sql/warm.sql
@@ -0,0 +1,344 @@
+
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+
+-- This should be a HOT update as non-index key is updated, but the
+-- page won't have any free space, so probably a non-HOT update
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Check if index only scan works correctly
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab1;
+
+------------------
+
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE a = 1;
+
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE b = 701;
+
+VACUUM updtst_tab2;
+EXPLAIN (costs off) SELECT b FROM updtst_tab2 WHERE b = 701;
+SELECT b FROM updtst_tab2 WHERE b = 701;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab2;
+------------------
+
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+SELECT * FROM updtst_tab3 WHERE a = 1;
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+SET enable_seqscan = false;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+SELECT * FROM updtst_tab3 WHERE a = 2;
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab3;
+------------------
+
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where a = 'test';
+select *, ctid from test_warm where a = 'TEST';
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+DROP TABLE test_warm;
+
+--- Test with toast data types
+
+CREATE TABLE test_toast_warm (a int unique, b text, c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+
+-- insert a large enough value to cause index datum compression
+INSERT INTO test_toast_warm VALUES (1, repeat('a', 600), 100);
+INSERT INTO test_toast_warm VALUES (2, repeat('b', 2), 100);
+INSERT INTO test_toast_warm VALUES (3, repeat('c', 4), 100);
+INSERT INTO test_toast_warm VALUES (4, repeat('d', 63), 100);
+INSERT INTO test_toast_warm VALUES (5, repeat('e', 126), 100);
+INSERT INTO test_toast_warm VALUES (6, repeat('f', 127), 100);
+INSERT INTO test_toast_warm VALUES (7, repeat('g', 128), 100);
+INSERT INTO test_toast_warm VALUES (8, repeat('h', 3200), 100);
+
+UPDATE test_toast_warm SET b = repeat('q', 600) WHERE a = 1;
+UPDATE test_toast_warm SET b = repeat('r', 2) WHERE a = 2;
+UPDATE test_toast_warm SET b = repeat('s', 4) WHERE a = 3;
+UPDATE test_toast_warm SET b = repeat('t', 63) WHERE a = 4;
+UPDATE test_toast_warm SET b = repeat('u', 126) WHERE a = 5;
+UPDATE test_toast_warm SET b = repeat('v', 127) WHERE a = 6;
+UPDATE test_toast_warm SET b = repeat('w', 128) WHERE a = 7;
+UPDATE test_toast_warm SET b = repeat('x', 3200) WHERE a = 8;
+
+
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+
+DROP TABLE test_toast_warm;
+
+-- Test with numeric data type
+
+CREATE TABLE test_toast_warm (a int unique, b numeric(10,2), c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+
+INSERT INTO test_toast_warm VALUES (1, 10.2, 100);
+INSERT INTO test_toast_warm VALUES (2, 11.22, 100);
+INSERT INTO test_toast_warm VALUES (3, 12.222, 100);
+INSERT INTO test_toast_warm VALUES (4, 13.20, 100);
+INSERT INTO test_toast_warm VALUES (5, 14.201, 100);
+
+UPDATE test_toast_warm SET b = 100.2 WHERE a = 1;
+UPDATE test_toast_warm SET b = 101.22 WHERE a = 2;
+UPDATE test_toast_warm SET b = 102.222 WHERE a = 3;
+UPDATE test_toast_warm SET b = 103.20 WHERE a = 4;
+UPDATE test_toast_warm SET b = 104.201 WHERE a = 5;
+
+SELECT * FROM test_toast_warm;
+
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+
+DROP TABLE test_toast_warm;
+
+-- Toasted heap attributes
+CREATE TABLE toasttest(descr text , cnt int DEFAULT 0, f1 text, f2 text);
+CREATE INDEX testindx1 ON toasttest(descr);
+CREATE INDEX testindx2 ON toasttest(f1);
+
+INSERT INTO toasttest(descr, f1, f2) VALUES('two-compressed', repeat('1234567890',1000), repeat('1234567890',1000));
+INSERT INTO toasttest(descr, f1, f2) VALUES('two-toasted', repeat('1234567890',20000), repeat('1234567890',50000));
+INSERT INTO toasttest(descr, f1, f2) VALUES('one-compressed,one-toasted', repeat('1234567890',1000), repeat('1234567890',50000));
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+
+-- UPDATE f1 by doing string manipulation, but the updated value remains the
+-- same as the old value
+UPDATE toasttest SET cnt = cnt +1, f1 = trim(leading '-' from '-'||f1) RETURNING substring(toasttest::text, 1, 200);
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+
+-- UPDATE f1 for real this time
+UPDATE toasttest SET cnt = cnt +1, f1 = '-'||f1 RETURNING substring(toasttest::text, 1, 200);
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+
+-- UPDATE f1 from toasted to compressed
+UPDATE toasttest SET cnt = cnt +1, f1 = repeat('1234567890',1000) WHERE descr = 'two-toasted';
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+
+-- UPDATE f1 from compressed to toasted
+UPDATE toasttest SET cnt = cnt +1, f1 = repeat('1234567890',2000) WHERE descr = 'one-compressed,one-toasted';
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+
+DROP TABLE toasttest;
-- 
2.9.3 (Apple Git-75)

Attachment: regression.diffs (application/octet-stream)
*** /Users/pavan/work/SOURCES/postgresql/src/test/regress/expected/json.out	Sat Apr  1 02:35:33 2017
--- /Users/pavan/work/SOURCES/postgresql/src/test/regress/results/json.out	Sat Apr  1 03:23:56 2017
***************
*** 1690,1698 ****
  
  -- json to tsvector with stop words
  select to_tsvector('{"a": "aaa in bbb ddd ccc", "b": ["the eee fff ggg"], "c": {"d": "hhh. iii"}}'::json);
!                                 to_tsvector                                 
! ----------------------------------------------------------------------------
!  'aaa':1 'bbb':3 'ccc':5 'ddd':4 'eee':8 'fff':9 'ggg':10 'hhh':12 'iii':13
  (1 row)
  
  -- ts_vector corner cases
--- 1690,1698 ----
  
  -- json to tsvector with stop words
  select to_tsvector('{"a": "aaa in bbb ddd ccc", "b": ["the eee fff ggg"], "c": {"d": "hhh. iii"}}'::json);
!                                         to_tsvector                                        
! -------------------------------------------------------------------------------------------
!  'aaa':1 'bbb':3 'ccc':5 'ddd':4 'eee':8 'fff':9 'ggg':10 'hhh':12 'iii':13 'in':2 'the':7
  (1 row)
  
  -- ts_vector corner cases

======================================================================

*** /Users/pavan/work/SOURCES/postgresql/src/test/regress/expected/jsonb.out	Sat Apr  1 02:35:33 2017
--- /Users/pavan/work/SOURCES/postgresql/src/test/regress/results/jsonb.out	Sat Apr  1 03:23:57 2017
***************
*** 3490,3498 ****
  
  -- jsonb to tsvector with stop words
  select to_tsvector('{"a": "aaa in bbb ddd ccc", "b": ["the eee fff ggg"], "c": {"d": "hhh. iii"}}'::jsonb);
!                                 to_tsvector                                 
! ----------------------------------------------------------------------------
!  'aaa':1 'bbb':3 'ccc':5 'ddd':4 'eee':8 'fff':9 'ggg':10 'hhh':12 'iii':13
  (1 row)
  
  -- ts_vector corner cases
--- 3490,3498 ----
  
  -- jsonb to tsvector with stop words
  select to_tsvector('{"a": "aaa in bbb ddd ccc", "b": ["the eee fff ggg"], "c": {"d": "hhh. iii"}}'::jsonb);
!                                         to_tsvector                                        
! -------------------------------------------------------------------------------------------
!  'aaa':1 'bbb':3 'ccc':5 'ddd':4 'eee':8 'fff':9 'ggg':10 'hhh':12 'iii':13 'in':2 'the':7
  (1 row)
  
  -- ts_vector corner cases

======================================================================

#227Amit Kapila
amit.kapila16@gmail.com
In reply to: Pavan Deolasee (#224)
Re: Patch: Write Amplification Reduction Method (WARM)

On Fri, Mar 31, 2017 at 11:54 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Fri, Mar 31, 2017 at 11:16 PM, Robert Haas <robertmhaas@gmail.com> wrote:

Now, I understand you to be suggesting a flag at
table-creation time that would, maybe, be immutable after that, but
even then - are we going to run completely unmodified 9.6 code for
tables where that's not enabled, and only go through any of the WARM
logic when it is enabled? Doesn't sound likely. The commits already
made from this patch series certainly affect everybody, and I can't
see us adding switches that bypass
ce96ce60ca2293f75f36c3661e4657a3c79ffd61 for example.

I don't think I am going to claim that either. But probably only 5% of the
new code would then be involved, which is a lot less and a lot more
manageable. Having said that, I think if we do this at all, we should only
do it based on our experiences in the beta cycle, as a last resort. Based on
my own experiences during HOT development, long-running pgbench tests with
several concurrent clients, subjected to multiple autovacuum cycles and
periodic consistency checks, usually bring up issues related to heap
corruption. So my confidence level is relatively high on that part of the
code. That's not to suggest that there can't be any bugs.

Obviously there are then other things, such as regressions for some
workloads or additional work required by vacuum etc. I think we should
address them, and I'm fairly certain we can do that. It may not happen
immediately, but if we provide the right knobs, maybe those who are affected
can fall back to the old behaviour or not use the new code at all while we
improve things for them.

Okay, but even if we want to provide knobs, there should be some
consensus on them. I am sure introducing an additional pass over the
index has some impact, so either we should have some way to reduce that
impact or some additional design to handle it. Do you think it makes
sense to start a separate thread to discuss this and get feedback? I am
not seeing much input on the knobs you are proposing to handle the
second pass over the index.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#228Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Dilip Kumar (#218)
Re: Patch: Write Amplification Reduction Method (WARM)

On Fri, Mar 31, 2017 at 12:31 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:

On Thu, Mar 30, 2017 at 5:27 PM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

I am not sure if we can consider it completely synthetic, because we
might see similar cases for json datatypes. Can we try to see the
impact when the same test runs from multiple clients? For your
information, I am also trying to set up some tests along with one of my
colleagues, and we will report the results once the tests are complete.

We have done some testing; below are the test details and results.

Test:
I have derived this test from the test given by Pavan above [1], with the
following differences:

- I have reduced the fill factor to 40 to ensure that there is scope in
the page to store multiple WARM chains.
- WARM updated all the tuples.
- Executed a large select to force a lot of tuple rechecks within a single
query.
- Used a smaller tuple size (the aid field is around ~100 bytes) to ensure
tuples have sufficient space on a page to get WARM updated.

Results:
-----------
* I can see more than 15% regression in this case. This regression
is repeatable.
* If I increase the fill factor to 90, the regression reduces to 7%;
maybe fewer tuples are getting WARM updated and the others are not,
because no space is left on the page after a few WARM updates.

Thanks for doing the tests. They show us that if the table gets filled
up with WARM chains which are not cleaned up, and the table is then
subjected to a read-only workload, we will see a regression. Obviously,
the test is completely CPU bound, something WARM is not meant to
address. I am not yet certain that the recheck is causing the problem.
Yesterday I ran the test where I was seeing a regression with the
recheck completely turned off, and still saw the regression. So there is
something else going on with this kind of workload. Will check.

Having said that, I think there are some other ways to fix some of the
common problems with repeated rechecks. One thing we can do is rely on
the index pointer flags to decide whether a recheck is necessary or not.
For example, a WARM pointer to a WARM tuple does not require a recheck.
Similarly, a CLEAR pointer to a CLEAR tuple does not require a recheck.
A WARM pointer to a CLEAR tuple can be discarded immediately, because
the only situation where it can occur is in the case of aborted WARM
updates. The only troublesome situation is a CLEAR pointer to a WARM
tuple; that entirely depends on whether the index had received a WARM
insert or not. What we can do, though, is: if the recheck succeeds the
first time and the chain has only WARM tuples, set the WARM bit on the
index pointer. We can use the same hint mechanism as used for marking
index pointers dead to minimise the overhead.
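The pointer/tuple flag combinations above amount to a small decision table. A minimal sketch, assuming hypothetical names (none of these identifiers are the patch's actual ones; "WARM"/"CLEAR" stand for the proposed flag on an index pointer and on the heap tuples it points at):

```c
#include <assert.h>

typedef enum { CLEAR, WARM } WarmFlag;

typedef enum
{
    RECHECK_NOT_NEEDED,         /* pointer and tuple flags agree */
    RECHECK_DISCARD,            /* leftover pointer from an aborted WARM update */
    RECHECK_NEEDED              /* index may or may not have a WARM insert */
} RecheckAction;

static RecheckAction
warm_recheck_action(WarmFlag index_ptr, WarmFlag heap_tuple)
{
    if (index_ptr == heap_tuple)
        return RECHECK_NOT_NEEDED;  /* WARM->WARM or CLEAR->CLEAR */
    if (index_ptr == WARM && heap_tuple == CLEAR)
        return RECHECK_DISCARD;     /* only possible after an aborted WARM update */
    return RECHECK_NEEDED;          /* CLEAR pointer to a WARM tuple */
}
```

Only the last case (CLEAR pointer, WARM tuple) pays the recheck cost, which is what the proposed hint-bit promotion is meant to amortise.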

Obviously this will only handle the case where the same tuple is rechecked
often. But if a tuple is rechecked only once, then maybe other overheads
will kick in, reducing the regression significantly.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#229Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Bruce Momjian (#217)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Mar 30, 2017 at 11:17 PM, Bruce Momjian <bruce@momjian.us> wrote:

On Tue, Mar 21, 2017 at 04:04:58PM -0400, Bruce Momjian wrote:

On Tue, Mar 21, 2017 at 04:56:16PM -0300, Alvaro Herrera wrote:

Bruce Momjian wrote:

On Tue, Mar 21, 2017 at 04:43:58PM -0300, Alvaro Herrera wrote:

Bruce Momjian wrote:

I don't think it makes sense to try and save bits and add complexity
when we have no idea if we will ever use them,

If we find ourselves in dire need of additional bits, there is a known
mechanism to get back 2 bits from old-style VACUUM FULL.  I assume that
the reason nobody has bothered to write the code for that is that
there's not *that* much interest.

We have no way of tracking if users still have pages that used the bits
via pg_upgrade before they were removed.

Yes, that's exactly the code that needs to be written.

Yes, but once it is written it will take years before those bits can be
used on most installations.

Actually, the 2 old-style VACUUM FULL bits could be reused if one of the
WARM bits is set when they are checked. The WARM bits will all be zero
on pre-9.0 pages, so the check would have to test the old-style VACUUM
FULL bit together with a set WARM bit.

We're already doing that in the submitted patch.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#230Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Robert Haas (#222)
4 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Fri, Mar 31, 2017 at 11:15 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Thu, Mar 30, 2017 at 10:49 AM, Petr Jelinek
<petr.jelinek@2ndquadrant.com> wrote:

While reading this thread I am wondering whether we could just not do WARM
on TOASTed and compressed values if we know there might be regressions
there. I mean, I've seen the problem WARM tries to solve mostly on
timestamp or boolean values and sometimes counters, so it would still be
helpful to quite a lot of people even if we didn't handle TOASTed and
compressed values in v1. It's not like skipping WARM sometimes is somehow
terrible; we'll just fall back to the current behavior.

Good point.

Ok. I've added logic to disable a WARM update if either the old or the new
tuple has compressed/toasted values. HeapDetermineModifiedColumns() has
been materially changed to support this: we now look not only for modified
columns, but also for toasted and compressed columns, and if any of the
toasted or compressed columns overlap with the index attributes, we
disable WARM. HOT updates which do not modify toasted/compressed
attributes should still work.
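As a rough sketch of that classification (in the real patch the attribute sets are Bitmapsets built by HeapDetermineModifiedColumns(); plain uint32_t bitmasks, one bit per attribute, stand in for them here, and all names are hypothetical):

```c
#include <assert.h>
#include <stdint.h>

typedef enum { UPDATE_HOT, UPDATE_WARM, UPDATE_COLD } UpdateKind;

static UpdateKind
classify_update(uint32_t index_attrs,      /* attributes used by any index */
                uint32_t modified_attrs,   /* attributes changed by the UPDATE */
                uint32_t toastcomp_attrs)  /* toasted/compressed attributes */
{
    /* A toasted or compressed index column disables WARM entirely. */
    if (index_attrs & toastcomp_attrs)
        return UPDATE_COLD;
    /* No index column modified: a plain HOT update still applies. */
    if ((index_attrs & modified_attrs) == 0)
        return UPDATE_HOT;
    /* Some index key changed, but other indexes can be skipped via WARM. */
    return UPDATE_WARM;
}
```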

I am not sure if this will be enough to address the regression that Dilip
reported in his last email. AFAICS that test probably does not use
toasting/compression. I hope to spend some time on that tomorrow and have a
better understanding of why we see the regression.

I've also added a table-level option to turn WARM off on a given table.
Right now the option can only be turned ON; once turned ON, it can't be
turned OFF. We can add that support if needed. It might be interesting to
run Dilip's test with enable_warm turned off on the table. That will at
least tell us whether turning WARM off fixes the regression.
Documentation changes for this reloption are still missing.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0001-Track-root-line-pointer-v23_v25.patch (application/octet-stream)
From 6b9ff9be78d8b8d51e63549ab620096a95031606 Mon Sep 17 00:00:00 2001
From: Pavan Deolasee <pavan.deolasee@gmail.com>
Date: Tue, 28 Feb 2017 10:34:30 +0530
Subject: [PATCH 1/4] Track root line pointer - v23

Store the root line pointer of the WARM chain in the t_ctid.ip_posid field of
the last tuple in the chain and mark the tuple header with HEAP_TUPLE_LATEST
flag to record that fact.
---
 src/backend/access/heap/heapam.c      | 209 ++++++++++++++++++++++++++++------
 src/backend/access/heap/hio.c         |  25 +++-
 src/backend/access/heap/pruneheap.c   | 126 ++++++++++++++++++--
 src/backend/access/heap/rewriteheap.c |  21 +++-
 src/backend/executor/execIndexing.c   |   3 +-
 src/backend/executor/execMain.c       |   4 +-
 src/include/access/heapam.h           |   1 +
 src/include/access/heapam_xlog.h      |   4 +-
 src/include/access/hio.h              |   4 +-
 src/include/access/htup_details.h     |  97 +++++++++++++++-
 10 files changed, 428 insertions(+), 66 deletions(-)

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 0c3e2b0..30262ef 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -94,7 +94,8 @@ static HeapTuple heap_prepare_insert(Relation relation, HeapTuple tup,
 					TransactionId xid, CommandId cid, int options);
 static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
-				HeapTuple newtup, HeapTuple old_key_tup,
+				HeapTuple newtup, OffsetNumber root_offnum,
+				HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
 static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
 							 Bitmapset *interesting_cols,
@@ -2264,13 +2265,13 @@ heap_get_latest_tid(Relation relation,
 		 */
 		if ((tp.t_data->t_infomask & HEAP_XMAX_INVALID) ||
 			HeapTupleHeaderIsOnlyLocked(tp.t_data) ||
-			ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid))
+			HeapTupleHeaderIsHeapLatest(tp.t_data, &ctid))
 		{
 			UnlockReleaseBuffer(buffer);
 			break;
 		}
 
-		ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tp.t_data, &ctid);
 		priorXmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		UnlockReleaseBuffer(buffer);
 	}							/* end of loop */
@@ -2401,6 +2402,7 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	bool		all_visible_cleared = false;
+	OffsetNumber	root_offnum;
 
 	/*
 	 * Fill in tuple header fields, assign an OID, and toast the tuple if
@@ -2439,8 +2441,13 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
-	RelationPutHeapTuple(relation, buffer, heaptup,
-						 (options & HEAP_INSERT_SPECULATIVE) != 0);
+	root_offnum = RelationPutHeapTuple(relation, buffer, heaptup,
+						 (options & HEAP_INSERT_SPECULATIVE) != 0,
+						 InvalidOffsetNumber);
+
+	/* We must not overwrite the speculative insertion token. */
+	if ((options & HEAP_INSERT_SPECULATIVE) == 0)
+		HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 	if (PageIsAllVisible(BufferGetPage(buffer)))
 	{
@@ -2668,6 +2675,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 	Size		saveFreeSpace;
 	bool		need_tuple_data = RelationIsLogicallyLogged(relation);
 	bool		need_cids = RelationIsAccessibleInLogicalDecoding(relation);
+	OffsetNumber	root_offnum;
 
 	needwal = !(options & HEAP_INSERT_SKIP_WAL) && RelationNeedsWAL(relation);
 	saveFreeSpace = RelationGetTargetPageFreeSpace(relation,
@@ -2738,7 +2746,12 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		 * RelationGetBufferForTuple has ensured that the first tuple fits.
 		 * Put that on the page, and then as many other tuples as fit.
 		 */
-		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false);
+		root_offnum = RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false,
+				InvalidOffsetNumber);
+
+		/* Mark this tuple as the latest and also set root offset. */
+		HeapTupleHeaderSetHeapLatest(heaptuples[ndone]->t_data, root_offnum);
+
 		for (nthispage = 1; ndone + nthispage < ntuples; nthispage++)
 		{
 			HeapTuple	heaptup = heaptuples[ndone + nthispage];
@@ -2746,7 +2759,10 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			if (PageGetHeapFreeSpace(page) < MAXALIGN(heaptup->t_len) + saveFreeSpace)
 				break;
 
-			RelationPutHeapTuple(relation, buffer, heaptup, false);
+			root_offnum = RelationPutHeapTuple(relation, buffer, heaptup, false,
+					InvalidOffsetNumber);
+			/* Mark each tuple as the latest and also set root offset. */
+			HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 			/*
 			 * We don't use heap_multi_insert for catalog tuples yet, but
@@ -3018,6 +3034,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	HeapTupleData tp;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	TransactionId new_xmax;
@@ -3028,6 +3045,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	bool		all_visible_cleared = false;
 	HeapTuple	old_key_tuple = NULL;	/* replica identity of the tuple */
 	bool		old_key_copied = false;
+	OffsetNumber	root_offnum;
 
 	Assert(ItemPointerIsValid(tid));
 
@@ -3069,7 +3087,8 @@ heap_delete(Relation relation, ItemPointer tid,
 		LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 	}
 
-	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid));
+	offnum = ItemPointerGetOffsetNumber(tid);
+	lp = PageGetItemId(page, offnum);
 	Assert(ItemIdIsNormal(lp));
 
 	tp.t_tableOid = RelationGetRelid(relation);
@@ -3199,7 +3218,17 @@ l1:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tp.t_data->t_ctid;
+
+		/*
+		 * If we're at the end of the chain, just return the same TID back to
+		 * the caller, who uses that as a hint that the end of the chain has
+		 * been reached.
+		 */
+		if (!HeapTupleHeaderIsHeapLatest(tp.t_data, &tp.t_self))
+			HeapTupleHeaderGetNextTid(tp.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&tp.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tp.t_data);
@@ -3248,6 +3277,22 @@ l1:
 							  xid, LockTupleExclusive, true,
 							  &new_xmax, &new_infomask, &new_infomask2);
 
+	/*
+	 * heap_get_root_tuple() may call palloc, which is disallowed once we
+	 * enter the critical section. So check whether the root offset is cached
+	 * in the tuple and, if not, fetch that information the hard way before
+	 * entering the critical section.
+	 *
+	 * Unless we are dealing with a pg-upgraded cluster, the root offset
+	 * information should usually be cached, so the overhead of fetching it
+	 * should be small. Also, once a tuple is updated, the information is
+	 * copied to the new version, so we are not going to pay this price
+	 * forever.
+	 */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&tp.t_self));
+
 	START_CRIT_SECTION();
 
 	/*
@@ -3275,8 +3320,10 @@ l1:
 	HeapTupleHeaderClearHotUpdated(tp.t_data);
 	HeapTupleHeaderSetXmax(tp.t_data, new_xmax);
 	HeapTupleHeaderSetCmax(tp.t_data, cid, iscombo);
-	/* Make sure there is no forward chain link in t_ctid */
-	tp.t_data->t_ctid = tp.t_self;
+
+	/* Mark this tuple as the latest tuple in the update chain. */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		HeapTupleHeaderSetHeapLatest(tp.t_data, root_offnum);
 
 	MarkBufferDirty(buffer);
 
@@ -3477,6 +3524,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		old_key_copied = false;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
+	OffsetNumber	root_offnum;
 	MultiXactStatus mxact_status;
 	Buffer		buffer,
 				newbuf,
@@ -3536,6 +3585,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 
 
 	block = ItemPointerGetBlockNumber(otid);
+	offnum = ItemPointerGetOffsetNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
@@ -3839,7 +3889,12 @@ l2:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(oldtup.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = oldtup.t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(oldtup.t_data, &oldtup.t_self))
+			HeapTupleHeaderGetNextTid(oldtup.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&oldtup.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(oldtup.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(oldtup.t_data);
@@ -3979,6 +4034,7 @@ l2:
 		uint16		infomask_lock_old_tuple,
 					infomask2_lock_old_tuple;
 		bool		cleared_all_frozen = false;
+		OffsetNumber	root_offnum;
 
 		/*
 		 * To prevent concurrent sessions from updating the tuple, we have to
@@ -4006,6 +4062,14 @@ l2:
 
 		Assert(HEAP_XMAX_IS_LOCKED_ONLY(infomask_lock_old_tuple));
 
+		/*
+		 * Fetch root offset before entering the critical section. We do this
+		 * only if the information is not already available.
+		 */
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&oldtup.t_self));
+
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
@@ -4020,7 +4084,8 @@ l2:
 		HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 		/* temporarily make it look not-updated, but locked */
-		oldtup.t_data->t_ctid = oldtup.t_self;
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			HeapTupleHeaderSetHeapLatest(oldtup.t_data, root_offnum);
 
 		/*
 		 * Clear all-frozen bit on visibility map if needed. We could
@@ -4179,6 +4244,10 @@ l2:
 										   bms_overlap(modified_attrs, id_attrs),
 										   &old_key_copied);
 
+	if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
@@ -4204,6 +4273,17 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+		/*
+		 * For HOT (or WARM) updated tuples, we store the offset of the root
+		 * line pointer of this chain in the ip_posid field of the new tuple.
+		 * Usually this information will be available in the corresponding
+		 * field of the old tuple. But for aborted updates or pg-upgraded
+		 * databases, we might be seeing old-style CTID chains, and hence the
+		 * information must be obtained the hard way (we did that before
+		 * entering the critical section above).
+		 */
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
 	else
 	{
@@ -4211,10 +4291,22 @@ l2:
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		root_offnum = InvalidOffsetNumber;
 	}
 
-	RelationPutHeapTuple(relation, newbuf, heaptup, false);		/* insert new tuple */
-
+	/* insert new tuple */
+	root_offnum = RelationPutHeapTuple(relation, newbuf, heaptup, false,
+									   root_offnum);
+	/*
+	 * Also mark both copies as latest and set the root offset information.
+	 * For a HOT/WARM update, we just copy the information from the old tuple
+	 * (or use the value computed above if it was missing). For regular
+	 * updates, RelationPutHeapTuple has returned the offset number where the
+	 * new version was inserted, and we store that value since the update
+	 * starts a new HOT chain.
+	 */
+	HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
+	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
 	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
@@ -4227,7 +4319,7 @@ l2:
 	HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 	/* record address of new tuple in t_ctid of old one */
-	oldtup.t_data->t_ctid = heaptup->t_self;
+	HeapTupleHeaderSetNextTid(oldtup.t_data, &(heaptup->t_self));
 
 	/* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */
 	if (PageIsAllVisible(BufferGetPage(buffer)))
@@ -4266,6 +4358,7 @@ l2:
 
 		recptr = log_heap_update(relation, buffer,
 								 newbuf, &oldtup, heaptup,
+								 root_offnum,
 								 old_key_tuple,
 								 all_visible_cleared,
 								 all_visible_cleared_new);
@@ -4546,7 +4639,8 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	ItemId		lp;
 	Page		page;
 	Buffer		vmbuffer = InvalidBuffer;
-	BlockNumber block;
+	BlockNumber	block;
+	OffsetNumber	offnum;
 	TransactionId xid,
 				xmax;
 	uint16		old_infomask,
@@ -4555,9 +4649,11 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	bool		first_time = true;
 	bool		have_tuple_lock = false;
 	bool		cleared_all_frozen = false;
+	OffsetNumber	root_offnum;
 
 	*buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	block = ItemPointerGetBlockNumber(tid);
+	offnum = ItemPointerGetOffsetNumber(tid);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -4577,6 +4673,7 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	tuple->t_data = (HeapTupleHeader) PageGetItem(page, lp);
 	tuple->t_len = ItemIdGetLength(lp);
 	tuple->t_tableOid = RelationGetRelid(relation);
+	tuple->t_self = *tid;
 
 l3:
 	result = HeapTupleSatisfiesUpdate(tuple, cid, *buffer);
@@ -4604,7 +4701,11 @@ l3:
 		xwait = HeapTupleHeaderGetRawXmax(tuple->t_data);
 		infomask = tuple->t_data->t_infomask;
 		infomask2 = tuple->t_data->t_infomask2;
-		ItemPointerCopy(&tuple->t_data->t_ctid, &t_ctid);
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &t_ctid);
+		else
+			ItemPointerCopy(tid, &t_ctid);
 
 		LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
 
@@ -5042,7 +5143,12 @@ failed:
 		Assert(result == HeapTupleSelfUpdated || result == HeapTupleUpdated ||
 			   result == HeapTupleWouldBlock);
 		Assert(!(tuple->t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tuple->t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(tid, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tuple->t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tuple->t_data);
@@ -5090,6 +5196,10 @@ failed:
 							  GetCurrentTransactionId(), mode, false,
 							  &xid, &new_infomask, &new_infomask2);
 
+	if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&tuple->t_self));
+
 	START_CRIT_SECTION();
 
 	/*
@@ -5118,7 +5228,10 @@ failed:
 	 * the tuple as well.
 	 */
 	if (HEAP_XMAX_IS_LOCKED_ONLY(new_infomask))
-		tuple->t_data->t_ctid = *tid;
+	{
+		if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+			HeapTupleHeaderSetHeapLatest(tuple->t_data, root_offnum);
+	}
 
 	/* Clear only the all-frozen bit on visibility map if needed */
 	if (PageIsAllVisible(page) &&
@@ -5632,6 +5745,7 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 	bool		cleared_all_frozen = false;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber block;
+	OffsetNumber offnum;
 
 	ItemPointerCopy(tid, &tupid);
 
@@ -5640,6 +5754,8 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 		new_infomask = 0;
 		new_xmax = InvalidTransactionId;
 		block = ItemPointerGetBlockNumber(&tupid);
+		offnum = ItemPointerGetOffsetNumber(&tupid);
+
 		ItemPointerCopy(&tupid, &(mytup.t_self));
 
 		if (!heap_fetch(rel, SnapshotAny, &mytup, &buf, false, NULL))
@@ -5869,7 +5985,7 @@ l4:
 
 		/* if we find the end of update chain, we're done. */
 		if (mytup.t_data->t_infomask & HEAP_XMAX_INVALID ||
-			ItemPointerEquals(&mytup.t_self, &mytup.t_data->t_ctid) ||
+			HeapTupleHeaderIsHeapLatest(mytup.t_data, &mytup.t_self) ||
 			HeapTupleHeaderIsOnlyLocked(mytup.t_data))
 		{
 			result = HeapTupleMayBeUpdated;
@@ -5878,7 +5994,7 @@ l4:
 
 		/* tail recursion */
 		priorXmax = HeapTupleHeaderGetUpdateXid(mytup.t_data);
-		ItemPointerCopy(&(mytup.t_data->t_ctid), &tupid);
+		HeapTupleHeaderGetNextTid(mytup.t_data, &tupid);
 		UnlockReleaseBuffer(buf);
 		if (vmbuffer != InvalidBuffer)
 			ReleaseBuffer(vmbuffer);
@@ -5995,7 +6111,7 @@ heap_finish_speculative(Relation relation, HeapTuple tuple)
 	 * Replace the speculative insertion token with a real t_ctid, pointing to
 	 * itself like it does on regular tuples.
 	 */
-	htup->t_ctid = tuple->t_self;
+	HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 	/* XLOG stuff */
 	if (RelationNeedsWAL(relation))
@@ -6121,8 +6237,7 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	HeapTupleHeaderSetXmin(tp.t_data, InvalidTransactionId);
 
 	/* Clear the speculative insertion token too */
-	tp.t_data->t_ctid = tp.t_self;
-
+	HeapTupleHeaderSetHeapLatest(tp.t_data, ItemPointerGetOffsetNumber(tid));
 	MarkBufferDirty(buffer);
 
 	/*
@@ -7470,6 +7585,7 @@ log_heap_visible(RelFileNode rnode, Buffer heap_buffer, Buffer vm_buffer,
 static XLogRecPtr
 log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup, HeapTuple newtup,
+				OffsetNumber root_offnum,
 				HeapTuple old_key_tuple,
 				bool all_visible_cleared, bool new_all_visible_cleared)
 {
@@ -7590,6 +7706,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	xlrec.new_offnum = ItemPointerGetOffsetNumber(&newtup->t_self);
 	xlrec.new_xmax = HeapTupleHeaderGetRawXmax(newtup->t_data);
 
+	Assert(OffsetNumberIsValid(root_offnum));
+	xlrec.root_offnum = root_offnum;
+
 	bufflags = REGBUF_STANDARD;
 	if (init)
 		bufflags |= REGBUF_WILL_INIT;
@@ -8244,7 +8363,13 @@ heap_xlog_delete(XLogReaderState *record)
 			PageClearAllVisible(page);
 
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = target_tid;
+		if (!HeapTupleHeaderHasRootOffset(htup))
+		{
+			OffsetNumber	root_offnum;
+			root_offnum = heap_get_root_tuple(page, xlrec->offnum);
+			HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+		}
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8334,7 +8459,8 @@ heap_xlog_insert(XLogReaderState *record)
 		htup->t_hoff = xlhdr.t_hoff;
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
-		htup->t_ctid = target_tid;
+
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->offnum);
 
 		if (PageAddItem(page, (Item) htup, newlen, xlrec->offnum,
 						true, true) == InvalidOffsetNumber)
@@ -8469,8 +8595,8 @@ heap_xlog_multi_insert(XLogReaderState *record)
 			htup->t_hoff = xlhdr->t_hoff;
 			HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 			HeapTupleHeaderSetCmin(htup, FirstCommandId);
-			ItemPointerSetBlockNumber(&htup->t_ctid, blkno);
-			ItemPointerSetOffsetNumber(&htup->t_ctid, offnum);
+
+			HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 			offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 			if (offnum == InvalidOffsetNumber)
@@ -8606,7 +8732,7 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
 		/* Set forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetNextTid(htup, &newtid);
 
 		/* Mark the page as a candidate for pruning */
 		PageSetPrunable(page, XLogRecGetXid(record));
@@ -8739,13 +8865,17 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
-		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = newtid;
 
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
 
+		/*
+		 * Make sure the tuple is marked as the latest and root offset
+		 * information is restored.
+		 */
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->root_offnum);
+
 		if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED)
 			PageClearAllVisible(page);
 
@@ -8808,6 +8938,9 @@ heap_xlog_confirm(XLogReaderState *record)
 		 */
 		ItemPointerSet(&htup->t_ctid, BufferGetBlockNumber(buffer), offnum);
 
+		/* For newly inserted tuple, set root offset to itself. */
+		HeapTupleHeaderSetHeapLatest(htup, offnum);
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8871,11 +9004,17 @@ heap_xlog_lock(XLogReaderState *record)
 		 */
 		if (HEAP_XMAX_IS_LOCKED_ONLY(htup->t_infomask))
 		{
+			ItemPointerData	target_tid;
+
+			ItemPointerSet(&target_tid, BufferGetBlockNumber(buffer), offnum);
 			HeapTupleHeaderClearHotUpdated(htup);
 			/* Make sure there is no forward chain link in t_ctid */
-			ItemPointerSet(&htup->t_ctid,
-						   BufferGetBlockNumber(buffer),
-						   offnum);
+			if (!HeapTupleHeaderHasRootOffset(htup))
+			{
+				OffsetNumber	root_offnum;
+				root_offnum = heap_get_root_tuple(page, offnum);
+				HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+			}
 		}
 		HeapTupleHeaderSetXmax(htup, xlrec->locking_xid);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index 6529fe3..8052519 100644
--- a/src/backend/access/heap/hio.c
+++ b/src/backend/access/heap/hio.c
@@ -31,12 +31,20 @@
  * !!! EREPORT(ERROR) IS DISALLOWED HERE !!!  Must PANIC on failure!!!
  *
  * Note - caller must hold BUFFER_LOCK_EXCLUSIVE on the buffer.
+ *
+ * The caller can optionally tell us to set the root offset to the given value.
+ * Otherwise, the root offset is set to the offset of the new location once
+ * it's known. The former is used while updating an existing tuple, where the
+ * caller tells us the root line pointer of the chain. The latter is used
+ * during insertion of a new row, hence the root line pointer is set to the
+ * offset where this tuple is inserted.
  */
-void
+OffsetNumber
 RelationPutHeapTuple(Relation relation,
 					 Buffer buffer,
 					 HeapTuple tuple,
-					 bool token)
+					 bool token,
+					 OffsetNumber root_offnum)
 {
 	Page		pageHeader;
 	OffsetNumber offnum;
@@ -60,17 +68,24 @@ RelationPutHeapTuple(Relation relation,
 	ItemPointerSet(&(tuple->t_self), BufferGetBlockNumber(buffer), offnum);
 
 	/*
-	 * Insert the correct position into CTID of the stored tuple, too (unless
-	 * this is a speculative insertion, in which case the token is held in
-	 * CTID field instead)
+	 * Set block number and the root offset into CTID of the stored tuple, too
+	 * (unless this is a speculative insertion, in which case the token is held
+	 * in CTID field instead).
 	 */
 	if (!token)
 	{
 		ItemId		itemId = PageGetItemId(pageHeader, offnum);
 		Item		item = PageGetItem(pageHeader, itemId);
 
+		/* Copy t_ctid to set the correct block number. */
 		((HeapTupleHeader) item)->t_ctid = tuple->t_self;
+
+		if (!OffsetNumberIsValid(root_offnum))
+			root_offnum = offnum;
+		HeapTupleHeaderSetHeapLatest((HeapTupleHeader) item, root_offnum);
 	}
+
+	return root_offnum;
 }
 
 /*
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index d69a266..f54337c 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -55,6 +55,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
 
+static void heap_get_root_tuples_internal(Page page,
+				OffsetNumber target_offnum, OffsetNumber *root_offsets);
 
 /*
  * Optionally prune and repair fragmentation in the specified page.
@@ -553,6 +555,17 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
 		if (!HeapTupleHeaderIsHotUpdated(htup))
 			break;
 
+
+		/*
+		 * If the tuple was HOT-updated and the update was later
+		 * aborted, someone could mark this tuple as the last tuple
+		 * in the chain without clearing the HOT-updated flag. So we must
+		 * check whether this is the last tuple in the chain and stop
+		 * following the CTID, else we risk an infinite loop (though
+		 * prstate->marked[] currently protects against that).
+		 */
+		if (HeapTupleHeaderHasRootOffset(htup))
+			break;
 		/*
 		 * Advance to next chain member.
 		 */
@@ -726,27 +739,47 @@ heap_page_prune_execute(Buffer buffer,
 
 
 /*
- * For all items in this page, find their respective root line pointers.
- * If item k is part of a HOT-chain with root at item j, then we set
- * root_offsets[k - 1] = j.
+ * Either for all items in this page or for the given item, find their
+ * respective root line pointers.
+ *
+ * When target_offnum is a valid offset number, the caller is interested in
+ * just one item. In that case, the root line pointer is returned in
+ * root_offsets.
  *
- * The passed-in root_offsets array must have MaxHeapTuplesPerPage entries.
- * We zero out all unused entries.
+ * When target_offnum is InvalidOffsetNumber, the caller wants to know
+ * the root line pointers of all the items in this page. The root_offsets array
+ * must have MaxHeapTuplesPerPage entries in that case. If item k is part of a
+ * HOT-chain with root at item j, then we set root_offsets[k - 1] = j. We zero
+ * out all unused entries.
  *
  * The function must be called with at least share lock on the buffer, to
  * prevent concurrent prune operations.
  *
+ * This is not a cheap function since it must scan through all line pointers
+ * and tuples on the page in order to find the root line pointers. To minimize
+ * the cost, we return early when target_offnum is specified and its root line
+ * pointer has been found.
+ *
  * Note: The information collected here is valid only as long as the caller
  * holds a pin on the buffer. Once pin is released, a tuple might be pruned
  * and reused by a completely unrelated tuple.
+ *
+ * Note: This function must not be called inside a critical section because it
+ * internally calls HeapTupleHeaderGetUpdateXid, which may try to allocate
+ * heap memory somewhere down the stack. Memory allocation is disallowed in a
+ * critical section.
  */
-void
-heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+static void
+heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offsets)
 {
 	OffsetNumber offnum,
 				maxoff;
 
-	MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
+	if (OffsetNumberIsValid(target_offnum))
+		*root_offsets = InvalidOffsetNumber;
+	else
+		MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
 
 	maxoff = PageGetMaxOffsetNumber(page);
 	for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum = OffsetNumberNext(offnum))
@@ -774,9 +807,28 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 
 			/*
 			 * This is either a plain tuple or the root of a HOT-chain.
-			 * Remember it in the mapping.
+			 *
+			 * If target_offnum is specified and we found its mapping,
+			 * return.
 			 */
-			root_offsets[offnum - 1] = offnum;
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (target_offnum == offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember the mapping for any other item. The
+				 * root_offsets array may not even have room for them, so be
+				 * careful not to write past the array.
+				 */
+			}
+			else
+			{
+				/* Remember it in the mapping. */
+				root_offsets[offnum - 1] = offnum;
+			}
 
 			/* If it's not the start of a HOT-chain, we're done with it */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
@@ -817,15 +869,65 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 				!TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(htup)))
 				break;
 
-			/* Remember the root line pointer for this item */
-			root_offsets[nextoffnum - 1] = offnum;
+			/*
+			 * If target_offnum is specified and we found its mapping, return.
+			 */
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (nextoffnum == target_offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember the mapping for any other item. The
+				 * root_offsets array may not even have room for them, so be
+				 * careful not to write past the array.
+				 */
+			}
+			else
+			{
+				/* Remember the root line pointer for this item. */
+				root_offsets[nextoffnum - 1] = offnum;
+			}
 
 			/* Advance to next chain member, if any */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				break;
 
+			/*
+			 * If the tuple was HOT-updated and the update was later aborted,
+			 * someone could mark this tuple as the last tuple in the chain
+			 * and store the root offset in its CTID without clearing the
+			 * HOT-updated flag. So we must check whether the CTID actually
+			 * stores the root offset, and break to avoid an infinite loop.
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
 		}
 	}
 }
+
+/*
+ * Get root line pointer for the given tuple.
+ */
+OffsetNumber
+heap_get_root_tuple(Page page, OffsetNumber target_offnum)
+{
+	OffsetNumber offnum = InvalidOffsetNumber;
+	heap_get_root_tuples_internal(page, target_offnum, &offnum);
+	return offnum;
+}
+
+/*
+ * Get root line pointers for all tuples in the page
+ */
+void
+heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+{
+	heap_get_root_tuples_internal(page, InvalidOffsetNumber,
+			root_offsets);
+}
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index d7f65a5..2d3ae9b 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -421,14 +421,18 @@ rewrite_heap_tuple(RewriteState state,
 	 */
 	if (!((old_tuple->t_data->t_infomask & HEAP_XMAX_INVALID) ||
 		  HeapTupleHeaderIsOnlyLocked(old_tuple->t_data)) &&
-		!(ItemPointerEquals(&(old_tuple->t_self),
-							&(old_tuple->t_data->t_ctid))))
+		!(HeapTupleHeaderIsHeapLatest(old_tuple->t_data, &old_tuple->t_self)))
 	{
 		OldToNewMapping mapping;
 
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
-		hashkey.tid = old_tuple->t_data->t_ctid;
+
+		/*
+		 * We've already checked that this is not the last tuple in the chain,
+		 * so fetch the next TID in the chain.
+		 */
+		HeapTupleHeaderGetNextTid(old_tuple->t_data, &hashkey.tid);
 
 		mapping = (OldToNewMapping)
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -441,7 +445,7 @@ rewrite_heap_tuple(RewriteState state,
 			 * set the ctid of this tuple to point to the new location, and
 			 * insert it right away.
 			 */
-			new_tuple->t_data->t_ctid = mapping->new_tid;
+			HeapTupleHeaderSetNextTid(new_tuple->t_data, &mapping->new_tid);
 
 			/* We don't need the mapping entry anymore */
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -527,7 +531,7 @@ rewrite_heap_tuple(RewriteState state,
 				new_tuple = unresolved->tuple;
 				free_new = true;
 				old_tid = unresolved->old_tid;
-				new_tuple->t_data->t_ctid = new_tid;
+				HeapTupleHeaderSetNextTid(new_tuple->t_data, &new_tid);
 
 				/*
 				 * We don't need the hash entry anymore, but don't free its
@@ -733,7 +737,12 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		onpage_tup->t_ctid = tup->t_self;
+		/*
+		 * Set t_ctid just to ensure the block number is copied correctly, but
+		 * then immediately mark the tuple as the latest.
+		 */
+		HeapTupleHeaderSetNextTid(onpage_tup, &tup->t_self);
+		HeapTupleHeaderSetHeapLatest(onpage_tup, newoff);
 	}
 
 	/* If heaptup is a private copy, release it. */
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 108060a..c3f1873 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -785,7 +785,8 @@ retry:
 			  DirtySnapshot.speculativeToken &&
 			  TransactionIdPrecedes(GetCurrentTransactionId(), xwait))))
 		{
-			ctid_wait = tup->t_data->t_ctid;
+			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
+				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 920b120..02f3f32 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -2628,7 +2628,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		 * As above, it should be safe to examine xmax and t_ctid without the
 		 * buffer content lock, because they can't be changing.
 		 */
-		if (ItemPointerEquals(&tuple.t_self, &tuple.t_data->t_ctid))
+		if (HeapTupleHeaderIsHeapLatest(tuple.t_data, &tuple.t_self))
 		{
 			/* deleted, so forget about it */
 			ReleaseBuffer(buffer);
@@ -2636,7 +2636,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		}
 
 		/* updated, so look at the updated row */
-		tuple.t_self = tuple.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tuple.t_data, &tuple.t_self);
 		/* updated row should have xmin matching this xmax */
 		priorXmax = HeapTupleHeaderGetUpdateXid(tuple.t_data);
 		ReleaseBuffer(buffer);
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 7e85510..5540e12 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -190,6 +190,7 @@ extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
+extern OffsetNumber heap_get_root_tuple(Page page, OffsetNumber target_offnum);
 extern void heap_get_root_tuples(Page page, OffsetNumber *root_offsets);
 
 /* in heap/syncscan.c */
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index b285f17..e6019d5 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -193,6 +193,8 @@ typedef struct xl_heap_update
 	uint8		flags;
 	TransactionId new_xmax;		/* xmax of the new tuple */
 	OffsetNumber new_offnum;	/* new tuple's offset */
+	OffsetNumber root_offnum;	/* offset of the root line pointer in case of
+								   HOT or WARM update */
 
 	/*
 	 * If XLOG_HEAP_CONTAINS_OLD_TUPLE or XLOG_HEAP_CONTAINS_OLD_KEY flags are
@@ -200,7 +202,7 @@ typedef struct xl_heap_update
 	 */
 } xl_heap_update;
 
-#define SizeOfHeapUpdate	(offsetof(xl_heap_update, new_offnum) + sizeof(OffsetNumber))
+#define SizeOfHeapUpdate	(offsetof(xl_heap_update, root_offnum) + sizeof(OffsetNumber))
 
 /*
  * This is what we need to know about vacuum page cleanup/redirect
diff --git a/src/include/access/hio.h b/src/include/access/hio.h
index 2824f23..921cb37 100644
--- a/src/include/access/hio.h
+++ b/src/include/access/hio.h
@@ -35,8 +35,8 @@ typedef struct BulkInsertStateData
 }	BulkInsertStateData;
 
 
-extern void RelationPutHeapTuple(Relation relation, Buffer buffer,
-					 HeapTuple tuple, bool token);
+extern OffsetNumber RelationPutHeapTuple(Relation relation, Buffer buffer,
+					 HeapTuple tuple, bool token, OffsetNumber root_offnum);
 extern Buffer RelationGetBufferForTuple(Relation relation, Size len,
 						  Buffer otherBuffer, int options,
 						  BulkInsertState bistate,
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 7b6285d..24433c7 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,13 +260,19 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x1800 are available */
+/* bit 0x0800 is available */
+#define HEAP_LATEST_TUPLE		0x1000	/*
+										 * This is the last tuple in chain and
+										 * ip_posid points to the root line
+										 * pointer
+										 */
 #define HEAP_KEYS_UPDATED		0x2000	/* tuple was updated and key cols
 										 * modified, or tuple deleted */
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xE000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+
 
 /*
  * HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins.  It is
@@ -504,6 +510,43 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+/*
+ * Mark this as the last tuple in the HOT chain. Before PostgreSQL v10 we
+ * stored the TID of the tuple itself in the t_ctid field to mark the end
+ * of the chain. Starting with v10, we instead set the flag
+ * HEAP_LATEST_TUPLE to identify the last tuple, and store the root line
+ * pointer of the HOT chain in the t_ctid field.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderSetHeapLatest(tup, offnum) \
+do { \
+	AssertMacro(OffsetNumberIsValid(offnum)); \
+	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE; \
+	ItemPointerSetOffsetNumber(&(tup)->t_ctid, (offnum)); \
+} while (0)
+
+#define HeapTupleHeaderClearHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 &= ~HEAP_LATEST_TUPLE \
+)
+
+/*
+ * Starting from PostgreSQL 10, the latest tuple in an update chain has
+ * HEAP_LATEST_TUPLE set; but tuples upgraded from earlier versions do not.
+ * For those, we determine whether a tuple is latest by testing that its t_ctid
+ * points to itself.
+ *
+ * Note: beware of multiple evaluations of "tup" and "tid" arguments.
+ */
+#define HeapTupleHeaderIsHeapLatest(tup, tid) \
+( \
+  (((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0) || \
+  ((ItemPointerGetBlockNumber(&(tup)->t_ctid) == ItemPointerGetBlockNumber(tid)) && \
+   (ItemPointerGetOffsetNumber(&(tup)->t_ctid) == ItemPointerGetOffsetNumber(tid))) \
+)
+
+
 #define HeapTupleHeaderSetHeapOnly(tup) \
 ( \
   (tup)->t_infomask2 |= HEAP_ONLY_TUPLE \
@@ -542,6 +585,56 @@ do { \
 
 
 /*
+ * Set the t_ctid chain and also clear the HEAP_LATEST_TUPLE flag since we
+ * now have a new tuple in the chain and this is no longer the last tuple of
+ * the chain.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderSetNextTid(tup, tid) \
+do { \
+		ItemPointerCopy((tid), &((tup)->t_ctid)); \
+		HeapTupleHeaderClearHeapLatest((tup)); \
+} while (0)
+
+/*
+ * Get TID of next tuple in the update chain. Caller must have checked that
+ * we are not already at the end of the chain because in that case t_ctid may
+ * actually store the root line pointer of the HOT chain.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderGetNextTid(tup, next_ctid) \
+do { \
+	AssertMacro(!((tup)->t_infomask2 & HEAP_LATEST_TUPLE)); \
+	ItemPointerCopy(&(tup)->t_ctid, (next_ctid)); \
+} while (0)
+
+/*
+ * Get the root line pointer of the HOT chain. The caller should have confirmed
+ * that the root offset is cached before calling this macro.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderGetRootOffset(tup) \
+( \
+	AssertMacro(((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0), \
+	ItemPointerGetOffsetNumber(&(tup)->t_ctid) \
+)
+
+/*
+ * Return whether the tuple has a cached root offset.  We don't use
+ * HeapTupleHeaderIsHeapLatest because that one also considers the case of
+ * t_ctid pointing to itself, for tuples migrated from pre v10 clusters. Here
+ * we are only interested in the tuples which are marked with HEAP_LATEST_TUPLE
+ * flag.
+ */
+#define HeapTupleHeaderHasRootOffset(tup) \
+( \
+	((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0 \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
-- 
2.9.3 (Apple Git-75)

0002-Free-3-bits-in-ip_posid-field-of-the-ItemPointer_v25.patchapplication/octet-stream; name=0002-Free-3-bits-in-ip_posid-field-of-the-ItemPointer_v25.patchDownload
From 046a14badc3f86b1d3a2791db327a61ba51a47e9 Mon Sep 17 00:00:00 2001
From: Pavan Deolasee <pavan.deolasee@gmail.com>
Date: Wed, 29 Mar 2017 10:44:01 +0530
Subject: [PATCH 2/4] Free 3 bits in the ip_posid field of ItemPointerData.

These bits can then be used to store other information. Right now, only
index methods use them, to store the WARM/CLEAR property of an index
pointer.
---
 src/include/access/ginblock.h     |  3 ++-
 src/include/access/htup_details.h |  2 +-
 src/include/storage/itemptr.h     | 30 +++++++++++++++++++++++++++---
 src/include/storage/off.h         | 11 ++++++++++-
 4 files changed, 40 insertions(+), 6 deletions(-)

diff --git a/src/include/access/ginblock.h b/src/include/access/ginblock.h
index 438912c..316ab65 100644
--- a/src/include/access/ginblock.h
+++ b/src/include/access/ginblock.h
@@ -135,7 +135,8 @@ typedef struct GinMetaPageData
 	(ItemPointerGetBlockNumberNoCheck(pointer))
 
 #define GinItemPointerGetOffsetNumber(pointer) \
-	(ItemPointerGetOffsetNumberNoCheck(pointer))
+	(ItemPointerGetOffsetNumberNoCheck(pointer) | \
+	 (ItemPointerGetFlags(pointer) << OffsetNumberBits))
 
 #define GinItemPointerSetBlockNumber(pointer, blkno) \
 	(ItemPointerSetBlockNumber((pointer), (blkno)))
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 24433c7..4d614b7 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -288,7 +288,7 @@ struct HeapTupleHeaderData
  * than MaxOffsetNumber, so that it can be distinguished from a valid
  * offset number in a regular item pointer.
  */
-#define SpecTokenOffsetNumber		0xfffe
+#define SpecTokenOffsetNumber		OffsetNumberPrev(OffsetNumberMask)
 
 /*
  * HeapTupleHeader accessor macros
diff --git a/src/include/storage/itemptr.h b/src/include/storage/itemptr.h
index c21d2ad..74eed4e 100644
--- a/src/include/storage/itemptr.h
+++ b/src/include/storage/itemptr.h
@@ -57,7 +57,7 @@ typedef ItemPointerData *ItemPointer;
  *		True iff the disk item pointer is not NULL.
  */
 #define ItemPointerIsValid(pointer) \
-	((bool) (PointerIsValid(pointer) && ((pointer)->ip_posid != 0)))
+	((bool) (PointerIsValid(pointer) && (((pointer)->ip_posid & OffsetNumberMask) != 0)))
 
 /*
  * ItemPointerGetBlockNumberNoCheck
@@ -84,7 +84,7 @@ typedef ItemPointerData *ItemPointer;
  */
 #define ItemPointerGetOffsetNumberNoCheck(pointer) \
 ( \
-	(pointer)->ip_posid \
+	((pointer)->ip_posid & OffsetNumberMask) \
 )
 
 /*
@@ -98,6 +98,30 @@ typedef ItemPointerData *ItemPointer;
 )
 
 /*
+ * Get the flags stored in high order bits in the OffsetNumber.
+ */
+#define ItemPointerGetFlags(pointer) \
+( \
+	((pointer)->ip_posid & ~OffsetNumberMask) >> OffsetNumberBits \
+)
+
+/*
+ * Set the flag bits. We first left-shift since flags are defined starting 0x01
+ */
+#define ItemPointerSetFlags(pointer, flags) \
+( \
+	((pointer)->ip_posid |= ((flags) << OffsetNumberBits)) \
+)
+
+/*
+ * Clear all flags.
+ */
+#define ItemPointerClearFlags(pointer) \
+( \
+	((pointer)->ip_posid &= OffsetNumberMask) \
+)
+
+/*
  * ItemPointerSet
  *		Sets a disk item pointer to the specified block and offset.
  */
@@ -105,7 +129,7 @@ typedef ItemPointerData *ItemPointer;
 ( \
 	AssertMacro(PointerIsValid(pointer)), \
 	BlockIdSet(&((pointer)->ip_blkid), blockNumber), \
-	(pointer)->ip_posid = offNum \
+	(pointer)->ip_posid = (offNum) \
 )
 
 /*
diff --git a/src/include/storage/off.h b/src/include/storage/off.h
index fe8638f..f058fe1 100644
--- a/src/include/storage/off.h
+++ b/src/include/storage/off.h
@@ -26,7 +26,16 @@ typedef uint16 OffsetNumber;
 #define InvalidOffsetNumber		((OffsetNumber) 0)
 #define FirstOffsetNumber		((OffsetNumber) 1)
 #define MaxOffsetNumber			((OffsetNumber) (BLCKSZ / sizeof(ItemIdData)))
-#define OffsetNumberMask		(0xffff)		/* valid uint16 bits */
+
+/*
+ * The biggest BLCKSZ we support is 32kB, and each ItemId takes 6 bytes.
+ * That limits the number of line pointers in a page to 32kB/6B = 5461.
+ * Therefore, 13 bits in OffsetNumber are enough to represent all valid
+ * on-disk line pointers.  Hence, we can reserve the high-order bits in
+ * OffsetNumber for other purposes.
+ */
+#define OffsetNumberBits		13
+#define OffsetNumberMask		((((uint16) 1) << OffsetNumberBits) - 1)
 
 /* ----------------
  *		support macros
-- 
2.9.3 (Apple Git-75)

0003-Main-WARM-patch_v25.patchapplication/octet-stream; name=0003-Main-WARM-patch_v25.patchDownload
From 91ca31e71487453711cd5cab85433a4ccb7268f8 Mon Sep 17 00:00:00 2001
From: Pavan Deolasee <pavan.deolasee@gmail.com>
Date: Sun, 26 Mar 2017 15:03:45 +0530
Subject: [PATCH 3/4] Main WARM patch.

We perform a WARM update if the update modifies at least one index key
but not all of them, and the heap block has enough free space to hold
the new version of the tuple.

The update works pretty much the same way as a HOT update, but each
index whose key values have changed must receive another index entry,
pointing to the same root of the HOT chain. Chains that may have more
than one index pointer in at least one index are called WARM chains.
Since there are now two index pointers to the same chain, we must
recheck whether a given index pointer should or should not see the
tuple. HOT pruning and the other techniques remain the same.

WARM chains must subsequently be cleaned up by removing the additional
index pointers. Once cleaned up, they can be WARM updated again and
index-only scans will work.

To ensure that we don't do wasteful work, we only do a WARM update if
fewer than 50% of the indexes need updates. Above that, a WARM update
probably does not make sense, because most indexes will receive an
update anyway and the cleanup cost will be high.

A new table-level option (enable_warm) is added, currently defaulting to
ON. When the option is ON, WARM updates are allowed on the table. The
user may set enable_warm to OFF, but once it has been turned ON we don't
allow turning it OFF again. This is necessary because once WARM is
enabled, the table may contain WARM chains and WARM index pointers, and
those must be handled correctly.
---
 contrib/bloom/blutils.c                     |   1 +
 contrib/bloom/blvacuum.c                    |   2 +-
 src/backend/access/brin/brin.c              |   1 +
 src/backend/access/common/reloptions.c      |  13 +-
 src/backend/access/gin/ginvacuum.c          |   3 +-
 src/backend/access/gist/gist.c              |   1 +
 src/backend/access/gist/gistvacuum.c        |   3 +-
 src/backend/access/hash/hash.c              |  18 +-
 src/backend/access/hash/hashsearch.c        |   5 +
 src/backend/access/heap/README.WARM         | 308 +++++++++
 src/backend/access/heap/heapam.c            | 776 ++++++++++++++++++++---
 src/backend/access/heap/pruneheap.c         |   9 +-
 src/backend/access/heap/rewriteheap.c       |  12 +-
 src/backend/access/heap/tuptoaster.c        |   3 +-
 src/backend/access/index/genam.c            |   2 +
 src/backend/access/index/indexam.c          |  95 ++-
 src/backend/access/nbtree/nbtinsert.c       | 228 ++++---
 src/backend/access/nbtree/nbtpage.c         |  56 +-
 src/backend/access/nbtree/nbtree.c          |  76 ++-
 src/backend/access/nbtree/nbtutils.c        | 100 +++
 src/backend/access/nbtree/nbtxlog.c         |  27 +-
 src/backend/access/rmgrdesc/heapdesc.c      |  26 +-
 src/backend/access/rmgrdesc/nbtdesc.c       |   4 +-
 src/backend/access/spgist/spgutils.c        |   1 +
 src/backend/access/spgist/spgvacuum.c       |  12 +-
 src/backend/catalog/index.c                 |  71 ++-
 src/backend/catalog/indexing.c              |  60 +-
 src/backend/catalog/system_views.sql        |   4 +-
 src/backend/commands/constraint.c           |   7 +-
 src/backend/commands/copy.c                 |   3 +
 src/backend/commands/indexcmds.c            |  17 +-
 src/backend/commands/tablecmds.c            |  14 +-
 src/backend/commands/vacuumlazy.c           | 654 ++++++++++++++++++-
 src/backend/executor/execIndexing.c         |  21 +-
 src/backend/executor/execReplication.c      |  30 +-
 src/backend/executor/nodeBitmapHeapscan.c   |  21 +-
 src/backend/executor/nodeIndexscan.c        |   4 +-
 src/backend/executor/nodeModifyTable.c      |  27 +-
 src/backend/postmaster/pgstat.c             |   7 +-
 src/backend/replication/logical/decode.c    |  13 +-
 src/backend/storage/page/bufpage.c          |  23 +
 src/backend/utils/adt/pgstatfuncs.c         |  31 +
 src/backend/utils/cache/relcache.c          | 113 +++-
 src/backend/utils/time/combocid.c           |   4 +-
 src/backend/utils/time/tqual.c              |  24 +-
 src/include/access/amapi.h                  |  18 +
 src/include/access/genam.h                  |  22 +-
 src/include/access/heapam.h                 |  30 +-
 src/include/access/heapam_xlog.h            |  24 +-
 src/include/access/htup_details.h           | 116 +++-
 src/include/access/nbtree.h                 |  21 +-
 src/include/access/nbtxlog.h                |  10 +-
 src/include/access/relscan.h                |   5 +-
 src/include/catalog/index.h                 |   7 +
 src/include/catalog/pg_proc.h               |   4 +
 src/include/commands/progress.h             |   1 +
 src/include/executor/executor.h             |   1 +
 src/include/executor/nodeIndexscan.h        |   1 -
 src/include/nodes/execnodes.h               |   1 +
 src/include/pgstat.h                        |   4 +-
 src/include/storage/bufpage.h               |   2 +
 src/include/utils/rel.h                     |  19 +
 src/include/utils/relcache.h                |   5 +-
 src/test/regress/expected/alter_generic.out |   4 +-
 src/test/regress/expected/rules.out         |  12 +-
 src/test/regress/expected/warm.out          | 930 ++++++++++++++++++++++++++++
 src/test/regress/parallel_schedule          |   2 +
 src/test/regress/sql/warm.sql               | 360 +++++++++++
 68 files changed, 4150 insertions(+), 379 deletions(-)
 create mode 100644 src/backend/access/heap/README.WARM
 create mode 100644 src/test/regress/expected/warm.out
 create mode 100644 src/test/regress/sql/warm.sql

diff --git a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c
index f2eda67..b356e2b 100644
--- a/contrib/bloom/blutils.c
+++ b/contrib/bloom/blutils.c
@@ -142,6 +142,7 @@ blhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/contrib/bloom/blvacuum.c b/contrib/bloom/blvacuum.c
index 04abd0f..ff50361 100644
--- a/contrib/bloom/blvacuum.c
+++ b/contrib/bloom/blvacuum.c
@@ -88,7 +88,7 @@ blbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 		while (itup < itupEnd)
 		{
 			/* Do we have to delete this tuple? */
-			if (callback(&itup->heapPtr, callback_state))
+			if (callback(&itup->heapPtr, false, callback_state) == IBDCR_DELETE)
 			{
 				/* Yes; adjust count of tuples that will be left on page */
 				BloomPageGetOpaque(page)->maxoff--;
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index 649f348..a0fd203 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -119,6 +119,7 @@ brinhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index 6d1f22f..6b5534f 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -137,6 +137,15 @@ static relopt_bool boolRelOpts[] =
 		},
 		false
 	},
+	{
+		{
+			"enable_warm",
+			"Table supports WARM updates",
+			RELOPT_KIND_HEAP,
+			ShareUpdateExclusiveLock
+		},
+		HEAP_DEFAULT_ENABLE_WARM
+	},
 	/* list terminator */
 	{{NULL}}
 };
@@ -1351,7 +1360,9 @@ default_reloptions(Datum reloptions, bool validate, relopt_kind kind)
 		{"user_catalog_table", RELOPT_TYPE_BOOL,
 		offsetof(StdRdOptions, user_catalog_table)},
 		{"parallel_workers", RELOPT_TYPE_INT,
-		offsetof(StdRdOptions, parallel_workers)}
+		offsetof(StdRdOptions, parallel_workers)},
+		{"enable_warm", RELOPT_TYPE_BOOL,
+		offsetof(StdRdOptions, enable_warm)}
 	};
 
 	options = parseRelOptions(reloptions, validate, kind, &numoptions);
diff --git a/src/backend/access/gin/ginvacuum.c b/src/backend/access/gin/ginvacuum.c
index 26c077a..46ed4fe 100644
--- a/src/backend/access/gin/ginvacuum.c
+++ b/src/backend/access/gin/ginvacuum.c
@@ -56,7 +56,8 @@ ginVacuumItemPointers(GinVacuumState *gvs, ItemPointerData *items,
 	 */
 	for (i = 0; i < nitem; i++)
 	{
-		if (gvs->callback(items + i, gvs->callback_state))
+		if (gvs->callback(items + i, false, gvs->callback_state) ==
+				IBDCR_DELETE)
 		{
 			gvs->result->tuples_removed += 1;
 			if (!tmpitems)
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c
index 6593771..843389b 100644
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -94,6 +94,7 @@ gisthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/gist/gistvacuum.c b/src/backend/access/gist/gistvacuum.c
index 77d9d12..0955db6 100644
--- a/src/backend/access/gist/gistvacuum.c
+++ b/src/backend/access/gist/gistvacuum.c
@@ -202,7 +202,8 @@ gistbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 				iid = PageGetItemId(page, i);
 				idxtuple = (IndexTuple) PageGetItem(page, iid);
 
-				if (callback(&(idxtuple->t_tid), callback_state))
+				if (callback(&(idxtuple->t_tid), false, callback_state) ==
+						IBDCR_DELETE)
 					todelete[ntodelete++] = i;
 				else
 					stats->num_index_tuples += 1;
diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c
index b835f77..571dee8 100644
--- a/src/backend/access/hash/hash.c
+++ b/src/backend/access/hash/hash.c
@@ -75,6 +75,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->ambuild = hashbuild;
 	amroutine->ambuildempty = hashbuildempty;
 	amroutine->aminsert = hashinsert;
+	amroutine->amwarminsert = NULL;
 	amroutine->ambulkdelete = hashbulkdelete;
 	amroutine->amvacuumcleanup = hashvacuumcleanup;
 	amroutine->amcanreturn = NULL;
@@ -92,6 +93,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -823,6 +825,7 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 			IndexTuple	itup;
 			Bucket		bucket;
 			bool		kill_tuple = false;
+			IndexBulkDeleteCallbackResult	result;
 
 			itup = (IndexTuple) PageGetItem(page,
 											PageGetItemId(page, offno));
@@ -832,13 +835,18 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 			 * To remove the dead tuples, we strictly want to rely on results
 			 * of callback function.  refer btvacuumpage for detailed reason.
 			 */
-			if (callback && callback(htup, callback_state))
+			if (callback)
 			{
-				kill_tuple = true;
-				if (tuples_removed)
-					*tuples_removed += 1;
+				result = callback(htup, false, callback_state);
+				if (result == IBDCR_DELETE)
+				{
+					kill_tuple = true;
+					if (tuples_removed)
+						*tuples_removed += 1;
+				}
 			}
-			else if (split_cleanup)
+
+			if (!kill_tuple && split_cleanup)
 			{
 				/* delete the tuples that are moved by split. */
 				bucket = _hash_hashkey2bucket(_hash_get_indextuple_hashkey(itup),
diff --git a/src/backend/access/hash/hashsearch.c b/src/backend/access/hash/hashsearch.c
index 2d92049..330ccc5 100644
--- a/src/backend/access/hash/hashsearch.c
+++ b/src/backend/access/hash/hashsearch.c
@@ -59,6 +59,8 @@ _hash_next(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
 	return true;
 }
 
@@ -367,6 +369,9 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
+
 	return true;
 }
 
diff --git a/src/backend/access/heap/README.WARM b/src/backend/access/heap/README.WARM
new file mode 100644
index 0000000..7c93a70
--- /dev/null
+++ b/src/backend/access/heap/README.WARM
@@ -0,0 +1,308 @@
+src/backend/access/heap/README.WARM
+
+Write Amplification Reduction Method (WARM)
+===========================================
+
+The Heap Only Tuple (HOT) feature greatly reduces redundant index
+entries and allows re-use of the dead space occupied by previously
+updated or deleted tuples (see src/backend/access/heap/README.HOT).
+
+One of the necessary conditions for a HOT update is that the update
+must not change any column used in any of the indexes on the table.
+This condition is sometimes hard to meet, especially for complex
+workloads with several indexes on large yet frequently updated tables.
+Worse, even when only one or two indexed columns are updated, a
+regular non-HOT update still inserts a new index entry in every
+index on the table, irrespective of whether that index's key has
+changed or not.
+
+WARM is a technique devised to address these problems.
+
+
+Update Chains With Multiple Index Entries Pointing to the Root
+--------------------------------------------------------------
+
+When a non-HOT update is caused by an index key change, a new index
+entry must be inserted for the changed index. But if the index key
+hasn't changed for other indexes, we don't really need to insert new
+entries there. Even though the existing index entry points to the old
+tuple, the new tuple is reachable via the t_ctid chain. To keep things
+simple, a WARM update requires that the heap block have enough
+space to store the new version of the tuple, the same requirement as
+for HOT updates.
+
+In WARM, we ensure that every index entry always points to the root of
+the WARM chain. In fact, a WARM chain looks exactly like a HOT chain
+except that there can be multiple index entries pointing to the root
+of the chain. So when a WARM update inserts a new entry into an index
+for the updated tuple, the new entry is made to point to the root of
+the WARM chain.
+
+For example, consider a table with two columns and an index on each
+column. When a tuple is first inserted into the table, each index has
+exactly one entry pointing to the tuple.
+
+	lp [1]
+	[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's entry (aaaa) also points to 1
+
+Now if the tuple's second column is updated and there is room on the
+page, we perform a WARM update. Index1 does not get any new entry,
+and Index2's new entry still points to the root tuple of the
+chain.
+
+	lp [1]  [2]
+	[1111, aaaa]->[1111, bbbb]
+
+	Index1's entry (1111) points to 1
+	Index2's old entry (aaaa) points to 1
+	Index2's new entry (bbbb) also points to 1
+
+"An update chain that has more than one index entry pointing to its
+root line pointer is called a WARM chain, and the action that creates
+a WARM chain is called a WARM update."
+
+Since all indexes always point to the root of the WARM chain, even
+when there is more than one index entry, WARM chains can be pruned and
+dead tuples can be removed without needing any corresponding index
+cleanup.
+
+While this solves the problem of pruning dead tuples from a HOT/WARM
+chain, it also opens up a new technical challenge because now we have a
+situation where a heap tuple is reachable from multiple index entries,
+each having a different index key. While MVCC still ensures that only
+valid tuples are returned, a tuple with a wrong index key may be
+returned because of wrong index entries. In the above example, tuple
+[1111, bbbb] is reachable from both keys (aaaa) as well as (bbbb). For
+this reason, tuples returned from a WARM chain must always be rechecked
+for an index key match.
+
+Recheck Index Key Against Heap Tuple
+------------------------------------
+
+Since every index AM has its own notion of index tuples, each AM must
+implement its own method to recheck heap tuples. For example, a hash
+index stores the hash value of the column, so the recheck routine for
+the hash AM must first compute the hash value of the heap attribute
+and then compare it against the value stored in the index tuple.
+
+The patch currently implements recheck routines for hash and btree
+indexes. If the table has an index that doesn't provide a recheck
+routine, WARM updates are disabled on that table.
+
+Problem With Duplicate (key, ctid) Index Entries
+------------------------------------------------
+
+The index-key recheck logic works as long as there are no duplicate
+index keys pointing to the same WARM chain. Otherwise, the same
+valid tuple would be reachable via multiple index entries, each
+satisfying the index key check. In the above example, if the tuple
+[1111, bbbb] is again updated to [1111, aaaa] and we insert a new
+index entry (aaaa) pointing to the root line pointer, we end up with
+the following structure:
+
+	lp [1]  [2]  [3]
+	[1111, aaaa]->[1111, bbbb]->[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's oldest entry (aaaa) points to 1
+	Index2's old entry (bbbb) also points to 1
+	Index2's new entry (aaaa) also points to 1
+
+We must solve this problem to ensure that the same tuple is not
+reachable via multiple index pointers. There are a couple of ways to
+address this issue:
+
+1. Do not allow WARM update to a tuple from a WARM chain. This
+guarantees that there can never be duplicate index entries to the same
+root line pointer because we must have checked for old and new index
+keys while doing the first WARM update.
+
+2. Do not allow duplicate (key, ctid) index pointers. In the above
+example, since (aaaa, 1) already exists in the index, we must not insert
+a duplicate index entry.
+
+The patch currently implements option 1, i.e. we do not WARM update a
+tuple that already belongs to a WARM chain. HOT updates are fine
+because they do not add a new index entry.
+
+Even with this restriction, WARM is a significant improvement because
+the number of regular updates can be reduced by up to half.
+
+Expression and Partial Indexes
+------------------------------
+
+Expressions may evaluate to the same value even if the underlying
+column values have changed. A simple example is an index on
+"lower(col)", which returns the same value if the new heap value
+differs only in case. So we cannot rely solely on the heap column
+check to decide whether or not to insert a new index entry for an
+expression index. Similarly, for partial indexes, the predicate
+expression must be evaluated to decide whether or not a new index
+entry is needed when columns referenced in the predicate change.
+
+(None of this is currently implemented; we simply disallow WARM
+updates if a column used in an expression index or index predicate
+has changed.)
+
+
+Efficiently Finding the Root Line Pointer
+-----------------------------------------
+
+During a WARM update, we must be able to find the root line pointer of
+the tuple being updated. Normally, the t_ctid field in the heap tuple
+header is used to find the next tuple in the update chain. But the
+tuple that we are updating must be the last tuple in the update
+chain, and in that case the t_ctid field conventionally points to the
+tuple itself. So in theory, we could use t_ctid to store additional
+information in the last tuple of the update chain, provided the fact
+that the tuple is the last one is recorded elsewhere.
+
+We now utilize another bit from t_infomask2 to explicitly identify that
+this is the last tuple in the update chain.
+
+HEAP_LATEST_TUPLE - When this bit is set, the tuple is the last tuple in
+the update chain. The OffsetNumber part of t_ctid points to the root
+line pointer of the chain when HEAP_LATEST_TUPLE flag is set.
+
+If the UPDATE operation is aborted, the last tuple in the update chain
+becomes dead, and the tuple that remains the last valid one in the
+chain no longer carries the root line pointer information. In such
+rare cases, the root line pointer must be found the hard way, by
+scanning the entire heap page.
+
+Tracking WARM Chains
+--------------------
+
+When a tuple is WARM updated, the old, the new, and every subsequent
+tuple in the chain are marked with a special HEAP_WARM_UPDATED flag.
+We use the last remaining bit in t_infomask2 to store this information.
+
+When a tuple is returned from a WARM chain, the caller must do additional
+checks to ensure that the tuple matches the index key. Even if the tuple
+precedes the WARM update in the chain, it must still be rechecked for the
+index key match (the case where an old tuple is returned via the new index
+key). So we must follow the update chain to the end every time to check
+whether this is a WARM chain.
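A minimal sketch of this recheck rule, with the update chain modeled as a plain array of HEAP_WARM_UPDATED flags (the representation is purely illustrative; the real code walks tuples on a heap page):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Each element represents one tuple in an update chain; true means its
 * HEAP_WARM_UPDATED bit is set. */
static bool
demo_chain_needs_recheck(const bool *warm_updated, size_t chain_len)
{
    /* The whole chain must be walked: even a tuple preceding the WARM
     * update may be returned via the new index key, so a WARM_UPDATED
     * tuple anywhere in the chain forces an index-key recheck. */
    for (size_t i = 0; i < chain_len; i++)
        if (warm_updated[i])
            return true;
    return false;
}
```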
+
+Converting WARM chains back to HOT chains (VACUUM ?)
+----------------------------------------------------
+
+The current implementation of WARM allows only one WARM update per
+chain. This simplifies the design and addresses certain issues around
+duplicate key scans. But it also implies that the benefit of WARM is
+capped at 50% of updates, which is still significant; if we could
+convert WARM chains back to normal HOT status, we could do far more
+WARM updates.
+
+A distinct property of a WARM chain is that at least one index has more
+than one live index entry pointing to the root of the chain. In other
+words, if we can remove the duplicate entry from every such index, or
+conclusively prove that there are no duplicate index entries for the
+root line pointer, the chain can again be marked as HOT.
+
+Here is one idea:
+
+A WARM chain has two parts, separated by the tuple that caused the WARM
+update. All tuples within each part have matching index keys, but certain
+index keys may not match between the two parts. Let's say we mark heap
+tuples in the second part with a special HEAP_WARM_TUPLE flag. Similarly,
+the new index entries caused by the first WARM update are marked with an
+INDEX_WARM_POINTER flag.
+
+So the WARM chain has two distinct parts: a first part in which none of
+the tuples have the HEAP_WARM_TUPLE flag set, and a second part in which
+every tuple has the flag set. Each part satisfies the HOT property on its
+own, i.e. all tuples have the same values for indexed columns, but the two
+parts are separated by the WARM update, which breaks the HOT property for
+one or more indexes.
+
+Heap chain: [1] [2] [3] [4]
+			[aaaa, 1111] -> [aaaa, 1111] -> [bbbb, 1111]W -> [bbbb, 1111]W
+
+Index1: 	(aaaa) points to 1 (satisfies only tuples without W)
+			(bbbb)W points to 1 (satisfies only tuples marked with W)
+
+Index2:		(1111) points to 1 (satisfies tuples with and without W)
+
+
+For indexes with both pointers, a heap tuple without the HEAP_WARM_TUPLE
+flag is reachable from the index pointer without the INDEX_WARM_POINTER
+flag, and a tuple with the HEAP_WARM_TUPLE flag is reachable from the
+pointer with INDEX_WARM_POINTER. But for indexes which did not create a
+new entry, tuples both with and without the HEAP_WARM_TUPLE flag are
+reachable from the original index pointer, which doesn't have the
+INDEX_WARM_POINTER flag (there is no INDEX_WARM_POINTER entry in such
+indexes).
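The reachability rules in the example above can be condensed into a single predicate. The function and parameter names below are hypothetical, chosen only to mirror the flags in the diagram:

```c
#include <assert.h>
#include <stdbool.h>

/* index_has_warm_ptr: this index received a second (WARM) entry.
 * ptr_is_warm:        the index entry carries INDEX_WARM_POINTER.
 * tup_is_warm:        the heap tuple carries HEAP_WARM_TUPLE. */
static bool
demo_pointer_satisfies(bool index_has_warm_ptr, bool ptr_is_warm,
                       bool tup_is_warm)
{
    if (!index_has_warm_ptr)
        return !ptr_is_warm;        /* the lone clear pointer reaches all */
    return ptr_is_warm == tup_is_warm;  /* otherwise flags must match */
}
```

Applied to the diagram: Index1 (both pointers) matches each pointer only to its own part of the chain, while Index2 (clear pointer only) reaches both parts.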
+
+During the first heap scan of VACUUM, we look for tuples with
+HEAP_WARM_UPDATED set.  If either all or none of the live tuples in the chain
+are marked with the HEAP_WARM_TUPLE flag, then the chain is a candidate for
+HOT conversion. We remember the root line pointer and whether the tuples in
+the chain had the HEAP_WARM_TUPLE flag set or not.
+
+If we have a WARM chain with HEAP_WARM_TUPLE set, then our goal is to remove
+the index pointers without the INDEX_WARM_POINTER flag, and vice versa. But
+there is a catch. For Index2 above, there is only one pointer and it does not
+have the INDEX_WARM_POINTER flag set. Since all heap tuples are reachable only
+via this pointer, it must not be removed. IOW we should remove an index pointer
+without INDEX_WARM_POINTER iff another index pointer with INDEX_WARM_POINTER
+exists. Since index vacuum may visit these pointers in any order, we need
+another index pass to detect redundant index pointers, which can safely be
+removed because all live tuples are reachable via the other index pointer. So
+in the first index pass we check which WARM candidates have 2 index pointers.
+In the second pass, we remove the redundant pointer and clear the
+INDEX_WARM_POINTER flag if that's the surviving index pointer. Note that
+all index pointers, either CLEAR or WARM, to dead tuples are removed during
+the first index scan itself.
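The pointer-removal rule for one conversion candidate can be sketched as a small decision function. This illustrates the invariant only; it is not the patch's actual vacuum code, and all names are made up for the sketch:

```c
#include <assert.h>
#include <stdbool.h>

typedef enum { DEMO_KEEP_CLEAR, DEMO_KEEP_WARM } DemoVacuumAction;

/* chain_is_warm: every live tuple in the chain carries HEAP_WARM_TUPLE.
 * has_warm_ptr:  this index has an INDEX_WARM_POINTER entry for the
 *                root in addition to the clear entry. */
static DemoVacuumAction
demo_decide_pointer(bool chain_is_warm, bool has_warm_ptr)
{
    /* Remove the clear pointer iff the chain is WARM and a WARM pointer
     * exists; otherwise the clear pointer is the only path to the live
     * tuples (as for Index2 above) and must survive. */
    if (chain_is_warm && has_warm_ptr)
        return DEMO_KEEP_WARM;  /* survivor's INDEX_WARM_POINTER is cleared */
    return DEMO_KEEP_CLEAR;
}
```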
+
+During the second heap scan, we fix the WARM chains by clearing the
+HEAP_WARM_UPDATED and HEAP_WARM_TUPLE flags on the tuples.
+
+There are some more problems around aborted vacuums. For example, if vacuum
+aborts after clearing INDEX_WARM_POINTER flag but before removing the other
+index pointer, we will end up with two index pointers and none of those will
+have INDEX_WARM_POINTER set.  But since the HEAP_WARM_UPDATED flag on the heap
+tuple is still set, further WARM updates to the chain will be blocked. I guess
+we will need some special handling for the case of multiple index pointers
+where none of the pointers has the INDEX_WARM_POINTER flag set. We can either
+leave these WARM chains alone and let them die with a subsequent non-WARM
+update, or apply heap-recheck logic during index vacuum to find the dead
+pointer. Given that vacuum aborts are not common, I am inclined to leave this
+case unhandled. We must still check for the presence of multiple index
+pointers without INDEX_WARM_POINTER flags, ensure that we don't accidentally
+remove either of those pointers, and must not clear such WARM chains.
+
+CREATE INDEX CONCURRENTLY
+-------------------------
+
+Currently CREATE INDEX CONCURRENTLY (CIC) is implemented as a 3-phase
+process. In the first phase, we create a catalog entry for the new
+index so that the index is visible to all other backends, but don't yet
+use it for either reads or writes; we do, however, ensure that no new
+broken HOT chains are created by new transactions. In the second phase,
+we build the new index using an MVCC snapshot and then make the index
+available for inserts. We then do another pass and insert any missing
+tuples, each time indexing only the root line pointer. See README.HOT
+for details about how HOT impacts CIC and how the various challenges
+are tackled.
+
+WARM poses another challenge because it allows creation of HOT chains
+even when an index key is changed. But since the index is not ready for
+insertion until the second phase is over, we might end up with a
+situation where the HOT chain has tuples with different values for the
+index columns, yet only one of those values is indexed by the new
+index. Note that
+during the third phase, we only index tuples whose root line pointer is
+missing from the index. But we can't easily check if the existing index
+tuple is actually indexing the heap tuple visible to the new MVCC
+snapshot. Finding that information will require us to query the index
+again for every tuple in the chain, especially if it's a WARM tuple.
+This would require repeated access to the index. Another option would be
+to return index keys along with the heap TIDs when the index is scanned
+to collect all indexed TIDs during the third phase. We can then compare
+the heap tuple against the already-indexed key and decide whether or not
+to index the new tuple.
+
+We solve this problem more simply by disallowing WARM updates until the
+index is ready for insertion. We don't need to disallow WARM on a
+wholesale basis; only updates that change the columns of the new index
+are prevented from being WARM updates.
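Pulling together the eligibility rules described in this README, the conditions under which an update may be WARM (one WARM update per chain, no expression or predicate columns modified, no not-yet-ready CIC index columns modified, and the <= 50% updating-indexes heuristic applied in heap_update in the diff below) can be condensed into an illustrative sketch. The patch works with attribute bitmapsets; the booleans and names here are simplified stand-ins:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* modifies_index[i] is true when the update changes at least one column
 * of index i; the other parameters model the disqualifying conditions. */
static bool
demo_warm_update_allowed(bool chain_already_warm,
                         bool modifies_expr_or_predicate_cols,
                         bool modifies_notready_index_cols,
                         const bool *modifies_index, size_t num_indexes)
{
    size_t updating = 0;

    /* One WARM update per chain; expression/predicate columns and
     * not-yet-ready (CIC) indexes disqualify the update entirely. */
    if (chain_already_warm ||
        modifies_expr_or_predicate_cols ||
        modifies_notready_index_cols)
        return false;

    for (size_t i = 0; i < num_indexes; i++)
        if (modifies_index[i])
            updating++;

    /* Do a WARM update only when no more than 50% of the indexes would
     * need a new entry; above that the WARM cleanup cost isn't worth it. */
    return num_indexes > 0 && 2 * updating <= num_indexes;
}
```

For example, with four indexes, an update touching one index qualifies, while an update touching three does not.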
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 30262ef..fdefbd3 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -97,9 +97,12 @@ static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				HeapTuple newtup, OffsetNumber root_offnum,
 				HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
-static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
+static void HeapCheckColumns(Relation relation,
 							 Bitmapset *interesting_cols,
-							 HeapTuple oldtup, HeapTuple newtup);
+							 HeapTuple oldtup, HeapTuple newtup,
+							 Bitmapset **toasted_attrs,
+							 Bitmapset **compressed_attrs,
+							 Bitmapset **modified_attrs);
 static bool heap_acquire_tuplock(Relation relation, ItemPointer tid,
 					 LockTupleMode mode, LockWaitPolicy wait_policy,
 					 bool *have_tuple_lock);
@@ -1974,6 +1977,212 @@ heap_fetch(Relation relation,
 }
 
 /*
+ * Check status of a (possibly) WARM chain.
+ *
+ * This function looks at a HOT/WARM chain starting at tid and returns a bitmask
+ * of information. We only follow the chain as long as it's known to be valid
+ * HOT chain. Information returned by the function consists of:
+ *
+ *  HCWC_WARM_UPDATED_TUPLE - a tuple with HEAP_WARM_UPDATED is found somewhere
+ *  						  in the chain. Note that when a tuple is WARM
+ *  						  updated, both old and new versions are marked
+ *  						  with this flag. So presence of this flag
+ *  						  indicates that a WARM update was performed on
+ *  						  this chain, but the update may have either
+ *  						  committed or aborted.
+ *
+ *  HCWC_WARM_TUPLE  - a tuple with HEAP_WARM_TUPLE is found somewhere in
+ *					  the chain. This flag is set only on the new version of
+ *					  the tuple while performing WARM update.
+ *
+ *  HCWC_CLEAR_TUPLE - a tuple without HEAP_WARM_TUPLE is found somewhere in
+ *  					 the chain. This implies that the WARM update either
+ *  					 aborted or is recent enough that the old tuple has
+ *  					 not yet been pruned away by the chain pruning logic.
+ *
+ *	If stop_at_warm is true, we stop when the first HEAP_WARM_UPDATED tuple is
+ *	found and return information collected so far.
+ */
+HeapCheckWarmChainStatus
+heap_check_warm_chain(Page dp, ItemPointer tid, bool stop_at_warm)
+{
+	TransactionId				prev_xmax = InvalidTransactionId;
+	OffsetNumber				offnum;
+	HeapTupleData				heapTuple;
+	HeapCheckWarmChainStatus	status = 0;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+		{
+			if (ItemIdIsRedirected(lp))
+			{
+				/* Follow the redirect */
+				offnum = ItemIdGetRedirect(lp);
+				continue;
+			}
+			/* else must be end of chain */
+			break;
+		}
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		if (HeapTupleHeaderIsWarmUpdated(heapTuple.t_data))
+		{
+			/* We found a WARM_UPDATED tuple */
+			status |= HCWC_WARM_UPDATED_TUPLE;
+
+			/*
+			 * If we've been told to stop at the first WARM_UPDATED tuple, just
+			 * return whatever information collected so far.
+			 */
+			if (stop_at_warm)
+				return status;
+
+			/*
+			 * Remember whether it's a CLEAR or a WARM tuple.
+			 */
+			if (HeapTupleHeaderIsWarm(heapTuple.t_data))
+				status |= HCWC_WARM_TUPLE;
+			else
+				status |= HCWC_CLEAR_TUPLE;
+		}
+		else
+			/* Must be a regular, non-WARM tuple */
+			status |= HCWC_CLEAR_TUPLE;
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
+		/*
+		 * The chain can't continue if the tuple stores the root line pointer
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	/* All OK. No need to recheck */
+	return status;
+}
+
+/*
+ * Scan through the WARM chain starting at tid and reset all WARM related
+ * flags. At the end, the chain will have all characteristics of a regular HOT
+ * chain.
+ *
+ * Return the number of cleared offnums. Cleared offnums are returned in the
+ * passed-in cleared_offnums array. The caller must ensure that the array is
+ * large enough to hold the maximum number of offnums that can be cleared by
+ * this invocation of heap_clear_warm_chain().
+ */
+int
+heap_clear_warm_chain(Page dp, ItemPointer tid, OffsetNumber *cleared_offnums)
+{
+	TransactionId				prev_xmax = InvalidTransactionId;
+	OffsetNumber				offnum;
+	HeapTupleData				heapTuple;
+	int							num_cleared = 0;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+		{
+			if (ItemIdIsRedirected(lp))
+			{
+				/* Follow the redirect */
+				offnum = ItemIdGetRedirect(lp);
+				continue;
+			}
+			/* else must be end of chain */
+			break;
+		}
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		/*
+		 * Clear WARM_UPDATED and WARM flags.
+		 */
+		if (HeapTupleHeaderIsWarmUpdated(heapTuple.t_data))
+		{
+			HeapTupleHeaderClearWarmUpdated(heapTuple.t_data);
+			HeapTupleHeaderClearWarm(heapTuple.t_data);
+			cleared_offnums[num_cleared++] = offnum;
+		}
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
+		/*
+		 * The chain can't continue if the tuple stores the root line pointer
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	return num_cleared;
+}
+
+/*
  *	heap_hot_search_buffer	- search HOT chain for tuple satisfying snapshot
  *
  * On entry, *tid is the TID of a tuple (either a simple tuple, or the root
@@ -1993,11 +2202,14 @@ heap_fetch(Relation relation,
  * Unlike heap_fetch, the caller must already have pin and (at least) share
  * lock on the buffer; it is still pinned/locked at exit.  Also unlike
  * heap_fetch, we do not report any pgstats count; caller may do so if wanted.
+ *
+ * recheck should be set false on entry by caller, will be set true on exit
+ * if a WARM tuple is encountered.
  */
 bool
 heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 					   Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call)
+					   bool *all_dead, bool first_call, bool *recheck)
 {
 	Page		dp = (Page) BufferGetPage(buffer);
 	TransactionId prev_xmax = InvalidTransactionId;
@@ -2051,9 +2263,12 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		ItemPointerSetOffsetNumber(&heapTuple->t_self, offnum);
 
 		/*
-		 * Shouldn't see a HEAP_ONLY tuple at chain start.
+		 * Shouldn't see a HEAP_ONLY tuple at chain start, unless we are
+		 * dealing with a WARM updated tuple, in which case deferred triggers
+		 * may request to fetch a WARM tuple from the middle of a chain.
 		 */
-		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple))
+		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple) &&
+				!HeapTupleIsWarmUpdated(heapTuple))
 			break;
 
 		/*
@@ -2066,6 +2281,26 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 			break;
 
 		/*
+		 * Check if there exists a WARM tuple somewhere down the chain and set
+		 * recheck to TRUE.
+		 *
+		 * XXX This is not very efficient right now, and we should look for
+		 * possible improvements here.
+		 *
+		 * XXX We currently don't support turning enable_warm OFF once it's
+		 * turned ON. But if we ever do that, we must not rely on
+		 * RelationWarmUpdatesEnabled check to decide whether recheck is needed
+		 * or not.
+		 */
+		if (RelationWarmUpdatesEnabled(relation) &&
+			recheck && *recheck == false)
+		{
+			HeapCheckWarmChainStatus status;
+			status = heap_check_warm_chain(dp, &heapTuple->t_self, true);
+			*recheck = HCWC_IS_WARM_UPDATED(status);
+		}
+
+		/*
 		 * When first_call is true (and thus, skip is initially false) we'll
 		 * return the first tuple we find.  But on later passes, heapTuple
 		 * will initially be pointing to the tuple we returned last time.
@@ -2114,7 +2349,8 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		 * Check to see if HOT chain continues past this tuple; if so fetch
 		 * the next offnum and loop around.
 		 */
-		if (HeapTupleIsHotUpdated(heapTuple))
+		if (HeapTupleIsHotUpdated(heapTuple) &&
+			!HeapTupleHeaderHasRootOffset(heapTuple->t_data))
 		{
 			Assert(ItemPointerGetBlockNumber(&heapTuple->t_data->t_ctid) ==
 				   ItemPointerGetBlockNumber(tid));
@@ -2138,18 +2374,41 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
  */
 bool
 heap_hot_search(ItemPointer tid, Relation relation, Snapshot snapshot,
-				bool *all_dead)
+				bool *all_dead, bool *recheck, Buffer *cbuffer,
+				HeapTuple heapTuple)
 {
 	bool		result;
 	Buffer		buffer;
-	HeapTupleData heapTuple;
+	ItemPointerData ret_tid = *tid;
 
 	buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	LockBuffer(buffer, BUFFER_LOCK_SHARE);
-	result = heap_hot_search_buffer(tid, relation, buffer, snapshot,
-									&heapTuple, all_dead, true);
-	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-	ReleaseBuffer(buffer);
+	result = heap_hot_search_buffer(&ret_tid, relation, buffer, snapshot,
+									heapTuple, all_dead, true, recheck);
+
+	/*
+	 * If we are returning a potential candidate tuple from this chain and the
+	 * caller has requested the "recheck" hint, keep the buffer locked and
+	 * pinned. The caller must release the lock and pin on the buffer in all
+	 * such cases.
+	 */
+	if (!result || !recheck || !(*recheck))
+	{
+		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+		ReleaseBuffer(buffer);
+	}
+
+	/*
+	 * Set the caller supplied tid with the actual location of the tuple being
+	 * returned.
+	 */
+	if (result)
+	{
+		*tid = ret_tid;
+		if (cbuffer)
+			*cbuffer = buffer;
+	}
+
 	return result;
 }
 
@@ -2792,7 +3051,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		{
 			XLogRecPtr	recptr;
 			xl_heap_multi_insert *xlrec;
-			uint8		info = XLOG_HEAP2_MULTI_INSERT;
+			uint8		info = XLOG_HEAP_MULTI_INSERT;
 			char	   *tupledata;
 			int			totaldatalen;
 			char	   *scratchptr = scratch;
@@ -2889,7 +3148,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			/* filtering by origin on a row level is much more efficient */
 			XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN);
 
-			recptr = XLogInsert(RM_HEAP2_ID, info);
+			recptr = XLogInsert(RM_HEAP_ID, info);
 
 			PageSetLSN(page, recptr);
 		}
@@ -3045,7 +3304,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	bool		all_visible_cleared = false;
 	HeapTuple	old_key_tuple = NULL;	/* replica identity of the tuple */
 	bool		old_key_copied = false;
-	OffsetNumber	root_offnum;
+	OffsetNumber	root_offnum = InvalidOffsetNumber;
 
 	Assert(ItemPointerIsValid(tid));
 
@@ -3278,7 +3537,7 @@ l1:
 							  &new_xmax, &new_infomask, &new_infomask2);
 
 	/*
-	 * heap_get_root_tuple_one() may call palloc, which is disallowed once we
+	 * heap_get_root_tuple() may call palloc, which is disallowed once we
 	 * enter the critical section. So check if the root offset is cached in the
 	 * tuple and if not, fetch that information hard way before entering the
 	 * critical section.
@@ -3313,7 +3572,9 @@ l1:
 	}
 
 	/* store transaction information of xact deleting the tuple */
-	tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	tp.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(tp.t_data))
+		tp.t_data->t_infomask &= ~HEAP_MOVED;
 	tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 	tp.t_data->t_infomask |= new_infomask;
 	tp.t_data->t_infomask2 |= new_infomask2;
@@ -3508,15 +3769,21 @@ simple_heap_delete(Relation relation, ItemPointer tid)
 HTSU_Result
 heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode)
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update)
 {
 	HTSU_Result result;
 	TransactionId xid = GetCurrentTransactionId();
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *exprindx_attrs;
 	Bitmapset  *interesting_attrs;
 	Bitmapset  *modified_attrs;
+	Bitmapset  *notready_attrs;
+	Bitmapset  *compressed_attrs;
+	Bitmapset  *toasted_attrs;
+	List	   *indexattrsList;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3524,8 +3791,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		old_key_copied = false;
 	Page		page;
 	BlockNumber block;
-	OffsetNumber	offnum;
-	OffsetNumber	root_offnum;
+	OffsetNumber	root_offnum = InvalidOffsetNumber;
 	MultiXactStatus mxact_status;
 	Buffer		buffer,
 				newbuf,
@@ -3537,6 +3803,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		have_tuple_lock = false;
 	bool		iscombo;
 	bool		use_hot_update = false;
+	bool		use_warm_update = false;
 	bool		hot_attrs_checked = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
@@ -3562,6 +3829,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot update tuples during a parallel operation")));
 
+	/* Assume no-warm update */
+	if (warm_update)
+		*warm_update = false;
+
 	/*
 	 * Fetch the list of attributes to be checked for various operations.
 	 *
@@ -3582,10 +3853,14 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	exprindx_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_EXPR_PREDICATE);
+	notready_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_NOTREADY);
 
+	indexattrsList = RelationGetIndexAttrList(relation);
 
 	block = ItemPointerGetBlockNumber(otid);
-	offnum = ItemPointerGetOffsetNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
@@ -3605,8 +3880,11 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 		interesting_attrs = bms_add_members(interesting_attrs, hot_attrs);
 		hot_attrs_checked = true;
 	}
+
 	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, exprindx_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, notready_attrs);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -3623,7 +3901,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Assert(ItemIdIsNormal(lp));
 
 	/*
-	 * Fill in enough data in oldtup for HeapDetermineModifiedColumns to work
+	 * Fill in enough data in oldtup for HeapCheckColumns to work
 	 * properly.
 	 */
 	oldtup.t_tableOid = RelationGetRelid(relation);
@@ -3650,8 +3928,12 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	}
 
 	/* Determine columns modified by the update. */
-	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
-												  &oldtup, newtup);
+	HeapCheckColumns(relation, interesting_attrs,
+								 &oldtup, newtup, &toasted_attrs,
+								 &compressed_attrs, &modified_attrs);
+
+	if (modified_attrsp)
+		*modified_attrsp = bms_copy(modified_attrs);
 
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
@@ -3908,8 +4190,10 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(exprindx_attrs);
 		bms_free(modified_attrs);
 		bms_free(interesting_attrs);
+		bms_free(notready_attrs);
 		return result;
 	}
 
@@ -4034,7 +4318,6 @@ l2:
 		uint16		infomask_lock_old_tuple,
 					infomask2_lock_old_tuple;
 		bool		cleared_all_frozen = false;
-		OffsetNumber	root_offnum;
 
 		/*
 		 * To prevent concurrent sessions from updating the tuple, we have to
@@ -4073,7 +4356,9 @@ l2:
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
-		oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		oldtup.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(oldtup.t_data))
+			oldtup.t_data->t_infomask &= ~HEAP_MOVED;
 		oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		HeapTupleClearHotUpdated(&oldtup);
 		/* ... and store info about transaction updating this tuple */
@@ -4227,6 +4512,68 @@ l2:
 		 */
 		if (hot_attrs_checked && !bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
+		else if (!need_toast)
+		{
+			/*
+			 * If there are no WARM updates yet on this chain, let this update
+			 * be a WARM update. We must not do another WARM update even if the
+			 * previous WARM update eventually aborted. That's why we look at
+			 * the HEAP_WARM_UPDATED flag.
+			 *
+			 * We don't do WARM updates if one of the columns used in index
+			 * expressions is being modified. Since expressions may evaluate to
+			 * the same value, even when heap values change, we don't have a
+			 * good way to deal with duplicate key scans when expressions are
+			 * used in the index.
+			 *
+			 * We check if the HOT attrs are a subset of the modified
+			 * attributes. Since HOT attrs include all index attributes, this
+			 * allows us to avoid doing a WARM update when all index attributes
+			 * are being updated. Performing a WARM update in that case is not a
+			 * great idea because all indexes will receive a new entry anyway.
+			 *
+			 * We also disable WARM temporarily if we are modifying a column
+			 * which is used by a new index that's being added. We can't insert
+			 * new entries into such indexes yet, and hence we must not create
+			 * WARM chains which are broken with respect to the new index
+			 * being added.
+			 *
+			 * Finally, we disable WARM if either the old or the new tuple has
+			 * toasted/compressed attributes. That makes the recheck logic a
+			 * lot more complex and can cause significant overhead while
+			 * determining modified columns too.
+			 */
+			if (RelationWarmUpdatesEnabled(relation) &&
+				relation->rd_supportswarm &&
+				!HeapTupleIsWarmUpdated(&oldtup) &&
+				!bms_overlap(modified_attrs, exprindx_attrs) &&
+				!bms_is_subset(hot_attrs, modified_attrs) &&
+				!bms_overlap(notready_attrs, modified_attrs) &&
+				!bms_overlap(toasted_attrs, hot_attrs) &&
+				!bms_overlap(compressed_attrs, hot_attrs))
+			{
+				int num_indexes, num_updating_indexes;
+				ListCell *l;
+
+				/*
+				 * Everything else is Ok. Now check if the update will require
+				 * less than or equal to 50% index updates. Anything above
+				 * that, we can just do a regular update and save on WARM
+				 * cleanup cost.
+				 */
+				num_indexes = list_length(indexattrsList);
+				num_updating_indexes = 0;
+				foreach (l, indexattrsList)
+				{
+					Bitmapset  *b = (Bitmapset *) lfirst(l);
+					if (bms_overlap(b, modified_attrs))
+						num_updating_indexes++;
+				}
+
+				if ((double)num_updating_indexes/num_indexes <= 0.5)
+					use_warm_update = true;
+			}
+		}
 	}
 	else
 	{
@@ -4273,6 +4620,32 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+
+		/*
+		 * Even if we are doing a HOT update, we must carry forward the WARM
+		 * flag because we may have already inserted another index entry
+		 * pointing to our root and a third entry may create duplicates.
+		 *
+		 * Note: If we ever have a mechanism to avoid duplicate <key, TID>
+		 * entries in indexes, we could look at relaxing this restriction and
+		 * allow even more WARM updates.
+		 */
+		if (HeapTupleIsWarmUpdated(&oldtup))
+		{
+			HeapTupleSetWarmUpdated(heaptup);
+			HeapTupleSetWarmUpdated(newtup);
+		}
+
+		/*
+		 * If the old tuple is a WARM tuple then mark the new tuple as a WARM
+		 * tuple as well.
+		 */
+		if (HeapTupleIsWarm(&oldtup))
+		{
+			HeapTupleSetWarm(heaptup);
+			HeapTupleSetWarm(newtup);
+		}
+
 		/*
 		 * For HOT (or WARM) updated tuples, we store the offset of the root
 		 * line pointer of this chain in the ip_posid field of the new tuple.
@@ -4285,12 +4658,45 @@ l2:
 		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
 			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
+	else if (use_warm_update)
+	{
+		/* Mark the old tuple as HOT-updated */
+		HeapTupleSetHotUpdated(&oldtup);
+		HeapTupleSetWarmUpdated(&oldtup);
+
+		/* And mark the new tuple as heap-only */
+		HeapTupleSetHeapOnly(heaptup);
+		/* Mark the new tuple as WARM tuple */
+		HeapTupleSetWarmUpdated(heaptup);
+		/* This update also starts the WARM chain */
+		HeapTupleSetWarm(heaptup);
+		Assert(!HeapTupleIsWarm(&oldtup));
+
+		/* Mark the caller's copy too, in case different from heaptup */
+		HeapTupleSetHeapOnly(newtup);
+		HeapTupleSetWarmUpdated(newtup);
+		HeapTupleSetWarm(newtup);
+
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
+		/* Let the caller know we did a WARM update */
+		if (warm_update)
+			*warm_update = true;
+	}
 	else
 	{
 		/* Make sure tuples are correctly marked as not-HOT */
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		HeapTupleClearWarmUpdated(heaptup);
+		HeapTupleClearWarmUpdated(newtup);
+		HeapTupleClearWarm(heaptup);
+		HeapTupleClearWarm(newtup);
 		root_offnum = InvalidOffsetNumber;
 	}
 
@@ -4309,7 +4715,9 @@ l2:
 	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
-	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	oldtup.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(oldtup.t_data))
+		oldtup.t_data->t_infomask &= ~HEAP_MOVED;
 	oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 	/* ... and store info about transaction updating this tuple */
 	Assert(TransactionIdIsValid(xmax_old_tuple));
@@ -4400,7 +4808,10 @@ l2:
 	if (have_tuple_lock)
 		UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode);
 
-	pgstat_count_heap_update(relation, use_hot_update);
+	/*
+	 * Count HOT and WARM updates separately
+	 */
+	pgstat_count_heap_update(relation, use_hot_update, use_warm_update);
 
 	/*
 	 * If heaptup is a private copy, release it.  Don't forget to copy t_self
@@ -4420,17 +4831,33 @@ l2:
 	bms_free(id_attrs);
 	bms_free(modified_attrs);
 	bms_free(interesting_attrs);
+	bms_free(exprindx_attrs);
+	bms_free(notready_attrs);
 
 	return HeapTupleMayBeUpdated;
 }
 
 /*
- * Check if the specified attribute's value is same in both given tuples.
- * Subroutine for HeapDetermineModifiedColumns.
+ * Check if the specified attribute is toasted or compressed in either
+ * the old or the new tuple. For compressed or toasted attributes, we only do
+ * a very simplistic equality check by running datumIsEqual on the compressed
+ * or toasted form. This helps us to perform HOT updates (if other conditions
+ * are favourable) when these attributes are not updated. But we might not
+ * capture all possible scenarios, such as when the toasted/compressed
+ * attribute is updated but the modified value is the same as the original
+ * value.
+ *
+ * For WARM updates, we don't care what the equality check returns for
+ * toasted/compressed attributes. If such attributes are used in any of the
+ * indexes, we don't perform WARM updates irrespective of whether they are
+ * modified or not.
+ *
+ * Subroutine for HeapCheckColumns.
  */
-static bool
-heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
-					   HeapTuple tup1, HeapTuple tup2)
+static void
+heap_tuple_attr_check(TupleDesc tupdesc, int attrnum,
+					   HeapTuple tup1, HeapTuple tup2,
+					   bool *toasted, bool *compressed, bool *equal)
 {
 	Datum		value1,
 				value2;
@@ -4438,13 +4865,24 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 				isnull2;
 	Form_pg_attribute att;
 
+	*equal = true;
+	*toasted = *compressed = false;
+
 	/*
 	 * If it's a whole-tuple reference, say "not equal".  It's not really
 	 * worth supporting this case, since it could only succeed after a no-op
 	 * update, which is hardly a case worth optimizing for.
+	 *
+	 * XXX Does this need special attention in WARM, given that we don't want
+	 * to return "not equal" for something that is equal? But how does a
+	 * whole-tuple reference end up in the interesting_attrs list? Regression
+	 * tests do not have coverage for this case as of now.
 	 */
 	if (attrnum == 0)
-		return false;
+	{
+		*equal = false;
+		return;
+	}
 
 	/*
 	 * Likewise, automatically say "not equal" for any system attribute other
@@ -4455,12 +4893,16 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 	{
 		if (attrnum != ObjectIdAttributeNumber &&
 			attrnum != TableOidAttributeNumber)
-			return false;
+		{
+			*equal = false;
+			/* these attributes can neither be toasted nor compressed */
+			return;
+		}
 	}
 
 	/*
 	 * Extract the corresponding values.  XXX this is pretty inefficient if
-	 * there are many indexed columns.  Should HeapDetermineModifiedColumns do
+	 * there are many indexed columns.  Should HeapCheckColumns do
 	 * a single heap_deform_tuple call on each tuple, instead?	But that
 	 * doesn't work for system columns ...
 	 */
@@ -4468,17 +4910,32 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 	value2 = heap_getattr(tup2, attrnum, tupdesc, &isnull2);
 
 	/*
+	 * If both are NULL, they can be considered equal.
+	 */
+	if (isnull1 && isnull2)
+	{
+		*equal = true;
+		*toasted = *compressed = false;
+		return;
+	}
+
+	/*
 	 * If one value is NULL and other is not, then they are certainly not
 	 * equal
 	 */
 	if (isnull1 != isnull2)
-		return false;
+		*equal = false;
 
-	/*
-	 * If both are NULL, they can be considered equal.
-	 */
-	if (isnull1)
-		return true;
+	/* attrnum == 0 is already handled above */
+	if (attrnum < 0)
+	{
+		/* The only allowed system columns are OIDs, which can never be
+		 * toasted or compressed. Return unconditionally: we must not index
+		 * tupdesc->attrs with a negative attribute number below. */
+		if (*equal)
+			*equal = (DatumGetObjectId(value1) == DatumGetObjectId(value2));
+		return;
+	}
 
 	/*
 	 * We do simple binary comparison of the two datums.  This may be overly
@@ -4489,46 +4946,90 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 	 * classes; furthermore, we cannot safely invoke user-defined functions
 	 * while holding exclusive buffer lock.
 	 */
-	if (attrnum <= 0)
+	Assert(attrnum <= tupdesc->natts);
+	att = tupdesc->attrs[attrnum - 1];
+
+	/*
+	 * If either the old or the new value is toasted, consider the attribute as
+	 * toasted. A NULL value can never be toasted, so skip null inputs.
+	 */
+	if ((att->attlen == -1) && ((!isnull1 && VARATT_IS_EXTERNAL(value1)) ||
+								(!isnull2 && VARATT_IS_EXTERNAL(value2))))
 	{
-		/* The only allowed system columns are OIDs, so do this */
-		return (DatumGetObjectId(value1) == DatumGetObjectId(value2));
+		*toasted = true;
 	}
-	else
+
+	/*
+	 * If either the old or the new value is compressed, consider the attribute
+	 * as compressed. A NULL value can never be compressed, so skip null inputs.
+	 */
+	if ((att->attlen == -1) && ((!isnull1 && VARATT_IS_COMPRESSED(value1)) ||
+								(!isnull2 && VARATT_IS_COMPRESSED(value2))))
+	{
+		*compressed = true;
+	}
+
+	/*
+	 * Check for equality but only if we haven't already determined above that
+	 * they are not equal. This can happen when one of the values is NULL
+	 * and the other is not.
+	 */
+	if (*equal)
 	{
-		Assert(attrnum <= tupdesc->natts);
-		att = tupdesc->attrs[attrnum - 1];
-		return datumIsEqual(value1, value2, att->attbyval, att->attlen);
+		*equal = datumIsEqual(value1, value2, att->attbyval, att->attlen);
 	}
 }
 
 /*
- * Check which columns are being updated.
+ * Check the old tuple and the new tuple for the given list of
+ * interesting_cols.
+ *
+ * Given an updated tuple, check if any of the interesting_cols are toasted or
+ * compressed, either in the old or the new tuple. Such columns are returned in
+ * the toasted_attrs and compressed_attrs respectively.
  *
- * Given an updated tuple, determine (and return into the output bitmapset),
- * from those listed as interesting, the set of columns that changed.
+ * Also check which columns are being changed in the update operation. For
+ * toasted/compressed columns, we only do a simple memcmp-based check without
+ * detoasting/decompressing the values. This implies that we might not be able
+ * to capture all cases where two values are equal.
  *
  * The input bitmapset is destructively modified; that is OK since this is
  * invoked at most once in heap_update.
  */
-static Bitmapset *
-HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
-							 HeapTuple oldtup, HeapTuple newtup)
+void
+HeapCheckColumns(Relation relation, Bitmapset *interesting_cols,
+				 HeapTuple oldtup, HeapTuple newtup,
+				 Bitmapset **toasted_attrs,
+				 Bitmapset **compressed_attrs,
+				 Bitmapset **modified_attrs)
 {
 	int		attnum;
-	Bitmapset *modified = NULL;
+
+	*toasted_attrs = NULL;
+	*compressed_attrs = NULL;
+	*modified_attrs = NULL;
 
 	while ((attnum = bms_first_member(interesting_cols)) >= 0)
 	{
+		bool equal, compressed, toasted;
+
 		attnum += FirstLowInvalidHeapAttributeNumber;
 
-		if (!heap_tuple_attr_equals(RelationGetDescr(relation),
-								   attnum, oldtup, newtup))
-			modified = bms_add_member(modified,
+		heap_tuple_attr_check(RelationGetDescr(relation),
+								   attnum, oldtup, newtup, &toasted,
+								   &compressed, &equal);
+		if (!equal)
+			*modified_attrs = bms_add_member(*modified_attrs,
+									  attnum - FirstLowInvalidHeapAttributeNumber);
+		if (toasted)
+			*toasted_attrs = bms_add_member(*toasted_attrs,
+									  attnum - FirstLowInvalidHeapAttributeNumber);
+		if (compressed)
+			*compressed_attrs = bms_add_member(*compressed_attrs,
 									  attnum - FirstLowInvalidHeapAttributeNumber);
 	}
 
-	return modified;
+	return;
 }
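
The bitmapset bookkeeping in HeapCheckColumns above relies on shifting attribute numbers by FirstLowInvalidHeapAttributeNumber so that negative system-attribute numbers become valid (non-negative) set members, and shifting back on lookup. A standalone toy sketch of that round trip, using a hypothetical fixed-size uint64 set rather than PostgreSQL's growable Bitmapset:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for PostgreSQL's FirstLowInvalidHeapAttributeNumber */
#define FIRST_LOW_INVALID_ATTNUM (-8)

/* Toy bitmap set: supports members 0..63 only, unlike PG's Bitmapset. */
typedef uint64_t toyset;

static void
toyset_add(toyset *s, int attnum)
{
	/* Shift so that system attributes (negative attnums) become >= 0. */
	int		member = attnum - FIRST_LOW_INVALID_ATTNUM;

	*s |= UINT64_C(1) << member;
}

static int
toyset_has(toyset s, int attnum)
{
	int		member = attnum - FIRST_LOW_INVALID_ATTNUM;

	return (s >> member) & 1;
}
```

The same offset is applied symmetrically on insertion and lookup, which is why HeapCheckColumns adds `attnum - FirstLowInvalidHeapAttributeNumber` after having added `FirstLowInvalidHeapAttributeNumber` to the value pulled out of the input set.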
 
 /*
@@ -4540,7 +5041,8 @@ HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
  * via ereport().
  */
 void
-simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
+simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup,
+		Bitmapset **modified_attrs, bool *warm_update)
 {
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
@@ -4549,7 +5051,7 @@ simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
 	result = heap_update(relation, otid, tup,
 						 GetCurrentCommandId(true), InvalidSnapshot,
 						 true /* wait for commit */ ,
-						 &hufd, &lockmode);
+						 &hufd, &lockmode, modified_attrs, warm_update);
 	switch (result)
 	{
 		case HeapTupleSelfUpdated:
@@ -4640,7 +5142,6 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	Page		page;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber	block;
-	OffsetNumber	offnum;
 	TransactionId xid,
 				xmax;
 	uint16		old_infomask,
@@ -4649,11 +5150,10 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	bool		first_time = true;
 	bool		have_tuple_lock = false;
 	bool		cleared_all_frozen = false;
-	OffsetNumber	root_offnum;
+	OffsetNumber	root_offnum = InvalidOffsetNumber;
 
 	*buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	block = ItemPointerGetBlockNumber(tid);
-	offnum = ItemPointerGetOffsetNumber(tid);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -5745,7 +6245,6 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 	bool		cleared_all_frozen = false;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber block;
-	OffsetNumber offnum;
 
 	ItemPointerCopy(tid, &tupid);
 
@@ -5754,7 +6253,6 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 		new_infomask = 0;
 		new_xmax = InvalidTransactionId;
 		block = ItemPointerGetBlockNumber(&tupid);
-		offnum = ItemPointerGetOffsetNumber(&tupid);
 
 		ItemPointerCopy(&tupid, &(mytup.t_self));
 
@@ -6226,7 +6724,9 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	PageSetPrunable(page, RecentGlobalXmin);
 
 	/* store transaction information of xact deleting the tuple */
-	tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	tp.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(tp.t_data))
+		tp.t_data->t_infomask &= ~HEAP_MOVED;
 	tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 
 	/*
@@ -6800,7 +7300,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid,
 	 * Old-style VACUUM FULL is gone, but we have to keep this code as long as
 	 * we support having MOVED_OFF/MOVED_IN tuples in the database.
 	 */
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 
@@ -6819,7 +7319,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid,
 			 * have failed; whereas a non-dead MOVED_IN tuple must mean the
 			 * xvac transaction succeeded.
 			 */
-			if (tuple->t_infomask & HEAP_MOVED_OFF)
+			if (HeapTupleHeaderIsMovedOff(tuple))
 				frz->frzflags |= XLH_INVALID_XVAC;
 			else
 				frz->frzflags |= XLH_FREEZE_XVAC;
@@ -7289,7 +7789,7 @@ heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple)
 			return true;
 	}
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 		if (TransactionIdIsNormal(xid))
@@ -7372,7 +7872,7 @@ heap_tuple_needs_freeze(HeapTupleHeader tuple, TransactionId cutoff_xid,
 			return true;
 	}
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 		if (TransactionIdIsNormal(xid) &&
@@ -7398,7 +7898,7 @@ HeapTupleHeaderAdvanceLatestRemovedXid(HeapTupleHeader tuple,
 	TransactionId xmax = HeapTupleHeaderGetUpdateXid(tuple);
 	TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		if (TransactionIdPrecedes(*latestRemovedXid, xvac))
 			*latestRemovedXid = xvac;
@@ -7447,6 +7947,36 @@ log_heap_cleanup_info(RelFileNode rnode, TransactionId latestRemovedXid)
 }
 
 /*
+ * Perform XLogInsert for a heap-warm-clear operation.  Caller must already
+ * have modified the buffer and marked it dirty.
+ */
+XLogRecPtr
+log_heap_warmclear(Relation reln, Buffer buffer,
+			   OffsetNumber *cleared, int ncleared)
+{
+	xl_heap_warmclear	xlrec;
+	XLogRecPtr			recptr;
+
+	/* Caller should not call me on a non-WAL-logged relation */
+	Assert(RelationNeedsWAL(reln));
+
+	xlrec.ncleared = ncleared;
+
+	XLogBeginInsert();
+	XLogRegisterData((char *) &xlrec, SizeOfHeapWarmClear);
+
+	XLogRegisterBuffer(0, buffer, REGBUF_STANDARD);
+
+	if (ncleared > 0)
+		XLogRegisterBufData(0, (char *) cleared,
+							ncleared * sizeof(OffsetNumber));
+
+	recptr = XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_WARMCLEAR);
+
+	return recptr;
+}
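
log_heap_warmclear above registers a fixed-size record header followed by a trailing array of cleared offsets as block data, and heap_xlog_warmclear reads them back in the same order. The layout can be sketched with a hypothetical pack helper over a plain malloc'd buffer; the real code of course goes through XLogRegisterData/XLogRegisterBufData and the WAL machinery rather than building the buffer by hand:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef uint16_t OffsetNumber;	/* as in PostgreSQL */

/*
 * Hypothetical mirror of xl_heap_warmclear: just the count; the cleared
 * offsets travel separately as registered block data.
 */
typedef struct
{
	int			ncleared;
} warmclear_rec;

/*
 * Pack header + offsets into one contiguous buffer, the way redo sees the
 * record. Returns a malloc'd buffer and sets *len to the total size.
 */
static char *
pack_warmclear(const OffsetNumber *cleared, int ncleared, size_t *len)
{
	warmclear_rec rec = {ncleared};
	char	   *buf;

	*len = sizeof(warmclear_rec) + ncleared * sizeof(OffsetNumber);
	buf = malloc(*len);
	memcpy(buf, &rec, sizeof(rec));
	memcpy(buf + sizeof(rec), cleared, ncleared * sizeof(OffsetNumber));
	return buf;
}
```

Keeping the variable-length offset array out of the fixed header is what lets the redo side size its loop from `xlrec->ncleared` alone.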
+
+/*
  * Perform XLogInsert for a heap-clean operation.  Caller must already
  * have modified the buffer and marked it dirty.
  *
@@ -7601,6 +8131,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	bool		need_tuple_data = RelationIsLogicallyLogged(reln);
 	bool		init;
 	int			bufflags;
+	bool		warm_update = false;
 
 	/* Caller should not call me on a non-WAL-logged relation */
 	Assert(RelationNeedsWAL(reln));
@@ -7612,6 +8143,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	else
 		info = XLOG_HEAP_UPDATE;
 
+	if (HeapTupleIsWarmUpdated(newtup))
+		warm_update = true;
+
 	/*
 	 * If the old and new tuple are on the same page, we only need to log the
 	 * parts of the new tuple that were changed.  That saves on the amount of
@@ -7685,6 +8219,8 @@ log_heap_update(Relation reln, Buffer oldbuf,
 				xlrec.flags |= XLH_UPDATE_CONTAINS_OLD_KEY;
 		}
 	}
+	if (warm_update)
+		xlrec.flags |= XLH_UPDATE_WARM_UPDATE;
 
 	/* If new tuple is the single and first tuple on page... */
 	if (ItemPointerGetOffsetNumber(&(newtup->t_self)) == FirstOffsetNumber &&
@@ -8099,6 +8635,60 @@ heap_xlog_clean(XLogReaderState *record)
 		XLogRecordPageWithFreeSpace(rnode, blkno, freespace);
 }
 
+
+/*
+ * Handles HEAP2_WARMCLEAR record type
+ */
+static void
+heap_xlog_warmclear(XLogReaderState *record)
+{
+	XLogRecPtr	lsn = record->EndRecPtr;
+	xl_heap_warmclear	*xlrec = (xl_heap_warmclear *) XLogRecGetData(record);
+	Buffer		buffer;
+	RelFileNode rnode;
+	BlockNumber blkno;
+	XLogRedoAction action;
+
+	XLogRecGetBlockTag(record, 0, &rnode, NULL, &blkno);
+
+	/*
+	 * If we have a full-page image, restore it (using a cleanup lock) and
+	 * we're done.
+	 */
+	action = XLogReadBufferForRedoExtended(record, 0, RBM_NORMAL, true,
+										   &buffer);
+	if (action == BLK_NEEDS_REDO)
+	{
+		Page		page = (Page) BufferGetPage(buffer);
+		OffsetNumber *cleared;
+		int			ncleared;
+		Size		datalen;
+		int			i;
+
+		cleared = (OffsetNumber *) XLogRecGetBlockData(record, 0, &datalen);
+
+		ncleared = xlrec->ncleared;
+
+		for (i = 0; i < ncleared; i++)
+		{
+			ItemId			lp;
+			OffsetNumber	offnum = cleared[i];
+			HeapTupleData	heapTuple;
+
+			lp = PageGetItemId(page, offnum);
+			heapTuple.t_data = (HeapTupleHeader) PageGetItem(page, lp);
+
+			HeapTupleHeaderClearWarmUpdated(heapTuple.t_data);
+			HeapTupleHeaderClearWarm(heapTuple.t_data);
+		}
+
+		PageSetLSN(page, lsn);
+		MarkBufferDirty(buffer);
+	}
+	if (BufferIsValid(buffer))
+		UnlockReleaseBuffer(buffer);
+}
+
 /*
  * Replay XLOG_HEAP2_VISIBLE record.
  *
@@ -8345,7 +8935,9 @@ heap_xlog_delete(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		HeapTupleHeaderClearHotUpdated(htup);
 		fix_infomask_from_infobits(xlrec->infobits_set,
@@ -8366,7 +8958,7 @@ heap_xlog_delete(XLogReaderState *record)
 		if (!HeapTupleHeaderHasRootOffset(htup))
 		{
 			OffsetNumber	root_offnum;
-			root_offnum = heap_get_root_tuple(page, xlrec->offnum); 
+			root_offnum = heap_get_root_tuple(page, xlrec->offnum);
 			HeapTupleHeaderSetHeapLatest(htup, root_offnum);
 		}
 
@@ -8662,16 +9254,22 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 	Size		freespace = 0;
 	XLogRedoAction oldaction;
 	XLogRedoAction newaction;
+	bool		warm_update = false;
 
 	/* initialize to keep the compiler quiet */
 	oldtup.t_data = NULL;
 	oldtup.t_len = 0;
 
+	if (xlrec->flags & XLH_UPDATE_WARM_UPDATE)
+		warm_update = true;
+
 	XLogRecGetBlockTag(record, 0, &rnode, NULL, &newblk);
 	if (XLogRecGetBlockTag(record, 1, NULL, NULL, &oldblk))
 	{
 		/* HOT updates are never done across pages */
 		Assert(!hot_update);
+		/* WARM updates are never done across pages */
+		Assert(!warm_update);
 	}
 	else
 		oldblk = newblk;
@@ -8731,6 +9329,11 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 								   &htup->t_infomask2);
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
+
+		/* Mark the old tuple as WARM updated */
+		if (warm_update)
+			HeapTupleHeaderSetWarmUpdated(htup);
+
 		/* Set forward chain link in t_ctid */
 		HeapTupleHeaderSetNextTid(htup, &newtid);
 
@@ -8866,6 +9469,10 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
 
+		/* Mark the new tuple as WARM updated */
+		if (warm_update)
+			HeapTupleHeaderSetWarmUpdated(htup);
+
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
@@ -8993,7 +9600,9 @@ heap_xlog_lock(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		fix_infomask_from_infobits(xlrec->infobits_set, &htup->t_infomask,
 								   &htup->t_infomask2);
@@ -9072,7 +9681,9 @@ heap_xlog_lock_updated(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		fix_infomask_from_infobits(xlrec->infobits_set, &htup->t_infomask,
 								   &htup->t_infomask2);
@@ -9141,6 +9752,9 @@ heap_redo(XLogReaderState *record)
 		case XLOG_HEAP_INSERT:
 			heap_xlog_insert(record);
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			heap_xlog_multi_insert(record);
+			break;
 		case XLOG_HEAP_DELETE:
 			heap_xlog_delete(record);
 			break;
@@ -9169,7 +9783,7 @@ heap2_redo(XLogReaderState *record)
 {
 	uint8		info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
 
-	switch (info & XLOG_HEAP_OPMASK)
+	switch (info & XLOG_HEAP2_OPMASK)
 	{
 		case XLOG_HEAP2_CLEAN:
 			heap_xlog_clean(record);
@@ -9183,9 +9797,6 @@ heap2_redo(XLogReaderState *record)
 		case XLOG_HEAP2_VISIBLE:
 			heap_xlog_visible(record);
 			break;
-		case XLOG_HEAP2_MULTI_INSERT:
-			heap_xlog_multi_insert(record);
-			break;
 		case XLOG_HEAP2_LOCK_UPDATED:
 			heap_xlog_lock_updated(record);
 			break;
@@ -9199,6 +9810,9 @@ heap2_redo(XLogReaderState *record)
 		case XLOG_HEAP2_REWRITE:
 			heap_xlog_logical_rewrite(record);
 			break;
+		case XLOG_HEAP2_WARMCLEAR:
+			heap_xlog_warmclear(record);
+			break;
 		default:
 			elog(PANIC, "heap2_redo: unknown op code %u", info);
 	}
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index f54337c..6a3baff 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -834,6 +834,13 @@ heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				continue;
 
+			/*
+			 * If the tuple has a root line pointer, it must be the end of
+			 * the chain.
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			/* Set up to scan the HOT-chain */
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
@@ -928,6 +935,6 @@ heap_get_root_tuple(Page page, OffsetNumber target_offnum)
 void
 heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 {
-	return heap_get_root_tuples_internal(page, InvalidOffsetNumber,
+	heap_get_root_tuples_internal(page, InvalidOffsetNumber,
 			root_offsets);
 }
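
The pruneheap.c change above lets the root-tuple scan stop as soon as a tuple carries its own root line pointer, instead of always walking every chain from the start of the page — this is the mitigation for the "Root Pointer Search" problem Andres raised. A toy model of the two lookup paths (hypothetical toy_tuple structure, not the on-disk tuple header):

```c
#include <assert.h>

#define MAX_OFF		16
#define INVALID_OFF	0

/*
 * Toy tuple: a forward chain link, plus an optionally cached root offset
 * (the patch stores the root line pointer in the latest tuple of a chain).
 */
typedef struct
{
	int			next;			/* forward link, INVALID_OFF at chain end */
	int			root;			/* cached root offset, INVALID_OFF if unset */
} toy_tuple;

/*
 * Find the root offset for the tuple at 'off': use the cached value when
 * present, otherwise scan the chain from its start, as the pre-WARM code
 * has to do for every tuple on the page.
 */
static int
find_root(const toy_tuple *tuples, int chain_start, int off)
{
	int			cur;

	if (tuples[off].root != INVALID_OFF)
		return tuples[off].root;
	for (cur = chain_start; cur != INVALID_OFF; cur = tuples[cur].next)
		if (cur == off)
			return chain_start;
	return INVALID_OFF;
}
```

The cached-root fast path is why the internal scan can `break` out as soon as it meets a tuple for which HeapTupleHeaderHasRootOffset() is true.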
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index 2d3ae9b..bd469ee 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -404,6 +404,14 @@ rewrite_heap_tuple(RewriteState state,
 		old_tuple->t_data->t_infomask & HEAP_XACT_MASK;
 
 	/*
+	 * We must clear the HEAP_WARM_TUPLE flag if the HEAP_WARM_UPDATED flag
+	 * was cleared above.
+	 */
+	if (HeapTupleHeaderIsWarmUpdated(old_tuple->t_data))
+		HeapTupleHeaderClearWarm(new_tuple->t_data);
+
+
+	/*
 	 * While we have our hands on the tuple, we may as well freeze any
 	 * eligible xmin or xmax, so that future VACUUM effort can be saved.
 	 */
@@ -428,7 +436,7 @@ rewrite_heap_tuple(RewriteState state,
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
 
-		/* 
+		/*
 		 * We've already checked that this is not the last tuple in the chain,
 		 * so fetch the next TID in the chain.
 		 */
@@ -737,7 +745,7 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		/* 
+		/*
 		 * Set t_ctid just to ensure that block number is copied correctly, but
 		 * then immediately mark the tuple as the latest.
 		 */
diff --git a/src/backend/access/heap/tuptoaster.c b/src/backend/access/heap/tuptoaster.c
index aa5a45d..bab48fd 100644
--- a/src/backend/access/heap/tuptoaster.c
+++ b/src/backend/access/heap/tuptoaster.c
@@ -1688,7 +1688,8 @@ toast_save_datum(Relation rel, Datum value,
 							 toastrel,
 							 toastidxs[i]->rd_index->indisunique ?
 							 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-							 NULL);
+							 NULL,
+							 false);
 		}
 
 		/*
diff --git a/src/backend/access/index/genam.c b/src/backend/access/index/genam.c
index a91fda7..d523c8f 100644
--- a/src/backend/access/index/genam.c
+++ b/src/backend/access/index/genam.c
@@ -127,6 +127,8 @@ RelationGetIndexScan(Relation indexRelation, int nkeys, int norderbys)
 	scan->xs_cbuf = InvalidBuffer;
 	scan->xs_continue_hot = false;
 
+	scan->indexInfo = NULL;
+
 	return scan;
 }
 
diff --git a/src/backend/access/index/indexam.c b/src/backend/access/index/indexam.c
index cc5ac8b..d048714 100644
--- a/src/backend/access/index/indexam.c
+++ b/src/backend/access/index/indexam.c
@@ -197,7 +197,8 @@ index_insert(Relation indexRelation,
 			 ItemPointer heap_t_ctid,
 			 Relation heapRelation,
 			 IndexUniqueCheck checkUnique,
-			 IndexInfo *indexInfo)
+			 IndexInfo *indexInfo,
+			 bool warm_update)
 {
 	RELATION_CHECKS;
 	CHECK_REL_PROCEDURE(aminsert);
@@ -207,6 +208,12 @@ index_insert(Relation indexRelation,
 									   (HeapTuple) NULL,
 									   InvalidBuffer);
 
+	if (warm_update)
+	{
+		Assert(indexRelation->rd_amroutine->amwarminsert != NULL);
+		return indexRelation->rd_amroutine->amwarminsert(indexRelation, values,
+				isnull, heap_t_ctid, heapRelation, checkUnique, indexInfo);
+	}
 	return indexRelation->rd_amroutine->aminsert(indexRelation, values, isnull,
 												 heap_t_ctid, heapRelation,
 												 checkUnique, indexInfo);
@@ -291,6 +298,25 @@ index_beginscan_internal(Relation indexRelation,
 	scan->parallel_scan = pscan;
 	scan->xs_temp_snap = temp_snap;
 
+	/*
+	 * If the index supports recheck, make sure that the index tuple is saved
+	 * during index scans. Also build and cache IndexInfo which is used by
+	 * amrecheck routine.
+	 *
+	 * XXX Ideally, we should look at all indexes on the table and check if
+	 * WARM is at all supported on the base table. If WARM is not supported
+	 * then we don't need to do any recheck. RelationGetIndexAttrBitmap() does
+	 * do that and sets rd_supportswarm after looking at all indexes. But we
+	 * don't know if the function was called earlier in the session when we're
+	 * here. We can't call it now because doing so risks causing a
+	 * deadlock.
+	 */
+	if (indexRelation->rd_amroutine->amrecheck)
+	{
+		scan->xs_want_itup = true;
+		scan->indexInfo = BuildIndexInfo(indexRelation);
+	}
+
 	return scan;
 }
 
@@ -358,6 +384,10 @@ index_endscan(IndexScanDesc scan)
 	if (scan->xs_temp_snap)
 		UnregisterSnapshot(scan->xs_snapshot);
 
+	/* Free cached IndexInfo, if any */
+	if (scan->indexInfo)
+		pfree(scan->indexInfo);
+
 	/* Release the scan data structure itself */
 	IndexScanEnd(scan);
 }
@@ -535,7 +565,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
 	/*
 	 * The AM's amgettuple proc finds the next index entry matching the scan
 	 * keys, and puts the TID into scan->xs_ctup.t_self.  It should also set
-	 * scan->xs_recheck and possibly scan->xs_itup/scan->xs_hitup, though we
+	 * scan->xs_tuple_recheck and possibly scan->xs_itup/scan->xs_hitup, though we
 	 * pay no attention to those fields here.
 	 */
 	found = scan->indexRelation->rd_amroutine->amgettuple(scan, direction);
@@ -574,7 +604,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
  * dropped in a future index_getnext_tid, index_fetch_heap or index_endscan
  * call).
  *
- * Note: caller must check scan->xs_recheck, and perform rechecking of the
+ * Note: caller must check scan->xs_tuple_recheck, and perform rechecking of the
  * scan keys if required.  We do not do that here because we don't have
  * enough information to do it efficiently in the general case.
  * ----------------
@@ -585,6 +615,7 @@ index_fetch_heap(IndexScanDesc scan)
 	ItemPointer tid = &scan->xs_ctup.t_self;
 	bool		all_dead = false;
 	bool		got_heap_tuple;
+	bool		tuple_recheck;
 
 	/* We can skip the buffer-switching logic if we're in mid-HOT chain. */
 	if (!scan->xs_continue_hot)
@@ -603,6 +634,8 @@ index_fetch_heap(IndexScanDesc scan)
 			heap_page_prune_opt(scan->heapRelation, scan->xs_cbuf);
 	}
 
+	tuple_recheck = false;
+
 	/* Obtain share-lock on the buffer so we can examine visibility */
 	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_SHARE);
 	got_heap_tuple = heap_hot_search_buffer(tid, scan->heapRelation,
@@ -610,32 +643,60 @@ index_fetch_heap(IndexScanDesc scan)
 											scan->xs_snapshot,
 											&scan->xs_ctup,
 											&all_dead,
-											!scan->xs_continue_hot);
-	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+											!scan->xs_continue_hot,
+											&tuple_recheck);
 
 	if (got_heap_tuple)
 	{
+		bool res = true;
+
+		/*
+		 * OK, we got a tuple which satisfies the snapshot, but if it's part
+		 * of a WARM chain, we must do additional checks to ensure that we
+		 * are indeed returning a correct tuple. Note that if the index AM
+		 * does not implement the amrecheck method, we skip the additional
+		 * checks, since WARM must have been disabled on such tables.
+		 */
+		if (tuple_recheck && scan->xs_itup &&
+			scan->indexRelation->rd_amroutine->amrecheck)
+		{
+			res = scan->indexRelation->rd_amroutine->amrecheck(
+						scan->indexRelation,
+						scan->indexInfo,
+						scan->xs_itup,
+						scan->heapRelation,
+						&scan->xs_ctup);
+		}
+
+		LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+
 		/*
 		 * Only in a non-MVCC snapshot can more than one member of the HOT
 		 * chain be visible.
 		 */
 		scan->xs_continue_hot = !IsMVCCSnapshot(scan->xs_snapshot);
 		pgstat_count_heap_fetch(scan->indexRelation);
-		return &scan->xs_ctup;
+
+		if (res)
+			return &scan->xs_ctup;
 	}
+	else
+	{
+		LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
 
-	/* We've reached the end of the HOT chain. */
-	scan->xs_continue_hot = false;
+		/* We've reached the end of the HOT chain. */
+		scan->xs_continue_hot = false;
 
-	/*
-	 * If we scanned a whole HOT chain and found only dead tuples, tell index
-	 * AM to kill its entry for that TID (this will take effect in the next
-	 * amgettuple call, in index_getnext_tid).  We do not do this when in
-	 * recovery because it may violate MVCC to do so.  See comments in
-	 * RelationGetIndexScan().
-	 */
-	if (!scan->xactStartedInRecovery)
-		scan->kill_prior_tuple = all_dead;
+		/*
+		 * If we scanned a whole HOT chain and found only dead tuples, tell index
+		 * AM to kill its entry for that TID (this will take effect in the next
+		 * amgettuple call, in index_getnext_tid).  We do not do this when in
+		 * recovery because it may violate MVCC to do so.  See comments in
+		 * RelationGetIndexScan().
+		 */
+		if (!scan->xactStartedInRecovery)
+			scan->kill_prior_tuple = all_dead;
+	}
 
 	return NULL;
 }
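
index_insert and index_fetch_heap above dispatch through optional per-AM function pointers (amwarminsert, amrecheck), treating a NULL pointer as "this AM does not support WARM". A minimal sketch of that dispatch pattern, using a hypothetical am_routine table with toy int-keyed callbacks; note that the patch itself Asserts that amwarminsert exists for a WARM update rather than falling back:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Hypothetical miniature of the index AM routine table: amwarminsert is
 * optional, and a NULL pointer means the AM has no WARM support.
 */
typedef struct
{
	int			(*aminsert) (int key);
	int			(*amwarminsert) (int key);	/* may be NULL */
} am_routine;

/*
 * Dispatch as index_insert does: route WARM updates to amwarminsert when
 * the AM provides it; otherwise use the ordinary insert path.
 */
static int
do_insert(const am_routine *am, int key, int warm_update)
{
	if (warm_update && am->amwarminsert != NULL)
		return am->amwarminsert(key);
	return am->aminsert(key);
}

static int
plain_insert(int key)
{
	return key;
}

static int
warm_insert(int key)
{
	return -key;
}
```

The same optional-callback convention is what lets index_beginscan_internal decide whether to cache an IndexInfo: it only does so when `rd_amroutine->amrecheck` is non-NULL.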
diff --git a/src/backend/access/nbtree/nbtinsert.c b/src/backend/access/nbtree/nbtinsert.c
index 6dca810..463d4bf 100644
--- a/src/backend/access/nbtree/nbtinsert.c
+++ b/src/backend/access/nbtree/nbtinsert.c
@@ -20,6 +20,7 @@
 #include "access/nbtxlog.h"
 #include "access/transam.h"
 #include "access/xloginsert.h"
+#include "catalog/index.h"
 #include "miscadmin.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
@@ -250,6 +251,10 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 	BTPageOpaque opaque;
 	Buffer		nbuf = InvalidBuffer;
 	bool		found = false;
+	Buffer		buffer;
+	HeapTupleData	heapTuple;
+	bool		recheck = false;
+	IndexInfo	*indexInfo = BuildIndexInfo(rel);
 
 	/* Assume unique until we find a duplicate */
 	*is_unique = true;
@@ -309,6 +314,8 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				curitup = (IndexTuple) PageGetItem(page, curitemid);
 				htid = curitup->t_tid;
 
+				recheck = false;
+
 				/*
 				 * If we are doing a recheck, we expect to find the tuple we
 				 * are rechecking.  It's not a duplicate, but we have to keep
@@ -326,112 +333,153 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				 * have just a single index entry for the entire chain.
 				 */
 				else if (heap_hot_search(&htid, heapRel, &SnapshotDirty,
-										 &all_dead))
+							&all_dead, &recheck, &buffer,
+							&heapTuple))
 				{
 					TransactionId xwait;
+					bool result = true;
 
 					/*
-					 * It is a duplicate. If we are only doing a partial
-					 * check, then don't bother checking if the tuple is being
-					 * updated in another transaction. Just return the fact
-					 * that it is a potential conflict and leave the full
-					 * check till later.
+					 * If the tuple was WARM updated, we may again see our own
+					 * tuple. Since WARM updates don't create new index
+					 * entries, our own tuple is only reachable via the old
+					 * index pointer.
 					 */
-					if (checkUnique == UNIQUE_CHECK_PARTIAL)
+					if (checkUnique == UNIQUE_CHECK_EXISTING &&
+							ItemPointerCompare(&htid, &itup->t_tid) == 0)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						*is_unique = false;
-						return InvalidTransactionId;
+						found = true;
+						result = false;
+						if (recheck)
+							UnlockReleaseBuffer(buffer);
 					}
-
-					/*
-					 * If this tuple is being updated by other transaction
-					 * then we have to wait for its commit/abort.
-					 */
-					xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
-						SnapshotDirty.xmin : SnapshotDirty.xmax;
-
-					if (TransactionIdIsValid(xwait))
+					else if (recheck)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						/* Tell _bt_doinsert to wait... */
-						*speculativeToken = SnapshotDirty.speculativeToken;
-						return xwait;
+						result = btrecheck(rel, indexInfo, curitup, heapRel, &heapTuple);
+						UnlockReleaseBuffer(buffer);
 					}
 
-					/*
-					 * Otherwise we have a definite conflict.  But before
-					 * complaining, look to see if the tuple we want to insert
-					 * is itself now committed dead --- if so, don't complain.
-					 * This is a waste of time in normal scenarios but we must
-					 * do it to support CREATE INDEX CONCURRENTLY.
-					 *
-					 * We must follow HOT-chains here because during
-					 * concurrent index build, we insert the root TID though
-					 * the actual tuple may be somewhere in the HOT-chain.
-					 * While following the chain we might not stop at the
-					 * exact tuple which triggered the insert, but that's OK
-					 * because if we find a live tuple anywhere in this chain,
-					 * we have a unique key conflict.  The other live tuple is
-					 * not part of this chain because it had a different index
-					 * entry.
-					 */
-					htid = itup->t_tid;
-					if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL))
-					{
-						/* Normal case --- it's still live */
-					}
-					else
+					if (result)
 					{
 						/*
-						 * It's been deleted, so no error, and no need to
-						 * continue searching
+						 * It is a duplicate. If we are only doing a partial
+						 * check, then don't bother checking whether the tuple
+						 * is being updated by another transaction. Just return
+						 * the fact that it is a potential conflict and leave
+						 * the full check until later.
 						 */
-						break;
-					}
+						if (checkUnique == UNIQUE_CHECK_PARTIAL)
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							*is_unique = false;
+							return InvalidTransactionId;
+						}
 
-					/*
-					 * Check for a conflict-in as we would if we were going to
-					 * write to this page.  We aren't actually going to write,
-					 * but we want a chance to report SSI conflicts that would
-					 * otherwise be masked by this unique constraint
-					 * violation.
-					 */
-					CheckForSerializableConflictIn(rel, NULL, buf);
+						/*
+						 * If this tuple is being updated by other transaction
+						 * then we have to wait for its commit/abort.
+						 */
+						xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
+							SnapshotDirty.xmin : SnapshotDirty.xmax;
+
+						if (TransactionIdIsValid(xwait))
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							/* Tell _bt_doinsert to wait... */
+							*speculativeToken = SnapshotDirty.speculativeToken;
+							return xwait;
+						}
 
-					/*
-					 * This is a definite conflict.  Break the tuple down into
-					 * datums and report the error.  But first, make sure we
-					 * release the buffer locks we're holding ---
-					 * BuildIndexValueDescription could make catalog accesses,
-					 * which in the worst case might touch this same index and
-					 * cause deadlocks.
-					 */
-					if (nbuf != InvalidBuffer)
-						_bt_relbuf(rel, nbuf);
-					_bt_relbuf(rel, buf);
+						/*
+						 * Otherwise we have a definite conflict.  But before
+						 * complaining, look to see if the tuple we want to insert
+						 * is itself now committed dead --- if so, don't complain.
+						 * This is a waste of time in normal scenarios but we must
+						 * do it to support CREATE INDEX CONCURRENTLY.
+						 *
+						 * We must follow HOT-chains here because during
+						 * concurrent index build, we insert the root TID though
+						 * the actual tuple may be somewhere in the HOT-chain.
+						 * While following the chain we might not stop at the
+						 * exact tuple which triggered the insert, but that's OK
+						 * because if we find a live tuple anywhere in this chain,
+						 * we have a unique key conflict.  The other live tuple is
+						 * not part of this chain because it had a different index
+						 * entry.
+						 */
+						recheck = false;
+						ItemPointerCopy(&itup->t_tid, &htid);
+						if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL,
+									&recheck, &buffer, &heapTuple))
+						{
+							bool result = true;
+							if (recheck)
+							{
+								/*
+								 * Recheck whether the tuple actually satisfies
+								 * the index key. Otherwise, we might be
+								 * following a wrong index pointer and must not
+								 * consider this tuple.
+								 */
+								result = btrecheck(rel, indexInfo, itup, heapRel, &heapTuple);
+								UnlockReleaseBuffer(buffer);
+							}
+							if (!result)
+								break;
+							/* Normal case --- it's still live */
+						}
+						else
+						{
+							/*
+							 * It's been deleted, so no error, and no need to
+							 * continue searching.
+							 */
+							break;
+						}
 
-					{
-						Datum		values[INDEX_MAX_KEYS];
-						bool		isnull[INDEX_MAX_KEYS];
-						char	   *key_desc;
-
-						index_deform_tuple(itup, RelationGetDescr(rel),
-										   values, isnull);
-
-						key_desc = BuildIndexValueDescription(rel, values,
-															  isnull);
-
-						ereport(ERROR,
-								(errcode(ERRCODE_UNIQUE_VIOLATION),
-								 errmsg("duplicate key value violates unique constraint \"%s\"",
-										RelationGetRelationName(rel)),
-							   key_desc ? errdetail("Key %s already exists.",
-													key_desc) : 0,
-								 errtableconstraint(heapRel,
-											 RelationGetRelationName(rel))));
+						/*
+						 * Check for a conflict-in as we would if we were going to
+						 * write to this page.  We aren't actually going to write,
+						 * but we want a chance to report SSI conflicts that would
+						 * otherwise be masked by this unique constraint
+						 * violation.
+						 */
+						CheckForSerializableConflictIn(rel, NULL, buf);
+
+						/*
+						 * This is a definite conflict.  Break the tuple down into
+						 * datums and report the error.  But first, make sure we
+						 * release the buffer locks we're holding ---
+						 * BuildIndexValueDescription could make catalog accesses,
+						 * which in the worst case might touch this same index and
+						 * cause deadlocks.
+						 */
+						if (nbuf != InvalidBuffer)
+							_bt_relbuf(rel, nbuf);
+						_bt_relbuf(rel, buf);
+
+						{
+							Datum		values[INDEX_MAX_KEYS];
+							bool		isnull[INDEX_MAX_KEYS];
+							char	   *key_desc;
+
+							index_deform_tuple(itup, RelationGetDescr(rel),
+									values, isnull);
+
+							key_desc = BuildIndexValueDescription(rel, values,
+									isnull);
+
+							ereport(ERROR,
+									(errcode(ERRCODE_UNIQUE_VIOLATION),
+									 errmsg("duplicate key value violates unique constraint \"%s\"",
+										 RelationGetRelationName(rel)),
+									 key_desc ? errdetail("Key %s already exists.",
+										 key_desc) : 0,
+									 errtableconstraint(heapRel,
+										 RelationGetRelationName(rel))));
+						}
 					}
 				}
 				else if (all_dead)
diff --git a/src/backend/access/nbtree/nbtpage.c b/src/backend/access/nbtree/nbtpage.c
index f815fd4..061c8d4 100644
--- a/src/backend/access/nbtree/nbtpage.c
+++ b/src/backend/access/nbtree/nbtpage.c
@@ -766,29 +766,20 @@ _bt_page_recyclable(Page page)
 }
 
 /*
- * Delete item(s) from a btree page during VACUUM.
+ * Delete item(s) and clear WARM item(s) on a btree page during VACUUM.
  *
  * This must only be used for deleting leaf items.  Deleting an item on a
  * non-leaf page has to be done as part of an atomic action that includes
- * deleting the page it points to.
+ * deleting the page it points to. We don't ever clear pointers on a non-leaf
+ * page.
  *
  * This routine assumes that the caller has pinned and locked the buffer.
  * Also, the given itemnos *must* appear in increasing order in the array.
- *
- * We record VACUUMs and b-tree deletes differently in WAL. InHotStandby
- * we need to be able to pin all of the blocks in the btree in physical
- * order when replaying the effects of a VACUUM, just as we do for the
- * original VACUUM itself. lastBlockVacuumed allows us to tell whether an
- * intermediate range of blocks has had no changes at all by VACUUM,
- * and so must be scanned anyway during replay. We always write a WAL record
- * for the last block in the index, whether or not it contained any items
- * to be removed. This allows us to scan right up to end of index to
- * ensure correct locking.
  */
 void
-_bt_delitems_vacuum(Relation rel, Buffer buf,
-					OffsetNumber *itemnos, int nitems,
-					BlockNumber lastBlockVacuumed)
+_bt_handleitems_vacuum(Relation rel, Buffer buf,
+					OffsetNumber *delitemnos, int ndelitems,
+					OffsetNumber *clearitemnos, int nclearitems)
 {
 	Page		page = BufferGetPage(buf);
 	BTPageOpaque opaque;
@@ -796,9 +787,20 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 	/* No ereport(ERROR) until changes are logged */
 	START_CRIT_SECTION();
 
+	/*
+	 * Clear the WARM pointers.
+	 *
+	 * We must do this before dealing with the dead items because
+	 * PageIndexMultiDelete may move items around to compactify the array,
+	 * and hence the offnums recorded earlier would no longer be valid after
+	 * PageIndexMultiDelete is called.
+	 */
+	if (nclearitems > 0)
+		_bt_clear_items(page, clearitemnos, nclearitems);
+
 	/* Fix the page */
-	if (nitems > 0)
-		PageIndexMultiDelete(page, itemnos, nitems);
+	if (ndelitems > 0)
+		PageIndexMultiDelete(page, delitemnos, ndelitems);
 
 	/*
 	 * We can clear the vacuum cycle ID since this page has certainly been
@@ -824,7 +826,8 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 		XLogRecPtr	recptr;
 		xl_btree_vacuum xlrec_vacuum;
 
-		xlrec_vacuum.lastBlockVacuumed = lastBlockVacuumed;
+		xlrec_vacuum.ndelitems = ndelitems;
+		xlrec_vacuum.nclearitems = nclearitems;
 
 		XLogBeginInsert();
 		XLogRegisterBuffer(0, buf, REGBUF_STANDARD);
@@ -835,8 +838,11 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 		 * is.  When XLogInsert stores the whole buffer, the offsets array
 		 * need not be stored too.
 		 */
-		if (nitems > 0)
-			XLogRegisterBufData(0, (char *) itemnos, nitems * sizeof(OffsetNumber));
+		if (ndelitems > 0)
+			XLogRegisterBufData(0, (char *) delitemnos, ndelitems * sizeof(OffsetNumber));
+
+		if (nclearitems > 0)
+			XLogRegisterBufData(0, (char *) clearitemnos, nclearitems * sizeof(OffsetNumber));
 
 		recptr = XLogInsert(RM_BTREE_ID, XLOG_BTREE_VACUUM);
 
@@ -1882,3 +1888,13 @@ _bt_unlink_halfdead_page(Relation rel, Buffer leafbuf, bool *rightsib_empty)
 
 	return true;
 }
+
+/*
+ * Currently just a wrapper around PageIndexClearWarmTuples, but in theory each
+ * index may have its own way to handle WARM tuples.
+ */
+void
+_bt_clear_items(Page page, OffsetNumber *clearitemnos, uint16 nclearitems)
+{
+	PageIndexClearWarmTuples(page, clearitemnos, nclearitems);
+}
diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c
index 775f2ff..6d558af 100644
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -146,6 +146,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->ambuild = btbuild;
 	amroutine->ambuildempty = btbuildempty;
 	amroutine->aminsert = btinsert;
+	amroutine->amwarminsert = btwarminsert;
 	amroutine->ambulkdelete = btbulkdelete;
 	amroutine->amvacuumcleanup = btvacuumcleanup;
 	amroutine->amcanreturn = btcanreturn;
@@ -163,6 +164,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = btestimateparallelscan;
 	amroutine->aminitparallelscan = btinitparallelscan;
 	amroutine->amparallelrescan = btparallelrescan;
+	amroutine->amrecheck = btrecheck;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -315,11 +317,12 @@ btbuildempty(Relation index)
  *		Descend the tree recursively, find the appropriate location for our
  *		new tuple, and put it there.
  */
-bool
-btinsert(Relation rel, Datum *values, bool *isnull,
+static bool
+btinsert_internal(Relation rel, Datum *values, bool *isnull,
 		 ItemPointer ht_ctid, Relation heapRel,
 		 IndexUniqueCheck checkUnique,
-		 IndexInfo *indexInfo)
+		 IndexInfo *indexInfo,
+		 bool warm_update)
 {
 	bool		result;
 	IndexTuple	itup;
@@ -328,6 +331,11 @@ btinsert(Relation rel, Datum *values, bool *isnull,
 	itup = index_form_tuple(RelationGetDescr(rel), values, isnull);
 	itup->t_tid = *ht_ctid;
 
+	if (warm_update)
+		ItemPointerSetFlags(&itup->t_tid, BTREE_INDEX_WARM_POINTER);
+	else
+		ItemPointerClearFlags(&itup->t_tid);
+
 	result = _bt_doinsert(rel, itup, checkUnique, heapRel);
 
 	pfree(itup);
@@ -335,6 +343,26 @@ btinsert(Relation rel, Datum *values, bool *isnull,
 	return result;
 }
 
+bool
+btinsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 IndexInfo *indexInfo)
+{
+	return btinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, false);
+}
+
+bool
+btwarminsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 IndexInfo *indexInfo)
+{
+	return btinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, true);
+}
+
 /*
  *	btgettuple() -- Get the next tuple in the scan.
  */
@@ -1103,7 +1131,7 @@ btvacuumscan(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 								 RBM_NORMAL, info->strategy);
 		LockBufferForCleanup(buf);
 		_bt_checkpage(rel, buf);
-		_bt_delitems_vacuum(rel, buf, NULL, 0, vstate.lastBlockVacuumed);
+		_bt_handleitems_vacuum(rel, buf, NULL, 0, NULL, 0);
 		_bt_relbuf(rel, buf);
 	}
 
@@ -1201,6 +1229,8 @@ restart:
 	{
 		OffsetNumber deletable[MaxOffsetNumber];
 		int			ndeletable;
+		OffsetNumber clearwarm[MaxOffsetNumber];
+		int			nclearwarm;
 		OffsetNumber offnum,
 					minoff,
 					maxoff;
@@ -1239,7 +1269,7 @@ restart:
 		 * Scan over all items to see which ones need deleted according to the
 		 * callback function.
 		 */
-		ndeletable = 0;
+		ndeletable = nclearwarm = 0;
 		minoff = P_FIRSTDATAKEY(opaque);
 		maxoff = PageGetMaxOffsetNumber(page);
 		if (callback)
@@ -1250,6 +1280,9 @@ restart:
 			{
 				IndexTuple	itup;
 				ItemPointer htup;
+				int			flags;
+				bool		is_warm = false;
+				IndexBulkDeleteCallbackResult	result;
 
 				itup = (IndexTuple) PageGetItem(page,
 												PageGetItemId(page, offnum));
@@ -1276,16 +1309,36 @@ restart:
 				 * applies to *any* type of index that marks index tuples as
 				 * killed.
 				 */
-				if (callback(htup, callback_state))
+				flags = ItemPointerGetFlags(&itup->t_tid);
+				is_warm = ((flags & BTREE_INDEX_WARM_POINTER) != 0);
+
+				if (is_warm)
+					stats->num_warm_pointers++;
+				else
+					stats->num_clear_pointers++;
+
+				result = callback(htup, is_warm, callback_state);
+				if (result == IBDCR_DELETE)
+				{
+					if (is_warm)
+						stats->warm_pointers_removed++;
+					else
+						stats->clear_pointers_removed++;
 					deletable[ndeletable++] = offnum;
+				}
+				else if (result == IBDCR_CLEAR_WARM)
+				{
+					clearwarm[nclearwarm++] = offnum;
+				}
 			}
 		}
 
 		/*
-		 * Apply any needed deletes.  We issue just one _bt_delitems_vacuum()
-		 * call per page, so as to minimize WAL traffic.
+		 * Apply any needed deletes and clearing.  We issue just one
+		 * _bt_handleitems_vacuum() call per page, so as to minimize WAL
+		 * traffic.
 		 */
-		if (ndeletable > 0)
+		if (ndeletable > 0 || nclearwarm > 0)
 		{
 			/*
 			 * Notice that the issued XLOG_BTREE_VACUUM WAL record includes
@@ -1301,8 +1354,8 @@ restart:
 			 * doesn't seem worth the amount of bookkeeping it'd take to avoid
 			 * that.
 			 */
-			_bt_delitems_vacuum(rel, buf, deletable, ndeletable,
-								vstate->lastBlockVacuumed);
+			_bt_handleitems_vacuum(rel, buf, deletable, ndeletable,
+								clearwarm, nclearwarm);
 
 			/*
 			 * Remember highest leaf page number we've issued a
@@ -1312,6 +1365,7 @@ restart:
 				vstate->lastBlockVacuumed = blkno;
 
 			stats->tuples_removed += ndeletable;
+			stats->pointers_cleared += nclearwarm;
 			/* must recompute maxoff */
 			maxoff = PageGetMaxOffsetNumber(page);
 		}
diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c
index 5b259a3..bd63a38 100644
--- a/src/backend/access/nbtree/nbtutils.c
+++ b/src/backend/access/nbtree/nbtutils.c
@@ -20,11 +20,14 @@
 #include "access/nbtree.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "access/tuptoaster.h"
+#include "catalog/index.h"
 #include "miscadmin.h"
 #include "utils/array.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 
 typedef struct BTSortArrayContext
@@ -2069,3 +2072,100 @@ btproperty(Oid index_oid, int attno,
 			return false;		/* punt to generic code */
 	}
 }
+
+/*
+ * Check if the index tuple's key matches the one computed from the given heap
+ * tuple's attributes.
+ */
+bool
+btrecheck(Relation indexRel, IndexInfo *indexInfo, IndexTuple indexTuple1,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	bool		isavail[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+	IndexTuple	indexTuple2;
+
+	/*
+	 * Currently we don't allow enable_warm to be turned OFF after the table is
+	 * created. But if we ever do that, this assert must be removed since we
+	 * must exercise recheck for all existing WARM chains.
+	 */
+	Assert(RelationWarmUpdatesEnabled(heapRel));
+
+	/*
+	 * Get the index values, except for expression attributes. Since WARM is
+	 * not used when a column used by expressions in an index is modified, we
+	 * can safely assume that those index attributes are never changed by a
+	 * WARM update.
+	 *
+	 * We cannot use FormIndexDatum here because that requires access to
+	 * executor state and we don't have that here.
+	 */
+	FormIndexPlainDatum(indexInfo, heapRel, heapTuple, values, isnull, isavail);
+
+	/*
+	 * Form an index tuple from the heap values first. This allows us to then
+	 * fetch index attributes from the current index tuple and from the one
+	 * formed from the heap values, and then do a binary comparison using
+	 * datumIsEqual().
+	 *
+	 * This takes care of doing the right comparison for compressed index
+	 * attributes (we just compare the compressed versions in both tuples) and
+	 * also ensures that we correctly detoast heap values, if need be.
+	 */
+	indexTuple2 = index_form_tuple(RelationGetDescr(indexRel), values, isnull);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue1;
+		bool	indxisnull1;
+		Datum	indxvalue2;
+		bool	indxisnull2;
+
+		/* No need to compare if the attribute value is not available */
+		if (!isavail[i - 1])
+			continue;
+
+		indxvalue1 = index_getattr(indexTuple1, i, indexRel->rd_att,
+								   &indxisnull1);
+		indxvalue2 = index_getattr(indexTuple2, i, indexRel->rd_att,
+								   &indxisnull2);
+
+		/*
+		 * If both are NULL, then they are equal
+		 */
+		if (indxisnull1 && indxisnull2)
+			continue;
+
+		/*
+		 * If just one is NULL, then they are not equal
+		 */
+		if (indxisnull1 || indxisnull2)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now just do a raw memory comparison. If the index tuple was formed
+		 * using this heap tuple, the computed index values must match
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(indxvalue1, indxvalue2, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	pfree(indexTuple2);
+
+	return equal;
+}
diff --git a/src/backend/access/nbtree/nbtxlog.c b/src/backend/access/nbtree/nbtxlog.c
index ac60db0..ef24738 100644
--- a/src/backend/access/nbtree/nbtxlog.c
+++ b/src/backend/access/nbtree/nbtxlog.c
@@ -390,8 +390,8 @@ btree_xlog_vacuum(XLogReaderState *record)
 	Buffer		buffer;
 	Page		page;
 	BTPageOpaque opaque;
-#ifdef UNUSED
 	xl_btree_vacuum *xlrec = (xl_btree_vacuum *) XLogRecGetData(record);
+#ifdef UNUSED
 
 	/*
 	 * This section of code is thought to be no longer needed, after analysis
@@ -482,19 +482,30 @@ btree_xlog_vacuum(XLogReaderState *record)
 
 		if (len > 0)
 		{
-			OffsetNumber *unused;
-			OffsetNumber *unend;
+			OffsetNumber *offnums = (OffsetNumber *) ptr;
 
-			unused = (OffsetNumber *) ptr;
-			unend = (OffsetNumber *) ((char *) ptr + len);
+			/*
+			 * Clear the WARM pointers.
+			 *
+			 * We must do this before dealing with the dead items because
+			 * PageIndexMultiDelete may move items around to compactify the
+			 * array, and hence the offnums recorded earlier would no longer
+			 * be valid after PageIndexMultiDelete is called.
+			 */
+			if (xlrec->nclearitems > 0)
+				_bt_clear_items(page, offnums + xlrec->ndelitems,
+						xlrec->nclearitems);
 
-			if ((unend - unused) > 0)
-				PageIndexMultiDelete(page, unused, unend - unused);
+			/*
+			 * And handle the deleted items too
+			 */
+			if (xlrec->ndelitems > 0)
+				PageIndexMultiDelete(page, offnums, xlrec->ndelitems);
 		}
 
 		/*
 		 * Mark the page as not containing any LP_DEAD items --- see comments
-		 * in _bt_delitems_vacuum().
+		 * in _bt_handleitems_vacuum().
 		 */
 		opaque = (BTPageOpaque) PageGetSpecialPointer(page);
 		opaque->btpo_flags &= ~BTP_HAS_GARBAGE;
diff --git a/src/backend/access/rmgrdesc/heapdesc.c b/src/backend/access/rmgrdesc/heapdesc.c
index 44d2d63..d373e61 100644
--- a/src/backend/access/rmgrdesc/heapdesc.c
+++ b/src/backend/access/rmgrdesc/heapdesc.c
@@ -44,6 +44,12 @@ heap_desc(StringInfo buf, XLogReaderState *record)
 
 		appendStringInfo(buf, "off %u", xlrec->offnum);
 	}
+	else if (info == XLOG_HEAP_MULTI_INSERT)
+	{
+		xl_heap_multi_insert *xlrec = (xl_heap_multi_insert *) rec;
+
+		appendStringInfo(buf, "%d tuples", xlrec->ntuples);
+	}
 	else if (info == XLOG_HEAP_DELETE)
 	{
 		xl_heap_delete *xlrec = (xl_heap_delete *) rec;
@@ -102,7 +108,7 @@ heap2_desc(StringInfo buf, XLogReaderState *record)
 	char	   *rec = XLogRecGetData(record);
 	uint8		info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
 
-	info &= XLOG_HEAP_OPMASK;
+	info &= XLOG_HEAP2_OPMASK;
 	if (info == XLOG_HEAP2_CLEAN)
 	{
 		xl_heap_clean *xlrec = (xl_heap_clean *) rec;
@@ -129,12 +135,6 @@ heap2_desc(StringInfo buf, XLogReaderState *record)
 		appendStringInfo(buf, "cutoff xid %u flags %d",
 						 xlrec->cutoff_xid, xlrec->flags);
 	}
-	else if (info == XLOG_HEAP2_MULTI_INSERT)
-	{
-		xl_heap_multi_insert *xlrec = (xl_heap_multi_insert *) rec;
-
-		appendStringInfo(buf, "%d tuples", xlrec->ntuples);
-	}
 	else if (info == XLOG_HEAP2_LOCK_UPDATED)
 	{
 		xl_heap_lock_updated *xlrec = (xl_heap_lock_updated *) rec;
@@ -171,6 +171,12 @@ heap_identify(uint8 info)
 		case XLOG_HEAP_INSERT | XLOG_HEAP_INIT_PAGE:
 			id = "INSERT+INIT";
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			id = "MULTI_INSERT";
+			break;
+		case XLOG_HEAP_MULTI_INSERT | XLOG_HEAP_INIT_PAGE:
+			id = "MULTI_INSERT+INIT";
+			break;
 		case XLOG_HEAP_DELETE:
 			id = "DELETE";
 			break;
@@ -219,12 +225,6 @@ heap2_identify(uint8 info)
 		case XLOG_HEAP2_VISIBLE:
 			id = "VISIBLE";
 			break;
-		case XLOG_HEAP2_MULTI_INSERT:
-			id = "MULTI_INSERT";
-			break;
-		case XLOG_HEAP2_MULTI_INSERT | XLOG_HEAP_INIT_PAGE:
-			id = "MULTI_INSERT+INIT";
-			break;
 		case XLOG_HEAP2_LOCK_UPDATED:
 			id = "LOCK_UPDATED";
 			break;
diff --git a/src/backend/access/rmgrdesc/nbtdesc.c b/src/backend/access/rmgrdesc/nbtdesc.c
index fbde9d6..6b2c5d6 100644
--- a/src/backend/access/rmgrdesc/nbtdesc.c
+++ b/src/backend/access/rmgrdesc/nbtdesc.c
@@ -48,8 +48,8 @@ btree_desc(StringInfo buf, XLogReaderState *record)
 			{
 				xl_btree_vacuum *xlrec = (xl_btree_vacuum *) rec;
 
-				appendStringInfo(buf, "lastBlockVacuumed %u",
-								 xlrec->lastBlockVacuumed);
+				appendStringInfo(buf, "ndelitems %u, nclearitems %u",
+								 xlrec->ndelitems, xlrec->nclearitems);
 				break;
 			}
 		case XLOG_BTREE_DELETE:
diff --git a/src/backend/access/spgist/spgutils.c b/src/backend/access/spgist/spgutils.c
index e57ac49..59ef7f3 100644
--- a/src/backend/access/spgist/spgutils.c
+++ b/src/backend/access/spgist/spgutils.c
@@ -72,6 +72,7 @@ spghandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/spgist/spgvacuum.c b/src/backend/access/spgist/spgvacuum.c
index cce9b3f..711d351 100644
--- a/src/backend/access/spgist/spgvacuum.c
+++ b/src/backend/access/spgist/spgvacuum.c
@@ -155,7 +155,8 @@ vacuumLeafPage(spgBulkDeleteState *bds, Relation index, Buffer buffer,
 		{
 			Assert(ItemPointerIsValid(&lt->heapPtr));
 
-			if (bds->callback(&lt->heapPtr, bds->callback_state))
+			if (bds->callback(&lt->heapPtr, false, bds->callback_state) ==
+					IBDCR_DELETE)
 			{
 				bds->stats->tuples_removed += 1;
 				deletable[i] = true;
@@ -425,7 +426,8 @@ vacuumLeafRoot(spgBulkDeleteState *bds, Relation index, Buffer buffer)
 		{
 			Assert(ItemPointerIsValid(&lt->heapPtr));
 
-			if (bds->callback(&lt->heapPtr, bds->callback_state))
+			if (bds->callback(&lt->heapPtr, false, bds->callback_state) ==
+					IBDCR_DELETE)
 			{
 				bds->stats->tuples_removed += 1;
 				toDelete[xlrec.nDelete] = i;
@@ -902,10 +904,10 @@ spgbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 }
 
 /* Dummy callback to delete no tuples during spgvacuumcleanup */
-static bool
-dummy_callback(ItemPointer itemptr, void *state)
+static IndexBulkDeleteCallbackResult
+dummy_callback(ItemPointer itemptr, bool is_warm, void *state)
 {
-	return false;
+	return IBDCR_KEEP;
 }
 
 /*
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 1eb163f..2c27661 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -54,6 +54,7 @@
 #include "nodes/makefuncs.h"
 #include "nodes/nodeFuncs.h"
 #include "optimizer/clauses.h"
+#include "optimizer/var.h"
 #include "parser/parser.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
@@ -114,7 +115,7 @@ static void IndexCheckExclusion(Relation heapRelation,
 					IndexInfo *indexInfo);
 static inline int64 itemptr_encode(ItemPointer itemptr);
 static inline void itemptr_decode(ItemPointer itemptr, int64 encoded);
-static bool validate_index_callback(ItemPointer itemptr, void *opaque);
+static IndexBulkDeleteCallbackResult validate_index_callback(ItemPointer itemptr, bool is_warm, void *opaque);
 static void validate_index_heapscan(Relation heapRelation,
 						Relation indexRelation,
 						IndexInfo *indexInfo,
@@ -1691,6 +1692,20 @@ BuildIndexInfo(Relation index)
 	ii->ii_AmCache = NULL;
 	ii->ii_Context = CurrentMemoryContext;
 
+	/* build a bitmap of all table attributes referenced by this index */
+	for (i = 0; i < ii->ii_NumIndexAttrs; i++)
+	{
+		AttrNumber attr = ii->ii_KeyAttrNumbers[i];
+		ii->ii_indxattrs = bms_add_member(ii->ii_indxattrs, attr -
+				FirstLowInvalidHeapAttributeNumber);
+	}
+
+	/* Collect all attributes used in expressions, too */
+	pull_varattnos((Node *) ii->ii_Expressions, 1, &ii->ii_indxattrs);
+
+	/* Collect all attributes in the index predicate, too */
+	pull_varattnos((Node *) ii->ii_Predicate, 1, &ii->ii_indxattrs);
+
 	return ii;
 }
 
@@ -1815,6 +1830,51 @@ FormIndexDatum(IndexInfo *indexInfo,
 		elog(ERROR, "wrong number of index expressions");
 }
 
+/*
+ * This is the same as FormIndexDatum, except that we don't compute any
+ * expression attributes; hence it can be used when executor interfaces are not
+ * available. If the i'th attribute is available, isavail[i] is set to true;
+ * otherwise it is set to false. The caller must always check whether an
+ * attribute value is available before trying to use it.
+ */
+void
+FormIndexPlainDatum(IndexInfo *indexInfo,
+			   Relation heapRel,
+			   HeapTuple heapTup,
+			   Datum *values,
+			   bool *isnull,
+			   bool *isavail)
+{
+	int			i;
+
+	for (i = 0; i < indexInfo->ii_NumIndexAttrs; i++)
+	{
+		int			keycol = indexInfo->ii_KeyAttrNumbers[i];
+		Datum		iDatum;
+		bool		isNull;
+
+		if (keycol != 0)
+		{
+			/*
+			 * Plain index column; get the value we need directly from the
+			 * heap tuple.
+			 */
+			iDatum = heap_getattr(heapTup, keycol, RelationGetDescr(heapRel), &isNull);
+			values[i] = iDatum;
+			isnull[i] = isNull;
+			isavail[i] = true;
+		}
+		else
+		{
+			/*
+			 * This is an expression attribute which we can't compute here.
+			 * So just inform the caller about it.
+			 */
+			isavail[i] = false;
+			isnull[i] = true;
+		}
+	}
+}
 
 /*
  * index_update_stats --- update pg_class entry after CREATE INDEX or REINDEX
@@ -2929,15 +2989,15 @@ itemptr_decode(ItemPointer itemptr, int64 encoded)
 /*
  * validate_index_callback - bulkdelete callback to collect the index TIDs
  */
-static bool
-validate_index_callback(ItemPointer itemptr, void *opaque)
+static IndexBulkDeleteCallbackResult
+validate_index_callback(ItemPointer itemptr, bool is_warm, void *opaque)
 {
 	v_i_state  *state = (v_i_state *) opaque;
 	int64		encoded = itemptr_encode(itemptr);
 
 	tuplesort_putdatum(state->tuplesort, Int64GetDatum(encoded), false);
 	state->itups += 1;
-	return false;				/* never actually delete anything */
+	return IBDCR_KEEP;				/* never actually delete anything */
 }
 
 /*
@@ -3156,7 +3216,8 @@ validate_index_heapscan(Relation heapRelation,
 						 heapRelation,
 						 indexInfo->ii_Unique ?
 						 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-						 indexInfo);
+						 indexInfo,
+						 false);
 
 			state->tups_inserted += 1;
 		}
diff --git a/src/backend/catalog/indexing.c b/src/backend/catalog/indexing.c
index abc344a..6392f33 100644
--- a/src/backend/catalog/indexing.c
+++ b/src/backend/catalog/indexing.c
@@ -66,10 +66,15 @@ CatalogCloseIndexes(CatalogIndexState indstate)
  *
  * This should be called for each inserted or updated catalog tuple.
  *
+ * If the tuple was WARM updated, modified_attrs contains the set of columns
+ * changed by the update. We must not insert new index entries for indexes
+ * that do not reference any of the modified columns.
+ *
  * This is effectively a cut-down version of ExecInsertIndexTuples.
  */
 static void
-CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
+CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple,
+		Bitmapset *modified_attrs, bool warm_update)
 {
 	int			i;
 	int			numIndexes;
@@ -79,12 +84,28 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 	IndexInfo **indexInfoArray;
 	Datum		values[INDEX_MAX_KEYS];
 	bool		isnull[INDEX_MAX_KEYS];
+	ItemPointerData root_tid;
 
-	/* HOT update does not require index inserts */
-	if (HeapTupleIsHeapOnly(heapTuple))
+	/*
+	 * A HOT update does not require index inserts, but a WARM update may
+	 * require them for some indexes.
+	 */
+	if (HeapTupleIsHeapOnly(heapTuple) && !warm_update)
 		return;
 
 	/*
+	 * If we've done a WARM update, then we must index the TID of the root line
+	 * pointer and not the actual TID of the new tuple.
+	 */
+	if (warm_update)
+		ItemPointerSet(&root_tid,
+				ItemPointerGetBlockNumber(&(heapTuple->t_self)),
+				HeapTupleHeaderGetRootOffset(heapTuple->t_data));
+	else
+		ItemPointerCopy(&heapTuple->t_self, &root_tid);
+
+
+	/*
 	 * Get information from the state structure.  Fall out if nothing to do.
 	 */
 	numIndexes = indstate->ri_NumIndices;
@@ -112,6 +133,17 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 			continue;
 
 		/*
+		 * If we've done a WARM update, then we must not insert a new index tuple
+		 * if none of the index keys have changed. This is not just an
+		 * optimization, but a requirement for WARM to work correctly.
+		 */
+		if (warm_update)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
+		/*
 		 * Expressional and partial indexes on system catalogs are not
 		 * supported, nor exclusion constraints, nor deferred uniqueness
 		 */
@@ -136,11 +168,12 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 		index_insert(relationDescs[i],	/* index relation */
 					 values,	/* array of index Datums */
 					 isnull,	/* is-null flags */
-					 &(heapTuple->t_self),		/* tid of heap tuple */
+					 &root_tid,
 					 heapRelation,
 					 relationDescs[i]->rd_index->indisunique ?
 					 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-					 indexInfo);
+					 indexInfo,
+					 warm_update);
 	}
 
 	ExecDropSingleTupleTableSlot(slot);
@@ -168,7 +201,7 @@ CatalogTupleInsert(Relation heapRel, HeapTuple tup)
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 	CatalogCloseIndexes(indstate);
 
 	return oid;
@@ -190,7 +223,7 @@ CatalogTupleInsertWithInfo(Relation heapRel, HeapTuple tup,
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 
 	return oid;
 }
@@ -210,12 +243,14 @@ void
 CatalogTupleUpdate(Relation heapRel, ItemPointer otid, HeapTuple tup)
 {
 	CatalogIndexState indstate;
+	bool	warm_update;
+	Bitmapset	*modified_attrs;
 
 	indstate = CatalogOpenIndexes(heapRel);
 
-	simple_heap_update(heapRel, otid, tup);
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, modified_attrs, warm_update);
 	CatalogCloseIndexes(indstate);
 }
 
@@ -231,9 +266,12 @@ void
 CatalogTupleUpdateWithInfo(Relation heapRel, ItemPointer otid, HeapTuple tup,
 						   CatalogIndexState indstate)
 {
-	simple_heap_update(heapRel, otid, tup);
+	Bitmapset  *modified_attrs;
+	bool		warm_update;
+
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, modified_attrs, warm_update);
 }
 
 /*
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 0217f39..4ef964f 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -530,6 +530,7 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_tuples_deleted(C.oid) AS n_tup_del,
             pg_stat_get_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_tuples_warm_updated(C.oid) AS n_tup_warm_upd,
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
@@ -560,7 +561,8 @@ CREATE VIEW pg_stat_xact_all_tables AS
             pg_stat_get_xact_tuples_inserted(C.oid) AS n_tup_ins,
             pg_stat_get_xact_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_xact_tuples_deleted(C.oid) AS n_tup_del,
-            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd
+            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_xact_tuples_warm_updated(C.oid) AS n_tup_warm_upd
     FROM pg_class C LEFT JOIN
          pg_index I ON C.oid = I.indrelid
          LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
diff --git a/src/backend/commands/constraint.c b/src/backend/commands/constraint.c
index e2544e5..330b661 100644
--- a/src/backend/commands/constraint.c
+++ b/src/backend/commands/constraint.c
@@ -40,6 +40,7 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	TriggerData *trigdata = castNode(TriggerData, fcinfo->context);
 	const char *funcname = "unique_key_recheck";
 	HeapTuple	new_row;
+	HeapTupleData heapTuple;
 	ItemPointerData tmptid;
 	Relation	indexRel;
 	IndexInfo  *indexInfo;
@@ -102,7 +103,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	 * removed.
 	 */
 	tmptid = new_row->t_self;
-	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL))
+	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL,
+				NULL, NULL, &heapTuple))
 	{
 		/*
 		 * All rows in the HOT chain are dead, so skip the check.
@@ -166,7 +168,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 		 */
 		index_insert(indexRel, values, isnull, &(new_row->t_self),
 					 trigdata->tg_relation, UNIQUE_CHECK_EXISTING,
-					 indexInfo);
+					 indexInfo,
+					 false);
 	}
 	else
 	{
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index 8c58808..1366398 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -2689,6 +2689,8 @@ CopyFrom(CopyState cstate)
 					if (resultRelInfo->ri_NumIndices > 0)
 						recheckIndexes = ExecInsertIndexTuples(slot,
 															&(tuple->t_self),
+															&(tuple->t_self),
+															NULL,
 															   estate,
 															   false,
 															   NULL,
@@ -2843,6 +2845,7 @@ CopyFromInsertBatch(CopyState cstate, EState *estate, CommandId mycid,
 			ExecStoreTuple(bufferedTuples[i], myslot, InvalidBuffer, false);
 			recheckIndexes =
 				ExecInsertIndexTuples(myslot, &(bufferedTuples[i]->t_self),
+									  &(bufferedTuples[i]->t_self), NULL,
 									  estate, false, NULL, NIL);
 			ExecARInsertTriggers(estate, resultRelInfo,
 								 bufferedTuples[i],
diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c
index 4861799..b62b0e9 100644
--- a/src/backend/commands/indexcmds.c
+++ b/src/backend/commands/indexcmds.c
@@ -694,7 +694,14 @@ DefineIndex(Oid relationId,
 	 * visible to other transactions before we start to build the index. That
 	 * will prevent them from making incompatible HOT updates.  The new index
 	 * will be marked not indisready and not indisvalid, so that no one else
-	 * tries to either insert into it or use it for queries.
+	 * tries to either insert into it or use it for queries. In addition,
+	 * WARM updates will be disallowed if an update modifies one of the
+	 * columns used by this new index. This is necessary to ensure that we
+	 * don't create WARM tuples which lack a corresponding entry in this
+	 * index. Note that during the second phase we will index only those
+	 * heap tuples whose root line pointer is not already in the index, so
+	 * it's important that all tuples in a given chain have the same value
+	 * for every indexed column (including the columns of this new index).
 	 *
 	 * We must commit our current transaction so that the index becomes
 	 * visible; then start another.  Note that all the data structures we just
@@ -742,7 +749,10 @@ DefineIndex(Oid relationId,
 	 * marked as "not-ready-for-inserts".  The index is consulted while
 	 * deciding HOT-safety though.  This arrangement ensures that no new HOT
 	 * chains can be created where the new tuple and the old tuple in the
-	 * chain have different index keys.
+	 * chain have different index keys. Also, the new index is consulted for
+	 * deciding whether a WARM update is possible; a WARM update is not done
+	 * if a column used by this index is being updated. This ensures that we
+	 * don't create WARM tuples which are not indexed by this index.
 	 *
 	 * We now take a new snapshot, and build the index using all tuples that
 	 * are visible in this snapshot.  We can be sure that any HOT updates to
@@ -777,7 +787,8 @@ DefineIndex(Oid relationId,
 	/*
 	 * Update the pg_index row to mark the index as ready for inserts. Once we
 	 * commit this transaction, any new transactions that open the table must
-	 * insert new entries into the index for insertions and non-HOT updates.
+	 * insert new entries into the index for insertions, non-HOT updates, and
+	 * WARM updates for which this index needs a new entry.
 	 */
 	index_set_state_flags(indexRelationId, INDEX_CREATE_SET_READY);
 
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index d418d56..dbec153 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -9925,6 +9925,7 @@ ATExecSetRelOptions(Relation rel, List *defList, AlterTableType operation,
 	Datum		datum;
 	bool		isnull;
 	Datum		newOptions;
+	Datum		std_options;
 	Datum		repl_val[Natts_pg_class];
 	bool		repl_null[Natts_pg_class];
 	bool		repl_repl[Natts_pg_class];
@@ -9969,7 +9970,7 @@ ATExecSetRelOptions(Relation rel, List *defList, AlterTableType operation,
 		case RELKIND_TOASTVALUE:
 		case RELKIND_MATVIEW:
 		case RELKIND_PARTITIONED_TABLE:
-			(void) heap_reloptions(rel->rd_rel->relkind, newOptions, true);
+			std_options = heap_reloptions(rel->rd_rel->relkind, newOptions, true);
 			break;
 		case RELKIND_VIEW:
 			(void) view_reloptions(newOptions, true);
@@ -9985,6 +9986,17 @@ ATExecSetRelOptions(Relation rel, List *defList, AlterTableType operation,
 			break;
 	}
 
+	if (rel->rd_rel->relkind == RELKIND_RELATION ||
+		rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
+	{
+		bool		new_enable_warm =
+			((StdRdOptions *) DatumGetPointer(std_options))->enable_warm;
+
+		if (RelationWarmUpdatesEnabled(rel) && !new_enable_warm)
+			ereport(ERROR,
+					(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					 errmsg("WARM updates cannot be disabled on the table \"%s\"",
+						 RelationGetRelationName(rel))));
+	}
+
 	/* Special-case validation of view options */
 	if (rel->rd_rel->relkind == RELKIND_VIEW)
 	{
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 5b43a66..781eeff 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -104,6 +104,39 @@
  */
 #define PREFETCH_SIZE			((BlockNumber) 32)
 
+/*
+ * Structure to track WARM chains that can be converted into HOT chains during
+ * this run.
+ *
+ * To reduce the space requirement, we use bitfields. But the way things are
+ * laid out, we still waste one byte per candidate chain.
+ */
+typedef struct LVWarmChain
+{
+	ItemPointerData	chain_tid;			/* root of the chain */
+
+	/*
+	 * 1 - if the chain contains only post-warm tuples
+	 * 0 - if the chain contains only pre-warm tuples
+	 */
+	uint8			is_postwarm_chain:2;
+
+	/* 1 - if this chain must remain a WARM chain */
+	uint8			keep_warm_chain:2;
+
+	/*
+	 * Number of CLEAR pointers to this root TID found so far - must never be
+	 * more than 2.
+	 */
+	uint8			num_clear_pointers:2;
+
+	/*
+	 * Number of WARM pointers to this root TID found so far - must never be
+	 * more than 1.
+	 */
+	uint8			num_warm_pointers:2;
+} LVWarmChain;
+
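To illustrate the space note in the comment above: the four 2-bit fields pack into a single byte, and the 2-byte alignment of the 6-byte TID then pads the struct to 8 bytes, which is the wasted byte. A standalone sketch with a simplified stand-in for ItemPointerData (DemoTid and DemoWarmChain are hypothetical names, not part of the patch; exact sizes are compiler- and ABI-dependent, though these values hold on common 64-bit ABIs):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for ItemPointerData: three uint16 fields, 6 bytes */
typedef struct DemoTid
{
	uint16_t	bi_hi;
	uint16_t	bi_lo;
	uint16_t	offnum;
} DemoTid;

/*
 * Mirrors LVWarmChain: the four 2-bit fields share one byte, but the
 * struct is padded to a multiple of the TID's 2-byte alignment, so one
 * byte per candidate chain is lost to padding.
 */
typedef struct DemoWarmChain
{
	DemoTid		chain_tid;
	uint8_t		is_postwarm_chain:2;
	uint8_t		keep_warm_chain:2;
	uint8_t		num_clear_pointers:2;
	uint8_t		num_warm_pointers:2;
} DemoWarmChain;
```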
 typedef struct LVRelStats
 {
 	/* hasindex = true means two-pass strategy; false means one-pass */
@@ -122,6 +155,14 @@ typedef struct LVRelStats
 	BlockNumber pages_removed;
 	double		tuples_deleted;
 	BlockNumber nonempty_pages; /* actually, last nonempty page + 1 */
+
+	/* List of candidate WARM chains that can be converted into HOT chains */
+	/* NB: this list is ordered by TID of the root pointers */
+	int				num_warm_chains;	/* current # of entries */
+	int				max_warm_chains;	/* # slots allocated in array */
+	LVWarmChain 	*warm_chains;		/* array of LVWarmChain */
+	double			num_non_convertible_warm_chains;
+
 	/* List of TIDs of tuples we intend to delete */
 	/* NB: this list is ordered by TID address */
 	int			num_dead_tuples;	/* current # of entries */
@@ -150,6 +191,7 @@ static void lazy_scan_heap(Relation onerel, int options,
 static void lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats);
 static bool lazy_check_needs_freeze(Buffer buf, bool *hastup);
 static void lazy_vacuum_index(Relation indrel,
+				  bool clear_warm,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats);
 static void lazy_cleanup_index(Relation indrel,
@@ -157,6 +199,10 @@ static void lazy_cleanup_index(Relation indrel,
 				   LVRelStats *vacrelstats);
 static int lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 				 int tupindex, LVRelStats *vacrelstats, Buffer *vmbuffer);
+static int lazy_warmclear_page(Relation onerel, BlockNumber blkno,
+				 Buffer buffer, int chainindex, LVRelStats *vacrelstats,
+				 Buffer *vmbuffer, bool check_all_visible);
+static void lazy_reset_warm_pointer_count(LVRelStats *vacrelstats);
 static bool should_attempt_truncation(LVRelStats *vacrelstats);
 static void lazy_truncate_heap(Relation onerel, LVRelStats *vacrelstats);
 static BlockNumber count_nondeletable_pages(Relation onerel,
@@ -164,8 +210,15 @@ static BlockNumber count_nondeletable_pages(Relation onerel,
 static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks);
 static void lazy_record_dead_tuple(LVRelStats *vacrelstats,
 					   ItemPointer itemptr);
-static bool lazy_tid_reaped(ItemPointer itemptr, void *state);
+static void lazy_record_warm_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr);
+static void lazy_record_clear_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr);
+static IndexBulkDeleteCallbackResult lazy_tid_reaped(ItemPointer itemptr, bool is_warm, void *state);
+static IndexBulkDeleteCallbackResult lazy_indexvac_phase1(ItemPointer itemptr, bool is_warm, void *state);
+static IndexBulkDeleteCallbackResult lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state);
 static int	vac_cmp_itemptr(const void *left, const void *right);
+static int vac_cmp_warm_chain(const void *left, const void *right);
 static bool heap_page_is_all_visible(Relation rel, Buffer buf,
 					 TransactionId *visibility_cutoff_xid, bool *all_frozen);
 
@@ -690,8 +743,10 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		 * If we are close to overrunning the available space for dead-tuple
 		 * TIDs, pause and do a cycle of vacuuming before we tackle this page.
 		 */
-		if ((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
-			vacrelstats->num_dead_tuples > 0)
+		if (((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
+			vacrelstats->num_dead_tuples > 0) ||
+			((vacrelstats->max_warm_chains - vacrelstats->num_warm_chains) < MaxHeapTuplesPerPage &&
+			 vacrelstats->num_warm_chains > 0))
 		{
 			const int	hvp_index[] = {
 				PROGRESS_VACUUM_PHASE,
@@ -721,6 +776,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			/* Remove index entries */
 			for (i = 0; i < nindexes; i++)
 				lazy_vacuum_index(Irel[i],
+								  (vacrelstats->num_warm_chains > 0),
 								  &indstats[i],
 								  vacrelstats);
 
@@ -743,6 +799,9 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			 * valid.
 			 */
 			vacrelstats->num_dead_tuples = 0;
+			vacrelstats->num_warm_chains = 0;
+			memset(vacrelstats->warm_chains, 0,
+					vacrelstats->max_warm_chains * sizeof (LVWarmChain));
 			vacrelstats->num_index_scans++;
 
 			/* Report that we are once again scanning the heap */
@@ -947,15 +1006,33 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 				continue;
 			}
 
+			ItemPointerSet(&(tuple.t_self), blkno, offnum);
+
 			/* Redirect items mustn't be touched */
 			if (ItemIdIsRedirected(itemid))
 			{
+				HeapCheckWarmChainStatus status = 0;
+
+				if (RelationWarmUpdatesEnabled(onerel))
+					status = heap_check_warm_chain(page, &tuple.t_self, false);
+				if (HCWC_IS_WARM_UPDATED(status))
+				{
+					/*
+					 * A chain which is either completely WARM or completely
+					 * CLEAR is a candidate for chain conversion. Remember the
+					 * chain and whether it is all-WARM or all-CLEAR.
+					 */
+					if (HCWC_IS_ALL_WARM(status))
+						lazy_record_warm_chain(vacrelstats, &tuple.t_self);
+					else if (HCWC_IS_ALL_CLEAR(status))
+						lazy_record_clear_chain(vacrelstats, &tuple.t_self);
+					else
+						vacrelstats->num_non_convertible_warm_chains++;
+				}
 				hastup = true;	/* this page won't be truncatable */
 				continue;
 			}
 
-			ItemPointerSet(&(tuple.t_self), blkno, offnum);
-
 			/*
 			 * DEAD item pointers are to be vacuumed normally; but we don't
 			 * count them in tups_vacuumed, else we'd be double-counting (at
@@ -975,6 +1052,29 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			tuple.t_len = ItemIdGetLength(itemid);
 			tuple.t_tableOid = RelationGetRelid(onerel);
 
+			if (!HeapTupleIsHeapOnly(&tuple))
+			{
+				HeapCheckWarmChainStatus status = 0;
+
+				if (RelationWarmUpdatesEnabled(onerel))
+					status = heap_check_warm_chain(page, &tuple.t_self, false);
+
+				if (HCWC_IS_WARM_UPDATED(status))
+				{
+					/*
+					 * A chain which is either completely WARM or completely
+					 * CLEAR is a candidate for chain conversion. Remember the
+					 * chain and whether it is all-WARM or all-CLEAR.
+					 */
+					if (HCWC_IS_ALL_WARM(status))
+						lazy_record_warm_chain(vacrelstats, &tuple.t_self);
+					else if (HCWC_IS_ALL_CLEAR(status))
+						lazy_record_clear_chain(vacrelstats, &tuple.t_self);
+					else
+						vacrelstats->num_non_convertible_warm_chains++;
+				}
+			}
+
 			tupgone = false;
 
 			switch (HeapTupleSatisfiesVacuum(&tuple, OldestXmin, buf))
@@ -1040,6 +1140,19 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 							break;
 						}
 
+						/*
+						 * If this tuple was ever WARM updated or is a WARM
+						 * tuple, there could be multiple index entries
+						 * pointing to the root of this chain. We can't do
+						 * index-only scans for such tuples without rechecking
+						 * the index keys, so mark the page as !all_visible.
+						 */
+						if (HeapTupleHeaderIsWarmUpdated(tuple.t_data))
+						{
+							all_visible = false;
+							break;
+						}
+
 						/* Track newest xmin on page. */
 						if (TransactionIdFollows(xmin, visibility_cutoff_xid))
 							visibility_cutoff_xid = xmin;
@@ -1282,7 +1395,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 
 	/* If any tuples need to be deleted, perform final vacuum cycle */
 	/* XXX put a threshold on min number of tuples here? */
-	if (vacrelstats->num_dead_tuples > 0)
+	if (vacrelstats->num_dead_tuples > 0 || vacrelstats->num_warm_chains > 0)
 	{
 		const int	hvp_index[] = {
 			PROGRESS_VACUUM_PHASE,
@@ -1300,6 +1413,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		/* Remove index entries */
 		for (i = 0; i < nindexes; i++)
 			lazy_vacuum_index(Irel[i],
+							  (vacrelstats->num_warm_chains > 0),
 							  &indstats[i],
 							  vacrelstats);
 
@@ -1371,7 +1485,10 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
  *
  *		This routine marks dead tuples as unused and compacts out free
  *		space on their pages.  Pages not having dead tuples recorded from
- *		lazy_scan_heap are not visited at all.
+ *		lazy_scan_heap are not visited at all. This routine also converts
+ *		candidate WARM chains to HOT chains by clearing WARM-related flags. The
+ *		candidate chains are determined by the preceding index scans, using
+ *		the data collected by the first heap scan.
  *
  * Note: the reason for doing this as a second pass is we cannot remove
  * the tuples until we've removed their index entries, and we want to
@@ -1380,7 +1497,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 static void
 lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 {
-	int			tupindex;
+	int			tupindex, chainindex;
 	int			npages;
 	PGRUsage	ru0;
 	Buffer		vmbuffer = InvalidBuffer;
@@ -1389,33 +1506,69 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 	npages = 0;
 
 	tupindex = 0;
-	while (tupindex < vacrelstats->num_dead_tuples)
+	chainindex = 0;
+	while (tupindex < vacrelstats->num_dead_tuples ||
+		   chainindex < vacrelstats->num_warm_chains)
 	{
-		BlockNumber tblk;
+		BlockNumber tblk, chainblk, vacblk;
 		Buffer		buf;
 		Page		page;
 		Size		freespace;
 
 		vacuum_delay_point();
 
-		tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
-		buf = ReadBufferExtended(onerel, MAIN_FORKNUM, tblk, RBM_NORMAL,
+		tblk = chainblk = InvalidBlockNumber;
+		if (chainindex < vacrelstats->num_warm_chains)
+			chainblk =
+				ItemPointerGetBlockNumber(&(vacrelstats->warm_chains[chainindex].chain_tid));
+
+		if (tupindex < vacrelstats->num_dead_tuples)
+			tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
+
+		if (tblk == InvalidBlockNumber)
+			vacblk = chainblk;
+		else if (chainblk == InvalidBlockNumber)
+			vacblk = tblk;
+		else
+			vacblk = Min(chainblk, tblk);
+
+		Assert(vacblk != InvalidBlockNumber);
+
+		buf = ReadBufferExtended(onerel, MAIN_FORKNUM, vacblk, RBM_NORMAL,
 								 vac_strategy);
-		if (!ConditionalLockBufferForCleanup(buf))
+
+		if (vacblk == chainblk)
+			LockBufferForCleanup(buf);
+		else if (!ConditionalLockBufferForCleanup(buf))
 		{
 			ReleaseBuffer(buf);
 			++tupindex;
 			continue;
 		}
-		tupindex = lazy_vacuum_page(onerel, tblk, buf, tupindex, vacrelstats,
-									&vmbuffer);
+
+		/*
+		 * Convert WARM chains on this page. This should be done before
+		 * vacuuming the page to ensure that we can correctly set visibility
+		 * bits after clearing WARM chains.
+		 *
+		 * If we are going to vacuum this page then don't check for
+		 * all-visibility just yet.
+		 */
+		if (vacblk == chainblk)
+			chainindex = lazy_warmclear_page(onerel, chainblk, buf, chainindex,
+					vacrelstats, &vmbuffer, chainblk != tblk);
+
+		if (vacblk == tblk)
+			tupindex = lazy_vacuum_page(onerel, tblk, buf, tupindex, vacrelstats,
+					&vmbuffer);
 
 		/* Now that we've compacted the page, record its available space */
 		page = BufferGetPage(buf);
 		freespace = PageGetHeapFreeSpace(page);
 
 		UnlockReleaseBuffer(buf);
-		RecordPageWithFreeSpace(onerel, tblk, freespace);
+		RecordPageWithFreeSpace(onerel, vacblk, freespace);
 		npages++;
 	}
 
@@ -1434,6 +1587,107 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 }
 
 /*
+ *	lazy_warmclear_page() -- clear various WARM bits on the tuples.
+ *
+ * Caller must hold pin and buffer cleanup lock on the buffer.
+ *
+ * chainindex is the index in vacrelstats->warm_chains of the first candidate
+ * chain for this page.  We assume the rest follow sequentially.
+ * The return value is the first chainindex after the chains of this page.
+ *
+ * If check_all_visible is set then we also check if the page has now become
+ * all visible and update visibility map.
+ */
+static int
+lazy_warmclear_page(Relation onerel, BlockNumber blkno, Buffer buffer,
+				 int chainindex, LVRelStats *vacrelstats, Buffer *vmbuffer,
+				 bool check_all_visible)
+{
+	Page			page = BufferGetPage(buffer);
+	OffsetNumber	cleared_offnums[MaxHeapTuplesPerPage];
+	int				num_cleared = 0;
+	TransactionId	visibility_cutoff_xid;
+	bool			all_frozen;
+
+	pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_WARMCLEARED, blkno);
+
+	START_CRIT_SECTION();
+
+	for (; chainindex < vacrelstats->num_warm_chains ; chainindex++)
+	{
+		BlockNumber tblk;
+		LVWarmChain	*chain;
+
+		chain = &vacrelstats->warm_chains[chainindex];
+
+		tblk = ItemPointerGetBlockNumber(&chain->chain_tid);
+		if (tblk != blkno)
+			break;				/* past end of tuples for this block */
+
+		/*
+		 * Since a heap page can have no more than MaxHeapTuplesPerPage
+		 * offnums and we process each offnum only once, MaxHeapTuplesPerPage
+		 * size array should be enough to hold all cleared tuples in this page.
+		 */
+		if (!chain->keep_warm_chain)
+			num_cleared += heap_clear_warm_chain(page, &chain->chain_tid,
+					cleared_offnums + num_cleared);
+	}
+
+	/*
+	 * Mark buffer dirty before we write WAL.
+	 */
+	MarkBufferDirty(buffer);
+
+	/* XLOG stuff */
+	if (RelationNeedsWAL(onerel))
+	{
+		XLogRecPtr	recptr;
+
+		recptr = log_heap_warmclear(onerel, buffer,
+								cleared_offnums, num_cleared);
+		PageSetLSN(page, recptr);
+	}
+
+	END_CRIT_SECTION();
+
+	/* If not checking for all-visibility then we're done */
+	if (!check_all_visible)
+		return chainindex;
+
+	/*
+	 * The following code should match the corresponding code in
+	 * lazy_vacuum_page().
+	 */
+	if (heap_page_is_all_visible(onerel, buffer, &visibility_cutoff_xid,
+								 &all_frozen))
+		PageSetAllVisible(page);
+
+	/*
+	 * All the changes to the heap page have been done. If the all-visible
+	 * flag is now set, also set the VM all-visible bit (and, if possible, the
+	 * all-frozen bit) unless this has already been done previously.
+	 */
+	if (PageIsAllVisible(page))
+	{
+		uint8		vm_status = visibilitymap_get_status(onerel, blkno, vmbuffer);
+		uint8		flags = 0;
+
+		/* Set the VM all-frozen bit to flag, if needed */
+		if ((vm_status & VISIBILITYMAP_ALL_VISIBLE) == 0)
+			flags |= VISIBILITYMAP_ALL_VISIBLE;
+		if ((vm_status & VISIBILITYMAP_ALL_FROZEN) == 0 && all_frozen)
+			flags |= VISIBILITYMAP_ALL_FROZEN;
+
+		Assert(BufferIsValid(*vmbuffer));
+		if (flags != 0)
+			visibilitymap_set(onerel, blkno, buffer, InvalidXLogRecPtr,
+							  *vmbuffer, visibility_cutoff_xid, flags);
+	}
+	return chainindex;
+}
+
+/*
  *	lazy_vacuum_page() -- free dead tuples on a page
  *					 and repair its fragmentation.
  *
@@ -1586,6 +1840,24 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
 	return false;
 }
 
+/*
+ * Reset counters tracking number of WARM and CLEAR pointers per candidate TID.
+ * These counters are maintained per index and cleared when the next index is
+ * picked up for cleanup.
+ *
+ * We don't touch the keep_warm_chain since once a chain is known to be
+ * non-convertible, we must remember that across all indexes.
+ */
+static void
+lazy_reset_warm_pointer_count(LVRelStats *vacrelstats)
+{
+	int i;
+	for (i = 0; i < vacrelstats->num_warm_chains; i++)
+	{
+		LVWarmChain *chain = &vacrelstats->warm_chains[i];
+		chain->num_clear_pointers = chain->num_warm_pointers = 0;
+	}
+}
 
 /*
  *	lazy_vacuum_index() -- vacuum one index relation.
@@ -1595,6 +1867,7 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
  */
 static void
 lazy_vacuum_index(Relation indrel,
+				  bool clear_warm,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats)
 {
@@ -1610,15 +1883,87 @@ lazy_vacuum_index(Relation indrel,
 	ivinfo.num_heap_tuples = vacrelstats->old_rel_tuples;
 	ivinfo.strategy = vac_strategy;
 
-	/* Do bulk deletion */
-	*stats = index_bulk_delete(&ivinfo, *stats,
-							   lazy_tid_reaped, (void *) vacrelstats);
+	/*
+	 * If requested, convert WARM chains into HOT chains.
+	 *
+	 * We must have already collected the candidate WARM chains, i.e., chains
+	 * that have either all tuples with the HEAP_WARM_TUPLE flag set, or none.
+	 *
+	 * This works in two phases. In the first phase, we do a complete index
+	 * scan and collect information about index pointers to the candidate
+	 * chains, but we don't do conversion. To be precise, we count the number
+	 * of WARM and CLEAR index pointers to each candidate chain and use that
+	 * knowledge to arrive at a decision and do the actual conversion during
+	 * the second phase (we kill known dead pointers though in this phase).
+	 *
+	 * In the second phase, for each candidate chain we check if we have seen a
+	 * WARM index pointer. For such chains, we kill the CLEAR pointer and
+	 * convert the WARM pointer into a CLEAR pointer. The heap tuples are
+	 * cleared of WARM flags in the second heap scan. If we did not find any
+	 * WARM pointer to a WARM chain, that means that the chain is reachable
+	 * from the CLEAR pointer (because say WARM update did not add a new entry
+	 * for this index). In that case, we do nothing.  There is a third case
+	 * where we find two CLEAR pointers to a candidate chain. This can happen
+	 * because of aborted vacuums. We don't handle that case yet, but it should
+	 * be possible to apply the same recheck logic and find which of the clear
+	 * pointers is redundant and should be removed.
+	 *
+	 * For CLEAR chains, we just kill the WARM pointer, if it exists, and keep
+	 * the CLEAR pointer.
+	 */
+	if (clear_warm)
+	{
+		/*
+		 * Before starting the index scan, reset the counters of WARM and CLEAR
+		 * pointers, probably carried forward from the previous index.
+		 */
+		lazy_reset_warm_pointer_count(vacrelstats);
+
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_indexvac_phase1, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to remove %d row versions, found "
+						"%.0f WARM pointers, %.0f CLEAR pointers, removed "
+						"%.0f WARM pointers, removed %.0f CLEAR pointers",
+						RelationGetRelationName(indrel),
+						vacrelstats->num_dead_tuples,
+						(*stats)->num_warm_pointers,
+						(*stats)->num_clear_pointers,
+						(*stats)->warm_pointers_removed,
+						(*stats)->clear_pointers_removed)));
+
+		(*stats)->num_warm_pointers = 0;
+		(*stats)->num_clear_pointers = 0;
+		(*stats)->warm_pointers_removed = 0;
+		(*stats)->clear_pointers_removed = 0;
+		(*stats)->pointers_cleared = 0;
+
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_indexvac_phase2, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to convert WARM pointers, found "
+						"%.0f WARM pointers, %.0f CLEAR pointers, removed "
+						"%.0f WARM pointers, removed %.0f CLEAR pointers, "
+						"cleared %.0f WARM pointers",
+						RelationGetRelationName(indrel),
+						(*stats)->num_warm_pointers,
+						(*stats)->num_clear_pointers,
+						(*stats)->warm_pointers_removed,
+						(*stats)->clear_pointers_removed,
+						(*stats)->pointers_cleared)));
+	}
+	else
+	{
+		/* Do bulk deletion */
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_tid_reaped, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to remove %d row versions",
+						RelationGetRelationName(indrel),
+						vacrelstats->num_dead_tuples),
+				 errdetail("%s.", pg_rusage_show(&ru0))));
+	}
 
-	ereport(elevel,
-			(errmsg("scanned index \"%s\" to remove %d row versions",
-					RelationGetRelationName(indrel),
-					vacrelstats->num_dead_tuples),
-			 errdetail("%s.", pg_rusage_show(&ru0))));
 }
 
 /*
@@ -1992,9 +2337,11 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 
 	if (vacrelstats->hasindex)
 	{
-		maxtuples = (vac_work_mem * 1024L) / sizeof(ItemPointerData);
+		maxtuples = (vac_work_mem * 1024L) / (sizeof(ItemPointerData) +
+				sizeof(LVWarmChain));
 		maxtuples = Min(maxtuples, INT_MAX);
-		maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));
+		maxtuples = Min(maxtuples, MaxAllocSize / (sizeof(ItemPointerData) +
+					sizeof(LVWarmChain)));
 
 		/* curious coding here to ensure the multiplication can't overflow */
 		if ((BlockNumber) (maxtuples / LAZY_ALLOC_TUPLES) > relblocks)
@@ -2012,6 +2359,57 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 	vacrelstats->max_dead_tuples = (int) maxtuples;
 	vacrelstats->dead_tuples = (ItemPointer)
 		palloc(maxtuples * sizeof(ItemPointerData));
+
+	/*
+	 * XXX Cheat for now and allocate the same size array for tracking warm
+	 * chains.  maxtuples has already been adjusted above to ensure we don't
+	 * exceed vac_work_mem.
+	 */
+	vacrelstats->num_warm_chains = 0;
+	vacrelstats->max_warm_chains = (int) maxtuples;
+	vacrelstats->warm_chains = (LVWarmChain *)
+		palloc0(maxtuples * sizeof(LVWarmChain));
+}
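The budget arithmetic above can be checked in isolation: each tracked entry now costs one ItemPointerData slot plus one LVWarmChain slot, so the vac_work_mem budget (in kilobytes) is divided by the combined entry size before clamping. A hedged sketch (demo_max_tuples is a hypothetical name; the 6- and 8-byte sizes are illustrative, matching typical ABIs, not definitions from the patch):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the lazy_space_alloc budget: vac_work_mem is in kilobytes,
 * and each tracked tuple consumes one dead-tuple TID slot plus one
 * warm-chain slot, allocated in parallel arrays.
 */
long
demo_max_tuples(long vac_work_mem_kb, long itemptr_size, long warm_chain_size)
{
	long		maxtuples;

	maxtuples = (vac_work_mem_kb * 1024L) / (itemptr_size + warm_chain_size);
	if (maxtuples > INT32_MAX)
		maxtuples = INT32_MAX;	/* mirrors the INT_MAX clamp in the patch */
	return maxtuples;
}
```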
+
+/*
+ * lazy_record_clear_chain - remember one CLEAR chain
+ */
+static void
+lazy_record_clear_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr)
+{
+	/*
+	 * The array shouldn't overflow under normal behavior, but perhaps it
+	 * could if we are given a really small maintenance_work_mem. In that
+	 * case, just forget the last few tuples (we'll get 'em next time).
+	 */
+	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	{
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 0;
+		vacrelstats->num_warm_chains++;
+	}
+}
+
+/*
+ * lazy_record_warm_chain - remember one WARM chain
+ */
+static void
+lazy_record_warm_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr)
+{
+	/*
+	 * The array shouldn't overflow under normal behavior, but perhaps it
+	 * could if we are given a really small maintenance_work_mem. In that
+	 * case, just forget the last few tuples (we'll get 'em next time).
+	 */
+	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	{
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 1;
+		vacrelstats->num_warm_chains++;
+	}
 }
 
 /*
@@ -2042,8 +2440,8 @@ lazy_record_dead_tuple(LVRelStats *vacrelstats,
  *
  *		Assumes dead_tuples array is in sorted order.
  */
-static bool
-lazy_tid_reaped(ItemPointer itemptr, void *state)
+static IndexBulkDeleteCallbackResult
+lazy_tid_reaped(ItemPointer itemptr, bool is_warm, void *state)
 {
 	LVRelStats *vacrelstats = (LVRelStats *) state;
 	ItemPointer res;
@@ -2054,7 +2452,193 @@ lazy_tid_reaped(ItemPointer itemptr, void *state)
 								sizeof(ItemPointerData),
 								vac_cmp_itemptr);
 
-	return (res != NULL);
+	return (res != NULL) ? IBDCR_DELETE : IBDCR_KEEP;
+}
+
+/*
+ *	lazy_indexvac_phase1() -- run first pass of index vacuum
+ *
+ *		This has the right signature to be an IndexBulkDeleteCallback.
+ */
+static IndexBulkDeleteCallbackResult
+lazy_indexvac_phase1(ItemPointer itemptr, bool is_warm, void *state)
+{
+	LVRelStats		*vacrelstats = (LVRelStats *) state;
+	ItemPointer		res;
+	LVWarmChain	*chain;
+
+	res = (ItemPointer) bsearch((void *) itemptr,
+								(void *) vacrelstats->dead_tuples,
+								vacrelstats->num_dead_tuples,
+								sizeof(ItemPointerData),
+								vac_cmp_itemptr);
+
+	if (res != NULL)
+		return IBDCR_DELETE;
+
+	chain = (LVWarmChain *) bsearch((void *) itemptr,
+								(void *) vacrelstats->warm_chains,
+								vacrelstats->num_warm_chains,
+								sizeof(LVWarmChain),
+								vac_cmp_warm_chain);
+	if (chain != NULL)
+	{
+		if (is_warm)
+			chain->num_warm_pointers++;
+		else
+			chain->num_clear_pointers++;
+	}
+	return IBDCR_KEEP;
+}
+
+/*
+ *	lazy_indexvac_phase2() -- run second pass of index vacuum
+ *
+ *		This has the right signature to be an IndexBulkDeleteCallback.
+ */
+static IndexBulkDeleteCallbackResult
+lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state)
+{
+	LVRelStats		*vacrelstats = (LVRelStats *) state;
+	LVWarmChain	*chain;
+
+	chain = (LVWarmChain *) bsearch((void *) itemptr,
+								(void *) vacrelstats->warm_chains,
+								vacrelstats->num_warm_chains,
+								sizeof(LVWarmChain),
+								vac_cmp_warm_chain);
+
+	if (chain != NULL && (chain->keep_warm_chain != 1))
+	{
+		/*
+		 * At no point can there be more than one WARM pointer to any
+		 * chain, nor more than two CLEAR pointers.
+		 */
+		Assert(chain->num_warm_pointers <= 1);
+		Assert(chain->num_clear_pointers <= 2);
+
+		if (chain->is_postwarm_chain == 1)
+		{
+			if (is_warm)
+			{
+				/*
+				 * A WARM pointer, pointing to a WARM chain.
+				 *
+				 * Clear the warm pointer (and delete the CLEAR pointer). We
+				 * may have already seen the CLEAR pointer in the scan and
+				 * deleted that or we may see it later in the scan. It doesn't
+				 * matter if we fail at any point because we won't clear up
+				 * WARM bits on the heap tuples until we have dealt with the
+				 * index pointers cleanly.
+				 */
+				return IBDCR_CLEAR_WARM;
+			}
+			else
+			{
+				/*
+				 * CLEAR pointer to a WARM chain.
+				 */
+				if (chain->num_warm_pointers > 0)
+				{
+					/*
+					 * If there exists a WARM pointer to the chain, we can
+					 * delete the CLEAR pointer and clear the WARM bits on the
+					 * heap tuples.
+					 */
+					return IBDCR_DELETE;
+				}
+				else if (chain->num_clear_pointers == 1)
+				{
+					/*
+					 * If this is the only pointer to a WARM chain, we must
+					 * keep the CLEAR pointer.
+					 *
+					 * The presence of a WARM chain indicates that the WARM
+					 * update must have committed. But this index was probably
+					 * not updated during that update, and hence it contains
+					 * just the one original CLEAR pointer to the chain.
+					 * We should be able to clear the WARM bits on heap tuples
+					 * unless we later find another index which prevents the
+					 * cleanup.
+					 */
+					return IBDCR_KEEP;
+				}
+			}
+		}
+		else
+		{
+			/*
+			 * This is a CLEAR chain.
+			 */
+			if (is_warm)
+			{
+				/*
+				 * A WARM pointer to a CLEAR chain.
+				 *
+				 * This can happen when a WARM update is aborted. Later the HOT
+				 * chain is pruned, leaving behind only CLEAR tuples in the
+				 * chain. But the WARM pointer inserted into the index
+				 * remains, and it must be deleted before we clear WARM bits
+				 * from the heap tuples.
+				 */
+				return IBDCR_DELETE;
+			}
+
+			/*
+			 * CLEAR pointer to a CLEAR chain.
+			 *
+			 * If this is the only surviving CLEAR pointer, keep it and clear
+			 * the WARM bits from the heap tuples.
+			 */
+			if (chain->num_clear_pointers == 1)
+				return IBDCR_KEEP;
+
+			/*
+			 * If there is more than one CLEAR pointer to this chain, we could
+			 * apply the recheck logic, kill the redundant CLEAR pointers and
+			 * convert the chain. But that's not yet done.
+			 */
+		}
+
+		/*
+		 * For everything else, we must keep the WARM bits and also keep the
+		 * index pointers.
+		 */
+		chain->keep_warm_chain = 1;
+		return IBDCR_KEEP;
+	}
+	return IBDCR_KEEP;
+}
+
+/*
+ * Comparator routine for use with qsort() and bsearch(). Similar to
+ * vac_cmp_itemptr, but the right-hand argument is an LVWarmChain struct
+ * pointer.
+ */
+static int
+vac_cmp_warm_chain(const void *left, const void *right)
+{
+	BlockNumber lblk,
+				rblk;
+	OffsetNumber loff,
+				roff;
+
+	lblk = ItemPointerGetBlockNumber((ItemPointer) left);
+	rblk = ItemPointerGetBlockNumber(&((LVWarmChain *) right)->chain_tid);
+
+	if (lblk < rblk)
+		return -1;
+	if (lblk > rblk)
+		return 1;
+
+	loff = ItemPointerGetOffsetNumber((ItemPointer) left);
+	roff = ItemPointerGetOffsetNumber(&((LVWarmChain *) right)->chain_tid);
+
+	if (loff < roff)
+		return -1;
+	if (loff > roff)
+		return 1;
+
+	return 0;
 }
 
 /*
@@ -2170,6 +2754,18 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
 						break;
 					}
 
+					/*
+					 * If this or any other tuple in the chain was ever WARM
+					 * updated, there could be multiple index entries pointing
+					 * to the root of this chain. We can't do index-only scans
+					 * for such tuples without rechecking the index keys. So
+					 * mark the page as !all_visible.
+					 */
+					if (HeapTupleHeaderIsWarmUpdated(tuple.t_data))
+					{
+						all_visible = false;
+					}
+
 					/* Track newest xmin on page. */
 					if (TransactionIdFollows(xmin, *visibility_cutoff_xid))
 						*visibility_cutoff_xid = xmin;
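To make the two-pass index vacuum above easier to follow, here is a minimal standalone sketch of the phase-2 decision table from lazy_indexvac_phase2. All names here (WarmChainInfo, warm_phase2_decision) are illustrative stand-ins rather than the patch's actual structures, and the keep_warm_chain side effect is omitted:

```c
#include <assert.h>
#include <stdbool.h>

/* Possible fates of one index pointer, mirroring IndexBulkDeleteCallbackResult. */
typedef enum { IBDCR_KEEP, IBDCR_DELETE, IBDCR_CLEAR_WARM } PointerFate;

typedef struct
{
	bool is_postwarm_chain;		/* chain contains a committed WARM tuple */
	int  num_warm_pointers;		/* WARM index pointers counted in phase 1 */
	int  num_clear_pointers;	/* CLEAR index pointers counted in phase 1 */
} WarmChainInfo;

/*
 * Decide the fate of one index pointer during the second index-vacuum pass.
 * This mirrors the branch structure of lazy_indexvac_phase2: on a WARM chain,
 * the WARM pointer is converted to CLEAR and any redundant CLEAR pointer is
 * deleted; on a CLEAR chain, an orphan WARM pointer (from an aborted WARM
 * update) is deleted.
 */
static PointerFate
warm_phase2_decision(const WarmChainInfo *chain, bool is_warm_pointer)
{
	if (chain->is_postwarm_chain)
	{
		if (is_warm_pointer)
			return IBDCR_CLEAR_WARM;	/* convert WARM pointer to CLEAR */
		if (chain->num_warm_pointers > 0)
			return IBDCR_DELETE;		/* redundant CLEAR pointer */
		if (chain->num_clear_pointers == 1)
			return IBDCR_KEEP;			/* lone CLEAR pointer survives */
	}
	else
	{
		if (is_warm_pointer)
			return IBDCR_DELETE;		/* orphan pointer, aborted WARM update */
	}
	return IBDCR_KEEP;					/* everything else: leave chain alone */
}
```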
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index c3f1873..2143978 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -270,6 +270,8 @@ ExecCloseIndices(ResultRelInfo *resultRelInfo)
 List *
 ExecInsertIndexTuples(TupleTableSlot *slot,
 					  ItemPointer tupleid,
+					  ItemPointer root_tid,
+					  Bitmapset *modified_attrs,
 					  EState *estate,
 					  bool noDupErr,
 					  bool *specConflict,
@@ -324,6 +326,17 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 		if (!indexInfo->ii_ReadyForInserts)
 			continue;
 
+		/*
+		 * If modified_attrs is set, we only insert index entries for those
+		 * indexes whose columns have changed. All other indexes can use their
+		 * existing index pointers to look up the new tuple.
+		 */
+		if (modified_attrs)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
 		/* Check for partial index */
 		if (indexInfo->ii_Predicate != NIL)
 		{
@@ -387,10 +400,11 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 			index_insert(indexRelation, /* index relation */
 						 values,	/* array of index Datums */
 						 isnull,	/* null flags */
-						 tupleid,		/* tid of heap tuple */
+						 root_tid,		/* tid of heap or root tuple */
 						 heapRelation,	/* heap relation */
 						 checkUnique,	/* type of uniqueness check to do */
-						 indexInfo);	/* index AM may need this */
+						 indexInfo,	/* index AM may need this */
+						 (modified_attrs != NULL));	/* is it a WARM update? */
 
 		/*
 		 * If the index has an associated exclusion constraint, check that.
@@ -787,6 +801,9 @@ retry:
 		{
 			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
 				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
+			else
+				ItemPointerCopy(&tup->t_self, &ctid_wait);
+
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
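The modified_attrs filter added to ExecInsertIndexTuples above can be sketched with a toy one-word bitmap standing in for PostgreSQL's Bitmapset; mask_overlap plays the role of bms_overlap, and all names here are illustrative assumptions, not the patch's API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy attribute set: bit i is set when attribute i is a member. */
typedef uint64_t AttrMask;

/* Stand-in for bms_overlap(): do the two sets share any member? */
static bool
mask_overlap(AttrMask a, AttrMask b)
{
	return (a & b) != 0;
}

/*
 * During a WARM update, an index receives a new entry only when one of its
 * columns was modified; all other indexes keep addressing the new tuple
 * through their existing pointers to the chain's root. A regular (non-HOT,
 * non-WARM) update inserts into every index.
 */
static bool
index_needs_new_entry(AttrMask modified_attrs, AttrMask index_attrs,
					  bool warm_update)
{
	if (!warm_update)
		return true;
	return mask_overlap(modified_attrs, index_attrs);
}
```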
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index f20d728..747e4ce 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -399,6 +399,8 @@ ExecSimpleRelationInsert(EState *estate, TupleTableSlot *slot)
 
 		if (resultRelInfo->ri_NumIndices > 0)
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &(tuple->t_self),
+												   NULL,
 												   estate, false, NULL,
 												   NIL);
 
@@ -445,6 +447,8 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 	if (!skip_tuple)
 	{
 		List	   *recheckIndexes = NIL;
+		bool		warm_update;
+		Bitmapset  *modified_attrs;
 
 		/* Check the constraints of the tuple */
 		if (rel->rd_att->constr)
@@ -455,13 +459,35 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 
 		/* OK, update the tuple and index entries for it */
 		simple_heap_update(rel, &searchslot->tts_tuple->t_self,
-						   slot->tts_tuple);
+						   slot->tts_tuple, &modified_attrs, &warm_update);
 
 		if (resultRelInfo->ri_NumIndices > 0 &&
-			!HeapTupleIsHeapOnly(slot->tts_tuple))
+			(!HeapTupleIsHeapOnly(slot->tts_tuple) || warm_update))
+		{
+			ItemPointerData root_tid;
+
+			/*
+			 * If we did a WARM update then we must index the tuple using its
+			 * root line pointer and not the tuple TID itself.
+			 */
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self,
+						&root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
+
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL,
 												   NIL);
+		}
 
 		/* AFTER ROW UPDATE Triggers */
 		ExecARUpdateTriggers(estate, resultRelInfo,
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index d240f9c..24df913 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -39,6 +39,7 @@
 
 #include "access/relscan.h"
 #include "access/transam.h"
+#include "access/valid.h"
 #include "executor/execdebug.h"
 #include "executor/nodeBitmapHeapscan.h"
 #include "pgstat.h"
@@ -395,11 +396,27 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 			OffsetNumber offnum = tbmres->offsets[curslot];
 			ItemPointerData tid;
 			HeapTupleData heapTuple;
+			bool recheck = false;
 
 			ItemPointerSet(&tid, page, offnum);
 			if (heap_hot_search_buffer(&tid, scan->rs_rd, buffer, snapshot,
-									   &heapTuple, NULL, true))
-				scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+									   &heapTuple, NULL, true, &recheck))
+			{
+				bool valid = true;
+
+				if (scan->rs_key)
+					HeapKeyTest(&heapTuple, RelationGetDescr(scan->rs_rd),
+							scan->rs_nkeys, scan->rs_key, valid);
+				if (valid)
+					scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+
+				/*
+				 * If the heap tuple needs a recheck because of a WARM update,
+				 * it's a lossy case.
+				 */
+				if (recheck)
+					tbmres->recheck = true;
+			}
 		}
 	}
 	else
diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c
index 5afd02e..6e48c2e 100644
--- a/src/backend/executor/nodeIndexscan.c
+++ b/src/backend/executor/nodeIndexscan.c
@@ -142,8 +142,8 @@ IndexNext(IndexScanState *node)
 					   false);	/* don't pfree */
 
 		/*
-		 * If the index was lossy, we have to recheck the index quals using
-		 * the fetched tuple.
+		 * If the index was lossy or the tuple was WARM, we have to recheck
+		 * the index quals using the fetched tuple.
 		 */
 		if (scandesc->xs_recheck)
 		{
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index 0b524e0..2ad4a2c 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -513,6 +513,7 @@ ExecInsert(ModifyTableState *mtstate,
 
 			/* insert index entries for tuple */
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												 &(tuple->t_self), NULL,
 												 estate, true, &specConflict,
 												   arbiterIndexes);
 
@@ -559,6 +560,7 @@ ExecInsert(ModifyTableState *mtstate,
 			/* insert index entries for tuple */
 			if (resultRelInfo->ri_NumIndices > 0)
 				recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+													   &(tuple->t_self), NULL,
 													   estate, false, NULL,
 													   arbiterIndexes);
 		}
@@ -892,6 +894,9 @@ ExecUpdate(ItemPointer tupleid,
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
 	List	   *recheckIndexes = NIL;
+	Bitmapset  *modified_attrs = NULL;
+	ItemPointerData	root_tid;
+	bool		warm_update;
 
 	/*
 	 * abort the operation if not running transactions
@@ -1008,7 +1013,7 @@ lreplace:;
 							 estate->es_output_cid,
 							 estate->es_crosscheck_snapshot,
 							 true /* wait for commit */ ,
-							 &hufd, &lockmode);
+							 &hufd, &lockmode, &modified_attrs, &warm_update);
 		switch (result)
 		{
 			case HeapTupleSelfUpdated:
@@ -1095,10 +1100,28 @@ lreplace:;
 		 * the t_self field.
 		 *
 		 * If it's a HOT update, we mustn't insert new index entries.
+		 *
+		 * If it's a WARM update, then we must insert new entries with TID
+		 * pointing to the root of the WARM chain.
 		 */
-		if (resultRelInfo->ri_NumIndices > 0 && !HeapTupleIsHeapOnly(tuple))
+		if (resultRelInfo->ri_NumIndices > 0 &&
+			(!HeapTupleIsHeapOnly(tuple) || warm_update))
+		{
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self, &root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL, NIL);
+		}
 	}
 
 	if (canSetTag)
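The root-TID selection in ExecUpdate above reduces to a small rule: a WARM update indexes the root line pointer of the HOT/WARM chain (same block, chain-root offset), while a regular update indexes the new tuple directly. A hedged sketch with simplified types (Tid and index_target_tid are illustrative stand-ins, not PostgreSQL's ItemPointerData API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Minimal TID stand-in: block number plus line-pointer offset. */
typedef struct
{
	uint32_t block;
	uint16_t offset;
} Tid;

/*
 * Pick the TID to store in new index entries after an update. For a WARM
 * update we keep every index pointer addressing the chain's root line
 * pointer; otherwise the new tuple's own TID is indexed.
 */
static Tid
index_target_tid(Tid new_tuple_tid, uint16_t root_offset, bool warm_update)
{
	Tid			t = new_tuple_tid;

	if (warm_update)
		t.offset = root_offset; /* same block, chain-root line pointer */
	return t;
}
```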
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index 56a8bf2..52fe4ba 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -1888,7 +1888,7 @@ pgstat_count_heap_insert(Relation rel, PgStat_Counter n)
  * pgstat_count_heap_update - count a tuple update
  */
 void
-pgstat_count_heap_update(Relation rel, bool hot)
+pgstat_count_heap_update(Relation rel, bool hot, bool warm)
 {
 	PgStat_TableStatus *pgstat_info = rel->pgstat_info;
 
@@ -1906,6 +1906,8 @@ pgstat_count_heap_update(Relation rel, bool hot)
 		/* t_tuples_hot_updated is nontransactional, so just advance it */
 		if (hot)
 			pgstat_info->t_counts.t_tuples_hot_updated++;
+		else if (warm)
+			pgstat_info->t_counts.t_tuples_warm_updated++;
 	}
 }
 
@@ -4521,6 +4523,7 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->tuples_updated = 0;
 		result->tuples_deleted = 0;
 		result->tuples_hot_updated = 0;
+		result->tuples_warm_updated = 0;
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
 		result->changes_since_analyze = 0;
@@ -5630,6 +5633,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated = tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted = tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated = tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated = tabmsg->t_counts.t_tuples_warm_updated;
 			tabentry->n_live_tuples = tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples = tabmsg->t_counts.t_delta_dead_tuples;
 			tabentry->changes_since_analyze = tabmsg->t_counts.t_changed_tuples;
@@ -5657,6 +5661,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated += tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted += tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated += tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated += tabmsg->t_counts.t_tuples_warm_updated;
 			/* If table was truncated, first reset the live/dead counters */
 			if (tabmsg->t_counts.t_truncated)
 			{
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index 5c13d26..7a9b48a 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -347,7 +347,7 @@ DecodeStandbyOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 static void
 DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 {
-	uint8		info = XLogRecGetInfo(buf->record) & XLOG_HEAP_OPMASK;
+	uint8		info = XLogRecGetInfo(buf->record) & XLOG_HEAP2_OPMASK;
 	TransactionId xid = XLogRecGetXid(buf->record);
 	SnapBuild  *builder = ctx->snapshot_builder;
 
@@ -359,10 +359,6 @@ DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 
 	switch (info)
 	{
-		case XLOG_HEAP2_MULTI_INSERT:
-			if (SnapBuildProcessChange(builder, xid, buf->origptr))
-				DecodeMultiInsert(ctx, buf);
-			break;
 		case XLOG_HEAP2_NEW_CID:
 			{
 				xl_heap_new_cid *xlrec;
@@ -390,6 +386,7 @@ DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 		case XLOG_HEAP2_CLEANUP_INFO:
 		case XLOG_HEAP2_VISIBLE:
 		case XLOG_HEAP2_LOCK_UPDATED:
+		case XLOG_HEAP2_WARMCLEAR:
 			break;
 		default:
 			elog(ERROR, "unexpected RM_HEAP2_ID record type: %u", info);
@@ -418,6 +415,10 @@ DecodeHeapOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 			if (SnapBuildProcessChange(builder, xid, buf->origptr))
 				DecodeInsert(ctx, buf);
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			if (SnapBuildProcessChange(builder, xid, buf->origptr))
+				DecodeMultiInsert(ctx, buf);
+			break;
 
 			/*
 			 * Treat HOT update as normal updates. There is no useful
@@ -809,7 +810,7 @@ DecodeDelete(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 }
 
 /*
- * Decode XLOG_HEAP2_MULTI_INSERT_insert record into multiple tuplebufs.
+ * Decode XLOG_HEAP_MULTI_INSERT record into multiple tuplebufs.
  *
  * Currently MULTI_INSERT will always contain the full tuples.
  */
diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index fdf045a..8d23e92 100644
--- a/src/backend/storage/page/bufpage.c
+++ b/src/backend/storage/page/bufpage.c
@@ -1151,6 +1151,29 @@ PageIndexTupleOverwrite(Page page, OffsetNumber offnum,
 	return true;
 }
 
+/*
+ * PageIndexClearWarmTuples
+ *
+ * Clear the given WARM pointers by resetting the flags stored in the TID
+ * field. We assume there is nothing else in the TID flags other than the WARM
+ * information and clearing all flag bits is safe. If that changes, we must
+ * change this routine as well.
+ */
+void
+PageIndexClearWarmTuples(Page page, OffsetNumber *clearitemnos,
+						 uint16 nclearitems)
+{
+	int			i;
+	ItemId		itemid;
+	IndexTuple	itup;
+
+	for (i = 0; i < nclearitems; i++)
+	{
+		itemid = PageGetItemId(page, clearitemnos[i]);
+		itup = (IndexTuple) PageGetItem(page, itemid);
+		ItemPointerClearFlags(&itup->t_tid);
+	}
+}
 
 /*
  * Set checksum for a page in shared buffers.
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index e0cae1b..227a87d 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -147,6 +147,22 @@ pg_stat_get_tuples_hot_updated(PG_FUNCTION_ARGS)
 
 
 Datum
+pg_stat_get_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_StatTabEntry *tabentry;
+
+	if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+
+Datum
 pg_stat_get_live_tuples(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
@@ -1674,6 +1690,21 @@ pg_stat_get_xact_tuples_hot_updated(PG_FUNCTION_ARGS)
 }
 
 Datum
+pg_stat_get_xact_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_TableStatus *tabentry;
+
+	if ((tabentry = find_tabstat_entry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->t_counts.t_tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+Datum
 pg_stat_get_xact_blocks_fetched(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index bc22098..7bf6c38 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -2339,6 +2339,7 @@ RelationDestroyRelation(Relation relation, bool remember_tupdesc)
 	list_free_deep(relation->rd_fkeylist);
 	list_free(relation->rd_indexlist);
 	bms_free(relation->rd_indexattr);
+	bms_free(relation->rd_exprindexattr);
 	bms_free(relation->rd_keyattr);
 	bms_free(relation->rd_pkattr);
 	bms_free(relation->rd_idattr);
@@ -4353,6 +4354,13 @@ RelationGetIndexList(Relation relation)
 		return list_copy(relation->rd_indexlist);
 
 	/*
+	 * If the index list was invalidated, we must also invalidate the index
+	 * attribute list (which should automatically invalidate other attribute
+	 * bitmaps such as the primary key and replica identity).
+	 */
+	relation->rd_indexattr = NULL;
+
+	/*
 	 * We build the list we intend to return (in the caller's context) while
 	 * doing the scan.  After successfully completing the scan, we copy that
 	 * list into the relcache entry.  This avoids cache-context memory leakage
@@ -4836,15 +4844,20 @@ Bitmapset *
 RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 {
 	Bitmapset  *indexattrs;		/* indexed columns */
+	Bitmapset  *exprindexattrs;	/* indexed columns in expression/predicate
+									 indexes */
 	Bitmapset  *uindexattrs;	/* columns in unique indexes */
 	Bitmapset  *pkindexattrs;	/* columns in the primary index */
 	Bitmapset  *idindexattrs;	/* columns in the replica identity */
+	Bitmapset  *indxnotreadyattrs;	/* columns in not ready indexes */
 	List	   *indexoidlist;
 	List	   *newindexoidlist;
+	List	   *indexattrsList;
 	Oid			relpkindex;
 	Oid			relreplindex;
 	ListCell   *l;
 	MemoryContext oldcxt;
+	bool		supportswarm = true;/* True if the table can be WARM updated */
 
 	/* Quick exit if we already computed the result. */
 	if (relation->rd_indexattr != NULL)
@@ -4859,6 +4872,10 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				return bms_copy(relation->rd_pkattr);
 			case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 				return bms_copy(relation->rd_idattr);
+			case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+				return bms_copy(relation->rd_exprindexattr);
+			case INDEX_ATTR_BITMAP_NOTREADY:
+				return bms_copy(relation->rd_indxnotreadyattr);
 			default:
 				elog(ERROR, "unknown attrKind %u", attrKind);
 		}
@@ -4899,9 +4916,12 @@ restart:
 	 * won't be returned at all by RelationGetIndexList.
 	 */
 	indexattrs = NULL;
+	exprindexattrs = NULL;
 	uindexattrs = NULL;
 	pkindexattrs = NULL;
 	idindexattrs = NULL;
+	indxnotreadyattrs = NULL;
+	indexattrsList = NIL;
 	foreach(l, indexoidlist)
 	{
 		Oid			indexOid = lfirst_oid(l);
@@ -4911,6 +4931,7 @@ restart:
 		bool		isKey;		/* candidate key */
 		bool		isPK;		/* primary key */
 		bool		isIDKey;	/* replica identity index */
+		Bitmapset	*thisindexattrs = NULL;
 
 		indexDesc = index_open(indexOid, AccessShareLock);
 
@@ -4935,9 +4956,16 @@ restart:
 
 			if (attrnum != 0)
 			{
+				thisindexattrs = bms_add_member(thisindexattrs,
+							   attrnum - FirstLowInvalidHeapAttributeNumber);
+
 				indexattrs = bms_add_member(indexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
 
+				if (!indexInfo->ii_ReadyForInserts)
+					indxnotreadyattrs = bms_add_member(indxnotreadyattrs,
+							   attrnum - FirstLowInvalidHeapAttributeNumber);
+
 				if (isKey)
 					uindexattrs = bms_add_member(uindexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
@@ -4953,10 +4981,31 @@ restart:
 		}
 
 		/* Collect all attributes used in expressions, too */
-		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &exprindexattrs);
 
 		/* Collect all attributes in the index predicate, too */
-		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &exprindexattrs);
+
+		/*
+		 * indexattrs should include attributes referenced in index expressions
+		 * and predicates too.
+		 */
+		indexattrs = bms_add_members(indexattrs, exprindexattrs);
+		thisindexattrs = bms_add_members(thisindexattrs, exprindexattrs);
+
+		if (!indexInfo->ii_ReadyForInserts)
+			indxnotreadyattrs = bms_add_members(indxnotreadyattrs,
+					exprindexattrs);
+
+		/*
+		 * Check whether the index has an amrecheck method defined. If it
+		 * does not, the index does not support WARM updates, so completely
+		 * disable WARM updates on such tables.
+		 */
+		if (!indexDesc->rd_amroutine->amrecheck)
+			supportswarm = false;
+
+		indexattrsList = lappend(indexattrsList, thisindexattrs);
 
 		index_close(indexDesc, AccessShareLock);
 	}
@@ -4985,19 +5034,28 @@ restart:
 		bms_free(pkindexattrs);
 		bms_free(idindexattrs);
 		bms_free(indexattrs);
-
+		list_free_deep(indexattrsList);
 		goto restart;
 	}
 
+	/* Remember if the table can do WARM updates */
+	relation->rd_supportswarm = (RelationWarmUpdatesEnabled(relation) && supportswarm);
+
 	/* Don't leak the old values of these bitmaps, if any */
 	bms_free(relation->rd_indexattr);
 	relation->rd_indexattr = NULL;
+	bms_free(relation->rd_exprindexattr);
+	relation->rd_exprindexattr = NULL;
 	bms_free(relation->rd_keyattr);
 	relation->rd_keyattr = NULL;
 	bms_free(relation->rd_pkattr);
 	relation->rd_pkattr = NULL;
 	bms_free(relation->rd_idattr);
 	relation->rd_idattr = NULL;
+	bms_free(relation->rd_indxnotreadyattr);
+	relation->rd_indxnotreadyattr = NULL;
+	list_free_deep(relation->rd_indexattrsList);
+	relation->rd_indexattrsList = NIL;
 
 	/*
 	 * Now save copies of the bitmaps in the relcache entry.  We intentionally
@@ -5010,7 +5068,21 @@ restart:
 	relation->rd_keyattr = bms_copy(uindexattrs);
 	relation->rd_pkattr = bms_copy(pkindexattrs);
 	relation->rd_idattr = bms_copy(idindexattrs);
-	relation->rd_indexattr = bms_copy(indexattrs);
+	relation->rd_exprindexattr = bms_copy(exprindexattrs);
+	relation->rd_indexattr = bms_copy(bms_union(indexattrs, exprindexattrs));
+	relation->rd_indxnotreadyattr = bms_copy(indxnotreadyattrs);
+
+	/*
+	 * Create a deep copy of the list, copying each bitmap into the
+	 * CurrentMemoryContext.
+	 */
+	foreach(l, indexattrsList)
+	{
+		Bitmapset *b = (Bitmapset *) lfirst(l);
+		relation->rd_indexattrsList = lappend(relation->rd_indexattrsList,
+				bms_copy(b));
+	}
+
 	MemoryContextSwitchTo(oldcxt);
 
 	/* We return our original working copy for caller to play with */
@@ -5024,6 +5096,10 @@ restart:
 			return bms_copy(relation->rd_pkattr);
 		case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 			return idindexattrs;
+		case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+			return exprindexattrs;
+		case INDEX_ATTR_BITMAP_NOTREADY:
+			return indxnotreadyattrs;
 		default:
 			elog(ERROR, "unknown attrKind %u", attrKind);
 			return NULL;
@@ -5031,6 +5107,34 @@ restart:
 }
 
 /*
+ * Get a list of bitmaps, where each bitmap contains the set of attributes
+ * used by one index.
+ *
+ * The actual information is computed in RelationGetIndexAttrBitmap, but
+ * since the only current consumer of this function calls it immediately
+ * after calling RelationGetIndexAttrBitmap, we should be fine. We don't
+ * expect any relcache invalidation to come between these two calls, and
+ * hence don't expect the cached information to change underneath us.
+ */
+List *
+RelationGetIndexAttrList(Relation relation)
+{
+	ListCell   *l;
+	List	   *indexattrsList = NIL;
+
+	/*
+	 * Create a deep copy of the list by copying bitmaps in the
+	 * CurrentMemoryContext.
+	 */
+	foreach(l, relation->rd_indexattrsList)
+	{
+		Bitmapset *b = (Bitmapset *) lfirst(l);
+		indexattrsList = lappend(indexattrsList, bms_copy(b));
+	}
+	return indexattrsList;
+}
+
+/*
  * RelationGetExclusionInfo -- get info about index's exclusion constraint
  *
  * This should be called only for an index that is known to have an
@@ -5636,6 +5740,7 @@ load_relcache_init_file(bool shared)
 		rel->rd_keyattr = NULL;
 		rel->rd_pkattr = NULL;
 		rel->rd_idattr = NULL;
+		rel->rd_indxnotreadyattr = NULL;
 		rel->rd_pubactions = NULL;
 		rel->rd_statvalid = false;
 		rel->rd_statlist = NIL;
diff --git a/src/backend/utils/time/combocid.c b/src/backend/utils/time/combocid.c
index baff998..6a2e2f2 100644
--- a/src/backend/utils/time/combocid.c
+++ b/src/backend/utils/time/combocid.c
@@ -106,7 +106,7 @@ HeapTupleHeaderGetCmin(HeapTupleHeader tup)
 {
 	CommandId	cid = HeapTupleHeaderGetRawCommandId(tup);
 
-	Assert(!(tup->t_infomask & HEAP_MOVED));
+	Assert(!(HeapTupleHeaderIsMoved(tup)));
 	Assert(TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXmin(tup)));
 
 	if (tup->t_infomask & HEAP_COMBOCID)
@@ -120,7 +120,7 @@ HeapTupleHeaderGetCmax(HeapTupleHeader tup)
 {
 	CommandId	cid = HeapTupleHeaderGetRawCommandId(tup);
 
-	Assert(!(tup->t_infomask & HEAP_MOVED));
+	Assert(!(HeapTupleHeaderIsMoved(tup)));
 
 	/*
 	 * Because GetUpdateXid() performs memory allocations if xmax is a
diff --git a/src/backend/utils/time/tqual.c b/src/backend/utils/time/tqual.c
index 519f3b6..e54d0df 100644
--- a/src/backend/utils/time/tqual.c
+++ b/src/backend/utils/time/tqual.c
@@ -186,7 +186,7 @@ HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer)
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -205,7 +205,7 @@ HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer)
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -377,7 +377,7 @@ HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -396,7 +396,7 @@ HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -471,7 +471,7 @@ HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid,
 			return HeapTupleInvisible;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -490,7 +490,7 @@ HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -753,7 +753,7 @@ HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -772,7 +772,7 @@ HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -974,7 +974,7 @@ HeapTupleSatisfiesMVCC(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -993,7 +993,7 @@ HeapTupleSatisfiesMVCC(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -1180,7 +1180,7 @@ HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin,
 		if (HeapTupleHeaderXminInvalid(tuple))
 			return HEAPTUPLE_DEAD;
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_OFF)
+		else if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -1198,7 +1198,7 @@ HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin,
 						InvalidTransactionId);
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
diff --git a/src/include/access/amapi.h b/src/include/access/amapi.h
index f919cf8..8b7af1e 100644
--- a/src/include/access/amapi.h
+++ b/src/include/access/amapi.h
@@ -13,6 +13,7 @@
 #define AMAPI_H
 
 #include "access/genam.h"
+#include "access/itup.h"
 
 /*
  * We don't wish to include planner header files here, since most of an index
@@ -74,6 +75,14 @@ typedef bool (*aminsert_function) (Relation indexRelation,
 											   Relation heapRelation,
 											   IndexUniqueCheck checkUnique,
 											   struct IndexInfo *indexInfo);
+/* insert this WARM tuple */
+typedef bool (*amwarminsert_function) (Relation indexRelation,
+											   Datum *values,
+											   bool *isnull,
+											   ItemPointer heap_tid,
+											   Relation heapRelation,
+											   IndexUniqueCheck checkUnique,
+											   struct IndexInfo *indexInfo);
 
 /* bulk delete */
 typedef IndexBulkDeleteResult *(*ambulkdelete_function) (IndexVacuumInfo *info,
@@ -152,6 +161,11 @@ typedef void (*aminitparallelscan_function) (void *target);
 /* (re)start parallel index scan */
 typedef void (*amparallelrescan_function) (IndexScanDesc scan);
 
+/* recheck index tuple and heap tuple match */
+typedef bool (*amrecheck_function) (Relation indexRel,
+		struct IndexInfo *indexInfo, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+
 /*
  * API struct for an index AM.  Note this must be stored in a single palloc'd
  * chunk of memory.
@@ -198,6 +212,7 @@ typedef struct IndexAmRoutine
 	ambuild_function ambuild;
 	ambuildempty_function ambuildempty;
 	aminsert_function aminsert;
+	amwarminsert_function amwarminsert;
 	ambulkdelete_function ambulkdelete;
 	amvacuumcleanup_function amvacuumcleanup;
 	amcanreturn_function amcanreturn;	/* can be NULL */
@@ -217,6 +232,9 @@ typedef struct IndexAmRoutine
 	amestimateparallelscan_function amestimateparallelscan;		/* can be NULL */
 	aminitparallelscan_function aminitparallelscan;		/* can be NULL */
 	amparallelrescan_function amparallelrescan; /* can be NULL */
+
+	/* interface function to support WARM */
+	amrecheck_function amrecheck;		/* can be NULL */
 } IndexAmRoutine;
 
 
diff --git a/src/include/access/genam.h b/src/include/access/genam.h
index f467b18..965be45 100644
--- a/src/include/access/genam.h
+++ b/src/include/access/genam.h
@@ -75,12 +75,29 @@ typedef struct IndexBulkDeleteResult
 	bool		estimated_count;	/* num_index_tuples is an estimate */
 	double		num_index_tuples;		/* tuples remaining */
 	double		tuples_removed; /* # removed during vacuum operation */
+	double		num_warm_pointers;	/* # WARM pointers found */
+	double		num_clear_pointers;	/* # CLEAR pointers found */
+	double		pointers_cleared;	/* # WARM pointers cleared */
+	double		warm_pointers_removed;	/* # WARM pointers removed */
+	double		clear_pointers_removed;	/* # CLEAR pointers removed */
 	BlockNumber pages_deleted;	/* # unused pages in index */
 	BlockNumber pages_free;		/* # pages available for reuse */
 } IndexBulkDeleteResult;
 
+/*
+ * IndexBulkDeleteCallback should return one of the following
+ */
+typedef enum IndexBulkDeleteCallbackResult
+{
+	IBDCR_KEEP,			/* index tuple should be preserved */
+	IBDCR_DELETE,		/* index tuple should be deleted */
+	IBDCR_CLEAR_WARM	/* index tuple should be cleared of WARM bit */
+} IndexBulkDeleteCallbackResult;
+
 /* Typedef for callback function to determine if a tuple is bulk-deletable */
-typedef bool (*IndexBulkDeleteCallback) (ItemPointer itemptr, void *state);
+typedef IndexBulkDeleteCallbackResult (*IndexBulkDeleteCallback) (
+										 ItemPointer itemptr,
+										 bool is_warm, void *state);
 
 /* struct definitions appear in relscan.h */
 typedef struct IndexScanDescData *IndexScanDesc;
@@ -135,7 +152,8 @@ extern bool index_insert(Relation indexRelation,
 			 ItemPointer heap_t_ctid,
 			 Relation heapRelation,
 			 IndexUniqueCheck checkUnique,
-			 struct IndexInfo *indexInfo);
+			 struct IndexInfo *indexInfo,
+			 bool warm_update);
 
 extern IndexScanDesc index_beginscan(Relation heapRelation,
 				Relation indexRelation,
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 5540e12..2217af9 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -72,6 +72,20 @@ typedef struct HeapUpdateFailureData
 	CommandId	cmax;
 } HeapUpdateFailureData;
 
+typedef int HeapCheckWarmChainStatus;
+
+#define HCWC_CLEAR_TUPLE		0x0001
+#define	HCWC_WARM_TUPLE			0x0002
+#define HCWC_WARM_UPDATED_TUPLE	0x0004
+
+#define HCWC_IS_MIXED(status) \
+	(((status) & (HCWC_CLEAR_TUPLE | HCWC_WARM_TUPLE)) != 0)
+#define HCWC_IS_ALL_WARM(status) \
+	(((status) & HCWC_CLEAR_TUPLE) == 0)
+#define HCWC_IS_ALL_CLEAR(status) \
+	(((status) & HCWC_WARM_TUPLE) == 0)
+#define HCWC_IS_WARM_UPDATED(status) \
+	(((status) & HCWC_WARM_UPDATED_TUPLE) != 0)
 
 /* ----------------
  *		function prototypes for heap access method
@@ -137,9 +151,10 @@ extern bool heap_fetch(Relation relation, Snapshot snapshot,
 		   Relation stats_relation);
 extern bool heap_hot_search_buffer(ItemPointer tid, Relation relation,
 					   Buffer buffer, Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call);
+					   bool *all_dead, bool first_call, bool *recheck);
 extern bool heap_hot_search(ItemPointer tid, Relation relation,
-				Snapshot snapshot, bool *all_dead);
+				Snapshot snapshot, bool *all_dead,
+				bool *recheck, Buffer *buffer, HeapTuple heapTuple);
 
 extern void heap_get_latest_tid(Relation relation, Snapshot snapshot,
 					ItemPointer tid);
@@ -161,7 +176,8 @@ extern void heap_abort_speculative(Relation relation, HeapTuple tuple);
 extern HTSU_Result heap_update(Relation relation, ItemPointer otid,
 			HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode);
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update);
 extern HTSU_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
 				CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
 				bool follow_update,
@@ -176,10 +192,16 @@ extern bool heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple);
 extern Oid	simple_heap_insert(Relation relation, HeapTuple tup);
 extern void simple_heap_delete(Relation relation, ItemPointer tid);
 extern void simple_heap_update(Relation relation, ItemPointer otid,
-				   HeapTuple tup);
+				   HeapTuple tup,
+				   Bitmapset **modified_attrs,
+				   bool *warm_update);
 
 extern void heap_sync(Relation relation);
 extern void heap_update_snapshot(HeapScanDesc scan, Snapshot snapshot);
+extern HeapCheckWarmChainStatus heap_check_warm_chain(Page dp,
+				   ItemPointer tid, bool stop_at_warm);
+extern int heap_clear_warm_chain(Page dp, ItemPointer tid,
+				   OffsetNumber *cleared_offnums);
 
 /* in heap/pruneheap.c */
 extern void heap_page_prune_opt(Relation relation, Buffer buffer);
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index e6019d5..66fd0ea 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -32,7 +32,7 @@
 #define XLOG_HEAP_INSERT		0x00
 #define XLOG_HEAP_DELETE		0x10
 #define XLOG_HEAP_UPDATE		0x20
-/* 0x030 is free, was XLOG_HEAP_MOVE */
+#define XLOG_HEAP_MULTI_INSERT	0x30
 #define XLOG_HEAP_HOT_UPDATE	0x40
 #define XLOG_HEAP_CONFIRM		0x50
 #define XLOG_HEAP_LOCK			0x60
@@ -47,18 +47,23 @@
 /*
  * We ran out of opcodes, so heapam.c now has a second RmgrId.  These opcodes
  * are associated with RM_HEAP2_ID, but are not logically different from
- * the ones above associated with RM_HEAP_ID.  XLOG_HEAP_OPMASK applies to
- * these, too.
+ * the ones above associated with RM_HEAP_ID.
+ *
+ * In PG 10, we moved XLOG_HEAP2_MULTI_INSERT to RM_HEAP_ID. That allows us to
+ * use the 0x80 bit in RM_HEAP2_ID, thus potentially making room for another 8
+ * opcodes in RM_HEAP2_ID.
  */
 #define XLOG_HEAP2_REWRITE		0x00
 #define XLOG_HEAP2_CLEAN		0x10
 #define XLOG_HEAP2_FREEZE_PAGE	0x20
 #define XLOG_HEAP2_CLEANUP_INFO 0x30
 #define XLOG_HEAP2_VISIBLE		0x40
-#define XLOG_HEAP2_MULTI_INSERT 0x50
+#define XLOG_HEAP2_WARMCLEAR	0x50
 #define XLOG_HEAP2_LOCK_UPDATED 0x60
 #define XLOG_HEAP2_NEW_CID		0x70
 
+#define XLOG_HEAP2_OPMASK		0x70
+
 /*
  * xl_heap_insert/xl_heap_multi_insert flag values, 8 bits are available.
  */
@@ -80,6 +85,7 @@
 #define XLH_UPDATE_CONTAINS_NEW_TUPLE			(1<<4)
 #define XLH_UPDATE_PREFIX_FROM_OLD				(1<<5)
 #define XLH_UPDATE_SUFFIX_FROM_OLD				(1<<6)
+#define XLH_UPDATE_WARM_UPDATE					(1<<7)
 
 /* convenience macro for checking whether any form of old tuple was logged */
 #define XLH_UPDATE_CONTAINS_OLD						\
@@ -225,6 +231,14 @@ typedef struct xl_heap_clean
 
 #define SizeOfHeapClean (offsetof(xl_heap_clean, ndead) + sizeof(uint16))
 
+typedef struct xl_heap_warmclear
+{
+	uint16		ncleared;
+	/* OFFSET NUMBERS are in the block reference 0 */
+} xl_heap_warmclear;
+
+#define SizeOfHeapWarmClear (offsetof(xl_heap_warmclear, ncleared) + sizeof(uint16))
+
 /*
  * Cleanup_info is required in some cases during a lazy VACUUM.
  * Used for reporting the results of HeapTupleHeaderAdvanceLatestRemovedXid()
@@ -388,6 +402,8 @@ extern XLogRecPtr log_heap_clean(Relation reln, Buffer buffer,
 			   OffsetNumber *nowdead, int ndead,
 			   OffsetNumber *nowunused, int nunused,
 			   TransactionId latestRemovedXid);
+extern XLogRecPtr log_heap_warmclear(Relation reln, Buffer buffer,
+			   OffsetNumber *cleared, int ncleared);
 extern XLogRecPtr log_heap_freeze(Relation reln, Buffer buffer,
 				TransactionId cutoff_xid, xl_heap_freeze_tuple *tuples,
 				int ntuples);
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 4d614b7..bcefba6 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -201,6 +201,21 @@ struct HeapTupleHeaderData
 										 * upgrade support */
 #define HEAP_MOVED (HEAP_MOVED_OFF | HEAP_MOVED_IN)
 
+/*
+ * A WARM chain usually consists of two parts. Each of these parts is a HOT
+ * chain in itself, i.e. all indexed columns have the same value, but a WARM
+ * update separates the parts. We need a mechanism to identify which part a
+ * tuple belongs to. We can't just check HeapTupleHeaderIsWarmUpdated()
+ * because during a WARM update, both the old and new tuples are marked as
+ * WARM tuples.
+ *
+ * We need another infomask bit for this, so we reuse the infomask bit that
+ * was earlier used by old-style VACUUM FULL. This is safe because the
+ * HEAP_WARM_TUPLE flag will always be set along with HEAP_WARM_UPDATED. So if
+ * both HEAP_WARM_TUPLE and HEAP_WARM_UPDATED are set, then we know that the
+ * tuple belongs to the second part of the WARM chain.
+ */
+#define HEAP_WARM_TUPLE			0x4000
 #define HEAP_XACT_MASK			0xFFF0	/* visibility-related bits */
 
 /*
@@ -260,7 +275,11 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x0800 are available */
+#define HEAP_WARM_UPDATED		0x0800	/*
+										 * This or a prior version of this
+										 * tuple in the current HOT chain was
+										 * once WARM updated
+										 */
 #define HEAP_LATEST_TUPLE		0x1000	/*
 										 * This is the last tuple in chain and
 										 * ip_posid points to the root line
@@ -271,7 +290,7 @@ struct HeapTupleHeaderData
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF800	/* visibility-related bits */
 
 
 /*
@@ -396,7 +415,7 @@ struct HeapTupleHeaderData
 /* SetCmin is reasonably simple since we never need a combo CID */
 #define HeapTupleHeaderSetCmin(tup, cid) \
 do { \
-	Assert(!((tup)->t_infomask & HEAP_MOVED)); \
+	Assert(!HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_cid = (cid); \
 	(tup)->t_infomask &= ~HEAP_COMBOCID; \
 } while (0)
@@ -404,7 +423,7 @@ do { \
 /* SetCmax must be used after HeapTupleHeaderAdjustCmax; see combocid.c */
 #define HeapTupleHeaderSetCmax(tup, cid, iscombo) \
 do { \
-	Assert(!((tup)->t_infomask & HEAP_MOVED)); \
+	Assert(!HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_cid = (cid); \
 	if (iscombo) \
 		(tup)->t_infomask |= HEAP_COMBOCID; \
@@ -414,7 +433,7 @@ do { \
 
 #define HeapTupleHeaderGetXvac(tup) \
 ( \
-	((tup)->t_infomask & HEAP_MOVED) ? \
+	HeapTupleHeaderIsMoved(tup) ? \
 		(tup)->t_choice.t_heap.t_field3.t_xvac \
 	: \
 		InvalidTransactionId \
@@ -422,7 +441,7 @@ do { \
 
 #define HeapTupleHeaderSetXvac(tup, xid) \
 do { \
-	Assert((tup)->t_infomask & HEAP_MOVED); \
+	Assert(HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_xvac = (xid); \
 } while (0)
 
@@ -510,6 +529,21 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+#define HeapTupleHeaderSetWarmUpdated(tup) \
+do { \
+	(tup)->t_infomask2 |= HEAP_WARM_UPDATED; \
+} while (0)
+
+#define HeapTupleHeaderClearWarmUpdated(tup) \
+do { \
+	(tup)->t_infomask2 &= ~HEAP_WARM_UPDATED; \
+} while (0)
+
+#define HeapTupleHeaderIsWarmUpdated(tup) \
+( \
+  ((tup)->t_infomask2 & HEAP_WARM_UPDATED) != 0 \
+)
+
 /*
  * Mark this as the last tuple in the HOT chain. Before PG v10 we used to store
  * the TID of the tuple itself in t_ctid field to mark the end of the chain.
@@ -635,6 +669,58 @@ do { \
 )
 
 /*
+ * Macros to check if a tuple was moved off/in by old-style VACUUM FULL from
+ * the pre-9.0 era. Such tuples must not have the HEAP_WARM_TUPLE flag set.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderIsMovedOff(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED_OFF) \
+)
+
+#define HeapTupleHeaderIsMovedIn(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED_IN) \
+)
+
+#define HeapTupleHeaderIsMoved(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED) \
+)
+
+/*
+ * Check if tuple belongs to the second part of the WARM chain.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderIsWarm(tuple) \
+( \
+	HeapTupleHeaderIsWarmUpdated(tuple) && \
+	(((tuple)->t_infomask & HEAP_WARM_TUPLE) != 0) \
+)
+
+/*
+ * Mark the tuple as a member of the second part of the chain. Must only be
+ * done on a tuple which is already marked as WARM-updated.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderSetWarm(tuple) \
+( \
+	AssertMacro(HeapTupleHeaderIsWarmUpdated(tuple)), \
+	(tuple)->t_infomask |= HEAP_WARM_TUPLE \
+)
+
+#define HeapTupleHeaderClearWarm(tuple) \
+( \
+	(tuple)->t_infomask &= ~HEAP_WARM_TUPLE \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
@@ -785,6 +871,24 @@ struct MinimalTupleData
 #define HeapTupleClearHeapOnly(tuple) \
 		HeapTupleHeaderClearHeapOnly((tuple)->t_data)
 
+#define HeapTupleIsWarmUpdated(tuple) \
+		HeapTupleHeaderIsWarmUpdated((tuple)->t_data)
+
+#define HeapTupleSetWarmUpdated(tuple) \
+		HeapTupleHeaderSetWarmUpdated((tuple)->t_data)
+
+#define HeapTupleClearWarmUpdated(tuple) \
+		HeapTupleHeaderClearWarmUpdated((tuple)->t_data)
+
+#define HeapTupleIsWarm(tuple) \
+		HeapTupleHeaderIsWarm((tuple)->t_data)
+
+#define HeapTupleSetWarm(tuple) \
+		HeapTupleHeaderSetWarm((tuple)->t_data)
+
+#define HeapTupleClearWarm(tuple) \
+		HeapTupleHeaderClearWarm((tuple)->t_data)
+
 #define HeapTupleGetOid(tuple) \
 		HeapTupleHeaderGetOid((tuple)->t_data)
 
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index f9304db..163180d 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -427,6 +427,12 @@ typedef BTScanOpaqueData *BTScanOpaque;
 #define SK_BT_NULLS_FIRST	(INDOPTION_NULLS_FIRST << SK_BT_INDOPTION_SHIFT)
 
 /*
+ * Flags overloaded on the t_tid.ip_posid field. They are managed by
+ * ItemPointerSetFlags and corresponding routines.
+ */
+#define BTREE_INDEX_WARM_POINTER	0x01
+
+/*
  * external entry points for btree, in nbtree.c
  */
 extern IndexBuildResult *btbuild(Relation heap, Relation index,
@@ -436,6 +442,10 @@ extern bool btinsert(Relation rel, Datum *values, bool *isnull,
 		 ItemPointer ht_ctid, Relation heapRel,
 		 IndexUniqueCheck checkUnique,
 		 struct IndexInfo *indexInfo);
+extern bool btwarminsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 struct IndexInfo *indexInfo);
 extern IndexScanDesc btbeginscan(Relation rel, int nkeys, int norderbys);
 extern Size btestimateparallelscan(void);
 extern void btinitparallelscan(void *target);
@@ -487,10 +497,12 @@ extern void _bt_pageinit(Page page, Size size);
 extern bool _bt_page_recyclable(Page page);
 extern void _bt_delitems_delete(Relation rel, Buffer buf,
 					OffsetNumber *itemnos, int nitems, Relation heapRel);
-extern void _bt_delitems_vacuum(Relation rel, Buffer buf,
-					OffsetNumber *itemnos, int nitems,
-					BlockNumber lastBlockVacuumed);
+extern void _bt_handleitems_vacuum(Relation rel, Buffer buf,
+					OffsetNumber *delitemnos, int ndelitems,
+					OffsetNumber *clearitemnos, int nclearitems);
 extern int	_bt_pagedel(Relation rel, Buffer buf);
+extern void	_bt_clear_items(Page page, OffsetNumber *clearitemnos,
+					uint16 nclearitems);
 
 /*
  * prototypes for functions in nbtsearch.c
@@ -537,6 +549,9 @@ extern bytea *btoptions(Datum reloptions, bool validate);
 extern bool btproperty(Oid index_oid, int attno,
 		   IndexAMProperty prop, const char *propname,
 		   bool *res, bool *isnull);
+extern bool btrecheck(Relation indexRel, struct IndexInfo *indexInfo,
+		IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
 
 /*
  * prototypes for functions in nbtvalidate.c
diff --git a/src/include/access/nbtxlog.h b/src/include/access/nbtxlog.h
index d6a3085..6a86628 100644
--- a/src/include/access/nbtxlog.h
+++ b/src/include/access/nbtxlog.h
@@ -142,7 +142,8 @@ typedef struct xl_btree_reuse_page
 /*
  * This is what we need to know about vacuum of individual leaf index tuples.
  * The WAL record can represent deletion of any number of index tuples on a
- * single index page when executed by VACUUM.
+ * single index page when executed by VACUUM. It also includes tuples whose
+ * WARM bits are cleared by VACUUM.
  *
  * For MVCC scans, lastBlockVacuumed will be set to InvalidBlockNumber.
  * For a non-MVCC index scans there is an additional correctness requirement
@@ -165,11 +166,12 @@ typedef struct xl_btree_reuse_page
 typedef struct xl_btree_vacuum
 {
 	BlockNumber lastBlockVacuumed;
-
-	/* TARGET OFFSET NUMBERS FOLLOW */
+	uint16		ndelitems;
+	uint16		nclearitems;
+	/* ndelitems + nclearitems TARGET OFFSET NUMBERS FOLLOW */
 } xl_btree_vacuum;
 
-#define SizeOfBtreeVacuum	(offsetof(xl_btree_vacuum, lastBlockVacuumed) + sizeof(BlockNumber))
+#define SizeOfBtreeVacuum	(offsetof(xl_btree_vacuum, nclearitems) + sizeof(uint16))
 
 /*
  * This is what we need to know about marking an empty branch for deletion.
diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h
index 3fc726d..fa178d3 100644
--- a/src/include/access/relscan.h
+++ b/src/include/access/relscan.h
@@ -104,6 +104,9 @@ typedef struct IndexScanDescData
 	/* index access method's private state */
 	void	   *opaque;			/* access-method-specific info */
 
+	/* IndexInfo structure for this index */
+	struct IndexInfo  *indexInfo;
+
 	/*
 	 * In an index-only scan, a successful amgettuple call must fill either
 	 * xs_itup (and xs_itupdesc) or xs_hitup (and xs_hitupdesc) to provide the
@@ -119,7 +122,7 @@ typedef struct IndexScanDescData
 	HeapTupleData xs_ctup;		/* current heap tuple, if any */
 	Buffer		xs_cbuf;		/* current heap buffer in scan, if any */
 	/* NB: if xs_cbuf is not InvalidBuffer, we hold a pin on that buffer */
-	bool		xs_recheck;		/* T means scan keys must be rechecked */
+	bool		xs_recheck;		/* T means scan keys must be rechecked for each tuple */
 
 	/*
 	 * When fetching with an ordering operator, the values of the ORDER BY
diff --git a/src/include/catalog/index.h b/src/include/catalog/index.h
index 20bec90..f92ec29 100644
--- a/src/include/catalog/index.h
+++ b/src/include/catalog/index.h
@@ -89,6 +89,13 @@ extern void FormIndexDatum(IndexInfo *indexInfo,
 			   Datum *values,
 			   bool *isnull);
 
+extern void FormIndexPlainDatum(IndexInfo *indexInfo,
+			   Relation heapRel,
+			   HeapTuple heapTup,
+			   Datum *values,
+			   bool *isnull,
+			   bool *isavail);
+
 extern void index_build(Relation heapRelation,
 			Relation indexRelation,
 			IndexInfo *indexInfo,
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index 711211d..3f1a142 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -2789,6 +2789,8 @@ DATA(insert OID = 1933 (  pg_stat_get_tuples_deleted	PGNSP PGUID 12 1 0 0 0 f f
 DESCR("statistics: number of tuples deleted");
 DATA(insert OID = 1972 (  pg_stat_get_tuples_hot_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated");
+DATA(insert OID = 3402 (  pg_stat_get_tuples_warm_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated");
 DATA(insert OID = 2878 (  pg_stat_get_live_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_live_tuples _null_ _null_ _null_ ));
 DESCR("statistics: number of live tuples");
 DATA(insert OID = 2879 (  pg_stat_get_dead_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_dead_tuples _null_ _null_ _null_ ));
@@ -2941,6 +2943,8 @@ DATA(insert OID = 3042 (  pg_stat_get_xact_tuples_deleted		PGNSP PGUID 12 1 0 0
 DESCR("statistics: number of tuples deleted in current transaction");
 DATA(insert OID = 3043 (  pg_stat_get_xact_tuples_hot_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated in current transaction");
+DATA(insert OID = 3405 (  pg_stat_get_xact_tuples_warm_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated in current transaction");
 DATA(insert OID = 3044 (  pg_stat_get_xact_blocks_fetched		PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_fetched _null_ _null_ _null_ ));
 DESCR("statistics: number of blocks fetched in current transaction");
 DATA(insert OID = 3045 (  pg_stat_get_xact_blocks_hit			PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_hit _null_ _null_ _null_ ));
diff --git a/src/include/commands/progress.h b/src/include/commands/progress.h
index 9472ecc..b355b61 100644
--- a/src/include/commands/progress.h
+++ b/src/include/commands/progress.h
@@ -25,6 +25,7 @@
 #define PROGRESS_VACUUM_NUM_INDEX_VACUUMS		4
 #define PROGRESS_VACUUM_MAX_DEAD_TUPLES			5
 #define PROGRESS_VACUUM_NUM_DEAD_TUPLES			6
+#define PROGRESS_VACUUM_HEAP_BLKS_WARMCLEARED	7
 
 /* Phases of vacuum (as advertised via PROGRESS_VACUUM_PHASE) */
 #define PROGRESS_VACUUM_PHASE_SCAN_HEAP			1
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index d3849b9..7e1ec56 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -506,6 +506,7 @@ extern int	ExecCleanTargetListLength(List *targetlist);
 extern void ExecOpenIndices(ResultRelInfo *resultRelInfo, bool speculative);
 extern void ExecCloseIndices(ResultRelInfo *resultRelInfo);
 extern List *ExecInsertIndexTuples(TupleTableSlot *slot, ItemPointer tupleid,
+					  ItemPointer root_tid, Bitmapset *modified_attrs,
 					  EState *estate, bool noDupErr, bool *specConflict,
 					  List *arbiterIndexes);
 extern bool ExecCheckIndexConstraints(TupleTableSlot *slot, EState *estate,
diff --git a/src/include/executor/nodeIndexscan.h b/src/include/executor/nodeIndexscan.h
index ea3f3a5..ebeec74 100644
--- a/src/include/executor/nodeIndexscan.h
+++ b/src/include/executor/nodeIndexscan.h
@@ -41,5 +41,4 @@ extern void ExecIndexEvalRuntimeKeys(ExprContext *econtext,
 extern bool ExecIndexEvalArrayKeys(ExprContext *econtext,
 					   IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 extern bool ExecIndexAdvanceArrayKeys(IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
-
 #endif   /* NODEINDEXSCAN_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index fa99244..eed75a8 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -133,6 +133,7 @@ typedef struct IndexInfo
 	NodeTag		type;
 	int			ii_NumIndexAttrs;
 	AttrNumber	ii_KeyAttrNumbers[INDEX_MAX_KEYS];
+	Bitmapset  *ii_indxattrs;	/* bitmap of all columns used in this index */
 	List	   *ii_Expressions; /* list of Expr */
 	List	   *ii_ExpressionsState;	/* list of ExprState */
 	List	   *ii_Predicate;	/* list of Expr */
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index e29397f..99bdc8b 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -105,6 +105,7 @@ typedef struct PgStat_TableCounts
 	PgStat_Counter t_tuples_updated;
 	PgStat_Counter t_tuples_deleted;
 	PgStat_Counter t_tuples_hot_updated;
+	PgStat_Counter t_tuples_warm_updated;
 	bool		t_truncated;
 
 	PgStat_Counter t_delta_live_tuples;
@@ -625,6 +626,7 @@ typedef struct PgStat_StatTabEntry
 	PgStat_Counter tuples_updated;
 	PgStat_Counter tuples_deleted;
 	PgStat_Counter tuples_hot_updated;
+	PgStat_Counter tuples_warm_updated;
 
 	PgStat_Counter n_live_tuples;
 	PgStat_Counter n_dead_tuples;
@@ -1285,7 +1287,7 @@ pgstat_report_wait_end(void)
 	(pgStatBlockWriteTime += (n))
 
 extern void pgstat_count_heap_insert(Relation rel, PgStat_Counter n);
-extern void pgstat_count_heap_update(Relation rel, bool hot);
+extern void pgstat_count_heap_update(Relation rel, bool hot, bool warm);
 extern void pgstat_count_heap_delete(Relation rel);
 extern void pgstat_count_truncate(Relation rel);
 extern void pgstat_update_heap_dead_tuples(Relation rel, int delta);
diff --git a/src/include/storage/bufpage.h b/src/include/storage/bufpage.h
index e956dc3..1852195 100644
--- a/src/include/storage/bufpage.h
+++ b/src/include/storage/bufpage.h
@@ -433,6 +433,8 @@ extern void PageIndexMultiDelete(Page page, OffsetNumber *itemnos, int nitems);
 extern void PageIndexTupleDeleteNoCompact(Page page, OffsetNumber offset);
 extern bool PageIndexTupleOverwrite(Page page, OffsetNumber offnum,
 						Item newtup, Size newsize);
+extern void PageIndexClearWarmTuples(Page page, OffsetNumber *clearitemnos,
+						uint16 nclearitems);
 extern char *PageSetChecksumCopy(Page page, BlockNumber blkno);
 extern void PageSetChecksumInplace(Page page, BlockNumber blkno);
 
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index ab875bb..2b86054 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -142,9 +142,16 @@ typedef struct RelationData
 
 	/* data managed by RelationGetIndexAttrBitmap: */
 	Bitmapset  *rd_indexattr;	/* identifies columns used in indexes */
+	Bitmapset  *rd_exprindexattr; /* identifies columns used in expression or
+									 predicate indexes */
+	Bitmapset  *rd_indxnotreadyattr;	/* columns used by indexes not yet
+										   ready */
 	Bitmapset  *rd_keyattr;		/* cols that can be ref'd by foreign keys */
 	Bitmapset  *rd_pkattr;		/* cols included in primary key */
 	Bitmapset  *rd_idattr;		/* included in replica identity index */
+	List	   *rd_indexattrsList;	/* list of bitmaps, one per index, giving
+									   the attributes used by that index */
+	bool		rd_supportswarm;/* True if the table can be WARM updated */
 
 	PublicationActions  *rd_pubactions;	/* publication actions */
 
@@ -281,6 +288,7 @@ typedef struct StdRdOptions
 	bool		user_catalog_table;		/* use as an additional catalog
 										 * relation */
 	int			parallel_workers;		/* max number of parallel workers */
+	bool		enable_warm;	/* should WARM be allowed on this table */
 } StdRdOptions;
 
 #define HEAP_MIN_FILLFACTOR			10
@@ -319,6 +327,17 @@ typedef struct StdRdOptions
 	  (relation)->rd_rel->relkind == RELKIND_MATVIEW) ? \
 	 ((StdRdOptions *) (relation)->rd_options)->user_catalog_table : false)
 
+#define HEAP_DEFAULT_ENABLE_WARM	true
+/*
+ * RelationWarmUpdatesEnabled
+ * 		Returns whether the relation supports WARM update.
+ */
+#define RelationWarmUpdatesEnabled(relation) \
+	(((relation)->rd_options && \
+	 (relation)->rd_rel->relkind == RELKIND_RELATION) ? \
+	 ((StdRdOptions *) ((relation)->rd_options))->enable_warm : \
+		HEAP_DEFAULT_ENABLE_WARM)
+
 /*
  * RelationGetParallelWorkers
  *		Returns the relation's parallel_workers reloption setting.
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index 81af3ae..06c0183 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -51,11 +51,14 @@ typedef enum IndexAttrBitmapKind
 	INDEX_ATTR_BITMAP_ALL,
 	INDEX_ATTR_BITMAP_KEY,
 	INDEX_ATTR_BITMAP_PRIMARY_KEY,
-	INDEX_ATTR_BITMAP_IDENTITY_KEY
+	INDEX_ATTR_BITMAP_IDENTITY_KEY,
+	INDEX_ATTR_BITMAP_EXPR_PREDICATE,
+	INDEX_ATTR_BITMAP_NOTREADY
 } IndexAttrBitmapKind;
 
 extern Bitmapset *RelationGetIndexAttrBitmap(Relation relation,
 						   IndexAttrBitmapKind keyAttrs);
+extern List *RelationGetIndexAttrList(Relation relation);
 
 extern void RelationGetExclusionInfo(Relation indexRelation,
 						 Oid **operators,
diff --git a/src/test/regress/expected/alter_generic.out b/src/test/regress/expected/alter_generic.out
index ce581bb..85e4c70 100644
--- a/src/test/regress/expected/alter_generic.out
+++ b/src/test/regress/expected/alter_generic.out
@@ -161,15 +161,15 @@ ALTER SERVER alt_fserv1 RENAME TO alt_fserv3;   -- OK
 SELECT fdwname FROM pg_foreign_data_wrapper WHERE fdwname like 'alt_fdw%';
  fdwname  
 ----------
- alt_fdw2
  alt_fdw3
+ alt_fdw2
 (2 rows)
 
 SELECT srvname FROM pg_foreign_server WHERE srvname like 'alt_fserv%';
   srvname   
 ------------
- alt_fserv2
  alt_fserv3
+ alt_fserv2
 (2 rows)
 
 --
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index d706f42..f7dc4a4 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1756,6 +1756,7 @@ pg_stat_all_tables| SELECT c.oid AS relid,
     pg_stat_get_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_tuples_deleted(c.oid) AS n_tup_del,
     pg_stat_get_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_tuples_warm_updated(c.oid) AS n_tup_warm_upd,
     pg_stat_get_live_tuples(c.oid) AS n_live_tup,
     pg_stat_get_dead_tuples(c.oid) AS n_dead_tup,
     pg_stat_get_mod_since_analyze(c.oid) AS n_mod_since_analyze,
@@ -1903,6 +1904,7 @@ pg_stat_sys_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1946,6 +1948,7 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1983,7 +1986,8 @@ pg_stat_xact_all_tables| SELECT c.oid AS relid,
     pg_stat_get_xact_tuples_inserted(c.oid) AS n_tup_ins,
     pg_stat_get_xact_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_xact_tuples_deleted(c.oid) AS n_tup_del,
-    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd
+    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_xact_tuples_warm_updated(c.oid) AS n_tup_warm_upd
    FROM ((pg_class c
      LEFT JOIN pg_index i ON ((c.oid = i.indrelid)))
      LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
@@ -1999,7 +2003,8 @@ pg_stat_xact_sys_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname = ANY (ARRAY['pg_catalog'::name, 'information_schema'::name])) OR (pg_stat_xact_all_tables.schemaname ~ '^pg_toast'::text));
 pg_stat_xact_user_functions| SELECT p.oid AS funcid,
@@ -2021,7 +2026,8 @@ pg_stat_xact_user_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname <> ALL (ARRAY['pg_catalog'::name, 'information_schema'::name])) AND (pg_stat_xact_all_tables.schemaname !~ '^pg_toast'::text));
 pg_statio_all_indexes| SELECT c.oid AS relid,
diff --git a/src/test/regress/expected/warm.out b/src/test/regress/expected/warm.out
new file mode 100644
index 0000000..1f07272
--- /dev/null
+++ b/src/test/regress/expected/warm.out
@@ -0,0 +1,930 @@
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+-- This should be a HOT update as non-index key is updated, but the
+-- page won't have any free space, so probably a non-HOT update
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab1
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Check if index only scan works correctly
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab1
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx1 on updtst_tab1
+   Index Cond: (b = 140001)
+(2 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab1;
+------------------
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab2
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE a = 1;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab2
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE b = 701;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+VACUUM updtst_tab2;
+EXPLAIN (costs off) SELECT b FROM updtst_tab2 WHERE b = 701;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx2 on updtst_tab2
+   Index Cond: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab2 WHERE b = 701;
+  b  
+-----
+ 701
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab2;
+------------------
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 1;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 701;
+       QUERY PLAN        
+-------------------------
+ Seq Scan on updtst_tab3
+   Filter: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 701;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+  b   
+------
+ 1421
+(1 row)
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+SET enable_seqscan = false;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    98
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 2;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 702;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx3 on updtst_tab3
+   Index Cond: (b = 702)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 702;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+  b   
+------
+ 1422
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab3;
+------------------
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+explain select * from test_warm where lower(a) = 'test';
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on test_warm  (cost=4.18..12.65 rows=4 width=64)
+   Recheck Cond: (lower(a) = 'test'::text)
+   ->  Bitmap Index Scan on test_warmindx  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (lower(a) = 'test'::text)
+(4 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+select *, ctid from test_warm where a = 'test';
+ a | b | ctid 
+---+---+------
+(0 rows)
+
+select *, ctid from test_warm where a = 'TEST';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+                                   QUERY PLAN                                    
+---------------------------------------------------------------------------------
+ Index Scan using test_warmindx on test_warm  (cost=0.15..20.22 rows=4 width=64)
+   Index Cond: (lower(a) = 'test'::text)
+(2 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+DROP TABLE test_warm;
+--- Test with toast data types
+CREATE TABLE test_toast_warm (a int unique, b text, c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+-- insert a large enough value to cause index datum compression
+INSERT INTO test_toast_warm VALUES (1, repeat('a', 600), 100);
+INSERT INTO test_toast_warm VALUES (2, repeat('b', 2), 100);
+INSERT INTO test_toast_warm VALUES (3, repeat('c', 4), 100);
+INSERT INTO test_toast_warm VALUES (4, repeat('d', 63), 100);
+INSERT INTO test_toast_warm VALUES (5, repeat('e', 126), 100);
+INSERT INTO test_toast_warm VALUES (6, repeat('f', 127), 100);
+INSERT INTO test_toast_warm VALUES (7, repeat('g', 128), 100);
+INSERT INTO test_toast_warm VALUES (8, repeat('h', 3200), 100);
+UPDATE test_toast_warm SET b = repeat('q', 600) WHERE a = 1;
+UPDATE test_toast_warm SET b = repeat('r', 2) WHERE a = 2;
+UPDATE test_toast_warm SET b = repeat('s', 4) WHERE a = 3;
+UPDATE test_toast_warm SET b = repeat('t', 63) WHERE a = 4;
+UPDATE test_toast_warm SET b = repeat('u', 126) WHERE a = 5;
+UPDATE test_toast_warm SET b = repeat('v', 127) WHERE a = 6;
+UPDATE test_toast_warm SET b = repeat('w', 128) WHERE a = 7;
+UPDATE test_toast_warm SET b = repeat('x', 3200) WHERE a = 8;
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+                        QUERY PLAN                         
+-----------------------------------------------------------
+ Index Scan using test_toast_warm_a_key on test_toast_warm
+   Index Cond: (a = 1)
+(2 rows)
+
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+                                                                                                                                                                                                                                                                                                                      QUERY PLAN                                                                                                                                                                                                                                                                                                                      
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Index Scan using test_toast_warm_index on test_toast_warm
+   Index Cond: (b = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'::text)
+(2 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+                                                                                                                                                                                                                                                                                                                      QUERY PLAN                                                                                                                                                                                                                                                                                                                      
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Index Only Scan using test_toast_warm_index on test_toast_warm
+   Index Cond: (b = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'::text)
+(2 rows)
+
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+ a |                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+---+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 1 | qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+ a | b 
+---+---
+(0 rows)
+
+SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+ b 
+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+ a |                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+---+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 1 | qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+ a | b  
+---+----
+ 2 | rr
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+ a |  b   
+---+------
+ 3 | ssss
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+ a |                                b                                
+---+-----------------------------------------------------------------
+ 4 | ttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttt
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+ a |                                                               b                                                                
+---+--------------------------------------------------------------------------------------------------------------------------------
+ 5 | uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+ a |                                                                b                                                                
+---+---------------------------------------------------------------------------------------------------------------------------------
+ 6 | vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+ a |                                                                b                                                                 
+---+----------------------------------------------------------------------------------------------------------------------------------
+ 7 | wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+ a |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                b                                                                                                                                                                                                                                                                                                                                                                                                          
+---+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 8 | xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+(1 row)
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                                    QUERY PLAN                                                                                                                                                                                                                                                                                                                    
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 'qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq'::text)
+(2 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                                    QUERY PLAN                                                                                                                                                                                                                                                                                                                    
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 'qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq'::text)
+(2 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+ a |                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+---+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 1 | qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+ a | b  
+---+----
+ 2 | rr
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+ a |  b   
+---+------
+ 3 | ssss
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+ a |                                b                                
+---+-----------------------------------------------------------------
+ 4 | ttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttt
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+ a |                                                               b                                                                
+---+--------------------------------------------------------------------------------------------------------------------------------
+ 5 | uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+ a |                                                                b                                                                
+---+---------------------------------------------------------------------------------------------------------------------------------
+ 6 | vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+ a |                                                                b                                                                 
+---+----------------------------------------------------------------------------------------------------------------------------------
+ 7 | wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+ a |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            
+---+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 8 | xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+(1 row)
+
+DROP TABLE test_toast_warm;
+-- Test with numeric data type
+CREATE TABLE test_toast_warm (a int unique, b numeric(10,2), c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+INSERT INTO test_toast_warm VALUES (1, 10.2, 100);
+INSERT INTO test_toast_warm VALUES (2, 11.22, 100);
+INSERT INTO test_toast_warm VALUES (3, 12.222, 100);
+INSERT INTO test_toast_warm VALUES (4, 13.20, 100);
+INSERT INTO test_toast_warm VALUES (5, 14.201, 100);
+UPDATE test_toast_warm SET b = 100.2 WHERE a = 1;
+UPDATE test_toast_warm SET b = 101.22 WHERE a = 2;
+UPDATE test_toast_warm SET b = 102.222 WHERE a = 3;
+UPDATE test_toast_warm SET b = 103.20 WHERE a = 4;
+UPDATE test_toast_warm SET b = 104.201 WHERE a = 5;
+SELECT * FROM test_toast_warm;
+ a |   b    |  c  
+---+--------+-----
+ 1 | 100.20 | 100
+ 2 | 101.22 | 100
+ 3 | 102.22 | 100
+ 4 | 103.20 | 100
+ 5 | 104.20 | 100
+(5 rows)
+
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+                        QUERY PLAN                         
+-----------------------------------------------------------
+ Index Scan using test_toast_warm_a_key on test_toast_warm
+   Index Cond: (a = 1)
+(2 rows)
+
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+                    QUERY PLAN                    
+--------------------------------------------------
+ Bitmap Heap Scan on test_toast_warm
+   Recheck Cond: (b = 10.2)
+   ->  Bitmap Index Scan on test_toast_warm_index
+         Index Cond: (b = 10.2)
+(4 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+                    QUERY PLAN                    
+--------------------------------------------------
+ Bitmap Heap Scan on test_toast_warm
+   Recheck Cond: (b = 100.2)
+   ->  Bitmap Index Scan on test_toast_warm_index
+         Index Cond: (b = 100.2)
+(4 rows)
+
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+ a | b 
+---+---
+(0 rows)
+
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+ b 
+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+   b    
+--------
+ 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+ a |   b    
+---+--------
+ 2 | 101.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+ a |   b    
+---+--------
+ 3 | 102.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+ a |   b    
+---+--------
+ 4 | 103.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+ a |   b    
+---+--------
+ 5 | 104.20
+(1 row)
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+         QUERY PLAN          
+-----------------------------
+ Seq Scan on test_toast_warm
+   Filter: (a = 1)
+(2 rows)
+
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+         QUERY PLAN          
+-----------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 10.2)
+(2 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+         QUERY PLAN          
+-----------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 100.2)
+(2 rows)
+
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+ a | b 
+---+---
+(0 rows)
+
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+ b 
+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+   b    
+--------
+ 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+ a |   b    
+---+--------
+ 2 | 101.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+ a |   b    
+---+--------
+ 3 | 102.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+ a |   b    
+---+--------
+ 4 | 103.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+ a |   b    
+---+--------
+ 5 | 104.20
+(1 row)
+
+DROP TABLE test_toast_warm;
+-- Toasted heap attributes
+CREATE TABLE toasttest(descr text , cnt int DEFAULT 0, f1 text, f2 text);
+CREATE INDEX testindx1 ON toasttest(descr);
+CREATE INDEX testindx2 ON toasttest(f1);
+INSERT INTO toasttest(descr, f1, f2) VALUES('two-compressed', repeat('1234567890',1000), repeat('1234567890',1000));
+INSERT INTO toasttest(descr, f1, f2) VALUES('two-toasted', repeat('1234567890',20000), repeat('1234567890',50000));
+INSERT INTO toasttest(descr, f1, f2) VALUES('one-compressed,one-toasted', repeat('1234567890',1000), repeat('1234567890',50000));
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+ ctid  |                                                                                                substring                                                                                                 
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,1) | (two-compressed,0,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012
+ (0,2) | (two-toasted,0,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+ (0,3) | ("one-compressed,one-toasted",0,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+(3 rows)
+
+-- UPDATE f1 by doing string manipulation, but the updated value remains the
+-- same as the old value
+UPDATE toasttest SET cnt = cnt +1, f1 = trim(leading '-' from '-'||f1) RETURNING substring(toasttest::text, 1, 200);
+                                                                                                substring                                                                                                 
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (two-compressed,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012
+ (two-toasted,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+ ("one-compressed,one-toasted",1,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+(3 rows)
+
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+                  QUERY PLAN                   
+-----------------------------------------------
+ Seq Scan on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+ ctid  |                                                                                                substring                                                                                                 
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,4) | (two-compressed,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012
+ (0,5) | (two-toasted,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+ (0,6) | ("one-compressed,one-toasted",1,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+(3 rows)
+
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+                           QUERY PLAN                            
+-----------------------------------------------------------------
+ Index Scan using testindx2 on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+ ctid  |                                                                                                substring                                                                                                 
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,6) | ("one-compressed,one-toasted",1,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+ (0,4) | (two-compressed,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012
+ (0,5) | (two-toasted,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+(3 rows)
+
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+-- UPDATE f1 for real this time
+UPDATE toasttest SET cnt = cnt +1, f1 = '-'||f1 RETURNING substring(toasttest::text, 1, 200);
+                                                                                                substring                                                                                                 
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (two-toasted,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234
+ ("one-compressed,one-toasted",2,-12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567
+(3 rows)
+
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+                  QUERY PLAN                   
+-----------------------------------------------
+ Seq Scan on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+ ctid  |                                                                                                substring                                                                                                 
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,7) | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,8) | (two-toasted,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234
+ (0,9) | ("one-compressed,one-toasted",2,-12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567
+(3 rows)
+
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+                           QUERY PLAN                            
+-----------------------------------------------------------------
+ Index Scan using testindx2 on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+ ctid  |                                                                                                substring                                                                                                 
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,9) | ("one-compressed,one-toasted",2,-12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567
+ (0,7) | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,8) | (two-toasted,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234
+(3 rows)
+
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+-- UPDATE f1 from toasted to compressed
+UPDATE toasttest SET cnt = cnt +1, f1 = repeat('1234567890',1000) WHERE descr = 'two-toasted';
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+                  QUERY PLAN                   
+-----------------------------------------------
+ Seq Scan on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+  ctid  |                                                                                                substring                                                                                                 
+--------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,7)  | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,9)  | ("one-compressed,one-toasted",2,-12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567
+ (0,10) | (two-toasted,3,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+(3 rows)
+
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+                           QUERY PLAN                            
+-----------------------------------------------------------------
+ Index Scan using testindx2 on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+  ctid  |                                                                                                substring                                                                                                 
+--------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,9)  | ("one-compressed,one-toasted",2,-12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567
+ (0,7)  | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,10) | (two-toasted,3,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+(3 rows)
+
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+-- UPDATE f1 from compressed to toasted
+UPDATE toasttest SET cnt = cnt +1, f1 = repeat('1234567890',2000) WHERE descr = 'one-compressed,one-toasted';
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+                  QUERY PLAN                   
+-----------------------------------------------
+ Seq Scan on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+  ctid  |                                                                                                substring                                                                                                 
+--------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,7)  | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,10) | (two-toasted,3,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+ (0,11) | ("one-compressed,one-toasted",3,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+(3 rows)
+
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+                           QUERY PLAN                            
+-----------------------------------------------------------------
+ Index Scan using testindx2 on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+  ctid  |                                                                                                substring                                                                                                 
+--------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,7)  | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,10) | (two-toasted,3,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+ (0,11) | ("one-compressed,one-toasted",3,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+(3 rows)
+
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+DROP TABLE toasttest;
+-- Test enable_warm reloption
+CREATE TABLE testrelopt (a int unique, b int, c int) WITH (enable_warm = true);
+ALTER TABLE testrelopt SET (enable_warm = true);
+ALTER TABLE testrelopt RESET (enable_warm);
+-- should fail since we don't allow turning WARM off
+ALTER TABLE testrelopt SET (enable_warm = false);
+ERROR:  WARM updates cannot be disabled on the table "testrelopt"
+DROP TABLE testrelopt;
+CREATE TABLE testrelopt (a int unique, b int, c int) WITH (enable_warm = false);
+-- should be ok since the default is ON and we support turning WARM ON
+ALTER TABLE testrelopt RESET (enable_warm);
+ALTER TABLE testrelopt SET (enable_warm = true);
+-- should fail since we don't allow turning WARM off
+ALTER TABLE testrelopt SET (enable_warm = false);
+ERROR:  WARM updates cannot be disabled on the table "testrelopt"
+DROP TABLE testrelopt;
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index 9f95b01..cd99f88 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -42,6 +42,8 @@ test: create_type
 test: create_table
 test: create_function_2
 
+test: warm
+
 # ----------
 # Load huge amounts of data
 # We should split the data files into single files and then
diff --git a/src/test/regress/sql/warm.sql b/src/test/regress/sql/warm.sql
new file mode 100644
index 0000000..fc80c0f
--- /dev/null
+++ b/src/test/regress/sql/warm.sql
@@ -0,0 +1,360 @@
+
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+
+-- This should be a HOT update as non-index key is updated, but the
+-- page won't have any free space, so probably a non-HOT update
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Check if index only scan works correctly
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab1;
+
+------------------
+
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE a = 1;
+
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE b = 701;
+
+VACUUM updtst_tab2;
+EXPLAIN (costs off) SELECT b FROM updtst_tab2 WHERE b = 701;
+SELECT b FROM updtst_tab2 WHERE b = 701;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab2;
+------------------
+
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+SELECT * FROM updtst_tab3 WHERE a = 1;
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+SET enable_seqscan = false;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+SELECT * FROM updtst_tab3 WHERE a = 2;
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab3;
+------------------
+
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where a = 'test';
+select *, ctid from test_warm where a = 'TEST';
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+DROP TABLE test_warm;
+
+--- Test with toast data types
+
+CREATE TABLE test_toast_warm (a int unique, b text, c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+
+-- insert a large enough value to cause index datum compression
+INSERT INTO test_toast_warm VALUES (1, repeat('a', 600), 100);
+INSERT INTO test_toast_warm VALUES (2, repeat('b', 2), 100);
+INSERT INTO test_toast_warm VALUES (3, repeat('c', 4), 100);
+INSERT INTO test_toast_warm VALUES (4, repeat('d', 63), 100);
+INSERT INTO test_toast_warm VALUES (5, repeat('e', 126), 100);
+INSERT INTO test_toast_warm VALUES (6, repeat('f', 127), 100);
+INSERT INTO test_toast_warm VALUES (7, repeat('g', 128), 100);
+INSERT INTO test_toast_warm VALUES (8, repeat('h', 3200), 100);
+
+UPDATE test_toast_warm SET b = repeat('q', 600) WHERE a = 1;
+UPDATE test_toast_warm SET b = repeat('r', 2) WHERE a = 2;
+UPDATE test_toast_warm SET b = repeat('s', 4) WHERE a = 3;
+UPDATE test_toast_warm SET b = repeat('t', 63) WHERE a = 4;
+UPDATE test_toast_warm SET b = repeat('u', 126) WHERE a = 5;
+UPDATE test_toast_warm SET b = repeat('v', 127) WHERE a = 6;
+UPDATE test_toast_warm SET b = repeat('w', 128) WHERE a = 7;
+UPDATE test_toast_warm SET b = repeat('x', 3200) WHERE a = 8;
+
+
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+
+DROP TABLE test_toast_warm;
+
+-- Test with numeric data type
+
+CREATE TABLE test_toast_warm (a int unique, b numeric(10,2), c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+
+INSERT INTO test_toast_warm VALUES (1, 10.2, 100);
+INSERT INTO test_toast_warm VALUES (2, 11.22, 100);
+INSERT INTO test_toast_warm VALUES (3, 12.222, 100);
+INSERT INTO test_toast_warm VALUES (4, 13.20, 100);
+INSERT INTO test_toast_warm VALUES (5, 14.201, 100);
+
+UPDATE test_toast_warm SET b = 100.2 WHERE a = 1;
+UPDATE test_toast_warm SET b = 101.22 WHERE a = 2;
+UPDATE test_toast_warm SET b = 102.222 WHERE a = 3;
+UPDATE test_toast_warm SET b = 103.20 WHERE a = 4;
+UPDATE test_toast_warm SET b = 104.201 WHERE a = 5;
+
+SELECT * FROM test_toast_warm;
+
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+
+DROP TABLE test_toast_warm;
+
+-- Toasted heap attributes
+CREATE TABLE toasttest(descr text , cnt int DEFAULT 0, f1 text, f2 text);
+CREATE INDEX testindx1 ON toasttest(descr);
+CREATE INDEX testindx2 ON toasttest(f1);
+
+INSERT INTO toasttest(descr, f1, f2) VALUES('two-compressed', repeat('1234567890',1000), repeat('1234567890',1000));
+INSERT INTO toasttest(descr, f1, f2) VALUES('two-toasted', repeat('1234567890',20000), repeat('1234567890',50000));
+INSERT INTO toasttest(descr, f1, f2) VALUES('one-compressed,one-toasted', repeat('1234567890',1000), repeat('1234567890',50000));
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+
+-- UPDATE f1 by doing string manipulation, but the updated value remains the
+-- same as the old value
+UPDATE toasttest SET cnt = cnt +1, f1 = trim(leading '-' from '-'||f1) RETURNING substring(toasttest::text, 1, 200);
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+
+-- UPDATE f1 for real this time
+UPDATE toasttest SET cnt = cnt +1, f1 = '-'||f1 RETURNING substring(toasttest::text, 1, 200);
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+
+-- UPDATE f1 from toasted to compressed
+UPDATE toasttest SET cnt = cnt +1, f1 = repeat('1234567890',1000) WHERE descr = 'two-toasted';
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+
+-- UPDATE f1 from compressed to toasted
+UPDATE toasttest SET cnt = cnt +1, f1 = repeat('1234567890',2000) WHERE descr = 'one-compressed,one-toasted';
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+
+DROP TABLE toasttest;
+
+-- Test enable_warm reloption
+CREATE TABLE testrelopt (a int unique, b int, c int) WITH (enable_warm = true);
+ALTER TABLE testrelopt SET (enable_warm = true);
+ALTER TABLE testrelopt RESET (enable_warm);
+-- should fail since we don't allow turning WARM off
+ALTER TABLE testrelopt SET (enable_warm = false);
+DROP TABLE testrelopt;
+
+CREATE TABLE testrelopt (a int unique, b int, c int) WITH (enable_warm = false);
+-- should be ok since the default is ON and we support turning WARM ON
+ALTER TABLE testrelopt RESET (enable_warm);
+ALTER TABLE testrelopt SET (enable_warm = true);
+-- should fail since we don't allow turning WARM off
+ALTER TABLE testrelopt SET (enable_warm = false);
+DROP TABLE testrelopt;
-- 
2.9.3 (Apple Git-75)

Attachment: 0004-Provide-control-knobs-to-decide-when-to-do-heap-_v25.patch (application/octet-stream)
From 42d6a84db173ed5bbf763f8e80dd6416e6b1178f Mon Sep 17 00:00:00 2001
From: Pavan Deolasee <pavan.deolasee@gmail.com>
Date: Wed, 29 Mar 2017 11:16:29 +0530
Subject: [PATCH 4/4] Provide control knobs to decide when to do heap and index
 WARM cleanup.

We provide two knobs to control maintenance activity on WARM. A GUC,
autovacuum_warm_cleanup_scale_factor, can be set to trigger WARM cleanup.
Similarly, a GUC, autovacuum_warm_cleanup_index_scale_factor, can be set to
determine when to do index cleanup. Note that in the current design VACUUM
needs two index scans to remove a WARM index pointer, hence we want to do that
work only when it makes sense (i.e. the index has a significant number of WARM
pointers).

In addition, the VACUUM command is enhanced to accept another parameter,
WARMCLEAN; if specified, only WARM cleanup will be carried out.
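
The scale-factor thresholds described above can be sketched as a simple
decision rule. This is a minimal illustration only, assuming the heap-level
factor is a fraction of reltuples and the index-level factor is a fraction of
total WARM chains; the function names and exact comparison are hypothetical,
not the patch's actual C code:

```python
def needs_warm_cleanup(warm_chains, reltuples, cleanup_scale_factor):
    # Heap-level WARM cleanup triggers once the number of WARM chains
    # exceeds the configured fraction of the table's tuple count.
    return warm_chains > cleanup_scale_factor * reltuples

def needs_index_warm_cleanup(index_warm_pointers, warm_chains,
                             index_scale_factor):
    # Removing a WARM index pointer needs two index scans, so clean an
    # index only when it holds a significant number of WARM pointers
    # relative to the total WARM chains in the heap.
    return index_warm_pointers > index_scale_factor * warm_chains

# e.g. a 100000-tuple table with scale factor 0.2 cleans up
# only after more than 20000 WARM chains have accumulated
print(needs_warm_cleanup(25000, 100000, 0.2))   # True
print(needs_warm_cleanup(10000, 100000, 0.2))   # False
```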
---
 src/backend/access/common/reloptions.c |  22 +++
 src/backend/catalog/system_views.sql   |   1 +
 src/backend/commands/analyze.c         |  60 +++++--
 src/backend/commands/vacuum.c          |   2 +
 src/backend/commands/vacuumlazy.c      | 320 +++++++++++++++++++++++++--------
 src/backend/parser/gram.y              |  26 ++-
 src/backend/postmaster/autovacuum.c    |  58 +++++-
 src/backend/postmaster/pgstat.c        |  50 +++++-
 src/backend/utils/adt/pgstatfuncs.c    |  15 ++
 src/backend/utils/init/globals.c       |   3 +
 src/backend/utils/misc/guc.c           |  30 ++++
 src/include/catalog/pg_proc.h          |   2 +
 src/include/commands/vacuum.h          |   2 +
 src/include/foreign/fdwapi.h           |   3 +-
 src/include/miscadmin.h                |   1 +
 src/include/nodes/parsenodes.h         |   3 +-
 src/include/parser/kwlist.h            |   1 +
 src/include/pgstat.h                   |  11 +-
 src/include/postmaster/autovacuum.h    |   2 +
 src/include/utils/guc_tables.h         |   1 +
 src/include/utils/rel.h                |   2 +
 src/test/regress/expected/rules.out    |   3 +
 src/test/regress/expected/warm.out     |  59 ++++++
 src/test/regress/sql/warm.sql          |  47 +++++
 24 files changed, 614 insertions(+), 110 deletions(-)

diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index 6b5534f..4a80342 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -356,6 +356,24 @@ static relopt_real realRelOpts[] =
 	},
 	{
 		{
+			"autovacuum_warmcleanup_scale_factor",
+			"Number of WARM chains prior to WARM cleanup as a fraction of reltuples",
+			RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
+			ShareUpdateExclusiveLock
+		},
+		-1, 0.0, 100.0
+	},
+	{
+		{
+			"autovacuum_warmcleanup_index_scale_factor",
+			"Number of WARM pointers in an index prior to WARM cleanup as a fraction of total WARM chains",
+			RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
+			ShareUpdateExclusiveLock
+		},
+		-1, 0.0, 100.0
+	},
+	{
+		{
 			"autovacuum_analyze_scale_factor",
 			"Number of tuple inserts, updates or deletes prior to analyze as a fraction of reltuples",
 			RELOPT_KIND_HEAP,
@@ -1357,6 +1375,10 @@ default_reloptions(Datum reloptions, bool validate, relopt_kind kind)
 		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, vacuum_scale_factor)},
 		{"autovacuum_analyze_scale_factor", RELOPT_TYPE_REAL,
 		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, analyze_scale_factor)},
+		{"autovacuum_warmcleanup_scale_factor", RELOPT_TYPE_REAL,
+		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, warmcleanup_scale_factor)},
+		{"autovacuum_warmcleanup_index_scale_factor", RELOPT_TYPE_REAL,
+		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, warmcleanup_index_scale)},
 		{"user_catalog_table", RELOPT_TYPE_BOOL,
 		offsetof(StdRdOptions, user_catalog_table)},
 		{"parallel_workers", RELOPT_TYPE_INT,
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 4ef964f..363fdf0 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -533,6 +533,7 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_tuples_warm_updated(C.oid) AS n_tup_warm_upd,
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
+            pg_stat_get_warm_chains(C.oid) AS n_warm_chains,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
             pg_stat_get_last_vacuum_time(C.oid) as last_vacuum,
             pg_stat_get_last_autovacuum_time(C.oid) as last_autovacuum,
diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index 404acb2..6c4fc4e 100644
--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -93,7 +93,8 @@ static VacAttrStats *examine_attribute(Relation onerel, int attnum,
 				  Node *index_expr);
 static int acquire_sample_rows(Relation onerel, int elevel,
 					HeapTuple *rows, int targrows,
-					double *totalrows, double *totaldeadrows);
+					double *totalrows, double *totaldeadrows,
+					double *totalwarmchains);
 static int	compare_rows(const void *a, const void *b);
 static int acquire_inherited_sample_rows(Relation onerel, int elevel,
 							  HeapTuple *rows, int targrows,
@@ -320,7 +321,8 @@ do_analyze_rel(Relation onerel, int options, VacuumParams *params,
 	int			targrows,
 				numrows;
 	double		totalrows,
-				totaldeadrows;
+				totaldeadrows,
+				totalwarmchains;
 	HeapTuple  *rows;
 	PGRUsage	ru0;
 	TimestampTz starttime = 0;
@@ -501,7 +503,8 @@ do_analyze_rel(Relation onerel, int options, VacuumParams *params,
 	else
 		numrows = (*acquirefunc) (onerel, elevel,
 								  rows, targrows,
-								  &totalrows, &totaldeadrows);
+								  &totalrows, &totaldeadrows,
+								  &totalwarmchains);
 
 	/*
 	 * Compute the statistics.  Temporary results during the calculations for
@@ -631,7 +634,7 @@ do_analyze_rel(Relation onerel, int options, VacuumParams *params,
 	 */
 	if (!inh)
 		pgstat_report_analyze(onerel, totalrows, totaldeadrows,
-							  (va_cols == NIL));
+							  totalwarmchains, (va_cols == NIL));
 
 	/* If this isn't part of VACUUM ANALYZE, let index AMs do cleanup */
 	if (!(options & VACOPT_VACUUM))
@@ -991,12 +994,14 @@ examine_attribute(Relation onerel, int attnum, Node *index_expr)
 static int
 acquire_sample_rows(Relation onerel, int elevel,
 					HeapTuple *rows, int targrows,
-					double *totalrows, double *totaldeadrows)
+					double *totalrows, double *totaldeadrows,
+					double *totalwarmchains)
 {
 	int			numrows = 0;	/* # rows now in reservoir */
 	double		samplerows = 0; /* total # rows collected */
 	double		liverows = 0;	/* # live rows seen */
 	double		deadrows = 0;	/* # dead rows seen */
+	double		warmchains = 0;
 	double		rowstoskip = -1;	/* -1 means not set yet */
 	BlockNumber totalblocks;
 	TransactionId OldestXmin;
@@ -1023,9 +1028,14 @@ acquire_sample_rows(Relation onerel, int elevel,
 		Page		targpage;
 		OffsetNumber targoffset,
 					maxoffset;
+		bool		marked[MaxHeapTuplesPerPage];
+		OffsetNumber root_offsets[MaxHeapTuplesPerPage];
 
 		vacuum_delay_point();
 
+		/* Track which root line pointers are already counted. */
+		memset(marked, 0, sizeof (marked));
+
 		/*
 		 * We must maintain a pin on the target page's buffer to ensure that
 		 * the maxoffset value stays good (else concurrent VACUUM might delete
@@ -1041,6 +1051,9 @@ acquire_sample_rows(Relation onerel, int elevel,
 		targpage = BufferGetPage(targbuffer);
 		maxoffset = PageGetMaxOffsetNumber(targpage);
 
+		/* Get all root line pointers first */
+		heap_get_root_tuples(targpage, root_offsets);
+
 		/* Inner loop over all tuples on the selected page */
 		for (targoffset = FirstOffsetNumber; targoffset <= maxoffset; targoffset++)
 		{
@@ -1069,6 +1082,22 @@ acquire_sample_rows(Relation onerel, int elevel,
 			targtuple.t_data = (HeapTupleHeader) PageGetItem(targpage, itemid);
 			targtuple.t_len = ItemIdGetLength(itemid);
 
+			/*
+			 * If this is a WARM-updated tuple, check if we have already seen
+			 * the root line pointer. If not, count this as a WARM chain. This
+			 * ensures that we count every WARM chain just once, irrespective
+			 * of how many tuples exist in the chain.
+			 */
+			if (HeapTupleHeaderIsWarmUpdated(targtuple.t_data))
+			{
+				OffsetNumber root_offnum = root_offsets[targoffset - 1];
+				if (!marked[root_offnum - 1])
+				{
+					warmchains += 1;
+					marked[root_offnum - 1] = true;
+				}
+			}
+
 			switch (HeapTupleSatisfiesVacuum(&targtuple,
 											 OldestXmin,
 											 targbuffer))
@@ -1200,18 +1229,24 @@ acquire_sample_rows(Relation onerel, int elevel,
 
 	/*
 	 * Estimate total numbers of rows in relation.  For live rows, use
-	 * vac_estimate_reltuples; for dead rows, we have no source of old
-	 * information, so we have to assume the density is the same in unseen
-	 * pages as in the pages we scanned.
+	 * vac_estimate_reltuples; for dead rows and WARM chains, we have no source
+	 * of old information, so we have to assume the density is the same in
+	 * unseen pages as in the pages we scanned.
 	 */
 	*totalrows = vac_estimate_reltuples(onerel, true,
 										totalblocks,
 										bs.m,
 										liverows);
 	if (bs.m > 0)
+	{
 		*totaldeadrows = floor((deadrows / bs.m) * totalblocks + 0.5);
+		*totalwarmchains = floor((warmchains / bs.m) * totalblocks + 0.5);
+	}
 	else
+	{
 		*totaldeadrows = 0.0;
+		*totalwarmchains = 0.0;
+	}
 
 	/*
 	 * Emit some interesting relation info
@@ -1219,11 +1254,13 @@ acquire_sample_rows(Relation onerel, int elevel,
 	ereport(elevel,
 			(errmsg("\"%s\": scanned %d of %u pages, "
 					"containing %.0f live rows and %.0f dead rows; "
-					"%d rows in sample, %.0f estimated total rows",
+					"%d rows in sample, %.0f estimated total rows; "
+					"%.0f estimated warm chains",
 					RelationGetRelationName(onerel),
 					bs.m, totalblocks,
 					liverows, deadrows,
-					numrows, *totalrows)));
+					numrows, *totalrows,
+					*totalwarmchains)));
 
 	return numrows;
 }
@@ -1428,11 +1465,12 @@ acquire_inherited_sample_rows(Relation onerel, int elevel,
 				int			childrows;
 				double		trows,
 							tdrows;
+				double		twarmchains;
 
 				/* Fetch a random sample of the child's rows */
 				childrows = (*acquirefunc) (childrel, elevel,
 											rows + numrows, childtargrows,
-											&trows, &tdrows);
+											&trows, &tdrows, &twarmchains);
 
 				/* We may need to convert from child's rowtype to parent's */
 				if (childrows > 0 &&
diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index 9fbb0eb..52a7838 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -103,6 +103,7 @@ ExecVacuum(VacuumStmt *vacstmt, bool isTopLevel)
 		params.freeze_table_age = 0;
 		params.multixact_freeze_min_age = 0;
 		params.multixact_freeze_table_age = 0;
+		params.warmcleanup_index_scale = -1;
 	}
 	else
 	{
@@ -110,6 +111,7 @@ ExecVacuum(VacuumStmt *vacstmt, bool isTopLevel)
 		params.freeze_table_age = -1;
 		params.multixact_freeze_min_age = -1;
 		params.multixact_freeze_table_age = -1;
+		params.warmcleanup_index_scale = -1;
 	}
 
 	/* user-invoked vacuum is never "for wraparound" */
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 781eeff..3c662ef 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -156,18 +156,23 @@ typedef struct LVRelStats
 	double		tuples_deleted;
 	BlockNumber nonempty_pages; /* actually, last nonempty page + 1 */
 
+	int			maxtuples;		/* max # of entries the work area can hold */
+	Size		work_area_size;	/* working area size */
+	char		*work_area;		/* working area for storing dead tuples and
+								 * warm chains */
 	/* List of candidate WARM chains that can be converted into HOT chains */
-	/* NB: this list is ordered by TID of the root pointers */
+	/*
+	 * NB: this list grows from the end of the work area towards its start
+	 * and is ordered by TID of the root pointers
+	 */
 	int				num_warm_chains;	/* current # of entries */
-	int				max_warm_chains;	/* # slots allocated in array */
 	LVWarmChain 	*warm_chains;		/* array of LVWarmChain */
 	double			num_non_convertible_warm_chains;
-
 	/* List of TIDs of tuples we intend to delete */
 	/* NB: this list is ordered by TID address */
 	int			num_dead_tuples;	/* current # of entries */
-	int			max_dead_tuples;	/* # slots allocated in array */
 	ItemPointer dead_tuples;	/* array of ItemPointerData */
+
 	int			num_index_scans;
 	TransactionId latestRemovedXid;
 	bool		lock_waiter_detected;
@@ -187,11 +192,12 @@ static BufferAccessStrategy vac_strategy;
 /* non-export function prototypes */
 static void lazy_scan_heap(Relation onerel, int options,
 			   LVRelStats *vacrelstats, Relation *Irel, int nindexes,
-			   bool aggressive);
+			   bool aggressive, double warmcleanup_index_scale);
 static void lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats);
 static bool lazy_check_needs_freeze(Buffer buf, bool *hastup);
 static void lazy_vacuum_index(Relation indrel,
 				  bool clear_warm,
+				  double warmcleanup_index_scale,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats);
 static void lazy_cleanup_index(Relation indrel,
@@ -207,7 +213,8 @@ static bool should_attempt_truncation(LVRelStats *vacrelstats);
 static void lazy_truncate_heap(Relation onerel, LVRelStats *vacrelstats);
 static BlockNumber count_nondeletable_pages(Relation onerel,
 						 LVRelStats *vacrelstats);
-static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks);
+static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks,
+					   bool dowarmcleanup);
 static void lazy_record_dead_tuple(LVRelStats *vacrelstats,
 					   ItemPointer itemptr);
 static void lazy_record_warm_chain(LVRelStats *vacrelstats,
@@ -283,6 +290,9 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 						  &OldestXmin, &FreezeLimit, &xidFullScanLimit,
 						  &MultiXactCutoff, &mxactFullScanLimit);
 
+	/* Use default if the caller hasn't specified any value */
+	if (params->warmcleanup_index_scale == -1)
+		params->warmcleanup_index_scale = VacuumWarmCleanupIndexScale;
 	/*
 	 * We request an aggressive scan if the table's frozen Xid is now older
 	 * than or equal to the requested Xid full-table scan limit; or if the
@@ -309,7 +319,8 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 	vacrelstats->hasindex = (nindexes > 0);
 
 	/* Do the vacuuming */
-	lazy_scan_heap(onerel, options, vacrelstats, Irel, nindexes, aggressive);
+	lazy_scan_heap(onerel, options, vacrelstats, Irel, nindexes, aggressive,
+			params->warmcleanup_index_scale);
 
 	/* Done with indexes */
 	vac_close_indexes(nindexes, Irel, NoLock);
@@ -396,7 +407,8 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 	pgstat_report_vacuum(RelationGetRelid(onerel),
 						 onerel->rd_rel->relisshared,
 						 new_live_tuples,
-						 vacrelstats->new_dead_tuples);
+						 vacrelstats->new_dead_tuples,
+						 vacrelstats->num_non_convertible_warm_chains);
 	pgstat_progress_end_command();
 
 	/* and log the action if appropriate */
@@ -507,10 +519,19 @@ vacuum_log_cleanup_info(Relation rel, LVRelStats *vacrelstats)
  *		If there are no indexes then we can reclaim line pointers on the fly;
  *		dead line pointers need only be retained until all index pointers that
  *		reference them have been killed.
+ *
+ *		warmcleanup_index_scale specifies a threshold number of WARM pointers
+ *		in an index, expressed as a fraction of total candidate WARM chains.
+ *		If we find fewer WARM pointers in an index than that fraction, we
+ *		skip cleanup of that index. If WARM cleanup is skipped for any one
+ *		index, the WARM chains can't be cleared in the heap and no further
+ *		WARM updates are possible to such chains. Such chains are also not
+ *		considered for WARM cleanup in other indexes.
  */
 static void
 lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
-			   Relation *Irel, int nindexes, bool aggressive)
+			   Relation *Irel, int nindexes, bool aggressive,
+			   double warmcleanup_index_scale)
 {
 	BlockNumber nblocks,
 				blkno;
@@ -536,6 +557,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		PROGRESS_VACUUM_MAX_DEAD_TUPLES
 	};
 	int64		initprog_val[3];
+	bool		dowarmcleanup = ((options & VACOPT_WARM_CLEANUP) != 0);
 
 	pg_rusage_init(&ru0);
 
@@ -558,13 +580,13 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 	vacrelstats->nonempty_pages = 0;
 	vacrelstats->latestRemovedXid = InvalidTransactionId;
 
-	lazy_space_alloc(vacrelstats, nblocks);
+	lazy_space_alloc(vacrelstats, nblocks, dowarmcleanup);
 	frozen = palloc(sizeof(xl_heap_freeze_tuple) * MaxHeapTuplesPerPage);
 
 	/* Report that we're scanning the heap, advertising total # of blocks */
 	initprog_val[0] = PROGRESS_VACUUM_PHASE_SCAN_HEAP;
 	initprog_val[1] = nblocks;
-	initprog_val[2] = vacrelstats->max_dead_tuples;
+	initprog_val[2] = vacrelstats->maxtuples;
 	pgstat_progress_update_multi_param(3, initprog_index, initprog_val);
 
 	/*
@@ -656,6 +678,11 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		bool		all_frozen = true;	/* provided all_visible is also true */
 		bool		has_dead_tuples;
 		TransactionId visibility_cutoff_xid = InvalidTransactionId;
+		char		*end_deads;
+		char		*end_warms;
+		Size		free_work_area;
+		int			avail_dead_tuples;
+		int			avail_warm_chains;
 
 		/* see note above about forcing scanning of last page */
 #define FORCE_CHECK_PAGE() \
@@ -740,13 +767,39 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		vacuum_delay_point();
 
 		/*
+		 * The dead tuples are stored starting from the start of the work
+		 * area and growing downwards. The candidate warm chains are stored
+		 * starting from the bottom on the work area and growing upwards. Once
+		 * the difference between these two segments is too small to accomodate
+		 * potentially all tuples in the current page, we stop and do one round
+		 * of index cleanup.
+		 */
+		end_deads = (char *)(vacrelstats->dead_tuples + vacrelstats->num_dead_tuples);
+
+		/*
+		 * If we are not doing WARM cleanup, then the entire work area is used
+		 * by the dead tuples.
+		 */
+		if (vacrelstats->warm_chains)
+		{
+			end_warms = (char *)(vacrelstats->warm_chains - vacrelstats->num_warm_chains);
+			free_work_area = end_warms - end_deads;
+			avail_warm_chains = (free_work_area / sizeof (LVWarmChain));
+		}
+		else
+		{
+			free_work_area = vacrelstats->work_area +
+				vacrelstats->work_area_size - end_deads;
+			avail_warm_chains = 0;
+		}
+		avail_dead_tuples = (free_work_area / sizeof (ItemPointerData));
+
+		/*
 		 * If we are close to overrunning the available space for dead-tuple
 		 * TIDs, pause and do a cycle of vacuuming before we tackle this page.
 		 */
-		if (((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
-			vacrelstats->num_dead_tuples > 0) ||
-			((vacrelstats->max_warm_chains - vacrelstats->num_warm_chains) < MaxHeapTuplesPerPage &&
-			 vacrelstats->num_warm_chains > 0))
+		if ((avail_dead_tuples < MaxHeapTuplesPerPage && vacrelstats->num_dead_tuples > 0) ||
+			(avail_warm_chains < MaxHeapTuplesPerPage && vacrelstats->num_warm_chains > 0))
 		{
 			const int	hvp_index[] = {
 				PROGRESS_VACUUM_PHASE,
@@ -776,7 +829,8 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			/* Remove index entries */
 			for (i = 0; i < nindexes; i++)
 				lazy_vacuum_index(Irel[i],
-								  (vacrelstats->num_warm_chains > 0),
+								  dowarmcleanup && (vacrelstats->num_warm_chains > 0),
+								  warmcleanup_index_scale,
 								  &indstats[i],
 								  vacrelstats);
 
@@ -800,8 +854,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			 */
 			vacrelstats->num_dead_tuples = 0;
 			vacrelstats->num_warm_chains = 0;
-			memset(vacrelstats->warm_chains, 0,
-					vacrelstats->max_warm_chains * sizeof (LVWarmChain));
+			memset(vacrelstats->work_area, 0, vacrelstats->work_area_size);
 			vacrelstats->num_index_scans++;
 
 			/* Report that we are once again scanning the heap */
@@ -1413,7 +1466,8 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		/* Remove index entries */
 		for (i = 0; i < nindexes; i++)
 			lazy_vacuum_index(Irel[i],
-							  (vacrelstats->num_warm_chains > 0),
+							  dowarmcleanup && (vacrelstats->num_warm_chains > 0),
+							  warmcleanup_index_scale,
 							  &indstats[i],
 							  vacrelstats);
 
@@ -1518,9 +1572,12 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 		vacuum_delay_point();
 
 		tblk = chainblk = InvalidBlockNumber;
-		if (chainindex < vacrelstats->num_warm_chains)
-			chainblk =
-				ItemPointerGetBlockNumber(&(vacrelstats->warm_chains[chainindex].chain_tid));
+		if (vacrelstats->warm_chains &&
+			chainindex < vacrelstats->num_warm_chains)
+		{
+			LVWarmChain *chain = vacrelstats->warm_chains - (chainindex + 1);
+			chainblk = ItemPointerGetBlockNumber(&chain->chain_tid);
+		}
 
 		if (tupindex < vacrelstats->num_dead_tuples)
 			tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
@@ -1618,7 +1675,8 @@ lazy_warmclear_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 		BlockNumber tblk;
 		LVWarmChain	*chain;
 
-		chain = &vacrelstats->warm_chains[chainindex];
+		/* The warm chains are indexed back from the end of the work area */
+		chain = vacrelstats->warm_chains - (chainindex + 1);
 
 		tblk = ItemPointerGetBlockNumber(&chain->chain_tid);
 		if (tblk != blkno)
@@ -1852,9 +1910,11 @@ static void
 lazy_reset_warm_pointer_count(LVRelStats *vacrelstats)
 {
 	int i;
-	for (i = 0; i < vacrelstats->num_warm_chains; i++)
+
+	/* Walk the entries back from the end of the work area */
+	for (i = 1; i <= vacrelstats->num_warm_chains; i++)
 	{
-		LVWarmChain *chain = &vacrelstats->warm_chains[i];
+		LVWarmChain *chain = (vacrelstats->warm_chains - i);
 		chain->num_clear_pointers = chain->num_warm_pointers = 0;
 	}
 }
@@ -1868,6 +1928,7 @@ lazy_reset_warm_pointer_count(LVRelStats *vacrelstats)
 static void
 lazy_vacuum_index(Relation indrel,
 				  bool clear_warm,
+				  double warmcleanup_index_scale,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats)
 {
@@ -1932,25 +1993,57 @@ lazy_vacuum_index(Relation indrel,
 						(*stats)->warm_pointers_removed,
 						(*stats)->clear_pointers_removed)));
 
-		(*stats)->num_warm_pointers = 0;
-		(*stats)->num_clear_pointers = 0;
-		(*stats)->warm_pointers_removed = 0;
-		(*stats)->clear_pointers_removed = 0;
-		(*stats)->pointers_cleared = 0;
+		/*
+		 * If the number of WARM pointers found in the index is more than the
+		 * configured fraction of total candidate WARM chains, then do the
+		 * second index scan to clean up WARM chains.
+		 *
+		 * Otherwise we must set these WARM chains as non-convertible chains.
+		 */
+		if ((*stats)->num_warm_pointers >
+				((double)vacrelstats->num_warm_chains * warmcleanup_index_scale))
+		{
+			(*stats)->num_warm_pointers = 0;
+			(*stats)->num_clear_pointers = 0;
+			(*stats)->warm_pointers_removed = 0;
+			(*stats)->clear_pointers_removed = 0;
+			(*stats)->pointers_cleared = 0;
+
+			*stats = index_bulk_delete(&ivinfo, *stats,
+					lazy_indexvac_phase2, (void *) vacrelstats);
+			ereport(elevel,
+					(errmsg("scanned index \"%s\" to convert WARM pointers, found "
+							"%.0f WARM pointers, %.0f CLEAR pointers, removed "
+							"%.0f WARM pointers, removed %.0f CLEAR pointers, "
+							"cleared %.0f WARM pointers",
+							RelationGetRelationName(indrel),
+							(*stats)->num_warm_pointers,
+							(*stats)->num_clear_pointers,
+							(*stats)->warm_pointers_removed,
+							(*stats)->clear_pointers_removed,
+							(*stats)->pointers_cleared)));
+		}
+		else
+		{
+			int ii;
 
-		*stats = index_bulk_delete(&ivinfo, *stats,
-				lazy_indexvac_phase2, (void *) vacrelstats);
-		ereport(elevel,
-				(errmsg("scanned index \"%s\" to convert WARM pointers, found "
-						"%0.f WARM pointers, %0.f CLEAR pointers, removed "
-						"%0.f WARM pointers, removed %0.f CLEAR pointers, "
-						"cleared %0.f WARM pointers",
-						RelationGetRelationName(indrel),
-						(*stats)->num_warm_pointers,
-						(*stats)->num_clear_pointers,
-						(*stats)->warm_pointers_removed,
-						(*stats)->clear_pointers_removed,
-						(*stats)->pointers_cleared)));
+			/*
+			 * All chains skipped by this index are marked non-convertible.
+			 *
+			 * Walk the entries back from the end of the work area.
+			 */
+			for (ii = 1; ii <= vacrelstats->num_warm_chains; ii++)
+			{
+				LVWarmChain *chain = vacrelstats->warm_chains - ii;
+				if (chain->num_warm_pointers > 0 ||
+					chain->num_clear_pointers > 1)
+				{
+					chain->keep_warm_chain = 1;
+					vacrelstats->num_non_convertible_warm_chains++;
+				}
+			}
+
+		}
 	}
 	else
 	{
@@ -2328,7 +2421,8 @@ count_nondeletable_pages(Relation onerel, LVRelStats *vacrelstats)
  * See the comments at the head of this file for rationale.
  */
 static void
-lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
+lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks,
+				 bool dowarmcleanup)
 {
 	long		maxtuples;
 	int			vac_work_mem = IsAutoVacuumWorkerProcess() &&
@@ -2337,11 +2431,16 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 
 	if (vacrelstats->hasindex)
 	{
+		/*
+		 * If we're not doing WARM cleanup then the entire memory is available
+		 * for tracking dead tuples. Otherwise it gets split between tracking
+		 * dead tuples and tracking WARM chains.
+		 */
 		maxtuples = (vac_work_mem * 1024L) / (sizeof(ItemPointerData) +
-				sizeof(LVWarmChain));
+				(dowarmcleanup ? sizeof(LVWarmChain) : 0));
 		maxtuples = Min(maxtuples, INT_MAX);
 		maxtuples = Min(maxtuples, MaxAllocSize / (sizeof(ItemPointerData) +
-					sizeof(LVWarmChain)));
+					(dowarmcleanup ? sizeof(LVWarmChain) : 0)));
 
 		/* curious coding here to ensure the multiplication can't overflow */
 		if ((BlockNumber) (maxtuples / LAZY_ALLOC_TUPLES) > relblocks)
@@ -2355,21 +2454,29 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 		maxtuples = MaxHeapTuplesPerPage;
 	}
 
-	vacrelstats->num_dead_tuples = 0;
-	vacrelstats->max_dead_tuples = (int) maxtuples;
-	vacrelstats->dead_tuples = (ItemPointer)
-		palloc(maxtuples * sizeof(ItemPointerData));
-
-	/*
-	 * XXX Cheat for now and allocate the same size array for tracking warm
-	 * chains. maxtuples must have been already adjusted above to ensure we
-	 * don't cross vac_work_mem.
+	/*
+	 * Allocate the work area and set dead_tuples and warm_chains to its
+	 * start and end respectively; they grow in opposite directions as
+	 * entries are added. Note that if we are not doing WARM cleanup then
+	 * the entire area will only be used for tracking dead tuples.
 	 */
-	vacrelstats->num_warm_chains = 0;
-	vacrelstats->max_warm_chains = (int) maxtuples;
-	vacrelstats->warm_chains = (LVWarmChain *)
-		palloc0(maxtuples * sizeof(LVWarmChain));
+	vacrelstats->work_area_size = maxtuples * (sizeof(ItemPointerData) +
+				(dowarmcleanup ? sizeof(LVWarmChain) : 0));
+	vacrelstats->work_area = (char *) palloc0(vacrelstats->work_area_size);
+	vacrelstats->num_dead_tuples = 0;
+	vacrelstats->dead_tuples = (ItemPointer)vacrelstats->work_area;
+	vacrelstats->maxtuples = maxtuples;
 
+	if (dowarmcleanup)
+	{
+		vacrelstats->num_warm_chains = 0;
+		vacrelstats->warm_chains = (LVWarmChain *)
+			(vacrelstats->work_area + vacrelstats->work_area_size);
+	}
+	else
+	{
+		vacrelstats->warm_chains = NULL;
+	}
 }
 
 /*
@@ -2379,17 +2486,38 @@ static void
 lazy_record_clear_chain(LVRelStats *vacrelstats,
 					   ItemPointer itemptr)
 {
+	char *end_deads, *end_warms;
+	Size free_work_area;
+
+	if (vacrelstats->warm_chains == NULL)
+	{
+		vacrelstats->num_non_convertible_warm_chains++;
+		return;
+	}
+
+	end_deads = (char *) (vacrelstats->dead_tuples +
+					vacrelstats->num_dead_tuples);
+	end_warms = (char *) (vacrelstats->warm_chains -
+					vacrelstats->num_warm_chains);
+	free_work_area = (end_warms - end_deads);
+
+	Assert(end_warms >= end_deads);
 	/*
 	 * The array shouldn't overflow under normal behavior, but perhaps it
 	 * could if we are given a really small maintenance_work_mem. In that
 	 * case, just forget the last few tuples (we'll get 'em next time).
 	 */
-	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	if (free_work_area >= sizeof (LVWarmChain))
 	{
-		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
-		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 0;
+		LVWarmChain *chain;
+
 		vacrelstats->num_warm_chains++;
+		chain = vacrelstats->warm_chains - vacrelstats->num_warm_chains;
+		chain->chain_tid = *itemptr;
+		chain->is_postwarm_chain = 0;
 	}
+	else
+		vacrelstats->num_non_convertible_warm_chains++;
 }
 
 /*
@@ -2399,17 +2527,39 @@ static void
 lazy_record_warm_chain(LVRelStats *vacrelstats,
 					   ItemPointer itemptr)
 {
+	char *end_deads, *end_warms;
+	Size free_work_area;
+
+	if (vacrelstats->warm_chains == NULL)
+	{
+		vacrelstats->num_non_convertible_warm_chains++;
+		return;
+	}
+
+	end_deads = (char *) (vacrelstats->dead_tuples +
+					vacrelstats->num_dead_tuples);
+	end_warms = (char *) (vacrelstats->warm_chains -
+					vacrelstats->num_warm_chains);
+	free_work_area = (end_warms - end_deads);
+
+	Assert(end_warms >= end_deads);
+
 	/*
 	 * The array shouldn't overflow under normal behavior, but perhaps it
 	 * could if we are given a really small maintenance_work_mem. In that
 	 * case, just forget the last few tuples (we'll get 'em next time).
 	 */
-	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	if (free_work_area >= sizeof (LVWarmChain))
 	{
-		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
-		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 1;
+		LVWarmChain *chain;
+
 		vacrelstats->num_warm_chains++;
+		chain = vacrelstats->warm_chains - vacrelstats->num_warm_chains;
+		chain->chain_tid = *itemptr;
+		chain->is_postwarm_chain = 1;
 	}
+	else
+		vacrelstats->num_non_convertible_warm_chains++;
 }
 
 /*
@@ -2419,12 +2569,20 @@ static void
 lazy_record_dead_tuple(LVRelStats *vacrelstats,
 					   ItemPointer itemptr)
 {
+	char *end_deads = (char *) (vacrelstats->dead_tuples +
+			vacrelstats->num_dead_tuples);
+	char *end_warms = vacrelstats->warm_chains ?
+			(char *) (vacrelstats->warm_chains - vacrelstats->num_warm_chains) :
+			vacrelstats->work_area + vacrelstats->work_area_size;
+	Size freespace = (Size) (end_warms - end_deads);
+
+	Assert(end_warms >= end_deads);
 	/*
 	 * The array shouldn't overflow under normal behavior, but perhaps it
 	 * could if we are given a really small maintenance_work_mem. In that
 	 * case, just forget the last few tuples (we'll get 'em next time).
 	 */
-	if (vacrelstats->num_dead_tuples < vacrelstats->max_dead_tuples)
+	if (freespace >= sizeof(ItemPointerData))
 	{
 		vacrelstats->dead_tuples[vacrelstats->num_dead_tuples] = *itemptr;
 		vacrelstats->num_dead_tuples++;
@@ -2477,10 +2635,10 @@ lazy_indexvac_phase1(ItemPointer itemptr, bool is_warm, void *state)
 		return IBDCR_DELETE;
 
 	chain = (LVWarmChain *) bsearch((void *) itemptr,
-								(void *) vacrelstats->warm_chains,
-								vacrelstats->num_warm_chains,
-								sizeof(LVWarmChain),
-								vac_cmp_warm_chain);
+				(void *) (vacrelstats->warm_chains - vacrelstats->num_warm_chains),
+				vacrelstats->num_warm_chains,
+				sizeof(LVWarmChain),
+				vac_cmp_warm_chain);
 	if (chain != NULL)
 	{
 		if (is_warm)
@@ -2500,13 +2658,13 @@ static IndexBulkDeleteCallbackResult
 lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state)
 {
 	LVRelStats		*vacrelstats = (LVRelStats *) state;
-	LVWarmChain	*chain;
+	LVWarmChain		*chain;
 
 	chain = (LVWarmChain *) bsearch((void *) itemptr,
-								(void *) vacrelstats->warm_chains,
-								vacrelstats->num_warm_chains,
-								sizeof(LVWarmChain),
-								vac_cmp_warm_chain);
+				(void *) (vacrelstats->warm_chains - vacrelstats->num_warm_chains),
+				vacrelstats->num_warm_chains,
+				sizeof(LVWarmChain),
+				vac_cmp_warm_chain);
 
 	if (chain != NULL && (chain->keep_warm_chain != 1))
 	{
@@ -2605,6 +2763,7 @@ lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state)
 		 * index pointers.
 		 */
 		chain->keep_warm_chain = 1;
+		vacrelstats->num_non_convertible_warm_chains++;
 		return IBDCR_KEEP;
 	}
 	return IBDCR_KEEP;
@@ -2613,6 +2772,9 @@ lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state)
 /*
  * Comparator routines for use with qsort() and bsearch(). Similar to
  * vac_cmp_itemptr, but right hand argument is LVWarmChain struct pointer.
+ *
+ * The warm_chains array is sorted in descending order hence the return values
+ * are flipped.
  */
 static int
 vac_cmp_warm_chain(const void *left, const void *right)
@@ -2626,17 +2788,17 @@ vac_cmp_warm_chain(const void *left, const void *right)
 	rblk = ItemPointerGetBlockNumber(&((LVWarmChain *) right)->chain_tid);
 
 	if (lblk < rblk)
-		return -1;
-	if (lblk > rblk)
 		return 1;
+	if (lblk > rblk)
+		return -1;
 
 	loff = ItemPointerGetOffsetNumber((ItemPointer) left);
 	roff = ItemPointerGetOffsetNumber(&((LVWarmChain *) right)->chain_tid);
 
 	if (loff < roff)
-		return -1;
-	if (loff > roff)
 		return 1;
+	if (loff > roff)
+		return -1;
 
 	return 0;
 }
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9d53a29..1592220 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -433,7 +433,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	overlay_placing substr_from substr_for
 
 %type <boolean> opt_instead
-%type <boolean> opt_unique opt_concurrently opt_verbose opt_full
+%type <boolean> opt_unique opt_concurrently opt_verbose opt_full opt_warmclean
 %type <boolean> opt_freeze opt_default opt_recheck
 %type <defelt>	opt_binary opt_oids copy_delimiter
 
@@ -684,7 +684,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	VACUUM VALID VALIDATE VALIDATOR VALUE_P VALUES VARCHAR VARIADIC VARYING
 	VERBOSE VERSION_P VIEW VIEWS VOLATILE
 
-	WHEN WHERE WHITESPACE_P WINDOW WITH WITHIN WITHOUT WORK WRAPPER WRITE
+	WARMCLEAN WHEN WHERE WHITESPACE_P WINDOW WITH WITHIN WITHOUT WORK WRAPPER WRITE
 
 	XML_P XMLATTRIBUTES XMLCONCAT XMLELEMENT XMLEXISTS XMLFOREST XMLNAMESPACES
 	XMLPARSE XMLPI XMLROOT XMLSERIALIZE XMLTABLE
@@ -10059,7 +10059,7 @@ cluster_index_specification:
  *
  *****************************************************************************/
 
-VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
+VacuumStmt: VACUUM opt_full opt_freeze opt_verbose opt_warmclean
 				{
 					VacuumStmt *n = makeNode(VacuumStmt);
 					n->options = VACOPT_VACUUM;
@@ -10069,11 +10069,13 @@ VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
 						n->options |= VACOPT_FREEZE;
 					if ($4)
 						n->options |= VACOPT_VERBOSE;
+					if ($5)
+						n->options |= VACOPT_WARM_CLEANUP;
 					n->relation = NULL;
 					n->va_cols = NIL;
 					$$ = (Node *)n;
 				}
-			| VACUUM opt_full opt_freeze opt_verbose qualified_name
+			| VACUUM opt_full opt_freeze opt_verbose opt_warmclean qualified_name
 				{
 					VacuumStmt *n = makeNode(VacuumStmt);
 					n->options = VACOPT_VACUUM;
@@ -10083,13 +10085,15 @@ VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
 						n->options |= VACOPT_FREEZE;
 					if ($4)
 						n->options |= VACOPT_VERBOSE;
-					n->relation = $5;
+					if ($5)
+						n->options |= VACOPT_WARM_CLEANUP;
+					n->relation = $6;
 					n->va_cols = NIL;
 					$$ = (Node *)n;
 				}
-			| VACUUM opt_full opt_freeze opt_verbose AnalyzeStmt
+			| VACUUM opt_full opt_freeze opt_verbose opt_warmclean AnalyzeStmt
 				{
-					VacuumStmt *n = (VacuumStmt *) $5;
+					VacuumStmt *n = (VacuumStmt *) $6;
 					n->options |= VACOPT_VACUUM;
 					if ($2)
 						n->options |= VACOPT_FULL;
@@ -10097,6 +10101,8 @@ VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
 						n->options |= VACOPT_FREEZE;
 					if ($4)
 						n->options |= VACOPT_VERBOSE;
+					if ($5)
+						n->options |= VACOPT_WARM_CLEANUP;
 					$$ = (Node *)n;
 				}
 			| VACUUM '(' vacuum_option_list ')'
@@ -10129,6 +10135,7 @@ vacuum_option_elem:
 			| VERBOSE			{ $$ = VACOPT_VERBOSE; }
 			| FREEZE			{ $$ = VACOPT_FREEZE; }
 			| FULL				{ $$ = VACOPT_FULL; }
+			| WARMCLEAN			{ $$ = VACOPT_WARM_CLEANUP; }
 			| IDENT
 				{
 					if (strcmp($1, "disable_page_skipping") == 0)
@@ -10182,6 +10189,10 @@ opt_freeze: FREEZE									{ $$ = TRUE; }
 			| /*EMPTY*/								{ $$ = FALSE; }
 		;
 
+opt_warmclean: WARMCLEAN							{ $$ = TRUE; }
+			| /*EMPTY*/								{ $$ = FALSE; }
+		;
+
 opt_name_list:
 			'(' name_list ')'						{ $$ = $2; }
 			| /*EMPTY*/								{ $$ = NIL; }
@@ -14886,6 +14897,7 @@ type_func_name_keyword:
 			| SIMILAR
 			| TABLESAMPLE
 			| VERBOSE
+			| WARMCLEAN
 		;
 
 /* Reserved keyword --- these keywords are usable only as a ColLabel.
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 89dd3b3..a157c05 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -117,6 +117,8 @@ int			autovacuum_vac_thresh;
 double		autovacuum_vac_scale;
 int			autovacuum_anl_thresh;
 double		autovacuum_anl_scale;
+double		autovacuum_warmcleanup_scale;
+double		autovacuum_warmcleanup_index_scale;
 int			autovacuum_freeze_max_age;
 int			autovacuum_multixact_freeze_max_age;
 
@@ -338,7 +340,8 @@ static void relation_needs_vacanalyze(Oid relid, AutoVacOpts *relopts,
 						  Form_pg_class classForm,
 						  PgStat_StatTabEntry *tabentry,
 						  int effective_multixact_freeze_max_age,
-						  bool *dovacuum, bool *doanalyze, bool *wraparound);
+						  bool *dovacuum, bool *doanalyze, bool *wraparound,
+						  bool *dowarmcleanup);
 
 static void autovacuum_do_vac_analyze(autovac_table *tab,
 						  BufferAccessStrategy bstrategy);
@@ -2076,6 +2079,7 @@ do_autovacuum(void)
 		bool		dovacuum;
 		bool		doanalyze;
 		bool		wraparound;
+		bool		dowarmcleanup;
 
 		if (classForm->relkind != RELKIND_RELATION &&
 			classForm->relkind != RELKIND_MATVIEW)
@@ -2115,10 +2119,14 @@ do_autovacuum(void)
 		tabentry = get_pgstat_tabentry_relid(relid, classForm->relisshared,
 											 shared, dbentry);
 
-		/* Check if it needs vacuum or analyze */
+		/*
+		 * Check if it needs vacuum or analyze. For vacuum, also check if it
+		 * needs WARM cleanup.
+		 */
 		relation_needs_vacanalyze(relid, relopts, classForm, tabentry,
 								  effective_multixact_freeze_max_age,
-								  &dovacuum, &doanalyze, &wraparound);
+								  &dovacuum, &doanalyze, &wraparound,
+								  &dowarmcleanup);
 
 		/* Relations that need work are added to table_oids */
 		if (dovacuum || doanalyze)
@@ -2171,6 +2179,7 @@ do_autovacuum(void)
 		bool		dovacuum;
 		bool		doanalyze;
 		bool		wraparound;
+		bool		dowarmcleanup;
 
 		/*
 		 * We cannot safely process other backends' temp tables, so skip 'em.
@@ -2201,7 +2210,8 @@ do_autovacuum(void)
 
 		relation_needs_vacanalyze(relid, relopts, classForm, tabentry,
 								  effective_multixact_freeze_max_age,
-								  &dovacuum, &doanalyze, &wraparound);
+								  &dovacuum, &doanalyze, &wraparound,
+								  &dowarmcleanup);
 
 		/* ignore analyze for toast tables */
 		if (dovacuum)
@@ -2792,6 +2802,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 	HeapTuple	classTup;
 	bool		dovacuum;
 	bool		doanalyze;
+	bool		dowarmcleanup;
 	autovac_table *tab = NULL;
 	PgStat_StatTabEntry *tabentry;
 	PgStat_StatDBEntry *shared;
@@ -2833,7 +2844,8 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 
 	relation_needs_vacanalyze(relid, avopts, classForm, tabentry,
 							  effective_multixact_freeze_max_age,
-							  &dovacuum, &doanalyze, &wraparound);
+							  &dovacuum, &doanalyze, &wraparound,
+							  &dowarmcleanup);
 
 	/* ignore ANALYZE for toast tables */
 	if (classForm->relkind == RELKIND_TOASTVALUE)
@@ -2849,6 +2861,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		int			vac_cost_limit;
 		int			vac_cost_delay;
 		int			log_min_duration;
+		double		warmcleanup_index_scale;
 
 		/*
 		 * Calculate the vacuum cost parameters and the freeze ages.  If there
@@ -2895,19 +2908,26 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 			? avopts->multixact_freeze_table_age
 			: default_multixact_freeze_table_age;
 
+		warmcleanup_index_scale = (avopts &&
+								   avopts->warmcleanup_index_scale >= 0)
+			? avopts->warmcleanup_index_scale
+			: autovacuum_warmcleanup_index_scale;
+
 		tab = palloc(sizeof(autovac_table));
 		tab->at_relid = relid;
 		tab->at_sharedrel = classForm->relisshared;
 		tab->at_vacoptions = VACOPT_SKIPTOAST |
 			(dovacuum ? VACOPT_VACUUM : 0) |
 			(doanalyze ? VACOPT_ANALYZE : 0) |
-			(!wraparound ? VACOPT_NOWAIT : 0);
+			(!wraparound ? VACOPT_NOWAIT : 0) |
+			(dowarmcleanup ? VACOPT_WARM_CLEANUP : 0);
 		tab->at_params.freeze_min_age = freeze_min_age;
 		tab->at_params.freeze_table_age = freeze_table_age;
 		tab->at_params.multixact_freeze_min_age = multixact_freeze_min_age;
 		tab->at_params.multixact_freeze_table_age = multixact_freeze_table_age;
 		tab->at_params.is_wraparound = wraparound;
 		tab->at_params.log_min_duration = log_min_duration;
+		tab->at_params.warmcleanup_index_scale = warmcleanup_index_scale;
 		tab->at_vacuum_cost_limit = vac_cost_limit;
 		tab->at_vacuum_cost_delay = vac_cost_delay;
 		tab->at_relname = NULL;
@@ -2974,7 +2994,8 @@ relation_needs_vacanalyze(Oid relid,
  /* output params below */
 						  bool *dovacuum,
 						  bool *doanalyze,
-						  bool *wraparound)
+						  bool *wraparound,
+						  bool *dowarmcleanup)
 {
 	bool		force_vacuum;
 	bool		av_enabled;
@@ -2986,6 +3007,9 @@ relation_needs_vacanalyze(Oid relid,
 	float4		vac_scale_factor,
 				anl_scale_factor;
 
+	/* constant from reloptions or GUC variable */
+	float4		warmcleanup_scale_factor;
+
 	/* thresholds calculated from above constants */
 	float4		vacthresh,
 				anlthresh;
@@ -2994,6 +3018,9 @@ relation_needs_vacanalyze(Oid relid,
 	float4		vactuples,
 				anltuples;
 
+	/* number of WARM chains in the table */
+	float4		warmchains;
+
 	/* freeze parameters */
 	int			freeze_max_age;
 	int			multixact_freeze_max_age;
@@ -3026,6 +3053,11 @@ relation_needs_vacanalyze(Oid relid,
 		? relopts->analyze_threshold
 		: autovacuum_anl_thresh;
 
+	/* Use table specific value or the GUC value */
+	warmcleanup_scale_factor = (relopts && relopts->warmcleanup_scale_factor >= 0)
+		? relopts->warmcleanup_scale_factor
+		: autovacuum_warmcleanup_scale;
+
 	freeze_max_age = (relopts && relopts->freeze_max_age >= 0)
 		? Min(relopts->freeze_max_age, autovacuum_freeze_max_age)
 		: autovacuum_freeze_max_age;
@@ -3073,6 +3105,7 @@ relation_needs_vacanalyze(Oid relid,
 		reltuples = classForm->reltuples;
 		vactuples = tabentry->n_dead_tuples;
 		anltuples = tabentry->changes_since_analyze;
+		warmchains = tabentry->n_warm_chains;
 
 		vacthresh = (float4) vac_base_thresh + vac_scale_factor * reltuples;
 		anlthresh = (float4) anl_base_thresh + anl_scale_factor * reltuples;
@@ -3089,6 +3122,17 @@ relation_needs_vacanalyze(Oid relid,
 		/* Determine if this table needs vacuum or analyze. */
 		*dovacuum = force_vacuum || (vactuples > vacthresh);
 		*doanalyze = (anltuples > anlthresh);
+
+		/*
+		 * If the number of WARM chains in the table exceeds the configured
+		 * fraction, then we also do a WARM cleanup. This only triggers at the
+		 * table level; we then look at each index and clean it up only if its
+		 * WARM pointers exceed the configured index-level scale factor.
+		 * lazy_vacuum_index() deals with that later.
+		 */
+		if (*dovacuum && (warmcleanup_scale_factor * reltuples < warmchains))
+			*dowarmcleanup = true;
 	}
 	else
 	{
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index 52fe4ba..f38ce8a 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -226,9 +226,11 @@ typedef struct TwoPhasePgStatRecord
 	PgStat_Counter tuples_inserted;		/* tuples inserted in xact */
 	PgStat_Counter tuples_updated;		/* tuples updated in xact */
 	PgStat_Counter tuples_deleted;		/* tuples deleted in xact */
+	PgStat_Counter tuples_warm_updated;	/* tuples warm updated in xact */
 	PgStat_Counter inserted_pre_trunc;	/* tuples inserted prior to truncate */
 	PgStat_Counter updated_pre_trunc;	/* tuples updated prior to truncate */
 	PgStat_Counter deleted_pre_trunc;	/* tuples deleted prior to truncate */
+	PgStat_Counter warm_updated_pre_trunc;	/* tuples warm updated prior to truncate */
 	Oid			t_id;			/* table's OID */
 	bool		t_shared;		/* is it a shared catalog? */
 	bool		t_truncated;	/* was the relation truncated? */
@@ -1367,7 +1369,8 @@ pgstat_report_autovac(Oid dboid)
  */
 void
 pgstat_report_vacuum(Oid tableoid, bool shared,
-					 PgStat_Counter livetuples, PgStat_Counter deadtuples)
+					 PgStat_Counter livetuples, PgStat_Counter deadtuples,
+					 PgStat_Counter warmchains)
 {
 	PgStat_MsgVacuum msg;
 
@@ -1381,6 +1384,7 @@ pgstat_report_vacuum(Oid tableoid, bool shared,
 	msg.m_vacuumtime = GetCurrentTimestamp();
 	msg.m_live_tuples = livetuples;
 	msg.m_dead_tuples = deadtuples;
+	msg.m_warm_chains = warmchains;
 	pgstat_send(&msg, sizeof(msg));
 }
 
@@ -1396,7 +1400,7 @@ pgstat_report_vacuum(Oid tableoid, bool shared,
 void
 pgstat_report_analyze(Relation rel,
 					  PgStat_Counter livetuples, PgStat_Counter deadtuples,
-					  bool resetcounter)
+					  PgStat_Counter warmchains, bool resetcounter)
 {
 	PgStat_MsgAnalyze msg;
 
@@ -1421,12 +1425,14 @@ pgstat_report_analyze(Relation rel,
 		{
 			livetuples -= trans->tuples_inserted - trans->tuples_deleted;
 			deadtuples -= trans->tuples_updated + trans->tuples_deleted;
+			warmchains -= trans->tuples_warm_updated;
 		}
 		/* count stuff inserted by already-aborted subxacts, too */
 		deadtuples -= rel->pgstat_info->t_counts.t_delta_dead_tuples;
 		/* Since ANALYZE's counts are estimates, we could have underflowed */
 		livetuples = Max(livetuples, 0);
 		deadtuples = Max(deadtuples, 0);
+		warmchains = Max(warmchains, 0);
 	}
 
 	pgstat_setheader(&msg.m_hdr, PGSTAT_MTYPE_ANALYZE);
@@ -1437,6 +1443,7 @@ pgstat_report_analyze(Relation rel,
 	msg.m_analyzetime = GetCurrentTimestamp();
 	msg.m_live_tuples = livetuples;
 	msg.m_dead_tuples = deadtuples;
+	msg.m_warm_chains = warmchains;
 	pgstat_send(&msg, sizeof(msg));
 }
 
@@ -1907,7 +1914,10 @@ pgstat_count_heap_update(Relation rel, bool hot, bool warm)
 		if (hot)
 			pgstat_info->t_counts.t_tuples_hot_updated++;
 		else if (warm)
+		{
+			pgstat_info->trans->tuples_warm_updated++;
 			pgstat_info->t_counts.t_tuples_warm_updated++;
+		}
 	}
 }
 
@@ -2070,6 +2080,12 @@ AtEOXact_PgStat(bool isCommit)
 				/* update and delete each create a dead tuple */
 				tabstat->t_counts.t_delta_dead_tuples +=
 					trans->tuples_updated + trans->tuples_deleted;
+				/*
+				 * Whether the xact commits or aborts, a WARM update creates a
+				 * WARM chain which needs cleanup.
+				 */
+				tabstat->t_counts.t_delta_warm_chains +=
+					trans->tuples_warm_updated;
 				/* insert, update, delete each count as one change event */
 				tabstat->t_counts.t_changed_tuples +=
 					trans->tuples_inserted + trans->tuples_updated +
@@ -2080,6 +2096,12 @@ AtEOXact_PgStat(bool isCommit)
 				/* inserted tuples are dead, deleted tuples are unaffected */
 				tabstat->t_counts.t_delta_dead_tuples +=
 					trans->tuples_inserted + trans->tuples_updated;
+				/*
+				 * Whether the xact commits or aborts, a WARM update creates a
+				 * WARM chain which needs cleanup.
+				 */
+				tabstat->t_counts.t_delta_warm_chains +=
+					trans->tuples_warm_updated;
 				/* an aborted xact generates no changed_tuple events */
 			}
 			tabstat->trans = NULL;
@@ -2136,12 +2158,16 @@ AtEOSubXact_PgStat(bool isCommit, int nestDepth)
 						trans->upper->tuples_inserted = trans->tuples_inserted;
 						trans->upper->tuples_updated = trans->tuples_updated;
 						trans->upper->tuples_deleted = trans->tuples_deleted;
+						trans->upper->tuples_warm_updated =
+							trans->tuples_warm_updated;
 					}
 					else
 					{
 						trans->upper->tuples_inserted += trans->tuples_inserted;
 						trans->upper->tuples_updated += trans->tuples_updated;
 						trans->upper->tuples_deleted += trans->tuples_deleted;
+						trans->upper->tuples_warm_updated +=
+							trans->tuples_warm_updated;
 					}
 					tabstat->trans = trans->upper;
 					pfree(trans);
@@ -2177,9 +2203,13 @@ AtEOSubXact_PgStat(bool isCommit, int nestDepth)
 				tabstat->t_counts.t_tuples_inserted += trans->tuples_inserted;
 				tabstat->t_counts.t_tuples_updated += trans->tuples_updated;
 				tabstat->t_counts.t_tuples_deleted += trans->tuples_deleted;
+				tabstat->t_counts.t_tuples_warm_updated +=
+					trans->tuples_warm_updated;
 				/* inserted tuples are dead, deleted tuples are unaffected */
 				tabstat->t_counts.t_delta_dead_tuples +=
 					trans->tuples_inserted + trans->tuples_updated;
+				tabstat->t_counts.t_delta_warm_chains +=
+					trans->tuples_warm_updated;
 				tabstat->trans = trans->upper;
 				pfree(trans);
 			}
@@ -2221,9 +2251,11 @@ AtPrepare_PgStat(void)
 			record.tuples_inserted = trans->tuples_inserted;
 			record.tuples_updated = trans->tuples_updated;
 			record.tuples_deleted = trans->tuples_deleted;
+			record.tuples_warm_updated = trans->tuples_warm_updated;
 			record.inserted_pre_trunc = trans->inserted_pre_trunc;
 			record.updated_pre_trunc = trans->updated_pre_trunc;
 			record.deleted_pre_trunc = trans->deleted_pre_trunc;
+			record.warm_updated_pre_trunc = trans->warm_updated_pre_trunc;
 			record.t_id = tabstat->t_id;
 			record.t_shared = tabstat->t_shared;
 			record.t_truncated = trans->truncated;
@@ -2298,11 +2330,14 @@ pgstat_twophase_postcommit(TransactionId xid, uint16 info,
 		/* forget live/dead stats seen by backend thus far */
 		pgstat_info->t_counts.t_delta_live_tuples = 0;
 		pgstat_info->t_counts.t_delta_dead_tuples = 0;
+		pgstat_info->t_counts.t_delta_warm_chains = 0;
 	}
 	pgstat_info->t_counts.t_delta_live_tuples +=
 		rec->tuples_inserted - rec->tuples_deleted;
 	pgstat_info->t_counts.t_delta_dead_tuples +=
 		rec->tuples_updated + rec->tuples_deleted;
+	pgstat_info->t_counts.t_delta_warm_chains +=
+		rec->tuples_warm_updated;
 	pgstat_info->t_counts.t_changed_tuples +=
 		rec->tuples_inserted + rec->tuples_updated +
 		rec->tuples_deleted;
@@ -2330,12 +2365,16 @@ pgstat_twophase_postabort(TransactionId xid, uint16 info,
 		rec->tuples_inserted = rec->inserted_pre_trunc;
 		rec->tuples_updated = rec->updated_pre_trunc;
 		rec->tuples_deleted = rec->deleted_pre_trunc;
+		rec->tuples_warm_updated = rec->warm_updated_pre_trunc;
 	}
 	pgstat_info->t_counts.t_tuples_inserted += rec->tuples_inserted;
 	pgstat_info->t_counts.t_tuples_updated += rec->tuples_updated;
 	pgstat_info->t_counts.t_tuples_deleted += rec->tuples_deleted;
+	pgstat_info->t_counts.t_tuples_warm_updated += rec->tuples_warm_updated;
 	pgstat_info->t_counts.t_delta_dead_tuples +=
 		rec->tuples_inserted + rec->tuples_updated;
+	pgstat_info->t_counts.t_delta_warm_chains +=
+		rec->tuples_warm_updated;
 }
 
 
@@ -4526,6 +4565,7 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->tuples_warm_updated = 0;
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
+		result->n_warm_chains = 0;
 		result->changes_since_analyze = 0;
 		result->blocks_fetched = 0;
 		result->blocks_hit = 0;
@@ -5636,6 +5676,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_warm_updated = tabmsg->t_counts.t_tuples_warm_updated;
 			tabentry->n_live_tuples = tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples = tabmsg->t_counts.t_delta_dead_tuples;
+			tabentry->n_warm_chains = tabmsg->t_counts.t_delta_warm_chains;
 			tabentry->changes_since_analyze = tabmsg->t_counts.t_changed_tuples;
 			tabentry->blocks_fetched = tabmsg->t_counts.t_blocks_fetched;
 			tabentry->blocks_hit = tabmsg->t_counts.t_blocks_hit;
@@ -5667,9 +5708,11 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			{
 				tabentry->n_live_tuples = 0;
 				tabentry->n_dead_tuples = 0;
+				tabentry->n_warm_chains = 0;
 			}
 			tabentry->n_live_tuples += tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples += tabmsg->t_counts.t_delta_dead_tuples;
+			tabentry->n_warm_chains += tabmsg->t_counts.t_delta_warm_chains;
 			tabentry->changes_since_analyze += tabmsg->t_counts.t_changed_tuples;
 			tabentry->blocks_fetched += tabmsg->t_counts.t_blocks_fetched;
 			tabentry->blocks_hit += tabmsg->t_counts.t_blocks_hit;
@@ -5679,6 +5722,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 		tabentry->n_live_tuples = Max(tabentry->n_live_tuples, 0);
 		/* Likewise for n_dead_tuples */
 		tabentry->n_dead_tuples = Max(tabentry->n_dead_tuples, 0);
+		tabentry->n_warm_chains = Max(tabentry->n_warm_chains, 0);
 
 		/*
 		 * Add per-table stats to the per-database entry, too.
@@ -5904,6 +5948,7 @@ pgstat_recv_vacuum(PgStat_MsgVacuum *msg, int len)
 
 	tabentry->n_live_tuples = msg->m_live_tuples;
 	tabentry->n_dead_tuples = msg->m_dead_tuples;
+	tabentry->n_warm_chains = msg->m_warm_chains;
 
 	if (msg->m_autovacuum)
 	{
@@ -5938,6 +5983,7 @@ pgstat_recv_analyze(PgStat_MsgAnalyze *msg, int len)
 
 	tabentry->n_live_tuples = msg->m_live_tuples;
 	tabentry->n_dead_tuples = msg->m_dead_tuples;
+	tabentry->n_warm_chains = msg->m_warm_chains;
 
 	/*
 	 * If commanded, reset changes_since_analyze to zero.  This forgets any
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 227a87d..8804908 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -193,6 +193,21 @@ pg_stat_get_dead_tuples(PG_FUNCTION_ARGS)
 	PG_RETURN_INT64(result);
 }
 
+Datum
+pg_stat_get_warm_chains(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_StatTabEntry *tabentry;
+
+	if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->n_warm_chains);
+
+	PG_RETURN_INT64(result);
+}
+
 
 Datum
 pg_stat_get_mod_since_analyze(PG_FUNCTION_ARGS)
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 08b6030..81fec03 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,6 +130,7 @@ int			VacuumCostPageMiss = 10;
 int			VacuumCostPageDirty = 20;
 int			VacuumCostLimit = 200;
 int			VacuumCostDelay = 0;
+double		VacuumWarmCleanupScale;
 
 int			VacuumPageHit = 0;
 int			VacuumPageMiss = 0;
@@ -137,3 +138,5 @@ int			VacuumPageDirty = 0;
 
 int			VacuumCostBalance = 0;		/* working state for vacuum */
 bool		VacuumCostActive = false;
+
+double		VacuumWarmCleanupIndexScale = 1;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 8b5f064..ecf8028 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -3017,6 +3017,36 @@ static struct config_real ConfigureNamesReal[] =
 	},
 
 	{
+		{"autovacuum_warmcleanup_scale_factor", PGC_SIGHUP, AUTOVACUUM,
+			gettext_noop("Number of WARM chains, as a fraction of reltuples, that triggers WARM cleanup."),
+			NULL
+		},
+		&autovacuum_warmcleanup_scale,
+		0.1, 0.0, 100.0,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"autovacuum_warmcleanup_index_scale_factor", PGC_SIGHUP, AUTOVACUUM,
+			gettext_noop("Number of WARM index pointers, as a fraction of total WARM chains, that triggers index cleanup."),
+			NULL
+		},
+		&autovacuum_warmcleanup_index_scale,
+		0.2, 0.0, 100.0,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"vacuum_warmcleanup_index_scale_factor", PGC_USERSET, WARM_CLEANUP,
+			gettext_noop("Number of WARM pointers in an index, as a fraction of total WARM chains, that triggers cleanup of that index."),
+			NULL
+		},
+		&VacuumWarmCleanupIndexScale,
+		0.2, 0.0, 100.0,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"checkpoint_completion_target", PGC_SIGHUP, WAL_CHECKPOINTS,
 			gettext_noop("Time spent flushing dirty buffers during checkpoint, as fraction of checkpoint interval."),
 			NULL
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index 3f1a142..61a4e23 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -2795,6 +2795,8 @@ DATA(insert OID = 2878 (  pg_stat_get_live_tuples	PGNSP PGUID 12 1 0 0 0 f f f f
 DESCR("statistics: number of live tuples");
 DATA(insert OID = 2879 (  pg_stat_get_dead_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_dead_tuples _null_ _null_ _null_ ));
 DESCR("statistics: number of dead tuples");
+DATA(insert OID = 3403 (  pg_stat_get_warm_chains	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_warm_chains _null_ _null_ _null_ ));
+DESCR("statistics: number of warm chains");
 DATA(insert OID = 3177 (  pg_stat_get_mod_since_analyze PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_mod_since_analyze _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples changed since last analyze");
 DATA(insert OID = 1934 (  pg_stat_get_blocks_fetched	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_blocks_fetched _null_ _null_ _null_ ));
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index 541c2fa..9914143 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -145,6 +145,8 @@ typedef struct VacuumParams
 	int			log_min_duration;		/* minimum execution threshold in ms
 										 * at which  verbose logs are
 										 * activated, -1 to use default */
+	double		warmcleanup_index_scale; /* fraction of WARM pointers that
+										  * triggers index WARM cleanup */
 } VacuumParams;
 
 /* GUC parameters */
diff --git a/src/include/foreign/fdwapi.h b/src/include/foreign/fdwapi.h
index 6ca44f7..2993b1a 100644
--- a/src/include/foreign/fdwapi.h
+++ b/src/include/foreign/fdwapi.h
@@ -134,7 +134,8 @@ typedef void (*ExplainDirectModify_function) (ForeignScanState *node,
 typedef int (*AcquireSampleRowsFunc) (Relation relation, int elevel,
 											   HeapTuple *rows, int targrows,
 												  double *totalrows,
-												  double *totaldeadrows);
+												  double *totaldeadrows,
+												  double *totalwarmchains);
 
 typedef bool (*AnalyzeForeignTable_function) (Relation relation,
 												 AcquireSampleRowsFunc *func,
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 4c607b2..901960a 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -255,6 +255,7 @@ extern int	VacuumPageDirty;
 extern int	VacuumCostBalance;
 extern bool VacuumCostActive;
 
+extern double VacuumWarmCleanupIndexScale;
 
 /* in tcop/postgres.c */
 
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index b2afd50..f5fc001 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3039,7 +3039,8 @@ typedef enum VacuumOption
 	VACOPT_FULL = 1 << 4,		/* FULL (non-concurrent) vacuum */
 	VACOPT_NOWAIT = 1 << 5,		/* don't wait to get lock (autovacuum only) */
 	VACOPT_SKIPTOAST = 1 << 6,	/* don't process the TOAST table, if any */
-	VACOPT_DISABLE_PAGE_SKIPPING = 1 << 7		/* don't skip any pages */
+	VACOPT_DISABLE_PAGE_SKIPPING = 1 << 7,		/* don't skip any pages */
+	VACOPT_WARM_CLEANUP = 1 << 8	/* do WARM cleanup */
 } VacuumOption;
 
 typedef struct VacuumStmt
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index cd21a78..7d9818b 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -433,6 +433,7 @@ PG_KEYWORD("version", VERSION_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("view", VIEW, UNRESERVED_KEYWORD)
 PG_KEYWORD("views", VIEWS, UNRESERVED_KEYWORD)
 PG_KEYWORD("volatile", VOLATILE, UNRESERVED_KEYWORD)
+PG_KEYWORD("warmclean", WARMCLEAN, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("when", WHEN, RESERVED_KEYWORD)
 PG_KEYWORD("where", WHERE, RESERVED_KEYWORD)
 PG_KEYWORD("whitespace", WHITESPACE_P, UNRESERVED_KEYWORD)
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 99bdc8b..883cbd4 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -110,6 +110,7 @@ typedef struct PgStat_TableCounts
 
 	PgStat_Counter t_delta_live_tuples;
 	PgStat_Counter t_delta_dead_tuples;
+	PgStat_Counter t_delta_warm_chains;
 	PgStat_Counter t_changed_tuples;
 
 	PgStat_Counter t_blocks_fetched;
@@ -167,11 +168,13 @@ typedef struct PgStat_TableXactStatus
 {
 	PgStat_Counter tuples_inserted;		/* tuples inserted in (sub)xact */
 	PgStat_Counter tuples_updated;		/* tuples updated in (sub)xact */
+	PgStat_Counter tuples_warm_updated;	/* tuples warm-updated in (sub)xact */
 	PgStat_Counter tuples_deleted;		/* tuples deleted in (sub)xact */
 	bool		truncated;		/* relation truncated in this (sub)xact */
 	PgStat_Counter inserted_pre_trunc;	/* tuples inserted prior to truncate */
 	PgStat_Counter updated_pre_trunc;	/* tuples updated prior to truncate */
 	PgStat_Counter deleted_pre_trunc;	/* tuples deleted prior to truncate */
+	PgStat_Counter warm_updated_pre_trunc;	/* tuples warm updated prior to truncate */
 	int			nest_level;		/* subtransaction nest level */
 	/* links to other structs for same relation: */
 	struct PgStat_TableXactStatus *upper;		/* next higher subxact if any */
@@ -370,6 +373,7 @@ typedef struct PgStat_MsgVacuum
 	TimestampTz m_vacuumtime;
 	PgStat_Counter m_live_tuples;
 	PgStat_Counter m_dead_tuples;
+	PgStat_Counter m_warm_chains;
 } PgStat_MsgVacuum;
 
 
@@ -388,6 +392,7 @@ typedef struct PgStat_MsgAnalyze
 	TimestampTz m_analyzetime;
 	PgStat_Counter m_live_tuples;
 	PgStat_Counter m_dead_tuples;
+	PgStat_Counter m_warm_chains;
 } PgStat_MsgAnalyze;
 
 
@@ -630,6 +635,7 @@ typedef struct PgStat_StatTabEntry
 
 	PgStat_Counter n_live_tuples;
 	PgStat_Counter n_dead_tuples;
+	PgStat_Counter n_warm_chains;
 	PgStat_Counter changes_since_analyze;
 
 	PgStat_Counter blocks_fetched;
@@ -1156,10 +1162,11 @@ extern void pgstat_reset_single_counter(Oid objectid, PgStat_Single_Reset_Type t
 
 extern void pgstat_report_autovac(Oid dboid);
 extern void pgstat_report_vacuum(Oid tableoid, bool shared,
-					 PgStat_Counter livetuples, PgStat_Counter deadtuples);
+					 PgStat_Counter livetuples, PgStat_Counter deadtuples,
+					 PgStat_Counter warmchains);
 extern void pgstat_report_analyze(Relation rel,
 					  PgStat_Counter livetuples, PgStat_Counter deadtuples,
-					  bool resetcounter);
+					  PgStat_Counter warmchains, bool resetcounter);
 
 extern void pgstat_report_recovery_conflict(int reason);
 extern void pgstat_report_deadlock(void);
diff --git a/src/include/postmaster/autovacuum.h b/src/include/postmaster/autovacuum.h
index d383fd3..19fb0a2 100644
--- a/src/include/postmaster/autovacuum.h
+++ b/src/include/postmaster/autovacuum.h
@@ -39,6 +39,8 @@ extern int	autovacuum_freeze_max_age;
 extern int	autovacuum_multixact_freeze_max_age;
 extern int	autovacuum_vac_cost_delay;
 extern int	autovacuum_vac_cost_limit;
+extern double autovacuum_warmcleanup_scale;
+extern double autovacuum_warmcleanup_index_scale;
 
 /* autovacuum launcher PID, only valid when worker is shutting down */
 extern int	AutovacuumLauncherPid;
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index 2da9115..cd4532b 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -68,6 +68,7 @@ enum config_group
 	WAL_SETTINGS,
 	WAL_CHECKPOINTS,
 	WAL_ARCHIVING,
+	WARM_CLEANUP,
 	REPLICATION,
 	REPLICATION_SENDING,
 	REPLICATION_MASTER,
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index 2b86054..f0dd350 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -278,6 +278,8 @@ typedef struct AutoVacOpts
 	int			log_min_duration;
 	float8		vacuum_scale_factor;
 	float8		analyze_scale_factor;
+	float8		warmcleanup_scale_factor;
+	float8		warmcleanup_index_scale;
 } AutoVacOpts;
 
 typedef struct StdRdOptions
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index f7dc4a4..d34aa68 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1759,6 +1759,7 @@ pg_stat_all_tables| SELECT c.oid AS relid,
     pg_stat_get_tuples_warm_updated(c.oid) AS n_tup_warm_upd,
     pg_stat_get_live_tuples(c.oid) AS n_live_tup,
     pg_stat_get_dead_tuples(c.oid) AS n_dead_tup,
+    pg_stat_get_warm_chains(c.oid) AS n_warm_chains,
     pg_stat_get_mod_since_analyze(c.oid) AS n_mod_since_analyze,
     pg_stat_get_last_vacuum_time(c.oid) AS last_vacuum,
     pg_stat_get_last_autovacuum_time(c.oid) AS last_autovacuum,
@@ -1907,6 +1908,7 @@ pg_stat_sys_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
+    pg_stat_all_tables.n_warm_chains,
     pg_stat_all_tables.n_mod_since_analyze,
     pg_stat_all_tables.last_vacuum,
     pg_stat_all_tables.last_autovacuum,
@@ -1951,6 +1953,7 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
+    pg_stat_all_tables.n_warm_chains,
     pg_stat_all_tables.n_mod_since_analyze,
     pg_stat_all_tables.last_vacuum,
     pg_stat_all_tables.last_autovacuum,
diff --git a/src/test/regress/expected/warm.out b/src/test/regress/expected/warm.out
index 1f07272..34cdbe5 100644
--- a/src/test/regress/expected/warm.out
+++ b/src/test/regress/expected/warm.out
@@ -745,6 +745,65 @@ SELECT a, b FROM test_toast_warm WHERE b = 104.20;
 (1 row)
 
 DROP TABLE test_toast_warm;
+-- Test VACUUM
+CREATE TABLE test_vacuum_warm (a int unique, b text, c int, d int, e int);
+CREATE INDEX test_vacuum_warm_index1 ON test_vacuum_warm(b);
+CREATE INDEX test_vacuum_warm_index2 ON test_vacuum_warm(c);
+CREATE INDEX test_vacuum_warm_index3 ON test_vacuum_warm(d);
+INSERT INTO test_vacuum_warm VALUES (1, 'a', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (2, 'b', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (3, 'c', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (4, 'd', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (5, 'e', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (6, 'f', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (7, 'g', 100, 200);
+UPDATE test_vacuum_warm SET b = 'u', c = 300 WHERE a = 1;
+UPDATE test_vacuum_warm SET b = 'v', c = 300 WHERE a = 2;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 3;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 4;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 5;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 6;
+-- a plain vacuum cannot clear WARM chains.
+SET enable_seqscan = false;
+SET enable_bitmapscan = false;
+SET seq_page_cost = 10000;
+VACUUM test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+                                        QUERY PLAN                                         
+-------------------------------------------------------------------------------------------
+ Index Only Scan using test_vacuum_warm_index1 on test_vacuum_warm (actual rows=1 loops=1)
+   Index Cond: (b = 'u'::text)
+   Heap Fetches: 1
+(3 rows)
+
+-- Now set vacuum_warmcleanup_index_scale_factor such that only
+-- test_vacuum_warm_index2 can be cleaned up.
+SET vacuum_warmcleanup_index_scale_factor=0.5;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+                                        QUERY PLAN                                         
+-------------------------------------------------------------------------------------------
+ Index Only Scan using test_vacuum_warm_index1 on test_vacuum_warm (actual rows=1 loops=1)
+   Index Cond: (b = 'u'::text)
+   Heap Fetches: 1
+(3 rows)
+
+-- All WARM chains cleaned up, so index-only scan should be used now without
+-- any heap fetches
+SET vacuum_warmcleanup_index_scale_factor=0;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect zero heap-fetches now
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+                                        QUERY PLAN                                         
+-------------------------------------------------------------------------------------------
+ Index Only Scan using test_vacuum_warm_index1 on test_vacuum_warm (actual rows=1 loops=1)
+   Index Cond: (b = 'u'::text)
+   Heap Fetches: 0
+(3 rows)
+
+DROP TABLE test_vacuum_warm;
 -- Toasted heap attributes
 CREATE TABLE toasttest(descr text , cnt int DEFAULT 0, f1 text, f2 text);
 CREATE INDEX testindx1 ON toasttest(descr);
diff --git a/src/test/regress/sql/warm.sql b/src/test/regress/sql/warm.sql
index fc80c0f..ae9db9a 100644
--- a/src/test/regress/sql/warm.sql
+++ b/src/test/regress/sql/warm.sql
@@ -285,6 +285,53 @@ SELECT a, b FROM test_toast_warm WHERE b = 104.20;
 
 DROP TABLE test_toast_warm;
 
+-- Test VACUUM
+
+CREATE TABLE test_vacuum_warm (a int unique, b text, c int, d int, e int);
+CREATE INDEX test_vacuum_warm_index1 ON test_vacuum_warm(b);
+CREATE INDEX test_vacuum_warm_index2 ON test_vacuum_warm(c);
+CREATE INDEX test_vacuum_warm_index3 ON test_vacuum_warm(d);
+
+INSERT INTO test_vacuum_warm VALUES (1, 'a', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (2, 'b', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (3, 'c', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (4, 'd', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (5, 'e', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (6, 'f', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (7, 'g', 100, 200);
+
+UPDATE test_vacuum_warm SET b = 'u', c = 300 WHERE a = 1;
+UPDATE test_vacuum_warm SET b = 'v', c = 300 WHERE a = 2;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 3;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 4;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 5;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 6;
+
+-- a plain vacuum cannot clear WARM chains.
+SET enable_seqscan = false;
+SET enable_bitmapscan = false;
+SET seq_page_cost = 10000;
+VACUUM test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+
+-- Now set vacuum_warmcleanup_index_scale_factor such that only
+-- test_vacuum_warm_index2 can be cleaned up.
+SET vacuum_warmcleanup_index_scale_factor=0.5;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+
+
+-- All WARM chains cleaned up, so index-only scan should be used now without
+-- any heap fetches
+SET vacuum_warmcleanup_index_scale_factor=0;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect zero heap-fetches now
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+
+DROP TABLE test_vacuum_warm;
+
 -- Toasted heap attributes
 CREATE TABLE toasttest(descr text , cnt int DEFAULT 0, f1 text, f2 text);
 CREATE INDEX testindx1 ON toasttest(descr);
-- 
2.9.3 (Apple Git-75)

#231Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Robert Haas (#213)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Mar 30, 2017 at 7:55 PM, Robert Haas <robertmhaas@gmail.com> wrote:

but
try to access the TOAST table would be fatal; that probably would have
deadlock hazards among other problems.

Hmm. I think you're right. We could make a copy of the heap tuple, drop the
lock and then access TOAST to handle that. Would that work?

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#232Robert Haas
robertmhaas@gmail.com
In reply to: Pavan Deolasee (#231)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Apr 4, 2017 at 10:21 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Thu, Mar 30, 2017 at 7:55 PM, Robert Haas <robertmhaas@gmail.com> wrote:

but
try to access the TOAST table would be fatal; that probably would have
deadlock hazards among other problems.

Hmm. I think you're right. We could make a copy of the heap tuple, drop the
lock and then access TOAST to handle that. Would that work?

Yeah, but it might suck. :-)

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#233Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Robert Haas (#232)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Apr 5, 2017 at 8:42 AM, Robert Haas <robertmhaas@gmail.com> wrote:

On Tue, Apr 4, 2017 at 10:21 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Thu, Mar 30, 2017 at 7:55 PM, Robert Haas <robertmhaas@gmail.com>

wrote:

but
try to access the TOAST table would be fatal; that probably would have
deadlock hazards among other problems.

Hmm. I think you're right. We could make a copy of the heap tuple, drop

the

lock and then access TOAST to handle that. Would that work?

Yeah, but it might suck. :-)

Well, better than causing a deadlock ;-)

Let's see if we want to go down the path of blocking WARM when tuples have
toasted attributes. I submitted a patch yesterday, but having slept over
it, I think I made mistakes there. It might not be enough to look at the
caller-supplied new tuple because that may not have any toasted values, but
the final tuple that gets written to the heap may be toasted. We could look
at the new tuple's attributes to find out if any indexed attributes are
toasted, but that might suck as well. Or we can simply block WARM if the
old or the new tuple has external attributes, i.e. HeapTupleHasExternal()
returns true. That could be overly restrictive because we will block WARM
on such updates irrespective of whether the toasted attribute is indexed or
some other attribute is. Maybe that's not a problem.

We will also need to handle the case where some older tuple in the chain
has a toasted value and that tuple is presented to recheck because of a
subsequent WARM update, even though the tuples updated by WARM did not
themselves have toasted values (and hence were allowed). I think we can
handle that case fairly easily, but it's not done in the code yet.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#234Robert Haas
robertmhaas@gmail.com
In reply to: Pavan Deolasee (#233)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Apr 4, 2017 at 11:43 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

Well, better than causing a deadlock ;-)

Yep.

Lets see if we want to go down the path of blocking WARM when tuples have
toasted attributes. I submitted a patch yesterday, but having slept over it,
I think I made mistakes there. It might not be enough to look at the caller
supplied new tuple because that may not have any toasted values, but the
final tuple that gets written to the heap may be toasted.

Yes, you have to make whatever decision you're going to make here
after any toast-ing has been done.

We could look at
the new tuple's attributes to find if any indexed attributes are toasted,
but that might suck as well. Or we can simply block WARM if the old or the
new tuple has external attributes i.e. HeapTupleHasExternal() returns true.
That could be overly restrictive because irrespective of whether the indexed
attributes are toasted or just some other attribute is toasted, we will
block WARM on such updates. May be that's not a problem.

Well, I think that there's some danger of whittling down this
optimization to the point where it still incurs most of the costs --
in bit-space if not in CPU cycles -- but no longer yields much of the
benefit. Even though the speed-up might still be substantial in the
cases where the optimization kicks in, if a substantial number of
users doing things that are basically pretty normal sometimes fail to
get the optimization, this isn't going to be very exciting outside of
synthetic benchmarks.

Backing up a little bit, it seems like the root of the issue here is
that, at a certain point in what was once a HOT chain, you make a WARM
update, and you make a decision about which indexes to update at that
point. Now, later on, when you traverse that chain, you need to be
able to figure what decide you made before; otherwise, you might make
a bad decision about whether an index pointer applies to a particular
tuple. If the index tuple is WARM, then the answer is "yes" if the
heap tuple is also WARM, and "no" if the heap tuple is CLEAR (which is
an odd antonym to WARM, but leave that aside). If the index tuple is
CLEAR, then the answer is "yes" if the heap tuple is also CLEAR, and
"maybe" if the heap tuple is WARM.

In that "maybe" case, we are trying to reconstruct the decision that
we made when we did the update. If, at the time of the update, we
decided to insert a new index entry, then the answer is "no"; if not,
it's "yes". From an integrity point of view, it doesn't really matter
how we make the decision; what matters is that we're consistent. More
specifically, if we sometimes insert a new index tuple even when the
value has not changed in any user-visible way, I think that would be
fine, provided that later chain traversals can tell that we did that.
As an extreme example, suppose that the WARM update inserted in some
magical way a bitmap of which attributes had changed into the new
tuple. Then, when we are walking the chain following a CLEAR index
tuple, we test whether the index columns overlap with that bitmap; if
they do, then that index got a new entry; if not, then it didn't. It
would actually be fine (apart from efficiency) to set extra bits in
this bitmap; extra indexes would get updated, but chain traversal
would know exactly which ones, so no problem. This is of course just
a gedankenexperiment, but the point is that as long as the insert
itself and later chain traversals agree on the rule, there's no
integrity problem. I think.

The first idea I had for an actual solution to this problem was to
make the decision as to whether to insert new index entries based on
whether the indexed attributes in the final tuple (post-TOAST) are
byte-for-byte identical with the original tuple. If somebody injects
a new compression algorithm into the system, or just changes the
storage parameters on a column, or we re-insert an identical value
into the TOAST table when we could have reused the old TOAST pointer,
then you might have some potentially-WARM updates that end up being
done as regular updates, but that's OK. When you are walking the
chain, you will KNOW whether you inserted new index entries or not,
because you can do the exact same comparison that was done before and
be sure of getting the same answer. But that's actually not really a
solution, because it doesn't work if all of the CLEAR tuples are gone
-- all you have is the index tuple and the new heap tuple; there's no
old heap tuple with which to compare.

The only other idea that I have for a really clean solution here is to
support this only for index types that are amcanreturn, and actually
compare the value stored in the index tuple with the one stored in the
heap tuple, ensuring that new index tuples are inserted whenever they
don't match and then using the exact same test to determine the
applicability of a given index pointer to a given heap tuple. I'm not
sure how viable that is either, but hopefully you see my underlying
point here: it would be OK for there to be cases where we fall back to
a non-WARM update because a logically equal value changed at the
physical level, especially if those cases are likely to be rare in
practice, but it can never be allowed to happen that chain traversal
gets confused about which indexes actually got touched by a particular
WARM update.

By the way, the "Converting WARM chains back to HOT chains" section of
README.WARM seems to be out of date. Any chance you could update that
to reflect the current state and thinking of the patch?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#235Andres Freund
andres@anarazel.de
In reply to: Robert Haas (#234)
Re: Patch: Write Amplification Reduction Method (WARM)

On 2017-04-05 09:36:47 -0400, Robert Haas wrote:

By the way, the "Converting WARM chains back to HOT chains" section of
README.WARM seems to be out of date. Any chance you could update that
to reflect the current state and thinking of the patch?

I propose we move this patch to the next CF. That shouldn't prevent you
working on it, although focusing on review of patches that still might
make it wouldn't hurt either.

- Andres


#236Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Robert Haas (#234)
4 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Apr 5, 2017 at 7:06 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Tue, Apr 4, 2017 at 11:43 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

Well, better than causing a deadlock ;-)

Yep.

Lets see if we want to go down the path of blocking WARM when tuples have
toasted attributes. I submitted a patch yesterday, but having slept over

it,

I think I made mistakes there. It might not be enough to look at the

caller

supplied new tuple because that may not have any toasted values, but the
final tuple that gets written to the heap may be toasted.

Yes, you have to make whatever decision you're going to make here
after any toast-ing has been done.

I am worried that might add more work in that code path since we then have
to fetch attributes for the new tuple as well. Maybe a good compromise
would be to still only check the user-supplied new tuple, but be
prepared to handle toasted values during recheck. The attached version does
that.

Well, I think that there's some danger of whittling down this
optimization to the point where it still incurs most of the costs --
in bit-space if not in CPU cycles -- but no longer yields much of the
benefit. Even though the speed-up might still be substantial in the
cases where the optimization kicks in, if a substantial number of
users doing things that are basically pretty normal sometimes fail to
get the optimization, this isn't going to be very exciting outside of
synthetic benchmarks.

I agree. Blocking WARM in too many cases won't serve the purpose.

Backing up a little bit, it seems like the root of the issue here is
that, at a certain point in what was once a HOT chain, you make a WARM
update, and you make a decision about which indexes to update at that
point. Now, later on, when you traverse that chain, you need to be
able to figure what decide you made before; otherwise, you might make
a bad decision about whether an index pointer applies to a particular
tuple. If the index tuple is WARM, then the answer is "yes" if the
heap tuple is also WARM, and "no" if the heap tuple is CLEAR (which is
an odd antonym to WARM, but leave that aside). If the index tuple is
CLEAR, then the answer is "yes" if the heap tuple is also CLEAR, and
"maybe" if the heap tuple is WARM.

That's fairly accurate description of the problem.

The first idea I had for an actual solution to this problem was to
make the decision as to whether to insert new index entries based on
whether the indexed attributes in the final tuple (post-TOAST) are
byte-for-byte identical with the original tuple. If somebody injects
a new compression algorithm into the system, or just changes the
storage parameters on a column, or we re-insert an identical value
into the TOAST table when we could have reused the old TOAST pointer,
then you might have some potentially-WARM updates that end up being
done as regular updates, but that's OK. When you are walking the
chain, you will KNOW whether you inserted new index entries or not,
because you can do the exact same comparison that was done before and
be sure of getting the same answer. But that's actually not really a
solution, because it doesn't work if all of the CLEAR tuples are gone
-- all you have is the index tuple and the new heap tuple; there's no
old heap tuple with which to compare.

Right. The old/new tuples may get HOT pruned and hence we cannot rely on
any algorithm which assumes that we can compare old and new tuples after
the update is committed/aborted.

The only other idea that I have for a really clean solution here is to
support this only for index types that are amcanreturn, and actually
compare the value stored in the index tuple with the one stored in the
heap tuple, ensuring that new index tuples are inserted whenever they
don't match and then using the exact same test to determine the
applicability of a given index pointer to a given heap tuple.

Just so that I understand, are you suggesting that while inserting WARM
index pointers, we check if the new index tuple will look exactly the same
as the old index tuple and not insert a duplicate pointer at all? I
considered that, but it will require us to do an index lookup during WARM
index insert and for non-unique keys, that may or may not be exactly cheap.
Or we need something like what Claudio wrote to sort all index entries by
heap TIDs. If we do that, then the recheck can be done just based on the
index and heap flags (because we can then turn the old index pointer into a
CLEAR pointer; the index pointer is set to COMMON during the initial insert).

The other way is to pass old tuple values along with the new tuple values
to amwarminsert, build index tuples and then do a comparison. For duplicate
index tuples, skip WARM inserts.

By the way, the "Converting WARM chains back to HOT chains" section of
README.WARM seems to be out of date. Any chance you could update that
to reflect the current state and thinking of the patch?

Ok. I've extensively updated the README to match the current state of
affairs. Updated patch set attached. I've also added mechanism to deal with
known-dead pointers during regular index scans. We can derive some
knowledge from index/heap states and recheck results. One additional thing
I did which should help Dilip's test case is that we use the index/heap
state to decide whether a recheck is necessary or not. And when we see a
CLEAR pointer to all-WARM tuples, we set the pointer WARM and thus avoid
repeated recheck for the same tuple. My own tests show that the regression
should go away with this version, but I am not suggesting that we can't
come up with some other workload where we still see regression.

I also realised that altering the table-level enable_warm reloption would
require AccessExclusiveLock, so I included that change too.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

0002-Free-3-bits-in-ip_posid-field-of-the-ItemPointer_v26.patch (application/octet-stream)
From 046a14badc3f86b1d3a2791db327a61ba51a47e9 Mon Sep 17 00:00:00 2001
From: Pavan Deolasee <pavan.deolasee@gmail.com>
Date: Wed, 29 Mar 2017 10:44:01 +0530
Subject: [PATCH 2/4] Free 3-bits in ip_posid field of the ItemPointerData.

We can use those for storing some other information. Right now only index
methods will use those to store WARM/CLEAR property of an index pointer.
---
 src/include/access/ginblock.h     |  3 ++-
 src/include/access/htup_details.h |  2 +-
 src/include/storage/itemptr.h     | 30 +++++++++++++++++++++++++++---
 src/include/storage/off.h         | 11 ++++++++++-
 4 files changed, 40 insertions(+), 6 deletions(-)

diff --git a/src/include/access/ginblock.h b/src/include/access/ginblock.h
index 438912c..316ab65 100644
--- a/src/include/access/ginblock.h
+++ b/src/include/access/ginblock.h
@@ -135,7 +135,8 @@ typedef struct GinMetaPageData
 	(ItemPointerGetBlockNumberNoCheck(pointer))
 
 #define GinItemPointerGetOffsetNumber(pointer) \
-	(ItemPointerGetOffsetNumberNoCheck(pointer))
+	(ItemPointerGetOffsetNumberNoCheck(pointer) | \
+	 (ItemPointerGetFlags(pointer) << OffsetNumberBits))
 
 #define GinItemPointerSetBlockNumber(pointer, blkno) \
 	(ItemPointerSetBlockNumber((pointer), (blkno)))
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 24433c7..4d614b7 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -288,7 +288,7 @@ struct HeapTupleHeaderData
  * than MaxOffsetNumber, so that it can be distinguished from a valid
  * offset number in a regular item pointer.
  */
-#define SpecTokenOffsetNumber		0xfffe
+#define SpecTokenOffsetNumber		OffsetNumberPrev(OffsetNumberMask)
 
 /*
  * HeapTupleHeader accessor macros
diff --git a/src/include/storage/itemptr.h b/src/include/storage/itemptr.h
index c21d2ad..74eed4e 100644
--- a/src/include/storage/itemptr.h
+++ b/src/include/storage/itemptr.h
@@ -57,7 +57,7 @@ typedef ItemPointerData *ItemPointer;
  *		True iff the disk item pointer is not NULL.
  */
 #define ItemPointerIsValid(pointer) \
-	((bool) (PointerIsValid(pointer) && ((pointer)->ip_posid != 0)))
+	((bool) (PointerIsValid(pointer) && (((pointer)->ip_posid & OffsetNumberMask) != 0)))
 
 /*
  * ItemPointerGetBlockNumberNoCheck
@@ -84,7 +84,7 @@ typedef ItemPointerData *ItemPointer;
  */
 #define ItemPointerGetOffsetNumberNoCheck(pointer) \
 ( \
-	(pointer)->ip_posid \
+	((pointer)->ip_posid & OffsetNumberMask) \
 )
 
 /*
@@ -98,6 +98,30 @@ typedef ItemPointerData *ItemPointer;
 )
 
 /*
+ * Get the flags stored in high order bits in the OffsetNumber.
+ */
+#define ItemPointerGetFlags(pointer) \
+( \
+	((pointer)->ip_posid & ~OffsetNumberMask) >> OffsetNumberBits \
+)
+
+/*
+ * Set the flag bits. We first left-shift since flags are defined starting 0x01
+ */
+#define ItemPointerSetFlags(pointer, flags) \
+( \
+	((pointer)->ip_posid |= ((flags) << OffsetNumberBits)) \
+)
+
+/*
+ * Clear all flags.
+ */
+#define ItemPointerClearFlags(pointer) \
+( \
+	((pointer)->ip_posid &= OffsetNumberMask) \
+)
+
+/*
  * ItemPointerSet
  *		Sets a disk item pointer to the specified block and offset.
  */
@@ -105,7 +129,7 @@ typedef ItemPointerData *ItemPointer;
 ( \
 	AssertMacro(PointerIsValid(pointer)), \
 	BlockIdSet(&((pointer)->ip_blkid), blockNumber), \
-	(pointer)->ip_posid = offNum \
+	(pointer)->ip_posid = (offNum) \
 )
 
 /*
diff --git a/src/include/storage/off.h b/src/include/storage/off.h
index fe8638f..f058fe1 100644
--- a/src/include/storage/off.h
+++ b/src/include/storage/off.h
@@ -26,7 +26,16 @@ typedef uint16 OffsetNumber;
 #define InvalidOffsetNumber		((OffsetNumber) 0)
 #define FirstOffsetNumber		((OffsetNumber) 1)
 #define MaxOffsetNumber			((OffsetNumber) (BLCKSZ / sizeof(ItemIdData)))
-#define OffsetNumberMask		(0xffff)		/* valid uint16 bits */
+
+/*
+ * The biggest BLCKSZ we support is 32kB, and each ItemId takes 6 bytes.
+ * That limits the number of line pointers in a page to 32kB/6B = 5461.
+ * Therefore, 13 bits in OffsetNumber are enough to represent all valid
+ * on-disk line pointers.  Hence, we can reserve the high-order bits in
+ * OffsetNumber for other purposes.
+ */
+#define OffsetNumberBits		13
+#define OffsetNumberMask		((((uint16) 1) << OffsetNumberBits) - 1)
 
 /* ----------------
  *		support macros
-- 
2.9.3 (Apple Git-75)

0003-Main-WARM-patch_v26.patch (application/octet-stream)
From ed94731e62385c3437831b148f34ac6deda268a2 Mon Sep 17 00:00:00 2001
From: Pavan Deolasee <pavan.deolasee@gmail.com>
Date: Wed, 5 Apr 2017 23:26:31 +0530
Subject: [PATCH 3/4] Main WARM patch.

We perform a WARM update if the update modifies at least one index but not
all of them, and there is enough free space in the heap block to keep the
new version of the tuple.

The update works pretty much the same way as a HOT update, but any index whose
key values have changed must receive another index entry, pointing to the same
root of the HOT chain. Chains that may have more than one index pointer in at
least one index are called WARM chains. Since there are now two index pointers
to the same chain, we must do a recheck to confirm whether a given index
pointer should see the tuple. HOT pruning and other techniques remain the
same.

WARM chains must subsequently be cleaned up by removing additional index
pointers. Once cleaned up, they can be further WARM updated and
index-only scans will work.

To ensure that we don't do wasteful work, we only do a WARM update if fewer
than 50% of the indexes need updates. For anything above that, it probably
does not make sense to do WARM updates because most indexes will receive an
update anyway and the cleanup cost will be high.

A new table-level option (enable_warm) is added, the default currently being
ON. When the option is ON, WARM updates are allowed on the table. We allow the
user to set enable_warm to OFF, but once it has been turned ON, we don't allow
turning it OFF again. This is necessary because once WARM is enabled, the
table may have WARM chains and WARM index pointers, and those must be handled
correctly.
---
 contrib/bloom/blutils.c                     |   1 +
 contrib/bloom/blvacuum.c                    |   2 +-
 src/backend/access/brin/brin.c              |   1 +
 src/backend/access/common/reloptions.c      |  18 +-
 src/backend/access/gin/ginvacuum.c          |   3 +-
 src/backend/access/gist/gist.c              |   1 +
 src/backend/access/gist/gistvacuum.c        |   3 +-
 src/backend/access/hash/hash.c              |  18 +-
 src/backend/access/hash/hashsearch.c        |   5 +
 src/backend/access/heap/README.WARM         | 400 ++++++++++++
 src/backend/access/heap/heapam.c            | 790 ++++++++++++++++++++---
 src/backend/access/heap/pruneheap.c         |   9 +-
 src/backend/access/heap/rewriteheap.c       |  12 +-
 src/backend/access/heap/tuptoaster.c        |   3 +-
 src/backend/access/index/genam.c            |   4 +
 src/backend/access/index/indexam.c          | 206 +++++-
 src/backend/access/nbtree/nbtinsert.c       | 228 ++++---
 src/backend/access/nbtree/nbtpage.c         |  56 +-
 src/backend/access/nbtree/nbtree.c          | 105 +++-
 src/backend/access/nbtree/nbtsearch.c       |   5 +
 src/backend/access/nbtree/nbtutils.c        | 196 ++++++
 src/backend/access/nbtree/nbtxlog.c         |  27 +-
 src/backend/access/rmgrdesc/heapdesc.c      |  26 +-
 src/backend/access/rmgrdesc/nbtdesc.c       |   4 +-
 src/backend/access/spgist/spgutils.c        |   1 +
 src/backend/access/spgist/spgvacuum.c       |  12 +-
 src/backend/catalog/index.c                 |  71 ++-
 src/backend/catalog/indexing.c              |  60 +-
 src/backend/catalog/system_views.sql        |   4 +-
 src/backend/commands/constraint.c           |   7 +-
 src/backend/commands/copy.c                 |   3 +
 src/backend/commands/indexcmds.c            |  17 +-
 src/backend/commands/tablecmds.c            |  14 +-
 src/backend/commands/vacuumlazy.c           | 668 +++++++++++++++++++-
 src/backend/executor/execIndexing.c         |  21 +-
 src/backend/executor/execReplication.c      |  30 +-
 src/backend/executor/nodeBitmapHeapscan.c   |  13 +-
 src/backend/executor/nodeIndexscan.c        |   4 +-
 src/backend/executor/nodeModifyTable.c      |  27 +-
 src/backend/postmaster/pgstat.c             |   7 +-
 src/backend/replication/logical/decode.c    |  13 +-
 src/backend/storage/page/bufpage.c          |  23 +
 src/backend/utils/adt/pgstatfuncs.c         |  31 +
 src/backend/utils/cache/relcache.c          | 113 +++-
 src/backend/utils/time/combocid.c           |   4 +-
 src/backend/utils/time/tqual.c              |  24 +-
 src/include/access/amapi.h                  |  22 +
 src/include/access/genam.h                  |  22 +-
 src/include/access/heapam.h                 |  31 +-
 src/include/access/heapam_xlog.h            |  24 +-
 src/include/access/htup_details.h           | 116 +++-
 src/include/access/nbtree.h                 |  26 +-
 src/include/access/nbtxlog.h                |  10 +-
 src/include/access/relscan.h                |   9 +-
 src/include/catalog/index.h                 |   7 +
 src/include/catalog/pg_proc.h               |   4 +
 src/include/commands/progress.h             |   1 +
 src/include/executor/executor.h             |   1 +
 src/include/executor/nodeIndexscan.h        |   1 -
 src/include/nodes/execnodes.h               |   1 +
 src/include/pgstat.h                        |   4 +-
 src/include/storage/bufpage.h               |   2 +
 src/include/utils/rel.h                     |  19 +
 src/include/utils/relcache.h                |   5 +-
 src/test/regress/expected/alter_generic.out |   4 +-
 src/test/regress/expected/rules.out         |  12 +-
 src/test/regress/expected/warm.out          | 930 ++++++++++++++++++++++++++++
 src/test/regress/parallel_schedule          |   2 +
 src/test/regress/sql/warm.sql               | 360 +++++++++++
 69 files changed, 4524 insertions(+), 379 deletions(-)
 create mode 100644 src/backend/access/heap/README.WARM
 create mode 100644 src/test/regress/expected/warm.out
 create mode 100644 src/test/regress/sql/warm.sql

diff --git a/contrib/bloom/blutils.c b/contrib/bloom/blutils.c
index f2eda67..b356e2b 100644
--- a/contrib/bloom/blutils.c
+++ b/contrib/bloom/blutils.c
@@ -142,6 +142,7 @@ blhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/contrib/bloom/blvacuum.c b/contrib/bloom/blvacuum.c
index 04abd0f..ff50361 100644
--- a/contrib/bloom/blvacuum.c
+++ b/contrib/bloom/blvacuum.c
@@ -88,7 +88,7 @@ blbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 		while (itup < itupEnd)
 		{
 			/* Do we have to delete this tuple? */
-			if (callback(&itup->heapPtr, callback_state))
+			if (callback(&itup->heapPtr, false, callback_state) == IBDCR_DELETE)
 			{
 				/* Yes; adjust count of tuples that will be left on page */
 				BloomPageGetOpaque(page)->maxoff--;
diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
index 649f348..a0fd203 100644
--- a/src/backend/access/brin/brin.c
+++ b/src/backend/access/brin/brin.c
@@ -119,6 +119,7 @@ brinhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index 6d1f22f..ce7d4da 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -88,6 +88,11 @@
  * Setting parallel_workers is safe, since it acts the same as
  * max_parallel_workers_per_gather which is a USERSET parameter that doesn't
  * affect existing plans or queries.
+ *
+ * Setting enable_warm requires AccessExclusiveLock on the table. This is
+ * essential to ensure that any concurrent scan does not end up ignoring WARM
+ * chains created after enable_warm is turned ON. So we must disallow any
+ * SELECTs while changing this option.
  */
 
 static relopt_bool boolRelOpts[] =
@@ -137,6 +142,15 @@ static relopt_bool boolRelOpts[] =
 		},
 		false
 	},
+	{
+		{
+			"enable_warm",
+			"Table supports WARM updates",
+			RELOPT_KIND_HEAP,
+			AccessExclusiveLock
+		},
+		HEAP_DEFAULT_ENABLE_WARM
+	},
 	/* list terminator */
 	{{NULL}}
 };
@@ -1351,7 +1365,9 @@ default_reloptions(Datum reloptions, bool validate, relopt_kind kind)
 		{"user_catalog_table", RELOPT_TYPE_BOOL,
 		offsetof(StdRdOptions, user_catalog_table)},
 		{"parallel_workers", RELOPT_TYPE_INT,
-		offsetof(StdRdOptions, parallel_workers)}
+		offsetof(StdRdOptions, parallel_workers)},
+		{"enable_warm", RELOPT_TYPE_BOOL,
+		offsetof(StdRdOptions, enable_warm)}
 	};
 
 	options = parseRelOptions(reloptions, validate, kind, &numoptions);
diff --git a/src/backend/access/gin/ginvacuum.c b/src/backend/access/gin/ginvacuum.c
index 26c077a..46ed4fe 100644
--- a/src/backend/access/gin/ginvacuum.c
+++ b/src/backend/access/gin/ginvacuum.c
@@ -56,7 +56,8 @@ ginVacuumItemPointers(GinVacuumState *gvs, ItemPointerData *items,
 	 */
 	for (i = 0; i < nitem; i++)
 	{
-		if (gvs->callback(items + i, gvs->callback_state))
+		if (gvs->callback(items + i, false, gvs->callback_state) ==
+				IBDCR_DELETE)
 		{
 			gvs->result->tuples_removed += 1;
 			if (!tmpitems)
diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c
index 6593771..843389b 100644
--- a/src/backend/access/gist/gist.c
+++ b/src/backend/access/gist/gist.c
@@ -94,6 +94,7 @@ gisthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/gist/gistvacuum.c b/src/backend/access/gist/gistvacuum.c
index 77d9d12..0955db6 100644
--- a/src/backend/access/gist/gistvacuum.c
+++ b/src/backend/access/gist/gistvacuum.c
@@ -202,7 +202,8 @@ gistbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 				iid = PageGetItemId(page, i);
 				idxtuple = (IndexTuple) PageGetItem(page, iid);
 
-				if (callback(&(idxtuple->t_tid), callback_state))
+				if (callback(&(idxtuple->t_tid), false, callback_state) ==
+						IBDCR_DELETE)
 					todelete[ntodelete++] = i;
 				else
 					stats->num_index_tuples += 1;
diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c
index b835f77..571dee8 100644
--- a/src/backend/access/hash/hash.c
+++ b/src/backend/access/hash/hash.c
@@ -75,6 +75,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->ambuild = hashbuild;
 	amroutine->ambuildempty = hashbuildempty;
 	amroutine->aminsert = hashinsert;
+	amroutine->amwarminsert = NULL;
 	amroutine->ambulkdelete = hashbulkdelete;
 	amroutine->amvacuumcleanup = hashvacuumcleanup;
 	amroutine->amcanreturn = NULL;
@@ -92,6 +93,7 @@ hashhandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -823,6 +825,7 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 			IndexTuple	itup;
 			Bucket		bucket;
 			bool		kill_tuple = false;
+			IndexBulkDeleteCallbackResult	result;
 
 			itup = (IndexTuple) PageGetItem(page,
 											PageGetItemId(page, offno));
@@ -832,13 +835,18 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,
 			 * To remove the dead tuples, we strictly want to rely on results
 			 * of callback function.  refer btvacuumpage for detailed reason.
 			 */
-			if (callback && callback(htup, callback_state))
+			if (callback)
 			{
-				kill_tuple = true;
-				if (tuples_removed)
-					*tuples_removed += 1;
+				result = callback(htup, false, callback_state);
+				if (result == IBDCR_DELETE)
+				{
+					kill_tuple = true;
+					if (tuples_removed)
+						*tuples_removed += 1;
+				}
 			}
-			else if (split_cleanup)
+
+			if (!kill_tuple && split_cleanup)
 			{
 				/* delete the tuples that are moved by split. */
 				bucket = _hash_hashkey2bucket(_hash_get_indextuple_hashkey(itup),
diff --git a/src/backend/access/hash/hashsearch.c b/src/backend/access/hash/hashsearch.c
index 2d92049..330ccc5 100644
--- a/src/backend/access/hash/hashsearch.c
+++ b/src/backend/access/hash/hashsearch.c
@@ -59,6 +59,8 @@ _hash_next(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
 	return true;
 }
 
@@ -367,6 +369,9 @@ _hash_first(IndexScanDesc scan, ScanDirection dir)
 	itup = (IndexTuple) PageGetItem(page, PageGetItemId(page, offnum));
 	so->hashso_heappos = itup->t_tid;
 
+	if (scan->xs_want_itup)
+		scan->xs_itup = itup;
+
 	return true;
 }
 
diff --git a/src/backend/access/heap/README.WARM b/src/backend/access/heap/README.WARM
new file mode 100644
index 0000000..ffdb1e1
--- /dev/null
+++ b/src/backend/access/heap/README.WARM
@@ -0,0 +1,400 @@
+src/backend/access/heap/README.WARM
+
+Write Amplification Reduction Method (WARM)
+===========================================
+
+The Heap Only Tuple (HOT) feature greatly reduced redundant index
+entries and allowed re-use of the dead space occupied by previously
+updated or deleted tuples (see src/backend/access/heap/README.HOT).
+
+One of the necessary conditions for a HOT update is that it must not
+change any column used in any of the indexes on the table. This
+condition is sometimes hard to meet, especially for complex workloads
+with several indexes on large yet frequently updated tables. Worse,
+even when only one or two indexed columns are updated, a regular
+non-HOT update must still insert a new index entry into every index on
+the table, irrespective of whether the key pertaining to a given index
+has changed or not.
+
+WARM is a technique devised to address these problems.
+
+
+Update Chains With Multiple Index Entries Pointing to the Root
+--------------------------------------------------------------
+
+When a non-HOT update is caused by an index key change, a new index
+entry must be inserted for the changed index. But if the index key
+hasn't changed for the other indexes, we don't really need to insert a
+new entry in them. Even though the existing index entry points to the
+old tuple, the new tuple is reachable via the t_ctid chain. To keep
+things simple, a WARM update requires that the heap block have enough
+space to store the new version of the tuple, the same requirement as
+for HOT updates.
+
+In WARM, we ensure that every index entry always points to the root of
+the WARM chain. In fact, a WARM chain looks exactly like a HOT chain
+except for the fact that there can be multiple index entries pointing
+to the root of the chain. So when a new entry is inserted in an index
+for the updated tuple during a WARM update, the new entry is made to
+point to the root of the WARM chain.
+
+For example, consider a table with two columns and an index on each of
+them. When a tuple is first inserted into the table, each index has
+exactly one entry pointing to the tuple.
+
+	lp [1]
+	[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's entry (aaaa) also points to 1
+
+Now if the tuple's second column is updated and there is room on the
+page, we perform a WARM update. In that case, Index1 does not get any
+new entry, and Index2's new entry still points to the root line
+pointer of the chain.
+
+	lp [1]  [2]
+	[1111, aaaa]->[1111, bbbb]
+
+	Index1's entry (1111) points to 1
+	Index2's old entry (aaaa) points to 1
+	Index2's new entry (bbbb) also points to 1
+
+"An update chain which has more than one index entry pointing to its
+root line pointer is called a WARM chain, and the action that creates
+a WARM chain is called a WARM update."
+
+Since all indexes always point to the root of the WARM chain, even
+when there is more than one index entry, WARM chains can be pruned and
+dead tuples can be removed without any corresponding index cleanup.
+
+While this solves the problem of pruning dead tuples from a HOT/WARM
+chain, it opens up a new technical challenge: a heap tuple may now be
+reachable from multiple index entries, each with a different index
+key. While MVCC still ensures that only valid tuples are returned, a
+tuple may be returned via an index entry whose key no longer matches
+it. In the above example, tuple [1111, bbbb] is reachable from both
+key (aaaa) and key (bbbb). For this reason, tuples returned from a
+WARM chain must always be rechecked for an index key match.
+
+Recheck Index Key Against Heap Tuple
+------------------------------------
+
+Since every Index AM has its own notion of index tuples, each Index AM
+must implement its own method to recheck heap tuples. For example, a
+hash index stores the hash value of the column, and hence the recheck
+routine for the hash AM must first compute the hash value of the heap
+attribute and then compare it against the value stored in the index
+tuple.
+
+The patch currently implements recheck routines only for btree
+indexes. If a table has an index whose AM doesn't provide a recheck
+routine, WARM updates are disabled for that table.
+
+Problem With Duplicate (key, ctid) Index Entries
+------------------------------------------------
+
+The index-key recheck logic works as long as no two identical index
+keys point to the same WARM chain. If duplicates exist, the same valid
+tuple is reachable via multiple index entries, each of which satisfies
+the index key check. In the above example, if the tuple [1111, bbbb]
+is again updated to [1111, aaaa] and we insert a new index entry
+(aaaa) pointing to the root line pointer, we end up with the following
+structure:
+
+	lp [1]  [2]  [3]
+	[1111, aaaa]->[1111, bbbb]->[1111, aaaa]
+
+	Index1's entry (1111) points to 1
+	Index2's oldest entry (aaaa) points to 1
+	Index2's old entry (bbbb) also points to 1
+	Index2's new entry (aaaa) also points to 1
+
+We must solve this problem to ensure that the same tuple is not
+reachable via multiple index pointers. There are a couple of ways to
+address this issue:
+
+1. Do not allow WARM update to a tuple from a WARM chain. This
+guarantees that there can never be duplicate index entries to the same
+root line pointer because we must have checked for old and new index
+keys while doing the first WARM update.
+
+2. Do not allow duplicate (key, ctid) index pointers. In the above
+example, since (aaaa, 1) already exists in the index, we must not insert
+a duplicate index entry.
+
+The patch currently implements approach 1, i.e. it does not WARM
+update a tuple that is already part of a WARM chain. HOT updates are
+still fine, because they do not add new index entries.
+
+Even with this restriction, WARM is a significant improvement because
+the number of regular (non-HOT) updates can be cut in half.
+
+Expression and Partial Indexes
+------------------------------
+
+Expressions may evaluate to the same value even if the underlying
+column values have changed. A simple example is an index on
+"lower(col)", which returns the same value if the new heap value
+differs only in case. So we cannot rely solely on the heap column
+check to decide whether or not to insert a new index entry for an
+expression index. Similarly, for partial indexes, the predicate
+expression must be evaluated to decide whether or not a new index
+entry is needed when columns referenced in the predicate change.
+
+(Neither of these is currently implemented; we simply disallow a WARM
+update if a column used in an expression index or an index predicate
+has changed.)
+
+
+Efficiently Finding the Root Line Pointer
+-----------------------------------------
+
+During a WARM update, we must be able to find the root line pointer of
+the tuple being updated. Normally, the t_ctid field in the heap tuple
+header is used to find the next tuple in the update chain. But the
+tuple being updated must be the last tuple in the update chain, and in
+that case the t_ctid field usually points to the tuple itself. So we
+can instead use t_ctid to store additional information in the last
+tuple of the update chain, provided the fact that the tuple is the
+last one is recorded elsewhere.
+
+We now utilize another bit from t_infomask2 to explicitly identify that
+this is the last tuple in the update chain.
+
+HEAP_LATEST_TUPLE - When this bit is set, the tuple is the last tuple in
+the update chain. The OffsetNumber part of t_ctid points to the root
+line pointer of the chain when HEAP_LATEST_TUPLE flag is set.
+
+If an UPDATE operation aborts, the last tuple in the update chain
+becomes dead, and the tuple that remains the last valid tuple in the
+chain no longer carries the root line pointer information. In such
+rare cases, the root line pointer must be found the hard way, by
+scanning the entire heap page.
+
+Tracking WARM Chains
+--------------------
+
+When a tuple is WARM updated, the old tuple, the new tuple, and every
+subsequent tuple in the chain are marked with a special
+HEAP_WARM_UPDATED flag. We use the last remaining bit in t_infomask2
+to store this information.
+
+When a tuple is returned from a WARM chain, the caller must perform
+additional checks to ensure that the tuple matches the index key. Even
+if the tuple precedes the WARM update in the chain, it must still be
+rechecked for an index key match (this covers the case where an old
+tuple is returned via the new index key). So we must follow the update
+chain to the end every time to check whether it is a WARM chain.
+
+Converting WARM chains back to HOT chains (VACUUM ?)
+----------------------------------------------------
+
+The current implementation of WARM allows only one WARM update per
+chain. This simplifies the design and addresses certain issues around
+duplicate key scans. But it also implies that the benefit of WARM can
+be no more than 50%, which is still significant; if we could convert
+WARM chains back to normal status, we could do far more WARM updates.
+
+A distinct property of a WARM chain is that at least one index has
+more than one live index entry pointing to the root of the chain. In
+other words, if we can remove the duplicate entry from every index, or
+conclusively prove that no duplicate index entries exist for the root
+line pointer, the chain can again be marked as HOT.
+
+A WARM chain has two parts, separated by the tuple that caused the
+WARM update. All tuples within each part have matching index keys, but
+certain index keys may differ between the two parts. Let's say we mark
+the heap tuples in the second part with a special HEAP_WARM_TUPLE
+flag. Similarly, the new index entries created by the first WARM
+update are marked with an INDEX_WARM_POINTER flag.
+
+There are thus two distinct parts of the WARM chain. In the first
+part, none of the tuples have the HEAP_WARM_TUPLE flag set; we call
+these CLEAR tuples. In the second part, every tuple has the flag set;
+we call these WARM tuples. Each part satisfies the HOT property on its
+own, i.e. all its tuples have the same values for the indexed columns.
+But the two parts are separated by the WARM update, which breaks the
+HOT property for one or more indexes.
+
+Heap chain: [1] [2] [3] [4]
+			[aaaa, 1111] -> [aaaa, 1111] -> [bbbb, 1111]W -> [bbbb, 1111]W
+
+Index1: 	(aaaa) points to 1 (satisfies only tuples without W)
+			(bbbb)W points to 1 (satisfies only tuples marked with W)
+
+Index2:		(1111) points to 1 (satisfies tuples with and without W)
+
+
+It's clear that for indexes with both CLEAR and WARM pointers, the
+CLEAR heap tuples will be reachable from the CLEAR index pointer and
+the WARM heap tuples from the WARM index pointer. But for indexes
+which have only CLEAR pointers, both CLEAR and WARM heap tuples will
+be reachable from the CLEAR pointers. Note that such indexes must not
+have received a new index entry during the WARM update.
+
+During the first heap scan of VACUUM, we look for candidate WARM
+chains. A WARM chain is a candidate for conversion if its tuples are
+either all CLEAR or all WARM. For each such candidate chain, we
+remember the root line pointer of the chain along with whether the
+chain has only CLEAR tuples or only WARM tuples.
+
+If we have a candidate WARM chain with WARM tuples, our goal is to
+remove the CLEAR index pointers to it. On the other hand, if the
+candidate WARM chain has only CLEAR tuples, our goal is to remove all
+WARM index pointers to it. But there is a catch. For Index2 above, we
+have only a CLEAR index pointer, and since all heap tuples, WARM or
+CLEAR, are reachable only via this pointer, it must not be removed. In
+other words, we should remove a CLEAR index pointer iff a WARM index
+pointer to the same root line pointer exists. Since index vacuum may
+visit these pointers in any order, we can't determine in a single
+index pass whether a WARM index pointer exists for a candidate WARM
+chain with all WARM tuples. So in the first index pass we count the
+number of CLEAR and WARM pointers to each candidate chain. In the
+second pass, we remove the CLEAR pointer to a WARM chain if another
+WARM pointer to the chain exists. A WARM pointer to a chain with WARM
+tuples is always preserved, but such pointers are converted into CLEAR
+pointers during the second index scan. Similarly, a CLEAR pointer to a
+chain with CLEAR tuples is always preserved. A WARM pointer to a chain
+with CLEAR tuples can always be removed, since that situation can
+arise only from an aborted WARM update. Note that all index pointers,
+CLEAR or WARM, to dead tuples are removed during the first index scan
+itself.
+
+Once we know for certain that all duplicate index pointers have been
+removed and the remaining index pointers have been changed to CLEAR
+pointers, we convert the WARM chain during the second heap scan by
+clearing the HEAP_WARM_UPDATED and HEAP_WARM_TUPLE flags on the
+tuples.
+
+There are some remaining problems around aborted vacuums. For example,
+if vacuum aborts after converting a WARM index pointer to a CLEAR
+pointer, but before we get a chance to remove the pre-existing CLEAR
+pointer, we end up with two CLEAR pointers to the same root. But since
+the HEAP_WARM_UPDATED flag on the heap tuple is still set, further
+WARM updates to the chain will not be allowed. We will need some
+special handling for the case of multiple CLEAR index pointers. We can
+either leave these WARM chains alone and let them die with a
+subsequent non-WARM update, or apply the heap-recheck logic during
+index vacuum to find the dead pointer. But such rechecks would cause
+random access to the heap and would be inefficient. Given that vacuum
+aborts are uncommon, we leave this case unhandled. We must still check
+for the presence of multiple CLEAR index pointers and ensure that we
+don't accidentally remove either of them (unless we know which one is
+dead), and must also not allow WARM updates to chains with more than
+one CLEAR pointer.
+
+Tuning AutoVacuum and Manual VACUUM
+-----------------------------------
+
+The current design of WARM cleanup requires two index passes. We
+optimise this by doing the second pass only for those indexes which
+have WARM index pointers. If out of N indexes on a table only K get
+updated, a WARM update creates WARM pointers in only those K indexes.
+Consequently, only those K indexes require two index scans. When
+K << N, the cost of the additional index scans should be limited.
+
+If an UPDATE requires WARM inserts into most of the indexes, the cost
+of doing WARM updates and the overhead of WARM cleanup may not be
+justified. The current design therefore avoids WARM updates when more
+than 50% of the indexes would require WARM inserts. For example, if
+you have a table with 4 indexes and an UPDATE is going to modify 3 of
+them, a WARM update won't be used. But if 2 or fewer indexes are being
+updated, a WARM update will be used, provided all other conditions are
+favourable.
+
+To give further control to the user, we've added a few more controlling
+parameters.
+
+autovacuum_warmcleanup_scale_factor - specifies, as a fraction of the
+total number of tuples, the threshold at which autovacuum considers
+WARM cleanup on a table. For example, if set to 0.20, WARM cleanup is
+done only if WARM chains account for at least 20% of the table.
+
+autovacuum_warmcleanup_index_scale_factor - specifies, as a fraction
+of the WARM chains, the threshold at which autovacuum considers WARM
+cleanup for an index. An index with very few WARM inserts can thus be
+skipped during WARM cleanup. Note that if an index is skipped, none of
+the candidate WARM chains pointed to by that index can be cleaned up.
+
+Both parameters can also be set on a per-table basis.
+
+For a manual VACUUM, the user can use a newly added option to force WARM
+cleanup.
+
+Memory Management for VACUUM
+----------------------------
+
+Since WARM cleanup requires tracking much more information than a
+regular VACUUM, we allocate a work area large enough to hold the
+required information while staying within maintenance_work_mem. If
+WARM cleanup is not requested, or autovacuum has decided not to do
+WARM cleanup, the entire memory is available for tracking dead tuples.
+But when we are doing WARM cleanup, we fill the work area from both
+ends: dead tuples are added from one end and candidate WARM chains
+from the other. When the allocated work memory is exhausted, we do one
+round of index and heap cleanup and then continue.
+
+Disabling WARM on a per-table Basis
+-----------------------------------
+
+A new table-level option (enable_warm) is added to enable or disable
+WARM on a given table. Note that while you can turn WARM on if it is
+currently off, you cannot turn WARM off once it has been turned on.
+Changing the option requires an AccessExclusiveLock on the table.
+
+Online Cleanup of WARM pointers
+-------------------------------
+
+During normal index scans, under certain conditions, we can do online
+cleanup of index pointers. This is very similar to how dead index
+pointers are tracked and marked with the LP_DEAD flag.
+
+During normal index scans, if we find a WARM chain with either all CLEAR or all
+WARM pointers then we do one of the following:
+
+1. If this is a WARM index pointer to a chain with WARM tuples, do nothing.
+2. If this is a CLEAR index pointer to a chain with WARM tuples
+	2a. If recheck returns false, kill the CLEAR pointer.
+	2b. If recheck returns true, convert the CLEAR pointer to a WARM pointer.
+3. If this is a WARM index pointer to a chain with CLEAR tuples, kill the
+   pointer.
+4. If this is a CLEAR index pointer to a chain with CLEAR tuples
+	4a. If recheck returns false, kill the CLEAR pointer
+	4b. If recheck returns true, do nothing.
+
+The choice of 2b is curious, because by doing that we actually modify
+an index which did not receive a WARM insert during the WARM update.
+But it allows us to avoid repeated rechecks of the tuple when the same
+tuple is accessed again and again. Since we only use the hint
+mechanism to mark the buffer dirty, this should not cause unnecessary
+IO.
+
+If we change any index pointer state this way, we never WAL-log the
+operation.
+
+CREATE INDEX CONCURRENTLY
+-------------------------
+
+Currently CREATE INDEX CONCURRENTLY (CIC) is implemented as a 3-phase
+process. In the first phase, we create a catalog entry for the new
+index so that the index is visible to all other backends, but we still
+don't use it for either reads or writes. We do, however, ensure that
+no new broken HOT chains are created by new transactions. In the
+second phase, we build the new index using an MVCC snapshot and then
+make the index available for inserts. We then do another pass and
+insert any missing tuples, each time indexing only the tuple's root
+line pointer. See README.HOT for details about how HOT impacts CIC and
+how the various challenges are tackled.
+
+WARM poses another challenge because it allows creation of HOT chains
+even when an index key has changed. Since the new index is not ready
+for insertion until the second phase is over, we might end up with a
+situation where a HOT chain contains tuples with different values for
+the indexed columns, yet only one of those values is indexed by the
+new index. Note that during the third phase, we only index tuples
+whose root line pointer is missing from the index. But we can't easily
+check whether an existing index tuple actually indexes the heap tuple
+visible to the new MVCC snapshot. Finding that out would require
+querying the index again for every tuple in the chain, especially if
+it's a WARM tuple, i.e. repeated index access. Another option would be
+to return the index keys along with the heap TIDs when the index is
+scanned to collect all indexed TIDs during the third phase. We could
+then compare the heap tuple against the already-indexed key and decide
+whether or not to index the new tuple.
+
+We solve this problem more simply by disallowing WARM updates until
+the index is ready for insertion. We don't need to disallow WARM
+wholesale; only updates that change the columns of the new index are
+prevented from being WARM updates.
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 30262ef..a65516d 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -97,9 +97,12 @@ static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				HeapTuple newtup, OffsetNumber root_offnum,
 				HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
-static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
+static void HeapCheckColumns(Relation relation,
 							 Bitmapset *interesting_cols,
-							 HeapTuple oldtup, HeapTuple newtup);
+							 HeapTuple oldtup, HeapTuple newtup,
+							 Bitmapset **toasted_attrs,
+							 Bitmapset **compressed_attrs,
+							 Bitmapset **modified_attrs);
 static bool heap_acquire_tuplock(Relation relation, ItemPointer tid,
 					 LockTupleMode mode, LockWaitPolicy wait_policy,
 					 bool *have_tuple_lock);
@@ -1974,6 +1977,212 @@ heap_fetch(Relation relation,
 }
 
 /*
+ * Check status of a (possibly) WARM chain.
+ *
+ * This function looks at a HOT/WARM chain starting at tid and returns a
+ * bitmask of information. We only follow the chain as long as it's known to
+ * be a valid HOT chain. Information returned by the function consists of:
+ *
+ *  HCWC_WARM_UPDATED_TUPLE - a tuple with HEAP_WARM_UPDATED is found somewhere
+ *  						  in the chain. Note that when a tuple is WARM
+ *  						  updated, both old and new versions are marked
+ *  						  with this flag. So presence of this flag
+ *  						  indicates that a WARM update was performed on
+ *  						  this chain, but the update may have either
+ *  						  committed or aborted.
+ *
+ *  HCWC_WARM_TUPLE  - a tuple with HEAP_WARM_TUPLE is found somewhere in
+ *					  the chain. This flag is set only on the new version of
+ *					  the tuple while performing WARM update.
+ *
+ *  HCWC_CLEAR_TUPLE - a tuple without HEAP_WARM_TUPLE is found somewhere in
+ *  					 the chain. This implies that the WARM update either
+ *  					 aborted or is recent enough that the old tuple has not
+ *  					 yet been pruned away by the chain pruning logic.
+ *
+ *	If stop_at_warm is true, we stop when the first HEAP_WARM_UPDATED tuple is
+ *	found and return information collected so far.
+ */
+HeapCheckWarmChainStatus
+heap_check_warm_chain(Page dp, ItemPointer tid, bool stop_at_warm)
+{
+	TransactionId				prev_xmax = InvalidTransactionId;
+	OffsetNumber				offnum;
+	HeapTupleData				heapTuple;
+	HeapCheckWarmChainStatus	status = 0;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+		{
+			if (ItemIdIsRedirected(lp))
+			{
+				/* Follow the redirect */
+				offnum = ItemIdGetRedirect(lp);
+				continue;
+			}
+			/* else must be end of chain */
+			break;
+		}
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+
+		if (HeapTupleHeaderIsWarmUpdated(heapTuple.t_data))
+		{
+			/* We found a WARM_UPDATED tuple */
+			status |= HCWC_WARM_UPDATED_TUPLE;
+
+			/*
+			 * If we've been told to stop at the first WARM_UPDATED tuple, just
+			 * return whatever information collected so far.
+			 */
+			if (stop_at_warm)
+				return status;
+
+			/*
+			 * Remember whether it's a CLEAR or a WARM tuple.
+			 */
+			if (HeapTupleHeaderIsWarm(heapTuple.t_data))
+				status |= HCWC_WARM_TUPLE;
+			else
+				status |= HCWC_CLEAR_TUPLE;
+		}
+		else
+			/* Must be a regular, non-WARM tuple */
+			status |= HCWC_CLEAR_TUPLE;
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
+		/*
+		 * It can't be a HOT chain if the tuple contains root line pointer
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	/* Return whatever status information we collected */
+	return status;
+}
+
+/*
+ * Scan through the WARM chain starting at tid and reset all WARM related
+ * flags. At the end, the chain will have all characteristics of a regular HOT
+ * chain.
+ *
+ * Return the number of cleared offnums. Cleared offnums are returned in the
+ * passed-in cleared_offnums array. The caller must ensure that the array is
+ * large enough to hold the maximum number of offnums that can be cleared by
+ * this invocation of heap_clear_warm_chain().
+ */
+int
+heap_clear_warm_chain(Page dp, ItemPointer tid, OffsetNumber *cleared_offnums)
+{
+	TransactionId				prev_xmax = InvalidTransactionId;
+	OffsetNumber				offnum;
+	HeapTupleData				heapTuple;
+	int							num_cleared = 0;
+
+	offnum = ItemPointerGetOffsetNumber(tid);
+	heapTuple.t_self = *tid;
+	/* Scan through possible multiple members of HOT-chain */
+	for (;;)
+	{
+		ItemId		lp;
+
+		/* check for bogus TID */
+		if (offnum < FirstOffsetNumber || offnum > PageGetMaxOffsetNumber(dp))
+			break;
+
+		lp = PageGetItemId(dp, offnum);
+
+		/* check for unused, dead, or redirected items */
+		if (!ItemIdIsNormal(lp))
+		{
+			if (ItemIdIsRedirected(lp))
+			{
+				/* Follow the redirect */
+				offnum = ItemIdGetRedirect(lp);
+				continue;
+			}
+			/* else must be end of chain */
+			break;
+		}
+
+		heapTuple.t_data = (HeapTupleHeader) PageGetItem(dp, lp);
+		ItemPointerSetOffsetNumber(&heapTuple.t_self, offnum);
+
+		/*
+		 * The xmin should match the previous xmax value, else the chain is
+		 * broken.
+		 */
+		if (TransactionIdIsValid(prev_xmax) &&
+			!TransactionIdEquals(prev_xmax,
+								 HeapTupleHeaderGetXmin(heapTuple.t_data)))
+			break;
+
+		/*
+		 * Clear WARM_UPDATED and WARM flags.
+		 */
+		if (HeapTupleHeaderIsWarmUpdated(heapTuple.t_data))
+		{
+			HeapTupleHeaderClearWarmUpdated(heapTuple.t_data);
+			HeapTupleHeaderClearWarm(heapTuple.t_data);
+			cleared_offnums[num_cleared++] = offnum;
+		}
+
+		/*
+		 * Check to see if HOT chain continues past this tuple; if so fetch
+		 * the next offnum and loop around.
+		 */
+		if (!HeapTupleIsHotUpdated(&heapTuple))
+			break;
+
+		/*
+		 * It can't be a HOT chain if the tuple contains the root line pointer
+		 */
+		if (HeapTupleHeaderHasRootOffset(heapTuple.t_data))
+			break;
+
+		offnum = ItemPointerGetOffsetNumber(&heapTuple.t_data->t_ctid);
+		prev_xmax = HeapTupleHeaderGetUpdateXid(heapTuple.t_data);
+	}
+
+	return num_cleared;
+}
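[Editor's note] The chain-walking and flag-clearing shape of heap_clear_warm_chain can be sketched with the page modeled as a dict from offnum to a mutable tuple record. Python sketch; the field names are invented and none of the xmin/xmax chain validation is modeled:

```python
def clear_warm_chain(page, start_offnum):
    """page: {offnum: {'warm_updated': bool, 'warm': bool, 'next': offnum|None}}.
    Walk the chain, reset both WARM flags, and return the cleared offnums."""
    cleared = []
    offnum = start_offnum
    while offnum in page:
        tup = page[offnum]
        if tup['warm_updated']:
            # Clear WARM_UPDATED and WARM flags, as the C code does.
            tup['warm_updated'] = tup['warm'] = False
            cleared.append(offnum)
        offnum = tup['next']    # follow t_ctid; None ends the chain
    return cleared
```

The returned offnum list is what the caller would hand to the WAL-logging routine (log_heap_warmclear in the patch) so the same flags can be cleared during redo.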
+
+/*
  *	heap_hot_search_buffer	- search HOT chain for tuple satisfying snapshot
  *
  * On entry, *tid is the TID of a tuple (either a simple tuple, or the root
@@ -1993,11 +2202,15 @@ heap_fetch(Relation relation,
  * Unlike heap_fetch, the caller must already have pin and (at least) share
  * lock on the buffer; it is still pinned/locked at exit.  Also unlike
  * heap_fetch, we do not report any pgstats count; caller may do so if wanted.
+ *
+ * If "status" is non-NULL, it is set on exit to the result of checking the
+ * chain for WARM tuples; the caller must recheck index quals when the chain
+ * contains a WARM updated tuple.
  */
 bool
 heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 					   Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call)
+					   bool *all_dead, bool first_call,
+					   HeapCheckWarmChainStatus *status)
 {
 	Page		dp = (Page) BufferGetPage(buffer);
 	TransactionId prev_xmax = InvalidTransactionId;
@@ -2010,6 +2223,9 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 	if (all_dead)
 		*all_dead = first_call;
 
+	if (status)
+		*status = 0;
+
 	Assert(TransactionIdIsValid(RecentGlobalXmin));
 
 	Assert(ItemPointerGetBlockNumber(tid) == BufferGetBlockNumber(buffer));
@@ -2019,6 +2235,22 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 
 	heapTuple->t_self = *tid;
 
+	/*
+	 * Check status of the chain if the caller has asked for it. We do it only
+	 * once for each chain.
+	 *
+	 * XXX This is not very efficient right now, and we should look for
+	 * possible improvements here.
+	 *
+	 * XXX We currently don't support turning enable_warm OFF once it's
+	 * turned ON. But if we ever do that, we must not rely on the
+	 * RelationWarmUpdatesEnabled check to decide whether a recheck is needed
+	 * or not.
+	 */
+	if (RelationWarmUpdatesEnabled(relation) && status)
+		*status = heap_check_warm_chain(dp, &heapTuple->t_self, false);
+
 	/* Scan through possible multiple members of HOT-chain */
 	for (;;)
 	{
@@ -2051,9 +2283,12 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		ItemPointerSetOffsetNumber(&heapTuple->t_self, offnum);
 
 		/*
-		 * Shouldn't see a HEAP_ONLY tuple at chain start.
+		 * Shouldn't see a HEAP_ONLY tuple at chain start, unless we are
+		 * dealing with a WARM updated tuple, in which case deferred triggers
+		 * may request to fetch a WARM tuple from the middle of a chain.
 		 */
-		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple))
+		if (at_chain_start && HeapTupleIsHeapOnly(heapTuple) &&
+				!HeapTupleIsWarmUpdated(heapTuple))
 			break;
 
 		/*
@@ -2114,7 +2349,8 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
 		 * Check to see if HOT chain continues past this tuple; if so fetch
 		 * the next offnum and loop around.
 		 */
-		if (HeapTupleIsHotUpdated(heapTuple))
+		if (HeapTupleIsHotUpdated(heapTuple) &&
+			!HeapTupleHeaderHasRootOffset(heapTuple->t_data))
 		{
 			Assert(ItemPointerGetBlockNumber(&heapTuple->t_data->t_ctid) ==
 				   ItemPointerGetBlockNumber(tid));
@@ -2138,18 +2374,48 @@ heap_hot_search_buffer(ItemPointer tid, Relation relation, Buffer buffer,
  */
 bool
 heap_hot_search(ItemPointer tid, Relation relation, Snapshot snapshot,
-				bool *all_dead)
+				bool *all_dead, bool *recheck, Buffer *cbuffer,
+				HeapTuple heapTuple)
 {
 	bool		result;
 	Buffer		buffer;
-	HeapTupleData heapTuple;
+	ItemPointerData ret_tid = *tid;
+	HeapCheckWarmChainStatus status = 0;
 
 	buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	LockBuffer(buffer, BUFFER_LOCK_SHARE);
-	result = heap_hot_search_buffer(tid, relation, buffer, snapshot,
-									&heapTuple, all_dead, true);
-	LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
-	ReleaseBuffer(buffer);
+	result = heap_hot_search_buffer(&ret_tid, relation, buffer, snapshot,
+									heapTuple, all_dead, true, &status);
+
+	/*
+	 * If the chain contains a WARM_UPDATED tuple, then we must do a recheck.
+	 */
+	if (HCWC_IS_WARM_UPDATED(status) && recheck)
+		*recheck = true;
+
+	/*
+	 * If we are returning a potential candidate tuple from this chain and the
+	 * caller has requested the "recheck" hint, keep the buffer locked and
+	 * pinned. The caller must release the lock and pin on the buffer in all
+	 * such cases.
+	 */
+	if (!result || !recheck || !(*recheck))
+	{
+		LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
+		ReleaseBuffer(buffer);
+	}
+
+	/*
+	 * Set the caller-supplied tid to the actual location of the tuple being
+	 * returned.
+	 */
+	if (result)
+	{
+		*tid = ret_tid;
+		if (cbuffer)
+			*cbuffer = buffer;
+	}
+
 	return result;
 }
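[Editor's note] The buffer-release contract added to heap_hot_search reduces to a small predicate: the buffer stays locked and pinned only when a candidate tuple is returned AND the caller both asked for and got the recheck hint. A Python sketch of that condition (names invented; this mirrors `if (!result || !recheck || !(*recheck))` release-the-buffer):

```python
def buffer_stays_locked(result, recheck_requested, warm_updated_chain):
    """result: a candidate tuple is being returned; recheck_requested: the
    caller passed a non-NULL recheck pointer; warm_updated_chain: the chain
    contains a WARM_UPDATED tuple, so *recheck was set true."""
    recheck_set = recheck_requested and warm_updated_chain
    # Buffer is released unless all three conditions hold.
    return bool(result and recheck_set)
```

In every other combination the function unlocks and releases the buffer itself, so callers only need to clean up in the one returning-with-recheck case.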
 
@@ -2792,7 +3058,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		{
 			XLogRecPtr	recptr;
 			xl_heap_multi_insert *xlrec;
-			uint8		info = XLOG_HEAP2_MULTI_INSERT;
+			uint8		info = XLOG_HEAP_MULTI_INSERT;
 			char	   *tupledata;
 			int			totaldatalen;
 			char	   *scratchptr = scratch;
@@ -2889,7 +3155,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			/* filtering by origin on a row level is much more efficient */
 			XLogSetRecordFlags(XLOG_INCLUDE_ORIGIN);
 
-			recptr = XLogInsert(RM_HEAP2_ID, info);
+			recptr = XLogInsert(RM_HEAP_ID, info);
 
 			PageSetLSN(page, recptr);
 		}
@@ -3045,7 +3311,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	bool		all_visible_cleared = false;
 	HeapTuple	old_key_tuple = NULL;	/* replica identity of the tuple */
 	bool		old_key_copied = false;
-	OffsetNumber	root_offnum;
+	OffsetNumber	root_offnum = InvalidOffsetNumber;
 
 	Assert(ItemPointerIsValid(tid));
 
@@ -3278,7 +3544,7 @@ l1:
 							  &new_xmax, &new_infomask, &new_infomask2);
 
 	/*
-	 * heap_get_root_tuple_one() may call palloc, which is disallowed once we
+	 * heap_get_root_tuple() may call palloc, which is disallowed once we
 	 * enter the critical section. So check if the root offset is cached in the
 	 * tuple and if not, fetch that information hard way before entering the
 	 * critical section.
@@ -3313,7 +3579,9 @@ l1:
 	}
 
 	/* store transaction information of xact deleting the tuple */
-	tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	tp.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(tp.t_data))
+		tp.t_data->t_infomask &= ~HEAP_MOVED;
 	tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 	tp.t_data->t_infomask |= new_infomask;
 	tp.t_data->t_infomask2 |= new_infomask2;
@@ -3508,15 +3776,21 @@ simple_heap_delete(Relation relation, ItemPointer tid)
 HTSU_Result
 heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode)
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update)
 {
 	HTSU_Result result;
 	TransactionId xid = GetCurrentTransactionId();
 	Bitmapset  *hot_attrs;
 	Bitmapset  *key_attrs;
 	Bitmapset  *id_attrs;
+	Bitmapset  *exprindx_attrs;
 	Bitmapset  *interesting_attrs;
 	Bitmapset  *modified_attrs;
+	Bitmapset  *notready_attrs;
+	Bitmapset  *compressed_attrs;
+	Bitmapset  *toasted_attrs;
+	List	   *indexattrsList;
 	ItemId		lp;
 	HeapTupleData oldtup;
 	HeapTuple	heaptup;
@@ -3524,8 +3798,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		old_key_copied = false;
 	Page		page;
 	BlockNumber block;
-	OffsetNumber	offnum;
-	OffsetNumber	root_offnum;
+	OffsetNumber	root_offnum = InvalidOffsetNumber;
 	MultiXactStatus mxact_status;
 	Buffer		buffer,
 				newbuf,
@@ -3537,6 +3810,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		have_tuple_lock = false;
 	bool		iscombo;
 	bool		use_hot_update = false;
+	bool		use_warm_update = false;
 	bool		hot_attrs_checked = false;
 	bool		key_intact;
 	bool		all_visible_cleared = false;
@@ -3562,6 +3836,10 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 				(errcode(ERRCODE_INVALID_TRANSACTION_STATE),
 				 errmsg("cannot update tuples during a parallel operation")));
 
+	/* Assume a non-WARM update */
+	if (warm_update)
+		*warm_update = false;
+
 	/*
 	 * Fetch the list of attributes to be checked for various operations.
 	 *
@@ -3582,10 +3860,14 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	key_attrs = RelationGetIndexAttrBitmap(relation, INDEX_ATTR_BITMAP_KEY);
 	id_attrs = RelationGetIndexAttrBitmap(relation,
 										  INDEX_ATTR_BITMAP_IDENTITY_KEY);
+	exprindx_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_EXPR_PREDICATE);
+	notready_attrs = RelationGetIndexAttrBitmap(relation,
+										  INDEX_ATTR_BITMAP_NOTREADY);
 
+	indexattrsList = RelationGetIndexAttrList(relation);
 
 	block = ItemPointerGetBlockNumber(otid);
-	offnum = ItemPointerGetOffsetNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
@@ -3605,8 +3887,11 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 		interesting_attrs = bms_add_members(interesting_attrs, hot_attrs);
 		hot_attrs_checked = true;
 	}
+
 	interesting_attrs = bms_add_members(interesting_attrs, key_attrs);
 	interesting_attrs = bms_add_members(interesting_attrs, id_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, exprindx_attrs);
+	interesting_attrs = bms_add_members(interesting_attrs, notready_attrs);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -3623,7 +3908,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	Assert(ItemIdIsNormal(lp));
 
 	/*
-	 * Fill in enough data in oldtup for HeapDetermineModifiedColumns to work
+	 * Fill in enough data in oldtup for HeapCheckColumns to work
 	 * properly.
 	 */
 	oldtup.t_tableOid = RelationGetRelid(relation);
@@ -3650,8 +3935,12 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	}
 
 	/* Determine columns modified by the update. */
-	modified_attrs = HeapDetermineModifiedColumns(relation, interesting_attrs,
-												  &oldtup, newtup);
+	HeapCheckColumns(relation, interesting_attrs,
+								 &oldtup, newtup, &toasted_attrs,
+								 &compressed_attrs, &modified_attrs);
+
+	if (modified_attrsp)
+		*modified_attrsp = bms_copy(modified_attrs);
 
 	/*
 	 * If we're not updating any "key" column, we can grab a weaker lock type.
@@ -3908,8 +4197,10 @@ l2:
 		bms_free(hot_attrs);
 		bms_free(key_attrs);
 		bms_free(id_attrs);
+		bms_free(exprindx_attrs);
 		bms_free(modified_attrs);
 		bms_free(interesting_attrs);
+		bms_free(notready_attrs);
 		return result;
 	}
 
@@ -4034,7 +4325,6 @@ l2:
 		uint16		infomask_lock_old_tuple,
 					infomask2_lock_old_tuple;
 		bool		cleared_all_frozen = false;
-		OffsetNumber	root_offnum;
 
 		/*
 		 * To prevent concurrent sessions from updating the tuple, we have to
@@ -4073,7 +4363,9 @@ l2:
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
-		oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		oldtup.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(oldtup.t_data))
+			oldtup.t_data->t_infomask &= ~HEAP_MOVED;
 		oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		HeapTupleClearHotUpdated(&oldtup);
 		/* ... and store info about transaction updating this tuple */
@@ -4227,6 +4519,75 @@ l2:
 		 */
 		if (hot_attrs_checked && !bms_overlap(modified_attrs, hot_attrs))
 			use_hot_update = true;
+		else
+		{
+			/*
+			 * If there are no WARM updates on this chain yet, let this
+			 * update be a WARM update. We must not do another WARM update
+			 * even if the previous WARM update at the end of the chain
+			 * aborted. That's why we look at the HEAP_WARM_UPDATED flag.
+			 *
+			 * We don't do WARM updates if one of the columns used in index
+			 * expressions is being modified. Since expressions may evaluate
+			 * to the same value even when the heap values change, we don't
+			 * have a good way to deal with duplicate key scans when
+			 * expressions are used in the index.
+			 *
+			 * We check whether the HOT attrs are a subset of the modified
+			 * attributes. Since HOT attrs include all index attributes, this
+			 * allows us to avoid doing a WARM update when all index
+			 * attributes are being updated; performing a WARM update then is
+			 * not a great idea because all indexes will receive a new entry
+			 * anyway.
+			 *
+			 * We also disable WARM temporarily if we are modifying a column
+			 * which is used by a new index that's being added. We can't
+			 * insert new entries into such indexes, and hence we must not
+			 * create WARM chains which are broken with respect to the new
+			 * index being added.
+			 *
+			 * Finally, we disable WARM if either the old or the new tuple has
+			 * toasted/compressed attributes. Detoasting and decompressing very
+			 * large attributes can make the recheck logic too slow and it can
+			 * make the code to determine modified attributes a lot slower too.
+			 * So if we see such attributes in either old or the new tuple, we
+			 * instead disable WARM. Now this is not foolproof because the new
+			 * tuple may not have been toasted when we ran HeapCheckColumns. So
+			 * if the UPDATE is changing previously untoasted attributes to
+			 * toasted attributes, it will skip the check and we will end up
+			 * doing a WARM update. The recheck logic should be prepared to
+			 * handle toasted and compressed values from the heap.
+			 */
+			if (RelationWarmUpdatesEnabled(relation) &&
+				relation->rd_supportswarm &&
+				!HeapTupleIsWarmUpdated(&oldtup) &&
+				!bms_overlap(modified_attrs, exprindx_attrs) &&
+				!bms_is_subset(hot_attrs, modified_attrs) &&
+				!bms_overlap(notready_attrs, modified_attrs) &&
+				!bms_overlap(toasted_attrs, hot_attrs) &&
+				!bms_overlap(compressed_attrs, hot_attrs))
+			{
+				int num_indexes, num_updating_indexes;
+				ListCell *l;
+
+				/*
+				 * Everything else is OK. Now check whether the update will
+				 * touch at most 50% of the indexes. Above that threshold we
+				 * just do a regular non-WARM update and save on the WARM
+				 * cleanup cost.
+				 */
+				num_indexes = list_length(indexattrsList);
+				num_updating_indexes = 0;
+				foreach (l, indexattrsList)
+				{
+					Bitmapset  *b = (Bitmapset *) lfirst(l);
+					if (bms_overlap(b, modified_attrs))
+						num_updating_indexes++;
+				}
+
+				if ((double)num_updating_indexes/num_indexes <= 0.5)
+					use_warm_update = true;
+			}
+		}
 	}
 	else
 	{
@@ -4273,6 +4634,32 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+
+		/*
+		 * Even if we are doing a HOT update, we must carry forward the WARM
+		 * flag because we may have already inserted another index entry
+		 * pointing to our root and a third entry may create duplicates.
+		 *
+		 * Note: If we ever have a mechanism to avoid duplicate <key, TID> in
+		 * indexes, we could look at relaxing this restriction and allow even
+		 * more WARM updates.
+		 */
+		if (HeapTupleIsWarmUpdated(&oldtup))
+		{
+			HeapTupleSetWarmUpdated(heaptup);
+			HeapTupleSetWarmUpdated(newtup);
+		}
+
+		/*
+		 * If the old tuple is a WARM tuple then mark the new tuple as a WARM
+		 * tuple as well.
+		 */
+		if (HeapTupleIsWarm(&oldtup))
+		{
+			HeapTupleSetWarm(heaptup);
+			HeapTupleSetWarm(newtup);
+		}
+
 		/*
 		 * For HOT (or WARM) updated tuples, we store the offset of the root
 		 * line pointer of this chain in the ip_posid field of the new tuple.
@@ -4285,12 +4672,45 @@ l2:
 		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
 			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
+	else if (use_warm_update)
+	{
+		/* Mark the old tuple as HOT-updated */
+		HeapTupleSetHotUpdated(&oldtup);
+		HeapTupleSetWarmUpdated(&oldtup);
+
+		/* And mark the new tuple as heap-only */
+		HeapTupleSetHeapOnly(heaptup);
+		/* Mark the new tuple as WARM tuple */
+		HeapTupleSetWarmUpdated(heaptup);
+		/* This update also starts the WARM chain */
+		HeapTupleSetWarm(heaptup);
+		Assert(!HeapTupleIsWarm(&oldtup));
+
+		/* Mark the caller's copy too, in case different from heaptup */
+		HeapTupleSetHeapOnly(newtup);
+		HeapTupleSetWarmUpdated(newtup);
+		HeapTupleSetWarm(newtup);
+
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
+		else
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
+		/* Let the caller know we did a WARM update */
+		if (warm_update)
+			*warm_update = true;
+	}
 	else
 	{
 		/* Make sure tuples are correctly marked as not-HOT */
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		HeapTupleClearWarmUpdated(heaptup);
+		HeapTupleClearWarmUpdated(newtup);
+		HeapTupleClearWarm(heaptup);
+		HeapTupleClearWarm(newtup);
 		root_offnum = InvalidOffsetNumber;
 	}
 
@@ -4309,7 +4729,9 @@ l2:
 	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
-	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	oldtup.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(oldtup.t_data))
+		oldtup.t_data->t_infomask &= ~HEAP_MOVED;
 	oldtup.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 	/* ... and store info about transaction updating this tuple */
 	Assert(TransactionIdIsValid(xmax_old_tuple));
@@ -4400,7 +4822,10 @@ l2:
 	if (have_tuple_lock)
 		UnlockTupleTuplock(relation, &(oldtup.t_self), *lockmode);
 
-	pgstat_count_heap_update(relation, use_hot_update);
+	/*
+	 * Count HOT and WARM updates separately
+	 */
+	pgstat_count_heap_update(relation, use_hot_update, use_warm_update);
 
 	/*
 	 * If heaptup is a private copy, release it.  Don't forget to copy t_self
@@ -4420,17 +4845,33 @@ l2:
 	bms_free(id_attrs);
 	bms_free(modified_attrs);
 	bms_free(interesting_attrs);
+	bms_free(exprindx_attrs);
+	bms_free(notready_attrs);
 
 	return HeapTupleMayBeUpdated;
 }
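[Editor's note] The ≤50% heuristic inside heap_update is easy to state on its own: count the indexes whose attribute sets overlap the modified attributes, and allow WARM only when at most half of them would need new entries. A Python sketch with attribute sets modeled as plain sets (the patch uses Bitmapsets from indexattrsList):

```python
def warm_update_worthwhile(index_attr_sets, modified_attrs):
    """index_attr_sets: one set of attribute numbers per index on the table.
    modified_attrs: attribute numbers changed by the UPDATE.
    Return True when at most 50% of the indexes see a modified attribute."""
    if not index_attr_sets:
        return False
    updating = sum(1 for attrs in index_attr_sets if attrs & modified_attrs)
    return updating / len(index_attr_sets) <= 0.5
```

The rationale, per the comment in the patch: when most indexes get a new entry anyway, the WARM bookkeeping and later chain-cleanup cost buys little, so a regular update is cheaper.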
 
 /*
- * Check if the specified attribute's value is same in both given tuples.
- * Subroutine for HeapDetermineModifiedColumns.
+ * Check if the specified attribute is toasted or compressed in either
+ * the old or the new tuple. For compressed or toasted attributes, we only do
+ * a very simplistic equality check by running datumIsEqual on the compressed
+ * or toasted form. This helps us perform HOT updates (if other
+ * conditions are favourable) when these attributes are not updated. But we
+ * might not be able to capture all possible scenarios, such as when the
+ * toasted/compressed attribute is updated but the modified value is the
+ * same as the original value.
+ *
+ * For WARM updates, we don't care what the equality check returns for
+ * toasted/compressed attributes. If such attributes are used in any of the
+ * indexes, we don't perform WARM updates irrespective of whether they are
+ * modified or not.
+ *
+ * Subroutine for HeapCheckColumns.
  */
-static bool
-heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
-					   HeapTuple tup1, HeapTuple tup2)
+static void
+heap_tuple_attr_check(TupleDesc tupdesc, int attrnum,
+					   HeapTuple tup1, HeapTuple tup2,
+					   bool *toasted, bool *compressed, bool *equal)
 {
 	Datum		value1,
 				value2;
@@ -4438,13 +4879,24 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 				isnull2;
 	Form_pg_attribute att;
 
+	*equal = true;
+	*toasted = *compressed = false;
+
 	/*
 	 * If it's a whole-tuple reference, say "not equal".  It's not really
 	 * worth supporting this case, since it could only succeed after a no-op
 	 * update, which is hardly a case worth optimizing for.
+	 *
+	 * XXX Does this need special attention for WARM, given that we don't
+	 * want to return "not equal" for something that is equal? But how does a
+	 * whole-tuple reference end up in the interesting_attrs list? Regression
+	 * tests have no coverage for this case as of now.
 	 */
 	if (attrnum == 0)
-		return false;
+	{
+		*equal = false;
+		return;
+	}
 
 	/*
 	 * Likewise, automatically say "not equal" for any system attribute other
@@ -4455,12 +4907,16 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 	{
 		if (attrnum != ObjectIdAttributeNumber &&
 			attrnum != TableOidAttributeNumber)
-			return false;
+		{
+			*equal = false;
+			/* these attributes can be neither toasted nor compressed */
+			return;
+		}
 	}
 
 	/*
 	 * Extract the corresponding values.  XXX this is pretty inefficient if
-	 * there are many indexed columns.  Should HeapDetermineModifiedColumns do
+	 * there are many indexed columns.  Should HeapCheckColumns do
 	 * a single heap_deform_tuple call on each tuple, instead?	But that
 	 * doesn't work for system columns ...
 	 */
@@ -4468,17 +4924,32 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 	value2 = heap_getattr(tup2, attrnum, tupdesc, &isnull2);
 
 	/*
+	 * If both are NULL, they can be considered equal.
+	 */
+	if (isnull1 && isnull2)
+	{
+		*equal = true;
+		*toasted = *compressed = false;
+		return;
+	}
+
+	/*
 	 * If one value is NULL and other is not, then they are certainly not
 	 * equal
 	 */
 	if (isnull1 != isnull2)
-		return false;
+		*equal = false;
 
-	/*
-	 * If both are NULL, they can be considered equal.
-	 */
-	if (isnull1)
-		return true;
+	/* attrnum == 0 is already handled above */
+	if ((attrnum < 0) && (*equal))
+	{
+		/*
+		 * The only allowed system columns are OIDs, so compare them as OIDs.
+		 * OIDs can never be compressed or toasted.
+		 */
+		*equal = (DatumGetObjectId(value1) == DatumGetObjectId(value2));
+		return;
+	}
 
 	/*
 	 * We do simple binary comparison of the two datums.  This may be overly
@@ -4489,46 +4960,90 @@ heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,
 	 * classes; furthermore, we cannot safely invoke user-defined functions
 	 * while holding exclusive buffer lock.
 	 */
-	if (attrnum <= 0)
+	Assert(attrnum <= tupdesc->natts);
+	att = tupdesc->attrs[attrnum - 1];
+
+	/*
+	 * If either the old or the new value is toasted, consider the attribute
+	 * as toasted. A NULL value is never toasted, and its Datum must not be
+	 * inspected.
+	 */
+	if ((att->attlen == -1) &&
+		((!isnull1 && VARATT_IS_EXTERNAL(value1)) ||
+		 (!isnull2 && VARATT_IS_EXTERNAL(value2))))
 	{
-		/* The only allowed system columns are OIDs, so do this */
-		return (DatumGetObjectId(value1) == DatumGetObjectId(value2));
+		*toasted = true;
 	}
-	else
+
+	/*
+	 * If either the old or the new value is compressed, consider the
+	 * attribute as compressed. A NULL value is never compressed, and its
+	 * Datum must not be inspected.
+	 */
+	if ((att->attlen == -1) &&
+		((!isnull1 && VARATT_IS_COMPRESSED(value1)) ||
+		 (!isnull2 && VARATT_IS_COMPRESSED(value2))))
+	{
+		*compressed = true;
+	}
+
+	/*
+	 * Check for equality, but only if we haven't already determined above
+	 * that the values are unequal (which happens when exactly one of them is
+	 * NULL).
+	 */
+	if (*equal)
 	{
-		Assert(attrnum <= tupdesc->natts);
-		att = tupdesc->attrs[attrnum - 1];
-		return datumIsEqual(value1, value2, att->attbyval, att->attlen);
+		*equal = datumIsEqual(value1, value2, att->attbyval, att->attlen);
 	}
 }
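[Editor's note] The NULL handling in heap_tuple_attr_check follows a simple rule: two NULLs compare equal for HOT/WARM purposes, exactly one NULL makes the values unequal, and otherwise a plain binary comparison of the stored form stands in for datumIsEqual (no detoasting). A Python sketch, with None modeling SQL NULL and bytes modeling the on-disk datum:

```python
def stored_values_equal(v1, v2):
    """v1/v2: stored attribute values, None meaning SQL NULL.
    Binary comparison of the stored (possibly toasted/compressed) form, as
    the patch does with datumIsEqual; no detoasting is attempted."""
    if v1 is None and v2 is None:
        return True       # both NULL: considered equal
    if (v1 is None) != (v2 is None):
        return False      # exactly one NULL: not equal
    return v1 == v2       # binary comparison of the stored bytes
```

Because the comparison is on the stored form, two logically equal values with different toasted/compressed representations can report "not equal"; the patch accepts that conservatism since it only forgoes an optimization.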
 
 /*
- * Check which columns are being updated.
+ * Check the old tuple and the new tuple for the given list of
+ * interesting_cols.
+ *
+ * Given an updated tuple, check if any of the interesting_cols are toasted or
+ * compressed, either in the old or the new tuple. Such columns are returned in
+ * the toasted_attrs and compressed_attrs respectively.
  *
- * Given an updated tuple, determine (and return into the output bitmapset),
- * from those listed as interesting, the set of columns that changed.
+ * Also check which columns are being changed in the update operation. For
+ * toasted/compressed columns, we only do a simple memcmp-based check without
+ * detoasting/decompressing the values. This implies that we might not be able
+ * to capture all cases where two values are equal.
  *
  * The input bitmapset is destructively modified; that is OK since this is
  * invoked at most once in heap_update.
  */
-static Bitmapset *
-HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
-							 HeapTuple oldtup, HeapTuple newtup)
+void
+HeapCheckColumns(Relation relation, Bitmapset *interesting_cols,
+							 HeapTuple oldtup, HeapTuple newtup,
+							 Bitmapset **toasted_attrs,
+							 Bitmapset **compressed_attrs,
+							 Bitmapset **modified_attrs)
 {
 	int		attnum;
-	Bitmapset *modified = NULL;
+
+	*toasted_attrs = NULL;
+	*compressed_attrs = NULL;
+	*modified_attrs = NULL;
 
 	while ((attnum = bms_first_member(interesting_cols)) >= 0)
 	{
+		bool equal, compressed, toasted;
+
 		attnum += FirstLowInvalidHeapAttributeNumber;
 
-		if (!heap_tuple_attr_equals(RelationGetDescr(relation),
-								   attnum, oldtup, newtup))
-			modified = bms_add_member(modified,
+		heap_tuple_attr_check(RelationGetDescr(relation),
+								   attnum, oldtup, newtup, &toasted,
+								   &compressed, &equal);
+		if (!equal)
+			*modified_attrs = bms_add_member(*modified_attrs,
+									  attnum - FirstLowInvalidHeapAttributeNumber);
+		if (toasted)
+			*toasted_attrs = bms_add_member(*toasted_attrs,
+									  attnum - FirstLowInvalidHeapAttributeNumber);
+		if (compressed)
+			*compressed_attrs = bms_add_member(*compressed_attrs,
 									  attnum - FirstLowInvalidHeapAttributeNumber);
 	}
 
-	return modified;
+	return;
 }
 
 /*
@@ -4540,7 +5055,8 @@ HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,
  * via ereport().
  */
 void
-simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
+simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup,
+		Bitmapset **modified_attrs, bool *warm_update)
 {
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
@@ -4549,7 +5065,7 @@ simple_heap_update(Relation relation, ItemPointer otid, HeapTuple tup)
 	result = heap_update(relation, otid, tup,
 						 GetCurrentCommandId(true), InvalidSnapshot,
 						 true /* wait for commit */ ,
-						 &hufd, &lockmode);
+						 &hufd, &lockmode, modified_attrs, warm_update);
 	switch (result)
 	{
 		case HeapTupleSelfUpdated:
@@ -4640,7 +5156,6 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	Page		page;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber	block;
-	OffsetNumber	offnum;
 	TransactionId xid,
 				xmax;
 	uint16		old_infomask,
@@ -4649,11 +5164,10 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	bool		first_time = true;
 	bool		have_tuple_lock = false;
 	bool		cleared_all_frozen = false;
-	OffsetNumber	root_offnum;
+	OffsetNumber	root_offnum = InvalidOffsetNumber;
 
 	*buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	block = ItemPointerGetBlockNumber(tid);
-	offnum = ItemPointerGetOffsetNumber(tid);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -5745,7 +6259,6 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 	bool		cleared_all_frozen = false;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber block;
-	OffsetNumber offnum;
 
 	ItemPointerCopy(tid, &tupid);
 
@@ -5754,7 +6267,6 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 		new_infomask = 0;
 		new_xmax = InvalidTransactionId;
 		block = ItemPointerGetBlockNumber(&tupid);
-		offnum = ItemPointerGetOffsetNumber(&tupid);
 
 		ItemPointerCopy(&tupid, &(mytup.t_self));
 
@@ -6226,7 +6738,9 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	PageSetPrunable(page, RecentGlobalXmin);
 
 	/* store transaction information of xact deleting the tuple */
-	tp.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+	tp.t_data->t_infomask &= ~HEAP_XMAX_BITS;
+	if (HeapTupleHeaderIsMoved(tp.t_data))
+		tp.t_data->t_infomask &= ~HEAP_MOVED;
 	tp.t_data->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 
 	/*
@@ -6800,7 +7314,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid,
 	 * Old-style VACUUM FULL is gone, but we have to keep this code as long as
 	 * we support having MOVED_OFF/MOVED_IN tuples in the database.
 	 */
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 
@@ -6819,7 +7333,7 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple, TransactionId cutoff_xid,
 			 * have failed; whereas a non-dead MOVED_IN tuple must mean the
 			 * xvac transaction succeeded.
 			 */
-			if (tuple->t_infomask & HEAP_MOVED_OFF)
+			if (HeapTupleHeaderIsMovedOff(tuple))
 				frz->frzflags |= XLH_INVALID_XVAC;
 			else
 				frz->frzflags |= XLH_FREEZE_XVAC;
@@ -7289,7 +7803,7 @@ heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple)
 			return true;
 	}
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 		if (TransactionIdIsNormal(xid))
@@ -7372,7 +7886,7 @@ heap_tuple_needs_freeze(HeapTupleHeader tuple, TransactionId cutoff_xid,
 			return true;
 	}
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		xid = HeapTupleHeaderGetXvac(tuple);
 		if (TransactionIdIsNormal(xid) &&
@@ -7398,7 +7912,7 @@ HeapTupleHeaderAdvanceLatestRemovedXid(HeapTupleHeader tuple,
 	TransactionId xmax = HeapTupleHeaderGetUpdateXid(tuple);
 	TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
-	if (tuple->t_infomask & HEAP_MOVED)
+	if (HeapTupleHeaderIsMoved(tuple))
 	{
 		if (TransactionIdPrecedes(*latestRemovedXid, xvac))
 			*latestRemovedXid = xvac;
@@ -7447,6 +7961,36 @@ log_heap_cleanup_info(RelFileNode rnode, TransactionId latestRemovedXid)
 }
 
 /*
+ * Perform XLogInsert for a heap-warm-clear operation.  Caller must already
+ * have modified the buffer and marked it dirty.
+ */
+XLogRecPtr
+log_heap_warmclear(Relation reln, Buffer buffer,
+			   OffsetNumber *cleared, int ncleared)
+{
+	xl_heap_warmclear	xlrec;
+	XLogRecPtr			recptr;
+
+	/* Caller should not call me on a non-WAL-logged relation */
+	Assert(RelationNeedsWAL(reln));
+
+	xlrec.ncleared = ncleared;
+
+	XLogBeginInsert();
+	XLogRegisterData((char *) &xlrec, SizeOfHeapWarmClear);
+
+	XLogRegisterBuffer(0, buffer, REGBUF_STANDARD);
+
+	if (ncleared > 0)
+		XLogRegisterBufData(0, (char *) cleared,
+							ncleared * sizeof(OffsetNumber));
+
+	recptr = XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_WARMCLEAR);
+
+	return recptr;
+}
+
+/*
  * Perform XLogInsert for a heap-clean operation.  Caller must already
  * have modified the buffer and marked it dirty.
  *
@@ -7601,6 +8145,7 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	bool		need_tuple_data = RelationIsLogicallyLogged(reln);
 	bool		init;
 	int			bufflags;
+	bool		warm_update = false;
 
 	/* Caller should not call me on a non-WAL-logged relation */
 	Assert(RelationNeedsWAL(reln));
@@ -7612,6 +8157,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	else
 		info = XLOG_HEAP_UPDATE;
 
+	if (HeapTupleIsWarmUpdated(newtup))
+		warm_update = true;
+
 	/*
 	 * If the old and new tuple are on the same page, we only need to log the
 	 * parts of the new tuple that were changed.  That saves on the amount of
@@ -7685,6 +8233,8 @@ log_heap_update(Relation reln, Buffer oldbuf,
 				xlrec.flags |= XLH_UPDATE_CONTAINS_OLD_KEY;
 		}
 	}
+	if (warm_update)
+		xlrec.flags |= XLH_UPDATE_WARM_UPDATE;
 
 	/* If new tuple is the single and first tuple on page... */
 	if (ItemPointerGetOffsetNumber(&(newtup->t_self)) == FirstOffsetNumber &&
@@ -8099,6 +8649,60 @@ heap_xlog_clean(XLogReaderState *record)
 		XLogRecordPageWithFreeSpace(rnode, blkno, freespace);
 }
 
+
+/*
+ * Handles HEAP2_WARMCLEAR record type
+ */
+static void
+heap_xlog_warmclear(XLogReaderState *record)
+{
+	XLogRecPtr	lsn = record->EndRecPtr;
+	xl_heap_warmclear	*xlrec = (xl_heap_warmclear *) XLogRecGetData(record);
+	Buffer		buffer;
+	RelFileNode rnode;
+	BlockNumber blkno;
+	XLogRedoAction action;
+
+	XLogRecGetBlockTag(record, 0, &rnode, NULL, &blkno);
+
+	/*
+	 * If we have a full-page image, restore it (using a cleanup lock) and
+	 * we're done.
+	 */
+	action = XLogReadBufferForRedoExtended(record, 0, RBM_NORMAL, true,
+										   &buffer);
+	if (action == BLK_NEEDS_REDO)
+	{
+		Page		page = (Page) BufferGetPage(buffer);
+		OffsetNumber *cleared;
+		int			ncleared;
+		Size		datalen;
+		int			i;
+
+		cleared = (OffsetNumber *) XLogRecGetBlockData(record, 0, &datalen);
+
+		ncleared = xlrec->ncleared;
+
+		for (i = 0; i < ncleared; i++)
+		{
+			ItemId			lp;
+			OffsetNumber	offnum = cleared[i];
+			HeapTupleData	heapTuple;
+
+			lp = PageGetItemId(page, offnum);
+			heapTuple.t_data = (HeapTupleHeader) PageGetItem(page, lp);
+
+			HeapTupleHeaderClearWarmUpdated(heapTuple.t_data);
+			HeapTupleHeaderClearWarm(heapTuple.t_data);
+		}
+
+		PageSetLSN(page, lsn);
+		MarkBufferDirty(buffer);
+	}
+	if (BufferIsValid(buffer))
+		UnlockReleaseBuffer(buffer);
+}
+
 /*
  * Replay XLOG_HEAP2_VISIBLE record.
  *
@@ -8345,7 +8949,9 @@ heap_xlog_delete(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		HeapTupleHeaderClearHotUpdated(htup);
 		fix_infomask_from_infobits(xlrec->infobits_set,
@@ -8366,7 +8972,7 @@ heap_xlog_delete(XLogReaderState *record)
 		if (!HeapTupleHeaderHasRootOffset(htup))
 		{
 			OffsetNumber	root_offnum;
-			root_offnum = heap_get_root_tuple(page, xlrec->offnum); 
+			root_offnum = heap_get_root_tuple(page, xlrec->offnum);
 			HeapTupleHeaderSetHeapLatest(htup, root_offnum);
 		}
 
@@ -8662,16 +9268,22 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 	Size		freespace = 0;
 	XLogRedoAction oldaction;
 	XLogRedoAction newaction;
+	bool		warm_update = false;
 
 	/* initialize to keep the compiler quiet */
 	oldtup.t_data = NULL;
 	oldtup.t_len = 0;
 
+	if (xlrec->flags & XLH_UPDATE_WARM_UPDATE)
+		warm_update = true;
+
 	XLogRecGetBlockTag(record, 0, &rnode, NULL, &newblk);
 	if (XLogRecGetBlockTag(record, 1, NULL, NULL, &oldblk))
 	{
 		/* HOT updates are never done across pages */
 		Assert(!hot_update);
+		/* WARM updates are never done across pages */
+		Assert(!warm_update);
 	}
 	else
 		oldblk = newblk;
@@ -8731,6 +9343,11 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 								   &htup->t_infomask2);
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
+
+		/* Mark the old tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetWarmUpdated(htup);
+
 		/* Set forward chain link in t_ctid */
 		HeapTupleHeaderSetNextTid(htup, &newtid);
 
@@ -8866,6 +9483,10 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
 
+		/* Mark the new tuple as a WARM tuple */
+		if (warm_update)
+			HeapTupleHeaderSetWarmUpdated(htup);
+
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
@@ -8993,7 +9614,9 @@ heap_xlog_lock(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		fix_infomask_from_infobits(xlrec->infobits_set, &htup->t_infomask,
 								   &htup->t_infomask2);
@@ -9072,7 +9695,9 @@ heap_xlog_lock_updated(XLogReaderState *record)
 
 		htup = (HeapTupleHeader) PageGetItem(page, lp);
 
-		htup->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
+		htup->t_infomask &= ~HEAP_XMAX_BITS;
+		if (HeapTupleHeaderIsMoved(htup))
+			htup->t_infomask &= ~HEAP_MOVED;
 		htup->t_infomask2 &= ~HEAP_KEYS_UPDATED;
 		fix_infomask_from_infobits(xlrec->infobits_set, &htup->t_infomask,
 								   &htup->t_infomask2);
@@ -9141,6 +9766,9 @@ heap_redo(XLogReaderState *record)
 		case XLOG_HEAP_INSERT:
 			heap_xlog_insert(record);
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			heap_xlog_multi_insert(record);
+			break;
 		case XLOG_HEAP_DELETE:
 			heap_xlog_delete(record);
 			break;
@@ -9169,7 +9797,7 @@ heap2_redo(XLogReaderState *record)
 {
 	uint8		info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
 
-	switch (info & XLOG_HEAP_OPMASK)
+	switch (info & XLOG_HEAP2_OPMASK)
 	{
 		case XLOG_HEAP2_CLEAN:
 			heap_xlog_clean(record);
@@ -9183,9 +9811,6 @@ heap2_redo(XLogReaderState *record)
 		case XLOG_HEAP2_VISIBLE:
 			heap_xlog_visible(record);
 			break;
-		case XLOG_HEAP2_MULTI_INSERT:
-			heap_xlog_multi_insert(record);
-			break;
 		case XLOG_HEAP2_LOCK_UPDATED:
 			heap_xlog_lock_updated(record);
 			break;
@@ -9199,6 +9824,9 @@ heap2_redo(XLogReaderState *record)
 		case XLOG_HEAP2_REWRITE:
 			heap_xlog_logical_rewrite(record);
 			break;
+		case XLOG_HEAP2_WARMCLEAR:
+			heap_xlog_warmclear(record);
+			break;
 		default:
 			elog(PANIC, "heap2_redo: unknown op code %u", info);
 	}
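The new `log_heap_warmclear`/`heap_xlog_warmclear` pair above registers a small fixed-size header (`ncleared`) followed by a variable-length array of `OffsetNumber` as registered buffer data. A minimal standalone sketch of that record layout and its redo-side recovery (the struct mirrors the patch, but the flat-buffer packing here is a simplification of what `XLogRegisterData`/`XLogRegisterBufData` actually do):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef uint16_t OffsetNumber;          /* as in PostgreSQL */

typedef struct xl_heap_warmclear        /* fixed part of the record */
{
	int		ncleared;                   /* number of line pointers cleared */
} xl_heap_warmclear;

/*
 * Pack the header plus the offset array into one flat buffer, analogous to
 * what the registered record data looks like after XLogInsert.  Returns the
 * number of bytes written.
 */
static size_t
pack_warmclear(char *buf, const OffsetNumber *cleared, int ncleared)
{
	xl_heap_warmclear hdr;

	hdr.ncleared = ncleared;
	memcpy(buf, &hdr, sizeof(hdr));
	memcpy(buf + sizeof(hdr), cleared, ncleared * sizeof(OffsetNumber));
	return sizeof(hdr) + ncleared * sizeof(OffsetNumber);
}

/*
 * Redo side: recover ncleared and the offset array, as heap_xlog_warmclear
 * does via XLogRecGetData and XLogRecGetBlockData.
 */
static int
unpack_warmclear(const char *buf, OffsetNumber *cleared_out)
{
	xl_heap_warmclear hdr;

	memcpy(&hdr, buf, sizeof(hdr));
	memcpy(cleared_out, buf + sizeof(hdr),
		   hdr.ncleared * sizeof(OffsetNumber));
	return hdr.ncleared;
}
```

On replay, each recovered offset is used to locate a line pointer and clear the WARM bits on the tuple it points to, exactly as the loop in `heap_xlog_warmclear` does.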
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index f54337c..6a3baff 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -834,6 +834,13 @@ heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				continue;
 
+			/*
+			 * If the tuple has a root line pointer, it must be the end of the
+			 * chain.
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			/* Set up to scan the HOT-chain */
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
@@ -928,6 +935,6 @@ heap_get_root_tuple(Page page, OffsetNumber target_offnum)
 void
 heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 {
-	return heap_get_root_tuples_internal(page, InvalidOffsetNumber,
+	heap_get_root_tuples_internal(page, InvalidOffsetNumber,
 			root_offsets);
 }
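The pruneheap.c change above stops the chain walk as soon as a tuple carries its root line pointer, which addresses the "Root Pointer Search" concern: without a cached root offset, finding the root of a HOT chain member means scanning line pointers on the page and following chains. A toy illustration of the two lookup costs (all types and the page layout here are simplified, hypothetical stand-ins for the real line-pointer machinery):

```c
#include <assert.h>
#include <stdint.h>

#define NPAGE_ITEMS 8
#define INVALID_OFF 0

/*
 * Toy tuple: "next" links to the next HOT chain member (0 = none),
 * "root" caches the root line pointer once known (0 = not cached),
 * "heap_only" marks tuples that cannot start a chain.
 */
typedef struct
{
	uint16_t	next;
	uint16_t	root;
	uint8_t		heap_only;
} ToyTuple;

/*
 * Without a cached root, finding the root of "target" means scanning every
 * slot that could start a chain and walking forward -- the cost Andres
 * pointed out.
 */
static uint16_t
find_root_by_scan(const ToyTuple *page, uint16_t target)
{
	for (uint16_t off = 1; off <= NPAGE_ITEMS; off++)
	{
		if (page[off].heap_only)
			continue;			/* heap-only tuples can't start a chain */
		for (uint16_t cur = off; cur != INVALID_OFF; cur = page[cur].next)
			if (cur == target)
				return off;		/* chain starting at off reaches target */
	}
	return INVALID_OFF;
}

/* With the patch's cached root offset, the common case is O(1). */
static uint16_t
find_root_cached(const ToyTuple *page, uint16_t target)
{
	if (page[target].root != INVALID_OFF)
		return page[target].root;
	return find_root_by_scan(page, target);
}
```

The `HeapTupleHeaderHasRootOffset` check in the diff plays the role of the cached-root fast path here.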
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index 2d3ae9b..bd469ee 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -404,6 +404,14 @@ rewrite_heap_tuple(RewriteState state,
 		old_tuple->t_data->t_infomask & HEAP_XACT_MASK;
 
 	/*
+	 * We must clear the HEAP_WARM_TUPLE flag if the HEAP_WARM_UPDATED flag
+	 * was cleared above.
+	 */
+	if (HeapTupleHeaderIsWarmUpdated(old_tuple->t_data))
+		HeapTupleHeaderClearWarm(new_tuple->t_data);
+
+	/*
 	 * While we have our hands on the tuple, we may as well freeze any
 	 * eligible xmin or xmax, so that future VACUUM effort can be saved.
 	 */
@@ -428,7 +436,7 @@ rewrite_heap_tuple(RewriteState state,
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
 
-		/* 
+		/*
 		 * We've already checked that this is not the last tuple in the chain,
 		 * so fetch the next TID in the chain.
 		 */
@@ -737,7 +745,7 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		/* 
+		/*
 		 * Set t_ctid just to ensure that block number is copied correctly, but
 		 * then immediately mark the tuple as the latest.
 		 */
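The rewriteheap.c hunk above keeps an invariant while copying tuples during a rewrite: if `HEAP_WARM_UPDATED` is dropped from the copy, `HEAP_WARM_TUPLE` must be dropped too, so a rewritten heap never carries a stale WARM marker. A tiny self-contained sketch of that rule (the bit values and struct are hypothetical; the real flags live in the tuple header macros):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical bit values, for illustration only. */
#define HEAP_WARM_TUPLE		0x0001
#define HEAP_WARM_UPDATED	0x0002

typedef struct
{
	uint16_t	infomask;
} ToyTupleHeader;

/*
 * Invariant kept by rewrite_heap_tuple: whenever the copied tuple loses
 * HEAP_WARM_UPDATED, it must lose HEAP_WARM_TUPLE as well.
 */
static void
copy_for_rewrite(const ToyTupleHeader *old_tup, ToyTupleHeader *new_tup)
{
	new_tup->infomask = old_tup->infomask;
	if (old_tup->infomask & HEAP_WARM_UPDATED)
		new_tup->infomask &= ~(HEAP_WARM_TUPLE | HEAP_WARM_UPDATED);
}
```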
diff --git a/src/backend/access/heap/tuptoaster.c b/src/backend/access/heap/tuptoaster.c
index aa5a45d..bab48fd 100644
--- a/src/backend/access/heap/tuptoaster.c
+++ b/src/backend/access/heap/tuptoaster.c
@@ -1688,7 +1688,8 @@ toast_save_datum(Relation rel, Datum value,
 							 toastrel,
 							 toastidxs[i]->rd_index->indisunique ?
 							 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-							 NULL);
+							 NULL,
+							 false);
 		}
 
 		/*
diff --git a/src/backend/access/index/genam.c b/src/backend/access/index/genam.c
index a91fda7..c650be4 100644
--- a/src/backend/access/index/genam.c
+++ b/src/backend/access/index/genam.c
@@ -115,6 +115,8 @@ RelationGetIndexScan(Relation indexRelation, int nkeys, int norderbys)
 	scan->xactStartedInRecovery = TransactionStartedDuringRecovery();
 	scan->ignore_killed_tuples = !scan->xactStartedInRecovery;
 
+	scan->warm_prior_tuple = false;
+
 	scan->opaque = NULL;
 
 	scan->xs_itup = NULL;
@@ -127,6 +129,8 @@ RelationGetIndexScan(Relation indexRelation, int nkeys, int norderbys)
 	scan->xs_cbuf = InvalidBuffer;
 	scan->xs_continue_hot = false;
 
+	scan->indexInfo = NULL;
+
 	return scan;
 }
 
diff --git a/src/backend/access/index/indexam.c b/src/backend/access/index/indexam.c
index cc5ac8b..d655d60 100644
--- a/src/backend/access/index/indexam.c
+++ b/src/backend/access/index/indexam.c
@@ -197,7 +197,8 @@ index_insert(Relation indexRelation,
 			 ItemPointer heap_t_ctid,
 			 Relation heapRelation,
 			 IndexUniqueCheck checkUnique,
-			 IndexInfo *indexInfo)
+			 IndexInfo *indexInfo,
+			 bool warm_update)
 {
 	RELATION_CHECKS;
 	CHECK_REL_PROCEDURE(aminsert);
@@ -207,6 +208,12 @@ index_insert(Relation indexRelation,
 									   (HeapTuple) NULL,
 									   InvalidBuffer);
 
+	if (warm_update)
+	{
+		Assert(indexRelation->rd_amroutine->amwarminsert != NULL);
+		return indexRelation->rd_amroutine->amwarminsert(indexRelation, values,
+				isnull, heap_t_ctid, heapRelation, checkUnique, indexInfo);
+	}
 	return indexRelation->rd_amroutine->aminsert(indexRelation, values, isnull,
 												 heap_t_ctid, heapRelation,
 												 checkUnique, indexInfo);
@@ -291,6 +298,25 @@ index_beginscan_internal(Relation indexRelation,
 	scan->parallel_scan = pscan;
 	scan->xs_temp_snap = temp_snap;
 
+	/*
+	 * If the index supports recheck, make sure that the index tuple is saved
+	 * during index scans. Also build and cache IndexInfo which is used by
+	 * amrecheck routine.
+	 *
+	 * XXX Ideally, we should look at all indexes on the table and check if
+	 * WARM is at all supported on the base table. If WARM is not supported
+	 * then we don't need to do any recheck. RelationGetIndexAttrBitmap() does
+	 * do that and sets rd_supportswarm after looking at all indexes. But we
+	 * don't know if the function was called earlier in the session when we're
+	 * here. We can't call it now because there exists a risk of causing
+	 * deadlock.
+	 */
+	if (indexRelation->rd_amroutine->amrecheck)
+	{
+		scan->xs_want_itup = true;
+		scan->indexInfo = BuildIndexInfo(indexRelation);
+	}
+
 	return scan;
 }
 
@@ -327,6 +353,7 @@ index_rescan(IndexScanDesc scan,
 	scan->xs_continue_hot = false;
 
 	scan->kill_prior_tuple = false;		/* for safety */
+	scan->warm_prior_tuple = false;		/* for safety */
 
 	scan->indexRelation->rd_amroutine->amrescan(scan, keys, nkeys,
 												orderbys, norderbys);
@@ -358,6 +385,10 @@ index_endscan(IndexScanDesc scan)
 	if (scan->xs_temp_snap)
 		UnregisterSnapshot(scan->xs_snapshot);
 
+	/* Free cached IndexInfo, if any */
+	if (scan->indexInfo)
+		pfree(scan->indexInfo);
+
 	/* Release the scan data structure itself */
 	IndexScanEnd(scan);
 }
@@ -402,6 +433,7 @@ index_restrpos(IndexScanDesc scan)
 	scan->xs_continue_hot = false;
 
 	scan->kill_prior_tuple = false;		/* for safety */
+	scan->warm_prior_tuple = false;		/* for safety */
 
 	scan->indexRelation->rd_amroutine->amrestrpos(scan);
 }
@@ -535,13 +567,14 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
 	/*
 	 * The AM's amgettuple proc finds the next index entry matching the scan
 	 * keys, and puts the TID into scan->xs_ctup.t_self.  It should also set
-	 * scan->xs_recheck and possibly scan->xs_itup/scan->xs_hitup, though we
+	 * scan->xs_tuple_recheck and possibly scan->xs_itup/scan->xs_hitup, though we
 	 * pay no attention to those fields here.
 	 */
 	found = scan->indexRelation->rd_amroutine->amgettuple(scan, direction);
 
-	/* Reset kill flag immediately for safety */
+	/* Reset kill/warm flags immediately for safety */
 	scan->kill_prior_tuple = false;
+	scan->warm_prior_tuple = false;
 
 	/* If we're out of index entries, we're done */
 	if (!found)
@@ -574,7 +607,7 @@ index_getnext_tid(IndexScanDesc scan, ScanDirection direction)
  * dropped in a future index_getnext_tid, index_fetch_heap or index_endscan
  * call).
  *
- * Note: caller must check scan->xs_recheck, and perform rechecking of the
+ * Note: caller must check scan->xs_tuple_recheck, and perform rechecking of the
  * scan keys if required.  We do not do that here because we don't have
  * enough information to do it efficiently in the general case.
  * ----------------
@@ -585,6 +618,8 @@ index_fetch_heap(IndexScanDesc scan)
 	ItemPointer tid = &scan->xs_ctup.t_self;
 	bool		all_dead = false;
 	bool		got_heap_tuple;
+	HeapTuple	heaptup = NULL;
+
 
 	/* We can skip the buffer-switching logic if we're in mid-HOT chain. */
 	if (!scan->xs_continue_hot)
@@ -605,37 +640,171 @@ index_fetch_heap(IndexScanDesc scan)
 
 	/* Obtain share-lock on the buffer so we can examine visibility */
 	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_SHARE);
+
+	/*
+	 * Fetch the status of the WARM chain just once per HOT chain. We cache the
+	 * result in xs_hot_chain_status when the HOT chain is searched for the
+	 * first time.
+	 */
 	got_heap_tuple = heap_hot_search_buffer(tid, scan->heapRelation,
 											scan->xs_cbuf,
 											scan->xs_snapshot,
 											&scan->xs_ctup,
 											&all_dead,
-											!scan->xs_continue_hot);
-	LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+											!scan->xs_continue_hot,
+											scan->xs_continue_hot ? NULL : &scan->xs_hot_chain_status);
 
 	if (got_heap_tuple)
 	{
+		bool res = true;
+		bool tuple_recheck = true;
+		bool is_warm;
+
+		/*
+		 * OK, we got a tuple which satisfies the snapshot, but if it's part of
+		 * a WARM chain, we must do additional checks to ensure that we are
+		 * indeed returning a correct tuple. Note that if the index AM does not
+		 * implement the amrecheck method, then we don't do any additional
+		 * checks, since WARM must have been disabled on such tables.
+		 */
+		if (scan->xs_itup && scan->indexRelation->rd_amroutine->amrecheck)
+		{
+			is_warm = scan->indexRelation->rd_amroutine->amiswarm(scan->indexRelation,
+						scan->xs_itup);
+
+			/*
+			 * If the chain has only WARM tuples then a WARM index pointer must
+			 * satisfy all tuples in the chain. So we need not do any recheck,
+			 * but set res to true. On the other hand, a WARM pointer to a
+			 * CLEAR chain would not satisfy any tuple in the chain, so we just
+			 * set res to false and avoid a recheck.
+			 *
+			 * If the chain has only CLEAR tuples then a WARM index pointer
+			 * must not satisfy any tuple from the chain. A WARM pointer to a
+			 * CLEAR chain can only occur because of an aborted WARM update. We
+			 * can kill such WARM pointers immediately.
+			 */
+			if (HCWC_IS_ALL_WARM(scan->xs_hot_chain_status))
+			{
+				if (is_warm)
+				{
+					tuple_recheck = false;
+					res = true;
+				}
+			}
+			else if (HCWC_IS_ALL_CLEAR(scan->xs_hot_chain_status))
+			{
+				if (is_warm)
+				{
+					tuple_recheck = false;
+					res = false;
+					scan->kill_prior_tuple = true;
+				}
+			}
+		}
+		else
+		{
+			/*
+			 * We can't (and should not) do a recheck if the index tuple is
+			 * not available or if the index AM does not implement a recheck
+			 * method.
+			 */
+			tuple_recheck = false;
+		}
+
+		/*
+		 * Ok. We must recheck to decide whether to return a tuple from the
+		 * current index pointer.
+		 *
+		 * XXX If the heap tuple has toasted data, we should get a copy of the
+		 * tuple, drop the buffer lock and then work with the copy. Otherwise
+		 * we might risk getting into a deadlock.
+		 *
+		 * XXX In theory, we don't allow WARM updates when either old and new
+		 * tuple has toasted attributes. But if we ever hit a situation where
+		 * we are presented with a heap tuple with toasted values, recheck
+		 * should be able to handle that.
+		 */
+		if (tuple_recheck)
+		{
+			if (HeapTupleHasExternal(&scan->xs_ctup))
+			{
+				heaptup = heap_copytuple(&scan->xs_ctup);
+				LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+			}
+			else
+				heaptup = &scan->xs_ctup;
+
+			Assert(RelationWarmUpdatesEnabled(scan->heapRelation));
+			res = scan->indexRelation->rd_amroutine->amrecheck(
+						scan->indexRelation,
+						scan->indexInfo,
+						scan->xs_itup,
+						scan->heapRelation,
+						heaptup);
+
+			/*
+			 * If it's a CLEAR pointer to a chain with only WARM tuples then it
+			 * could be the only index pointer pointing to this chain or it
+			 * could be a duplicate CLEAR pointer resulting from an aborted
+			 * vacuum. We consult the recheck result and either kill the
+			 * pointer or mark it WARM to match the state of the chain. This
+			 * avoids repeated evaluation of recheck when the index is
+			 * repeatedly used to query the table.
+			 */
+			if (!is_warm && HCWC_IS_ALL_WARM(scan->xs_hot_chain_status))
+			{
+				if (!res)
+					scan->kill_prior_tuple = true;
+				else
+					scan->warm_prior_tuple = true;
+			}
+
+			/*
+			 * XXX Can we have a CLEAR pointer to a CLEAR chain and still not
+			 * see any tuple from the chain? Should we just Assert that res
+			 * must always be true?
+			 */
+			if (!is_warm && HCWC_IS_ALL_CLEAR(scan->xs_hot_chain_status))
+			{
+				Assert(res);
+				if (!res)
+					scan->kill_prior_tuple = true;
+			}
+		}
+
+		if (heaptup && heaptup != &scan->xs_ctup)
+			heap_freetuple(heaptup);
+		else
+			LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
+
 		/*
 		 * Only in a non-MVCC snapshot can more than one member of the HOT
 		 * chain be visible.
 		 */
 		scan->xs_continue_hot = !IsMVCCSnapshot(scan->xs_snapshot);
 		pgstat_count_heap_fetch(scan->indexRelation);
-		return &scan->xs_ctup;
+
+		if (res)
+			return &scan->xs_ctup;
 	}
+	else
+	{
+		LockBuffer(scan->xs_cbuf, BUFFER_LOCK_UNLOCK);
 
-	/* We've reached the end of the HOT chain. */
-	scan->xs_continue_hot = false;
+		/* We've reached the end of the HOT chain. */
+		scan->xs_continue_hot = false;
 
-	/*
-	 * If we scanned a whole HOT chain and found only dead tuples, tell index
-	 * AM to kill its entry for that TID (this will take effect in the next
-	 * amgettuple call, in index_getnext_tid).  We do not do this when in
-	 * recovery because it may violate MVCC to do so.  See comments in
-	 * RelationGetIndexScan().
-	 */
-	if (!scan->xactStartedInRecovery)
-		scan->kill_prior_tuple = all_dead;
+		/*
+		 * If we scanned a whole HOT chain and found only dead tuples, tell index
+		 * AM to kill its entry for that TID (this will take effect in the next
+		 * amgettuple call, in index_getnext_tid).  We do not do this when in
+		 * recovery because it may violate MVCC to do so.  See comments in
+		 * RelationGetIndexScan().
+		 */
+		if (!scan->xactStartedInRecovery)
+			scan->kill_prior_tuple = all_dead;
+	}
 
 	return NULL;
 }
@@ -719,6 +888,7 @@ index_getbitmap(IndexScanDesc scan, TIDBitmap *bitmap)
 
 	/* just make sure this is false... */
 	scan->kill_prior_tuple = false;
+	scan->warm_prior_tuple = false;
 
 	/*
 	 * have the am's getbitmap proc do all the work.
diff --git a/src/backend/access/nbtree/nbtinsert.c b/src/backend/access/nbtree/nbtinsert.c
index 6dca810..463d4bf 100644
--- a/src/backend/access/nbtree/nbtinsert.c
+++ b/src/backend/access/nbtree/nbtinsert.c
@@ -20,6 +20,7 @@
 #include "access/nbtxlog.h"
 #include "access/transam.h"
 #include "access/xloginsert.h"
+#include "catalog/index.h"
 #include "miscadmin.h"
 #include "storage/lmgr.h"
 #include "storage/predicate.h"
@@ -250,6 +251,10 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 	BTPageOpaque opaque;
 	Buffer		nbuf = InvalidBuffer;
 	bool		found = false;
+	Buffer		buffer;
+	HeapTupleData	heapTuple;
+	bool		recheck = false;
+	IndexInfo	*indexInfo = BuildIndexInfo(rel);
 
 	/* Assume unique until we find a duplicate */
 	*is_unique = true;
@@ -309,6 +314,8 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				curitup = (IndexTuple) PageGetItem(page, curitemid);
 				htid = curitup->t_tid;
 
+				recheck = false;
+
 				/*
 				 * If we are doing a recheck, we expect to find the tuple we
 				 * are rechecking.  It's not a duplicate, but we have to keep
@@ -326,112 +333,153 @@ _bt_check_unique(Relation rel, IndexTuple itup, Relation heapRel,
 				 * have just a single index entry for the entire chain.
 				 */
 				else if (heap_hot_search(&htid, heapRel, &SnapshotDirty,
-										 &all_dead))
+							&all_dead, &recheck, &buffer,
+							&heapTuple))
 				{
 					TransactionId xwait;
+					bool result = true;
 
 					/*
-					 * It is a duplicate. If we are only doing a partial
-					 * check, then don't bother checking if the tuple is being
-					 * updated in another transaction. Just return the fact
-					 * that it is a potential conflict and leave the full
-					 * check till later.
+					 * If the tuple was WARM updated, we may again see our own
+					 * tuple. Since WARM updates don't create new index
+					 * entries, our own tuple is only reachable via the old
+					 * index pointer.
 					 */
-					if (checkUnique == UNIQUE_CHECK_PARTIAL)
+					if (checkUnique == UNIQUE_CHECK_EXISTING &&
+							ItemPointerCompare(&htid, &itup->t_tid) == 0)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						*is_unique = false;
-						return InvalidTransactionId;
+						found = true;
+						result = false;
+						if (recheck)
+							UnlockReleaseBuffer(buffer);
 					}
-
-					/*
-					 * If this tuple is being updated by other transaction
-					 * then we have to wait for its commit/abort.
-					 */
-					xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
-						SnapshotDirty.xmin : SnapshotDirty.xmax;
-
-					if (TransactionIdIsValid(xwait))
+					else if (recheck)
 					{
-						if (nbuf != InvalidBuffer)
-							_bt_relbuf(rel, nbuf);
-						/* Tell _bt_doinsert to wait... */
-						*speculativeToken = SnapshotDirty.speculativeToken;
-						return xwait;
+						result = btrecheck(rel, indexInfo, curitup, heapRel, &heapTuple);
+						UnlockReleaseBuffer(buffer);
 					}
 
-					/*
-					 * Otherwise we have a definite conflict.  But before
-					 * complaining, look to see if the tuple we want to insert
-					 * is itself now committed dead --- if so, don't complain.
-					 * This is a waste of time in normal scenarios but we must
-					 * do it to support CREATE INDEX CONCURRENTLY.
-					 *
-					 * We must follow HOT-chains here because during
-					 * concurrent index build, we insert the root TID though
-					 * the actual tuple may be somewhere in the HOT-chain.
-					 * While following the chain we might not stop at the
-					 * exact tuple which triggered the insert, but that's OK
-					 * because if we find a live tuple anywhere in this chain,
-					 * we have a unique key conflict.  The other live tuple is
-					 * not part of this chain because it had a different index
-					 * entry.
-					 */
-					htid = itup->t_tid;
-					if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL))
-					{
-						/* Normal case --- it's still live */
-					}
-					else
+					if (result)
 					{
 						/*
-						 * It's been deleted, so no error, and no need to
-						 * continue searching
+						 * It is a duplicate. If we are only doing a partial
+						 * check, then don't bother checking if the tuple is being
+						 * updated in another transaction. Just return the fact
+						 * that it is a potential conflict and leave the full
+						 * check till later.
 						 */
-						break;
-					}
+						if (checkUnique == UNIQUE_CHECK_PARTIAL)
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							*is_unique = false;
+							return InvalidTransactionId;
+						}
 
-					/*
-					 * Check for a conflict-in as we would if we were going to
-					 * write to this page.  We aren't actually going to write,
-					 * but we want a chance to report SSI conflicts that would
-					 * otherwise be masked by this unique constraint
-					 * violation.
-					 */
-					CheckForSerializableConflictIn(rel, NULL, buf);
+						/*
+						 * If this tuple is being updated by other transaction
+						 * then we have to wait for its commit/abort.
+						 */
+						xwait = (TransactionIdIsValid(SnapshotDirty.xmin)) ?
+							SnapshotDirty.xmin : SnapshotDirty.xmax;
+
+						if (TransactionIdIsValid(xwait))
+						{
+							if (nbuf != InvalidBuffer)
+								_bt_relbuf(rel, nbuf);
+							/* Tell _bt_doinsert to wait... */
+							*speculativeToken = SnapshotDirty.speculativeToken;
+							return xwait;
+						}
 
-					/*
-					 * This is a definite conflict.  Break the tuple down into
-					 * datums and report the error.  But first, make sure we
-					 * release the buffer locks we're holding ---
-					 * BuildIndexValueDescription could make catalog accesses,
-					 * which in the worst case might touch this same index and
-					 * cause deadlocks.
-					 */
-					if (nbuf != InvalidBuffer)
-						_bt_relbuf(rel, nbuf);
-					_bt_relbuf(rel, buf);
+						/*
+						 * Otherwise we have a definite conflict.  But before
+						 * complaining, look to see if the tuple we want to insert
+						 * is itself now committed dead --- if so, don't complain.
+						 * This is a waste of time in normal scenarios but we must
+						 * do it to support CREATE INDEX CONCURRENTLY.
+						 *
+						 * We must follow HOT-chains here because during
+						 * concurrent index build, we insert the root TID though
+						 * the actual tuple may be somewhere in the HOT-chain.
+						 * While following the chain we might not stop at the
+						 * exact tuple which triggered the insert, but that's OK
+						 * because if we find a live tuple anywhere in this chain,
+						 * we have a unique key conflict.  The other live tuple is
+						 * not part of this chain because it had a different index
+						 * entry.
+						 */
+						recheck = false;
+						ItemPointerCopy(&itup->t_tid, &htid);
+						if (heap_hot_search(&htid, heapRel, SnapshotSelf, NULL,
+									&recheck, &buffer, &heapTuple))
+						{
+							bool result = true;
+							if (recheck)
+							{
+								/*
+								 * Recheck if the tuple actually satisfies the
+								 * index key. Otherwise, we might be following
+								 * a wrong index pointer and mustn't entertain
+								 * this tuple.
+								 */
+								result = btrecheck(rel, indexInfo, itup, heapRel, &heapTuple);
+								UnlockReleaseBuffer(buffer);
+							}
+							if (!result)
+								break;
+							/* Normal case --- it's still live */
+						}
+						else
+						{
+							/*
+							 * It's been deleted, so no error, and no need to
+							 * continue searching.
+							 */
+							break;
+						}
 
-					{
-						Datum		values[INDEX_MAX_KEYS];
-						bool		isnull[INDEX_MAX_KEYS];
-						char	   *key_desc;
-
-						index_deform_tuple(itup, RelationGetDescr(rel),
-										   values, isnull);
-
-						key_desc = BuildIndexValueDescription(rel, values,
-															  isnull);
-
-						ereport(ERROR,
-								(errcode(ERRCODE_UNIQUE_VIOLATION),
-								 errmsg("duplicate key value violates unique constraint \"%s\"",
-										RelationGetRelationName(rel)),
-							   key_desc ? errdetail("Key %s already exists.",
-													key_desc) : 0,
-								 errtableconstraint(heapRel,
-											 RelationGetRelationName(rel))));
+						/*
+						 * Check for a conflict-in as we would if we were going to
+						 * write to this page.  We aren't actually going to write,
+						 * but we want a chance to report SSI conflicts that would
+						 * otherwise be masked by this unique constraint
+						 * violation.
+						 */
+						CheckForSerializableConflictIn(rel, NULL, buf);
+
+						/*
+						 * This is a definite conflict.  Break the tuple down into
+						 * datums and report the error.  But first, make sure we
+						 * release the buffer locks we're holding ---
+						 * BuildIndexValueDescription could make catalog accesses,
+						 * which in the worst case might touch this same index and
+						 * cause deadlocks.
+						 */
+						if (nbuf != InvalidBuffer)
+							_bt_relbuf(rel, nbuf);
+						_bt_relbuf(rel, buf);
+
+						{
+							Datum		values[INDEX_MAX_KEYS];
+							bool		isnull[INDEX_MAX_KEYS];
+							char	   *key_desc;
+
+							index_deform_tuple(itup, RelationGetDescr(rel),
+									values, isnull);
+
+							key_desc = BuildIndexValueDescription(rel, values,
+									isnull);
+
+							ereport(ERROR,
+									(errcode(ERRCODE_UNIQUE_VIOLATION),
+									 errmsg("duplicate key value violates unique constraint \"%s\"",
+										 RelationGetRelationName(rel)),
+									 key_desc ? errdetail("Key %s already exists.",
+										 key_desc) : 0,
+									 errtableconstraint(heapRel,
+										 RelationGetRelationName(rel))));
+						}
 					}
 				}
 				else if (all_dead)
diff --git a/src/backend/access/nbtree/nbtpage.c b/src/backend/access/nbtree/nbtpage.c
index f815fd4..061c8d4 100644
--- a/src/backend/access/nbtree/nbtpage.c
+++ b/src/backend/access/nbtree/nbtpage.c
@@ -766,29 +766,20 @@ _bt_page_recyclable(Page page)
 }
 
 /*
- * Delete item(s) from a btree page during VACUUM.
+ * Delete item(s) and clear WARM item(s) on a btree page during VACUUM.
  *
  * This must only be used for deleting leaf items.  Deleting an item on a
  * non-leaf page has to be done as part of an atomic action that includes
- * deleting the page it points to.
+ * deleting the page it points to. We don't ever clear pointers on a non-leaf
+ * page.
  *
  * This routine assumes that the caller has pinned and locked the buffer.
  * Also, the given itemnos *must* appear in increasing order in the array.
- *
- * We record VACUUMs and b-tree deletes differently in WAL. InHotStandby
- * we need to be able to pin all of the blocks in the btree in physical
- * order when replaying the effects of a VACUUM, just as we do for the
- * original VACUUM itself. lastBlockVacuumed allows us to tell whether an
- * intermediate range of blocks has had no changes at all by VACUUM,
- * and so must be scanned anyway during replay. We always write a WAL record
- * for the last block in the index, whether or not it contained any items
- * to be removed. This allows us to scan right up to end of index to
- * ensure correct locking.
  */
 void
-_bt_delitems_vacuum(Relation rel, Buffer buf,
-					OffsetNumber *itemnos, int nitems,
-					BlockNumber lastBlockVacuumed)
+_bt_handleitems_vacuum(Relation rel, Buffer buf,
+					OffsetNumber *delitemnos, int ndelitems,
+					OffsetNumber *clearitemnos, int nclearitems)
 {
 	Page		page = BufferGetPage(buf);
 	BTPageOpaque opaque;
@@ -796,9 +787,20 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 	/* No ereport(ERROR) until changes are logged */
 	START_CRIT_SECTION();
 
+	/*
+	 * Clear the WARM pointers.
+	 *
+	 * We must do this before dealing with the dead items because
+	 * PageIndexMultiDelete may move items around to compactify the array and
+	 * hence offnums recorded earlier won't make any sense after
+	 * PageIndexMultiDelete is called.
+	 */
+	if (nclearitems > 0)
+		_bt_clear_items(page, clearitemnos, nclearitems);
+
 	/* Fix the page */
-	if (nitems > 0)
-		PageIndexMultiDelete(page, itemnos, nitems);
+	if (ndelitems > 0)
+		PageIndexMultiDelete(page, delitemnos, ndelitems);
 
 	/*
 	 * We can clear the vacuum cycle ID since this page has certainly been
@@ -824,7 +826,8 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 		XLogRecPtr	recptr;
 		xl_btree_vacuum xlrec_vacuum;
 
-		xlrec_vacuum.lastBlockVacuumed = lastBlockVacuumed;
+		xlrec_vacuum.ndelitems = ndelitems;
+		xlrec_vacuum.nclearitems = nclearitems;
 
 		XLogBeginInsert();
 		XLogRegisterBuffer(0, buf, REGBUF_STANDARD);
@@ -835,8 +838,11 @@ _bt_delitems_vacuum(Relation rel, Buffer buf,
 		 * is.  When XLogInsert stores the whole buffer, the offsets array
 		 * need not be stored too.
 		 */
-		if (nitems > 0)
-			XLogRegisterBufData(0, (char *) itemnos, nitems * sizeof(OffsetNumber));
+		if (ndelitems > 0)
+			XLogRegisterBufData(0, (char *) delitemnos, ndelitems * sizeof(OffsetNumber));
+
+		if (nclearitems > 0)
+			XLogRegisterBufData(0, (char *) clearitemnos, nclearitems * sizeof(OffsetNumber));
 
 		recptr = XLogInsert(RM_BTREE_ID, XLOG_BTREE_VACUUM);
 
@@ -1882,3 +1888,13 @@ _bt_unlink_halfdead_page(Relation rel, Buffer leafbuf, bool *rightsib_empty)
 
 	return true;
 }
+
+/*
+ * Currently just a wrapper around PageIndexClearWarmTuples, but in theory
+ * each index AM may have its own way of handling WARM tuples.
+ */
+void
+_bt_clear_items(Page page, OffsetNumber *clearitemnos, uint16 nclearitems)
+{
+	PageIndexClearWarmTuples(page, clearitemnos, nclearitems);
+}
diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c
index 775f2ff..bf5f23f 100644
--- a/src/backend/access/nbtree/nbtree.c
+++ b/src/backend/access/nbtree/nbtree.c
@@ -146,6 +146,7 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->ambuild = btbuild;
 	amroutine->ambuildempty = btbuildempty;
 	amroutine->aminsert = btinsert;
+	amroutine->amwarminsert = btwarminsert;
 	amroutine->ambulkdelete = btbulkdelete;
 	amroutine->amvacuumcleanup = btvacuumcleanup;
 	amroutine->amcanreturn = btcanreturn;
@@ -163,6 +164,8 @@ bthandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = btestimateparallelscan;
 	amroutine->aminitparallelscan = btinitparallelscan;
 	amroutine->amparallelrescan = btparallelrescan;
+	amroutine->amrecheck = btrecheck;
+	amroutine->amiswarm = btiswarm;
 
 	PG_RETURN_POINTER(amroutine);
 }
@@ -315,11 +318,12 @@ btbuildempty(Relation index)
  *		Descend the tree recursively, find the appropriate location for our
  *		new tuple, and put it there.
  */
-bool
-btinsert(Relation rel, Datum *values, bool *isnull,
+static bool
+btinsert_internal(Relation rel, Datum *values, bool *isnull,
 		 ItemPointer ht_ctid, Relation heapRel,
 		 IndexUniqueCheck checkUnique,
-		 IndexInfo *indexInfo)
+		 IndexInfo *indexInfo,
+		 bool warm_update)
 {
 	bool		result;
 	IndexTuple	itup;
@@ -328,6 +332,11 @@ btinsert(Relation rel, Datum *values, bool *isnull,
 	itup = index_form_tuple(RelationGetDescr(rel), values, isnull);
 	itup->t_tid = *ht_ctid;
 
+	if (warm_update)
+		ItemPointerSetFlags(&itup->t_tid, BTREE_INDEX_WARM_POINTER);
+	else
+		ItemPointerClearFlags(&itup->t_tid);
+
 	result = _bt_doinsert(rel, itup, checkUnique, heapRel);
 
 	pfree(itup);
@@ -335,6 +344,26 @@ btinsert(Relation rel, Datum *values, bool *isnull,
 	return result;
 }
 
+bool
+btinsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 IndexInfo *indexInfo)
+{
+	return btinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, false);
+}
+
+bool
+btwarminsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 IndexInfo *indexInfo)
+{
+	return btinsert_internal(rel, values, isnull, ht_ctid, heapRel,
+			checkUnique, indexInfo, true);
+}
+
 /*
  *	btgettuple() -- Get the next tuple in the scan.
  */
@@ -393,6 +422,20 @@ btgettuple(IndexScanDesc scan, ScanDirection dir)
 				if (so->numKilled < MaxIndexTuplesPerPage)
 					so->killedItems[so->numKilled++] = so->currPos.itemIndex;
 			}
+			else if (scan->warm_prior_tuple)
+			{
+				/*
+				 * Check if the previously fetched tuple should be marked with
+				 * a WARM flag. Similar to killedItems, we make sure not to
+				 * overrun the array if the indexscan reverses direction and
+				 * we see the same tuple twice.
+				 */
+				if (so->setWarmItems == NULL)
+					so->setWarmItems = (int *)
+						palloc(MaxIndexTuplesPerPage * sizeof(int));
+				if (so->numSet < MaxIndexTuplesPerPage)
+					so->setWarmItems[so->numSet++] = so->currPos.itemIndex;
+			}
 
 			/*
 			 * Now continue the scan.
@@ -499,6 +542,9 @@ btbeginscan(Relation rel, int nkeys, int norderbys)
 	so->killedItems = NULL;		/* until needed */
 	so->numKilled = 0;
 
+	so->setWarmItems = NULL;
+	so->numSet = 0;
+
 	/*
 	 * We don't know yet whether the scan will be index-only, so we do not
 	 * allocate the tuple workspace arrays until btrescan.  However, we set up
@@ -528,6 +574,9 @@ btrescan(IndexScanDesc scan, ScanKey scankey, int nscankeys,
 		/* Before leaving current page, deal with any killed items */
 		if (so->numKilled > 0)
 			_bt_killitems(scan);
+		/* Also deal with items which could be marked WARM */
+		if (so->numSet > 0)
+			_bt_warmitems(scan);
 		BTScanPosUnpinIfPinned(so->currPos);
 		BTScanPosInvalidate(so->currPos);
 	}
@@ -587,6 +636,9 @@ btendscan(IndexScanDesc scan)
 		/* Before leaving current page, deal with any killed items */
 		if (so->numKilled > 0)
 			_bt_killitems(scan);
+		/* Also deal with items which could be marked WARM */
+		if (so->numSet > 0)
+			_bt_warmitems(scan);
 		BTScanPosUnpinIfPinned(so->currPos);
 	}
 
@@ -603,6 +655,8 @@ btendscan(IndexScanDesc scan)
 		MemoryContextDelete(so->arrayContext);
 	if (so->killedItems != NULL)
 		pfree(so->killedItems);
+	if (so->setWarmItems != NULL)
+		pfree(so->setWarmItems);
 	if (so->currTuples != NULL)
 		pfree(so->currTuples);
 	/* so->markTuples should not be pfree'd, see btrescan */
@@ -675,6 +729,9 @@ btrestrpos(IndexScanDesc scan)
 			/* Before leaving current page, deal with any killed items */
 			if (so->numKilled > 0)
 				_bt_killitems(scan);
+			/* Also deal with items which could be marked WARM */
+			if (so->numSet > 0)
+				_bt_warmitems(scan);
 			BTScanPosUnpinIfPinned(so->currPos);
 		}
 
@@ -1103,7 +1160,7 @@ btvacuumscan(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 								 RBM_NORMAL, info->strategy);
 		LockBufferForCleanup(buf);
 		_bt_checkpage(rel, buf);
-		_bt_delitems_vacuum(rel, buf, NULL, 0, vstate.lastBlockVacuumed);
+		_bt_handleitems_vacuum(rel, buf, NULL, 0, NULL, 0);
 		_bt_relbuf(rel, buf);
 	}
 
@@ -1201,6 +1258,8 @@ restart:
 	{
 		OffsetNumber deletable[MaxOffsetNumber];
 		int			ndeletable;
+		OffsetNumber clearwarm[MaxOffsetNumber];
+		int			nclearwarm;
 		OffsetNumber offnum,
 					minoff,
 					maxoff;
@@ -1239,7 +1298,7 @@ restart:
 		 * Scan over all items to see which ones need deleted according to the
 		 * callback function.
 		 */
-		ndeletable = 0;
+		ndeletable = nclearwarm = 0;
 		minoff = P_FIRSTDATAKEY(opaque);
 		maxoff = PageGetMaxOffsetNumber(page);
 		if (callback)
@@ -1250,6 +1309,9 @@ restart:
 			{
 				IndexTuple	itup;
 				ItemPointer htup;
+				int			flags;
+				bool		is_warm = false;
+				IndexBulkDeleteCallbackResult	result;
 
 				itup = (IndexTuple) PageGetItem(page,
 												PageGetItemId(page, offnum));
@@ -1276,16 +1338,36 @@ restart:
 				 * applies to *any* type of index that marks index tuples as
 				 * killed.
 				 */
-				if (callback(htup, callback_state))
+				flags = ItemPointerGetFlags(&itup->t_tid);
+				is_warm = ((flags & BTREE_INDEX_WARM_POINTER) != 0);
+
+				if (is_warm)
+					stats->num_warm_pointers++;
+				else
+					stats->num_clear_pointers++;
+
+				result = callback(htup, is_warm, callback_state);
+				if (result == IBDCR_DELETE)
+				{
+					if (is_warm)
+						stats->warm_pointers_removed++;
+					else
+						stats->clear_pointers_removed++;
 					deletable[ndeletable++] = offnum;
+				}
+				else if (result == IBDCR_CLEAR_WARM)
+				{
+					clearwarm[nclearwarm++] = offnum;
+				}
 			}
 		}
 
 		/*
-		 * Apply any needed deletes.  We issue just one _bt_delitems_vacuum()
-		 * call per page, so as to minimize WAL traffic.
+		 * Apply any needed deletes and clearing.  We issue just one
+		 * _bt_handleitems_vacuum() call per page, so as to minimize WAL
+		 * traffic.
 		 */
-		if (ndeletable > 0)
+		if (ndeletable > 0 || nclearwarm > 0)
 		{
 			/*
 			 * Notice that the issued XLOG_BTREE_VACUUM WAL record includes
@@ -1301,8 +1383,8 @@ restart:
 			 * doesn't seem worth the amount of bookkeeping it'd take to avoid
 			 * that.
 			 */
-			_bt_delitems_vacuum(rel, buf, deletable, ndeletable,
-								vstate->lastBlockVacuumed);
+			_bt_handleitems_vacuum(rel, buf, deletable, ndeletable,
+								clearwarm, nclearwarm);
 
 			/*
 			 * Remember highest leaf page number we've issued a
@@ -1312,6 +1394,7 @@ restart:
 				vstate->lastBlockVacuumed = blkno;
 
 			stats->tuples_removed += ndeletable;
+			stats->pointers_cleared += nclearwarm;
 			/* must recompute maxoff */
 			maxoff = PageGetMaxOffsetNumber(page);
 		}
diff --git a/src/backend/access/nbtree/nbtsearch.c b/src/backend/access/nbtree/nbtsearch.c
index 2f32b2e..d4d6063 100644
--- a/src/backend/access/nbtree/nbtsearch.c
+++ b/src/backend/access/nbtree/nbtsearch.c
@@ -1350,6 +1350,9 @@ _bt_steppage(IndexScanDesc scan, ScanDirection dir)
 	/* Before leaving current page, deal with any killed items */
 	if (so->numKilled > 0)
 		_bt_killitems(scan);
+	/* Also deal with items which could be marked WARM */
+	if (so->numSet > 0)
+		_bt_warmitems(scan);
 
 	/*
 	 * Before we modify currPos, make a copy of the page data if there was a
@@ -1948,4 +1951,6 @@ _bt_initialize_more_data(BTScanOpaque so, ScanDirection dir)
 	}
 	so->numKilled = 0;			/* just paranoia */
 	so->markItemIndex = -1;		/* ditto */
+
+	so->numSet = 0;
 }
diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c
index 5b259a3..f361bdb 100644
--- a/src/backend/access/nbtree/nbtutils.c
+++ b/src/backend/access/nbtree/nbtutils.c
@@ -20,11 +20,14 @@
 #include "access/nbtree.h"
 #include "access/reloptions.h"
 #include "access/relscan.h"
+#include "access/tuptoaster.h"
+#include "catalog/index.h"
 #include "miscadmin.h"
 #include "utils/array.h"
 #include "utils/lsyscache.h"
 #include "utils/memutils.h"
 #include "utils/rel.h"
+#include "utils/datum.h"
 
 
 typedef struct BTSortArrayContext
@@ -1827,6 +1830,95 @@ _bt_killitems(IndexScanDesc scan)
 	LockBuffer(so->currPos.buf, BUFFER_LOCK_UNLOCK);
 }
 
+/*
+ * This is almost identical to _bt_killitems, but we deal with items which
+ * should be marked as WARM.
+ */
+void
+_bt_warmitems(IndexScanDesc scan)
+{
+	BTScanOpaque so = (BTScanOpaque) scan->opaque;
+	Page		page;
+	BTPageOpaque opaque;
+	OffsetNumber minoff;
+	OffsetNumber maxoff;
+	int			i;
+	int			numSet = so->numSet;
+	bool		setWarmsomething = false;
+
+	Assert(BTScanPosIsValid(so->currPos));
+
+	/*
+	 * Always reset the scan state, so we don't look for same items on other
+	 * pages.
+	 */
+	so->numSet = 0;
+
+	if (BTScanPosIsPinned(so->currPos))
+	{
+		/*
+		 * We have held the pin on this page since we read the index tuples,
+		 * so all we need to do is lock it.  The pin will have prevented
+		 * re-use of any TID on the page, so there is no need to check the
+		 * LSN.
+		 */
+		LockBuffer(so->currPos.buf, BT_READ);
+
+		page = BufferGetPage(so->currPos.buf);
+	}
+	else
+	{
+		Buffer		buf;
+
+		/* Attempt to re-read the buffer, getting pin and lock. */
+		buf = _bt_getbuf(scan->indexRelation, so->currPos.currPage, BT_READ);
+
+		/* It might not exist anymore; in which case we can't hint it. */
+		if (!BufferIsValid(buf))
+			return;
+
+		page = BufferGetPage(buf);
+		if (PageGetLSN(page) == so->currPos.lsn)
+			so->currPos.buf = buf;
+		else
+		{
+			/* Modified while not pinned means hinting is not safe. */
+			_bt_relbuf(scan->indexRelation, buf);
+			return;
+		}
+	}
+
+	opaque = (BTPageOpaque) PageGetSpecialPointer(page);
+	minoff = P_FIRSTDATAKEY(opaque);
+	maxoff = PageGetMaxOffsetNumber(page);
+
+	for (i = 0; i < numSet; i++)
+	{
+		int			itemIndex = so->setWarmItems[i];
+		BTScanPosItem *kitem = &so->currPos.items[itemIndex];
+		OffsetNumber offnum = kitem->indexOffset;
+
+		Assert(itemIndex >= so->currPos.firstItem &&
+			   itemIndex <= so->currPos.lastItem);
+		if (offnum < minoff)
+			continue;			/* pure paranoia */
+		while (offnum <= maxoff)
+		{
+			ItemId		iid = PageGetItemId(page, offnum);
+			IndexTuple	ituple = (IndexTuple) PageGetItem(page, iid);
+
+			if (ItemPointerEquals(&ituple->t_tid, &kitem->heapTid))
+			{
+				/* found the item */
+				ItemPointerSetFlags(&ituple->t_tid, BTREE_INDEX_WARM_POINTER);
+				setWarmsomething = true;
+				break;			/* out of inner search loop */
+			}
+			offnum = OffsetNumberNext(offnum);
+		}
+	}
+
+	if (setWarmsomething)
+		MarkBufferDirtyHint(so->currPos.buf, true);
+
+	LockBuffer(so->currPos.buf, BUFFER_LOCK_UNLOCK);
+}
 
 /*
  * The following routines manage a shared-memory area in which we track
@@ -2069,3 +2161,107 @@ btproperty(Oid index_oid, int attno,
 			return false;		/* punt to generic code */
 	}
 }
+
+/*
+ * Check if the index tuple's key matches the one computed from the given heap
+ * tuple's attributes
+ */
+bool
+btrecheck(Relation indexRel, IndexInfo *indexInfo, IndexTuple indexTuple1,
+		Relation heapRel, HeapTuple heapTuple)
+{
+	Datum		values[INDEX_MAX_KEYS];
+	bool		isnull[INDEX_MAX_KEYS];
+	bool		isavail[INDEX_MAX_KEYS];
+	int			i;
+	bool		equal;
+	int         natts = indexRel->rd_rel->relnatts;
+	Form_pg_attribute att;
+	IndexTuple	indexTuple2;
+
+	/*
+	 * Currently we don't allow enable_warm to be turned OFF after the table is
+	 * created. But if we ever do that, this assert must be removed since we
+	 * must exercise recheck for all existing WARM chains.
+	 */
+	Assert(RelationWarmUpdatesEnabled(heapRel));
+
+	/*
+	 * Get the index values, except for expression attributes. Since WARM is
+	 * not used when a column used by expressions in an index is modified, we
+	 * can safely assume that those index attributes are never changed by a
+	 * WARM update.
+	 *
+	 * We cannot use FormIndexDatum here because that requires access to
+	 * executor state and we don't have that here.
+	 */
+	FormIndexPlainDatum(indexInfo, heapRel, heapTuple, values, isnull, isavail);
+
+	/*
+	 * Form an index tuple using the heap values first. This allows us to
+	 * fetch index attributes from both the current index tuple and the one
+	 * formed from the heap values, and then do a binary comparison using
+	 * datumIsEqual().
+	 *
+	 * This takes care of doing the right comparison for compressed index
+	 * attributes (we just compare the compressed versions in both tuples) and
+	 * also ensures that we correctly detoast heap values, if need be.
+	 */
+	indexTuple2 = index_form_tuple(RelationGetDescr(indexRel), values, isnull);
+
+	equal = true;
+	for (i = 1; i <= natts; i++)
+	{
+		Datum 	indxvalue1;
+		bool	indxisnull1;
+		Datum	indxvalue2;
+		bool	indxisnull2;
+
+		/* No need to compare if the attribute value is not available */
+		if (!isavail[i - 1])
+			continue;
+
+		indxvalue1 = index_getattr(indexTuple1, i, indexRel->rd_att,
+								   &indxisnull1);
+		indxvalue2 = index_getattr(indexTuple2, i, indexRel->rd_att,
+								   &indxisnull2);
+
+		/*
+		 * If both are NULL, then they are equal
+		 */
+		if (indxisnull1 && indxisnull2)
+			continue;
+
+		/*
+		 * If just one is NULL, then they are not equal
+		 */
+		if (indxisnull1 || indxisnull2)
+		{
+			equal = false;
+			break;
+		}
+
+		/*
+		 * Now just do a raw memory comparison. If the index tuple was formed
+		 * using this heap tuple, the computed index values must match
+		 */
+		att = indexRel->rd_att->attrs[i - 1];
+		if (!datumIsEqual(indxvalue1, indxvalue2, att->attbyval,
+					att->attlen))
+		{
+			equal = false;
+			break;
+		}
+	}
+
+	pfree(indexTuple2);
+
+	return equal;
+}
+
+bool
+btiswarm(Relation indexRel, IndexTuple itup)
+{
+	int flags = ItemPointerGetFlags(&itup->t_tid);
+	return ((flags & BTREE_INDEX_WARM_POINTER) != 0);
+}
diff --git a/src/backend/access/nbtree/nbtxlog.c b/src/backend/access/nbtree/nbtxlog.c
index ac60db0..ef24738 100644
--- a/src/backend/access/nbtree/nbtxlog.c
+++ b/src/backend/access/nbtree/nbtxlog.c
@@ -390,8 +390,8 @@ btree_xlog_vacuum(XLogReaderState *record)
 	Buffer		buffer;
 	Page		page;
 	BTPageOpaque opaque;
-#ifdef UNUSED
 	xl_btree_vacuum *xlrec = (xl_btree_vacuum *) XLogRecGetData(record);
+#ifdef UNUSED
 
 	/*
 	 * This section of code is thought to be no longer needed, after analysis
@@ -482,19 +482,30 @@ btree_xlog_vacuum(XLogReaderState *record)
 
 		if (len > 0)
 		{
-			OffsetNumber *unused;
-			OffsetNumber *unend;
+			OffsetNumber *offnums = (OffsetNumber *) ptr;
 
-			unused = (OffsetNumber *) ptr;
-			unend = (OffsetNumber *) ((char *) ptr + len);
+			/*
+			 * Clear the WARM pointers.
+			 *
+			 * We must do this before dealing with the dead items because
+			 * PageIndexMultiDelete may move items around to compactify the
+			 * array and hence offnums recorded earlier won't make any sense
+			 * after PageIndexMultiDelete is called.
+			 */
+			if (xlrec->nclearitems > 0)
+				_bt_clear_items(page, offnums + xlrec->ndelitems,
+						xlrec->nclearitems);
 
-			if ((unend - unused) > 0)
-				PageIndexMultiDelete(page, unused, unend - unused);
+			/*
+			 * And handle the deleted items too
+			 */
+			if (xlrec->ndelitems > 0)
+				PageIndexMultiDelete(page, offnums, xlrec->ndelitems);
 		}
 
 		/*
 		 * Mark the page as not containing any LP_DEAD items --- see comments
-		 * in _bt_delitems_vacuum().
+		 * in _bt_handleitems_vacuum().
 		 */
 		opaque = (BTPageOpaque) PageGetSpecialPointer(page);
 		opaque->btpo_flags &= ~BTP_HAS_GARBAGE;
diff --git a/src/backend/access/rmgrdesc/heapdesc.c b/src/backend/access/rmgrdesc/heapdesc.c
index 44d2d63..d373e61 100644
--- a/src/backend/access/rmgrdesc/heapdesc.c
+++ b/src/backend/access/rmgrdesc/heapdesc.c
@@ -44,6 +44,12 @@ heap_desc(StringInfo buf, XLogReaderState *record)
 
 		appendStringInfo(buf, "off %u", xlrec->offnum);
 	}
+	else if (info == XLOG_HEAP_MULTI_INSERT)
+	{
+		xl_heap_multi_insert *xlrec = (xl_heap_multi_insert *) rec;
+
+		appendStringInfo(buf, "%d tuples", xlrec->ntuples);
+	}
 	else if (info == XLOG_HEAP_DELETE)
 	{
 		xl_heap_delete *xlrec = (xl_heap_delete *) rec;
@@ -102,7 +108,7 @@ heap2_desc(StringInfo buf, XLogReaderState *record)
 	char	   *rec = XLogRecGetData(record);
 	uint8		info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;
 
-	info &= XLOG_HEAP_OPMASK;
+	info &= XLOG_HEAP2_OPMASK;
 	if (info == XLOG_HEAP2_CLEAN)
 	{
 		xl_heap_clean *xlrec = (xl_heap_clean *) rec;
@@ -129,12 +135,6 @@ heap2_desc(StringInfo buf, XLogReaderState *record)
 		appendStringInfo(buf, "cutoff xid %u flags %d",
 						 xlrec->cutoff_xid, xlrec->flags);
 	}
-	else if (info == XLOG_HEAP2_MULTI_INSERT)
-	{
-		xl_heap_multi_insert *xlrec = (xl_heap_multi_insert *) rec;
-
-		appendStringInfo(buf, "%d tuples", xlrec->ntuples);
-	}
 	else if (info == XLOG_HEAP2_LOCK_UPDATED)
 	{
 		xl_heap_lock_updated *xlrec = (xl_heap_lock_updated *) rec;
@@ -171,6 +171,12 @@ heap_identify(uint8 info)
 		case XLOG_HEAP_INSERT | XLOG_HEAP_INIT_PAGE:
 			id = "INSERT+INIT";
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			id = "MULTI_INSERT";
+			break;
+		case XLOG_HEAP_MULTI_INSERT | XLOG_HEAP_INIT_PAGE:
+			id = "MULTI_INSERT+INIT";
+			break;
 		case XLOG_HEAP_DELETE:
 			id = "DELETE";
 			break;
@@ -219,12 +225,6 @@ heap2_identify(uint8 info)
 		case XLOG_HEAP2_VISIBLE:
 			id = "VISIBLE";
 			break;
-		case XLOG_HEAP2_MULTI_INSERT:
-			id = "MULTI_INSERT";
-			break;
-		case XLOG_HEAP2_MULTI_INSERT | XLOG_HEAP_INIT_PAGE:
-			id = "MULTI_INSERT+INIT";
-			break;
 		case XLOG_HEAP2_LOCK_UPDATED:
 			id = "LOCK_UPDATED";
 			break;
diff --git a/src/backend/access/rmgrdesc/nbtdesc.c b/src/backend/access/rmgrdesc/nbtdesc.c
index fbde9d6..6b2c5d6 100644
--- a/src/backend/access/rmgrdesc/nbtdesc.c
+++ b/src/backend/access/rmgrdesc/nbtdesc.c
@@ -48,8 +48,8 @@ btree_desc(StringInfo buf, XLogReaderState *record)
 			{
 				xl_btree_vacuum *xlrec = (xl_btree_vacuum *) rec;
 
-				appendStringInfo(buf, "lastBlockVacuumed %u",
-								 xlrec->lastBlockVacuumed);
+				appendStringInfo(buf, "ndelitems %u, nclearitems %u",
+								 xlrec->ndelitems, xlrec->nclearitems);
 				break;
 			}
 		case XLOG_BTREE_DELETE:
diff --git a/src/backend/access/spgist/spgutils.c b/src/backend/access/spgist/spgutils.c
index e57ac49..59ef7f3 100644
--- a/src/backend/access/spgist/spgutils.c
+++ b/src/backend/access/spgist/spgutils.c
@@ -72,6 +72,7 @@ spghandler(PG_FUNCTION_ARGS)
 	amroutine->amestimateparallelscan = NULL;
 	amroutine->aminitparallelscan = NULL;
 	amroutine->amparallelrescan = NULL;
+	amroutine->amrecheck = NULL;
 
 	PG_RETURN_POINTER(amroutine);
 }
diff --git a/src/backend/access/spgist/spgvacuum.c b/src/backend/access/spgist/spgvacuum.c
index cce9b3f..711d351 100644
--- a/src/backend/access/spgist/spgvacuum.c
+++ b/src/backend/access/spgist/spgvacuum.c
@@ -155,7 +155,8 @@ vacuumLeafPage(spgBulkDeleteState *bds, Relation index, Buffer buffer,
 		{
 			Assert(ItemPointerIsValid(&lt->heapPtr));
 
-			if (bds->callback(&lt->heapPtr, bds->callback_state))
+			if (bds->callback(&lt->heapPtr, false, bds->callback_state) ==
+					IBDCR_DELETE)
 			{
 				bds->stats->tuples_removed += 1;
 				deletable[i] = true;
@@ -425,7 +426,8 @@ vacuumLeafRoot(spgBulkDeleteState *bds, Relation index, Buffer buffer)
 		{
 			Assert(ItemPointerIsValid(&lt->heapPtr));
 
-			if (bds->callback(&lt->heapPtr, bds->callback_state))
+			if (bds->callback(&lt->heapPtr, false, bds->callback_state) ==
+					IBDCR_DELETE)
 			{
 				bds->stats->tuples_removed += 1;
 				toDelete[xlrec.nDelete] = i;
@@ -902,10 +904,10 @@ spgbulkdelete(IndexVacuumInfo *info, IndexBulkDeleteResult *stats,
 }
 
 /* Dummy callback to delete no tuples during spgvacuumcleanup */
-static bool
-dummy_callback(ItemPointer itemptr, void *state)
+static IndexBulkDeleteCallbackResult
+dummy_callback(ItemPointer itemptr, bool is_warm, void *state)
 {
-	return false;
+	return IBDCR_KEEP;
 }
 
 /*
diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 1eb163f..2c27661 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -54,6 +54,7 @@
 #include "nodes/makefuncs.h"
 #include "nodes/nodeFuncs.h"
 #include "optimizer/clauses.h"
+#include "optimizer/var.h"
 #include "parser/parser.h"
 #include "storage/bufmgr.h"
 #include "storage/lmgr.h"
@@ -114,7 +115,7 @@ static void IndexCheckExclusion(Relation heapRelation,
 					IndexInfo *indexInfo);
 static inline int64 itemptr_encode(ItemPointer itemptr);
 static inline void itemptr_decode(ItemPointer itemptr, int64 encoded);
-static bool validate_index_callback(ItemPointer itemptr, void *opaque);
+static IndexBulkDeleteCallbackResult validate_index_callback(ItemPointer itemptr, bool is_warm, void *opaque);
 static void validate_index_heapscan(Relation heapRelation,
 						Relation indexRelation,
 						IndexInfo *indexInfo,
@@ -1691,6 +1692,20 @@ BuildIndexInfo(Relation index)
 	ii->ii_AmCache = NULL;
 	ii->ii_Context = CurrentMemoryContext;
 
+	/* build a bitmap of all table attributes referred by this index */
+	for (i = 0; i < ii->ii_NumIndexAttrs; i++)
+	{
+		AttrNumber attr = ii->ii_KeyAttrNumbers[i];
+		ii->ii_indxattrs = bms_add_member(ii->ii_indxattrs, attr -
+				FirstLowInvalidHeapAttributeNumber);
+	}
+
+	/* Collect all attributes used in expressions, too */
+	pull_varattnos((Node *) ii->ii_Expressions, 1, &ii->ii_indxattrs);
+
+	/* Collect all attributes in the index predicate, too */
+	pull_varattnos((Node *) ii->ii_Predicate, 1, &ii->ii_indxattrs);
+
 	return ii;
 }
 
@@ -1815,6 +1830,51 @@ FormIndexDatum(IndexInfo *indexInfo,
 		elog(ERROR, "wrong number of index expressions");
 }
 
+/*
+ * This is the same as FormIndexDatum, except that we don't compute any
+ * expression attributes; hence it can be used when executor interfaces are
+ * not available. If the i'th attribute is available, isavail[i] is set to
+ * true; otherwise it is set to false. The caller must always check whether an
+ * attribute value is available before trying to do anything useful with it.
+ */
+void
+FormIndexPlainDatum(IndexInfo *indexInfo,
+			   Relation heapRel,
+			   HeapTuple heapTup,
+			   Datum *values,
+			   bool *isnull,
+			   bool *isavail)
+{
+	int			i;
+
+	for (i = 0; i < indexInfo->ii_NumIndexAttrs; i++)
+	{
+		int			keycol = indexInfo->ii_KeyAttrNumbers[i];
+		Datum		iDatum;
+		bool		isNull;
+
+		if (keycol != 0)
+		{
+			/*
+			 * Plain index column; get the value we need directly from the
+			 * heap tuple.
+			 */
+			iDatum = heap_getattr(heapTup, keycol, RelationGetDescr(heapRel), &isNull);
+			values[i] = iDatum;
+			isnull[i] = isNull;
+			isavail[i] = true;
+		}
+		else
+		{
+			/*
+			 * This is an expression attribute which we can't compute here,
+			 * so just inform the caller about it.
+			 */
+			isavail[i] = false;
+			isnull[i] = true;
+		}
+	}
+}
 
 /*
  * index_update_stats --- update pg_class entry after CREATE INDEX or REINDEX
@@ -2929,15 +2989,15 @@ itemptr_decode(ItemPointer itemptr, int64 encoded)
 /*
  * validate_index_callback - bulkdelete callback to collect the index TIDs
  */
-static bool
-validate_index_callback(ItemPointer itemptr, void *opaque)
+static IndexBulkDeleteCallbackResult
+validate_index_callback(ItemPointer itemptr, bool is_warm, void *opaque)
 {
 	v_i_state  *state = (v_i_state *) opaque;
 	int64		encoded = itemptr_encode(itemptr);
 
 	tuplesort_putdatum(state->tuplesort, Int64GetDatum(encoded), false);
 	state->itups += 1;
-	return false;				/* never actually delete anything */
+	return IBDCR_KEEP;				/* never actually delete anything */
 }
 
 /*
@@ -3156,7 +3216,8 @@ validate_index_heapscan(Relation heapRelation,
 						 heapRelation,
 						 indexInfo->ii_Unique ?
 						 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-						 indexInfo);
+						 indexInfo,
+						 false);
 
 			state->tups_inserted += 1;
 		}
diff --git a/src/backend/catalog/indexing.c b/src/backend/catalog/indexing.c
index abc344a..6392f33 100644
--- a/src/backend/catalog/indexing.c
+++ b/src/backend/catalog/indexing.c
@@ -66,10 +66,15 @@ CatalogCloseIndexes(CatalogIndexState indstate)
  *
  * This should be called for each inserted or updated catalog tuple.
  *
+ * If the tuple was WARM-updated, modified_attrs contains the set of columns
+ * changed by the update. We must not insert new index entries for
+ * indexes which do not refer to any of the modified columns.
+ *
  * This is effectively a cut-down version of ExecInsertIndexTuples.
  */
 static void
-CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
+CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple,
+		Bitmapset *modified_attrs, bool warm_update)
 {
 	int			i;
 	int			numIndexes;
@@ -79,12 +84,28 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 	IndexInfo **indexInfoArray;
 	Datum		values[INDEX_MAX_KEYS];
 	bool		isnull[INDEX_MAX_KEYS];
+	ItemPointerData root_tid;
 
-	/* HOT update does not require index inserts */
-	if (HeapTupleIsHeapOnly(heapTuple))
+	/*
+	 * A HOT update does not require any index inserts, but a WARM update may
+	 * require them for some indexes.
+	 */
+	if (HeapTupleIsHeapOnly(heapTuple) && !warm_update)
 		return;
 
 	/*
+	 * If we've done a WARM update, then we must index the TID of the root line
+	 * pointer and not the actual TID of the new tuple.
+	 */
+	if (warm_update)
+		ItemPointerSet(&root_tid,
+				ItemPointerGetBlockNumber(&(heapTuple->t_self)),
+				HeapTupleHeaderGetRootOffset(heapTuple->t_data));
+	else
+		ItemPointerCopy(&heapTuple->t_self, &root_tid);
+
+	/*
 	 * Get information from the state structure.  Fall out if nothing to do.
 	 */
 	numIndexes = indstate->ri_NumIndices;
@@ -112,6 +133,17 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 			continue;
 
 		/*
+		 * If we've done a WARM update, then we must not insert a new index
+		 * tuple if none of the index keys have changed. This is not just an
+		 * optimization, but a requirement for WARM to work correctly.
+		 */
+		if (warm_update)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
+		/*
 		 * Expressional and partial indexes on system catalogs are not
 		 * supported, nor exclusion constraints, nor deferred uniqueness
 		 */
@@ -136,11 +168,12 @@ CatalogIndexInsert(CatalogIndexState indstate, HeapTuple heapTuple)
 		index_insert(relationDescs[i],	/* index relation */
 					 values,	/* array of index Datums */
 					 isnull,	/* is-null flags */
-					 &(heapTuple->t_self),		/* tid of heap tuple */
+					 &root_tid,
 					 heapRelation,
 					 relationDescs[i]->rd_index->indisunique ?
 					 UNIQUE_CHECK_YES : UNIQUE_CHECK_NO,
-					 indexInfo);
+					 indexInfo,
+					 warm_update);
 	}
 
 	ExecDropSingleTupleTableSlot(slot);
@@ -168,7 +201,7 @@ CatalogTupleInsert(Relation heapRel, HeapTuple tup)
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 	CatalogCloseIndexes(indstate);
 
 	return oid;
@@ -190,7 +223,7 @@ CatalogTupleInsertWithInfo(Relation heapRel, HeapTuple tup,
 
 	oid = simple_heap_insert(heapRel, tup);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, NULL, false);
 
 	return oid;
 }
@@ -210,12 +243,14 @@ void
 CatalogTupleUpdate(Relation heapRel, ItemPointer otid, HeapTuple tup)
 {
 	CatalogIndexState indstate;
+	bool	warm_update;
+	Bitmapset	*modified_attrs;
 
 	indstate = CatalogOpenIndexes(heapRel);
 
-	simple_heap_update(heapRel, otid, tup);
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, modified_attrs, warm_update);
 	CatalogCloseIndexes(indstate);
 }
 
@@ -231,9 +266,12 @@ void
 CatalogTupleUpdateWithInfo(Relation heapRel, ItemPointer otid, HeapTuple tup,
 						   CatalogIndexState indstate)
 {
-	simple_heap_update(heapRel, otid, tup);
+	Bitmapset  *modified_attrs;
+	bool		warm_update;
+
+	simple_heap_update(heapRel, otid, tup, &modified_attrs, &warm_update);
 
-	CatalogIndexInsert(indstate, tup);
+	CatalogIndexInsert(indstate, tup, modified_attrs, warm_update);
 }
 
 /*
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 0217f39..4ef964f 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -530,6 +530,7 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_tuples_deleted(C.oid) AS n_tup_del,
             pg_stat_get_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_tuples_warm_updated(C.oid) AS n_tup_warm_upd,
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
@@ -560,7 +561,8 @@ CREATE VIEW pg_stat_xact_all_tables AS
             pg_stat_get_xact_tuples_inserted(C.oid) AS n_tup_ins,
             pg_stat_get_xact_tuples_updated(C.oid) AS n_tup_upd,
             pg_stat_get_xact_tuples_deleted(C.oid) AS n_tup_del,
-            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd
+            pg_stat_get_xact_tuples_hot_updated(C.oid) AS n_tup_hot_upd,
+            pg_stat_get_xact_tuples_warm_updated(C.oid) AS n_tup_warm_upd
     FROM pg_class C LEFT JOIN
          pg_index I ON C.oid = I.indrelid
          LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
diff --git a/src/backend/commands/constraint.c b/src/backend/commands/constraint.c
index e2544e5..330b661 100644
--- a/src/backend/commands/constraint.c
+++ b/src/backend/commands/constraint.c
@@ -40,6 +40,7 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	TriggerData *trigdata = castNode(TriggerData, fcinfo->context);
 	const char *funcname = "unique_key_recheck";
 	HeapTuple	new_row;
+	HeapTupleData heapTuple;
 	ItemPointerData tmptid;
 	Relation	indexRel;
 	IndexInfo  *indexInfo;
@@ -102,7 +103,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 	 * removed.
 	 */
 	tmptid = new_row->t_self;
-	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL))
+	if (!heap_hot_search(&tmptid, trigdata->tg_relation, SnapshotSelf, NULL,
+				NULL, NULL, &heapTuple))
 	{
 		/*
 		 * All rows in the HOT chain are dead, so skip the check.
@@ -166,7 +168,8 @@ unique_key_recheck(PG_FUNCTION_ARGS)
 		 */
 		index_insert(indexRel, values, isnull, &(new_row->t_self),
 					 trigdata->tg_relation, UNIQUE_CHECK_EXISTING,
-					 indexInfo);
+					 indexInfo,
+					 false);
 	}
 	else
 	{
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index 8c58808..1366398 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -2689,6 +2689,8 @@ CopyFrom(CopyState cstate)
 					if (resultRelInfo->ri_NumIndices > 0)
 						recheckIndexes = ExecInsertIndexTuples(slot,
 															&(tuple->t_self),
+															&(tuple->t_self),
+															NULL,
 															   estate,
 															   false,
 															   NULL,
@@ -2843,6 +2845,7 @@ CopyFromInsertBatch(CopyState cstate, EState *estate, CommandId mycid,
 			ExecStoreTuple(bufferedTuples[i], myslot, InvalidBuffer, false);
 			recheckIndexes =
 				ExecInsertIndexTuples(myslot, &(bufferedTuples[i]->t_self),
+									  &(bufferedTuples[i]->t_self), NULL,
 									  estate, false, NULL, NIL);
 			ExecARInsertTriggers(estate, resultRelInfo,
 								 bufferedTuples[i],
diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c
index 4861799..b62b0e9 100644
--- a/src/backend/commands/indexcmds.c
+++ b/src/backend/commands/indexcmds.c
@@ -694,7 +694,14 @@ DefineIndex(Oid relationId,
 	 * visible to other transactions before we start to build the index. That
 	 * will prevent them from making incompatible HOT updates.  The new index
 	 * will be marked not indisready and not indisvalid, so that no one else
-	 * tries to either insert into it or use it for queries.
+	 * tries to either insert into it or use it for queries. In addition,
+	 * WARM updates will be disallowed if an update modifies one of the
+	 * columns used by this new index. This is necessary to ensure that we
+	 * don't create WARM tuples which do not have a corresponding entry in
+	 * this index. Note that during the second phase we will index only those
+	 * heap tuples whose root line pointer is not already in the index, so
+	 * it's important that all tuples in a given chain have the same value
+	 * for every indexed column (including this new index).
 	 *
 	 * We must commit our current transaction so that the index becomes
 	 * visible; then start another.  Note that all the data structures we just
@@ -742,7 +749,10 @@ DefineIndex(Oid relationId,
 	 * marked as "not-ready-for-inserts".  The index is consulted while
 	 * deciding HOT-safety though.  This arrangement ensures that no new HOT
 	 * chains can be created where the new tuple and the old tuple in the
-	 * chain have different index keys.
+	 * chain have different index keys. Also, the new index is consulted for
+	 * deciding whether a WARM update is possible, and a WARM update is not done
+	 * if a column used by this index is being updated. This ensures that we
+	 * don't create WARM tuples which are not indexed by this index.
 	 *
 	 * We now take a new snapshot, and build the index using all tuples that
 	 * are visible in this snapshot.  We can be sure that any HOT updates to
@@ -777,7 +787,8 @@ DefineIndex(Oid relationId,
 	/*
 	 * Update the pg_index row to mark the index as ready for inserts. Once we
 	 * commit this transaction, any new transactions that open the table must
-	 * insert new entries into the index for insertions and non-HOT updates.
+	 * insert new entries into the index for insertions and non-HOT updates or
+	 * WARM updates for which this index needs a new entry.
 	 */
 	index_set_state_flags(indexRelationId, INDEX_CREATE_SET_READY);
 
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index d418d56..dbec153 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -9925,6 +9925,7 @@ ATExecSetRelOptions(Relation rel, List *defList, AlterTableType operation,
 	Datum		datum;
 	bool		isnull;
 	Datum		newOptions;
+	Datum		std_options;
 	Datum		repl_val[Natts_pg_class];
 	bool		repl_null[Natts_pg_class];
 	bool		repl_repl[Natts_pg_class];
@@ -9969,7 +9970,7 @@ ATExecSetRelOptions(Relation rel, List *defList, AlterTableType operation,
 		case RELKIND_TOASTVALUE:
 		case RELKIND_MATVIEW:
 		case RELKIND_PARTITIONED_TABLE:
-			(void) heap_reloptions(rel->rd_rel->relkind, newOptions, true);
+			std_options = heap_reloptions(rel->rd_rel->relkind, newOptions, true);
 			break;
 		case RELKIND_VIEW:
 			(void) view_reloptions(newOptions, true);
@@ -9985,6 +9986,17 @@ ATExecSetRelOptions(Relation rel, List *defList, AlterTableType operation,
 			break;
 	}
 
+	if (rel->rd_rel->relkind == RELKIND_RELATION ||
+		rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
+	{
+		bool new_enable_warm = ((StdRdOptions *) DatumGetPointer(std_options))->enable_warm;
+		if (RelationWarmUpdatesEnabled(rel) && !new_enable_warm)
+			ereport(ERROR,
+					(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					 errmsg("WARM updates cannot be disabled on the table \"%s\"",
+						 RelationGetRelationName(rel))));
+	}
+
 	/* Special-case validation of view options */
 	if (rel->rd_rel->relkind == RELKIND_VIEW)
 	{
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index 5b43a66..c2d5705 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -104,6 +104,39 @@
  */
 #define PREFETCH_SIZE			((BlockNumber) 32)
 
+/*
+ * Structure to track WARM chains that can be converted into HOT chains during
+ * this run.
+ *
+ * To reduce the space requirement we use bitfields, but the way things are
+ * laid out we still waste one byte per candidate chain.
+ */
+typedef struct LVWarmChain
+{
+	ItemPointerData	chain_tid;			/* root of the chain */
+
+	/*
+	 * 1 - if the chain contains only post-warm tuples
+	 * 0 - if the chain contains only pre-warm tuples
+	 */
+	uint8			is_postwarm_chain:2;
+
+	/* 1 - if this chain must remain a WARM chain */
+	uint8			keep_warm_chain:2;
+
+	/*
+	 * Number of CLEAR pointers to this root TID found so far - must never be
+	 * more than 2.
+	 */
+	uint8			num_clear_pointers:2;
+
+	/*
+	 * Number of WARM pointers to this root TID found so far - must never be
+	 * more than 1.
+	 */
+	uint8			num_warm_pointers:2;
+} LVWarmChain;
+
 typedef struct LVRelStats
 {
 	/* hasindex = true means two-pass strategy; false means one-pass */
@@ -122,6 +155,14 @@ typedef struct LVRelStats
 	BlockNumber pages_removed;
 	double		tuples_deleted;
 	BlockNumber nonempty_pages; /* actually, last nonempty page + 1 */
+
+	/* List of candidate WARM chains that can be converted into HOT chains */
+	/* NB: this list is ordered by TID of the root pointers */
+	int				num_warm_chains;	/* current # of entries */
+	int				max_warm_chains;	/* # slots allocated in array */
+	LVWarmChain 	*warm_chains;		/* array of LVWarmChain */
+	double			num_non_convertible_warm_chains;
+
 	/* List of TIDs of tuples we intend to delete */
 	/* NB: this list is ordered by TID address */
 	int			num_dead_tuples;	/* current # of entries */
@@ -150,6 +191,7 @@ static void lazy_scan_heap(Relation onerel, int options,
 static void lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats);
 static bool lazy_check_needs_freeze(Buffer buf, bool *hastup);
 static void lazy_vacuum_index(Relation indrel,
+				  bool clear_warm,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats);
 static void lazy_cleanup_index(Relation indrel,
@@ -157,6 +199,10 @@ static void lazy_cleanup_index(Relation indrel,
 				   LVRelStats *vacrelstats);
 static int lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 				 int tupindex, LVRelStats *vacrelstats, Buffer *vmbuffer);
+static int lazy_warmclear_page(Relation onerel, BlockNumber blkno,
+				 Buffer buffer, int chainindex, LVRelStats *vacrelstats,
+				 Buffer *vmbuffer, bool check_all_visible);
+static void lazy_reset_warm_pointer_count(LVRelStats *vacrelstats);
 static bool should_attempt_truncation(LVRelStats *vacrelstats);
 static void lazy_truncate_heap(Relation onerel, LVRelStats *vacrelstats);
 static BlockNumber count_nondeletable_pages(Relation onerel,
@@ -164,8 +210,15 @@ static BlockNumber count_nondeletable_pages(Relation onerel,
 static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks);
 static void lazy_record_dead_tuple(LVRelStats *vacrelstats,
 					   ItemPointer itemptr);
-static bool lazy_tid_reaped(ItemPointer itemptr, void *state);
+static void lazy_record_warm_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr);
+static void lazy_record_clear_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr);
+static IndexBulkDeleteCallbackResult lazy_tid_reaped(ItemPointer itemptr, bool is_warm, void *state);
+static IndexBulkDeleteCallbackResult lazy_indexvac_phase1(ItemPointer itemptr, bool is_warm, void *state);
+static IndexBulkDeleteCallbackResult lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state);
 static int	vac_cmp_itemptr(const void *left, const void *right);
+static int vac_cmp_warm_chain(const void *left, const void *right);
 static bool heap_page_is_all_visible(Relation rel, Buffer buf,
 					 TransactionId *visibility_cutoff_xid, bool *all_frozen);
 
@@ -690,8 +743,10 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		 * If we are close to overrunning the available space for dead-tuple
 		 * TIDs, pause and do a cycle of vacuuming before we tackle this page.
 		 */
-		if ((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
-			vacrelstats->num_dead_tuples > 0)
+		if (((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
+			vacrelstats->num_dead_tuples > 0) ||
+			((vacrelstats->max_warm_chains - vacrelstats->num_warm_chains) < MaxHeapTuplesPerPage &&
+			 vacrelstats->num_warm_chains > 0))
 		{
 			const int	hvp_index[] = {
 				PROGRESS_VACUUM_PHASE,
@@ -721,6 +776,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			/* Remove index entries */
 			for (i = 0; i < nindexes; i++)
 				lazy_vacuum_index(Irel[i],
+								  (vacrelstats->num_warm_chains > 0),
 								  &indstats[i],
 								  vacrelstats);
 
@@ -743,6 +799,9 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			 * valid.
 			 */
 			vacrelstats->num_dead_tuples = 0;
+			vacrelstats->num_warm_chains = 0;
+			memset(vacrelstats->warm_chains, 0,
+					vacrelstats->max_warm_chains * sizeof (LVWarmChain));
 			vacrelstats->num_index_scans++;
 
 			/* Report that we are once again scanning the heap */
@@ -947,15 +1006,33 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 				continue;
 			}
 
+			ItemPointerSet(&(tuple.t_self), blkno, offnum);
+
 			/* Redirect items mustn't be touched */
 			if (ItemIdIsRedirected(itemid))
 			{
+				HeapCheckWarmChainStatus status = 0;
+
+				if (RelationWarmUpdatesEnabled(onerel))
+					status = heap_check_warm_chain(page, &tuple.t_self, false);
+				if (HCWC_IS_WARM_UPDATED(status))
+				{
+					/*
+					 * A chain which is either complete WARM or CLEAR is a
+					 * candidate for chain conversion. Remember the chain and
+					 * whether the chain has all WARM tuples or not.
+					 */
+					if (HCWC_IS_ALL_WARM(status))
+						lazy_record_warm_chain(vacrelstats, &tuple.t_self);
+					else if (HCWC_IS_ALL_CLEAR(status))
+						lazy_record_clear_chain(vacrelstats, &tuple.t_self);
+					else
+						vacrelstats->num_non_convertible_warm_chains++;
+				}
 				hastup = true;	/* this page won't be truncatable */
 				continue;
 			}
 
-			ItemPointerSet(&(tuple.t_self), blkno, offnum);
-
 			/*
 			 * DEAD item pointers are to be vacuumed normally; but we don't
 			 * count them in tups_vacuumed, else we'd be double-counting (at
@@ -975,6 +1052,29 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			tuple.t_len = ItemIdGetLength(itemid);
 			tuple.t_tableOid = RelationGetRelid(onerel);
 
+			if (!HeapTupleIsHeapOnly(&tuple))
+			{
+				HeapCheckWarmChainStatus status = 0;
+
+				if (RelationWarmUpdatesEnabled(onerel))
+					status = heap_check_warm_chain(page, &tuple.t_self, false);
+
+				if (HCWC_IS_WARM_UPDATED(status))
+				{
+					/*
+					 * A chain which is either complete WARM or CLEAR is a
+					 * candidate for chain conversion. Remember the chain
+					 * and whether it has all WARM tuples or not.
+					 */
+					if (HCWC_IS_ALL_WARM(status))
+						lazy_record_warm_chain(vacrelstats, &tuple.t_self);
+					else if (HCWC_IS_ALL_CLEAR(status))
+						lazy_record_clear_chain(vacrelstats, &tuple.t_self);
+					else
+						vacrelstats->num_non_convertible_warm_chains++;
+				}
+			}
+
 			tupgone = false;
 
 			switch (HeapTupleSatisfiesVacuum(&tuple, OldestXmin, buf))
@@ -1040,6 +1140,19 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 							break;
 						}
 
+						/*
+						 * If this tuple was ever WARM updated or is a WARM
+						 * tuple, there could be multiple index entries
+						 * pointing to the root of this chain. We can't do
+						 * index-only scans for such tuples without verifying
+						 * the index keys. So mark the page as !all_visible.
+						 */
+						if (HeapTupleHeaderIsWarmUpdated(tuple.t_data))
+						{
+							all_visible = false;
+							break;
+						}
+
 						/* Track newest xmin on page. */
 						if (TransactionIdFollows(xmin, visibility_cutoff_xid))
 							visibility_cutoff_xid = xmin;
@@ -1282,7 +1395,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 
 	/* If any tuples need to be deleted, perform final vacuum cycle */
 	/* XXX put a threshold on min number of tuples here? */
-	if (vacrelstats->num_dead_tuples > 0)
+	if (vacrelstats->num_dead_tuples > 0 || vacrelstats->num_warm_chains > 0)
 	{
 		const int	hvp_index[] = {
 			PROGRESS_VACUUM_PHASE,
@@ -1300,6 +1413,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		/* Remove index entries */
 		for (i = 0; i < nindexes; i++)
 			lazy_vacuum_index(Irel[i],
+							  (vacrelstats->num_warm_chains > 0),
 							  &indstats[i],
 							  vacrelstats);
 
@@ -1371,7 +1485,10 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
  *
  *		This routine marks dead tuples as unused and compacts out free
  *		space on their pages.  Pages not having dead tuples recorded from
- *		lazy_scan_heap are not visited at all.
+ *		lazy_scan_heap are not visited at all. This routine also converts
+ *		candidate WARM chains to HOT chains by clearing WARM related flags. The
+ *		candidate chains are determined by the preceding index scans after
+ *		looking at the data collected by the first heap scan.
  *
  * Note: the reason for doing this as a second pass is we cannot remove
  * the tuples until we've removed their index entries, and we want to
@@ -1380,7 +1497,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 static void
 lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 {
-	int			tupindex;
+	int			tupindex, chainindex;
 	int			npages;
 	PGRUsage	ru0;
 	Buffer		vmbuffer = InvalidBuffer;
@@ -1389,33 +1506,69 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 	npages = 0;
 
 	tupindex = 0;
-	while (tupindex < vacrelstats->num_dead_tuples)
+	chainindex = 0;
+	while (tupindex < vacrelstats->num_dead_tuples ||
+		   chainindex < vacrelstats->num_warm_chains)
 	{
-		BlockNumber tblk;
+		BlockNumber tblk, chainblk, vacblk;
 		Buffer		buf;
 		Page		page;
 		Size		freespace;
 
 		vacuum_delay_point();
 
-		tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
-		buf = ReadBufferExtended(onerel, MAIN_FORKNUM, tblk, RBM_NORMAL,
+		tblk = chainblk = InvalidBlockNumber;
+		if (chainindex < vacrelstats->num_warm_chains)
+			chainblk =
+				ItemPointerGetBlockNumber(&(vacrelstats->warm_chains[chainindex].chain_tid));
+
+		if (tupindex < vacrelstats->num_dead_tuples)
+			tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
+
+		if (tblk == InvalidBlockNumber)
+			vacblk = chainblk;
+		else if (chainblk == InvalidBlockNumber)
+			vacblk = tblk;
+		else
+			vacblk = Min(chainblk, tblk);
+
+		Assert(vacblk != InvalidBlockNumber);
+
+		buf = ReadBufferExtended(onerel, MAIN_FORKNUM, vacblk, RBM_NORMAL,
 								 vac_strategy);
-		if (!ConditionalLockBufferForCleanup(buf))
+
+		if (vacblk == chainblk)
+			LockBufferForCleanup(buf);
+		else if (!ConditionalLockBufferForCleanup(buf))
 		{
 			ReleaseBuffer(buf);
 			++tupindex;
 			continue;
 		}
-		tupindex = lazy_vacuum_page(onerel, tblk, buf, tupindex, vacrelstats,
-									&vmbuffer);
+
+		/*
+		 * Convert WARM chains on this page. This should be done before
+		 * vacuuming the page to ensure that we can correctly set visibility
+		 * bits after clearing WARM chains.
+		 *
+		 * If we are going to vacuum this page then don't check for
+		 * all-visibility just yet.
+		 */
+		if (vacblk == chainblk)
+			chainindex = lazy_warmclear_page(onerel, chainblk, buf, chainindex,
+					vacrelstats, &vmbuffer, chainblk != tblk);
+
+		if (vacblk == tblk)
+			tupindex = lazy_vacuum_page(onerel, tblk, buf, tupindex, vacrelstats,
+					&vmbuffer);
 
 		/* Now that we've compacted the page, record its available space */
 		page = BufferGetPage(buf);
 		freespace = PageGetHeapFreeSpace(page);
 
 		UnlockReleaseBuffer(buf);
-		RecordPageWithFreeSpace(onerel, tblk, freespace);
+		RecordPageWithFreeSpace(onerel, vacblk, freespace);
 		npages++;
 	}
 
@@ -1434,6 +1587,107 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 }
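
The loop above interleaves two block-sorted work lists (dead tuples and candidate WARM chains) so each heap page is visited only once. A minimal, self-contained sketch of the block-selection step, with simplified types; the names here are illustrative, not taken from the patch:

```c
typedef unsigned int BlockNumber;
#define InvalidBlockNumber ((BlockNumber) 0xFFFFFFFF)

/* Pick the next heap block to visit, merging two block-sorted work
 * lists the way lazy_vacuum_heap's loop does: take whichever list has
 * the lower next block number; equal block numbers mean both lists
 * are serviced during the same page visit. */
BlockNumber
next_block_to_visit(const BlockNumber *dead_blks, int ndead, int tupindex,
                    const BlockNumber *chain_blks, int nchains, int chainindex)
{
    BlockNumber tblk = InvalidBlockNumber;
    BlockNumber chainblk = InvalidBlockNumber;

    if (tupindex < ndead)
        tblk = dead_blks[tupindex];
    if (chainindex < nchains)
        chainblk = chain_blks[chainindex];

    if (tblk == InvalidBlockNumber)
        return chainblk;            /* may itself be invalid: both done */
    if (chainblk == InvalidBlockNumber)
        return tblk;
    return (chainblk < tblk) ? chainblk : tblk;
}
```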
 
 /*
+ *	lazy_warmclear_page() -- clear various WARM bits on the tuples.
+ *
+ * Caller must hold pin and buffer cleanup lock on the buffer.
+ *
+ * chainindex is the index in vacrelstats->warm_chains of the first
+ * candidate chain for this page.  We assume the rest follow sequentially.
+ * The return value is the first chainindex past the chains of this page.
+ *
+ * If check_all_visible is set then we also check if the page has now become
+ * all visible and update visibility map.
+ */
+static int
+lazy_warmclear_page(Relation onerel, BlockNumber blkno, Buffer buffer,
+				 int chainindex, LVRelStats *vacrelstats, Buffer *vmbuffer,
+				 bool check_all_visible)
+{
+	Page			page = BufferGetPage(buffer);
+	OffsetNumber	cleared_offnums[MaxHeapTuplesPerPage];
+	int				num_cleared = 0;
+	TransactionId	visibility_cutoff_xid;
+	bool			all_frozen;
+
+	pgstat_progress_update_param(PROGRESS_VACUUM_HEAP_BLKS_WARMCLEARED, blkno);
+
+	START_CRIT_SECTION();
+
+	for (; chainindex < vacrelstats->num_warm_chains; chainindex++)
+	{
+		BlockNumber tblk;
+		LVWarmChain	*chain;
+
+		chain = &vacrelstats->warm_chains[chainindex];
+
+		tblk = ItemPointerGetBlockNumber(&chain->chain_tid);
+		if (tblk != blkno)
+			break;				/* past end of tuples for this block */
+
+		/*
+		 * Since a heap page can have no more than MaxHeapTuplesPerPage
+		 * offnums and we process each offnum only once, MaxHeapTuplesPerPage
+		 * size array should be enough to hold all cleared tuples in this page.
+		 */
+		if (!chain->keep_warm_chain)
+			num_cleared += heap_clear_warm_chain(page, &chain->chain_tid,
+					cleared_offnums + num_cleared);
+	}
+
+	/*
+	 * Mark buffer dirty before we write WAL.
+	 */
+	MarkBufferDirty(buffer);
+
+	/* XLOG stuff */
+	if (RelationNeedsWAL(onerel))
+	{
+		XLogRecPtr	recptr;
+
+		recptr = log_heap_warmclear(onerel, buffer,
+								cleared_offnums, num_cleared);
+		PageSetLSN(page, recptr);
+	}
+
+	END_CRIT_SECTION();
+
+	/* If not checking for all-visibility then we're done */
+	if (!check_all_visible)
+		return chainindex;
+
+	/*
+	 * The following code should match the corresponding code in
+	 * lazy_vacuum_page.
+	 */
+	if (heap_page_is_all_visible(onerel, buffer, &visibility_cutoff_xid,
+								 &all_frozen))
+		PageSetAllVisible(page);
+
+	/*
+	 * All the changes to the heap page have been done. If the all-visible
+	 * flag is now set, also set the VM all-visible bit (and, if possible, the
+	 * all-frozen bit) unless this has already been done previously.
+	 */
+	if (PageIsAllVisible(page))
+	{
+		uint8		vm_status = visibilitymap_get_status(onerel, blkno, vmbuffer);
+		uint8		flags = 0;
+
+		/* Add the all-visible and, if possible, all-frozen bits to flags */
+		if ((vm_status & VISIBILITYMAP_ALL_VISIBLE) == 0)
+			flags |= VISIBILITYMAP_ALL_VISIBLE;
+		if ((vm_status & VISIBILITYMAP_ALL_FROZEN) == 0 && all_frozen)
+			flags |= VISIBILITYMAP_ALL_FROZEN;
+
+		Assert(BufferIsValid(*vmbuffer));
+		if (flags != 0)
+			visibilitymap_set(onerel, blkno, buffer, InvalidXLogRecPtr,
+							  *vmbuffer, visibility_cutoff_xid, flags);
+	}
+	return chainindex;
+}
+
+/*
  *	lazy_vacuum_page() -- free dead tuples on a page
  *					 and repair its fragmentation.
  *
@@ -1586,6 +1840,24 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
 	return false;
 }
 
+/*
+ * Reset counters tracking number of WARM and CLEAR pointers per candidate TID.
+ * These counters are maintained per index and cleared when the next index is
+ * picked up for cleanup.
+ *
+ * We don't touch the keep_warm_chain since once a chain is known to be
+ * non-convertible, we must remember that across all indexes.
+ */
+static void
+lazy_reset_warm_pointer_count(LVRelStats *vacrelstats)
+{
+	int i;
+	for (i = 0; i < vacrelstats->num_warm_chains; i++)
+	{
+		LVWarmChain *chain = &vacrelstats->warm_chains[i];
+		chain->num_clear_pointers = chain->num_warm_pointers = 0;
+	}
+}
 
 /*
  *	lazy_vacuum_index() -- vacuum one index relation.
@@ -1595,6 +1867,7 @@ lazy_check_needs_freeze(Buffer buf, bool *hastup)
  */
 static void
 lazy_vacuum_index(Relation indrel,
+				  bool clear_warm,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats)
 {
@@ -1610,15 +1883,87 @@ lazy_vacuum_index(Relation indrel,
 	ivinfo.num_heap_tuples = vacrelstats->old_rel_tuples;
 	ivinfo.strategy = vac_strategy;
 
-	/* Do bulk deletion */
-	*stats = index_bulk_delete(&ivinfo, *stats,
-							   lazy_tid_reaped, (void *) vacrelstats);
+	/*
+	 * If told, convert WARM chains into HOT chains.
+	 *
+	 * We must have already collected candidate WARM chains i.e. chains that
+	 * have either all tuples with HEAP_WARM_TUPLE flag set or none.
+	 *
+	 * This works in two phases. In the first phase, we do a complete index
+	 * scan and collect information about index pointers to the candidate
+	 * chains, but we don't do conversion. To be precise, we count the number
+	 * of WARM and CLEAR index pointers to each candidate chain and use that
+	 * knowledge to arrive at a decision and do the actual conversion during
+	 * the second phase (we kill known dead pointers though in this phase).
+	 *
+	 * In the second phase, for each candidate chain we check if we have seen a
+	 * WARM index pointer. For such chains, we kill the CLEAR pointer and
+	 * WARM pointer to a WARM chain, that means the chain is reachable
+	 * from the CLEAR pointer (because, say, the WARM update added no new entry
+	 * WARM pointer to a WARM chain, that means that the chain is reachable
+	 * from the CLEAR pointer (because say WARM update did not add a new entry
+	 * for this index). In that case, we do nothing.  There is a third case
+	 * where we find two CLEAR pointers to a candidate chain. This can happen
+	 * because of aborted vacuums. We don't handle that case yet, but it should
+	 * be possible to apply the same recheck logic and find which of the clear
+	 * pointers is redundant and should be removed.
+	 *
+	 * For CLEAR chains, we just kill the WARM pointer, if it exists, and keep
+	 * the CLEAR pointer.
+	 */
+	if (clear_warm)
+	{
+		/*
+		 * Before starting the index scan, reset the counters of WARM and CLEAR
+		 * pointers, probably carried forward from the previous index.
+		 */
+		lazy_reset_warm_pointer_count(vacrelstats);
+
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_indexvac_phase1, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to remove %d row versions, found "
+						"%.0f warm pointers, %.0f clear pointers, removed "
+						"%.0f warm pointers, removed %.0f clear pointers",
+						RelationGetRelationName(indrel),
+						vacrelstats->num_dead_tuples,
+						(*stats)->num_warm_pointers,
+						(*stats)->num_clear_pointers,
+						(*stats)->warm_pointers_removed,
+						(*stats)->clear_pointers_removed)));
+
+		(*stats)->num_warm_pointers = 0;
+		(*stats)->num_clear_pointers = 0;
+		(*stats)->warm_pointers_removed = 0;
+		(*stats)->clear_pointers_removed = 0;
+		(*stats)->pointers_cleared = 0;
+
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_indexvac_phase2, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to convert WARM pointers, found "
+						"%.0f WARM pointers, %.0f CLEAR pointers, removed "
+						"%.0f WARM pointers, removed %.0f CLEAR pointers, "
+						"cleared %.0f WARM pointers",
+						RelationGetRelationName(indrel),
+						(*stats)->num_warm_pointers,
+						(*stats)->num_clear_pointers,
+						(*stats)->warm_pointers_removed,
+						(*stats)->clear_pointers_removed,
+						(*stats)->pointers_cleared)));
+	}
+	else
+	{
+		/* Do bulk deletion */
+		*stats = index_bulk_delete(&ivinfo, *stats,
+				lazy_tid_reaped, (void *) vacrelstats);
+		ereport(elevel,
+				(errmsg("scanned index \"%s\" to remove %d row versions",
+						RelationGetRelationName(indrel),
+						vacrelstats->num_dead_tuples),
+				 errdetail("%s.", pg_rusage_show(&ru0))));
+	}
 
-	ereport(elevel,
-			(errmsg("scanned index \"%s\" to remove %d row versions",
-					RelationGetRelationName(indrel),
-					vacrelstats->num_dead_tuples),
-			 errdetail("%s.", pg_rusage_show(&ru0))));
 }
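
The two-phase scheme described in the comment above boils down to a small decision table. This sketch distills what the second phase decides for each index pointer, given the per-chain counts gathered in the first phase; the function and enum names are illustrative, and the CLEAR-chain branch follows the prose ("for CLEAR chains, we just kill the WARM pointer, if it exists"), not code shown here:

```c
typedef enum { ACTION_KEEP, ACTION_DELETE, ACTION_CLEAR_WARM } PointerAction;

/* Phase-2 decision for one index pointer into a candidate chain:
 * - WARM pointer to a WARM chain: convert it into a CLEAR pointer.
 * - CLEAR pointer to a WARM chain: delete it only if a WARM pointer
 *   to the same chain was also seen; if it is the chain's only
 *   pointer, keep it.
 * - Pointer to a CLEAR chain: kill the WARM pointer (left over from
 *   an aborted WARM update), keep the CLEAR one. */
PointerAction
phase2_action(int chain_is_warm, int ptr_is_warm,
              int num_warm_pointers, int num_clear_pointers)
{
    if (chain_is_warm)
    {
        if (ptr_is_warm)
            return ACTION_CLEAR_WARM;
        if (num_warm_pointers > 0 && num_clear_pointers > 0)
            return ACTION_DELETE;
        return ACTION_KEEP;
    }
    return ptr_is_warm ? ACTION_DELETE : ACTION_KEEP;
}
```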
 
 /*
@@ -1992,9 +2337,11 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 
 	if (vacrelstats->hasindex)
 	{
-		maxtuples = (vac_work_mem * 1024L) / sizeof(ItemPointerData);
+		maxtuples = (vac_work_mem * 1024L) / (sizeof(ItemPointerData) +
+				sizeof(LVWarmChain));
 		maxtuples = Min(maxtuples, INT_MAX);
-		maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));
+		maxtuples = Min(maxtuples, MaxAllocSize / (sizeof(ItemPointerData) +
+					sizeof(LVWarmChain)));
 
 		/* curious coding here to ensure the multiplication can't overflow */
 		if ((BlockNumber) (maxtuples / LAZY_ALLOC_TUPLES) > relblocks)
@@ -2012,6 +2359,57 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 	vacrelstats->max_dead_tuples = (int) maxtuples;
 	vacrelstats->dead_tuples = (ItemPointer)
 		palloc(maxtuples * sizeof(ItemPointerData));
+
+	/*
+	 * XXX Cheat for now and allocate the same size array for tracking warm
+	 * chains. maxtuples must have been already adjusted above to ensure we
+	 * don't cross vac_work_mem.
+	 */
+	vacrelstats->num_warm_chains = 0;
+	vacrelstats->max_warm_chains = (int) maxtuples;
+	vacrelstats->warm_chains = (LVWarmChain *)
+		palloc0(maxtuples * sizeof(LVWarmChain));
+}
+
+/*
+ * lazy_record_clear_chain - remember one CLEAR chain
+ */
+static void
+lazy_record_clear_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr)
+{
+	/*
+	 * The array shouldn't overflow under normal behavior, but perhaps it
+	 * could if we are given a really small maintenance_work_mem. In that
+	 * case, just forget the last few tuples (we'll get 'em next time).
+	 */
+	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	{
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 0;
+		vacrelstats->num_warm_chains++;
+	}
+}
+
+/*
+ * lazy_record_warm_chain - remember one WARM chain
+ */
+static void
+lazy_record_warm_chain(LVRelStats *vacrelstats,
+					   ItemPointer itemptr)
+{
+	/*
+	 * The array shouldn't overflow under normal behavior, but perhaps it
+	 * could if we are given a really small maintenance_work_mem. In that
+	 * case, just forget the last few tuples (we'll get 'em next time).
+	 */
+	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	{
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
+		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 1;
+		vacrelstats->num_warm_chains++;
+	}
 }
 
 /*
@@ -2042,8 +2440,8 @@ lazy_record_dead_tuple(LVRelStats *vacrelstats,
  *
  *		Assumes dead_tuples array is in sorted order.
  */
-static bool
-lazy_tid_reaped(ItemPointer itemptr, void *state)
+static IndexBulkDeleteCallbackResult
+lazy_tid_reaped(ItemPointer itemptr, bool is_warm, void *state)
 {
 	LVRelStats *vacrelstats = (LVRelStats *) state;
 	ItemPointer res;
@@ -2054,7 +2452,207 @@ lazy_tid_reaped(ItemPointer itemptr, void *state)
 								sizeof(ItemPointerData),
 								vac_cmp_itemptr);
 
-	return (res != NULL);
+	return (res != NULL) ? IBDCR_DELETE : IBDCR_KEEP;
+}
+
+/*
+ *	lazy_indexvac_phase1() -- run first pass of index vacuum
+ *
+ *		This has the right signature to be an IndexBulkDeleteCallback.
+ */
+static IndexBulkDeleteCallbackResult
+lazy_indexvac_phase1(ItemPointer itemptr, bool is_warm, void *state)
+{
+	LVRelStats		*vacrelstats = (LVRelStats *) state;
+	ItemPointer		res;
+	LVWarmChain	*chain;
+
+	res = (ItemPointer) bsearch((void *) itemptr,
+								(void *) vacrelstats->dead_tuples,
+								vacrelstats->num_dead_tuples,
+								sizeof(ItemPointerData),
+								vac_cmp_itemptr);
+
+	if (res != NULL)
+		return IBDCR_DELETE;
+
+	chain = (LVWarmChain *) bsearch((void *) itemptr,
+								(void *) vacrelstats->warm_chains,
+								vacrelstats->num_warm_chains,
+								sizeof(LVWarmChain),
+								vac_cmp_warm_chain);
+	if (chain != NULL)
+	{
+		if (is_warm)
+			chain->num_warm_pointers++;
+		else
+			chain->num_clear_pointers++;
+	}
+	return IBDCR_KEEP;
+}
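
Phase 1 only tallies: each index pointer's root TID is looked up in the TID-sorted warm_chains array and the matching WARM or CLEAR counter is bumped. A simplified, self-contained sketch of that lookup; the types and names are stand-ins, not the patch's:

```c
#include <stdlib.h>

/* Simplified stand-ins for ItemPointerData and LVWarmChain. */
typedef struct { unsigned int blk; unsigned short off; } TidLite;
typedef struct { TidLite chain_tid; int warm_ptrs; int clear_ptrs; } ChainLite;

/* Compare a TID key against a chain's root TID; the array is kept
 * sorted by (block, offset), as the candidate list is in the patch. */
int
cmp_chain(const void *key, const void *elem)
{
    const TidLite *t = key;
    const ChainLite *c = elem;

    if (t->blk != c->chain_tid.blk)
        return (t->blk < c->chain_tid.blk) ? -1 : 1;
    if (t->off != c->chain_tid.off)
        return (t->off < c->chain_tid.off) ? -1 : 1;
    return 0;
}

/* Phase-1 style tally: find the candidate chain for this pointer, if
 * any, and count it as WARM or CLEAR.  Pointers to non-candidate
 * chains are simply ignored. */
void
tally_pointer(ChainLite *chains, int nchains, TidLite tid, int is_warm)
{
    ChainLite *c = bsearch(&tid, chains, nchains, sizeof(ChainLite), cmp_chain);

    if (c != NULL)
    {
        if (is_warm)
            c->warm_ptrs++;
        else
            c->clear_ptrs++;
    }
}
```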
+
+/*
+ *	lazy_indexvac_phase2() -- run second pass of index vacuum
+ *
+ *		This has the right signature to be an IndexBulkDeleteCallback.
+ */
+static IndexBulkDeleteCallbackResult
+lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state)
+{
+	LVRelStats		*vacrelstats = (LVRelStats *) state;
+	LVWarmChain	*chain;
+
+	chain = (LVWarmChain *) bsearch((void *) itemptr,
+								(void *) vacrelstats->warm_chains,
+								vacrelstats->num_warm_chains,
+								sizeof(LVWarmChain),
+								vac_cmp_warm_chain);
+
+	if (chain != NULL && (chain->keep_warm_chain != 1))
+	{
+		/*
+		 * At no point can we have more than 1 WARM pointer to any chain,
+		 * nor more than 2 CLEAR pointers.
+		 */
+		Assert(chain->num_warm_pointers <= 1);
+		Assert(chain->num_clear_pointers <= 2);
+
+		if (chain->is_postwarm_chain == 1)
+		{
+			if (is_warm)
+			{
+				/*
+				 * A WARM pointer, pointing to a WARM chain.
+				 *
+				 * Clear the warm pointer (and delete the CLEAR pointer). We
+				 * may have already seen the CLEAR pointer in the scan and
+				 * deleted that or we may see it later in the scan. It doesn't
+				 * matter if we fail at any point because we won't clear up
+				 * WARM bits on the heap tuples until we have dealt with the
+				 * index pointers cleanly.
+				 */
+				return IBDCR_CLEAR_WARM;
+			}
+			else
+			{
+				/*
+				 * CLEAR pointer to a WARM chain.
+				 */
+				if (chain->num_warm_pointers > 0 &&
+					chain->num_clear_pointers > 0)
+				{
+					/*
+					 * If there exists a WARM pointer to the chain, we can
+					 * delete the CLEAR pointer and clear the WARM bits on the
+					 * heap tuples.
+					 *
+					 * It might look like paranoia that we also check for
+					 * num_clear_pointers to be more than 0; after all, we are
+					 * currently looking at a CLEAR pointer. This matters
+					 * because online cleanup of WARM/CLEAR pointers may
+					 * convert a WARM pointer into a CLEAR pointer, and that
+					 * change may be lost if the buffer is evicted before it
+					 * is written to disk. So we could count a pointer as a
+					 * WARM pointer during the first index scan and then see
+					 * it as a CLEAR pointer during the second index scan.
+					 * Checking for both WARM and CLEAR pointers ensures that
+					 * we don't remove a CLEAR pointer when no WARM pointer
+					 * exists.
+					 */
+					return IBDCR_DELETE;
+				}
+				else if (chain->num_clear_pointers == 1)
+				{
+					/*
+					 * If this is the only pointer to a WARM chain, we must
+					 * keep the CLEAR pointer.
+					 *
+					 * The presence of a WARM chain indicates that the WARM
+					 * update must have committed. But this index was probably
+					 * not updated during that update, and hence it contains
+					 * just the one original CLEAR pointer to the chain.
+					 * We should be able to clear the WARM bits on heap tuples
+					 * unless we later find another index which prevents the
+					 * cleanup.
+					 */
+					return IBDCR_KEEP;
+				}
+			}
+		}
+		else
+		{
+			/*
+			 * This is a CLEAR chain.
+			 */
+			if (is_warm)
+			{
+				/*
+				 * A WARM pointer to a CLEAR chain.
+				 *
+				 * This can happen when a WARM update is aborted. Later the HOT
+				 * chain is pruned leaving behind only CLEAR tuples in the
+				 * chain. But the WARM index pointer inserted in the index
+				 * remains and it must now be deleted before we clear WARM bits
+				 * from the heap tuple.
+				 */
+				return IBDCR_DELETE;
+			}
+
+			/*
+			 * CLEAR pointer to a CLEAR chain.
+			 *
+			 * If this is the only surviving CLEAR pointer, keep it and clear
+			 * the WARM bits from the heap tuples.
+			 */
+			if (chain->num_clear_pointers == 1)
+				return IBDCR_KEEP;
+
+			/*
+		 * If there is more than one CLEAR pointer to this chain, we could
+		 * apply the recheck logic, kill the redundant CLEAR pointer and
+		 * convert the chain. But that's not yet done.
+			 */
+		}
+
+		/*
+		 * For everything else, we must keep the WARM bits and also keep the
+		 * index pointers.
+		 */
+		chain->keep_warm_chain = 1;
+		return IBDCR_KEEP;
+	}
+	return IBDCR_KEEP;
+}
+
+/*
+ * Comparator routine for use with qsort() and bsearch(). Similar to
+ * vac_cmp_itemptr, but the right-hand argument is an LVWarmChain struct
+ * pointer.
+ */
+static int
+vac_cmp_warm_chain(const void *left, const void *right)
+{
+	BlockNumber lblk,
+				rblk;
+	OffsetNumber loff,
+				roff;
+
+	lblk = ItemPointerGetBlockNumber((ItemPointer) left);
+	rblk = ItemPointerGetBlockNumber(&((LVWarmChain *) right)->chain_tid);
+
+	if (lblk < rblk)
+		return -1;
+	if (lblk > rblk)
+		return 1;
+
+	loff = ItemPointerGetOffsetNumber((ItemPointer) left);
+	roff = ItemPointerGetOffsetNumber(&((LVWarmChain *) right)->chain_tid);
+
+	if (loff < roff)
+		return -1;
+	if (loff > roff)
+		return 1;
+
+	return 0;
 }
 
 /*
@@ -2170,6 +2768,18 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
 						break;
 					}
 
+					/*
+					 * If this or any other tuple in the chain was ever WARM
+					 * updated, there could be multiple index entries pointing
+					 * to the root of this chain. We can't do index-only scans
+					 * for such tuples without rechecking the index keys, so
+					 * mark the page as !all_visible.
+					 */
+					if (HeapTupleHeaderIsWarmUpdated(tuple.t_data))
+					{
+						all_visible = false;
+					}
+
 					/* Track newest xmin on page. */
 					if (TransactionIdFollows(xmin, *visibility_cutoff_xid))
 						*visibility_cutoff_xid = xmin;
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index c3f1873..2143978 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -270,6 +270,8 @@ ExecCloseIndices(ResultRelInfo *resultRelInfo)
 List *
 ExecInsertIndexTuples(TupleTableSlot *slot,
 					  ItemPointer tupleid,
+					  ItemPointer root_tid,
+					  Bitmapset *modified_attrs,
 					  EState *estate,
 					  bool noDupErr,
 					  bool *specConflict,
@@ -324,6 +326,17 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 		if (!indexInfo->ii_ReadyForInserts)
 			continue;
 
+		/*
+		 * If modified_attrs is set, we only insert index entries into those
+		 * indexes whose indexed columns have changed. All other indexes can
+		 * use their existing index pointers to look up the new tuple.
+		 */
+		if (modified_attrs)
+		{
+			if (!bms_overlap(modified_attrs, indexInfo->ii_indxattrs))
+				continue;
+		}
+
 		/* Check for partial index */
 		if (indexInfo->ii_Predicate != NIL)
 		{
@@ -387,10 +400,11 @@ ExecInsertIndexTuples(TupleTableSlot *slot,
 			index_insert(indexRelation, /* index relation */
 						 values,	/* array of index Datums */
 						 isnull,	/* null flags */
-						 tupleid,		/* tid of heap tuple */
+						 root_tid,		/* tid of heap or root tuple */
 						 heapRelation,	/* heap relation */
 						 checkUnique,	/* type of uniqueness check to do */
-						 indexInfo);	/* index AM may need this */
+						 indexInfo,	/* index AM may need this */
+						 (modified_attrs != NULL));	/* is it a WARM update? */
 
 		/*
 		 * If the index has an associated exclusion constraint, check that.
@@ -787,6 +801,9 @@ retry:
 		{
 			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
 				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
+			else
+				ItemPointerCopy(&tup->t_self, &ctid_wait);
+
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execReplication.c b/src/backend/executor/execReplication.c
index f20d728..747e4ce 100644
--- a/src/backend/executor/execReplication.c
+++ b/src/backend/executor/execReplication.c
@@ -399,6 +399,8 @@ ExecSimpleRelationInsert(EState *estate, TupleTableSlot *slot)
 
 		if (resultRelInfo->ri_NumIndices > 0)
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &(tuple->t_self),
+												   NULL,
 												   estate, false, NULL,
 												   NIL);
 
@@ -445,6 +447,8 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 	if (!skip_tuple)
 	{
 		List	   *recheckIndexes = NIL;
+		bool		warm_update;
+		Bitmapset  *modified_attrs;
 
 		/* Check the constraints of the tuple */
 		if (rel->rd_att->constr)
@@ -455,13 +459,35 @@ ExecSimpleRelationUpdate(EState *estate, EPQState *epqstate,
 
 		/* OK, update the tuple and index entries for it */
 		simple_heap_update(rel, &searchslot->tts_tuple->t_self,
-						   slot->tts_tuple);
+						   slot->tts_tuple, &modified_attrs, &warm_update);
 
 		if (resultRelInfo->ri_NumIndices > 0 &&
-			!HeapTupleIsHeapOnly(slot->tts_tuple))
+			(!HeapTupleIsHeapOnly(slot->tts_tuple) || warm_update))
+		{
+			ItemPointerData root_tid;
+
+			/*
+			 * If we did a WARM update then we must index the tuple using its
+			 * root line pointer and not the tuple TID itself.
+			 */
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self,
+						&root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
+
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL,
 												   NIL);
+		}
 
 		/* AFTER ROW UPDATE Triggers */
 		ExecARUpdateTriggers(estate, resultRelInfo,
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index d240f9c..0a8c2eb 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -39,6 +39,7 @@
 
 #include "access/relscan.h"
 #include "access/transam.h"
+#include "access/valid.h"
 #include "executor/execdebug.h"
 #include "executor/nodeBitmapHeapscan.h"
 #include "pgstat.h"
@@ -395,11 +396,21 @@ bitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)
 			OffsetNumber offnum = tbmres->offsets[curslot];
 			ItemPointerData tid;
 			HeapTupleData heapTuple;
+			HeapCheckWarmChainStatus status = 0;
 
 			ItemPointerSet(&tid, page, offnum);
 			if (heap_hot_search_buffer(&tid, scan->rs_rd, buffer, snapshot,
-									   &heapTuple, NULL, true))
+									   &heapTuple, NULL, true, &status))
+			{
 				scan->rs_vistuples[ntup++] = ItemPointerGetOffsetNumber(&tid);
+
+				/*
+				 * If the heap tuple needs a recheck because of a WARM update,
+				 * it's a lossy case.
+				 */
+				if (HCWC_IS_WARM_UPDATED(status))
+					tbmres->recheck = true;
+			}
 		}
 	}
 	else
diff --git a/src/backend/executor/nodeIndexscan.c b/src/backend/executor/nodeIndexscan.c
index 5afd02e..6e48c2e 100644
--- a/src/backend/executor/nodeIndexscan.c
+++ b/src/backend/executor/nodeIndexscan.c
@@ -142,8 +142,8 @@ IndexNext(IndexScanState *node)
 					   false);	/* don't pfree */
 
 		/*
-		 * If the index was lossy, we have to recheck the index quals using
-		 * the fetched tuple.
+		 * If the index was lossy or the tuple was WARM, we have to recheck
+		 * the index quals using the fetched tuple.
 		 */
 		if (scandesc->xs_recheck)
 		{
diff --git a/src/backend/executor/nodeModifyTable.c b/src/backend/executor/nodeModifyTable.c
index 0b524e0..2ad4a2c 100644
--- a/src/backend/executor/nodeModifyTable.c
+++ b/src/backend/executor/nodeModifyTable.c
@@ -513,6 +513,7 @@ ExecInsert(ModifyTableState *mtstate,
 
 			/* insert index entries for tuple */
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												 &(tuple->t_self), NULL,
 												 estate, true, &specConflict,
 												   arbiterIndexes);
 
@@ -559,6 +560,7 @@ ExecInsert(ModifyTableState *mtstate,
 			/* insert index entries for tuple */
 			if (resultRelInfo->ri_NumIndices > 0)
 				recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+													   &(tuple->t_self), NULL,
 													   estate, false, NULL,
 													   arbiterIndexes);
 		}
@@ -892,6 +894,9 @@ ExecUpdate(ItemPointer tupleid,
 	HTSU_Result result;
 	HeapUpdateFailureData hufd;
 	List	   *recheckIndexes = NIL;
+	Bitmapset  *modified_attrs = NULL;
+	ItemPointerData	root_tid;
+	bool		warm_update;
 
 	/*
 	 * abort the operation if not running transactions
@@ -1008,7 +1013,7 @@ lreplace:;
 							 estate->es_output_cid,
 							 estate->es_crosscheck_snapshot,
 							 true /* wait for commit */ ,
-							 &hufd, &lockmode);
+							 &hufd, &lockmode, &modified_attrs, &warm_update);
 		switch (result)
 		{
 			case HeapTupleSelfUpdated:
@@ -1095,10 +1100,28 @@ lreplace:;
 		 * the t_self field.
 		 *
 		 * If it's a HOT update, we mustn't insert new index entries.
+		 *
+		 * If it's a WARM update, then we must insert new entries with TID
+		 * pointing to the root of the WARM chain.
 		 */
-		if (resultRelInfo->ri_NumIndices > 0 && !HeapTupleIsHeapOnly(tuple))
+		if (resultRelInfo->ri_NumIndices > 0 &&
+			(!HeapTupleIsHeapOnly(tuple) || warm_update))
+		{
+			if (warm_update)
+				ItemPointerSet(&root_tid,
+						ItemPointerGetBlockNumber(&(tuple->t_self)),
+						HeapTupleHeaderGetRootOffset(tuple->t_data));
+			else
+			{
+				ItemPointerCopy(&tuple->t_self, &root_tid);
+				bms_free(modified_attrs);
+				modified_attrs = NULL;
+			}
 			recheckIndexes = ExecInsertIndexTuples(slot, &(tuple->t_self),
+												   &root_tid,
+												   modified_attrs,
 												   estate, false, NULL, NIL);
+		}
 	}
 
 	if (canSetTag)
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index 56a8bf2..52fe4ba 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -1888,7 +1888,7 @@ pgstat_count_heap_insert(Relation rel, PgStat_Counter n)
  * pgstat_count_heap_update - count a tuple update
  */
 void
-pgstat_count_heap_update(Relation rel, bool hot)
+pgstat_count_heap_update(Relation rel, bool hot, bool warm)
 {
 	PgStat_TableStatus *pgstat_info = rel->pgstat_info;
 
@@ -1906,6 +1906,8 @@ pgstat_count_heap_update(Relation rel, bool hot)
 		/* t_tuples_hot_updated is nontransactional, so just advance it */
 		if (hot)
 			pgstat_info->t_counts.t_tuples_hot_updated++;
+		else if (warm)
+			pgstat_info->t_counts.t_tuples_warm_updated++;
 	}
 }
 
@@ -4521,6 +4523,7 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->tuples_updated = 0;
 		result->tuples_deleted = 0;
 		result->tuples_hot_updated = 0;
+		result->tuples_warm_updated = 0;
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
 		result->changes_since_analyze = 0;
@@ -5630,6 +5633,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated = tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted = tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated = tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated = tabmsg->t_counts.t_tuples_warm_updated;
 			tabentry->n_live_tuples = tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples = tabmsg->t_counts.t_delta_dead_tuples;
 			tabentry->changes_since_analyze = tabmsg->t_counts.t_changed_tuples;
@@ -5657,6 +5661,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_updated += tabmsg->t_counts.t_tuples_updated;
 			tabentry->tuples_deleted += tabmsg->t_counts.t_tuples_deleted;
 			tabentry->tuples_hot_updated += tabmsg->t_counts.t_tuples_hot_updated;
+			tabentry->tuples_warm_updated += tabmsg->t_counts.t_tuples_warm_updated;
 			/* If table was truncated, first reset the live/dead counters */
 			if (tabmsg->t_counts.t_truncated)
 			{
diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c
index 5c13d26..7a9b48a 100644
--- a/src/backend/replication/logical/decode.c
+++ b/src/backend/replication/logical/decode.c
@@ -347,7 +347,7 @@ DecodeStandbyOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 static void
 DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 {
-	uint8		info = XLogRecGetInfo(buf->record) & XLOG_HEAP_OPMASK;
+	uint8		info = XLogRecGetInfo(buf->record) & XLOG_HEAP2_OPMASK;
 	TransactionId xid = XLogRecGetXid(buf->record);
 	SnapBuild  *builder = ctx->snapshot_builder;
 
@@ -359,10 +359,6 @@ DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 
 	switch (info)
 	{
-		case XLOG_HEAP2_MULTI_INSERT:
-			if (SnapBuildProcessChange(builder, xid, buf->origptr))
-				DecodeMultiInsert(ctx, buf);
-			break;
 		case XLOG_HEAP2_NEW_CID:
 			{
 				xl_heap_new_cid *xlrec;
@@ -390,6 +386,7 @@ DecodeHeap2Op(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 		case XLOG_HEAP2_CLEANUP_INFO:
 		case XLOG_HEAP2_VISIBLE:
 		case XLOG_HEAP2_LOCK_UPDATED:
+		case XLOG_HEAP2_WARMCLEAR:
 			break;
 		default:
 			elog(ERROR, "unexpected RM_HEAP2_ID record type: %u", info);
@@ -418,6 +415,10 @@ DecodeHeapOp(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 			if (SnapBuildProcessChange(builder, xid, buf->origptr))
 				DecodeInsert(ctx, buf);
 			break;
+		case XLOG_HEAP_MULTI_INSERT:
+			if (SnapBuildProcessChange(builder, xid, buf->origptr))
+				DecodeMultiInsert(ctx, buf);
+			break;
 
 			/*
 			 * Treat HOT update as normal updates. There is no useful
@@ -809,7 +810,7 @@ DecodeDelete(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
 }
 
 /*
- * Decode XLOG_HEAP2_MULTI_INSERT_insert record into multiple tuplebufs.
+ * Decode XLOG_HEAP_MULTI_INSERT record into multiple tuplebufs.
  *
  * Currently MULTI_INSERT will always contain the full tuples.
  */
diff --git a/src/backend/storage/page/bufpage.c b/src/backend/storage/page/bufpage.c
index fdf045a..8d23e92 100644
--- a/src/backend/storage/page/bufpage.c
+++ b/src/backend/storage/page/bufpage.c
@@ -1151,6 +1151,29 @@ PageIndexTupleOverwrite(Page page, OffsetNumber offnum,
 	return true;
 }
 
+/*
+ * PageIndexClearWarmTuples
+ *
+ * Clear the given WARM pointers by resetting the flags stored in their TID
+ * fields. We assume the TID flags carry nothing other than the WARM
+ * information, so clearing all flag bits is safe. If that changes, this
+ * routine must change as well.
+ */
+void
+PageIndexClearWarmTuples(Page page, OffsetNumber *clearitemnos,
+						 uint16 nclearitems)
+{
+	int			i;
+	ItemId		itemid;
+	IndexTuple	itup;
+
+	for (i = 0; i < nclearitems; i++)
+	{
+		itemid = PageGetItemId(page, clearitemnos[i]);
+		itup = (IndexTuple) PageGetItem(page, itemid);
+		ItemPointerClearFlags(&itup->t_tid);
+	}
+}
 
 /*
  * Set checksum for a page in shared buffers.
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index e0cae1b..227a87d 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -147,6 +147,22 @@ pg_stat_get_tuples_hot_updated(PG_FUNCTION_ARGS)
 
 
 Datum
+pg_stat_get_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_StatTabEntry *tabentry;
+
+	if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+
+Datum
 pg_stat_get_live_tuples(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
@@ -1674,6 +1690,21 @@ pg_stat_get_xact_tuples_hot_updated(PG_FUNCTION_ARGS)
 }
 
 Datum
+pg_stat_get_xact_tuples_warm_updated(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_TableStatus *tabentry;
+
+	if ((tabentry = find_tabstat_entry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->t_counts.t_tuples_warm_updated);
+
+	PG_RETURN_INT64(result);
+}
+
+Datum
 pg_stat_get_xact_blocks_fetched(PG_FUNCTION_ARGS)
 {
 	Oid			relid = PG_GETARG_OID(0);
diff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c
index bc22098..7bf6c38 100644
--- a/src/backend/utils/cache/relcache.c
+++ b/src/backend/utils/cache/relcache.c
@@ -2339,6 +2339,7 @@ RelationDestroyRelation(Relation relation, bool remember_tupdesc)
 	list_free_deep(relation->rd_fkeylist);
 	list_free(relation->rd_indexlist);
 	bms_free(relation->rd_indexattr);
+	bms_free(relation->rd_exprindexattr);
 	bms_free(relation->rd_keyattr);
 	bms_free(relation->rd_pkattr);
 	bms_free(relation->rd_idattr);
@@ -4353,6 +4354,13 @@ RelationGetIndexList(Relation relation)
 		return list_copy(relation->rd_indexlist);
 
 	/*
+	 * If the index list was invalidated, we had better also invalidate the
+	 * index attribute list (which should automatically invalidate dependent
+	 * bitmaps such as the primary key and replica identity attributes).
+	 */
+	relation->rd_indexattr = NULL;
+
+	/*
 	 * We build the list we intend to return (in the caller's context) while
 	 * doing the scan.  After successfully completing the scan, we copy that
 	 * list into the relcache entry.  This avoids cache-context memory leakage
@@ -4836,15 +4844,20 @@ Bitmapset *
 RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 {
 	Bitmapset  *indexattrs;		/* indexed columns */
+	Bitmapset  *exprindexattrs;	/* indexed columns in expression/predicate
+									 indexes */
 	Bitmapset  *uindexattrs;	/* columns in unique indexes */
 	Bitmapset  *pkindexattrs;	/* columns in the primary index */
 	Bitmapset  *idindexattrs;	/* columns in the replica identity */
+	Bitmapset  *indxnotreadyattrs;	/* columns in not ready indexes */
 	List	   *indexoidlist;
 	List	   *newindexoidlist;
+	List	   *indexattrsList;
 	Oid			relpkindex;
 	Oid			relreplindex;
 	ListCell   *l;
 	MemoryContext oldcxt;
+	bool		supportswarm = true;	/* true if the table can be WARM updated */
 
 	/* Quick exit if we already computed the result. */
 	if (relation->rd_indexattr != NULL)
@@ -4859,6 +4872,10 @@ RelationGetIndexAttrBitmap(Relation relation, IndexAttrBitmapKind attrKind)
 				return bms_copy(relation->rd_pkattr);
 			case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 				return bms_copy(relation->rd_idattr);
+			case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+				return bms_copy(relation->rd_exprindexattr);
+			case INDEX_ATTR_BITMAP_NOTREADY:
+				return bms_copy(relation->rd_indxnotreadyattr);
 			default:
 				elog(ERROR, "unknown attrKind %u", attrKind);
 		}
@@ -4899,9 +4916,12 @@ restart:
 	 * won't be returned at all by RelationGetIndexList.
 	 */
 	indexattrs = NULL;
+	exprindexattrs = NULL;
 	uindexattrs = NULL;
 	pkindexattrs = NULL;
 	idindexattrs = NULL;
+	indxnotreadyattrs = NULL;
+	indexattrsList = NIL;
 	foreach(l, indexoidlist)
 	{
 		Oid			indexOid = lfirst_oid(l);
@@ -4911,6 +4931,7 @@ restart:
 		bool		isKey;		/* candidate key */
 		bool		isPK;		/* primary key */
 		bool		isIDKey;	/* replica identity index */
+		Bitmapset	*thisindexattrs = NULL;
 
 		indexDesc = index_open(indexOid, AccessShareLock);
 
@@ -4935,9 +4956,16 @@ restart:
 
 			if (attrnum != 0)
 			{
+				thisindexattrs = bms_add_member(thisindexattrs,
+							   attrnum - FirstLowInvalidHeapAttributeNumber);
+
 				indexattrs = bms_add_member(indexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
 
+				if (!indexInfo->ii_ReadyForInserts)
+					indxnotreadyattrs = bms_add_member(indxnotreadyattrs,
+							   attrnum - FirstLowInvalidHeapAttributeNumber);
+
 				if (isKey)
 					uindexattrs = bms_add_member(uindexattrs,
 							   attrnum - FirstLowInvalidHeapAttributeNumber);
@@ -4953,10 +4981,31 @@ restart:
 		}
 
 		/* Collect all attributes used in expressions, too */
-		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Expressions, 1, &exprindexattrs);
 
 		/* Collect all attributes in the index predicate, too */
-		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &indexattrs);
+		pull_varattnos((Node *) indexInfo->ii_Predicate, 1, &exprindexattrs);
+
+		/*
+		 * indexattrs should include attributes referenced in index expressions
+		 * and predicates too.
+		 */
+		indexattrs = bms_add_members(indexattrs, exprindexattrs);
+		thisindexattrs = bms_add_members(thisindexattrs, exprindexattrs);
+
+		if (!indexInfo->ii_ReadyForInserts)
+			indxnotreadyattrs = bms_add_members(indxnotreadyattrs,
+					exprindexattrs);
+
+		/*
+		 * Check whether the index has an amrecheck method defined. If it
+		 * does not, the index does not support WARM updates, so disable
+		 * WARM updates on the table entirely.
+		 */
+		if (!indexDesc->rd_amroutine->amrecheck)
+			supportswarm = false;
+
+		indexattrsList = lappend(indexattrsList, thisindexattrs);
 
 		index_close(indexDesc, AccessShareLock);
 	}
@@ -4985,19 +5034,28 @@ restart:
 		bms_free(pkindexattrs);
 		bms_free(idindexattrs);
 		bms_free(indexattrs);
-
+		list_free_deep(indexattrsList);
 		goto restart;
 	}
 
+	/* Remember if the table can do WARM updates */
+	relation->rd_supportswarm = (RelationWarmUpdatesEnabled(relation) && supportswarm);
+
 	/* Don't leak the old values of these bitmaps, if any */
 	bms_free(relation->rd_indexattr);
 	relation->rd_indexattr = NULL;
+	bms_free(relation->rd_exprindexattr);
+	relation->rd_exprindexattr = NULL;
 	bms_free(relation->rd_keyattr);
 	relation->rd_keyattr = NULL;
 	bms_free(relation->rd_pkattr);
 	relation->rd_pkattr = NULL;
 	bms_free(relation->rd_idattr);
 	relation->rd_idattr = NULL;
+	bms_free(relation->rd_indxnotreadyattr);
+	relation->rd_indxnotreadyattr = NULL;
+	list_free_deep(relation->rd_indexattrsList);
+	relation->rd_indexattrsList = NIL;
 
 	/*
 	 * Now save copies of the bitmaps in the relcache entry.  We intentionally
@@ -5010,7 +5068,21 @@ restart:
 	relation->rd_keyattr = bms_copy(uindexattrs);
 	relation->rd_pkattr = bms_copy(pkindexattrs);
 	relation->rd_idattr = bms_copy(idindexattrs);
-	relation->rd_indexattr = bms_copy(indexattrs);
+	relation->rd_exprindexattr = bms_copy(exprindexattrs);
+	relation->rd_indexattr = bms_copy(bms_union(indexattrs, exprindexattrs));
+	relation->rd_indxnotreadyattr = bms_copy(indxnotreadyattrs);
+
+	/*
+	 * Create a deep copy of the list, copying each bitmap into the
+	 * CurrentMemoryContext.
+	 */
+	foreach(l, indexattrsList)
+	{
+		Bitmapset *b = (Bitmapset *) lfirst(l);
+		relation->rd_indexattrsList = lappend(relation->rd_indexattrsList,
+				bms_copy(b));
+	}
+
 	MemoryContextSwitchTo(oldcxt);
 
 	/* We return our original working copy for caller to play with */
@@ -5024,6 +5096,10 @@ restart:
 			return bms_copy(relation->rd_pkattr);
 		case INDEX_ATTR_BITMAP_IDENTITY_KEY:
 			return idindexattrs;
+		case INDEX_ATTR_BITMAP_EXPR_PREDICATE:
+			return exprindexattrs;
+		case INDEX_ATTR_BITMAP_NOTREADY:
+			return indxnotreadyattrs;
 		default:
 			elog(ERROR, "unknown attrKind %u", attrKind);
 			return NULL;
@@ -5031,6 +5107,34 @@ restart:
 }
 
 /*
+ * Get a list of bitmaps, where each bitmap contains the set of attributes
+ * used by one index.
+ *
+ * The actual information is computed in RelationGetIndexAttrBitmap, but
+ * since the only current consumer of this function calls it immediately
+ * after calling RelationGetIndexAttrBitmap, we should be fine. We don't
+ * expect any relcache invalidation to occur between these two calls, and
+ * hence don't expect the cached information to change underneath us.
+ */
+List *
+RelationGetIndexAttrList(Relation relation)
+{
+	ListCell   *l;
+	List	   *indexattrsList = NIL;
+
+	/*
+	 * Create a deep copy of the list by copying bitmaps in the
+	 * CurrentMemoryContext.
+	 */
+	foreach(l, relation->rd_indexattrsList)
+	{
+		Bitmapset *b = (Bitmapset *) lfirst(l);
+		indexattrsList = lappend(indexattrsList, bms_copy(b));
+	}
+	return indexattrsList;
+}
+
+/*
  * RelationGetExclusionInfo -- get info about index's exclusion constraint
  *
  * This should be called only for an index that is known to have an
@@ -5636,6 +5740,7 @@ load_relcache_init_file(bool shared)
 		rel->rd_keyattr = NULL;
 		rel->rd_pkattr = NULL;
 		rel->rd_idattr = NULL;
+		rel->rd_indxnotreadyattr = NULL;
 		rel->rd_pubactions = NULL;
 		rel->rd_statvalid = false;
 		rel->rd_statlist = NIL;
diff --git a/src/backend/utils/time/combocid.c b/src/backend/utils/time/combocid.c
index baff998..6a2e2f2 100644
--- a/src/backend/utils/time/combocid.c
+++ b/src/backend/utils/time/combocid.c
@@ -106,7 +106,7 @@ HeapTupleHeaderGetCmin(HeapTupleHeader tup)
 {
 	CommandId	cid = HeapTupleHeaderGetRawCommandId(tup);
 
-	Assert(!(tup->t_infomask & HEAP_MOVED));
+	Assert(!(HeapTupleHeaderIsMoved(tup)));
 	Assert(TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXmin(tup)));
 
 	if (tup->t_infomask & HEAP_COMBOCID)
@@ -120,7 +120,7 @@ HeapTupleHeaderGetCmax(HeapTupleHeader tup)
 {
 	CommandId	cid = HeapTupleHeaderGetRawCommandId(tup);
 
-	Assert(!(tup->t_infomask & HEAP_MOVED));
+	Assert(!(HeapTupleHeaderIsMoved(tup)));
 
 	/*
 	 * Because GetUpdateXid() performs memory allocations if xmax is a
diff --git a/src/backend/utils/time/tqual.c b/src/backend/utils/time/tqual.c
index 519f3b6..e54d0df 100644
--- a/src/backend/utils/time/tqual.c
+++ b/src/backend/utils/time/tqual.c
@@ -186,7 +186,7 @@ HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer)
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -205,7 +205,7 @@ HeapTupleSatisfiesSelf(HeapTuple htup, Snapshot snapshot, Buffer buffer)
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -377,7 +377,7 @@ HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -396,7 +396,7 @@ HeapTupleSatisfiesToast(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -471,7 +471,7 @@ HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid,
 			return HeapTupleInvisible;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -490,7 +490,7 @@ HeapTupleSatisfiesUpdate(HeapTuple htup, CommandId curcid,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -753,7 +753,7 @@ HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -772,7 +772,7 @@ HeapTupleSatisfiesDirty(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -974,7 +974,7 @@ HeapTupleSatisfiesMVCC(HeapTuple htup, Snapshot snapshot,
 			return false;
 
 		/* Used by pre-9.0 binary upgrades */
-		if (tuple->t_infomask & HEAP_MOVED_OFF)
+		if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -993,7 +993,7 @@ HeapTupleSatisfiesMVCC(HeapTuple htup, Snapshot snapshot,
 			}
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -1180,7 +1180,7 @@ HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin,
 		if (HeapTupleHeaderXminInvalid(tuple))
 			return HEAPTUPLE_DEAD;
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_OFF)
+		else if (HeapTupleHeaderIsMovedOff(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
@@ -1198,7 +1198,7 @@ HeapTupleSatisfiesVacuum(HeapTuple htup, TransactionId OldestXmin,
 						InvalidTransactionId);
 		}
 		/* Used by pre-9.0 binary upgrades */
-		else if (tuple->t_infomask & HEAP_MOVED_IN)
+		else if (HeapTupleHeaderIsMovedIn(tuple))
 		{
 			TransactionId xvac = HeapTupleHeaderGetXvac(tuple);
 
diff --git a/src/include/access/amapi.h b/src/include/access/amapi.h
index f919cf8..b33bcda 100644
--- a/src/include/access/amapi.h
+++ b/src/include/access/amapi.h
@@ -13,6 +13,7 @@
 #define AMAPI_H
 
 #include "access/genam.h"
+#include "access/itup.h"
 
 /*
  * We don't wish to include planner header files here, since most of an index
@@ -74,6 +75,14 @@ typedef bool (*aminsert_function) (Relation indexRelation,
 											   Relation heapRelation,
 											   IndexUniqueCheck checkUnique,
 											   struct IndexInfo *indexInfo);
+/* insert this WARM tuple */
+typedef bool (*amwarminsert_function) (Relation indexRelation,
+											   Datum *values,
+											   bool *isnull,
+											   ItemPointer heap_tid,
+											   Relation heapRelation,
+											   IndexUniqueCheck checkUnique,
+											   struct IndexInfo *indexInfo);
 
 /* bulk delete */
 typedef IndexBulkDeleteResult *(*ambulkdelete_function) (IndexVacuumInfo *info,
@@ -152,6 +161,14 @@ typedef void (*aminitparallelscan_function) (void *target);
 /* (re)start parallel index scan */
 typedef void (*amparallelrescan_function) (IndexScanDesc scan);
 
+/* recheck index tuple and heap tuple match */
+typedef bool (*amrecheck_function) (Relation indexRel,
+		struct IndexInfo *indexInfo, IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+
+/* return true if the given index tuple is a WARM tuple */
+typedef bool (*amiswarm_function) (Relation indexRel, IndexTuple indexTuple);
+
 /*
  * API struct for an index AM.  Note this must be stored in a single palloc'd
  * chunk of memory.
@@ -198,6 +215,7 @@ typedef struct IndexAmRoutine
 	ambuild_function ambuild;
 	ambuildempty_function ambuildempty;
 	aminsert_function aminsert;
+	amwarminsert_function amwarminsert;
 	ambulkdelete_function ambulkdelete;
 	amvacuumcleanup_function amvacuumcleanup;
 	amcanreturn_function amcanreturn;	/* can be NULL */
@@ -217,6 +235,10 @@ typedef struct IndexAmRoutine
 	amestimateparallelscan_function amestimateparallelscan;		/* can be NULL */
 	aminitparallelscan_function aminitparallelscan;		/* can be NULL */
 	amparallelrescan_function amparallelrescan; /* can be NULL */
+
+	/* interface functions to support WARM */
+	amrecheck_function amrecheck;		/* can be NULL */
+	amiswarm_function  amiswarm;
 } IndexAmRoutine;
 
 
diff --git a/src/include/access/genam.h b/src/include/access/genam.h
index f467b18..965be45 100644
--- a/src/include/access/genam.h
+++ b/src/include/access/genam.h
@@ -75,12 +75,29 @@ typedef struct IndexBulkDeleteResult
 	bool		estimated_count;	/* num_index_tuples is an estimate */
 	double		num_index_tuples;		/* tuples remaining */
 	double		tuples_removed; /* # removed during vacuum operation */
+	double		num_warm_pointers;	/* # WARM pointers found */
+	double		num_clear_pointers;	/* # CLEAR pointers found */
+	double		pointers_cleared;	/* # WARM pointers cleared */
+	double		warm_pointers_removed;	/* # WARM pointers removed */
+	double		clear_pointers_removed;	/* # CLEAR pointers removed */
 	BlockNumber pages_deleted;	/* # unused pages in index */
 	BlockNumber pages_free;		/* # pages available for reuse */
 } IndexBulkDeleteResult;
 
+/*
+ * IndexBulkDeleteCallback should return one of the following
+ */
+typedef enum IndexBulkDeleteCallbackResult
+{
+	IBDCR_KEEP,			/* index tuple should be preserved */
+	IBDCR_DELETE,		/* index tuple should be deleted */
+	IBDCR_CLEAR_WARM	/* index tuple should be cleared of WARM bit */
+} IndexBulkDeleteCallbackResult;
+
 /* Typedef for callback function to determine if a tuple is bulk-deletable */
-typedef bool (*IndexBulkDeleteCallback) (ItemPointer itemptr, void *state);
+typedef IndexBulkDeleteCallbackResult (*IndexBulkDeleteCallback) (
+										 ItemPointer itemptr,
+										 bool is_warm, void *state);
 
 /* struct definitions appear in relscan.h */
 typedef struct IndexScanDescData *IndexScanDesc;
@@ -135,7 +152,8 @@ extern bool index_insert(Relation indexRelation,
 			 ItemPointer heap_t_ctid,
 			 Relation heapRelation,
 			 IndexUniqueCheck checkUnique,
-			 struct IndexInfo *indexInfo);
+			 struct IndexInfo *indexInfo,
+			 bool warm_update);
 
 extern IndexScanDesc index_beginscan(Relation heapRelation,
 				Relation indexRelation,
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 5540e12..1d79467 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -72,6 +72,20 @@ typedef struct HeapUpdateFailureData
 	CommandId	cmax;
 } HeapUpdateFailureData;
 
+typedef int HeapCheckWarmChainStatus;
+
+#define HCWC_CLEAR_TUPLE		0x0001
+#define	HCWC_WARM_TUPLE			0x0002
+#define HCWC_WARM_UPDATED_TUPLE	0x0004
+
+#define HCWC_IS_MIXED(status) \
+	(((status) & (HCWC_CLEAR_TUPLE | HCWC_WARM_TUPLE)) != 0)
+#define HCWC_IS_ALL_WARM(status) \
+	(((status) & HCWC_CLEAR_TUPLE) == 0)
+#define HCWC_IS_ALL_CLEAR(status) \
+	(((status) & HCWC_WARM_TUPLE) == 0)
+#define HCWC_IS_WARM_UPDATED(status) \
+	(((status) & HCWC_WARM_UPDATED_TUPLE) != 0)
 
 /* ----------------
  *		function prototypes for heap access method
@@ -137,9 +151,11 @@ extern bool heap_fetch(Relation relation, Snapshot snapshot,
 		   Relation stats_relation);
 extern bool heap_hot_search_buffer(ItemPointer tid, Relation relation,
 					   Buffer buffer, Snapshot snapshot, HeapTuple heapTuple,
-					   bool *all_dead, bool first_call);
+					   bool *all_dead, bool first_call,
+					   HeapCheckWarmChainStatus *status);
 extern bool heap_hot_search(ItemPointer tid, Relation relation,
-				Snapshot snapshot, bool *all_dead);
+				Snapshot snapshot, bool *all_dead,
+				bool *recheck, Buffer *buffer, HeapTuple heapTuple);
 
 extern void heap_get_latest_tid(Relation relation, Snapshot snapshot,
 					ItemPointer tid);
@@ -161,7 +177,8 @@ extern void heap_abort_speculative(Relation relation, HeapTuple tuple);
 extern HTSU_Result heap_update(Relation relation, ItemPointer otid,
 			HeapTuple newtup,
 			CommandId cid, Snapshot crosscheck, bool wait,
-			HeapUpdateFailureData *hufd, LockTupleMode *lockmode);
+			HeapUpdateFailureData *hufd, LockTupleMode *lockmode,
+			Bitmapset **modified_attrsp, bool *warm_update);
 extern HTSU_Result heap_lock_tuple(Relation relation, HeapTuple tuple,
 				CommandId cid, LockTupleMode mode, LockWaitPolicy wait_policy,
 				bool follow_update,
@@ -176,10 +193,16 @@ extern bool heap_tuple_needs_eventual_freeze(HeapTupleHeader tuple);
 extern Oid	simple_heap_insert(Relation relation, HeapTuple tup);
 extern void simple_heap_delete(Relation relation, ItemPointer tid);
 extern void simple_heap_update(Relation relation, ItemPointer otid,
-				   HeapTuple tup);
+				   HeapTuple tup,
+				   Bitmapset **modified_attrs,
+				   bool *warm_update);
 
 extern void heap_sync(Relation relation);
 extern void heap_update_snapshot(HeapScanDesc scan, Snapshot snapshot);
+extern HeapCheckWarmChainStatus heap_check_warm_chain(Page dp,
+				   ItemPointer tid, bool stop_at_warm);
+extern int heap_clear_warm_chain(Page dp, ItemPointer tid,
+				   OffsetNumber *cleared_offnums);
 
 /* in heap/pruneheap.c */
 extern void heap_page_prune_opt(Relation relation, Buffer buffer);
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index e6019d5..66fd0ea 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -32,7 +32,7 @@
 #define XLOG_HEAP_INSERT		0x00
 #define XLOG_HEAP_DELETE		0x10
 #define XLOG_HEAP_UPDATE		0x20
-/* 0x030 is free, was XLOG_HEAP_MOVE */
+#define XLOG_HEAP_MULTI_INSERT	0x30
 #define XLOG_HEAP_HOT_UPDATE	0x40
 #define XLOG_HEAP_CONFIRM		0x50
 #define XLOG_HEAP_LOCK			0x60
@@ -47,18 +47,23 @@
 /*
  * We ran out of opcodes, so heapam.c now has a second RmgrId.  These opcodes
  * are associated with RM_HEAP2_ID, but are not logically different from
- * the ones above associated with RM_HEAP_ID.  XLOG_HEAP_OPMASK applies to
- * these, too.
+ * the ones above associated with RM_HEAP_ID.
+ *
+ * In PG 10, we moved XLOG_HEAP2_MULTI_INSERT to RM_HEAP_ID. That frees up the
+ * 0x80 bit in RM_HEAP2_ID, potentially making room for another 8 opcodes in
+ * RM_HEAP2_ID.
  */
 #define XLOG_HEAP2_REWRITE		0x00
 #define XLOG_HEAP2_CLEAN		0x10
 #define XLOG_HEAP2_FREEZE_PAGE	0x20
 #define XLOG_HEAP2_CLEANUP_INFO 0x30
 #define XLOG_HEAP2_VISIBLE		0x40
-#define XLOG_HEAP2_MULTI_INSERT 0x50
+#define XLOG_HEAP2_WARMCLEAR	0x50
 #define XLOG_HEAP2_LOCK_UPDATED 0x60
 #define XLOG_HEAP2_NEW_CID		0x70
 
+#define XLOG_HEAP2_OPMASK		0x70
+
 /*
  * xl_heap_insert/xl_heap_multi_insert flag values, 8 bits are available.
  */
@@ -80,6 +85,7 @@
 #define XLH_UPDATE_CONTAINS_NEW_TUPLE			(1<<4)
 #define XLH_UPDATE_PREFIX_FROM_OLD				(1<<5)
 #define XLH_UPDATE_SUFFIX_FROM_OLD				(1<<6)
+#define XLH_UPDATE_WARM_UPDATE					(1<<7)
 
 /* convenience macro for checking whether any form of old tuple was logged */
 #define XLH_UPDATE_CONTAINS_OLD						\
@@ -225,6 +231,14 @@ typedef struct xl_heap_clean
 
 #define SizeOfHeapClean (offsetof(xl_heap_clean, ndead) + sizeof(uint16))
 
+typedef struct xl_heap_warmclear
+{
+	uint16		ncleared;
+	/* OFFSET NUMBERS are in the block reference 0 */
+} xl_heap_warmclear;
+
+#define SizeOfHeapWarmClear (offsetof(xl_heap_warmclear, ncleared) + sizeof(uint16))
+
 /*
  * Cleanup_info is required in some cases during a lazy VACUUM.
  * Used for reporting the results of HeapTupleHeaderAdvanceLatestRemovedXid()
@@ -388,6 +402,8 @@ extern XLogRecPtr log_heap_clean(Relation reln, Buffer buffer,
 			   OffsetNumber *nowdead, int ndead,
 			   OffsetNumber *nowunused, int nunused,
 			   TransactionId latestRemovedXid);
+extern XLogRecPtr log_heap_warmclear(Relation reln, Buffer buffer,
+			   OffsetNumber *cleared, int ncleared);
 extern XLogRecPtr log_heap_freeze(Relation reln, Buffer buffer,
 				TransactionId cutoff_xid, xl_heap_freeze_tuple *tuples,
 				int ntuples);
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 4d614b7..bcefba6 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -201,6 +201,21 @@ struct HeapTupleHeaderData
 										 * upgrade support */
 #define HEAP_MOVED (HEAP_MOVED_OFF | HEAP_MOVED_IN)
 
+/*
+ * A WARM chain usually consists of two parts, each of which is a HOT chain in
+ * itself, i.e. all indexed columns have the same value within each part; a
+ * WARM update separates the two parts. We need a mechanism to identify which
+ * part a tuple belongs to. Checking HeapTupleHeaderIsWarmUpdated() alone is
+ * not enough because, during a WARM update, both the old and the new tuple
+ * are marked as WARM-updated.
+ *
+ * We need another infomask bit for this, so we reuse the infomask bit that
+ * was earlier used by old-style VACUUM FULL. This is safe because the
+ * HEAP_WARM_TUPLE flag is always set along with HEAP_WARM_UPDATED. So if both
+ * HEAP_WARM_TUPLE and HEAP_WARM_UPDATED are set, we know that the tuple
+ * belongs to the second part of the WARM chain.
+ */
+#define HEAP_WARM_TUPLE			0x4000
 #define HEAP_XACT_MASK			0xFFF0	/* visibility-related bits */
 
 /*
@@ -260,7 +275,11 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x0800 are available */
+#define HEAP_WARM_UPDATED		0x0800	/*
+										 * This or a prior version of this
+										 * tuple in the current HOT chain was
+										 * once WARM updated
+										 */
 #define HEAP_LATEST_TUPLE		0x1000	/*
 										 * This is the last tuple in chain and
 										 * ip_posid points to the root line
@@ -271,7 +290,7 @@ struct HeapTupleHeaderData
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF800	/* visibility-related bits */
 
 
 /*
@@ -396,7 +415,7 @@ struct HeapTupleHeaderData
 /* SetCmin is reasonably simple since we never need a combo CID */
 #define HeapTupleHeaderSetCmin(tup, cid) \
 do { \
-	Assert(!((tup)->t_infomask & HEAP_MOVED)); \
+	Assert(!HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_cid = (cid); \
 	(tup)->t_infomask &= ~HEAP_COMBOCID; \
 } while (0)
@@ -404,7 +423,7 @@ do { \
 /* SetCmax must be used after HeapTupleHeaderAdjustCmax; see combocid.c */
 #define HeapTupleHeaderSetCmax(tup, cid, iscombo) \
 do { \
-	Assert(!((tup)->t_infomask & HEAP_MOVED)); \
+	Assert(!HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_cid = (cid); \
 	if (iscombo) \
 		(tup)->t_infomask |= HEAP_COMBOCID; \
@@ -414,7 +433,7 @@ do { \
 
 #define HeapTupleHeaderGetXvac(tup) \
 ( \
-	((tup)->t_infomask & HEAP_MOVED) ? \
+	HeapTupleHeaderIsMoved(tup) ? \
 		(tup)->t_choice.t_heap.t_field3.t_xvac \
 	: \
 		InvalidTransactionId \
@@ -422,7 +441,7 @@ do { \
 
 #define HeapTupleHeaderSetXvac(tup, xid) \
 do { \
-	Assert((tup)->t_infomask & HEAP_MOVED); \
+	Assert(HeapTupleHeaderIsMoved(tup)); \
 	(tup)->t_choice.t_heap.t_field3.t_xvac = (xid); \
 } while (0)
 
@@ -510,6 +529,21 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+#define HeapTupleHeaderSetWarmUpdated(tup) \
+do { \
+	(tup)->t_infomask2 |= HEAP_WARM_UPDATED; \
+} while (0)
+
+#define HeapTupleHeaderClearWarmUpdated(tup) \
+do { \
+	(tup)->t_infomask2 &= ~HEAP_WARM_UPDATED; \
+} while (0)
+
+#define HeapTupleHeaderIsWarmUpdated(tup) \
+( \
+  ((tup)->t_infomask2 & HEAP_WARM_UPDATED) != 0 \
+)
+
 /*
  * Mark this as the last tuple in the HOT chain. Before PG v10 we used to store
  * the TID of the tuple itself in t_ctid field to mark the end of the chain.
@@ -635,6 +669,58 @@ do { \
 )
 
 /*
+ * Macros to check if a tuple was moved off/in by old-style VACUUM FULL from
+ * the pre-9.0 era. Such a tuple must not have the HEAP_WARM_TUPLE flag set.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderIsMovedOff(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED_OFF) \
+)
+
+#define HeapTupleHeaderIsMovedIn(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED_IN) \
+)
+
+#define HeapTupleHeaderIsMoved(tuple) \
+( \
+	!HeapTupleHeaderIsWarmUpdated((tuple)) && \
+	((tuple)->t_infomask & HEAP_MOVED) \
+)
+
+/*
+ * Check if tuple belongs to the second part of the WARM chain.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderIsWarm(tuple) \
+( \
+	HeapTupleHeaderIsWarmUpdated(tuple) && \
+	(((tuple)->t_infomask & HEAP_WARM_TUPLE) != 0) \
+)
+
+/*
+ * Mark the tuple as a member of the second part of the chain. This must only
+ * be done on a tuple which is already marked as WARM-updated.
+ *
+ * Beware of multiple evaluations of the argument.
+ */
+#define HeapTupleHeaderSetWarm(tuple) \
+( \
+	AssertMacro(HeapTupleHeaderIsWarmUpdated(tuple)), \
+	(tuple)->t_infomask |= HEAP_WARM_TUPLE \
+)
+
+#define HeapTupleHeaderClearWarm(tuple) \
+( \
+	(tuple)->t_infomask &= ~HEAP_WARM_TUPLE \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
@@ -785,6 +871,24 @@ struct MinimalTupleData
 #define HeapTupleClearHeapOnly(tuple) \
 		HeapTupleHeaderClearHeapOnly((tuple)->t_data)
 
+#define HeapTupleIsWarmUpdated(tuple) \
+		HeapTupleHeaderIsWarmUpdated((tuple)->t_data)
+
+#define HeapTupleSetWarmUpdated(tuple) \
+		HeapTupleHeaderSetWarmUpdated((tuple)->t_data)
+
+#define HeapTupleClearWarmUpdated(tuple) \
+		HeapTupleHeaderClearWarmUpdated((tuple)->t_data)
+
+#define HeapTupleIsWarm(tuple) \
+		HeapTupleHeaderIsWarm((tuple)->t_data)
+
+#define HeapTupleSetWarm(tuple) \
+		HeapTupleHeaderSetWarm((tuple)->t_data)
+
+#define HeapTupleClearWarm(tuple) \
+		HeapTupleHeaderClearWarm((tuple)->t_data)
+
 #define HeapTupleGetOid(tuple) \
 		HeapTupleHeaderGetOid((tuple)->t_data)
 
diff --git a/src/include/access/nbtree.h b/src/include/access/nbtree.h
index f9304db..b319199 100644
--- a/src/include/access/nbtree.h
+++ b/src/include/access/nbtree.h
@@ -391,6 +391,9 @@ typedef struct BTScanOpaqueData
 	int		   *killedItems;	/* currPos.items indexes of killed items */
 	int			numKilled;		/* number of currently stored items */
 
+	/* info about items that are marked as WARM */
+	int		   *setWarmItems;
+	int			numSet;
 	/*
 	 * If we are doing an index-only scan, these are the tuple storage
 	 * workspaces for the currPos and markPos respectively.  Each is of size
@@ -427,6 +430,12 @@ typedef BTScanOpaqueData *BTScanOpaque;
 #define SK_BT_NULLS_FIRST	(INDOPTION_NULLS_FIRST << SK_BT_INDOPTION_SHIFT)
 
 /*
+ * Flags overloaded on the t_tid.ip_posid field. They are managed by
+ * ItemPointerSetFlags and corresponding routines.
+ */
+#define BTREE_INDEX_WARM_POINTER	0x01
+
+/*
  * external entry points for btree, in nbtree.c
  */
 extern IndexBuildResult *btbuild(Relation heap, Relation index,
@@ -436,6 +445,10 @@ extern bool btinsert(Relation rel, Datum *values, bool *isnull,
 		 ItemPointer ht_ctid, Relation heapRel,
 		 IndexUniqueCheck checkUnique,
 		 struct IndexInfo *indexInfo);
+extern bool btwarminsert(Relation rel, Datum *values, bool *isnull,
+		 ItemPointer ht_ctid, Relation heapRel,
+		 IndexUniqueCheck checkUnique,
+		 struct IndexInfo *indexInfo);
 extern IndexScanDesc btbeginscan(Relation rel, int nkeys, int norderbys);
 extern Size btestimateparallelscan(void);
 extern void btinitparallelscan(void *target);
@@ -487,10 +500,12 @@ extern void _bt_pageinit(Page page, Size size);
 extern bool _bt_page_recyclable(Page page);
 extern void _bt_delitems_delete(Relation rel, Buffer buf,
 					OffsetNumber *itemnos, int nitems, Relation heapRel);
-extern void _bt_delitems_vacuum(Relation rel, Buffer buf,
-					OffsetNumber *itemnos, int nitems,
-					BlockNumber lastBlockVacuumed);
+extern void _bt_handleitems_vacuum(Relation rel, Buffer buf,
+					OffsetNumber *delitemnos, int ndelitems,
+					OffsetNumber *clearitemnos, int nclearitems);
 extern int	_bt_pagedel(Relation rel, Buffer buf);
+extern void	_bt_clear_items(Page page, OffsetNumber *clearitemnos,
+					uint16 nclearitems);
 
 /*
  * prototypes for functions in nbtsearch.c
@@ -527,6 +542,7 @@ extern IndexTuple _bt_checkkeys(IndexScanDesc scan,
 			  Page page, OffsetNumber offnum,
 			  ScanDirection dir, bool *continuescan);
 extern void _bt_killitems(IndexScanDesc scan);
+extern void _bt_warmitems(IndexScanDesc scan);
 extern BTCycleId _bt_vacuum_cycleid(Relation rel);
 extern BTCycleId _bt_start_vacuum(Relation rel);
 extern void _bt_end_vacuum(Relation rel);
@@ -537,6 +553,10 @@ extern bytea *btoptions(Datum reloptions, bool validate);
 extern bool btproperty(Oid index_oid, int attno,
 		   IndexAMProperty prop, const char *propname,
 		   bool *res, bool *isnull);
+extern bool btrecheck(Relation indexRel, struct IndexInfo *indexInfo,
+		IndexTuple indexTuple,
+		Relation heapRel, HeapTuple heapTuple);
+extern bool btiswarm(Relation indexRel, IndexTuple itup);
 
 /*
  * prototypes for functions in nbtvalidate.c
diff --git a/src/include/access/nbtxlog.h b/src/include/access/nbtxlog.h
index d6a3085..6a86628 100644
--- a/src/include/access/nbtxlog.h
+++ b/src/include/access/nbtxlog.h
@@ -142,7 +142,8 @@ typedef struct xl_btree_reuse_page
 /*
  * This is what we need to know about vacuum of individual leaf index tuples.
  * The WAL record can represent deletion of any number of index tuples on a
- * single index page when executed by VACUUM.
+ * single index page when executed by VACUUM. It can also record tuples whose
+ * WARM bits are cleared by VACUUM.
  *
  * For MVCC scans, lastBlockVacuumed will be set to InvalidBlockNumber.
  * For a non-MVCC index scans there is an additional correctness requirement
@@ -165,11 +166,12 @@ typedef struct xl_btree_reuse_page
 typedef struct xl_btree_vacuum
 {
 	BlockNumber lastBlockVacuumed;
-
-	/* TARGET OFFSET NUMBERS FOLLOW */
+	uint16		ndelitems;
+	uint16		nclearitems;
+	/* ndelitems + nclearitems TARGET OFFSET NUMBERS FOLLOW */
 } xl_btree_vacuum;
 
-#define SizeOfBtreeVacuum	(offsetof(xl_btree_vacuum, lastBlockVacuumed) + sizeof(BlockNumber))
+#define SizeOfBtreeVacuum	(offsetof(xl_btree_vacuum, nclearitems) + sizeof(uint16))
 
 /*
  * This is what we need to know about marking an empty branch for deletion.
diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h
index 3fc726d..c36a682 100644
--- a/src/include/access/relscan.h
+++ b/src/include/access/relscan.h
@@ -101,9 +101,15 @@ typedef struct IndexScanDescData
 	bool		xactStartedInRecovery;	/* prevents killing/seeing killed
 										 * tuples */
 
+	/* signals the index AM to mark the index pointer as WARM */
+	bool		warm_prior_tuple;
+
 	/* index access method's private state */
 	void	   *opaque;			/* access-method-specific info */
 
+	/* IndexInfo structure for this index */
+	struct IndexInfo  *indexInfo;
+
 	/*
 	 * In an index-only scan, a successful amgettuple call must fill either
 	 * xs_itup (and xs_itupdesc) or xs_hitup (and xs_hitupdesc) to provide the
@@ -119,7 +125,7 @@ typedef struct IndexScanDescData
 	HeapTupleData xs_ctup;		/* current heap tuple, if any */
 	Buffer		xs_cbuf;		/* current heap buffer in scan, if any */
 	/* NB: if xs_cbuf is not InvalidBuffer, we hold a pin on that buffer */
-	bool		xs_recheck;		/* T means scan keys must be rechecked */
+	bool		xs_recheck;		/* T means scan keys must be rechecked for each tuple */
 
 	/*
 	 * When fetching with an ordering operator, the values of the ORDER BY
@@ -134,6 +140,7 @@ typedef struct IndexScanDescData
 
 	/* state data for traversing HOT chains in index_getnext */
 	bool		xs_continue_hot;	/* T if must keep walking HOT chain */
+	HeapCheckWarmChainStatus	xs_hot_chain_status;
 
 	/* parallel index scan information, in shared memory */
 	ParallelIndexScanDesc parallel_scan;
diff --git a/src/include/catalog/index.h b/src/include/catalog/index.h
index 20bec90..f92ec29 100644
--- a/src/include/catalog/index.h
+++ b/src/include/catalog/index.h
@@ -89,6 +89,13 @@ extern void FormIndexDatum(IndexInfo *indexInfo,
 			   Datum *values,
 			   bool *isnull);
 
+extern void FormIndexPlainDatum(IndexInfo *indexInfo,
+			   Relation heapRel,
+			   HeapTuple heapTup,
+			   Datum *values,
+			   bool *isnull,
+			   bool *isavail);
+
 extern void index_build(Relation heapRelation,
 			Relation indexRelation,
 			IndexInfo *indexInfo,
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index 711211d..3f1a142 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -2789,6 +2789,8 @@ DATA(insert OID = 1933 (  pg_stat_get_tuples_deleted	PGNSP PGUID 12 1 0 0 0 f f
 DESCR("statistics: number of tuples deleted");
 DATA(insert OID = 1972 (  pg_stat_get_tuples_hot_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated");
+DATA(insert OID = 3402 (  pg_stat_get_tuples_warm_updated PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated");
 DATA(insert OID = 2878 (  pg_stat_get_live_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_live_tuples _null_ _null_ _null_ ));
 DESCR("statistics: number of live tuples");
 DATA(insert OID = 2879 (  pg_stat_get_dead_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_dead_tuples _null_ _null_ _null_ ));
@@ -2941,6 +2943,8 @@ DATA(insert OID = 3042 (  pg_stat_get_xact_tuples_deleted		PGNSP PGUID 12 1 0 0
 DESCR("statistics: number of tuples deleted in current transaction");
 DATA(insert OID = 3043 (  pg_stat_get_xact_tuples_hot_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_hot_updated _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples hot updated in current transaction");
+DATA(insert OID = 3405 (  pg_stat_get_xact_tuples_warm_updated	PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_tuples_warm_updated _null_ _null_ _null_ ));
+DESCR("statistics: number of tuples warm updated in current transaction");
 DATA(insert OID = 3044 (  pg_stat_get_xact_blocks_fetched		PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_fetched _null_ _null_ _null_ ));
 DESCR("statistics: number of blocks fetched in current transaction");
 DATA(insert OID = 3045 (  pg_stat_get_xact_blocks_hit			PGNSP PGUID 12 1 0 0 0 f f f f t f v r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_xact_blocks_hit _null_ _null_ _null_ ));
diff --git a/src/include/commands/progress.h b/src/include/commands/progress.h
index 9472ecc..b355b61 100644
--- a/src/include/commands/progress.h
+++ b/src/include/commands/progress.h
@@ -25,6 +25,7 @@
 #define PROGRESS_VACUUM_NUM_INDEX_VACUUMS		4
 #define PROGRESS_VACUUM_MAX_DEAD_TUPLES			5
 #define PROGRESS_VACUUM_NUM_DEAD_TUPLES			6
+#define PROGRESS_VACUUM_HEAP_BLKS_WARMCLEARED	7
 
 /* Phases of vacuum (as advertised via PROGRESS_VACUUM_PHASE) */
 #define PROGRESS_VACUUM_PHASE_SCAN_HEAP			1
diff --git a/src/include/executor/executor.h b/src/include/executor/executor.h
index d3849b9..7e1ec56 100644
--- a/src/include/executor/executor.h
+++ b/src/include/executor/executor.h
@@ -506,6 +506,7 @@ extern int	ExecCleanTargetListLength(List *targetlist);
 extern void ExecOpenIndices(ResultRelInfo *resultRelInfo, bool speculative);
 extern void ExecCloseIndices(ResultRelInfo *resultRelInfo);
 extern List *ExecInsertIndexTuples(TupleTableSlot *slot, ItemPointer tupleid,
+					  ItemPointer root_tid, Bitmapset *modified_attrs,
 					  EState *estate, bool noDupErr, bool *specConflict,
 					  List *arbiterIndexes);
 extern bool ExecCheckIndexConstraints(TupleTableSlot *slot, EState *estate,
diff --git a/src/include/executor/nodeIndexscan.h b/src/include/executor/nodeIndexscan.h
index ea3f3a5..ebeec74 100644
--- a/src/include/executor/nodeIndexscan.h
+++ b/src/include/executor/nodeIndexscan.h
@@ -41,5 +41,4 @@ extern void ExecIndexEvalRuntimeKeys(ExprContext *econtext,
 extern bool ExecIndexEvalArrayKeys(ExprContext *econtext,
 					   IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
 extern bool ExecIndexAdvanceArrayKeys(IndexArrayKeyInfo *arrayKeys, int numArrayKeys);
-
 #endif   /* NODEINDEXSCAN_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index fa99244..eed75a8 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -133,6 +133,7 @@ typedef struct IndexInfo
 	NodeTag		type;
 	int			ii_NumIndexAttrs;
 	AttrNumber	ii_KeyAttrNumbers[INDEX_MAX_KEYS];
+	Bitmapset  *ii_indxattrs;	/* bitmap of all columns used in this index */
 	List	   *ii_Expressions; /* list of Expr */
 	List	   *ii_ExpressionsState;	/* list of ExprState */
 	List	   *ii_Predicate;	/* list of Expr */
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index e29397f..99bdc8b 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -105,6 +105,7 @@ typedef struct PgStat_TableCounts
 	PgStat_Counter t_tuples_updated;
 	PgStat_Counter t_tuples_deleted;
 	PgStat_Counter t_tuples_hot_updated;
+	PgStat_Counter t_tuples_warm_updated;
 	bool		t_truncated;
 
 	PgStat_Counter t_delta_live_tuples;
@@ -625,6 +626,7 @@ typedef struct PgStat_StatTabEntry
 	PgStat_Counter tuples_updated;
 	PgStat_Counter tuples_deleted;
 	PgStat_Counter tuples_hot_updated;
+	PgStat_Counter tuples_warm_updated;
 
 	PgStat_Counter n_live_tuples;
 	PgStat_Counter n_dead_tuples;
@@ -1285,7 +1287,7 @@ pgstat_report_wait_end(void)
 	(pgStatBlockWriteTime += (n))
 
 extern void pgstat_count_heap_insert(Relation rel, PgStat_Counter n);
-extern void pgstat_count_heap_update(Relation rel, bool hot);
+extern void pgstat_count_heap_update(Relation rel, bool hot, bool warm);
 extern void pgstat_count_heap_delete(Relation rel);
 extern void pgstat_count_truncate(Relation rel);
 extern void pgstat_update_heap_dead_tuples(Relation rel, int delta);
diff --git a/src/include/storage/bufpage.h b/src/include/storage/bufpage.h
index e956dc3..1852195 100644
--- a/src/include/storage/bufpage.h
+++ b/src/include/storage/bufpage.h
@@ -433,6 +433,8 @@ extern void PageIndexMultiDelete(Page page, OffsetNumber *itemnos, int nitems);
 extern void PageIndexTupleDeleteNoCompact(Page page, OffsetNumber offset);
 extern bool PageIndexTupleOverwrite(Page page, OffsetNumber offnum,
 						Item newtup, Size newsize);
+extern void PageIndexClearWarmTuples(Page page, OffsetNumber *clearitemnos,
+						uint16 nclearitems);
 extern char *PageSetChecksumCopy(Page page, BlockNumber blkno);
 extern void PageSetChecksumInplace(Page page, BlockNumber blkno);
 
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index ab875bb..2b86054 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -142,9 +142,16 @@ typedef struct RelationData
 
 	/* data managed by RelationGetIndexAttrBitmap: */
 	Bitmapset  *rd_indexattr;	/* identifies columns used in indexes */
+	Bitmapset  *rd_exprindexattr; /* identifies columns used in expression or
+									 predicate indexes */
+	Bitmapset  *rd_indxnotreadyattr;	/* columns used by indexes not yet
+										   ready */
 	Bitmapset  *rd_keyattr;		/* cols that can be ref'd by foreign keys */
 	Bitmapset  *rd_pkattr;		/* cols included in primary key */
 	Bitmapset  *rd_idattr;		/* included in replica identity index */
+	List	   *rd_indexattrsList;	/* list of bitmaps, one per index, each
+									   describing that index's attributes */
+	bool		rd_supportswarm;	/* true if the table can be WARM updated */
 
 	PublicationActions  *rd_pubactions;	/* publication actions */
 
@@ -281,6 +288,7 @@ typedef struct StdRdOptions
 	bool		user_catalog_table;		/* use as an additional catalog
 										 * relation */
 	int			parallel_workers;		/* max number of parallel workers */
+	bool		enable_warm;	/* should WARM be allowed on this table */
 } StdRdOptions;
 
 #define HEAP_MIN_FILLFACTOR			10
@@ -319,6 +327,17 @@ typedef struct StdRdOptions
 	  (relation)->rd_rel->relkind == RELKIND_MATVIEW) ? \
 	 ((StdRdOptions *) (relation)->rd_options)->user_catalog_table : false)
 
+#define HEAP_DEFAULT_ENABLE_WARM	true
+/*
+ * RelationWarmUpdatesEnabled
+ * 		Returns whether the relation supports WARM updates.
+ */
+#define RelationWarmUpdatesEnabled(relation) \
+	(((relation)->rd_options && \
+	 (relation)->rd_rel->relkind == RELKIND_RELATION) ? \
+	 ((StdRdOptions *) ((relation)->rd_options))->enable_warm : \
+		HEAP_DEFAULT_ENABLE_WARM)
+
 /*
  * RelationGetParallelWorkers
  *		Returns the relation's parallel_workers reloption setting.
diff --git a/src/include/utils/relcache.h b/src/include/utils/relcache.h
index 81af3ae..06c0183 100644
--- a/src/include/utils/relcache.h
+++ b/src/include/utils/relcache.h
@@ -51,11 +51,14 @@ typedef enum IndexAttrBitmapKind
 	INDEX_ATTR_BITMAP_ALL,
 	INDEX_ATTR_BITMAP_KEY,
 	INDEX_ATTR_BITMAP_PRIMARY_KEY,
-	INDEX_ATTR_BITMAP_IDENTITY_KEY
+	INDEX_ATTR_BITMAP_IDENTITY_KEY,
+	INDEX_ATTR_BITMAP_EXPR_PREDICATE,
+	INDEX_ATTR_BITMAP_NOTREADY
 } IndexAttrBitmapKind;
 
 extern Bitmapset *RelationGetIndexAttrBitmap(Relation relation,
 						   IndexAttrBitmapKind keyAttrs);
+extern List *RelationGetIndexAttrList(Relation relation);
 
 extern void RelationGetExclusionInfo(Relation indexRelation,
 						 Oid **operators,
diff --git a/src/test/regress/expected/alter_generic.out b/src/test/regress/expected/alter_generic.out
index ce581bb..85e4c70 100644
--- a/src/test/regress/expected/alter_generic.out
+++ b/src/test/regress/expected/alter_generic.out
@@ -161,15 +161,15 @@ ALTER SERVER alt_fserv1 RENAME TO alt_fserv3;   -- OK
 SELECT fdwname FROM pg_foreign_data_wrapper WHERE fdwname like 'alt_fdw%';
  fdwname  
 ----------
- alt_fdw2
  alt_fdw3
+ alt_fdw2
 (2 rows)
 
 SELECT srvname FROM pg_foreign_server WHERE srvname like 'alt_fserv%';
   srvname   
 ------------
- alt_fserv2
  alt_fserv3
+ alt_fserv2
 (2 rows)
 
 --
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index d706f42..f7dc4a4 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1756,6 +1756,7 @@ pg_stat_all_tables| SELECT c.oid AS relid,
     pg_stat_get_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_tuples_deleted(c.oid) AS n_tup_del,
     pg_stat_get_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_tuples_warm_updated(c.oid) AS n_tup_warm_upd,
     pg_stat_get_live_tuples(c.oid) AS n_live_tup,
     pg_stat_get_dead_tuples(c.oid) AS n_dead_tup,
     pg_stat_get_mod_since_analyze(c.oid) AS n_mod_since_analyze,
@@ -1903,6 +1904,7 @@ pg_stat_sys_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1946,6 +1948,7 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_upd,
     pg_stat_all_tables.n_tup_del,
     pg_stat_all_tables.n_tup_hot_upd,
+    pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
     pg_stat_all_tables.n_mod_since_analyze,
@@ -1983,7 +1986,8 @@ pg_stat_xact_all_tables| SELECT c.oid AS relid,
     pg_stat_get_xact_tuples_inserted(c.oid) AS n_tup_ins,
     pg_stat_get_xact_tuples_updated(c.oid) AS n_tup_upd,
     pg_stat_get_xact_tuples_deleted(c.oid) AS n_tup_del,
-    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd
+    pg_stat_get_xact_tuples_hot_updated(c.oid) AS n_tup_hot_upd,
+    pg_stat_get_xact_tuples_warm_updated(c.oid) AS n_tup_warm_upd
    FROM ((pg_class c
      LEFT JOIN pg_index i ON ((c.oid = i.indrelid)))
      LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
@@ -1999,7 +2003,8 @@ pg_stat_xact_sys_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname = ANY (ARRAY['pg_catalog'::name, 'information_schema'::name])) OR (pg_stat_xact_all_tables.schemaname ~ '^pg_toast'::text));
 pg_stat_xact_user_functions| SELECT p.oid AS funcid,
@@ -2021,7 +2026,8 @@ pg_stat_xact_user_tables| SELECT pg_stat_xact_all_tables.relid,
     pg_stat_xact_all_tables.n_tup_ins,
     pg_stat_xact_all_tables.n_tup_upd,
     pg_stat_xact_all_tables.n_tup_del,
-    pg_stat_xact_all_tables.n_tup_hot_upd
+    pg_stat_xact_all_tables.n_tup_hot_upd,
+    pg_stat_xact_all_tables.n_tup_warm_upd
    FROM pg_stat_xact_all_tables
   WHERE ((pg_stat_xact_all_tables.schemaname <> ALL (ARRAY['pg_catalog'::name, 'information_schema'::name])) AND (pg_stat_xact_all_tables.schemaname !~ '^pg_toast'::text));
 pg_statio_all_indexes| SELECT c.oid AS relid,
diff --git a/src/test/regress/expected/warm.out b/src/test/regress/expected/warm.out
new file mode 100644
index 0000000..1f07272
--- /dev/null
+++ b/src/test/regress/expected/warm.out
@@ -0,0 +1,930 @@
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+-- This could have been a HOT update since no index key is changed, but the
+-- page won't have any free space, so it will likely be a non-HOT update
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab1
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+ a |   b    |  c   |  d  
+---+--------+------+-----
+ 1 | 140001 | foo3 | bar
+(1 row)
+
+-- Check if index only scan works correctly
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab1
+   Recheck Cond: (b = 140001)
+   ->  Bitmap Index Scan on updtst_indx1
+         Index Cond: (b = 140001)
+(4 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx1 on updtst_tab1
+   Index Cond: (b = 140001)
+(2 rows)
+
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+   b    
+--------
+ 140001
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab1;
+------------------
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab2
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE a = 1;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+               QUERY PLAN                
+-----------------------------------------
+ Bitmap Heap Scan on updtst_tab2
+   Recheck Cond: (b = 701)
+   ->  Bitmap Index Scan on updtst_indx2
+         Index Cond: (b = 701)
+(4 rows)
+
+SELECT * FROM updtst_tab2 WHERE b = 701;
+ a |  b  |  c   |  d  
+---+-----+------+-----
+ 1 | 701 | foo6 | bar
+(1 row)
+
+VACUUM updtst_tab2;
+EXPLAIN (costs off) SELECT b FROM updtst_tab2 WHERE b = 701;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx2 on updtst_tab2
+   Index Cond: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab2 WHERE b = 701;
+  b  
+-----
+ 701
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab2;
+------------------
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    99
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 1;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 1 | 1421 | foo12 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 701;
+       QUERY PLAN        
+-------------------------
+ Seq Scan on updtst_tab3
+   Filter: (b = 701)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 701;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+  b   
+------
+ 1421
+(1 row)
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+SET enable_seqscan = false;
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+ count 
+-------
+    98
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+SELECT * FROM updtst_tab3 WHERE a = 2;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+-- Try fetching both old and new values using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+ a | b | c | d 
+---+---+---+---
+(0 rows)
+
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+ a |  b   |   c   |  d  
+---+------+-------+-----
+ 2 | 1422 | foo22 | bar
+(1 row)
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 702;
+                    QUERY PLAN                     
+---------------------------------------------------
+ Index Only Scan using updtst_indx3 on updtst_tab3
+   Index Cond: (b = 702)
+(2 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 702;
+ b 
+---
+(0 rows)
+
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+  b   
+------
+ 1422
+(1 row)
+
+SET enable_seqscan = true;
+DROP TABLE updtst_tab3;
+------------------
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+explain select * from test_warm where lower(a) = 'test';
+                                 QUERY PLAN                                 
+----------------------------------------------------------------------------
+ Bitmap Heap Scan on test_warm  (cost=4.18..12.65 rows=4 width=64)
+   Recheck Cond: (lower(a) = 'test'::text)
+   ->  Bitmap Index Scan on test_warmindx  (cost=0.00..4.18 rows=4 width=0)
+         Index Cond: (lower(a) = 'test'::text)
+(4 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+select *, ctid from test_warm where a = 'test';
+ a | b | ctid 
+---+---+------
+(0 rows)
+
+select *, ctid from test_warm where a = 'TEST';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+                                   QUERY PLAN                                    
+---------------------------------------------------------------------------------
+ Index Scan using test_warmindx on test_warm  (cost=0.15..20.22 rows=4 width=64)
+   Index Cond: (lower(a) = 'test'::text)
+(2 rows)
+
+select *, ctid from test_warm where lower(a) = 'test';
+  a   |  b  | ctid  
+------+-----+-------
+ TEST | foo | (0,2)
+(1 row)
+
+DROP TABLE test_warm;
+--- Test with toast data types
+CREATE TABLE test_toast_warm (a int unique, b text, c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+-- insert a large enough value to cause index datum compression
+INSERT INTO test_toast_warm VALUES (1, repeat('a', 600), 100);
+INSERT INTO test_toast_warm VALUES (2, repeat('b', 2), 100);
+INSERT INTO test_toast_warm VALUES (3, repeat('c', 4), 100);
+INSERT INTO test_toast_warm VALUES (4, repeat('d', 63), 100);
+INSERT INTO test_toast_warm VALUES (5, repeat('e', 126), 100);
+INSERT INTO test_toast_warm VALUES (6, repeat('f', 127), 100);
+INSERT INTO test_toast_warm VALUES (7, repeat('g', 128), 100);
+INSERT INTO test_toast_warm VALUES (8, repeat('h', 3200), 100);
+UPDATE test_toast_warm SET b = repeat('q', 600) WHERE a = 1;
+UPDATE test_toast_warm SET b = repeat('r', 2) WHERE a = 2;
+UPDATE test_toast_warm SET b = repeat('s', 4) WHERE a = 3;
+UPDATE test_toast_warm SET b = repeat('t', 63) WHERE a = 4;
+UPDATE test_toast_warm SET b = repeat('u', 126) WHERE a = 5;
+UPDATE test_toast_warm SET b = repeat('v', 127) WHERE a = 6;
+UPDATE test_toast_warm SET b = repeat('w', 128) WHERE a = 7;
+UPDATE test_toast_warm SET b = repeat('x', 3200) WHERE a = 8;
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+                        QUERY PLAN                         
+-----------------------------------------------------------
+ Index Scan using test_toast_warm_a_key on test_toast_warm
+   Index Cond: (a = 1)
+(2 rows)
+
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+                                                                                                                                                                                                                                                                                                                      QUERY PLAN                                                                                                                                                                                                                                                                                                                      
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Index Scan using test_toast_warm_index on test_toast_warm
+   Index Cond: (b = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'::text)
+(2 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+                                                                                                                                                                                                                                                                                                                      QUERY PLAN                                                                                                                                                                                                                                                                                                                      
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Index Only Scan using test_toast_warm_index on test_toast_warm
+   Index Cond: (b = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'::text)
+(2 rows)
+
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+ a |                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+---+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 1 | qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+ a | b 
+---+---
+(0 rows)
+
+SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+ b 
+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+ a |                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+---+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 1 | qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+ a | b  
+---+----
+ 2 | rr
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+ a |  b   
+---+------
+ 3 | ssss
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+ a |                                b                                
+---+-----------------------------------------------------------------
+ 4 | ttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttt
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+ a |                                                               b                                                                
+---+--------------------------------------------------------------------------------------------------------------------------------
+ 5 | uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+ a |                                                                b                                                                
+---+---------------------------------------------------------------------------------------------------------------------------------
+ 6 | vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+ a |                                                                b                                                                 
+---+----------------------------------------------------------------------------------------------------------------------------------
+ 7 | wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+ a |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                b                                                                                                                                                                                                                                                                                                                                                                                                          
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       
+---+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 8 | xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+(1 row)
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                                    QUERY PLAN                                                                                                                                                                                                                                                                                                                    
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 'qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq'::text)
+(2 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                                    QUERY PLAN                                                                                                                                                                                                                                                                                                                    
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 'qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq'::text)
+(2 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+ a |                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+---+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 1 | qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+                                                                                                                                                                                                                                                                                                            b                                                                                                                                                                                                                                                                                                             
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+ a | b  
+---+----
+ 2 | rr
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+ a |  b   
+---+------
+ 3 | ssss
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+ a |                                b                                
+---+-----------------------------------------------------------------
+ 4 | ttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttttt
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+ a |                                                               b                                                                
+---+--------------------------------------------------------------------------------------------------------------------------------
+ 5 | uuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuu
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+ a |                                                                b                                                                
+---+---------------------------------------------------------------------------------------------------------------------------------
+ 6 | vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+ a |                                                                b                                                                 
+---+----------------------------------------------------------------------------------------------------------------------------------
+ 7 | wwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwww
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+ a |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                b                                                                                                                                                                                                                                                                                                                                                                                                          
+---+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ 8 | xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+(1 row)
+
+DROP TABLE test_toast_warm;
+-- Test with numeric data type
+CREATE TABLE test_toast_warm (a int unique, b numeric(10,2), c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+INSERT INTO test_toast_warm VALUES (1, 10.2, 100);
+INSERT INTO test_toast_warm VALUES (2, 11.22, 100);
+INSERT INTO test_toast_warm VALUES (3, 12.222, 100);
+INSERT INTO test_toast_warm VALUES (4, 13.20, 100);
+INSERT INTO test_toast_warm VALUES (5, 14.201, 100);
+UPDATE test_toast_warm SET b = 100.2 WHERE a = 1;
+UPDATE test_toast_warm SET b = 101.22 WHERE a = 2;
+UPDATE test_toast_warm SET b = 102.222 WHERE a = 3;
+UPDATE test_toast_warm SET b = 103.20 WHERE a = 4;
+UPDATE test_toast_warm SET b = 104.201 WHERE a = 5;
+SELECT * FROM test_toast_warm;
+ a |   b    |  c  
+---+--------+-----
+ 1 | 100.20 | 100
+ 2 | 101.22 | 100
+ 3 | 102.22 | 100
+ 4 | 103.20 | 100
+ 5 | 104.20 | 100
+(5 rows)
+
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+                        QUERY PLAN                         
+-----------------------------------------------------------
+ Index Scan using test_toast_warm_a_key on test_toast_warm
+   Index Cond: (a = 1)
+(2 rows)
+
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+                    QUERY PLAN                    
+--------------------------------------------------
+ Bitmap Heap Scan on test_toast_warm
+   Recheck Cond: (b = 10.2)
+   ->  Bitmap Index Scan on test_toast_warm_index
+         Index Cond: (b = 10.2)
+(4 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+                    QUERY PLAN                    
+--------------------------------------------------
+ Bitmap Heap Scan on test_toast_warm
+   Recheck Cond: (b = 100.2)
+   ->  Bitmap Index Scan on test_toast_warm_index
+         Index Cond: (b = 100.2)
+(4 rows)
+
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+ a | b 
+---+---
+(0 rows)
+
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+ b 
+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+   b    
+--------
+ 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+ a |   b    
+---+--------
+ 2 | 101.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+ a |   b    
+---+--------
+ 3 | 102.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+ a |   b    
+---+--------
+ 4 | 103.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+ a |   b    
+---+--------
+ 5 | 104.20
+(1 row)
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+         QUERY PLAN          
+-----------------------------
+ Seq Scan on test_toast_warm
+   Filter: (a = 1)
+(2 rows)
+
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+         QUERY PLAN          
+-----------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 10.2)
+(2 rows)
+
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+         QUERY PLAN          
+-----------------------------
+ Seq Scan on test_toast_warm
+   Filter: (b = 100.2)
+(2 rows)
+
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+ a | b 
+---+---
+(0 rows)
+
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+ b 
+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+ a |   b    
+---+--------
+ 1 | 100.20
+(1 row)
+
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+   b    
+--------
+ 100.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+ a |   b    
+---+--------
+ 2 | 101.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+ a |   b    
+---+--------
+ 3 | 102.22
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+ a |   b    
+---+--------
+ 4 | 103.20
+(1 row)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+ a | b 
+---+---
+(0 rows)
+
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+ a |   b    
+---+--------
+ 5 | 104.20
+(1 row)
+
+DROP TABLE test_toast_warm;
+-- Toasted heap attributes
+CREATE TABLE toasttest(descr text , cnt int DEFAULT 0, f1 text, f2 text);
+CREATE INDEX testindx1 ON toasttest(descr);
+CREATE INDEX testindx2 ON toasttest(f1);
+INSERT INTO toasttest(descr, f1, f2) VALUES('two-compressed', repeat('1234567890',1000), repeat('1234567890',1000));
+INSERT INTO toasttest(descr, f1, f2) VALUES('two-toasted', repeat('1234567890',20000), repeat('1234567890',50000));
+INSERT INTO toasttest(descr, f1, f2) VALUES('one-compressed,one-toasted', repeat('1234567890',1000), repeat('1234567890',50000));
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+ ctid  |                                                                                                substring                                                                                                 
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,1) | (two-compressed,0,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012
+ (0,2) | (two-toasted,0,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+ (0,3) | ("one-compressed,one-toasted",0,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+(3 rows)
+
+-- UPDATE f1 by doing string manipulation, but the updated value remains the
+-- same as the old value
+UPDATE toasttest SET cnt = cnt +1, f1 = trim(leading '-' from '-'||f1) RETURNING substring(toasttest::text, 1, 200);
+                                                                                                substring                                                                                                 
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (two-compressed,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012
+ (two-toasted,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+ ("one-compressed,one-toasted",1,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+(3 rows)
+
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+                  QUERY PLAN                   
+-----------------------------------------------
+ Seq Scan on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+ ctid  |                                                                                                substring                                                                                                 
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,4) | (two-compressed,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012
+ (0,5) | (two-toasted,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+ (0,6) | ("one-compressed,one-toasted",1,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+(3 rows)
+
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+                           QUERY PLAN                            
+-----------------------------------------------------------------
+ Index Scan using testindx2 on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+ ctid  |                                                                                                substring                                                                                                 
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,6) | ("one-compressed,one-toasted",1,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+ (0,4) | (two-compressed,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012
+ (0,5) | (two-toasted,1,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+(3 rows)
+
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+-- UPDATE f1 for real this time
+UPDATE toasttest SET cnt = cnt +1, f1 = '-'||f1 RETURNING substring(toasttest::text, 1, 200);
+                                                                                                substring                                                                                                 
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (two-toasted,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234
+ ("one-compressed,one-toasted",2,-12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567
+(3 rows)
+
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+                  QUERY PLAN                   
+-----------------------------------------------
+ Seq Scan on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+ ctid  |                                                                                                substring                                                                                                 
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,7) | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,8) | (two-toasted,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234
+ (0,9) | ("one-compressed,one-toasted",2,-12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567
+(3 rows)
+
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+                           QUERY PLAN                            
+-----------------------------------------------------------------
+ Index Scan using testindx2 on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+ ctid  |                                                                                                substring                                                                                                 
+-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,9) | ("one-compressed,one-toasted",2,-12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567
+ (0,7) | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,8) | (two-toasted,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234
+(3 rows)
+
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+-- UPDATE f1 from toasted to compressed
+UPDATE toasttest SET cnt = cnt +1, f1 = repeat('1234567890',1000) WHERE descr = 'two-toasted';
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+                  QUERY PLAN                   
+-----------------------------------------------
+ Seq Scan on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+  ctid  |                                                                                                substring                                                                                                 
+--------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,7)  | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,9)  | ("one-compressed,one-toasted",2,-12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567
+ (0,10) | (two-toasted,3,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+(3 rows)
+
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+                           QUERY PLAN                            
+-----------------------------------------------------------------
+ Index Scan using testindx2 on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+  ctid  |                                                                                                substring                                                                                                 
+--------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,9)  | ("one-compressed,one-toasted",2,-12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567
+ (0,7)  | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,10) | (two-toasted,3,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+(3 rows)
+
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+-- UPDATE f1 from compressed to toasted
+UPDATE toasttest SET cnt = cnt +1, f1 = repeat('1234567890',2000) WHERE descr = 'one-compressed,one-toasted';
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+                  QUERY PLAN                   
+-----------------------------------------------
+ Seq Scan on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+  ctid  |                                                                                                substring                                                                                                 
+--------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,7)  | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,10) | (two-toasted,3,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+ (0,11) | ("one-compressed,one-toasted",3,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+(3 rows)
+
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+                           QUERY PLAN                            
+-----------------------------------------------------------------
+ Index Scan using testindx2 on toasttest (actual rows=3 loops=1)
+(1 row)
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+  ctid  |                                                                                                substring                                                                                                 
+--------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+ (0,7)  | (two-compressed,2,-1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901
+ (0,10) | (two-toasted,3,12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
+ (0,11) | ("one-compressed,one-toasted",3,123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678
+(3 rows)
+
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+DROP TABLE toasttest;
+-- Test enable_warm reloption
+CREATE TABLE testrelopt (a int unique, b int, c int) WITH (enable_warm = true);
+ALTER TABLE testrelopt SET (enable_warm = true);
+ALTER TABLE testrelopt RESET (enable_warm);
+-- should fail since we don't allow turning WARM off
+ALTER TABLE testrelopt SET (enable_warm = false);
+ERROR:  WARM updates cannot be disabled on the table "testrelopt"
+DROP TABLE testrelopt;
+CREATE TABLE testrelopt (a int unique, b int, c int) WITH (enable_warm = false);
+-- should be ok since the default is ON and we support turning WARM ON
+ALTER TABLE testrelopt RESET (enable_warm);
+ALTER TABLE testrelopt SET (enable_warm = true);
+-- should fail since we don't allow turning WARM off
+ALTER TABLE testrelopt SET (enable_warm = false);
+ERROR:  WARM updates cannot be disabled on the table "testrelopt"
+DROP TABLE testrelopt;
diff --git a/src/test/regress/parallel_schedule b/src/test/regress/parallel_schedule
index 9f95b01..cd99f88 100644
--- a/src/test/regress/parallel_schedule
+++ b/src/test/regress/parallel_schedule
@@ -42,6 +42,8 @@ test: create_type
 test: create_table
 test: create_function_2
 
+test: warm
+
 # ----------
 # Load huge amounts of data
 # We should split the data files into single files and then
diff --git a/src/test/regress/sql/warm.sql b/src/test/regress/sql/warm.sql
new file mode 100644
index 0000000..fc80c0f
--- /dev/null
+++ b/src/test/regress/sql/warm.sql
@@ -0,0 +1,360 @@
+
+CREATE TABLE updtst_tab1 (a integer unique, b int, c text, d text);
+CREATE INDEX updtst_indx1 ON updtst_tab1 (b);
+INSERT INTO updtst_tab1
+       SELECT generate_series(1,10000), generate_series(70001, 80000), 'foo', 'bar';
+
+-- This should be a HOT update as non-index key is updated, but the
+-- page won't have any free space, so probably a non-HOT update
+UPDATE updtst_tab1 SET c = 'foo1' WHERE a = 1;
+
+-- Next update should be a HOT update as dead space is recycled
+UPDATE updtst_tab1 SET c = 'foo2' WHERE a = 1;
+
+-- And next too
+UPDATE updtst_tab1 SET c = 'foo3' WHERE a = 1;
+
+-- Now update one of the index key columns
+UPDATE updtst_tab1 SET b = b + 70000 WHERE a = 1;
+
+-- Ensure that the correct row is fetched
+SELECT * FROM updtst_tab1 WHERE a = 1;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Even when seqscan is disabled and indexscan is forced
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT * FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Check if index only scan works correctly
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+-- Table must be vacuumed to force index-only scan
+VACUUM updtst_tab1;
+EXPLAIN (costs off) SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+SELECT b FROM updtst_tab1 WHERE b = 70001 + 70000;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab1;
+
+------------------
+
+CREATE TABLE updtst_tab2 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx2 ON updtst_tab2 (b);
+INSERT INTO updtst_tab2
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+UPDATE updtst_tab2 SET b = b + 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo1'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab2 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab2 SET c = 'foo6'  WHERE a = 1;
+
+SELECT count(*) FROM updtst_tab2 WHERE c = 'foo';
+SELECT * FROM updtst_tab2 WHERE c = 'foo6';
+
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE a = 1;
+
+SET enable_seqscan = false;
+EXPLAIN (costs off) SELECT * FROM updtst_tab2 WHERE b = 701;
+SELECT * FROM updtst_tab2 WHERE b = 701;
+
+VACUUM updtst_tab2;
+EXPLAIN (costs off) SELECT b FROM updtst_tab2 WHERE b = 701;
+SELECT b FROM updtst_tab2 WHERE b = 701;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab2;
+------------------
+
+CREATE TABLE updtst_tab3 (a integer unique, b int, c text, d text) WITH (fillfactor = 80);
+CREATE INDEX updtst_indx3 ON updtst_tab3 (b);
+INSERT INTO updtst_tab3
+       SELECT generate_series(1,100), generate_series(701, 800), 'foo', 'bar';
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo1', b = b + 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo2'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo3'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo4'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo5'  WHERE a = 1;
+UPDATE updtst_tab3 SET c = 'foo6'  WHERE a = 1;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo11', b = b + 750 WHERE b = 701;
+UPDATE updtst_tab3 SET c = 'foo12'  WHERE a = 1;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 1;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo6';
+SELECT * FROM updtst_tab3 WHERE c = 'foo12';
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+SELECT * FROM updtst_tab3 WHERE a = 1;
+
+SELECT * FROM updtst_tab3 WHERE b = 701;
+SELECT * FROM updtst_tab3 WHERE b = 1421;
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 701;
+SELECT b FROM updtst_tab3 WHERE b = 1421;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo23'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 700 WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo24'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo25'  WHERE a = 2;
+UPDATE updtst_tab3 SET c = 'foo26'  WHERE a = 2;
+
+-- Abort the transaction and ensure the original tuple is visible correctly
+ROLLBACK;
+
+SET enable_seqscan = false;
+
+BEGIN;
+UPDATE updtst_tab3 SET c = 'foo21', b = b + 750 WHERE b = 702;
+UPDATE updtst_tab3 SET c = 'foo22'  WHERE a = 2;
+UPDATE updtst_tab3 SET b = b - 30 WHERE a = 2;
+COMMIT;
+
+SELECT count(*) FROM updtst_tab3 WHERE c = 'foo';
+SELECT * FROM updtst_tab3 WHERE c = 'foo26';
+SELECT * FROM updtst_tab3 WHERE c = 'foo22';
+
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+SELECT * FROM updtst_tab3 WHERE a = 2;
+
+-- Try fetching both old and new value using updtst_indx3
+SELECT * FROM updtst_tab3 WHERE b = 702;
+SELECT * FROM updtst_tab3 WHERE b = 1422;
+
+VACUUM updtst_tab3;
+EXPLAIN (costs off) SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 702;
+SELECT b FROM updtst_tab3 WHERE b = 1422;
+
+SET enable_seqscan = true;
+
+DROP TABLE updtst_tab3;
+------------------
+
+CREATE TABLE test_warm (a text unique, b text);
+CREATE INDEX test_warmindx ON test_warm (lower(a));
+INSERT INTO test_warm values ('test', 'foo');
+UPDATE test_warm SET a = 'TEST';
+select *, ctid from test_warm where lower(a) = 'test';
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where a = 'test';
+select *, ctid from test_warm where a = 'TEST';
+set enable_bitmapscan TO false;
+explain select * from test_warm where lower(a) = 'test';
+select *, ctid from test_warm where lower(a) = 'test';
+DROP TABLE test_warm;
+
+--- Test with toast data types
+
+CREATE TABLE test_toast_warm (a int unique, b text, c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+
+-- insert a large enough value to cause index datum compression
+INSERT INTO test_toast_warm VALUES (1, repeat('a', 600), 100);
+INSERT INTO test_toast_warm VALUES (2, repeat('b', 2), 100);
+INSERT INTO test_toast_warm VALUES (3, repeat('c', 4), 100);
+INSERT INTO test_toast_warm VALUES (4, repeat('d', 63), 100);
+INSERT INTO test_toast_warm VALUES (5, repeat('e', 126), 100);
+INSERT INTO test_toast_warm VALUES (6, repeat('f', 127), 100);
+INSERT INTO test_toast_warm VALUES (7, repeat('g', 128), 100);
+INSERT INTO test_toast_warm VALUES (8, repeat('h', 3200), 100);
+
+UPDATE test_toast_warm SET b = repeat('q', 600) WHERE a = 1;
+UPDATE test_toast_warm SET b = repeat('r', 2) WHERE a = 2;
+UPDATE test_toast_warm SET b = repeat('s', 4) WHERE a = 3;
+UPDATE test_toast_warm SET b = repeat('t', 63) WHERE a = 4;
+UPDATE test_toast_warm SET b = repeat('u', 126) WHERE a = 5;
+UPDATE test_toast_warm SET b = repeat('v', 127) WHERE a = 6;
+UPDATE test_toast_warm SET b = repeat('w', 128) WHERE a = 7;
+UPDATE test_toast_warm SET b = repeat('x', 3200) WHERE a = 8;
+
+
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+SELECT a, b FROM test_toast_warm WHERE b = repeat('a', 600);
+SELECT b FROM test_toast_warm WHERE b = repeat('a', 600);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('q', 600);
+SELECT b FROM test_toast_warm WHERE b = repeat('q', 600);
+
+SELECT a, b FROM test_toast_warm WHERE b = repeat('r', 2);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('s', 4);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('t', 63);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('u', 126);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('v', 127);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('w', 128);
+SELECT a, b FROM test_toast_warm WHERE b = repeat('x', 3200);
+
+DROP TABLE test_toast_warm;
+
+-- Test with numeric data type
+
+CREATE TABLE test_toast_warm (a int unique, b numeric(10,2), c int);
+CREATE INDEX test_toast_warm_index ON test_toast_warm(b);
+
+INSERT INTO test_toast_warm VALUES (1, 10.2, 100);
+INSERT INTO test_toast_warm VALUES (2, 11.22, 100);
+INSERT INTO test_toast_warm VALUES (3, 12.222, 100);
+INSERT INTO test_toast_warm VALUES (4, 13.20, 100);
+INSERT INTO test_toast_warm VALUES (5, 14.201, 100);
+
+UPDATE test_toast_warm SET b = 100.2 WHERE a = 1;
+UPDATE test_toast_warm SET b = 101.22 WHERE a = 2;
+UPDATE test_toast_warm SET b = 102.222 WHERE a = 3;
+UPDATE test_toast_warm SET b = 103.20 WHERE a = 4;
+UPDATE test_toast_warm SET b = 104.201 WHERE a = 5;
+
+SELECT * FROM test_toast_warm;
+
+SET enable_seqscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+
+SET enable_seqscan TO true;
+SET enable_indexscan TO false;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE a = 1;
+EXPLAIN (costs off) SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+EXPLAIN (costs off) SELECT b FROM test_toast_warm WHERE b = 100.2;
+SELECT a, b FROM test_toast_warm WHERE a = 1;
+SELECT a, b FROM test_toast_warm WHERE b = 10.2;
+SELECT b FROM test_toast_warm WHERE b = 10.2;
+SELECT a, b FROM test_toast_warm WHERE b = 100.2;
+SELECT b FROM test_toast_warm WHERE b = 100.2;
+
+SELECT a, b FROM test_toast_warm WHERE b = 101.22;
+SELECT a, b FROM test_toast_warm WHERE b = 102.222;
+SELECT a, b FROM test_toast_warm WHERE b = 102.22;
+SELECT a, b FROM test_toast_warm WHERE b = 103.20;
+SELECT a, b FROM test_toast_warm WHERE b = 104.201;
+SELECT a, b FROM test_toast_warm WHERE b = 104.20;
+
+DROP TABLE test_toast_warm;
+
+-- Toasted heap attributes
+CREATE TABLE toasttest(descr text , cnt int DEFAULT 0, f1 text, f2 text);
+CREATE INDEX testindx1 ON toasttest(descr);
+CREATE INDEX testindx2 ON toasttest(f1);
+
+INSERT INTO toasttest(descr, f1, f2) VALUES('two-compressed', repeat('1234567890',1000), repeat('1234567890',1000));
+INSERT INTO toasttest(descr, f1, f2) VALUES('two-toasted', repeat('1234567890',20000), repeat('1234567890',50000));
+INSERT INTO toasttest(descr, f1, f2) VALUES('one-compressed,one-toasted', repeat('1234567890',1000), repeat('1234567890',50000));
+
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+
+-- UPDATE f1 by doing string manipulation, but the updated value remains the
+-- same as the old value
+UPDATE toasttest SET cnt = cnt +1, f1 = trim(leading '-' from '-'||f1) RETURNING substring(toasttest::text, 1, 200);
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+
+-- UPDATE f1 for real this time
+UPDATE toasttest SET cnt = cnt +1, f1 = '-'||f1 RETURNING substring(toasttest::text, 1, 200);
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+
+-- UPDATE f1 from toasted to compressed
+UPDATE toasttest SET cnt = cnt +1, f1 = repeat('1234567890',1000) WHERE descr = 'two-toasted';
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+
+-- UPDATE f1 from compressed to toasted
+UPDATE toasttest SET cnt = cnt +1, f1 = repeat('1234567890',2000) WHERE descr = 'one-compressed,one-toasted';
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest;
+SET enable_seqscan TO false;
+SET seq_page_cost = 10000;
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SELECT ctid, substring(toasttest::text, 1, 200) FROM toasttest ORDER BY f1;
+SET enable_seqscan TO true;
+SET seq_page_cost TO default;
+
+DROP TABLE toasttest;
+
+-- Test enable_warm reloption
+CREATE TABLE testrelopt (a int unique, b int, c int) WITH (enable_warm = true);
+ALTER TABLE testrelopt SET (enable_warm = true);
+ALTER TABLE testrelopt RESET (enable_warm);
+-- should fail since we don't allow turning WARM off
+ALTER TABLE testrelopt SET (enable_warm = false);
+DROP TABLE testrelopt;
+
+CREATE TABLE testrelopt (a int unique, b int, c int) WITH (enable_warm = false);
+-- should be ok since the default is ON and we support turning WARM ON
+ALTER TABLE testrelopt RESET (enable_warm);
+ALTER TABLE testrelopt SET (enable_warm = true);
+-- should fail since we don't allow turning WARM off
+ALTER TABLE testrelopt SET (enable_warm = false);
+DROP TABLE testrelopt;
-- 
2.9.3 (Apple Git-75)

Attachment: 0001-Track-root-line-pointer-v23_v26.patch (application/octet-stream)
From 6b9ff9be78d8b8d51e63549ab620096a95031606 Mon Sep 17 00:00:00 2001
From: Pavan Deolasee <pavan.deolasee@gmail.com>
Date: Tue, 28 Feb 2017 10:34:30 +0530
Subject: [PATCH 1/4] Track root line pointer - v23

Store the root line pointer of the WARM chain in the t_ctid.ip_posid field of
the last tuple in the chain and mark the tuple header with HEAP_TUPLE_LATEST
flag to record that fact.
---
 src/backend/access/heap/heapam.c      | 209 ++++++++++++++++++++++++++++------
 src/backend/access/heap/hio.c         |  25 +++-
 src/backend/access/heap/pruneheap.c   | 126 ++++++++++++++++++--
 src/backend/access/heap/rewriteheap.c |  21 +++-
 src/backend/executor/execIndexing.c   |   3 +-
 src/backend/executor/execMain.c       |   4 +-
 src/include/access/heapam.h           |   1 +
 src/include/access/heapam_xlog.h      |   4 +-
 src/include/access/hio.h              |   4 +-
 src/include/access/htup_details.h     |  97 +++++++++++++++-
 10 files changed, 428 insertions(+), 66 deletions(-)

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 0c3e2b0..30262ef 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -94,7 +94,8 @@ static HeapTuple heap_prepare_insert(Relation relation, HeapTuple tup,
 					TransactionId xid, CommandId cid, int options);
 static XLogRecPtr log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup,
-				HeapTuple newtup, HeapTuple old_key_tup,
+				HeapTuple newtup, OffsetNumber root_offnum,
+				HeapTuple old_key_tup,
 				bool all_visible_cleared, bool new_all_visible_cleared);
 static Bitmapset *HeapDetermineModifiedColumns(Relation relation,
 							 Bitmapset *interesting_cols,
@@ -2264,13 +2265,13 @@ heap_get_latest_tid(Relation relation,
 		 */
 		if ((tp.t_data->t_infomask & HEAP_XMAX_INVALID) ||
 			HeapTupleHeaderIsOnlyLocked(tp.t_data) ||
-			ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid))
+			HeapTupleHeaderIsHeapLatest(tp.t_data, &ctid))
 		{
 			UnlockReleaseBuffer(buffer);
 			break;
 		}
 
-		ctid = tp.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tp.t_data, &ctid);
 		priorXmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		UnlockReleaseBuffer(buffer);
 	}							/* end of loop */
@@ -2401,6 +2402,7 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	bool		all_visible_cleared = false;
+	OffsetNumber	root_offnum;
 
 	/*
 	 * Fill in tuple header fields, assign an OID, and toast the tuple if
@@ -2439,8 +2441,13 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
-	RelationPutHeapTuple(relation, buffer, heaptup,
-						 (options & HEAP_INSERT_SPECULATIVE) != 0);
+	root_offnum = RelationPutHeapTuple(relation, buffer, heaptup,
+						 (options & HEAP_INSERT_SPECULATIVE) != 0,
+						 InvalidOffsetNumber);
+
+	/* We must not overwrite the speculative insertion token. */
+	if ((options & HEAP_INSERT_SPECULATIVE) == 0)
+		HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 	if (PageIsAllVisible(BufferGetPage(buffer)))
 	{
@@ -2668,6 +2675,7 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 	Size		saveFreeSpace;
 	bool		need_tuple_data = RelationIsLogicallyLogged(relation);
 	bool		need_cids = RelationIsAccessibleInLogicalDecoding(relation);
+	OffsetNumber	root_offnum;
 
 	needwal = !(options & HEAP_INSERT_SKIP_WAL) && RelationNeedsWAL(relation);
 	saveFreeSpace = RelationGetTargetPageFreeSpace(relation,
@@ -2738,7 +2746,12 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		 * RelationGetBufferForTuple has ensured that the first tuple fits.
 		 * Put that on the page, and then as many other tuples as fit.
 		 */
-		RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false);
+		root_offnum = RelationPutHeapTuple(relation, buffer, heaptuples[ndone], false,
+				InvalidOffsetNumber);
+
+		/* Mark this tuple as the latest and also set root offset. */
+		HeapTupleHeaderSetHeapLatest(heaptuples[ndone]->t_data, root_offnum);
+
 		for (nthispage = 1; ndone + nthispage < ntuples; nthispage++)
 		{
 			HeapTuple	heaptup = heaptuples[ndone + nthispage];
@@ -2746,7 +2759,10 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 			if (PageGetHeapFreeSpace(page) < MAXALIGN(heaptup->t_len) + saveFreeSpace)
 				break;
 
-			RelationPutHeapTuple(relation, buffer, heaptup, false);
+			root_offnum = RelationPutHeapTuple(relation, buffer, heaptup, false,
+					InvalidOffsetNumber);
+			/* Mark each tuple as the latest and also set root offset. */
+			HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
 
 			/*
 			 * We don't use heap_multi_insert for catalog tuples yet, but
@@ -3018,6 +3034,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	HeapTupleData tp;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
 	Buffer		buffer;
 	Buffer		vmbuffer = InvalidBuffer;
 	TransactionId new_xmax;
@@ -3028,6 +3045,7 @@ heap_delete(Relation relation, ItemPointer tid,
 	bool		all_visible_cleared = false;
 	HeapTuple	old_key_tuple = NULL;	/* replica identity of the tuple */
 	bool		old_key_copied = false;
+	OffsetNumber	root_offnum;
 
 	Assert(ItemPointerIsValid(tid));
 
@@ -3069,7 +3087,8 @@ heap_delete(Relation relation, ItemPointer tid,
 		LockBuffer(buffer, BUFFER_LOCK_EXCLUSIVE);
 	}
 
-	lp = PageGetItemId(page, ItemPointerGetOffsetNumber(tid));
+	offnum = ItemPointerGetOffsetNumber(tid);
+	lp = PageGetItemId(page, offnum);
 	Assert(ItemIdIsNormal(lp));
 
 	tp.t_tableOid = RelationGetRelid(relation);
@@ -3199,7 +3218,17 @@ l1:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tp.t_data->t_ctid;
+
+		/*
+		 * If we're at the end of the chain, return the tuple's own TID to the
+		 * caller; the caller uses a self-referencing TID as a hint that the
+		 * end of the chain has been reached.
+		 */
+		if (!HeapTupleHeaderIsHeapLatest(tp.t_data, &tp.t_self))
+			HeapTupleHeaderGetNextTid(tp.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&tp.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tp.t_data);
@@ -3248,6 +3277,22 @@ l1:
 							  xid, LockTupleExclusive, true,
 							  &new_xmax, &new_infomask, &new_infomask2);
 
+	/*
+	 * heap_get_root_tuple() may call palloc, which is disallowed once we
+	 * enter the critical section. So check whether the root offset is cached
+	 * in the tuple and, if not, fetch that information the hard way before
+	 * entering the critical section.
+	 *
+	 * Unless we are dealing with a pg_upgraded cluster, the root offset
+	 * information should usually be cached, so there should not be much
+	 * overhead in fetching it. Moreover, once a tuple is updated, the
+	 * information is copied to the new version, so we do not pay this price
+	 * forever.
+	 */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&tp.t_self));
+
 	START_CRIT_SECTION();
 
 	/*
@@ -3275,8 +3320,10 @@ l1:
 	HeapTupleHeaderClearHotUpdated(tp.t_data);
 	HeapTupleHeaderSetXmax(tp.t_data, new_xmax);
 	HeapTupleHeaderSetCmax(tp.t_data, cid, iscombo);
-	/* Make sure there is no forward chain link in t_ctid */
-	tp.t_data->t_ctid = tp.t_self;
+
+	/* Mark this tuple as the latest tuple in the update chain. */
+	if (!HeapTupleHeaderHasRootOffset(tp.t_data))
+		HeapTupleHeaderSetHeapLatest(tp.t_data, root_offnum);
 
 	MarkBufferDirty(buffer);
 
@@ -3477,6 +3524,8 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	bool		old_key_copied = false;
 	Page		page;
 	BlockNumber block;
+	OffsetNumber	offnum;
+	OffsetNumber	root_offnum;
 	MultiXactStatus mxact_status;
 	Buffer		buffer,
 				newbuf,
@@ -3536,6 +3585,7 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 
 
 	block = ItemPointerGetBlockNumber(otid);
+	offnum = ItemPointerGetOffsetNumber(otid);
 	buffer = ReadBuffer(relation, block);
 	page = BufferGetPage(buffer);
 
@@ -3839,7 +3889,12 @@ l2:
 			   result == HeapTupleUpdated ||
 			   result == HeapTupleBeingUpdated);
 		Assert(!(oldtup.t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = oldtup.t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(oldtup.t_data, &oldtup.t_self))
+			HeapTupleHeaderGetNextTid(oldtup.t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(&oldtup.t_self, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(oldtup.t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(oldtup.t_data);
@@ -3979,6 +4034,7 @@ l2:
 		uint16		infomask_lock_old_tuple,
 					infomask2_lock_old_tuple;
 		bool		cleared_all_frozen = false;
+		OffsetNumber	root_offnum;
 
 		/*
 		 * To prevent concurrent sessions from updating the tuple, we have to
@@ -4006,6 +4062,14 @@ l2:
 
 		Assert(HEAP_XMAX_IS_LOCKED_ONLY(infomask_lock_old_tuple));
 
+		/*
+		 * Fetch root offset before entering the critical section. We do this
+		 * Fetch the root offset before entering the critical section. We do this
+		 */
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = heap_get_root_tuple(page,
+					ItemPointerGetOffsetNumber(&oldtup.t_self));
+
 		START_CRIT_SECTION();
 
 		/* Clear obsolete visibility flags ... */
@@ -4020,7 +4084,8 @@ l2:
 		HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 		/* temporarily make it look not-updated, but locked */
-		oldtup.t_data->t_ctid = oldtup.t_self;
+		if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			HeapTupleHeaderSetHeapLatest(oldtup.t_data, root_offnum);
 
 		/*
 		 * Clear all-frozen bit on visibility map if needed. We could
@@ -4179,6 +4244,10 @@ l2:
 										   bms_overlap(modified_attrs, id_attrs),
 										   &old_key_copied);
 
+	if (!HeapTupleHeaderHasRootOffset(oldtup.t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&(oldtup.t_self)));
+
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
@@ -4204,6 +4273,17 @@ l2:
 		HeapTupleSetHeapOnly(heaptup);
 		/* Mark the caller's copy too, in case different from heaptup */
 		HeapTupleSetHeapOnly(newtup);
+		/*
+		 * For HOT (or WARM) updated tuples, we store the offset of the root
+		 * line pointer of this chain in the ip_posid field of the new
+		 * tuple's t_ctid. Usually this information will be available in the
+		 * corresponding field of the old tuple. But for aborted updates or
+		 * pg_upgraded databases, we might be looking at an old-style CTID
+		 * chain, in which case the information must be obtained the hard way
+		 * (we did that before entering the critical section above).
+		 */
+		if (HeapTupleHeaderHasRootOffset(oldtup.t_data))
+			root_offnum = HeapTupleHeaderGetRootOffset(oldtup.t_data);
 	}
 	else
 	{
@@ -4211,10 +4291,22 @@ l2:
 		HeapTupleClearHotUpdated(&oldtup);
 		HeapTupleClearHeapOnly(heaptup);
 		HeapTupleClearHeapOnly(newtup);
+		root_offnum = InvalidOffsetNumber;
 	}
 
-	RelationPutHeapTuple(relation, newbuf, heaptup, false);		/* insert new tuple */
-
+	/* insert new tuple */
+	root_offnum = RelationPutHeapTuple(relation, newbuf, heaptup, false,
+									   root_offnum);
+	/*
+	 * Also mark both copies as latest and set the root offset information.
+	 * For a HOT/WARM update, we just copy the information from the old
+	 * tuple, if available, or as computed above. For a regular update,
+	 * RelationPutHeapTuple will have returned the actual offset number at
+	 * which the new version was inserted, and we store that value since the
+	 * update started a new HOT chain.
+	 */
+	HeapTupleHeaderSetHeapLatest(heaptup->t_data, root_offnum);
+	HeapTupleHeaderSetHeapLatest(newtup->t_data, root_offnum);
 
 	/* Clear obsolete visibility flags, possibly set by ourselves above... */
 	oldtup.t_data->t_infomask &= ~(HEAP_XMAX_BITS | HEAP_MOVED);
@@ -4227,7 +4319,7 @@ l2:
 	HeapTupleHeaderSetCmax(oldtup.t_data, cid, iscombo);
 
 	/* record address of new tuple in t_ctid of old one */
-	oldtup.t_data->t_ctid = heaptup->t_self;
+	HeapTupleHeaderSetNextTid(oldtup.t_data, &(heaptup->t_self));
 
 	/* clear PD_ALL_VISIBLE flags, reset all visibilitymap bits */
 	if (PageIsAllVisible(BufferGetPage(buffer)))
@@ -4266,6 +4358,7 @@ l2:
 
 		recptr = log_heap_update(relation, buffer,
 								 newbuf, &oldtup, heaptup,
+								 root_offnum,
 								 old_key_tuple,
 								 all_visible_cleared,
 								 all_visible_cleared_new);
@@ -4546,7 +4639,8 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	ItemId		lp;
 	Page		page;
 	Buffer		vmbuffer = InvalidBuffer;
-	BlockNumber block;
+	BlockNumber	block;
+	OffsetNumber	offnum;
 	TransactionId xid,
 				xmax;
 	uint16		old_infomask,
@@ -4555,9 +4649,11 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	bool		first_time = true;
 	bool		have_tuple_lock = false;
 	bool		cleared_all_frozen = false;
+	OffsetNumber	root_offnum;
 
 	*buffer = ReadBuffer(relation, ItemPointerGetBlockNumber(tid));
 	block = ItemPointerGetBlockNumber(tid);
+	offnum = ItemPointerGetOffsetNumber(tid);
 
 	/*
 	 * Before locking the buffer, pin the visibility map page if it appears to
@@ -4577,6 +4673,7 @@ heap_lock_tuple(Relation relation, HeapTuple tuple,
 	tuple->t_data = (HeapTupleHeader) PageGetItem(page, lp);
 	tuple->t_len = ItemIdGetLength(lp);
 	tuple->t_tableOid = RelationGetRelid(relation);
+	tuple->t_self = *tid;
 
 l3:
 	result = HeapTupleSatisfiesUpdate(tuple, cid, *buffer);
@@ -4604,7 +4701,11 @@ l3:
 		xwait = HeapTupleHeaderGetRawXmax(tuple->t_data);
 		infomask = tuple->t_data->t_infomask;
 		infomask2 = tuple->t_data->t_infomask2;
-		ItemPointerCopy(&tuple->t_data->t_ctid, &t_ctid);
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &t_ctid);
+		else
+			ItemPointerCopy(tid, &t_ctid);
 
 		LockBuffer(*buffer, BUFFER_LOCK_UNLOCK);
 
@@ -5042,7 +5143,12 @@ failed:
 		Assert(result == HeapTupleSelfUpdated || result == HeapTupleUpdated ||
 			   result == HeapTupleWouldBlock);
 		Assert(!(tuple->t_data->t_infomask & HEAP_XMAX_INVALID));
-		hufd->ctid = tuple->t_data->t_ctid;
+
+		if (!HeapTupleHeaderIsHeapLatest(tuple->t_data, tid))
+			HeapTupleHeaderGetNextTid(tuple->t_data, &hufd->ctid);
+		else
+			ItemPointerCopy(tid, &hufd->ctid);
+
 		hufd->xmax = HeapTupleHeaderGetUpdateXid(tuple->t_data);
 		if (result == HeapTupleSelfUpdated)
 			hufd->cmax = HeapTupleHeaderGetCmax(tuple->t_data);
@@ -5090,6 +5196,10 @@ failed:
 							  GetCurrentTransactionId(), mode, false,
 							  &xid, &new_infomask, &new_infomask2);
 
+	if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+		root_offnum = heap_get_root_tuple(page,
+				ItemPointerGetOffsetNumber(&tuple->t_self));
+
 	START_CRIT_SECTION();
 
 	/*
@@ -5118,7 +5228,10 @@ failed:
 	 * the tuple as well.
 	 */
 	if (HEAP_XMAX_IS_LOCKED_ONLY(new_infomask))
-		tuple->t_data->t_ctid = *tid;
+	{
+		if (!HeapTupleHeaderHasRootOffset(tuple->t_data))
+			HeapTupleHeaderSetHeapLatest(tuple->t_data, root_offnum);
+	}
 
 	/* Clear only the all-frozen bit on visibility map if needed */
 	if (PageIsAllVisible(page) &&
@@ -5632,6 +5745,7 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 	bool		cleared_all_frozen = false;
 	Buffer		vmbuffer = InvalidBuffer;
 	BlockNumber block;
+	OffsetNumber offnum;
 
 	ItemPointerCopy(tid, &tupid);
 
@@ -5640,6 +5754,8 @@ heap_lock_updated_tuple_rec(Relation rel, ItemPointer tid, TransactionId xid,
 		new_infomask = 0;
 		new_xmax = InvalidTransactionId;
 		block = ItemPointerGetBlockNumber(&tupid);
+		offnum = ItemPointerGetOffsetNumber(&tupid);
+
 		ItemPointerCopy(&tupid, &(mytup.t_self));
 
 		if (!heap_fetch(rel, SnapshotAny, &mytup, &buf, false, NULL))
@@ -5869,7 +5985,7 @@ l4:
 
 		/* if we find the end of update chain, we're done. */
 		if (mytup.t_data->t_infomask & HEAP_XMAX_INVALID ||
-			ItemPointerEquals(&mytup.t_self, &mytup.t_data->t_ctid) ||
+			HeapTupleHeaderIsHeapLatest(mytup.t_data, &mytup.t_self) ||
 			HeapTupleHeaderIsOnlyLocked(mytup.t_data))
 		{
 			result = HeapTupleMayBeUpdated;
@@ -5878,7 +5994,7 @@ l4:
 
 		/* tail recursion */
 		priorXmax = HeapTupleHeaderGetUpdateXid(mytup.t_data);
-		ItemPointerCopy(&(mytup.t_data->t_ctid), &tupid);
+		HeapTupleHeaderGetNextTid(mytup.t_data, &tupid);
 		UnlockReleaseBuffer(buf);
 		if (vmbuffer != InvalidBuffer)
 			ReleaseBuffer(vmbuffer);
@@ -5995,7 +6111,7 @@ heap_finish_speculative(Relation relation, HeapTuple tuple)
 	 * Replace the speculative insertion token with a real t_ctid, pointing to
 	 * itself like it does on regular tuples.
 	 */
-	htup->t_ctid = tuple->t_self;
+	HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 	/* XLOG stuff */
 	if (RelationNeedsWAL(relation))
@@ -6121,8 +6237,7 @@ heap_abort_speculative(Relation relation, HeapTuple tuple)
 	HeapTupleHeaderSetXmin(tp.t_data, InvalidTransactionId);
 
 	/* Clear the speculative insertion token too */
-	tp.t_data->t_ctid = tp.t_self;
-
+	HeapTupleHeaderSetHeapLatest(tp.t_data, ItemPointerGetOffsetNumber(tid));
 	MarkBufferDirty(buffer);
 
 	/*
@@ -7470,6 +7585,7 @@ log_heap_visible(RelFileNode rnode, Buffer heap_buffer, Buffer vm_buffer,
 static XLogRecPtr
 log_heap_update(Relation reln, Buffer oldbuf,
 				Buffer newbuf, HeapTuple oldtup, HeapTuple newtup,
+				OffsetNumber root_offnum,
 				HeapTuple old_key_tuple,
 				bool all_visible_cleared, bool new_all_visible_cleared)
 {
@@ -7590,6 +7706,9 @@ log_heap_update(Relation reln, Buffer oldbuf,
 	xlrec.new_offnum = ItemPointerGetOffsetNumber(&newtup->t_self);
 	xlrec.new_xmax = HeapTupleHeaderGetRawXmax(newtup->t_data);
 
+	Assert(OffsetNumberIsValid(root_offnum));
+	xlrec.root_offnum = root_offnum;
+
 	bufflags = REGBUF_STANDARD;
 	if (init)
 		bufflags |= REGBUF_WILL_INIT;
@@ -8244,7 +8363,13 @@ heap_xlog_delete(XLogReaderState *record)
 			PageClearAllVisible(page);
 
 		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = target_tid;
+		if (!HeapTupleHeaderHasRootOffset(htup))
+		{
+			OffsetNumber	root_offnum;
+			root_offnum = heap_get_root_tuple(page, xlrec->offnum);
+			HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+		}
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8334,7 +8459,8 @@ heap_xlog_insert(XLogReaderState *record)
 		htup->t_hoff = xlhdr.t_hoff;
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
-		htup->t_ctid = target_tid;
+
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->offnum);
 
 		if (PageAddItem(page, (Item) htup, newlen, xlrec->offnum,
 						true, true) == InvalidOffsetNumber)
@@ -8469,8 +8595,8 @@ heap_xlog_multi_insert(XLogReaderState *record)
 			htup->t_hoff = xlhdr->t_hoff;
 			HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 			HeapTupleHeaderSetCmin(htup, FirstCommandId);
-			ItemPointerSetBlockNumber(&htup->t_ctid, blkno);
-			ItemPointerSetOffsetNumber(&htup->t_ctid, offnum);
+
+			HeapTupleHeaderSetHeapLatest(htup, offnum);
 
 			offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 			if (offnum == InvalidOffsetNumber)
@@ -8606,7 +8732,7 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmax(htup, xlrec->old_xmax);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
 		/* Set forward chain link in t_ctid */
-		htup->t_ctid = newtid;
+		HeapTupleHeaderSetNextTid(htup, &newtid);
 
 		/* Mark the page as a candidate for pruning */
 		PageSetPrunable(page, XLogRecGetXid(record));
@@ -8739,13 +8865,17 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		HeapTupleHeaderSetXmin(htup, XLogRecGetXid(record));
 		HeapTupleHeaderSetCmin(htup, FirstCommandId);
 		HeapTupleHeaderSetXmax(htup, xlrec->new_xmax);
-		/* Make sure there is no forward chain link in t_ctid */
-		htup->t_ctid = newtid;
 
 		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
 		if (offnum == InvalidOffsetNumber)
 			elog(PANIC, "failed to add tuple");
 
+		/*
+		 * Make sure the tuple is marked as the latest and the root offset
+		 * information is restored.
+		 */
+		HeapTupleHeaderSetHeapLatest(htup, xlrec->root_offnum);
+
 		if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED)
 			PageClearAllVisible(page);
 
@@ -8808,6 +8938,9 @@ heap_xlog_confirm(XLogReaderState *record)
 		 */
 		ItemPointerSet(&htup->t_ctid, BufferGetBlockNumber(buffer), offnum);
 
+		/* For newly inserted tuple, set root offset to itself. */
+		HeapTupleHeaderSetHeapLatest(htup, offnum);
+
 		PageSetLSN(page, lsn);
 		MarkBufferDirty(buffer);
 	}
@@ -8871,11 +9004,17 @@ heap_xlog_lock(XLogReaderState *record)
 		 */
 		if (HEAP_XMAX_IS_LOCKED_ONLY(htup->t_infomask))
 		{
+			ItemPointerData	target_tid;
+
+			ItemPointerSet(&target_tid, BufferGetBlockNumber(buffer), offnum);
 			HeapTupleHeaderClearHotUpdated(htup);
 			/* Make sure there is no forward chain link in t_ctid */
-			ItemPointerSet(&htup->t_ctid,
-						   BufferGetBlockNumber(buffer),
-						   offnum);
+			if (!HeapTupleHeaderHasRootOffset(htup))
+			{
+				OffsetNumber	root_offnum;
+				root_offnum = heap_get_root_tuple(page, offnum);
+				HeapTupleHeaderSetHeapLatest(htup, root_offnum);
+			}
 		}
 		HeapTupleHeaderSetXmax(htup, xlrec->locking_xid);
 		HeapTupleHeaderSetCmax(htup, FirstCommandId, false);
diff --git a/src/backend/access/heap/hio.c b/src/backend/access/heap/hio.c
index 6529fe3..8052519 100644
--- a/src/backend/access/heap/hio.c
+++ b/src/backend/access/heap/hio.c
@@ -31,12 +31,20 @@
  * !!! EREPORT(ERROR) IS DISALLOWED HERE !!!  Must PANIC on failure!!!
  *
  * Note - caller must hold BUFFER_LOCK_EXCLUSIVE on the buffer.
+ *
+ * The caller can optionally tell us to set the root offset to the given value.
+ * Otherwise, the root offset is set to the offset of the new location once it
+ * is known. The former is used when updating an existing tuple, where the
+ * caller tells us the root line pointer of the chain.  The latter is used
+ * when inserting a new row, in which case the root line pointer is set to the
+ * offset at which the tuple is inserted.
  */
-void
+OffsetNumber
 RelationPutHeapTuple(Relation relation,
 					 Buffer buffer,
 					 HeapTuple tuple,
-					 bool token)
+					 bool token,
+					 OffsetNumber root_offnum)
 {
 	Page		pageHeader;
 	OffsetNumber offnum;
@@ -60,17 +68,24 @@ RelationPutHeapTuple(Relation relation,
 	ItemPointerSet(&(tuple->t_self), BufferGetBlockNumber(buffer), offnum);
 
 	/*
-	 * Insert the correct position into CTID of the stored tuple, too (unless
-	 * this is a speculative insertion, in which case the token is held in
-	 * CTID field instead)
+	 * Set block number and the root offset into CTID of the stored tuple, too
+	 * (unless this is a speculative insertion, in which case the token is held
+	 * in CTID field instead).
 	 */
 	if (!token)
 	{
 		ItemId		itemId = PageGetItemId(pageHeader, offnum);
 		Item		item = PageGetItem(pageHeader, itemId);
 
+		/* Copy t_ctid to set the correct block number. */
 		((HeapTupleHeader) item)->t_ctid = tuple->t_self;
+
+		if (!OffsetNumberIsValid(root_offnum))
+			root_offnum = offnum;
+		HeapTupleHeaderSetHeapLatest((HeapTupleHeader) item, root_offnum);
 	}
+
+	return root_offnum;
 }
 
 /*
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index d69a266..f54337c 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -55,6 +55,8 @@ static void heap_prune_record_redirect(PruneState *prstate,
 static void heap_prune_record_dead(PruneState *prstate, OffsetNumber offnum);
 static void heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum);
 
+static void heap_get_root_tuples_internal(Page page,
+				OffsetNumber target_offnum, OffsetNumber *root_offsets);
 
 /*
  * Optionally prune and repair fragmentation in the specified page.
@@ -553,6 +555,17 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
 		if (!HeapTupleHeaderIsHotUpdated(htup))
 			break;
 
+
+		/*
+		 * If the tuple was HOT-updated and the update was later
+		 * aborted, someone could mark this tuple as the last tuple in
+		 * the chain, without clearing the HOT-updated flag. So we must
+		 * check whether this is the last tuple in the chain and stop
+		 * following the CTID, else we risk an infinite loop (though
+		 * prstate->marked[] currently protects against that).
+		 */
+		if (HeapTupleHeaderHasRootOffset(htup))
+			break;
 		/*
 		 * Advance to next chain member.
 		 */
@@ -726,27 +739,47 @@ heap_page_prune_execute(Buffer buffer,
 
 
 /*
- * For all items in this page, find their respective root line pointers.
- * If item k is part of a HOT-chain with root at item j, then we set
- * root_offsets[k - 1] = j.
+ * Either for all items in this page or for the given item, find their
+ * respective root line pointers.
+ *
+ * When target_offnum is a valid offset number, the caller is interested in
+ * just one item. In that case, the root line pointer is returned in
+ * root_offsets.
  *
- * The passed-in root_offsets array must have MaxHeapTuplesPerPage entries.
- * We zero out all unused entries.
+ * When target_offnum is InvalidOffsetNumber, the caller wants to know
+ * the root line pointers of all the items in this page. The root_offsets array
+ * must have MaxHeapTuplesPerPage entries in that case. If item k is part of a
+ * HOT-chain with root at item j, then we set root_offsets[k - 1] = j. We zero
+ * out all unused entries.
  *
  * The function must be called with at least share lock on the buffer, to
  * prevent concurrent prune operations.
  *
+ * This is not a cheap function since it must scan through all line pointers
+ * and tuples on the page in order to find the root line pointers. To minimize
+ * the cost, we break early when target_offnum is specified and its root line
+ * pointer has been found.
+ *
  * Note: The information collected here is valid only as long as the caller
  * holds a pin on the buffer. Once pin is released, a tuple might be pruned
  * and reused by a completely unrelated tuple.
+ *
+ * Note: This function must not be called inside a critical section because it
+ * internally calls HeapTupleHeaderGetUpdateXid, which somewhere down the
+ * stack may try to allocate heap memory. Memory allocation is disallowed in a
+ * critical section.
  */
-void
-heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+static void
+heap_get_root_tuples_internal(Page page, OffsetNumber target_offnum,
+		OffsetNumber *root_offsets)
 {
 	OffsetNumber offnum,
 				maxoff;
 
-	MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
+	if (OffsetNumberIsValid(target_offnum))
+		*root_offsets = InvalidOffsetNumber;
+	else
+		MemSet(root_offsets, 0, MaxHeapTuplesPerPage * sizeof(OffsetNumber));
 
 	maxoff = PageGetMaxOffsetNumber(page);
 	for (offnum = FirstOffsetNumber; offnum <= maxoff; offnum = OffsetNumberNext(offnum))
@@ -774,9 +807,28 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 
 			/*
 			 * This is either a plain tuple or the root of a HOT-chain.
-			 * Remember it in the mapping.
+			 *
+			 * If target_offnum is specified and we have found its mapping,
+			 * return.
 			 */
-			root_offsets[offnum - 1] = offnum;
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (target_offnum == offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember the mapping for any other item. The
+				 * root_offsets array may not even have space for them, so be
+				 * careful not to write past the array.
+				 */
+			}
+			else
+			{
+				/* Remember it in the mapping. */
+				root_offsets[offnum - 1] = offnum;
+			}
 
 			/* If it's not the start of a HOT-chain, we're done with it */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
@@ -817,15 +869,65 @@ heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
 				!TransactionIdEquals(priorXmax, HeapTupleHeaderGetXmin(htup)))
 				break;
 
-			/* Remember the root line pointer for this item */
-			root_offsets[nextoffnum - 1] = offnum;
+			/*
+			 * If target_offnum is specified and we found its mapping, return.
+			 */
+			if (OffsetNumberIsValid(target_offnum))
+			{
+				if (nextoffnum == target_offnum)
+				{
+					root_offsets[0] = offnum;
+					return;
+				}
+				/*
+				 * No need to remember the mapping for any other item. The
+				 * root_offsets array may not even have space for them, so be
+				 * careful not to write past the array.
+				 */
+			}
+			else
+			{
+				/* Remember the root line pointer for this item. */
+				root_offsets[nextoffnum - 1] = offnum;
+			}
 
 			/* Advance to next chain member, if any */
 			if (!HeapTupleHeaderIsHotUpdated(htup))
 				break;
 
+			/*
+			 * If the tuple was HOT-updated and the update was later aborted,
+			 * someone could mark this tuple as the last tuple in the chain
+			 * and store the root offset in its CTID, without clearing the
+			 * HOT-updated flag. So we must check whether the CTID actually
+			 * holds a root offset and break to avoid an infinite loop.
+			 */
+			if (HeapTupleHeaderHasRootOffset(htup))
+				break;
+
 			nextoffnum = ItemPointerGetOffsetNumber(&htup->t_ctid);
 			priorXmax = HeapTupleHeaderGetUpdateXid(htup);
 		}
 	}
 }
+
+/*
+ * Get root line pointer for the given tuple.
+ */
+OffsetNumber
+heap_get_root_tuple(Page page, OffsetNumber target_offnum)
+{
+	OffsetNumber offnum = InvalidOffsetNumber;
+	heap_get_root_tuples_internal(page, target_offnum, &offnum);
+	return offnum;
+}
+
+/*
+ * Get root line pointers for all tuples in the page
+ */
+void
+heap_get_root_tuples(Page page, OffsetNumber *root_offsets)
+{
+	heap_get_root_tuples_internal(page, InvalidOffsetNumber,
+								  root_offsets);
+}
diff --git a/src/backend/access/heap/rewriteheap.c b/src/backend/access/heap/rewriteheap.c
index d7f65a5..2d3ae9b 100644
--- a/src/backend/access/heap/rewriteheap.c
+++ b/src/backend/access/heap/rewriteheap.c
@@ -421,14 +421,18 @@ rewrite_heap_tuple(RewriteState state,
 	 */
 	if (!((old_tuple->t_data->t_infomask & HEAP_XMAX_INVALID) ||
 		  HeapTupleHeaderIsOnlyLocked(old_tuple->t_data)) &&
-		!(ItemPointerEquals(&(old_tuple->t_self),
-							&(old_tuple->t_data->t_ctid))))
+		!(HeapTupleHeaderIsHeapLatest(old_tuple->t_data, &old_tuple->t_self)))
 	{
 		OldToNewMapping mapping;
 
 		memset(&hashkey, 0, sizeof(hashkey));
 		hashkey.xmin = HeapTupleHeaderGetUpdateXid(old_tuple->t_data);
-		hashkey.tid = old_tuple->t_data->t_ctid;
+
+		/*
+		 * We've already checked that this is not the last tuple in the chain,
+		 * so fetch the next TID in the chain.
+		 */
+		HeapTupleHeaderGetNextTid(old_tuple->t_data, &hashkey.tid);
 
 		mapping = (OldToNewMapping)
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -441,7 +445,7 @@ rewrite_heap_tuple(RewriteState state,
 			 * set the ctid of this tuple to point to the new location, and
 			 * insert it right away.
 			 */
-			new_tuple->t_data->t_ctid = mapping->new_tid;
+			HeapTupleHeaderSetNextTid(new_tuple->t_data, &mapping->new_tid);
 
 			/* We don't need the mapping entry anymore */
 			hash_search(state->rs_old_new_tid_map, &hashkey,
@@ -527,7 +531,7 @@ rewrite_heap_tuple(RewriteState state,
 				new_tuple = unresolved->tuple;
 				free_new = true;
 				old_tid = unresolved->old_tid;
-				new_tuple->t_data->t_ctid = new_tid;
+				HeapTupleHeaderSetNextTid(new_tuple->t_data, &new_tid);
 
 				/*
 				 * We don't need the hash entry anymore, but don't free its
@@ -733,7 +737,12 @@ raw_heap_insert(RewriteState state, HeapTuple tup)
 		newitemid = PageGetItemId(page, newoff);
 		onpage_tup = (HeapTupleHeader) PageGetItem(page, newitemid);
 
-		onpage_tup->t_ctid = tup->t_self;
+		/*
+		 * Set t_ctid just to ensure the block number is copied correctly, but
+		 * then immediately mark the tuple as the latest.
+		 */
+		HeapTupleHeaderSetNextTid(onpage_tup, &tup->t_self);
+		HeapTupleHeaderSetHeapLatest(onpage_tup, newoff);
 	}
 
 	/* If heaptup is a private copy, release it. */
diff --git a/src/backend/executor/execIndexing.c b/src/backend/executor/execIndexing.c
index 108060a..c3f1873 100644
--- a/src/backend/executor/execIndexing.c
+++ b/src/backend/executor/execIndexing.c
@@ -785,7 +785,10 @@ retry:
 			  DirtySnapshot.speculativeToken &&
 			  TransactionIdPrecedes(GetCurrentTransactionId(), xwait))))
 		{
-			ctid_wait = tup->t_data->t_ctid;
+			if (!HeapTupleHeaderIsHeapLatest(tup->t_data, &tup->t_self))
+				HeapTupleHeaderGetNextTid(tup->t_data, &ctid_wait);
+			else
+				ItemPointerCopy(&tup->t_self, &ctid_wait);
 			reason_wait = indexInfo->ii_ExclusionOps ?
 				XLTW_RecheckExclusionConstr : XLTW_InsertIndex;
 			index_endscan(index_scan);
diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c
index 920b120..02f3f32 100644
--- a/src/backend/executor/execMain.c
+++ b/src/backend/executor/execMain.c
@@ -2628,7 +2628,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		 * As above, it should be safe to examine xmax and t_ctid without the
 		 * buffer content lock, because they can't be changing.
 		 */
-		if (ItemPointerEquals(&tuple.t_self, &tuple.t_data->t_ctid))
+		if (HeapTupleHeaderIsHeapLatest(tuple.t_data, &tuple.t_self))
 		{
 			/* deleted, so forget about it */
 			ReleaseBuffer(buffer);
@@ -2636,7 +2636,7 @@ EvalPlanQualFetch(EState *estate, Relation relation, int lockmode,
 		}
 
 		/* updated, so look at the updated row */
-		tuple.t_self = tuple.t_data->t_ctid;
+		HeapTupleHeaderGetNextTid(tuple.t_data, &tuple.t_self);
 		/* updated row should have xmin matching this xmax */
 		priorXmax = HeapTupleHeaderGetUpdateXid(tuple.t_data);
 		ReleaseBuffer(buffer);
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 7e85510..5540e12 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -190,6 +190,7 @@ extern void heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);
+extern OffsetNumber heap_get_root_tuple(Page page, OffsetNumber target_offnum);
 extern void heap_get_root_tuples(Page page, OffsetNumber *root_offsets);
 
 /* in heap/syncscan.c */
diff --git a/src/include/access/heapam_xlog.h b/src/include/access/heapam_xlog.h
index b285f17..e6019d5 100644
--- a/src/include/access/heapam_xlog.h
+++ b/src/include/access/heapam_xlog.h
@@ -193,6 +193,8 @@ typedef struct xl_heap_update
 	uint8		flags;
 	TransactionId new_xmax;		/* xmax of the new tuple */
 	OffsetNumber new_offnum;	/* new tuple's offset */
+	OffsetNumber root_offnum;	/* offset of the root line pointer in case of
+								   HOT or WARM update */
 
 	/*
 	 * If XLOG_HEAP_CONTAINS_OLD_TUPLE or XLOG_HEAP_CONTAINS_OLD_KEY flags are
@@ -200,7 +202,7 @@ typedef struct xl_heap_update
 	 */
 } xl_heap_update;
 
-#define SizeOfHeapUpdate	(offsetof(xl_heap_update, new_offnum) + sizeof(OffsetNumber))
+#define SizeOfHeapUpdate	(offsetof(xl_heap_update, root_offnum) + sizeof(OffsetNumber))
 
 /*
  * This is what we need to know about vacuum page cleanup/redirect
diff --git a/src/include/access/hio.h b/src/include/access/hio.h
index 2824f23..921cb37 100644
--- a/src/include/access/hio.h
+++ b/src/include/access/hio.h
@@ -35,8 +35,8 @@ typedef struct BulkInsertStateData
 }	BulkInsertStateData;
 
 
-extern void RelationPutHeapTuple(Relation relation, Buffer buffer,
-					 HeapTuple tuple, bool token);
+extern OffsetNumber RelationPutHeapTuple(Relation relation, Buffer buffer,
+					 HeapTuple tuple, bool token, OffsetNumber root_offnum);
 extern Buffer RelationGetBufferForTuple(Relation relation, Size len,
 						  Buffer otherBuffer, int options,
 						  BulkInsertState bistate,
diff --git a/src/include/access/htup_details.h b/src/include/access/htup_details.h
index 7b6285d..24433c7 100644
--- a/src/include/access/htup_details.h
+++ b/src/include/access/htup_details.h
@@ -260,13 +260,19 @@ struct HeapTupleHeaderData
  * information stored in t_infomask2:
  */
 #define HEAP_NATTS_MASK			0x07FF	/* 11 bits for number of attributes */
-/* bits 0x1800 are available */
+/* bit 0x0800 is available */
+#define HEAP_LATEST_TUPLE		0x1000	/*
+										 * This is the last tuple in chain and
+										 * ip_posid points to the root line
+										 * pointer
+										 */
 #define HEAP_KEYS_UPDATED		0x2000	/* tuple was updated and key cols
 										 * modified, or tuple deleted */
 #define HEAP_HOT_UPDATED		0x4000	/* tuple was HOT-updated */
 #define HEAP_ONLY_TUPLE			0x8000	/* this is heap-only tuple */
 
-#define HEAP2_XACT_MASK			0xE000	/* visibility-related bits */
+#define HEAP2_XACT_MASK			0xF000	/* visibility-related bits */
+
 
 /*
  * HEAP_TUPLE_HAS_MATCH is a temporary flag used during hash joins.  It is
@@ -504,6 +510,43 @@ do { \
   ((tup)->t_infomask2 & HEAP_ONLY_TUPLE) != 0 \
 )
 
+/*
+ * Mark this as the last tuple in the HOT chain. Before PG v10 we used to
+ * store the TID of the tuple itself in the t_ctid field to mark the end of
+ * the chain. Starting with PG v10, we use a special flag, HEAP_LATEST_TUPLE,
+ * to identify the last tuple and store the root line pointer of the HOT
+ * chain in the t_ctid field instead.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderSetHeapLatest(tup, offnum) \
+do { \
+	AssertMacro(OffsetNumberIsValid(offnum)); \
+	(tup)->t_infomask2 |= HEAP_LATEST_TUPLE; \
+	ItemPointerSetOffsetNumber(&(tup)->t_ctid, (offnum)); \
+} while (0)
+
+#define HeapTupleHeaderClearHeapLatest(tup) \
+( \
+	(tup)->t_infomask2 &= ~HEAP_LATEST_TUPLE \
+)
+
+/*
+ * Starting from PostgreSQL 10, the latest tuple in an update chain has
+ * HEAP_LATEST_TUPLE set, but tuples upgraded from earlier versions do not.
+ * For those, we determine whether a tuple is latest by checking whether its
+ * t_ctid points to itself.
+ *
+ * Note: beware of multiple evaluations of "tup" and "tid" arguments.
+ */
+#define HeapTupleHeaderIsHeapLatest(tup, tid) \
+( \
+  (((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0) || \
+  ((ItemPointerGetBlockNumber(&(tup)->t_ctid) == ItemPointerGetBlockNumber(tid)) && \
+   (ItemPointerGetOffsetNumber(&(tup)->t_ctid) == ItemPointerGetOffsetNumber(tid))) \
+)
+
+
 #define HeapTupleHeaderSetHeapOnly(tup) \
 ( \
   (tup)->t_infomask2 |= HEAP_ONLY_TUPLE \
@@ -542,6 +585,56 @@ do { \
 
 
 /*
+ * Set the t_ctid chain and also clear the HEAP_LATEST_TUPLE flag since we
+ * now have a new tuple in the chain and this is no longer the last tuple of
+ * the chain.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderSetNextTid(tup, tid) \
+do { \
+		ItemPointerCopy((tid), &((tup)->t_ctid)); \
+		HeapTupleHeaderClearHeapLatest((tup)); \
+} while (0)
+
+/*
+ * Get TID of next tuple in the update chain. Caller must have checked that
+ * we are not already at the end of the chain because in that case t_ctid may
+ * actually store the root line pointer of the HOT chain.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderGetNextTid(tup, next_ctid) \
+do { \
+	AssertMacro(!((tup)->t_infomask2 & HEAP_LATEST_TUPLE)); \
+	ItemPointerCopy(&(tup)->t_ctid, (next_ctid)); \
+} while (0)
+
+/*
+ * Get the root line pointer of the HOT chain. The caller should have confirmed
+ * that the root offset is cached before calling this macro.
+ *
+ * Note: beware of multiple evaluations of "tup" argument.
+ */
+#define HeapTupleHeaderGetRootOffset(tup) \
+( \
+	AssertMacro(((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0), \
+	ItemPointerGetOffsetNumber(&(tup)->t_ctid) \
+)
+
+/*
+ * Return whether the tuple has a cached root offset.  We don't use
+ * HeapTupleHeaderIsHeapLatest because that one also considers the case of
+ * t_ctid pointing to itself, for tuples migrated from pre-v10 clusters. Here
+ * we are only interested in tuples that are marked with the
+ * HEAP_LATEST_TUPLE flag.
+ */
+#define HeapTupleHeaderHasRootOffset(tup) \
+( \
+	((tup)->t_infomask2 & HEAP_LATEST_TUPLE) != 0 \
+)
+
+/*
  * BITMAPLEN(NATTS) -
  *		Computes size of null bitmap given number of data columns.
  */
-- 
2.9.3 (Apple Git-75)
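
For readers following along, the t_ctid dual-use convention introduced by the htup_details.h macros above can be demonstrated in isolation. Below is a minimal standalone C sketch, using hypothetical simplified types (ItemPtr, TupHdr, not the real HeapTupleHeaderData), of how the HEAP_LATEST_TUPLE flag disambiguates t_ctid as either a next-tuple link or a cached root line pointer offset:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical, simplified stand-ins for the real heap tuple header. */
#define HEAP_LATEST_TUPLE 0x1000

typedef struct { uint32_t block; uint16_t offnum; } ItemPtr;
typedef struct { uint16_t infomask2; ItemPtr ctid; } TupHdr;

/* Like HeapTupleHeaderSetHeapLatest: mark the tuple as last in its chain
 * and cache the root line pointer's offset in ctid's offset field. */
static void set_heap_latest(TupHdr *tup, uint16_t root_offnum)
{
	tup->infomask2 |= HEAP_LATEST_TUPLE;
	tup->ctid.offnum = root_offnum;
}

/* Like HeapTupleHeaderSetNextTid: link to the successor tuple; the flag
 * must be cleared because ctid no longer holds a root offset. */
static void set_next_tid(TupHdr *tup, ItemPtr next)
{
	tup->ctid = next;
	tup->infomask2 &= (uint16_t) ~HEAP_LATEST_TUPLE;
}

/* Like HeapTupleHeaderIsHeapLatest: flagged tuples are latest; otherwise
 * fall back to the pre-v10 convention of ctid pointing at the tuple. */
static bool is_heap_latest(const TupHdr *tup, ItemPtr self)
{
	if (tup->infomask2 & HEAP_LATEST_TUPLE)
		return true;
	return tup->ctid.block == self.block && tup->ctid.offnum == self.offnum;
}

/* Like HeapTupleHeaderGetRootOffset: valid only when the flag is set. */
static uint16_t get_root_offset(const TupHdr *tup)
{
	assert(tup->infomask2 & HEAP_LATEST_TUPLE);
	return tup->ctid.offnum;
}
```

The fallback branch in is_heap_latest is the pg_upgrade compatibility path: a tuple without the flag whose ctid points at itself is still treated as the end of its chain.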

Attachment: 0004-Provide-control-knobs-to-decide-when-to-do-heap-_v26.patch (application/octet-stream)
From 51843cbf0ca6459c4dc9f1f0936f88a038fadf20 Mon Sep 17 00:00:00 2001
From: Pavan Deolasee <pavan.deolasee@gmail.com>
Date: Wed, 29 Mar 2017 11:16:29 +0530
Subject: [PATCH 4/4] Provide control knobs to decide when to do heap and index
 WARM cleanup.

We provide two knobs to control WARM maintenance activity. A GUC,
autovacuum_warmcleanup_scale_factor, can be set to trigger WARM cleanup.
Similarly, a GUC, autovacuum_warmcleanup_index_scale_factor, can be set to
determine when to do index cleanup. Note that in the current design VACUUM
needs two index scans to remove a WARM index pointer, so we want to do that
work only when it makes sense (i.e. the index has a significant number of
WARM pointers).

The VACUUM command is also enhanced to accept another parameter, WARMCLEAN;
WARM cleanup is carried out only when it is specified.
---
 src/backend/access/common/reloptions.c |  22 +++
 src/backend/catalog/system_views.sql   |   1 +
 src/backend/commands/analyze.c         |  60 +++++--
 src/backend/commands/vacuum.c          |   2 +
 src/backend/commands/vacuumlazy.c      | 320 +++++++++++++++++++++++++--------
 src/backend/parser/gram.y              |  26 ++-
 src/backend/postmaster/autovacuum.c    |  58 +++++-
 src/backend/postmaster/pgstat.c        |  50 +++++-
 src/backend/utils/adt/pgstatfuncs.c    |  15 ++
 src/backend/utils/init/globals.c       |   3 +
 src/backend/utils/misc/guc.c           |  30 ++++
 src/include/catalog/pg_proc.h          |   2 +
 src/include/commands/vacuum.h          |   2 +
 src/include/foreign/fdwapi.h           |   3 +-
 src/include/miscadmin.h                |   1 +
 src/include/nodes/parsenodes.h         |   3 +-
 src/include/parser/kwlist.h            |   1 +
 src/include/pgstat.h                   |  11 +-
 src/include/postmaster/autovacuum.h    |   2 +
 src/include/utils/guc_tables.h         |   1 +
 src/include/utils/rel.h                |   2 +
 src/test/regress/expected/rules.out    |   3 +
 src/test/regress/expected/warm.out     |  59 ++++++
 src/test/regress/sql/warm.sql          |  47 +++++
 24 files changed, 614 insertions(+), 110 deletions(-)
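
One structural change in the vacuumlazy.c diff below is that the two fixed-size arrays are replaced by a single work area that the dead-tuple TIDs and the candidate WARM chains share, filling it from opposite ends until the gap between the two segments becomes too small. A minimal standalone C sketch of that allocation scheme, with toy stand-in types (DeadTid, WarmChain) in place of the real ItemPointerData and LVWarmChain:

```c
#include <stdbool.h>
#include <stdlib.h>

/* Toy element types standing in for ItemPointerData and LVWarmChain. */
typedef struct { int blk, off; } DeadTid;
typedef struct { int root_blk, root_off; } WarmChain;

typedef struct {
	char	   *work_area;
	size_t		work_area_size;
	DeadTid	   *dead_tuples;	/* fills from the start of the area */
	WarmChain  *warm_chains;	/* one past the end; entries fill backwards */
	int			num_dead_tuples;
	int			num_warm_chains;
} WorkArea;

static void work_area_init(WorkArea *wa, size_t size)
{
	wa->work_area = calloc(1, size);
	wa->work_area_size = size;
	wa->dead_tuples = (DeadTid *) wa->work_area;
	wa->warm_chains = (WarmChain *) (wa->work_area + size);
	wa->num_dead_tuples = wa->num_warm_chains = 0;
}

/* Bytes remaining between the two growing segments. */
static size_t work_area_free(const WorkArea *wa)
{
	char	   *end_deads = (char *) (wa->dead_tuples + wa->num_dead_tuples);
	char	   *end_warms = (char *) (wa->warm_chains - wa->num_warm_chains);

	return (size_t) (end_warms - end_deads);
}

static bool record_dead(WorkArea *wa, DeadTid t)
{
	if (work_area_free(wa) < sizeof(DeadTid))
		return false;			/* forget it; we'll get it next round */
	wa->dead_tuples[wa->num_dead_tuples++] = t;
	return true;
}

static bool record_warm_chain(WorkArea *wa, WarmChain c)
{
	if (work_area_free(wa) < sizeof(WarmChain))
		return false;
	/* entry i lives at warm_chains - (i + 1) */
	wa->warm_chains[-(wa->num_warm_chains + 1)] = c;
	wa->num_warm_chains++;
	return true;
}
```

This mirrors the patch's indexing convention where chain i is read back as `warm_chains - (chainindex + 1)`, and when WARM cleanup is disabled the whole area is simply available to dead tuples.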

diff --git a/src/backend/access/common/reloptions.c b/src/backend/access/common/reloptions.c
index ce7d4da..c8c7bba 100644
--- a/src/backend/access/common/reloptions.c
+++ b/src/backend/access/common/reloptions.c
@@ -361,6 +361,24 @@ static relopt_real realRelOpts[] =
 	},
 	{
 		{
+			"autovacuum_warmcleanup_scale_factor",
+			"Number of WARM chains prior to WARM cleanup as a fraction of reltuples",
+			RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
+			ShareUpdateExclusiveLock
+		},
+		-1, 0.0, 100.0
+	},
+	{
+		{
+			"autovacuum_warmcleanup_index_scale_factor",
+			"Number of WARM pointers in an index prior to WARM cleanup as a fraction of total WARM chains",
+			RELOPT_KIND_HEAP | RELOPT_KIND_TOAST,
+			ShareUpdateExclusiveLock
+		},
+		-1, 0.0, 100.0
+	},
+	{
+		{
 			"autovacuum_analyze_scale_factor",
 			"Number of tuple inserts, updates or deletes prior to analyze as a fraction of reltuples",
 			RELOPT_KIND_HEAP,
@@ -1362,6 +1380,10 @@ default_reloptions(Datum reloptions, bool validate, relopt_kind kind)
 		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, vacuum_scale_factor)},
 		{"autovacuum_analyze_scale_factor", RELOPT_TYPE_REAL,
 		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, analyze_scale_factor)},
+		{"autovacuum_warmcleanup_scale_factor", RELOPT_TYPE_REAL,
+		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, warmcleanup_scale_factor)},
+		{"autovacuum_warmcleanup_index_scale_factor", RELOPT_TYPE_REAL,
+		offsetof(StdRdOptions, autovacuum) +offsetof(AutoVacOpts, warmcleanup_index_scale)},
 		{"user_catalog_table", RELOPT_TYPE_BOOL,
 		offsetof(StdRdOptions, user_catalog_table)},
 		{"parallel_workers", RELOPT_TYPE_INT,
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 4ef964f..363fdf0 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -533,6 +533,7 @@ CREATE VIEW pg_stat_all_tables AS
             pg_stat_get_tuples_warm_updated(C.oid) AS n_tup_warm_upd,
             pg_stat_get_live_tuples(C.oid) AS n_live_tup,
             pg_stat_get_dead_tuples(C.oid) AS n_dead_tup,
+            pg_stat_get_warm_chains(C.oid) AS n_warm_chains,
             pg_stat_get_mod_since_analyze(C.oid) AS n_mod_since_analyze,
             pg_stat_get_last_vacuum_time(C.oid) as last_vacuum,
             pg_stat_get_last_autovacuum_time(C.oid) as last_autovacuum,
diff --git a/src/backend/commands/analyze.c b/src/backend/commands/analyze.c
index 404acb2..6c4fc4e 100644
--- a/src/backend/commands/analyze.c
+++ b/src/backend/commands/analyze.c
@@ -93,7 +93,8 @@ static VacAttrStats *examine_attribute(Relation onerel, int attnum,
 				  Node *index_expr);
 static int acquire_sample_rows(Relation onerel, int elevel,
 					HeapTuple *rows, int targrows,
-					double *totalrows, double *totaldeadrows);
+					double *totalrows, double *totaldeadrows,
+					double *totalwarmchains);
 static int	compare_rows(const void *a, const void *b);
 static int acquire_inherited_sample_rows(Relation onerel, int elevel,
 							  HeapTuple *rows, int targrows,
@@ -320,7 +321,8 @@ do_analyze_rel(Relation onerel, int options, VacuumParams *params,
 	int			targrows,
 				numrows;
 	double		totalrows,
-				totaldeadrows;
+				totaldeadrows,
+				totalwarmchains;
 	HeapTuple  *rows;
 	PGRUsage	ru0;
 	TimestampTz starttime = 0;
@@ -501,7 +503,8 @@ do_analyze_rel(Relation onerel, int options, VacuumParams *params,
 	else
 		numrows = (*acquirefunc) (onerel, elevel,
 								  rows, targrows,
-								  &totalrows, &totaldeadrows);
+								  &totalrows, &totaldeadrows,
+								  &totalwarmchains);
 
 	/*
 	 * Compute the statistics.  Temporary results during the calculations for
@@ -631,7 +634,7 @@ do_analyze_rel(Relation onerel, int options, VacuumParams *params,
 	 */
 	if (!inh)
 		pgstat_report_analyze(onerel, totalrows, totaldeadrows,
-							  (va_cols == NIL));
+							  totalwarmchains, (va_cols == NIL));
 
 	/* If this isn't part of VACUUM ANALYZE, let index AMs do cleanup */
 	if (!(options & VACOPT_VACUUM))
@@ -991,12 +994,14 @@ examine_attribute(Relation onerel, int attnum, Node *index_expr)
 static int
 acquire_sample_rows(Relation onerel, int elevel,
 					HeapTuple *rows, int targrows,
-					double *totalrows, double *totaldeadrows)
+					double *totalrows, double *totaldeadrows,
+					double *totalwarmchains)
 {
 	int			numrows = 0;	/* # rows now in reservoir */
 	double		samplerows = 0; /* total # rows collected */
 	double		liverows = 0;	/* # live rows seen */
 	double		deadrows = 0;	/* # dead rows seen */
+	double		warmchains = 0;
 	double		rowstoskip = -1;	/* -1 means not set yet */
 	BlockNumber totalblocks;
 	TransactionId OldestXmin;
@@ -1023,9 +1028,14 @@ acquire_sample_rows(Relation onerel, int elevel,
 		Page		targpage;
 		OffsetNumber targoffset,
 					maxoffset;
+		bool		marked[MaxHeapTuplesPerPage];
+		OffsetNumber root_offsets[MaxHeapTuplesPerPage];
 
 		vacuum_delay_point();
 
+		/* Track which root line pointers are already counted. */
+		memset(marked, 0, sizeof (marked));
+
 		/*
 		 * We must maintain a pin on the target page's buffer to ensure that
 		 * the maxoffset value stays good (else concurrent VACUUM might delete
@@ -1041,6 +1051,9 @@ acquire_sample_rows(Relation onerel, int elevel,
 		targpage = BufferGetPage(targbuffer);
 		maxoffset = PageGetMaxOffsetNumber(targpage);
 
+		/* Get all root line pointers first */
+		heap_get_root_tuples(targpage, root_offsets);
+
 		/* Inner loop over all tuples on the selected page */
 		for (targoffset = FirstOffsetNumber; targoffset <= maxoffset; targoffset++)
 		{
@@ -1069,6 +1082,22 @@ acquire_sample_rows(Relation onerel, int elevel,
 			targtuple.t_data = (HeapTupleHeader) PageGetItem(targpage, itemid);
 			targtuple.t_len = ItemIdGetLength(itemid);
 
+			/*
+			 * If this is a WARM-updated tuple, check if we have already seen
+			 * the root line pointer. If not, count this as a WARM chain. This
+			 * ensures that we count every WARM chain just once, irrespective
+			 * of how many tuples exist in the chain.
+			 */
+			if (HeapTupleHeaderIsWarmUpdated(targtuple.t_data))
+			{
+				OffsetNumber root_offnum = root_offsets[targoffset];
+				if (!marked[root_offnum])
+				{
+					warmchains += 1;
+					marked[root_offnum] = true;
+				}
+			}
+
 			switch (HeapTupleSatisfiesVacuum(&targtuple,
 											 OldestXmin,
 											 targbuffer))
@@ -1200,18 +1229,24 @@ acquire_sample_rows(Relation onerel, int elevel,
 
 	/*
 	 * Estimate total numbers of rows in relation.  For live rows, use
-	 * vac_estimate_reltuples; for dead rows, we have no source of old
-	 * information, so we have to assume the density is the same in unseen
-	 * pages as in the pages we scanned.
+	 * vac_estimate_reltuples; for dead rows and WARM chains, we have no source
+	 * of old information, so we have to assume the density is the same in
+	 * unseen pages as in the pages we scanned.
 	 */
 	*totalrows = vac_estimate_reltuples(onerel, true,
 										totalblocks,
 										bs.m,
 										liverows);
 	if (bs.m > 0)
+	{
 		*totaldeadrows = floor((deadrows / bs.m) * totalblocks + 0.5);
+		*totalwarmchains = floor((warmchains / bs.m) * totalblocks + 0.5);
+	}
 	else
+	{
 		*totaldeadrows = 0.0;
+		*totalwarmchains = 0.0;
+	}
 
 	/*
 	 * Emit some interesting relation info
@@ -1219,11 +1254,13 @@ acquire_sample_rows(Relation onerel, int elevel,
 	ereport(elevel,
 			(errmsg("\"%s\": scanned %d of %u pages, "
 					"containing %.0f live rows and %.0f dead rows; "
-					"%d rows in sample, %.0f estimated total rows",
+					"%d rows in sample, %.0f estimated total rows; "
+					"%.0f warm chains",
 					RelationGetRelationName(onerel),
 					bs.m, totalblocks,
 					liverows, deadrows,
-					numrows, *totalrows)));
+					numrows, *totalrows,
+					*totalwarmchains)));
 
 	return numrows;
 }
@@ -1428,11 +1465,12 @@ acquire_inherited_sample_rows(Relation onerel, int elevel,
 				int			childrows;
 				double		trows,
 							tdrows;
+				double		twarmchains;
 
 				/* Fetch a random sample of the child's rows */
 				childrows = (*acquirefunc) (childrel, elevel,
 											rows + numrows, childtargrows,
-											&trows, &tdrows);
+											&trows, &tdrows, &twarmchains);
 
 				/* We may need to convert from child's rowtype to parent's */
 				if (childrows > 0 &&
diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c
index 9fbb0eb..52a7838 100644
--- a/src/backend/commands/vacuum.c
+++ b/src/backend/commands/vacuum.c
@@ -103,6 +103,7 @@ ExecVacuum(VacuumStmt *vacstmt, bool isTopLevel)
 		params.freeze_table_age = 0;
 		params.multixact_freeze_min_age = 0;
 		params.multixact_freeze_table_age = 0;
+		params.warmcleanup_index_scale = -1;
 	}
 	else
 	{
@@ -110,6 +111,7 @@ ExecVacuum(VacuumStmt *vacstmt, bool isTopLevel)
 		params.freeze_table_age = -1;
 		params.multixact_freeze_min_age = -1;
 		params.multixact_freeze_table_age = -1;
+		params.warmcleanup_index_scale = -1;
 	}
 
 	/* user-invoked vacuum is never "for wraparound" */
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
index c2d5705..6cf942a 100644
--- a/src/backend/commands/vacuumlazy.c
+++ b/src/backend/commands/vacuumlazy.c
@@ -156,18 +156,23 @@ typedef struct LVRelStats
 	double		tuples_deleted;
 	BlockNumber nonempty_pages; /* actually, last nonempty page + 1 */
 
+	int			maxtuples;		/* maxtuples computed while allocating space */
+	Size		work_area_size;	/* working area size */
+	char		*work_area;		/* working area for storing dead tuples and
+								 * warm chains */
 	/* List of candidate WARM chains that can be converted into HOT chains */
-	/* NB: this list is ordered by TID of the root pointers */
+	/*
+	 * NB: this list grows from bottom to top and is ordered by TID of the
+	 * root pointers, with the lowest entry at the bottom
+	 */
 	int				num_warm_chains;	/* current # of entries */
-	int				max_warm_chains;	/* # slots allocated in array */
 	LVWarmChain 	*warm_chains;		/* array of LVWarmChain */
 	double			num_non_convertible_warm_chains;
-
 	/* List of TIDs of tuples we intend to delete */
 	/* NB: this list is ordered by TID address */
 	int			num_dead_tuples;	/* current # of entries */
-	int			max_dead_tuples;	/* # slots allocated in array */
 	ItemPointer dead_tuples;	/* array of ItemPointerData */
+
 	int			num_index_scans;
 	TransactionId latestRemovedXid;
 	bool		lock_waiter_detected;
@@ -187,11 +192,12 @@ static BufferAccessStrategy vac_strategy;
 /* non-export function prototypes */
 static void lazy_scan_heap(Relation onerel, int options,
 			   LVRelStats *vacrelstats, Relation *Irel, int nindexes,
-			   bool aggressive);
+			   bool aggressive, double warmcleanup_index_scale);
 static void lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats);
 static bool lazy_check_needs_freeze(Buffer buf, bool *hastup);
 static void lazy_vacuum_index(Relation indrel,
 				  bool clear_warm,
+				  double warmcleanup_index_scale,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats);
 static void lazy_cleanup_index(Relation indrel,
@@ -207,7 +213,8 @@ static bool should_attempt_truncation(LVRelStats *vacrelstats);
 static void lazy_truncate_heap(Relation onerel, LVRelStats *vacrelstats);
 static BlockNumber count_nondeletable_pages(Relation onerel,
 						 LVRelStats *vacrelstats);
-static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks);
+static void lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks,
+					   bool dowarmcleanup);
 static void lazy_record_dead_tuple(LVRelStats *vacrelstats,
 					   ItemPointer itemptr);
 static void lazy_record_warm_chain(LVRelStats *vacrelstats,
@@ -283,6 +290,9 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 						  &OldestXmin, &FreezeLimit, &xidFullScanLimit,
 						  &MultiXactCutoff, &mxactFullScanLimit);
 
+	/* Use default if the caller hasn't specified any value */
+	if (params->warmcleanup_index_scale == -1)
+		params->warmcleanup_index_scale = VacuumWarmCleanupIndexScale;
 	/*
 	 * We request an aggressive scan if the table's frozen Xid is now older
 	 * than or equal to the requested Xid full-table scan limit; or if the
@@ -309,7 +319,8 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 	vacrelstats->hasindex = (nindexes > 0);
 
 	/* Do the vacuuming */
-	lazy_scan_heap(onerel, options, vacrelstats, Irel, nindexes, aggressive);
+	lazy_scan_heap(onerel, options, vacrelstats, Irel, nindexes, aggressive,
+			params->warmcleanup_index_scale);
 
 	/* Done with indexes */
 	vac_close_indexes(nindexes, Irel, NoLock);
@@ -396,7 +407,8 @@ lazy_vacuum_rel(Relation onerel, int options, VacuumParams *params,
 	pgstat_report_vacuum(RelationGetRelid(onerel),
 						 onerel->rd_rel->relisshared,
 						 new_live_tuples,
-						 vacrelstats->new_dead_tuples);
+						 vacrelstats->new_dead_tuples,
+						 vacrelstats->num_non_convertible_warm_chains);
 	pgstat_progress_end_command();
 
 	/* and log the action if appropriate */
@@ -507,10 +519,19 @@ vacuum_log_cleanup_info(Relation rel, LVRelStats *vacrelstats)
  *		If there are no indexes then we can reclaim line pointers on the fly;
  *		dead line pointers need only be retained until all index pointers that
  *		reference them have been killed.
+ *
+ *		warmcleanup_index_scale specifies a threshold on the number of WARM
+ *		pointers in an index, as a fraction of the total candidate WARM
+ *		chains. If we find fewer WARM pointers in an index than the specified
+ *		fraction, we don't invoke cleanup on that index. If WARM cleanup is
+ *		skipped for any one index, the WARM chains can't be cleared in the
+ *		heap and no further WARM updates are possible to such chains. Such
+ *		chains are also not considered for WARM cleanup in other indexes.
  */
 static void
 lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
-			   Relation *Irel, int nindexes, bool aggressive)
+			   Relation *Irel, int nindexes, bool aggressive,
+			   double warmcleanup_index_scale)
 {
 	BlockNumber nblocks,
 				blkno;
@@ -536,6 +557,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		PROGRESS_VACUUM_MAX_DEAD_TUPLES
 	};
 	int64		initprog_val[3];
+	bool		dowarmcleanup = ((options & VACOPT_WARM_CLEANUP) != 0);
 
 	pg_rusage_init(&ru0);
 
@@ -558,13 +580,13 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 	vacrelstats->nonempty_pages = 0;
 	vacrelstats->latestRemovedXid = InvalidTransactionId;
 
-	lazy_space_alloc(vacrelstats, nblocks);
+	lazy_space_alloc(vacrelstats, nblocks, dowarmcleanup);
 	frozen = palloc(sizeof(xl_heap_freeze_tuple) * MaxHeapTuplesPerPage);
 
 	/* Report that we're scanning the heap, advertising total # of blocks */
 	initprog_val[0] = PROGRESS_VACUUM_PHASE_SCAN_HEAP;
 	initprog_val[1] = nblocks;
-	initprog_val[2] = vacrelstats->max_dead_tuples;
+	initprog_val[2] = vacrelstats->maxtuples;
 	pgstat_progress_update_multi_param(3, initprog_index, initprog_val);
 
 	/*
@@ -656,6 +678,11 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		bool		all_frozen = true;	/* provided all_visible is also true */
 		bool		has_dead_tuples;
 		TransactionId visibility_cutoff_xid = InvalidTransactionId;
+		char		*end_deads;
+		char		*end_warms;
+		Size		free_work_area;
+		int			avail_dead_tuples;
+		int			avail_warm_chains;
 
 		/* see note above about forcing scanning of last page */
 #define FORCE_CHECK_PAGE() \
@@ -740,13 +767,39 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		vacuum_delay_point();
 
 		/*
+		 * The dead tuples are stored starting from the start of the work
+		 * area and growing downwards. The candidate warm chains are stored
+		 * starting from the end of the work area and growing upwards. Once
+		 * the gap between these two segments is too small to accommodate
+		 * potentially all tuples in the current page, we stop and do one
+		 * round of index cleanup.
+		 */
+		end_deads = (char *)(vacrelstats->dead_tuples + vacrelstats->num_dead_tuples);
+
+		/*
+		 * If we are not doing WARM cleanup, then the entire work area is used
+		 * by the dead tuples.
+		 */
+		if (vacrelstats->warm_chains)
+		{
+			end_warms = (char *)(vacrelstats->warm_chains - vacrelstats->num_warm_chains);
+			free_work_area = end_warms - end_deads;
+			avail_warm_chains = (free_work_area / sizeof (LVWarmChain));
+		}
+		else
+		{
+			free_work_area = vacrelstats->work_area +
+				vacrelstats->work_area_size - end_deads;
+			avail_warm_chains = 0;
+		}
+		avail_dead_tuples = (free_work_area / sizeof (ItemPointerData));
+
+		/*
 		 * If we are close to overrunning the available space for dead-tuple
 		 * TIDs, pause and do a cycle of vacuuming before we tackle this page.
 		 */
-		if (((vacrelstats->max_dead_tuples - vacrelstats->num_dead_tuples) < MaxHeapTuplesPerPage &&
-			vacrelstats->num_dead_tuples > 0) ||
-			((vacrelstats->max_warm_chains - vacrelstats->num_warm_chains) < MaxHeapTuplesPerPage &&
-			 vacrelstats->num_warm_chains > 0))
+		if ((avail_dead_tuples < MaxHeapTuplesPerPage && vacrelstats->num_dead_tuples > 0) ||
+			(avail_warm_chains < MaxHeapTuplesPerPage && vacrelstats->num_warm_chains > 0))
 		{
 			const int	hvp_index[] = {
 				PROGRESS_VACUUM_PHASE,
@@ -776,7 +829,8 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			/* Remove index entries */
 			for (i = 0; i < nindexes; i++)
 				lazy_vacuum_index(Irel[i],
-								  (vacrelstats->num_warm_chains > 0),
+								  dowarmcleanup && (vacrelstats->num_warm_chains > 0),
+								  warmcleanup_index_scale,
 								  &indstats[i],
 								  vacrelstats);
 
@@ -800,8 +854,7 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 			 */
 			vacrelstats->num_dead_tuples = 0;
 			vacrelstats->num_warm_chains = 0;
-			memset(vacrelstats->warm_chains, 0,
-					vacrelstats->max_warm_chains * sizeof (LVWarmChain));
+			memset(vacrelstats->work_area, 0, vacrelstats->work_area_size);
 			vacrelstats->num_index_scans++;
 
 			/* Report that we are once again scanning the heap */
@@ -1413,7 +1466,8 @@ lazy_scan_heap(Relation onerel, int options, LVRelStats *vacrelstats,
 		/* Remove index entries */
 		for (i = 0; i < nindexes; i++)
 			lazy_vacuum_index(Irel[i],
-							  (vacrelstats->num_warm_chains > 0),
+							  dowarmcleanup && (vacrelstats->num_warm_chains > 0),
+							  warmcleanup_index_scale,
 							  &indstats[i],
 							  vacrelstats);
 
@@ -1518,9 +1572,12 @@ lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats)
 		vacuum_delay_point();
 
 		tblk = chainblk = InvalidBlockNumber;
-		if (chainindex < vacrelstats->num_warm_chains)
-			chainblk =
-				ItemPointerGetBlockNumber(&(vacrelstats->warm_chains[chainindex].chain_tid));
+		if (vacrelstats->warm_chains &&
+			chainindex < vacrelstats->num_warm_chains)
+		{
+			LVWarmChain *chain = vacrelstats->warm_chains - (chainindex + 1);
+			chainblk = ItemPointerGetBlockNumber(&chain->chain_tid);
+		}
 
 		if (tupindex < vacrelstats->num_dead_tuples)
 			tblk = ItemPointerGetBlockNumber(&vacrelstats->dead_tuples[tupindex]);
@@ -1618,7 +1675,8 @@ lazy_warmclear_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 		BlockNumber tblk;
 		LVWarmChain	*chain;
 
-		chain = &vacrelstats->warm_chains[chainindex];
+		/* The warm chains are indexed from bottom */
+		chain = vacrelstats->warm_chains - (chainindex + 1);
 
 		tblk = ItemPointerGetBlockNumber(&chain->chain_tid);
 		if (tblk != blkno)
@@ -1852,9 +1910,11 @@ static void
 lazy_reset_warm_pointer_count(LVRelStats *vacrelstats)
 {
 	int i;
-	for (i = 0; i < vacrelstats->num_warm_chains; i++)
+
+	/* Start from the bottom and move upwards */
+	for (i = 1; i <= vacrelstats->num_warm_chains; i++)
 	{
-		LVWarmChain *chain = &vacrelstats->warm_chains[i];
+		LVWarmChain *chain = (vacrelstats->warm_chains - i);
 		chain->num_clear_pointers = chain->num_warm_pointers = 0;
 	}
 }
@@ -1868,6 +1928,7 @@ lazy_reset_warm_pointer_count(LVRelStats *vacrelstats)
 static void
 lazy_vacuum_index(Relation indrel,
 				  bool clear_warm,
+				  double warmcleanup_index_scale,
 				  IndexBulkDeleteResult **stats,
 				  LVRelStats *vacrelstats)
 {
@@ -1932,25 +1993,57 @@ lazy_vacuum_index(Relation indrel,
 						(*stats)->warm_pointers_removed,
 						(*stats)->clear_pointers_removed)));
 
-		(*stats)->num_warm_pointers = 0;
-		(*stats)->num_clear_pointers = 0;
-		(*stats)->warm_pointers_removed = 0;
-		(*stats)->clear_pointers_removed = 0;
-		(*stats)->pointers_cleared = 0;
+		/*
+		 * If the number of WARM pointers found in the index are more than the
+		 * configured fraction of total candidate WARM chains, then do the
+		 * second index scan to clean up WARM chains.
+		 *
+		 * Otherwise we must set these WARM chains as non-convertible chains.
+		 */
+		 * If the number of WARM pointers found in the index is more than the
+		 * configured fraction of total candidate WARM chains, then do the
+		{
+			(*stats)->num_warm_pointers = 0;
+			(*stats)->num_clear_pointers = 0;
+			(*stats)->warm_pointers_removed = 0;
+			(*stats)->clear_pointers_removed = 0;
+			(*stats)->pointers_cleared = 0;
+
+			*stats = index_bulk_delete(&ivinfo, *stats,
+					lazy_indexvac_phase2, (void *) vacrelstats);
+			ereport(elevel,
+					(errmsg("scanned index \"%s\" to convert WARM pointers, found "
+							"%0.f WARM pointers, %0.f CLEAR pointers, removed "
+							"%0.f WARM pointers, removed %0.f CLEAR pointers, "
+							"cleared %0.f WARM pointers",
+							RelationGetRelationName(indrel),
+							(*stats)->num_warm_pointers,
+							(*stats)->num_clear_pointers,
+							(*stats)->warm_pointers_removed,
+							(*stats)->clear_pointers_removed,
+							(*stats)->pointers_cleared)));
+		}
+		else
+		{
+			int ii;
 
-		*stats = index_bulk_delete(&ivinfo, *stats,
-				lazy_indexvac_phase2, (void *) vacrelstats);
-		ereport(elevel,
-				(errmsg("scanned index \"%s\" to convert WARM pointers, found "
-						"%0.f WARM pointers, %0.f CLEAR pointers, removed "
-						"%0.f WARM pointers, removed %0.f CLEAR pointers, "
-						"cleared %0.f WARM pointers",
-						RelationGetRelationName(indrel),
-						(*stats)->num_warm_pointers,
-						(*stats)->num_clear_pointers,
-						(*stats)->warm_pointers_removed,
-						(*stats)->clear_pointers_removed,
-						(*stats)->pointers_cleared)));
+			/*
+			 * All chains skipped by this index are marked non-convertible.
+			 *
+			 * Start from bottom and move upwards.
+			 * Start from the bottom and move upwards.
+			for (ii = 1; ii <= vacrelstats->num_warm_chains; ii++)
+			{
+				LVWarmChain *chain = vacrelstats->warm_chains - ii;
+				if (chain->num_warm_pointers > 0 ||
+					chain->num_clear_pointers > 1)
+				{
+					chain->keep_warm_chain = 1;
+					vacrelstats->num_non_convertible_warm_chains++;
+				}
+			}
+
+		}
 	}
 	else
 	{
@@ -2328,7 +2421,8 @@ count_nondeletable_pages(Relation onerel, LVRelStats *vacrelstats)
  * See the comments at the head of this file for rationale.
  */
 static void
-lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
+lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks,
+				 bool dowarmcleanup)
 {
 	long		maxtuples;
 	int			vac_work_mem = IsAutoVacuumWorkerProcess() &&
@@ -2337,11 +2431,16 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 
 	if (vacrelstats->hasindex)
 	{
+		/*
+		 * If we're not doing WARM cleanup then the entire memory is available
+		 * for tracking dead tuples. Otherwise it gets split between tracking
+		 * dead tuples and tracking WARM chains.
+		 */
 		maxtuples = (vac_work_mem * 1024L) / (sizeof(ItemPointerData) +
-				sizeof(LVWarmChain));
+				(dowarmcleanup ? sizeof(LVWarmChain) : 0));
 		maxtuples = Min(maxtuples, INT_MAX);
 		maxtuples = Min(maxtuples, MaxAllocSize / (sizeof(ItemPointerData) +
-					sizeof(LVWarmChain)));
+				(dowarmcleanup ? sizeof(LVWarmChain) : 0)));
 
 		/* curious coding here to ensure the multiplication can't overflow */
 		if ((BlockNumber) (maxtuples / LAZY_ALLOC_TUPLES) > relblocks)
@@ -2355,21 +2454,29 @@ lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
 		maxtuples = MaxHeapTuplesPerPage;
 	}
 
-	vacrelstats->num_dead_tuples = 0;
-	vacrelstats->max_dead_tuples = (int) maxtuples;
-	vacrelstats->dead_tuples = (ItemPointer)
-		palloc(maxtuples * sizeof(ItemPointerData));
-
-	/*
-	 * XXX Cheat for now and allocate the same size array for tracking warm
-	 * chains. maxtuples must have been already adjusted above to ensure we
-	 * don't cross vac_work_mem.
+	/* Allocate work area of the desired size and set up dead_tuples and
+	 * warm_chains to the start and the end of the area respectively. They grow
+	 * in opposite directions as dead tuples and warm chains are added. Note
+	 * that if we are not doing WARM cleanup then the entire area will only be
+	 * used for tracking dead tuples.
 	 */
-	vacrelstats->num_warm_chains = 0;
-	vacrelstats->max_warm_chains = (int) maxtuples;
-	vacrelstats->warm_chains = (LVWarmChain *)
-		palloc0(maxtuples * sizeof(LVWarmChain));
+	vacrelstats->work_area_size = maxtuples * (sizeof(ItemPointerData) +
+				(dowarmcleanup ? sizeof(LVWarmChain) : 0));
+	vacrelstats->work_area = (char *) palloc0(vacrelstats->work_area_size);
+	vacrelstats->num_dead_tuples = 0;
+	vacrelstats->dead_tuples = (ItemPointer)vacrelstats->work_area;
+	vacrelstats->maxtuples = maxtuples;
 
+	if (dowarmcleanup)
+	{
+		vacrelstats->num_warm_chains = 0;
+		vacrelstats->warm_chains = (LVWarmChain *)
+			(vacrelstats->work_area + vacrelstats->work_area_size);
+	}
+	else
+	{
+		vacrelstats->warm_chains = NULL;
+	}
 }
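
To make the shared work area easier to follow: a single palloc'd block holds
both arrays, dead TIDs growing up from the start and WARM chains growing down
from the end, and either side stops accepting entries once the gap between the
two ends is too small. A self-contained sketch of that layout, using
hypothetical stand-in structs (`TidStub`, `ChainStub`) and malloc in place of
palloc, not the patch's actual types:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-ins for ItemPointerData and LVWarmChain. */
typedef struct { unsigned blk; unsigned short off; } TidStub;
typedef struct { TidStub chain_tid; int is_postwarm; } ChainStub;

typedef struct
{
	char	   *area;			/* single allocation */
	size_t		area_size;
	int			num_tids;		/* grows up from area[0] */
	int			num_chains;		/* grows down from area[area_size] */
} WorkArea;

/* Bytes left between the two growing ends. */
static size_t
free_space(WorkArea *w)
{
	char	   *end_tids = w->area + w->num_tids * sizeof(TidStub);
	char	   *end_chains = w->area + w->area_size -
		w->num_chains * sizeof(ChainStub);

	return (size_t) (end_chains - end_tids);
}

static int
push_tid(WorkArea *w, TidStub t)
{
	if (free_space(w) < sizeof(TidStub))
		return 0;				/* full: forget this tuple for now */
	((TidStub *) w->area)[w->num_tids++] = t;
	return 1;
}

static int
push_chain(WorkArea *w, ChainStub c)
{
	if (free_space(w) < sizeof(ChainStub))
		return 0;
	w->num_chains++;
	((ChainStub *) (w->area + w->area_size))[-w->num_chains] = c;
	return 1;
}
```

Note that chain entry `i` (1-based) lives at `area_size - i * sizeof(ChainStub)`,
so the most recently added chain sits at the lowest address, which is why the
chain array ends up sorted in descending order.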
 
 /*
@@ -2379,17 +2486,38 @@ static void
 lazy_record_clear_chain(LVRelStats *vacrelstats,
 					   ItemPointer itemptr)
 {
+	char *end_deads, *end_warms;
+	Size free_work_area;
+
+	if (vacrelstats->warm_chains == NULL)
+	{
+		vacrelstats->num_non_convertible_warm_chains++;
+		return;
+	}
+
+	end_deads = (char *) (vacrelstats->dead_tuples +
+					vacrelstats->num_dead_tuples);
+	end_warms = (char *) (vacrelstats->warm_chains -
+					vacrelstats->num_warm_chains);
+	free_work_area = (end_warms - end_deads);
+
+	Assert(end_warms >= end_deads);
 	/*
 	 * The array shouldn't overflow under normal behavior, but perhaps it
 	 * could if we are given a really small maintenance_work_mem. In that
 	 * case, just forget the last few tuples (we'll get 'em next time).
 	 */
-	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	if (free_work_area >= sizeof(LVWarmChain))
 	{
-		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
-		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 0;
+		LVWarmChain *chain;
+
 		vacrelstats->num_warm_chains++;
+		chain = vacrelstats->warm_chains - vacrelstats->num_warm_chains;
+		chain->chain_tid = *itemptr;
+		chain->is_postwarm_chain = 0;
 	}
+	else
+		vacrelstats->num_non_convertible_warm_chains++;
 }
 
 /*
@@ -2399,17 +2527,39 @@ static void
 lazy_record_warm_chain(LVRelStats *vacrelstats,
 					   ItemPointer itemptr)
 {
+	char *end_deads, *end_warms;
+	Size free_work_area;
+
+	if (vacrelstats->warm_chains == NULL)
+	{
+		vacrelstats->num_non_convertible_warm_chains++;
+		return;
+	}
+
+	end_deads = (char *) (vacrelstats->dead_tuples +
+					vacrelstats->num_dead_tuples);
+	end_warms = (char *) (vacrelstats->warm_chains -
+					vacrelstats->num_warm_chains);
+	free_work_area = (end_warms - end_deads);
+
+	Assert(end_warms >= end_deads);
+
 	/*
 	 * The array shouldn't overflow under normal behavior, but perhaps it
 	 * could if we are given a really small maintenance_work_mem. In that
 	 * case, just forget the last few tuples (we'll get 'em next time).
 	 */
-	if (vacrelstats->num_warm_chains < vacrelstats->max_warm_chains)
+	if (free_work_area >= sizeof(LVWarmChain))
 	{
-		vacrelstats->warm_chains[vacrelstats->num_warm_chains].chain_tid = *itemptr;
-		vacrelstats->warm_chains[vacrelstats->num_warm_chains].is_postwarm_chain = 1;
+		LVWarmChain *chain;
+
 		vacrelstats->num_warm_chains++;
+		chain = vacrelstats->warm_chains - vacrelstats->num_warm_chains;
+		chain->chain_tid = *itemptr;
+		chain->is_postwarm_chain = 1;
 	}
+	else
+		vacrelstats->num_non_convertible_warm_chains++;
 }
 
 /*
@@ -2419,12 +2569,20 @@ static void
 lazy_record_dead_tuple(LVRelStats *vacrelstats,
 					   ItemPointer itemptr)
 {
+	char	   *end_deads = (char *) (vacrelstats->dead_tuples +
+					vacrelstats->num_dead_tuples);
+	char	   *end_warms = (vacrelstats->warm_chains != NULL) ?
+		(char *) (vacrelstats->warm_chains - vacrelstats->num_warm_chains) :
+		vacrelstats->work_area + vacrelstats->work_area_size;
+	Size		freespace = (Size) (end_warms - end_deads);
+
+	Assert(end_warms >= end_deads);
 	/*
 	 * The array shouldn't overflow under normal behavior, but perhaps it
 	 * could if we are given a really small maintenance_work_mem. In that
 	 * case, just forget the last few tuples (we'll get 'em next time).
 	 */
-	if (vacrelstats->num_dead_tuples < vacrelstats->max_dead_tuples)
+	if (freespace >= sizeof(ItemPointerData))
 	{
 		vacrelstats->dead_tuples[vacrelstats->num_dead_tuples] = *itemptr;
 		vacrelstats->num_dead_tuples++;
@@ -2477,10 +2635,10 @@ lazy_indexvac_phase1(ItemPointer itemptr, bool is_warm, void *state)
 		return IBDCR_DELETE;
 
 	chain = (LVWarmChain *) bsearch((void *) itemptr,
-								(void *) vacrelstats->warm_chains,
-								vacrelstats->num_warm_chains,
-								sizeof(LVWarmChain),
-								vac_cmp_warm_chain);
+				(void *) (vacrelstats->warm_chains - vacrelstats->num_warm_chains),
+				vacrelstats->num_warm_chains,
+				sizeof(LVWarmChain),
+				vac_cmp_warm_chain);
 	if (chain != NULL)
 	{
 		if (is_warm)
@@ -2500,13 +2658,13 @@ static IndexBulkDeleteCallbackResult
 lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state)
 {
 	LVRelStats		*vacrelstats = (LVRelStats *) state;
-	LVWarmChain	*chain;
+	LVWarmChain		*chain;
 
 	chain = (LVWarmChain *) bsearch((void *) itemptr,
-								(void *) vacrelstats->warm_chains,
-								vacrelstats->num_warm_chains,
-								sizeof(LVWarmChain),
-								vac_cmp_warm_chain);
+				(void *) (vacrelstats->warm_chains - vacrelstats->num_warm_chains),
+				vacrelstats->num_warm_chains,
+				sizeof(LVWarmChain),
+				vac_cmp_warm_chain);
 
 	if (chain != NULL && (chain->keep_warm_chain != 1))
 	{
@@ -2619,6 +2777,7 @@ lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state)
 		 * index pointers.
 		 */
 		chain->keep_warm_chain = 1;
+		vacrelstats->num_non_convertible_warm_chains++;
 		return IBDCR_KEEP;
 	}
 	return IBDCR_KEEP;
@@ -2627,6 +2786,9 @@ lazy_indexvac_phase2(ItemPointer itemptr, bool is_warm, void *state)
 /*
  * Comparator routines for use with qsort() and bsearch(). Similar to
  * vac_cmp_itemptr, but right hand argument is LVWarmChain struct pointer.
+ *
+ * The warm_chains array is sorted in descending order, hence the return
+ * values are flipped.
  */
 static int
 vac_cmp_warm_chain(const void *left, const void *right)
@@ -2640,17 +2802,17 @@ vac_cmp_warm_chain(const void *left, const void *right)
 	rblk = ItemPointerGetBlockNumber(&((LVWarmChain *) right)->chain_tid);
 
 	if (lblk < rblk)
-		return -1;
-	if (lblk > rblk)
 		return 1;
+	if (lblk > rblk)
+		return -1;
 
 	loff = ItemPointerGetOffsetNumber((ItemPointer) left);
 	roff = ItemPointerGetOffsetNumber(&((LVWarmChain *) right)->chain_tid);
 
 	if (loff < roff)
-		return -1;
-	if (loff > roff)
 		return 1;
+	if (loff > roff)
+		return -1;
 
 	return 0;
 }
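
Since the chain array is filled backwards from the end of the work area, it
ends up sorted in descending TID order, and `bsearch()` is fine with that: it
only requires that the comparator be consistent with the array's actual order.
A minimal standalone illustration of a descending-order comparator, using
plain ints rather than the patch's TID comparator:

```c
#include <assert.h>
#include <stdlib.h>

/* Comparator for an array sorted in DESCENDING order: the usual return
 * values are flipped, mirroring what vac_cmp_warm_chain does for TIDs. */
static int
cmp_desc(const void *left, const void *right)
{
	int			l = *(const int *) left;
	int			r = *(const int *) right;

	if (l < r)
		return 1;				/* smaller keys sort later */
	if (l > r)
		return -1;
	return 0;
}
```

The same flipped comparator must be used for both sorting and searching; mixing
a descending array with an ascending comparator would make `bsearch()` miss
present keys.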
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 9d53a29..1592220 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -433,7 +433,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 %type <node>	overlay_placing substr_from substr_for
 
 %type <boolean> opt_instead
-%type <boolean> opt_unique opt_concurrently opt_verbose opt_full
+%type <boolean> opt_unique opt_concurrently opt_verbose opt_full opt_warmclean
 %type <boolean> opt_freeze opt_default opt_recheck
 %type <defelt>	opt_binary opt_oids copy_delimiter
 
@@ -684,7 +684,7 @@ static Node *makeRecursiveViewSelect(char *relname, List *aliases, Node *query);
 	VACUUM VALID VALIDATE VALIDATOR VALUE_P VALUES VARCHAR VARIADIC VARYING
 	VERBOSE VERSION_P VIEW VIEWS VOLATILE
 
-	WHEN WHERE WHITESPACE_P WINDOW WITH WITHIN WITHOUT WORK WRAPPER WRITE
+	WARMCLEAN WHEN WHERE WHITESPACE_P WINDOW WITH WITHIN WITHOUT WORK WRAPPER WRITE
 
 	XML_P XMLATTRIBUTES XMLCONCAT XMLELEMENT XMLEXISTS XMLFOREST XMLNAMESPACES
 	XMLPARSE XMLPI XMLROOT XMLSERIALIZE XMLTABLE
@@ -10059,7 +10059,7 @@ cluster_index_specification:
  *
  *****************************************************************************/
 
-VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
+VacuumStmt: VACUUM opt_full opt_freeze opt_verbose opt_warmclean
 				{
 					VacuumStmt *n = makeNode(VacuumStmt);
 					n->options = VACOPT_VACUUM;
@@ -10069,11 +10069,13 @@ VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
 						n->options |= VACOPT_FREEZE;
 					if ($4)
 						n->options |= VACOPT_VERBOSE;
+					if ($5)
+						n->options |= VACOPT_WARM_CLEANUP;
 					n->relation = NULL;
 					n->va_cols = NIL;
 					$$ = (Node *)n;
 				}
-			| VACUUM opt_full opt_freeze opt_verbose qualified_name
+			| VACUUM opt_full opt_freeze opt_verbose opt_warmclean qualified_name
 				{
 					VacuumStmt *n = makeNode(VacuumStmt);
 					n->options = VACOPT_VACUUM;
@@ -10083,13 +10085,15 @@ VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
 						n->options |= VACOPT_FREEZE;
 					if ($4)
 						n->options |= VACOPT_VERBOSE;
-					n->relation = $5;
+					if ($5)
+						n->options |= VACOPT_WARM_CLEANUP;
+					n->relation = $6;
 					n->va_cols = NIL;
 					$$ = (Node *)n;
 				}
-			| VACUUM opt_full opt_freeze opt_verbose AnalyzeStmt
+			| VACUUM opt_full opt_freeze opt_verbose opt_warmclean AnalyzeStmt
 				{
-					VacuumStmt *n = (VacuumStmt *) $5;
+					VacuumStmt *n = (VacuumStmt *) $6;
 					n->options |= VACOPT_VACUUM;
 					if ($2)
 						n->options |= VACOPT_FULL;
@@ -10097,6 +10101,8 @@ VacuumStmt: VACUUM opt_full opt_freeze opt_verbose
 						n->options |= VACOPT_FREEZE;
 					if ($4)
 						n->options |= VACOPT_VERBOSE;
+					if ($5)
+						n->options |= VACOPT_WARM_CLEANUP;
 					$$ = (Node *)n;
 				}
 			| VACUUM '(' vacuum_option_list ')'
@@ -10129,6 +10135,7 @@ vacuum_option_elem:
 			| VERBOSE			{ $$ = VACOPT_VERBOSE; }
 			| FREEZE			{ $$ = VACOPT_FREEZE; }
 			| FULL				{ $$ = VACOPT_FULL; }
+			| WARMCLEAN			{ $$ = VACOPT_WARM_CLEANUP; }
 			| IDENT
 				{
 					if (strcmp($1, "disable_page_skipping") == 0)
@@ -10182,6 +10189,10 @@ opt_freeze: FREEZE									{ $$ = TRUE; }
 			| /*EMPTY*/								{ $$ = FALSE; }
 		;
 
+opt_warmclean: WARMCLEAN							{ $$ = TRUE; }
+			| /*EMPTY*/								{ $$ = FALSE; }
+		;
+
 opt_name_list:
 			'(' name_list ')'						{ $$ = $2; }
 			| /*EMPTY*/								{ $$ = NIL; }
@@ -14886,6 +14897,7 @@ type_func_name_keyword:
 			| SIMILAR
 			| TABLESAMPLE
 			| VERBOSE
+			| WARMCLEAN
 		;
 
 /* Reserved keyword --- these keywords are usable only as a ColLabel.
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 89dd3b3..a157c05 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -117,6 +117,8 @@ int			autovacuum_vac_thresh;
 double		autovacuum_vac_scale;
 int			autovacuum_anl_thresh;
 double		autovacuum_anl_scale;
+double		autovacuum_warmcleanup_scale;
+double		autovacuum_warmcleanup_index_scale;
 int			autovacuum_freeze_max_age;
 int			autovacuum_multixact_freeze_max_age;
 
@@ -338,7 +340,8 @@ static void relation_needs_vacanalyze(Oid relid, AutoVacOpts *relopts,
 						  Form_pg_class classForm,
 						  PgStat_StatTabEntry *tabentry,
 						  int effective_multixact_freeze_max_age,
-						  bool *dovacuum, bool *doanalyze, bool *wraparound);
+						  bool *dovacuum, bool *doanalyze, bool *wraparound,
+						  bool *dowarmcleanup);
 
 static void autovacuum_do_vac_analyze(autovac_table *tab,
 						  BufferAccessStrategy bstrategy);
@@ -2076,6 +2079,7 @@ do_autovacuum(void)
 		bool		dovacuum;
 		bool		doanalyze;
 		bool		wraparound;
+		bool		dowarmcleanup;
 
 		if (classForm->relkind != RELKIND_RELATION &&
 			classForm->relkind != RELKIND_MATVIEW)
@@ -2115,10 +2119,14 @@ do_autovacuum(void)
 		tabentry = get_pgstat_tabentry_relid(relid, classForm->relisshared,
 											 shared, dbentry);
 
-		/* Check if it needs vacuum or analyze */
+		/*
+		 * Check if it needs vacuum or analyze. For vacuum, also check if it
+		 * needs WARM cleanup.
+		 */
 		relation_needs_vacanalyze(relid, relopts, classForm, tabentry,
 								  effective_multixact_freeze_max_age,
-								  &dovacuum, &doanalyze, &wraparound);
+								  &dovacuum, &doanalyze, &wraparound,
+								  &dowarmcleanup);
 
 		/* Relations that need work are added to table_oids */
 		if (dovacuum || doanalyze)
@@ -2171,6 +2179,7 @@ do_autovacuum(void)
 		bool		dovacuum;
 		bool		doanalyze;
 		bool		wraparound;
+		bool		dowarmcleanup;
 
 		/*
 		 * We cannot safely process other backends' temp tables, so skip 'em.
@@ -2201,7 +2210,8 @@ do_autovacuum(void)
 
 		relation_needs_vacanalyze(relid, relopts, classForm, tabentry,
 								  effective_multixact_freeze_max_age,
-								  &dovacuum, &doanalyze, &wraparound);
+								  &dovacuum, &doanalyze, &wraparound,
+								  &dowarmcleanup);
 
 		/* ignore analyze for toast tables */
 		if (dovacuum)
@@ -2792,6 +2802,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 	HeapTuple	classTup;
 	bool		dovacuum;
 	bool		doanalyze;
+	bool		dowarmcleanup;
 	autovac_table *tab = NULL;
 	PgStat_StatTabEntry *tabentry;
 	PgStat_StatDBEntry *shared;
@@ -2833,7 +2844,8 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 
 	relation_needs_vacanalyze(relid, avopts, classForm, tabentry,
 							  effective_multixact_freeze_max_age,
-							  &dovacuum, &doanalyze, &wraparound);
+							  &dovacuum, &doanalyze, &wraparound,
+							  &dowarmcleanup);
 
 	/* ignore ANALYZE for toast tables */
 	if (classForm->relkind == RELKIND_TOASTVALUE)
@@ -2849,6 +2861,7 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 		int			vac_cost_limit;
 		int			vac_cost_delay;
 		int			log_min_duration;
+		double		warmcleanup_index_scale;
 
 		/*
 		 * Calculate the vacuum cost parameters and the freeze ages.  If there
@@ -2895,19 +2908,26 @@ table_recheck_autovac(Oid relid, HTAB *table_toast_map,
 			? avopts->multixact_freeze_table_age
 			: default_multixact_freeze_table_age;
 
+		warmcleanup_index_scale = (avopts &&
+								   avopts->warmcleanup_index_scale >= 0)
+			? avopts->warmcleanup_index_scale
+			: autovacuum_warmcleanup_index_scale;
+
 		tab = palloc(sizeof(autovac_table));
 		tab->at_relid = relid;
 		tab->at_sharedrel = classForm->relisshared;
 		tab->at_vacoptions = VACOPT_SKIPTOAST |
 			(dovacuum ? VACOPT_VACUUM : 0) |
 			(doanalyze ? VACOPT_ANALYZE : 0) |
-			(!wraparound ? VACOPT_NOWAIT : 0);
+			(!wraparound ? VACOPT_NOWAIT : 0) |
+			(dowarmcleanup ? VACOPT_WARM_CLEANUP : 0);
 		tab->at_params.freeze_min_age = freeze_min_age;
 		tab->at_params.freeze_table_age = freeze_table_age;
 		tab->at_params.multixact_freeze_min_age = multixact_freeze_min_age;
 		tab->at_params.multixact_freeze_table_age = multixact_freeze_table_age;
 		tab->at_params.is_wraparound = wraparound;
 		tab->at_params.log_min_duration = log_min_duration;
+		tab->at_params.warmcleanup_index_scale = warmcleanup_index_scale;
 		tab->at_vacuum_cost_limit = vac_cost_limit;
 		tab->at_vacuum_cost_delay = vac_cost_delay;
 		tab->at_relname = NULL;
@@ -2974,7 +2994,8 @@ relation_needs_vacanalyze(Oid relid,
  /* output params below */
 						  bool *dovacuum,
 						  bool *doanalyze,
-						  bool *wraparound)
+						  bool *wraparound,
+						  bool *dowarmcleanup)
 {
 	bool		force_vacuum;
 	bool		av_enabled;
@@ -2986,6 +3007,9 @@ relation_needs_vacanalyze(Oid relid,
 	float4		vac_scale_factor,
 				anl_scale_factor;
 
+	/* constant from reloptions or GUC variable */
+	float4		warmcleanup_scale_factor;
+
 	/* thresholds calculated from above constants */
 	float4		vacthresh,
 				anlthresh;
@@ -2994,6 +3018,9 @@ relation_needs_vacanalyze(Oid relid,
 	float4		vactuples,
 				anltuples;
 
+	/* number of WARM chains in the table */
+	float4		warmchains;
+
 	/* freeze parameters */
 	int			freeze_max_age;
 	int			multixact_freeze_max_age;
@@ -3026,6 +3053,11 @@ relation_needs_vacanalyze(Oid relid,
 		? relopts->analyze_threshold
 		: autovacuum_anl_thresh;
 
+	/* Use table specific value or the GUC value */
+	warmcleanup_scale_factor = (relopts && relopts->warmcleanup_scale_factor >= 0)
+		? relopts->warmcleanup_scale_factor
+		: autovacuum_warmcleanup_scale;
+
 	freeze_max_age = (relopts && relopts->freeze_max_age >= 0)
 		? Min(relopts->freeze_max_age, autovacuum_freeze_max_age)
 		: autovacuum_freeze_max_age;
@@ -3073,6 +3105,7 @@ relation_needs_vacanalyze(Oid relid,
 		reltuples = classForm->reltuples;
 		vactuples = tabentry->n_dead_tuples;
 		anltuples = tabentry->changes_since_analyze;
+		warmchains = tabentry->n_warm_chains;
 
 		vacthresh = (float4) vac_base_thresh + vac_scale_factor * reltuples;
 		anlthresh = (float4) anl_base_thresh + anl_scale_factor * reltuples;
@@ -3089,6 +3122,17 @@ relation_needs_vacanalyze(Oid relid,
 		/* Determine if this table needs vacuum or analyze. */
 		*dovacuum = force_vacuum || (vactuples > vacthresh);
 		*doanalyze = (anltuples > anlthresh);
+
+		/*
+		 * If the number of WARM chains in the table is more than the
+		 * configured fraction of reltuples, we also do a WARM cleanup. This
+		 * only triggers at the table level, but we then look at each index
+		 * and do cleanup for an index only if its WARM pointers exceed the
+		 * configured index-level scale factor; lazy_vacuum_index() later
+		 * deals with that.
+		 */
+		*dowarmcleanup = (*dovacuum &&
+						  (warmcleanup_scale_factor * reltuples < warmchains));
 	}
 	else
 	{
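
The autovacuum trigger added above reduces to a single predicate: WARM cleanup
piggybacks on a vacuum that is already due, and fires only when the number of
WARM chains exceeds the configured fraction of reltuples. A hedged sketch of
that condition (illustrative function, not the patch's code; per-index
decisions happen later against the index-level scale factor):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative only: mirrors the table-level condition that
 * relation_needs_vacanalyze uses to request WARM cleanup. */
static bool
needs_warm_cleanup(bool dovacuum, double scale_factor,
				   double reltuples, double warmchains)
{
	return dovacuum && (scale_factor * reltuples < warmchains);
}
```

For example, with the default table-level scale factor of 0.1, a table of
1000 tuples requests cleanup once it accumulates more than 100 WARM chains,
and only as part of a vacuum that would run anyway.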
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index 52fe4ba..f38ce8a 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -226,9 +226,11 @@ typedef struct TwoPhasePgStatRecord
 	PgStat_Counter tuples_inserted;		/* tuples inserted in xact */
 	PgStat_Counter tuples_updated;		/* tuples updated in xact */
 	PgStat_Counter tuples_deleted;		/* tuples deleted in xact */
+	PgStat_Counter tuples_warm_updated;	/* tuples warm updated in xact */
 	PgStat_Counter inserted_pre_trunc;	/* tuples inserted prior to truncate */
 	PgStat_Counter updated_pre_trunc;	/* tuples updated prior to truncate */
 	PgStat_Counter deleted_pre_trunc;	/* tuples deleted prior to truncate */
+	PgStat_Counter warm_updated_pre_trunc;	/* tuples warm updated prior to truncate */
 	Oid			t_id;			/* table's OID */
 	bool		t_shared;		/* is it a shared catalog? */
 	bool		t_truncated;	/* was the relation truncated? */
@@ -1367,7 +1369,8 @@ pgstat_report_autovac(Oid dboid)
  */
 void
 pgstat_report_vacuum(Oid tableoid, bool shared,
-					 PgStat_Counter livetuples, PgStat_Counter deadtuples)
+					 PgStat_Counter livetuples, PgStat_Counter deadtuples,
+					 PgStat_Counter warmchains)
 {
 	PgStat_MsgVacuum msg;
 
@@ -1381,6 +1384,7 @@ pgstat_report_vacuum(Oid tableoid, bool shared,
 	msg.m_vacuumtime = GetCurrentTimestamp();
 	msg.m_live_tuples = livetuples;
 	msg.m_dead_tuples = deadtuples;
+	msg.m_warm_chains = warmchains;
 	pgstat_send(&msg, sizeof(msg));
 }
 
@@ -1396,7 +1400,7 @@ pgstat_report_vacuum(Oid tableoid, bool shared,
 void
 pgstat_report_analyze(Relation rel,
 					  PgStat_Counter livetuples, PgStat_Counter deadtuples,
-					  bool resetcounter)
+					  PgStat_Counter warmchains, bool resetcounter)
 {
 	PgStat_MsgAnalyze msg;
 
@@ -1421,12 +1425,14 @@ pgstat_report_analyze(Relation rel,
 		{
 			livetuples -= trans->tuples_inserted - trans->tuples_deleted;
 			deadtuples -= trans->tuples_updated + trans->tuples_deleted;
+			warmchains -= trans->tuples_warm_updated;
 		}
 		/* count stuff inserted by already-aborted subxacts, too */
 		deadtuples -= rel->pgstat_info->t_counts.t_delta_dead_tuples;
 		/* Since ANALYZE's counts are estimates, we could have underflowed */
 		livetuples = Max(livetuples, 0);
 		deadtuples = Max(deadtuples, 0);
+		warmchains = Max(warmchains, 0);
 	}
 
 	pgstat_setheader(&msg.m_hdr, PGSTAT_MTYPE_ANALYZE);
@@ -1437,6 +1443,7 @@ pgstat_report_analyze(Relation rel,
 	msg.m_analyzetime = GetCurrentTimestamp();
 	msg.m_live_tuples = livetuples;
 	msg.m_dead_tuples = deadtuples;
+	msg.m_warm_chains = warmchains;
 	pgstat_send(&msg, sizeof(msg));
 }
 
@@ -1907,7 +1914,10 @@ pgstat_count_heap_update(Relation rel, bool hot, bool warm)
 		if (hot)
 			pgstat_info->t_counts.t_tuples_hot_updated++;
 		else if (warm)
+		{
+			pgstat_info->trans->tuples_warm_updated++;
 			pgstat_info->t_counts.t_tuples_warm_updated++;
+		}
 	}
 }
 
@@ -2070,6 +2080,12 @@ AtEOXact_PgStat(bool isCommit)
 				/* update and delete each create a dead tuple */
 				tabstat->t_counts.t_delta_dead_tuples +=
 					trans->tuples_updated + trans->tuples_deleted;
+				/*
+				 * Commit or abort, a WARM update generates a WARM chain which
+				 * needs cleanup.
+				 */
+				tabstat->t_counts.t_delta_warm_chains +=
+					trans->tuples_warm_updated;
 				/* insert, update, delete each count as one change event */
 				tabstat->t_counts.t_changed_tuples +=
 					trans->tuples_inserted + trans->tuples_updated +
@@ -2080,6 +2096,12 @@ AtEOXact_PgStat(bool isCommit)
 				/* inserted tuples are dead, deleted tuples are unaffected */
 				tabstat->t_counts.t_delta_dead_tuples +=
 					trans->tuples_inserted + trans->tuples_updated;
+				/*
+				 * Commit or abort, a WARM update generates a WARM chain which
+				 * needs cleanup.
+				 */
+				tabstat->t_counts.t_delta_warm_chains +=
+					trans->tuples_warm_updated;
 				/* an aborted xact generates no changed_tuple events */
 			}
 			tabstat->trans = NULL;
@@ -2136,12 +2158,16 @@ AtEOSubXact_PgStat(bool isCommit, int nestDepth)
 						trans->upper->tuples_inserted = trans->tuples_inserted;
 						trans->upper->tuples_updated = trans->tuples_updated;
 						trans->upper->tuples_deleted = trans->tuples_deleted;
+						trans->upper->tuples_warm_updated =
+							trans->tuples_warm_updated;
 					}
 					else
 					{
 						trans->upper->tuples_inserted += trans->tuples_inserted;
 						trans->upper->tuples_updated += trans->tuples_updated;
 						trans->upper->tuples_deleted += trans->tuples_deleted;
+						trans->upper->tuples_warm_updated +=
+							trans->tuples_warm_updated;
 					}
 					tabstat->trans = trans->upper;
 					pfree(trans);
@@ -2177,9 +2203,13 @@ AtEOSubXact_PgStat(bool isCommit, int nestDepth)
 				tabstat->t_counts.t_tuples_inserted += trans->tuples_inserted;
 				tabstat->t_counts.t_tuples_updated += trans->tuples_updated;
 				tabstat->t_counts.t_tuples_deleted += trans->tuples_deleted;
+				tabstat->t_counts.t_tuples_warm_updated +=
+					trans->tuples_warm_updated;
 				/* inserted tuples are dead, deleted tuples are unaffected */
 				tabstat->t_counts.t_delta_dead_tuples +=
 					trans->tuples_inserted + trans->tuples_updated;
+				tabstat->t_counts.t_delta_warm_chains +=
+					trans->tuples_warm_updated;
 				tabstat->trans = trans->upper;
 				pfree(trans);
 			}
@@ -2221,9 +2251,11 @@ AtPrepare_PgStat(void)
 			record.tuples_inserted = trans->tuples_inserted;
 			record.tuples_updated = trans->tuples_updated;
 			record.tuples_deleted = trans->tuples_deleted;
+			record.tuples_warm_updated = trans->tuples_warm_updated;
 			record.inserted_pre_trunc = trans->inserted_pre_trunc;
 			record.updated_pre_trunc = trans->updated_pre_trunc;
 			record.deleted_pre_trunc = trans->deleted_pre_trunc;
+			record.warm_updated_pre_trunc = trans->warm_updated_pre_trunc;
 			record.t_id = tabstat->t_id;
 			record.t_shared = tabstat->t_shared;
 			record.t_truncated = trans->truncated;
@@ -2298,11 +2330,14 @@ pgstat_twophase_postcommit(TransactionId xid, uint16 info,
 		/* forget live/dead stats seen by backend thus far */
 		pgstat_info->t_counts.t_delta_live_tuples = 0;
 		pgstat_info->t_counts.t_delta_dead_tuples = 0;
+		pgstat_info->t_counts.t_delta_warm_chains = 0;
 	}
 	pgstat_info->t_counts.t_delta_live_tuples +=
 		rec->tuples_inserted - rec->tuples_deleted;
 	pgstat_info->t_counts.t_delta_dead_tuples +=
 		rec->tuples_updated + rec->tuples_deleted;
+	pgstat_info->t_counts.t_delta_warm_chains +=
+		rec->tuples_warm_updated;
 	pgstat_info->t_counts.t_changed_tuples +=
 		rec->tuples_inserted + rec->tuples_updated +
 		rec->tuples_deleted;
@@ -2330,12 +2365,16 @@ pgstat_twophase_postabort(TransactionId xid, uint16 info,
 		rec->tuples_inserted = rec->inserted_pre_trunc;
 		rec->tuples_updated = rec->updated_pre_trunc;
 		rec->tuples_deleted = rec->deleted_pre_trunc;
+		rec->tuples_warm_updated = rec->warm_updated_pre_trunc;
 	}
 	pgstat_info->t_counts.t_tuples_inserted += rec->tuples_inserted;
 	pgstat_info->t_counts.t_tuples_updated += rec->tuples_updated;
 	pgstat_info->t_counts.t_tuples_deleted += rec->tuples_deleted;
+	pgstat_info->t_counts.t_tuples_warm_updated += rec->tuples_warm_updated;
 	pgstat_info->t_counts.t_delta_dead_tuples +=
 		rec->tuples_inserted + rec->tuples_updated;
+	pgstat_info->t_counts.t_delta_warm_chains +=
+		rec->tuples_warm_updated;
 }
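
The bookkeeping rule these stats changes implement is worth spelling out: old
row versions become dead differently on commit (updated + deleted) versus
abort (inserted + updated), but a WARM update leaves behind a chain needing
cleanup in either case, so tuples_warm_updated flows into t_delta_warm_chains
on both paths. A simplified sketch of that rule, with a hypothetical condensed
counter struct rather than the real PgStat structs:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical condensed counters, not the real PgStat_TableCounts. */
typedef struct
{
	long		delta_dead;
	long		delta_warm_chains;
} Counts;

static void
end_of_xact(Counts *c, bool commit,
			long inserted, long updated, long deleted, long warm_updated)
{
	if (commit)
		c->delta_dead += updated + deleted;		/* superseded versions die */
	else
		c->delta_dead += inserted + updated;	/* aborted work is dead */

	/* Commit or abort, each WARM update created a chain needing cleanup. */
	c->delta_warm_chains += warm_updated;
}
```

This asymmetry for dead tuples combined with symmetry for WARM chains is why
the patch adds `tuples_warm_updated` accumulation to both the commit and abort
branches of AtEOXact_PgStat and the two-phase postcommit/postabort paths.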
 
 
@@ -4526,6 +4565,7 @@ pgstat_get_tab_entry(PgStat_StatDBEntry *dbentry, Oid tableoid, bool create)
 		result->tuples_warm_updated = 0;
 		result->n_live_tuples = 0;
 		result->n_dead_tuples = 0;
+		result->n_warm_chains = 0;
 		result->changes_since_analyze = 0;
 		result->blocks_fetched = 0;
 		result->blocks_hit = 0;
@@ -5636,6 +5676,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			tabentry->tuples_warm_updated = tabmsg->t_counts.t_tuples_warm_updated;
 			tabentry->n_live_tuples = tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples = tabmsg->t_counts.t_delta_dead_tuples;
+			tabentry->n_warm_chains = tabmsg->t_counts.t_delta_warm_chains;
 			tabentry->changes_since_analyze = tabmsg->t_counts.t_changed_tuples;
 			tabentry->blocks_fetched = tabmsg->t_counts.t_blocks_fetched;
 			tabentry->blocks_hit = tabmsg->t_counts.t_blocks_hit;
@@ -5667,9 +5708,11 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 			{
 				tabentry->n_live_tuples = 0;
 				tabentry->n_dead_tuples = 0;
+				tabentry->n_warm_chains = 0;
 			}
 			tabentry->n_live_tuples += tabmsg->t_counts.t_delta_live_tuples;
 			tabentry->n_dead_tuples += tabmsg->t_counts.t_delta_dead_tuples;
+			tabentry->n_warm_chains += tabmsg->t_counts.t_delta_warm_chains;
 			tabentry->changes_since_analyze += tabmsg->t_counts.t_changed_tuples;
 			tabentry->blocks_fetched += tabmsg->t_counts.t_blocks_fetched;
 			tabentry->blocks_hit += tabmsg->t_counts.t_blocks_hit;
@@ -5679,6 +5722,7 @@ pgstat_recv_tabstat(PgStat_MsgTabstat *msg, int len)
 		tabentry->n_live_tuples = Max(tabentry->n_live_tuples, 0);
 		/* Likewise for n_dead_tuples */
 		tabentry->n_dead_tuples = Max(tabentry->n_dead_tuples, 0);
+		tabentry->n_warm_chains = Max(tabentry->n_warm_chains, 0);
 
 		/*
 		 * Add per-table stats to the per-database entry, too.
@@ -5904,6 +5948,7 @@ pgstat_recv_vacuum(PgStat_MsgVacuum *msg, int len)
 
 	tabentry->n_live_tuples = msg->m_live_tuples;
 	tabentry->n_dead_tuples = msg->m_dead_tuples;
+	tabentry->n_warm_chains = msg->m_warm_chains;
 
 	if (msg->m_autovacuum)
 	{
@@ -5938,6 +5983,7 @@ pgstat_recv_analyze(PgStat_MsgAnalyze *msg, int len)
 
 	tabentry->n_live_tuples = msg->m_live_tuples;
 	tabentry->n_dead_tuples = msg->m_dead_tuples;
+	tabentry->n_warm_chains = msg->m_warm_chains;
 
 	/*
 	 * If commanded, reset changes_since_analyze to zero.  This forgets any
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 227a87d..8804908 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -193,6 +193,21 @@ pg_stat_get_dead_tuples(PG_FUNCTION_ARGS)
 	PG_RETURN_INT64(result);
 }
 
+Datum
+pg_stat_get_warm_chains(PG_FUNCTION_ARGS)
+{
+	Oid			relid = PG_GETARG_OID(0);
+	int64		result;
+	PgStat_StatTabEntry *tabentry;
+
+	if ((tabentry = pgstat_fetch_stat_tabentry(relid)) == NULL)
+		result = 0;
+	else
+		result = (int64) (tabentry->n_warm_chains);
+
+	PG_RETURN_INT64(result);
+}
+
 
 Datum
 pg_stat_get_mod_since_analyze(PG_FUNCTION_ARGS)
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 08b6030..81fec03 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -130,6 +130,7 @@ int			VacuumCostPageMiss = 10;
 int			VacuumCostPageDirty = 20;
 int			VacuumCostLimit = 200;
 int			VacuumCostDelay = 0;
+double		VacuumWarmCleanupScale;
 
 int			VacuumPageHit = 0;
 int			VacuumPageMiss = 0;
@@ -137,3 +138,5 @@ int			VacuumPageDirty = 0;
 
 int			VacuumCostBalance = 0;		/* working state for vacuum */
 bool		VacuumCostActive = false;
+
+double		VacuumWarmCleanupIndexScale = 1;
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 8b5f064..ecf8028 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -3017,6 +3017,36 @@ static struct config_real ConfigureNamesReal[] =
 	},
 
 	{
+		{"autovacuum_warmcleanup_scale_factor", PGC_SIGHUP, AUTOVACUUM,
+			gettext_noop("Number of WARM chains, as a fraction of reltuples, that triggers WARM cleanup."),
+			NULL
+		},
+		&autovacuum_warmcleanup_scale,
+		0.1, 0.0, 100.0,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"autovacuum_warmcleanup_index_scale_factor", PGC_SIGHUP, AUTOVACUUM,
+			gettext_noop("Number of WARM pointers, as a fraction of total WARM chains, that triggers index cleanup."),
+			NULL
+		},
+		&autovacuum_warmcleanup_index_scale,
+		0.2, 0.0, 100.0,
+		NULL, NULL, NULL
+	},
+
+	{
+		{"vacuum_warmcleanup_index_scale_factor", PGC_USERSET, WARM_CLEANUP,
+			gettext_noop("Number of WARM pointers in an index, as a fraction of total WARM chains, that triggers cleanup of that index."),
+			NULL
+		},
+		&VacuumWarmCleanupIndexScale,
+		0.2, 0.0, 100.0,
+		NULL, NULL, NULL
+	},
+
+	{
 		{"checkpoint_completion_target", PGC_SIGHUP, WAL_CHECKPOINTS,
 			gettext_noop("Time spent flushing dirty buffers during checkpoint, as fraction of checkpoint interval."),
 			NULL
diff --git a/src/include/catalog/pg_proc.h b/src/include/catalog/pg_proc.h
index 3f1a142..61a4e23 100644
--- a/src/include/catalog/pg_proc.h
+++ b/src/include/catalog/pg_proc.h
@@ -2795,6 +2795,8 @@ DATA(insert OID = 2878 (  pg_stat_get_live_tuples	PGNSP PGUID 12 1 0 0 0 f f f f
 DESCR("statistics: number of live tuples");
 DATA(insert OID = 2879 (  pg_stat_get_dead_tuples	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_dead_tuples _null_ _null_ _null_ ));
 DESCR("statistics: number of dead tuples");
+DATA(insert OID = 3403 (  pg_stat_get_warm_chains	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_warm_chains _null_ _null_ _null_ ));
+DESCR("statistics: number of warm chains");
 DATA(insert OID = 3177 (  pg_stat_get_mod_since_analyze PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_mod_since_analyze _null_ _null_ _null_ ));
 DESCR("statistics: number of tuples changed since last analyze");
 DATA(insert OID = 1934 (  pg_stat_get_blocks_fetched	PGNSP PGUID 12 1 0 0 0 f f f f t f s r 1 0 20 "26" _null_ _null_ _null_ _null_ _null_ pg_stat_get_blocks_fetched _null_ _null_ _null_ ));
diff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h
index 541c2fa..9914143 100644
--- a/src/include/commands/vacuum.h
+++ b/src/include/commands/vacuum.h
@@ -145,6 +145,8 @@ typedef struct VacuumParams
 	int			log_min_duration;		/* minimum execution threshold in ms
 										 * at which  verbose logs are
 										 * activated, -1 to use default */
+	double		warmcleanup_index_scale; /* Fraction of WARM pointers to cause
+										  * index WARM cleanup */
 } VacuumParams;
 
 /* GUC parameters */
diff --git a/src/include/foreign/fdwapi.h b/src/include/foreign/fdwapi.h
index 6ca44f7..2993b1a 100644
--- a/src/include/foreign/fdwapi.h
+++ b/src/include/foreign/fdwapi.h
@@ -134,7 +134,8 @@ typedef void (*ExplainDirectModify_function) (ForeignScanState *node,
 typedef int (*AcquireSampleRowsFunc) (Relation relation, int elevel,
 											   HeapTuple *rows, int targrows,
 												  double *totalrows,
-												  double *totaldeadrows);
+												  double *totaldeadrows,
+												  double *totalwarmchains);
 
 typedef bool (*AnalyzeForeignTable_function) (Relation relation,
 												 AcquireSampleRowsFunc *func,
diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h
index 4c607b2..901960a 100644
--- a/src/include/miscadmin.h
+++ b/src/include/miscadmin.h
@@ -255,6 +255,7 @@ extern int	VacuumPageDirty;
 extern int	VacuumCostBalance;
 extern bool VacuumCostActive;
 
+extern double VacuumWarmCleanupIndexScale;
 
 /* in tcop/postgres.c */
 
diff --git a/src/include/nodes/parsenodes.h b/src/include/nodes/parsenodes.h
index b2afd50..f5fc001 100644
--- a/src/include/nodes/parsenodes.h
+++ b/src/include/nodes/parsenodes.h
@@ -3039,7 +3039,8 @@ typedef enum VacuumOption
 	VACOPT_FULL = 1 << 4,		/* FULL (non-concurrent) vacuum */
 	VACOPT_NOWAIT = 1 << 5,		/* don't wait to get lock (autovacuum only) */
 	VACOPT_SKIPTOAST = 1 << 6,	/* don't process the TOAST table, if any */
-	VACOPT_DISABLE_PAGE_SKIPPING = 1 << 7		/* don't skip any pages */
+	VACOPT_DISABLE_PAGE_SKIPPING = 1 << 7,		/* don't skip any pages */
+	VACOPT_WARM_CLEANUP = 1 << 8	/* do WARM cleanup */
 } VacuumOption;
 
 typedef struct VacuumStmt
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index cd21a78..7d9818b 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -433,6 +433,7 @@ PG_KEYWORD("version", VERSION_P, UNRESERVED_KEYWORD)
 PG_KEYWORD("view", VIEW, UNRESERVED_KEYWORD)
 PG_KEYWORD("views", VIEWS, UNRESERVED_KEYWORD)
 PG_KEYWORD("volatile", VOLATILE, UNRESERVED_KEYWORD)
+PG_KEYWORD("warmclean", WARMCLEAN, TYPE_FUNC_NAME_KEYWORD)
 PG_KEYWORD("when", WHEN, RESERVED_KEYWORD)
 PG_KEYWORD("where", WHERE, RESERVED_KEYWORD)
 PG_KEYWORD("whitespace", WHITESPACE_P, UNRESERVED_KEYWORD)
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 99bdc8b..883cbd4 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -110,6 +110,7 @@ typedef struct PgStat_TableCounts
 
 	PgStat_Counter t_delta_live_tuples;
 	PgStat_Counter t_delta_dead_tuples;
+	PgStat_Counter t_delta_warm_chains;
 	PgStat_Counter t_changed_tuples;
 
 	PgStat_Counter t_blocks_fetched;
@@ -167,11 +168,13 @@ typedef struct PgStat_TableXactStatus
 {
 	PgStat_Counter tuples_inserted;		/* tuples inserted in (sub)xact */
 	PgStat_Counter tuples_updated;		/* tuples updated in (sub)xact */
+	PgStat_Counter tuples_warm_updated;	/* tuples warm-updated in (sub)xact */
 	PgStat_Counter tuples_deleted;		/* tuples deleted in (sub)xact */
 	bool		truncated;		/* relation truncated in this (sub)xact */
 	PgStat_Counter inserted_pre_trunc;	/* tuples inserted prior to truncate */
 	PgStat_Counter updated_pre_trunc;	/* tuples updated prior to truncate */
 	PgStat_Counter deleted_pre_trunc;	/* tuples deleted prior to truncate */
+	PgStat_Counter warm_updated_pre_trunc;	/* tuples warm updated prior to truncate */
 	int			nest_level;		/* subtransaction nest level */
 	/* links to other structs for same relation: */
 	struct PgStat_TableXactStatus *upper;		/* next higher subxact if any */
@@ -370,6 +373,7 @@ typedef struct PgStat_MsgVacuum
 	TimestampTz m_vacuumtime;
 	PgStat_Counter m_live_tuples;
 	PgStat_Counter m_dead_tuples;
+	PgStat_Counter m_warm_chains;
 } PgStat_MsgVacuum;
 
 
@@ -388,6 +392,7 @@ typedef struct PgStat_MsgAnalyze
 	TimestampTz m_analyzetime;
 	PgStat_Counter m_live_tuples;
 	PgStat_Counter m_dead_tuples;
+	PgStat_Counter m_warm_chains;
 } PgStat_MsgAnalyze;
 
 
@@ -630,6 +635,7 @@ typedef struct PgStat_StatTabEntry
 
 	PgStat_Counter n_live_tuples;
 	PgStat_Counter n_dead_tuples;
+	PgStat_Counter n_warm_chains;
 	PgStat_Counter changes_since_analyze;
 
 	PgStat_Counter blocks_fetched;
@@ -1156,10 +1162,11 @@ extern void pgstat_reset_single_counter(Oid objectid, PgStat_Single_Reset_Type t
 
 extern void pgstat_report_autovac(Oid dboid);
 extern void pgstat_report_vacuum(Oid tableoid, bool shared,
-					 PgStat_Counter livetuples, PgStat_Counter deadtuples);
+					 PgStat_Counter livetuples, PgStat_Counter deadtuples,
+					 PgStat_Counter warmchains);
 extern void pgstat_report_analyze(Relation rel,
 					  PgStat_Counter livetuples, PgStat_Counter deadtuples,
-					  bool resetcounter);
+					  PgStat_Counter warmchains, bool resetcounter);
 
 extern void pgstat_report_recovery_conflict(int reason);
 extern void pgstat_report_deadlock(void);
diff --git a/src/include/postmaster/autovacuum.h b/src/include/postmaster/autovacuum.h
index d383fd3..19fb0a2 100644
--- a/src/include/postmaster/autovacuum.h
+++ b/src/include/postmaster/autovacuum.h
@@ -39,6 +39,8 @@ extern int	autovacuum_freeze_max_age;
 extern int	autovacuum_multixact_freeze_max_age;
 extern int	autovacuum_vac_cost_delay;
 extern int	autovacuum_vac_cost_limit;
+extern double autovacuum_warmcleanup_scale;
+extern double autovacuum_warmcleanup_index_scale;
 
 /* autovacuum launcher PID, only valid when worker is shutting down */
 extern int	AutovacuumLauncherPid;
diff --git a/src/include/utils/guc_tables.h b/src/include/utils/guc_tables.h
index 2da9115..cd4532b 100644
--- a/src/include/utils/guc_tables.h
+++ b/src/include/utils/guc_tables.h
@@ -68,6 +68,7 @@ enum config_group
 	WAL_SETTINGS,
 	WAL_CHECKPOINTS,
 	WAL_ARCHIVING,
+	WARM_CLEANUP,
 	REPLICATION,
 	REPLICATION_SENDING,
 	REPLICATION_MASTER,
diff --git a/src/include/utils/rel.h b/src/include/utils/rel.h
index 2b86054..f0dd350 100644
--- a/src/include/utils/rel.h
+++ b/src/include/utils/rel.h
@@ -278,6 +278,8 @@ typedef struct AutoVacOpts
 	int			log_min_duration;
 	float8		vacuum_scale_factor;
 	float8		analyze_scale_factor;
+	float8		warmcleanup_scale_factor;
+	float8		warmcleanup_index_scale;
 } AutoVacOpts;
 
 typedef struct StdRdOptions
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index f7dc4a4..d34aa68 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1759,6 +1759,7 @@ pg_stat_all_tables| SELECT c.oid AS relid,
     pg_stat_get_tuples_warm_updated(c.oid) AS n_tup_warm_upd,
     pg_stat_get_live_tuples(c.oid) AS n_live_tup,
     pg_stat_get_dead_tuples(c.oid) AS n_dead_tup,
+    pg_stat_get_warm_chains(c.oid) AS n_warm_chains,
     pg_stat_get_mod_since_analyze(c.oid) AS n_mod_since_analyze,
     pg_stat_get_last_vacuum_time(c.oid) AS last_vacuum,
     pg_stat_get_last_autovacuum_time(c.oid) AS last_autovacuum,
@@ -1907,6 +1908,7 @@ pg_stat_sys_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
+    pg_stat_all_tables.n_warm_chains,
     pg_stat_all_tables.n_mod_since_analyze,
     pg_stat_all_tables.last_vacuum,
     pg_stat_all_tables.last_autovacuum,
@@ -1951,6 +1953,7 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
     pg_stat_all_tables.n_tup_warm_upd,
     pg_stat_all_tables.n_live_tup,
     pg_stat_all_tables.n_dead_tup,
+    pg_stat_all_tables.n_warm_chains,
     pg_stat_all_tables.n_mod_since_analyze,
     pg_stat_all_tables.last_vacuum,
     pg_stat_all_tables.last_autovacuum,
diff --git a/src/test/regress/expected/warm.out b/src/test/regress/expected/warm.out
index 1f07272..34cdbe5 100644
--- a/src/test/regress/expected/warm.out
+++ b/src/test/regress/expected/warm.out
@@ -745,6 +745,65 @@ SELECT a, b FROM test_toast_warm WHERE b = 104.20;
 (1 row)
 
 DROP TABLE test_toast_warm;
+-- Test VACUUM
+CREATE TABLE test_vacuum_warm (a int unique, b text, c int, d int, e int);
+CREATE INDEX test_vacuum_warm_index1 ON test_vacuum_warm(b);
+CREATE INDEX test_vacuum_warm_index2 ON test_vacuum_warm(c);
+CREATE INDEX test_vacuum_warm_index3 ON test_vacuum_warm(d);
+INSERT INTO test_vacuum_warm VALUES (1, 'a', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (2, 'b', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (3, 'c', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (4, 'd', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (5, 'e', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (6, 'f', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (7, 'g', 100, 200);
+UPDATE test_vacuum_warm SET b = 'u', c = 300 WHERE a = 1;
+UPDATE test_vacuum_warm SET b = 'v', c = 300 WHERE a = 2;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 3;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 4;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 5;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 6;
+-- a plain vacuum cannot clear WARM chains.
+SET enable_seqscan = false;
+SET enable_bitmapscan = false;
+SET seq_page_cost = 10000;
+VACUUM test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+                                        QUERY PLAN                                         
+-------------------------------------------------------------------------------------------
+ Index Only Scan using test_vacuum_warm_index1 on test_vacuum_warm (actual rows=1 loops=1)
+   Index Cond: (b = 'u'::text)
+   Heap Fetches: 1
+(3 rows)
+
+-- Now set vacuum_warmcleanup_index_scale_factor such that only
+-- test_vacuum_warm_index2 can be cleaned up.
+SET vacuum_warmcleanup_index_scale_factor=0.5;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+                                        QUERY PLAN                                         
+-------------------------------------------------------------------------------------------
+ Index Only Scan using test_vacuum_warm_index1 on test_vacuum_warm (actual rows=1 loops=1)
+   Index Cond: (b = 'u'::text)
+   Heap Fetches: 1
+(3 rows)
+
+-- All WARM chains cleaned up, so index-only scan should be used now without
+-- any heap fetches
+SET vacuum_warmcleanup_index_scale_factor=0;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect zero heap-fetches now
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+                                        QUERY PLAN                                         
+-------------------------------------------------------------------------------------------
+ Index Only Scan using test_vacuum_warm_index1 on test_vacuum_warm (actual rows=1 loops=1)
+   Index Cond: (b = 'u'::text)
+   Heap Fetches: 0
+(3 rows)
+
+DROP TABLE test_vacuum_warm;
 -- Toasted heap attributes
 CREATE TABLE toasttest(descr text , cnt int DEFAULT 0, f1 text, f2 text);
 CREATE INDEX testindx1 ON toasttest(descr);
diff --git a/src/test/regress/sql/warm.sql b/src/test/regress/sql/warm.sql
index fc80c0f..ae9db9a 100644
--- a/src/test/regress/sql/warm.sql
+++ b/src/test/regress/sql/warm.sql
@@ -285,6 +285,53 @@ SELECT a, b FROM test_toast_warm WHERE b = 104.20;
 
 DROP TABLE test_toast_warm;
 
+-- Test VACUUM
+
+CREATE TABLE test_vacuum_warm (a int unique, b text, c int, d int, e int);
+CREATE INDEX test_vacuum_warm_index1 ON test_vacuum_warm(b);
+CREATE INDEX test_vacuum_warm_index2 ON test_vacuum_warm(c);
+CREATE INDEX test_vacuum_warm_index3 ON test_vacuum_warm(d);
+
+INSERT INTO test_vacuum_warm VALUES (1, 'a', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (2, 'b', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (3, 'c', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (4, 'd', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (5, 'e', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (6, 'f', 100, 200);
+INSERT INTO test_vacuum_warm VALUES (7, 'g', 100, 200);
+
+UPDATE test_vacuum_warm SET b = 'u', c = 300 WHERE a = 1;
+UPDATE test_vacuum_warm SET b = 'v', c = 300 WHERE a = 2;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 3;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 4;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 5;
+UPDATE test_vacuum_warm SET c = 300 WHERE a = 6;
+
+-- a plain vacuum cannot clear WARM chains.
+SET enable_seqscan = false;
+SET enable_bitmapscan = false;
+SET seq_page_cost = 10000;
+VACUUM test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+
+-- Now set vacuum_warmcleanup_index_scale_factor such that only
+-- test_vacuum_warm_index2 can be cleaned up.
+SET vacuum_warmcleanup_index_scale_factor=0.5;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect non-zero heap-fetches here
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+
+
+-- All WARM chains cleaned up, so index-only scan should be used now without
+-- any heap fetches
+SET vacuum_warmcleanup_index_scale_factor=0;
+VACUUM WARMCLEAN test_vacuum_warm;
+-- We expect zero heap-fetches now
+EXPLAIN (analyze, costs off, timing off, summary off) SELECT b FROM test_vacuum_warm WHERE b = 'u';
+
+DROP TABLE test_vacuum_warm;
+
 -- Toasted heap attributes
 CREATE TABLE toasttest(descr text , cnt int DEFAULT 0, f1 text, f2 text);
 CREATE INDEX testindx1 ON toasttest(descr);
-- 
2.9.3 (Apple Git-75)

#237Peter Geoghegan
pg@bowt.ie
In reply to: Andres Freund (#235)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Apr 5, 2017 at 11:27 AM, Andres Freund <andres@anarazel.de> wrote:

I propose we move this patch to the next CF.

I agree. I think it's too late to be working out fine details around
TOAST like this. This is a patch that touches the storage format in a
fairly fundamental way.

The idea of turning WARM on or off reminds me a little bit of the way
it was at one time suggested that HOT not be used against catalog
tables, a position that Tom pushed against. I'm not saying that it's
necessarily a bad idea, but we should exhaust alternatives, and have a
clear rationale for it.

--
Peter Geoghegan

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

#238Robert Haas
robertmhaas@gmail.com
In reply to: Pavan Deolasee (#236)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Apr 5, 2017 at 2:32 PM, Pavan Deolasee <pavan.deolasee@gmail.com> wrote:

The only other idea that I have for a really clean solution here is to
support this only for index types that are amcanreturn, and actually
compare the value stored in the index tuple with the one stored in the
heap tuple, ensuring that new index tuples are inserted whenever they
don't match and then using the exact same test to determine the
applicability of a given index pointer to a given heap tuple.

Just so that I understand, are you suggesting that while inserting WARM
index pointers, we check if the new index tuple will look exactly the same
as the old index tuple and not insert a duplicate pointer at all?

Yes.

I considered that, but it will require us to do an index lookup during WARM
index insert and for non-unique keys, that may or may not be exactly cheap.

I don't think it requires that. You should be able to figure out
based on the tuple being updated and the corresponding new tuple
whether this will be true or not.

Or we need something like what Claudio wrote to sort all index entries by
heap TIDs. If we do that, then the recheck can be done just based on the
index and heap flags (because we can then turn the old index pointer into a
CLEAR pointer. Index pointer is set to COMMON during initial insert).

Yeah, I think that patch is going to be needed for some of the storage
work I'm interested in doing, too, so I am tentatively in favor of
it, but I wasn't proposing using it here.

The other way is to pass old tuple values along with the new tuple values to
amwarminsert, build index tuples and then do a comparison. For duplicate
index tuples, skip WARM inserts.

This is more what I was thinking. But maybe one of the other ideas
you wrote here is better; not sure.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#239Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Robert Haas (#238)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Apr 6, 2017 at 1:06 AM, Robert Haas <robertmhaas@gmail.com> wrote:

On Wed, Apr 5, 2017 at 2:32 PM, Pavan Deolasee <pavan.deolasee@gmail.com>
wrote:

The other way is to pass old tuple values along with the new tuple

values to

amwarminsert, build index tuples and then do a comparison. For duplicate
index tuples, skip WARM inserts.

This is more what I was thinking. But maybe one of the other ideas
you wrote here is better; not sure.

Ok. I think I suggested this as one of the ideas upthread, to support hash
indexes for example. This might be a good safety-net, but AFAIC what we
have today should work since we pretty much construct index tuples in a
consistent way before doing a comparison.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#240Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Peter Geoghegan (#237)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Apr 6, 2017 at 12:20 AM, Peter Geoghegan <pg@bowt.ie> wrote:

On Wed, Apr 5, 2017 at 11:27 AM, Andres Freund <andres@anarazel.de> wrote:

I propose we move this patch to the next CF.

I agree. I think it's too late to be working out fine details around
TOAST like this. This is a patch that touches the storage format in a
fairly fundamental way.

The idea of turning WARM on or off reminds me a little bit of the way
it was at one time suggested that HOT not be used against catalog
tables, a position that Tom pushed against.

I agree. I am grateful that Tom put his foot down and helped me find answers
to all the hard problems, including catalog tables and create index
concurrently. So I was very clear in my mind from the very beginning that
WARM must support all these things too. Obviously it still doesn't support
everything, like other index methods and expression indexes, but IMHO that's
a much smaller problem. Also, making sure that WARM works on system tables
helped me find corner-case bugs which would have otherwise slipped past
regular regression testing.

I'm not saying that it's
necessarily a bad idea, but we should exhaust alternatives, and have a
clear rationale for it.

One reason why it's probably a good idea is that we know WARM will not be
effective for all use cases and might actually cause performance
regressions for some of them. Even worse, as Robert fears, it might cause
data loss issues. TBH I haven't yet seen any concrete example where
it breaks so badly that it causes data loss, but that may be because the
patch still hasn't received enough eyeballs or outside tests. Having a
table-level option would allow us to incrementally improve things instead of
making the initial patch so large that reviewing it is a complete
nightmare. Maybe it's already a nightmare.

It's not as if HOT did not cause regressions for some specific use
cases. But I think the general benefit was so strong that we never invested
time in finding and tuning for those specific cases, thus avoiding some more
complexity in the code. WARM's benefits are probably not the same as HOT's,
or our standards may have changed, or we now have resources to do much
more elaborate tests that were missing 10 years back. But now that we are
aware of some regressions, the choice is between spending a considerable
amount of time trying to handle every case vs. doing it incrementally and
starting to deliver to the majority of users, while keeping the patch at a
manageable level.

Even if we were to provide table level option, my preference would be keep
it ON by default.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#241Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Andres Freund (#235)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Apr 5, 2017 at 11:57 PM, Andres Freund <andres@anarazel.de> wrote:

On 2017-04-05 09:36:47 -0400, Robert Haas wrote:

By the way, the "Converting WARM chains back to HOT chains" section of
README.WARM seems to be out of date. Any chance you could update that
to reflect the current state and thinking of the patch?

I propose we move this patch to the next CF. That shouldn't prevent you
working on it, although focusing on review of patches that still might
make it wouldn't hurt either.

Thank you all for the reviews, feedback, tests, criticism. And apologies
for keep pushing it till the last minute even though it was clear to me
quite some time back the patch is not going to make it. But if I'd given
up, it would have never received whatever little attention it got. The only
thing that disappoints me is that the patch was held back on no strong
technical grounds - at least none were clear to me. There were concerns
about on-disk changes etc, but most on-disk changes were known for 7 months
now. Reminds me of HOT development, when it would not receive adequate
feedback for quite many months, probably for very similar reasons - complex
patch, changes on-disk format, risky, even though performance gains were
quite substantial. I was much more hopeful this time because we have many
more experts now as compared to then, but we probably have equally more
amount of complex patches to review/commit.

I understand that we would like this patch to go in very early in the
development cycle. So as Alvaro mentioned elsewhere, we will continue to
work on it so that we can get it in as soon as v11 tree open. We shall soon
submit a revised version, with the list of critical things so that we can
discuss them here and get some useful feedback. I hope everyone understands
that the feature of this kind won't happen without on-disk format changes.
So to be able to address any concerns, we will need specific feedback and
workable suggestions, if any.

Finally, my apologies for not spending enough time reviewing other patches.
I know its critical, and I'll try to improve on that. Congratulations to
all whose work got accepted and many thanks to all reviewers/committers/CF
managers. I know how difficult and thankless that work is.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#242Bruce Momjian
bruce@momjian.us
In reply to: Pavan Deolasee (#241)
Re: Patch: Write Amplification Reduction Method (WARM)

On Sat, Apr 8, 2017 at 11:36:13PM +0530, Pavan Deolasee wrote:

Thank you all for the reviews, feedback, tests, criticism. And apologies for
keep pushing it till the last minute even though it was clear to me quite some
time back the patch is not going to make it. But if I'd given up, it would have
never received whatever little attention it got. The only thing that
disappoints me is that the patch was held back on no strong technical grounds -
at least none were clear to me. There were concerns about on-disk changes etc,
but most on-disk changes were known for 7 months now. Reminds me of HOT
development, when it would not receive adequate feedback for quite many months,
probably for very similar reasons - complex patch, changes on-disk format,
risky, even though performance gains were quite substantial. I was much more
hopeful this time because we have many more experts now as compared to then,
but we probably have equally more amount of complex patches to review/commit.

I am sad to see WARM didn't make it into Postgres 10, but I agree
deferment was the right decision, as painful as that is. We now have
something to look forward to in Postgres 11. :-)

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +


#243Andres Freund
andres@anarazel.de
In reply to: Pavan Deolasee (#241)
Re: Patch: Write Amplification Reduction Method (WARM)

Hi,

On 2017-04-08 23:36:13 +0530, Pavan Deolasee wrote:

On Wed, Apr 5, 2017 at 11:57 PM, Andres Freund <andres@anarazel.de> wrote:

On 2017-04-05 09:36:47 -0400, Robert Haas wrote:

By the way, the "Converting WARM chains back to HOT chains" section of
README.WARM seems to be out of date. Any chance you could update that
to reflect the current state and thinking of the patch?

I propose we move this patch to the next CF. That shouldn't prevent you
working on it, although focusing on review of patches that still might
make it wouldn't hurt either.

Thank you all for the reviews, feedback, tests, criticism. And apologies
for keep pushing it till the last minute even though it was clear to me
quite some time back the patch is not going to make it.

What confuses me about that position is that people were advocating
committing it until literally hours before the CF closed.

But if I'd given
up, it would have never received whatever little attention it got. The only
thing that disappoints me is that the patch was held back on no strong
technical grounds - at least none were clear to me. There were concerns
about on-disk changes etc, but most on-disk changes were known for 7 months
now. Reminds me of HOT development, when it would not receive adequate
feedback for quite many months, probably for very similar reasons - complex
patch, changes on-disk format, risky, even though performance gains were
quite substantial. I was much more hopeful this time because we have many
more experts now as compared to then, but we probably have equally more
amount of complex patches to review/commit.

I don't think it's realistic to expect isolated in-depth review of
on-disk changes, when the rest of the patch isn't in a close-to-ready
shape. The likelihood that further work on the patch invalidates such
in-depth review is significant. It's not like only minor details changed
in the last few months.

I do agree that it's hard to get qualified reviewers on bigger patches.
But I think part of the reaction to that has to be active work on that
front: If your patch needs reviews by committers or other topical
experts, you need to explicitly reach out. There's a lot of active
threads, and nobody has time to follow all of them in sufficient detail
to know that certain core parts of an actively developed patch are ready
for review. Offer tit-for-tat reviews. Announce that your patch is
ready, that you're only waiting for review. Post a summary of open
questions...

Finally, my apologies for not spending enough time reviewing other
patches. I know its critical, and I'll try to improve on that.

I do find it more than a bit ironic to lament the early lack of attention
to your patch while also being aware of not having done much review.
This can only scale if everyone reviews each other's patches, not if
there are a few individuals who have to review everyone's patches.

Greetings,

Andres Freund


#244Robert Haas
robertmhaas@gmail.com
In reply to: Pavan Deolasee (#241)
Re: Patch: Write Amplification Reduction Method (WARM)

On Sat, Apr 8, 2017 at 2:06 PM, Pavan Deolasee <pavan.deolasee@gmail.com> wrote:

Thank you all for the reviews, feedback, tests, criticism. And apologies
for keep pushing it till the last minute even though it was clear to me
quite some time back the patch is not going to make it. But if I'd given up,
it would have never received whatever little attention it got. The only
thing that disappoints me is that the patch was held back on no strong
technical grounds - at least none were clear to me. There were concerns
about on-disk changes etc, but most on-disk changes were known for 7 months
now. Reminds me of HOT development, when it would not receive adequate
feedback for quite many months, probably for very similar reasons - complex
patch, changes on-disk format, risky, even though performance gains were
quite substantial. I was much more hopeful this time because we have many
more experts now as compared to then, but we probably have equally more
amount of complex patches to review/commit.

Yes, and as Andres says, you don't help with those, and then you're
upset when your own patch doesn't get attention. I think there are
two ways that this patch could have gotten the detailed and in-depth
review which it needs. First, I would have been more than happy to
spend time on WARM in exchange for a comparable amount of your time
spent on parallel bitmap heap scan, or partition-wise join, or
partitioning, but that time was not forthcoming. Second, there are
numerous senior reviewers at 2ndQuadrant who could have put time
into this patch and didn't. Yes, Alvaro did some review, but it was
not in a huge degree of depth and didn't arrive until quite late,
unless there was more to it than what was posted on the mailing list
which, as a reminder, is the place where review is supposed to take
place.

If senior reviewers with whom you share an employer don't have
time to review your patch, and you aren't willing to trade review time
on other patches for a comparable amount of attention on your own,
then it shouldn't surprise you when people object to it being
committed.

If there is an intention to commit this patch soon after v11
development opens, then signs of serious in-depth review, and
responses to criticisms thus-far proffered, really ought to be in
evidence will in advance of that date. It's slightly better to commit
an inadequately-reviewed patch at the beginning of the cycle than at
the end, but what's even better is thorough review, which I maintain
this patch hasn't really had yet. Amit and others who have started to
dig into this patch a little bit found real problems pretty quickly
when they started digging. Those problems should be addressed, and
review should continue (from whatever source) until no more problems
can be found. Everyone here understands (if they've been paying
attention) that this patch has large benefits in sympathetic cases,
and everyone wants those benefits. What nobody wants (I assume) is
regressions is unsympathetic cases, or data corruption. The patch may
or may not have any data-corrupting bugs, but regressions have been
found and not addressed. Yet, there's still talk of committing this
with as much haste as possible. I do not think that is a responsible
way to do development.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#245Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Robert Haas (#244)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Apr 11, 2017 at 7:10 PM, Robert Haas <robertmhaas@gmail.com> wrote:

Yes, and as Andres says, you don't help with those, and then you're
upset when your own patch doesn't get attention.

I am not upset; I was obviously a bit disappointed, which I think is a very
natural emotion after spending weeks on it. I am not blaming any one
individual (excluding myself) for the outcome, nor the community at large.
And I've moved on. I know everyone is busy getting the
release ready and I see no point discussing this endlessly. We have enough
on our plates for the next few weeks.

Amit and others who have started to
dig into this patch a little bit found real problems pretty quickly
when they started digging.

And I fixed them as quickly as humanly possible.

Those problems should be addressed, and
review should continue (from whatever source) until no more problems
can be found.

Absolutely.

The patch may
or may not have any data-corrupting bugs, but regressions have been
found and not addressed.

I don't know why you say that regressions are not addressed. Here are a few
things I did to address the regressions/reviews/concerns, apart from fixing
all the bugs discovered, but please let me know if there are things I've
not addressed.

1. Improved the interesting attrs patch that Alvaro wrote to address the
regression discovered in fetching more heap attributes. The patch that got
committed in fact improved certain synthetic workloads over the then-master.
2. Based on Petr and your feedback, disabled WARM on toasted attributes to
reduce overhead of fetching/decompressing the attributes.
3. Added code to avoid doing a second index scan when the index does not
contain any WARM pointers. This should address the situation Amit brought
up where only one of the indexes receives WARM inserts.
4. Added code to kill wrong index pointers to do online cleanup.
5. Added code to set a CLEAR pointer to a WARM pointer when we know that
the entire chain is WARM. This should address the workload Dilip ran and
found a regression (I don't think he got a chance to confirm that).
6. Enhanced stats collector to collect information about candidate WARM
chains and added a mechanism to control WARM cleanup at the heap as well as
index level, based on configurable parameters. This gives the user better
control over the additional work that is required for WARM cleanup.
7. Added table level option to disable WARM if nothing else works.
8. Added a mechanism to disable WARM when more than 50% of indexes are being
updated. I ran some benchmarks with different percentages of indexes getting
updated and thought this was a good threshold.
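
The threshold in item 8 amounts to a simple ratio check. A minimal sketch, under the assumption that the cutoff is the stated 50% of indexes (function and variable names are illustrative, not from the patch):

```c
#include <stdbool.h>

/* Sketch of the 50% heuristic in item 8 (names are illustrative, not
 * from the patch).  WARM still inserts new entries into every index
 * whose key changed, so once a majority of indexes need new entries
 * anyway, the recheck machinery costs more than the write
 * amplification it avoids. */
static bool
warm_update_worthwhile(int nindexes, int nindexes_with_changed_keys)
{
    /* Fall back to a regular non-WARM update past the 50% threshold. */
    return nindexes_with_changed_keys * 2 <= nindexes;
}
```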

I may have missed something, but there is no intention to ignore known
regressions/reviews. Of course, I don't think that every regression will be
solvable; if you run a CPU-bound workload set up in a way that repeatedly
exercises the area where WARM is doing additional work without providing
any benefit, maybe you can still find a regression. I am willing to fix
them as long as they are fixable and we are comfortable with the
additional code complexity. IMHO certain trade-offs are good, but I
understand that not everybody will agree with my views and that's ok.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#246Bruce Momjian
bruce@momjian.us
In reply to: Andres Freund (#243)
Re: Patch: Write Amplification Reduction Method (WARM)

On Mon, Apr 10, 2017 at 04:34:50PM -0700, Andres Freund wrote:

Hi,

On 2017-04-08 23:36:13 +0530, Pavan Deolasee wrote:

On Wed, Apr 5, 2017 at 11:57 PM, Andres Freund <andres@anarazel.de> wrote:

On 2017-04-05 09:36:47 -0400, Robert Haas wrote:

By the way, the "Converting WARM chains back to HOT chains" section of
README.WARM seems to be out of date. Any chance you could update that
to reflect the current state and thinking of the patch?

I propose we move this patch to the next CF. That shouldn't prevent you
working on it, although focusing on review of patches that still might
make it wouldn't hurt either.

Thank you all for the reviews, feedback, tests, and criticism. And apologies
for pushing it till the last minute even though it was clear to me
quite some time back that the patch was not going to make it.

What confuses me about that position is that people were advocating
committing it until literally hours before the CF closed.

Yes, I was surprised by that too and have privately emailed people on
this topic.

--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +


#247Amit Kapila
amit.kapila16@gmail.com
In reply to: Pavan Deolasee (#245)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Apr 11, 2017 at 10:50 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

On Tue, Apr 11, 2017 at 7:10 PM, Robert Haas <robertmhaas@gmail.com> wrote:

Yes, and as Andres says, you don't help with those, and then you're
upset when your own patch doesn't get attention.

I am not upset; I was obviously a bit disappointed, which I think is a very
natural emotion after spending weeks on it. I am not blaming any one
individual (excluding myself) nor the community at large for the outcome.
And I've moved on. I know everyone is busy getting the release ready and I
see no point discussing this endlessly. We have enough on our plates for
the next few weeks.

Amit and others who have started to
dig into this patch a little bit found real problems pretty quickly
when they started digging.

And I fixed them as quickly as humanly possible.

Yes, you have responded to them quickly, but I didn't get a chance to
re-verify all of those. However, I think the main point Robert wants
to make is that somebody needs to dig through the complete patch to see
if there is any kind of problem with it.

Those problems should be addressed, and
review should continue (from whatever source) until no more problems
can be found.

Absolutely.

The patch may
or may not have any data-corrupting bugs, but regressions have been
found and not addressed.

I don't know why you say that regressions are not addressed. Here are a few
things I did to address the regressions/reviews/concerns, apart from fixing
all the bugs discovered, but please let me know if there are things I've not
addressed.

1. Improved the interesting attrs patch that Alvaro wrote to address the
regression discovered in fetching more heap attributes. The patch that got
committed in fact improved certain synthetic workloads over the then-master.
2. Based on Petr and your feedback, disabled WARM on toasted attributes to
reduce overhead of fetching/decompressing the attributes.
3. Added code to avoid doing a second index scan when the index does not
contain any WARM pointers. This should address the situation Amit brought up
where only one of the indexes receives WARM inserts.
4. Added code to kill wrong index pointers to do online cleanup.
5. Added code to set a CLEAR pointer to a WARM pointer when we know that
the entire chain is WARM. This should address the workload Dilip ran and
found a regression (I don't think he got a chance to confirm that).

Have you by any chance tried to reproduce it at your end?

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


#248Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Amit Kapila (#247)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Apr 12, 2017 at 9:23 AM, Amit Kapila <amit.kapila16@gmail.com>
wrote:

On Tue, Apr 11, 2017 at 10:50 PM, Pavan Deolasee

And I fixed them as quickly as humanly possible.

Yes, you have responded to them quickly, but I didn't get a chance to
re-verify all of those. However, I think the main point Robert wants
to say is that somebody needs to dig the complete patch to see if
there is any kind of problems with it.

There are no two views about that. I don't even claim that more problems
won't be found during in-depth review. I was only responding to his view
that I did not do much to address the regressions reported during the
review/tests.

5. Added code to set a CLEAR pointer to a WARM pointer when we know that
the entire chain is WARM. This should address the workload Dilip ran and
found a regression (I don't think he got a chance to confirm that)

Have you by any chance tried to reproduce it at your end?

I did reproduce it and verified that the new technique helps the case [1]
(see last para). I did not go to extra lengths to check whether there are
more cases that can still cause a regression, such as when the recheck is
applied only once to each tuple (so the new technique does not yield any
benefit), and whether that still causes a regression and by how much.
However, I ran a pure pgbench workload (only HOT updates) with a smallish
scale factor so that everything fits in memory and did not find any
regression.

Having said that, it's my view (others need not agree with it) that we
need to distinguish between CPU and I/O load, since WARM is designed to
address I/O problems and not so much CPU problems. We also need to see
things in totality, and probably measure both updates and selects if we
are going to WARM update all tuples once and read them once. That doesn't
mean we shouldn't perform more tests, and I am more than willing to fix a
regression if we find one in even a remotely real-world use case.

Thanks,
Pavan

[1]: /messages/by-id/CABOikdOTstHK2y0rDk+Y3Wx9HRe+bZtj3zuYGU=VngneiHo5KQ@mail.gmail.com

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#249Robert Haas
robertmhaas@gmail.com
In reply to: Pavan Deolasee (#245)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Apr 11, 2017 at 1:20 PM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

I don't know why you say that regressions are not addressed. Here are a few
things I did to address the regressions/reviews/concerns, apart from fixing
all the bugs discovered, but please let me know if there are things I've not
addressed.

I'm making statements based on my perception of the discussion on the
thread. Perhaps you did some work that you didn't mention, or that I missed
you mentioning, but it sure didn't feel like all of the things reported got
addressed.

1. Improved the interesting attrs patch that Alvaro wrote to address the
regression discovered in fetching more heap attributes. The patch that got
committed in fact improved certain synthetic workloads over the then-master.

Yep, though it was not clear that all of the regressing cases were
actually addressed, at least not to me.

2. Based on Petr and your feedback, disabled WARM on toasted attributes to
reduce overhead of fetching/decompressing the attributes.

But that's not necessarily the right fix, as per
/messages/by-id/CA+TgmoYUfxy1LseDzsw8uuuLUJHH0r8NCD-Up-HZMC1fYDPH3Q@mail.gmail.com
and subsequent discussion. It's not clear to me from that discussion
that we've got to a place where the method used to identify whether a
WARM update happened during a scan is exactly identical to the method
used to decide whether to perform one in the first place.

3. Added code to avoid doing a second index scan when the index does not
contain any WARM pointers. This should address the situation Amit brought up
where only one of the indexes receives WARM inserts.
4. Added code to kill wrong index pointers to do online cleanup.

Good changes.

5. Added code to set a CLEAR pointer to a WARM pointer when we know that the
entire chain is WARM. This should address the workload Dilip ran and found
a regression (I don't think he got a chance to confirm that)

Which is clearly a thing that should happen before commit, and really,
you ought to be leading the effort to confirm that, not him. It's
good for him to verify that your fix worked, but you should test it
first.

6. Enhanced stats collector to collect information about candidate WARM
chains and added a mechanism to control WARM cleanup at the heap as well as
index level, based on configurable parameters. This gives the user better
control over the additional work that is required for WARM cleanup.

I haven't seen previous discussion of this; therefore I doubt whether
we have agreement on these parameters.

7. Added table level option to disable WARM if nothing else works.

-1 from me.

8. Added a mechanism to disable WARM when more than 50% of indexes are being
updated. I ran some benchmarks with different percentages of indexes getting
updated and thought this was a good threshold.

+1 from me.

I may have missed something, but there is no intention to ignore known
regressions/reviews. Of course, I don't think that every regression will be
solvable; if you run a CPU-bound workload set up in a way that repeatedly
exercises the area where WARM is doing additional work without providing
any benefit, maybe you can still find a regression. I am willing to fix
them as long as they are fixable and we are comfortable with the
additional code complexity. IMHO certain trade-offs are good, but I
understand that not everybody will agree with my views and that's ok.

The point here is that we can't make intelligent decisions about
whether to commit this feature unless we know which situations get
better and which get worse and by how much. I don't accept as a
general principle the idea that CPU-bound workloads don't matter.
Obviously, I/O-bound workloads matter too, but we can't throw
CPU-bound workloads under the bus. Now, avoiding index bloat does
also save CPU, so it is easy to imagine that WARM could come out ahead
even if each update consumes slightly more CPU when actually updating,
so we might not actually regress. If we do, I guess I'd want to know
why.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#250Peter Geoghegan
pg@bowt.ie
In reply to: Robert Haas (#249)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Apr 12, 2017 at 10:12 AM, Robert Haas <robertmhaas@gmail.com> wrote:

I may have missed something, but there is no intention to ignore known
regressions/reviews. Of course, I don't think that every regression will be
solvable; if you run a CPU-bound workload set up in a way that repeatedly
exercises the area where WARM is doing additional work without providing
any benefit, maybe you can still find a regression. I am willing to fix
them as long as they are fixable and we are comfortable with the
additional code complexity. IMHO certain trade-offs are good, but I
understand that not everybody will agree with my views and that's ok.

The point here is that we can't make intelligent decisions about
whether to commit this feature unless we know which situations get
better and which get worse and by how much. I don't accept as a
general principle the idea that CPU-bound workloads don't matter.
Obviously, I/O-bound workloads matter too, but we can't throw
CPU-bound workloads under the bus. Now, avoiding index bloat does
also save CPU, so it is easy to imagine that WARM could come out ahead
even if each update consumes slightly more CPU when actually updating,
so we might not actually regress. If we do, I guess I'd want to know
why.

I myself wonder if this CPU overhead is at all related to LP_DEAD
recycling during page splits. I have my suspicions that the recycling
has some relationship to locality, which leads me to want to
investigate how Claudio Freire's patch to consistently treat heap TID
as part of the B-Tree sort order could help, both in general, and for
WARM.

Bear in mind that the recycling has to happen with an exclusive buffer
lock held on a leaf page, which could hold up rather a lot of scans
that need to visit the same value even if it's on some other,
relatively removed leaf page.

This is just a theory.

--
Peter Geoghegan



#251Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Peter Geoghegan (#250)
Re: Patch: Write Amplification Reduction Method (WARM)

On Thu, Apr 13, 2017 at 2:04 AM, Peter Geoghegan <pg@bowt.ie> wrote:

On Wed, Apr 12, 2017 at 10:12 AM, Robert Haas <robertmhaas@gmail.com>
wrote:

I may have missed something, but there is no intention to ignore known
regressions/reviews. Of course, I don't think that every regression will be
solvable; if you run a CPU-bound workload set up in a way that repeatedly
exercises the area where WARM is doing additional work without providing
any benefit, maybe you can still find a regression. I am willing to fix
them as long as they are fixable and we are comfortable with the
additional code complexity. IMHO certain trade-offs are good, but I
understand that not everybody will agree with my views and that's ok.

The point here is that we can't make intelligent decisions about
whether to commit this feature unless we know which situations get
better and which get worse and by how much. I don't accept as a
general principle the idea that CPU-bound workloads don't matter.
Obviously, I/O-bound workloads matter too, but we can't throw
CPU-bound workloads under the bus. Now, avoiding index bloat does
also save CPU, so it is easy to imagine that WARM could come out ahead
even if each update consumes slightly more CPU when actually updating,
so we might not actually regress. If we do, I guess I'd want to know
why.

I myself wonder if this CPU overhead is at all related to LP_DEAD
recycling during page splits.

With respect to the tests that I, Dilip and others did for WARM, I
think we were kinda exercising the worst-case scenario. Like in one case,
we created a table with 40% fill factor, created an index with a large
text column, WARM updated all rows in the table, turned off autovacuum so
that chain conversion does not take place, and then repeatedly ran a select
query on those rows using the index which did not receive WARM inserts.

IOW we were only measuring the overhead of doing the recheck by
constructing an index tuple from the heap tuple and then comparing it
against the existing index tuple. And we did find a regression, which is
not entirely surprising because obviously that code path does extra work
when it needs to do a recheck. And we're only measuring that overhead
without taking into account the benefits of WARM to the system in general.
I think the counter-argument to that is that such a workload may exist
somewhere and might be regressed.
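
The recheck being measured here can be modeled in a few lines. This is only a toy sketch with illustrative names; the real code reconstructs an IndexTuple from the heap tuple (via index_form_tuple) and compares it datum-by-datum against the existing index tuple:

```c
#include <stdbool.h>
#include <string.h>

/* Toy model of the per-tuple recheck overhead described above.  Both
 * tuples are reduced to a single string key; all names are
 * illustrative, not PostgreSQL's. */
typedef struct { const char *indexed_col; } HeapTupleModel;
typedef struct { const char *key;         } IndexTupleModel;

static bool
warm_recheck(const IndexTupleModel *itup, const HeapTupleModel *htup)
{
    /* This comparison is the extra work a scan pays on a WARM chain:
     * the index entry may point into a chain whose current tuple no
     * longer matches the entry's key, so the key must be recomputed
     * from the heap tuple and compared before returning the tuple. */
    return strcmp(itup->key, htup->indexed_col) == 0;
}
```

The benchmark above pays this comparison (plus attribute fetch/decompression) on every fetched tuple while never reaping any write-amplification benefit, which is why it represents a worst case.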

I have my suspicions that the recycling has some relationship to locality,
which leads me to want to investigate how Claudio Freire's patch to
consistently treat heap TID as part of the B-Tree sort order could help,
both in general, and for WARM.

It could be, especially if we redesign recheck solely based on the index
pointer state and the heap tuple state. That could be more performant for
selects and could also be more robust, but it will require index inserts to
get hold of the old index pointer (based on the root TID), compare it
against the new index tuple, and either skip the insert (if everything
matches) or set a PREWARM flag on the old pointer and insert the new tuple
with a POSTWARM flag.

Searching for the old index pointer will be a non-starter for non-unique
indexes, unless they are also sorted by TID, something that Claudio's patch
does. What I am not sure about is whether the patch on its own can absorb
the performance implications, because it increases the index tuple width
(and probably the index maintenance cost too).
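
The (key, heap TID) ordering referred to here can be sketched as a comparator. A toy illustration only, with illustrative field names; the actual patch makes the TID an implicit trailing column of the B-Tree sort order:

```c
/* Toy comparator illustrating the (key, heap TID) index ordering:
 * the heap TID (block, offset) acts as a tiebreaker column, so the
 * entry for a given root TID can be located by search even in a
 * non-unique index. */
typedef struct
{
    int            key;     /* indexed value          */
    unsigned int   block;   /* heap block number      */
    unsigned short offset;  /* heap line pointer slot */
} IndexEntry;

static int
index_entry_cmp(const IndexEntry *a, const IndexEntry *b)
{
    if (a->key != b->key)
        return a->key < b->key ? -1 : 1;
    if (a->block != b->block)        /* heap TID breaks key ties */
        return a->block < b->block ? -1 : 1;
    if (a->offset != b->offset)
        return a->offset < b->offset ? -1 : 1;
    return 0;
}
```

With this total order, duplicate keys are no longer an unordered run, which is what makes the "find the old index pointer by root TID" step feasible for non-unique indexes.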

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#252Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Robert Haas (#249)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Apr 12, 2017 at 10:42 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Tue, Apr 11, 2017 at 1:20 PM, Pavan Deolasee

5. Added code to set a CLEAR pointer to a WARM pointer when we know that
the entire chain is WARM. This should address the workload Dilip ran and
found a regression (I don't think he got a chance to confirm that)

Which is clearly a thing that should happen before commit, and really,
you ought to be leading the effort to confirm that, not him. It's
good for him to verify that your fix worked, but you should test it
first.

Not sure why you think I did not do the tests. I did and reported that it
helps reduce the regression. Last para here:
https://www.postgresql.org/message-id/CABOikdOTstHK2y0rDk%2BY3Wx9HRe%2BbZtj3zuYGU%3DVngneiHo5KQ%40mail.gmail.com

I understand it might have got lost in the conversation and I possibly did
a poor job of explaining it. From my perspective, I did not want to say that
everything is hunky-dory based on my own tests because 1. I probably do not
have access to the same kind of machine Dilip has and 2. It's better to get
it confirmed by someone who initially reported it. Again, I fully respect
that he would be busy with other things and I do not expect him or anyone
else to test/review my patch as a priority. The only point I am trying to
make is that I did my own tests and made sure that it helps.

(Having said that, I am not sure if changing the pointer state from CLEAR
to WARM is indeed a good change. Having thought more about it and after
looking at the page-split code, I now think that this might just confuse
the WARM cleanup code and make the algorithm that much harder to prove.)

6. Enhanced stats collector to collect information about candidate WARM
chains and added a mechanism to control WARM cleanup at the heap as well as
index level, based on configurable parameters. This gives the user better
control over the additional work that is required for WARM cleanup.

I haven't seen previous discussion of this; therefore I doubt whether
we have agreement on these parameters.

Sure. I will bring these up in a more structured manner for everyone to
comment.

7. Added table level option to disable WARM if nothing else works.

-1 from me.

Ok. It's kind of a last resort for me too. But at some point, we might want
to make that call if we find an important use case that regresses because of
WARM and we see no way to fix it, or at least not without a whole lot of
complexity.

I may have missed something, but there is no intention to ignore known
regressions/reviews. Of course, I don't think that every regression will be
solvable; if you run a CPU-bound workload set up in a way that repeatedly
exercises the area where WARM is doing additional work without providing
any benefit, maybe you can still find a regression. I am willing to fix
them as long as they are fixable and we are comfortable with the
additional code complexity. IMHO certain trade-offs are good, but I
understand that not everybody will agree with my views and that's ok.

The point here is that we can't make intelligent decisions about
whether to commit this feature unless we know which situations get
better and which get worse and by how much.

Sure.

I don't accept as a
general principle the idea that CPU-bound workloads don't matter.
Obviously, I/O-bound workloads matter too, but we can't throw
CPU-bound workloads under the bus.

Yeah, definitely not suggesting that.

Now, avoiding index bloat does
also save CPU, so it is easy to imagine that WARM could come out ahead
even if each update consumes slightly more CPU when actually updating,
so we might not actually regress. If we do, I guess I'd want to know
why.

Well, the kinds of tests we did to look for regressions were worst-case
scenarios. For example, in the test where we found a 10-15% regression, we
used a wide index (so recheck cost is high), WARM updated all rows,
disabled auto-vacuum (so no chain conversion) and then repeatedly selected
the rows from the index, thus incurring the recheck overhead and in fact
measuring only that.

When I measured WARM on tables with a small scale factor so that everything
fits in memory, I found a modest 20% improvement in tps. So, you're right,
WARM might also help in-memory workloads. But that will show up only if we
measure both UPDATEs and SELECTs. If we measure only SELECTs, and that too
in a state where we are paying the full price for having done a WARM update,
obviously we will only see a regression, if any. Not saying we should ignore
that. We should in fact measure all possible loads, and try to fix as many
as we can, especially if they resemble a real-world use case, but there
will be trade-offs to make. So I highly appreciate Amit and Dilip's help
with coming up with additional tests. At least it gives us the opportunity
to think about how to fix them, even if we can't fix all of them.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#253Jaime Casanova
jaime.casanova@2ndquadrant.com
In reply to: Pavan Deolasee (#236)
1 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On 5 April 2017 at 13:32, Pavan Deolasee <pavan.deolasee@gmail.com> wrote:

Ok. I've extensively updated the README to match the current state of
affairs. Updated patch set attached.

Hi Pavan,

I ran a test on the current WARM patchset: I used pgbench with a scale of
20 and a fillfactor of 90, then started the pgbench run with 6 clients in
parallel; I also ran sqlsmith on it.

And I got a core dump after some time of those things running.

The assertion that fails is:

"""
LOG: statement: UPDATE pgbench_tellers SET tbalance = tbalance + 3519
WHERE tid = 34;
TRAP: FailedAssertion("!(((bool) (((const void*)(&tup->t_ctid) !=
((void *)0)) && (((&tup->t_ctid)->ip_posid & ((((uint16) 1) << 13) -
1)) != 0))))", File: "../../../../src/include/access/htup_details.h",
Line: 659)
"""

--
Jaime Casanova www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Attachments:

warm_bt_20170413.txttext/plain; charset=US-ASCII; name=warm_bt_20170413.txtDownload
#254Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Jaime Casanova (#253)
1 attachment(s)
Re: Patch: Write Amplification Reduction Method (WARM)

On Fri, Apr 14, 2017 at 9:21 PM, Jaime Casanova
<jaime.casanova@2ndquadrant.com> wrote:

Hi Pavan,

I ran a test on the current WARM patchset: I used pgbench with a scale of
20 and a fillfactor of 90, then started the pgbench run with 6 clients in
parallel; I also ran sqlsmith on it.

And I got a core dump after some time of those things running.

The assertion that fails is:

"""
LOG: statement: UPDATE pgbench_tellers SET tbalance = tbalance + 3519
WHERE tid = 34;
TRAP: FailedAssertion("!(((bool) (((const void*)(&tup->t_ctid) !=
((void *)0)) && (((&tup->t_ctid)->ip_posid & ((((uint16) 1) << 13) -
1)) != 0))))", File: "../../../../src/include/access/htup_details.h",
Line: 659)
"""

Hi Jaime,

Thanks for doing the tests and reporting the problem. Per our chat, the
assertion failure occurs only after crash recovery. I traced it down to
the point where we were failing to set the root line pointer correctly
during crash recovery. In fact, we were setting it, but only after the
local changes had been copied to the on-disk image, so the change failed
to make it to storage.

Can you please test with the attached patch and confirm it works? I was
able to reproduce the exact same assertion on my end and the patch seems to
fix it. But an additional check won't harm.

I'll include the fix in the next set of patches.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

Attachments:

warm_crash_recovery_fix.patchapplication/octet-stream; name=warm_crash_recovery_fix.patchDownload
diff --git b/src/backend/access/heap/heapam.c a/src/backend/access/heap/heapam.c
index d309dd3..3dd6910 100644
--- b/src/backend/access/heap/heapam.c
+++ a/src/backend/access/heap/heapam.c
@@ -9490,16 +9490,16 @@ heap_xlog_update(XLogReaderState *record, bool hot_update)
 		if (warm_update)
 			HeapTupleHeaderSetWarmUpdated(htup);
 
-		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
-		if (offnum == InvalidOffsetNumber)
-			elog(PANIC, "failed to add tuple");
-
 		/*
 		 * Make sure the tuple is marked as the latest and root offset
 		 * information is restored.
 		 */
 		HeapTupleHeaderSetHeapLatest(htup, xlrec->root_offnum);
 
+		offnum = PageAddItem(page, (Item) htup, newlen, offnum, true, true);
+		if (offnum == InvalidOffsetNumber)
+			elog(PANIC, "failed to add tuple");
+
 		if (xlrec->flags & XLH_UPDATE_NEW_ALL_VISIBLE_CLEARED)
 			PageClearAllVisible(page);
 
diff --git b/src/include/access/htup_details.h a/src/include/access/htup_details.h
index ff22113..87b77eb 100644
--- b/src/include/access/htup_details.h
+++ a/src/include/access/htup_details.h
@@ -557,6 +557,7 @@ HeapTupleHeaderSetHeapLatest(HeapTupleHeader tup, Offset offnum)
 	Assert(OffsetNumberIsValid(offnum));
 
 	tup->t_infomask2 |= HEAP_LATEST_TUPLE;
+	Assert(OffsetNumberIsValid(offnum));
 	ItemPointerSetOffsetNumber(&tup->t_ctid, offnum);
 }
 
@@ -630,7 +631,7 @@ static inline void
 HeapTupleHeaderSetNextTid(HeapTupleHeader tup, ItemPointer tid)
 {
 	ItemPointerCopy(tid, &(tup->t_ctid));
-
+	Assert(ItemPointerIsValid(tid));
 	HeapTupleHeaderClearHeapLatest(tup);
 }
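
The essence of the reordering in this patch can be modeled in a few lines. This is an editorial toy sketch with illustrative types, not PostgreSQL's: page_add_item() copies the tuple into the page buffer (as PageAddItem does), so a field set on the local copy after the call only changes the local copy and is lost from the on-disk image.

```c
#include <string.h>

typedef struct { unsigned short root_offnum; } TupleModel;

static TupleModel page;                 /* stand-in for the page image */

static void
page_add_item(const TupleModel *tup)
{
    memcpy(&page, tup, sizeof(TupleModel));  /* copy into the "page" */
}

static unsigned short
replay_buggy(void)
{
    TupleModel local = { 0 };
    page_add_item(&local);              /* copied with root_offnum == 0 */
    local.root_offnum = 7;              /* too late: page never sees it */
    return page.root_offnum;
}

static unsigned short
replay_fixed(void)
{
    TupleModel local = { 0 };
    local.root_offnum = 7;              /* set the root offset first... */
    page_add_item(&local);              /* ...then copy into the page   */
    return page.root_offnum;
}
```

The buggy ordering leaves a zero (invalid) root offset on disk, which is exactly what the ItemPointerIsValid assertion then trips over after crash recovery.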
 
#255Robert Haas
robertmhaas@gmail.com
In reply to: Pavan Deolasee (#254)
Re: Patch: Write Amplification Reduction Method (WARM)

On Tue, Apr 18, 2017 at 4:25 AM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

I'll include the fix in the next set of patches.

I haven't seen a new set of patches. Are you intending to continue
working on this?

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


#256Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Robert Haas (#255)
Re: Patch: Write Amplification Reduction Method (WARM)

On Wed, Jul 26, 2017 at 6:26 PM, Robert Haas <robertmhaas@gmail.com> wrote:

On Tue, Apr 18, 2017 at 4:25 AM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

I'll include the fix in the next set of patches.

I haven't seen a new set of patches. Are you intending to continue
working on this?

Looks like I'll be short on bandwidth to pursue this further, given other
work commitments including the upcoming Postgres-XL 10 release. While I
haven't worked on the patch since April, I think it was in pretty good
shape where I left it. But it's going to be incredibly difficult to
estimate the amount of further effort required, especially with testing
and validating all the use cases and finding optimisations to fix
regressions in all those cases. Also, many fundamental concerns around the
patch touching the core of the database engine can only be addressed if
some senior hackers, like you, take serious interest in the patch.

I'll be happy if someone wants to continue hacking on the patch further and
get it into committable shape. I can stay actively involved, but TBH the
amount of time I can invest is far less than what I could during the
last cycle.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#257Peter Geoghegan
pg@bowt.ie
In reply to: Pavan Deolasee (#256)
Re: Patch: Write Amplification Reduction Method (WARM)

Pavan Deolasee <pavan.deolasee@gmail.com> wrote:

I'll be happy if someone wants to continue hacking on the patch further and
get it into committable shape. I can stay actively involved, but TBH the
amount of time I can invest is far less than what I could during the
last cycle.

That's disappointing.

I personally find it very difficult to assess something like this. The
problem is that even if you can demonstrate that the patch is strictly
better than what we have today, the risk of reaching a local maximum
exists. Do we really want to double down on HOT?

If I'm not mistaken, the goal of WARM is, roughly speaking, to make
updates that would not be HOT-safe today do a "partial HOT update". My
concern with that idea is that it doesn't do much for the worst case.

--
Peter Geoghegan


#258Pavan Deolasee
pavan.deolasee@gmail.com
In reply to: Peter Geoghegan (#257)
Re: Patch: Write Amplification Reduction Method (WARM)

On Fri, Jul 28, 2017 at 5:57 AM, Peter Geoghegan <pg@bowt.ie> wrote:

Pavan Deolasee <pavan.deolasee@gmail.com> wrote:

I'll be happy if someone wants to continue hacking the patch further and
get it in a committable shape. I can stay actively involved. But TBH the
amount of time I can invest is far less compared to what I could during the
last cycle.

That's disappointing.

Yes, it is even more so for me. But I was hard-pressed to choose between
Postgres-XL 10 and WARM. Given the ever-increasing interest in XL and my
ability to control its outcome, I thought it made sense to focus on XL for
now.

I personally find it very difficult to assess something like this.

One good thing is that the patch is ready and fully functional, so those
who are keen can run real performance tests and see the actual impact of
the patch.

The
problem is that even if you can demonstrate that the patch is strictly
better than what we have today, the risk of reaching a local maximum
exists. Do we really want to double down on HOT?

Well, HOT has served us well for over a decade now, so I won't hesitate to
place my bets on WARM.

If I'm not mistaken, the goal of WARM is, roughly speaking, to make
updates that would not be HOT-safe today do a "partial HOT update". My
concern with that idea is that it doesn't do much for the worst case.

I see your point. But I would like to think of it this way: does the
technology significantly help many common use cases that are currently not
addressed by HOT? It probably won't help all workloads; that's a given.
Also, we don't have any credible alternative, while this patch has
progressed quite a lot. Maybe Robert will soon present the pluggable
storage/UNDO patch, and that will cover everything and more that is
currently covered by HOT/WARM. That would probably make many other things
redundant.

Thanks,
Pavan

--
Pavan Deolasee http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

#259Robert Haas
robertmhaas@gmail.com
In reply to: Pavan Deolasee (#258)
Re: Patch: Write Amplification Reduction Method (WARM)

On Fri, Jul 28, 2017 at 12:39 AM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

I see your point. But I would like to think of it this way: does the
technology significantly help many common use cases that are currently not
addressed by HOT? It probably won't help all workloads; that's a given.
Also, we don't have any credible alternative, while this patch has
progressed quite a lot. Maybe Robert will soon present the pluggable
storage/UNDO patch, and that will cover everything and more that is
currently covered by HOT/WARM. That would probably make many other things
redundant.

A lot of work is currently being done on this, by multiple people,
mostly not including me, and a lot of good progress is being made.
But it's not exactly ready to ship, nor will it be any time soon. I
think we can run a 1-client pgbench without crashing the server at
this point, if you tweak the configuration a little bit and don't do
anything fancy like, say, trying to roll back a transaction. :-)

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

#260Peter Geoghegan
pg@bowt.ie
In reply to: Pavan Deolasee (#258)
In-place index updates and HOT (Was: Patch: Write Amplification Reduction Method (WARM))

Pavan Deolasee <pavan.deolasee@gmail.com> wrote:

One good thing is that the patch is ready and fully functional, so those
who are keen can run real performance tests and see the actual impact of
the patch.

Very true.

I see your point. But I would like to think of it this way: does the
technology significantly help many common use cases that are currently not
addressed by HOT? It probably won't help all workloads; that's a given.
Also, we don't have any credible alternative, while this patch has
progressed quite a lot. Maybe Robert will soon present the pluggable
storage/UNDO patch, and that will cover everything and more that is
currently covered by HOT/WARM. That would probably make many other things
redundant.

Well, I don't assume that it will; again, I just don't know. I agree
with your general assessment of things, which is that WARM, EDB's
Z-Heap/UNDO project, and things like IOTs have significant overlap in
terms of the high-level problems that they fix. While it's hard to say
just how much overlap exists, it's clearly more than a little. And, you
are right that we don't have a credible alternative in this general
category right now. The WARM patch is available today.

As you may have noticed, in recent weeks I've been very vocal about the
role of index bloat in cases where bloat has a big impact on production
workloads. I think that it has an under-appreciated role in workloads
that deteriorate over time, as bloat accumulates. Perhaps HOT made such
a big difference to workloads 10 years ago not just because it prevented
creating new index entries. It also reduced fragmentation of the
keyspace in indexes, by never inserting duplicates in the first place.

I have some rough ideas related to this, and to the general questions
you're addressing. I'd like to run these by you.

In-place index updates + HOT
============================

Maybe we could improve things markedly in this general area by "chaining
together HOT chains", and updating index heap pointers in place, to
point to the start of the latest HOT chain in that chain of chains
(provided the index tuple was "logically unchanged" -- otherwise, you'd
need to have both sets of indexed values at once, of course). Index
tuples therefore always point to the latest HOT chain, favoring recent
MVCC snapshots over older ones.

Pruning
-------

HOT pruning is great because you can remove heap bloat without worrying
about there being index entries with heap item pointers pointing to what
is removed. But isn't that limitation as much about what is in the index
as it is about what is in the heap?

Under this scheme, you don't even have to keep around the old ItemId
stub when pruning, if it's a sufficiently old HOT chain that no index
points to the corresponding TID. That may not seem like a lot of bloat
to have to keep around, but it accumulates within a page until VACUUM
runs, ultimately limiting the effectiveness of pruning for certain
workloads.

Old snapshots/row versions
--------------------------

Superseding HOT chains have their last heap tuple's t_tid point to the
start of the preceding/superseded HOT chain (not their own TID, as
today, which is redundant), which may or may not be on the same heap
page. That's how old snapshots go backwards to get old versions, without
needing their own "logically redundant" index entries. So with
UPDATE-heavy workloads that are essentially HOT-safe today, performance
doesn't tank due to a long-running transaction that obstructs pruning
within a heap page and thus necessitates the insertion of new index tuples.
That's the main justification for this entire design.
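
The backward walk described above might be sketched with a toy model. All
struct and field names here (ToyTuple, xmin, next, back) are illustrative
stand-ins, not PostgreSQL's actual heapam structures, and visibility is
reduced to a single xid cutoff:

```c
#include <stddef.h>

/* Toy model of "chains of HOT chains".  Each tuple records the xid
 * that created it; the last tuple of a superseding chain points back
 * to the start of the superseded chain. */
typedef struct ToyTuple
{
    unsigned    xmin;           /* creating transaction id */
    struct ToyTuple *next;      /* next (newer) tuple in this HOT chain */
    struct ToyTuple *back;      /* on a chain's last tuple: start of the
                                 * older, superseded chain (else NULL) */
} ToyTuple;

/*
 * Return the newest version visible to a snapshot that can only see
 * xids below snap_xmin, starting from the chain the index points at
 * (always the latest chain under this scheme).
 */
static ToyTuple *
find_visible(ToyTuple *chain_start, unsigned snap_xmin)
{
    ToyTuple   *result = NULL;
    ToyTuple   *t = chain_start;

    while (t != NULL)
    {
        if (t->xmin < snap_xmin)
            result = t;         /* visible; keep walking for a newer one */

        if (t->next != NULL)
            t = t->next;
        else if (result != NULL)
            return result;      /* newest visible version in this chain */
        else
            t = t->back;        /* too new: follow the backpointer into
                                 * the superseded chain */
    }
    return NULL;
}
```

The point of the sketch is the trade-off: recent snapshots terminate within
the latest chain, and only genuinely old snapshots pay the cost of walking
back into superseded chains, possibly on other pages.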

It's also possible that pruning can be taught that since only one index
update was logically necessary when the to-be-pruned HOT chain was
created, it's worth doing a "retail index tuple deletion" against the
index tuple that was logically necessary, then completely obliterating
the HOT chain, stub item pointer and all.

Bloat and locality
------------------

README.HOT argues against HOT chains that span pages, which this is a
bit like, on the grounds that it's bad news that every recent snapshot
has to go through the old heap page. That makes sense, but only because
the temporal locality there is horrible, which would not be the case
here. README.HOT says that that cost is not worth the benefit of
preventing a new index write, but I think that it ought to take into
account that not all index writes are equal. There is an appreciable
difference between inserting a new tuple, and updating one in-place. We
can remove the cost (hurting new snapshots by making them go through old
heap pages) while preserving most of the benefits (no logically
unnecessary index bloat).

The benefit of HOT is clearly more bloat prevention than not having to
visit indexes at all. InnoDB secondary index updates update the index
twice: The first time, during the update itself, and the second time, by
the purge thread, once the xact commits. Clearly they care about doing
clean-up of indexes eagerly. Also, a key design goal of UNDO within the
original ARIES paper is to make deletion of index tuples make the space
reclaimable immediately, even before the transaction commits. While it
wouldn't be practical to get that to work for the general case on an
MVCC system, I think it can work for logically unchanged index tuples
through in-place index tuple updates. If nothing else, the priorities
for ARIES tell us something.

Obviously what I describe here is totally hand-wavy, and actually
undertaking this project would be incredibly difficult. If nothing else
it may be useful to you, or to others, to hear me slightly reframe the
benefits of HOT in this way. Moreover, a lot of what I'm describing here
has overlap with stuff that I presume that EDB will need for
Z-Heap/UNDO. For example, since it's clear that you cannot immediately
remove an updated secondary index tuple in UNDO, it still has to have
its own "out of band" lifetime. How is it ever going to get physically
deleted, otherwise? So maybe you end up updating that in-place, to point
into UNDO directly, rather than pointing to a heap TID that is
necessarily the most recent version, which could introduce ambiguity
(what happens when it is changed, then changed back?). That's actually
rather similar to what you could do with HOT + the existing heapam,
except that there is a clearer demarcation of "current" (heap) and
"pending garbage" (UNDO) within Robert's design.

--
Peter Geoghegan

#261Claudio Freire
klaussfreire@gmail.com
In reply to: Peter Geoghegan (#260)
Re: In-place index updates and HOT (Was: Patch: Write Amplification Reduction Method (WARM))

On Fri, Jul 28, 2017 at 8:32 PM, Peter Geoghegan <pg@bowt.ie> wrote:

README.HOT says that that cost is not worth the benefit of
preventing a new index write, but I think that it ought to take into
account that not all index writes are equal. There is an appreciable
difference between inserting a new tuple, and updating one in-place. We
can remove the cost (hurting new snapshots by making them go through old
heap pages) while preserving most of the benefits (no logically
unnecessary index bloat).

It's a neat idea.

And, well, now that you mention, you don't need to touch indexes at all.

You can create the new chain, and "update" the index to point to it,
without ever touching the index itself, since you can repoint the old
HOT chain's start line pointer to point to the new HOT chain, create
a new pointer for the old one and point to it in the new HOT chain's
t_tid.

Existing index tuples thus now point to the right HOT chain without
having to go into the index and make any changes.

You do need the new HOT chain to live in the same page for this,
however.
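
The repointing trick might be illustrated with a toy line-pointer array
(all names here are made up for illustration; real pages use ItemId arrays
and offsets, and the operation would need locking and WAL):

```c
/* Toy model of repointing a line pointer so that existing index
 * tuples, which store a fixed line pointer number, "see" the new
 * HOT chain without any index change. */

#define MAX_LP 16

typedef struct ToyHeapTuple
{
    int         value;
    int         t_tid_lp;       /* line pointer of the superseded chain,
                                 * or -1 if there is none */
} ToyHeapTuple;

typedef struct ToyPage
{
    ToyHeapTuple *lp[MAX_LP];   /* line pointer array */
    int         nlp;
} ToyPage;

/*
 * Start a new chain for an update whose index keys are unchanged:
 * move the old root to a fresh line pointer, point the original line
 * pointer (which the indexes reference) at the new tuple, and link
 * the new tuple back to the old chain.  Returns the new root.
 */
static ToyHeapTuple *
repoint_update(ToyPage *page, int lpnum, ToyHeapTuple *newtup)
{
    ToyHeapTuple *oldroot = page->lp[lpnum];
    int         newlp = page->nlp++;    /* fresh lp for the old chain */

    page->lp[newlp] = oldroot;
    newtup->t_tid_lp = newlp;           /* backpointer to old chain */
    page->lp[lpnum] = newtup;           /* indexes now reach new chain */
    return newtup;
}
```

Because every index tuple stores the original line pointer number,
redirecting that one slot retargets all indexes at once; the same-page
constraint Claudio notes follows directly, since a line pointer can only
address tuples on its own page.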

#262Peter Geoghegan
pg@bowt.ie
In reply to: Claudio Freire (#261)
Re: In-place index updates and HOT (Was: Patch: Write Amplification Reduction Method (WARM))

Claudio Freire <klaussfreire@gmail.com> wrote:

README.HOT says that that cost is not worth the benefit of
preventing a new index write, but I think that it ought to take into
account that not all index writes are equal. There is an appreciable
difference between inserting a new tuple, and updating one in-place. We
can remove the cost (hurting new snapshots by making them go through old
heap pages) while preserving most of the benefits (no logically
unnecessary index bloat).

It's a neat idea.

Thanks.

I think it's important to both prevent index bloat, and to make sure
that only the latest version is pointed to within indexes. There are
only so many ways that that can be done. I've tried to come up with a
way of doing those two things that breaks as little of heapam.c as
possible. As a bonus, some kind of super-pruning of many linked HOT
chains may be enabled, which is something that an asynchronous process
can do when triggered by a regular prune within a user backend.

This is a kind of micro-vacuum that is actually much closer to VACUUM
than the kill_prior_tuple stuff, or traditional pruning, in that it
potentially kills index entries (just those that were not subsequently
updated in place, because the new values for the index differed), and
then kills heap tuples, all together, without even keeping around a stub
itemId in the heap. And, chaining together HOT chains also lets us chain
together pruning. Retail index tuple deletion from pruning needs to be
crash safe, unlike LP_DEAD setting.

And, well, now that you mention, you don't need to touch indexes at all.

You can create the new chain, and "update" the index to point to it,
without ever touching the index itself, since you can repoint the old
HOT chain's start line pointer to point to the new HOT chain, create
a new pointer for the old one and point to it in the new HOT chain's
t_tid.

Existing index tuples thus now point to the right HOT chain without
having to go into the index and make any changes.

You do need the new HOT chain to live in the same page for this,
however.

That seems complicated. The idea that I'm trying to preserve here is the
idea that the beginning of a HOT-chain (a definition that includes a
"potential HOT chain" -- a single heap tuple that could later receive a
HOT UPDATE) unambiguously signals a need for physical changes to indexes
in all cases. The idea that I'm trying to move away from is that those
physical changes need to be new index insertions (new insertions should
only happen when it is logically necessary, because indexed values
changed).

Note that this can preserve the kill_prior_tuple stuff, I think, because
if everything is dead within a single HOT chain (a HOT chain by our
current definition -- not a chain of HOT chains) then nobody can need
the index tuple. This does require adding complexity around aborted
transactions, whose new (potential) HOT chain t_tid "backpointer" is
still needed; we must revise the definition of a HOT chain being
all_dead to accommodate that. But for the most part, we preserve HOT
chains as a thing that garbage collection can independently reason
about, process with single page atomic operations while still being
crash safe, etc.

As far as microvacuum style garbage collection goes, at a high level,
HOT chains seem like a good choke point to do clean-up of both heap
tuples (pruning) and index tuples. The complexity of doing that seems
manageable. And by chaining together HOT chains, you can really
aggressively microvacuum many HOT chains on many pages within an
asynchronous process as soon as the long running transaction goes away.
We lean on temporal locality for garbage collection.

There are numerous complications that I haven't really acknowledged but
am at least aware of. For one, when I say "update in place", I don't
necessarily mean it literally. It's probably possible to literally
update in place with unique indexes. For secondary indexes, which should
still have heap TID as part of their keyspace (once you go implement
that, Claudio), we probably need an index insertion immediately followed
by an index deletion, often within the same leaf page.
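
That insert-then-retail-delete step might look like this against a toy
leaf page, with heap TID as the tiebreaker in the key space as suggested
above (illustrative only; nbtree's real leaf layout, binary search, and
WAL are all elided):

```c
#include <string.h>

/* Toy B-tree leaf: a sorted array of (key, tid) entries, ordered by
 * key and then by heap TID. */
typedef struct { int key; int tid; } Entry;

typedef struct
{
    Entry       e[32];
    int         n;
} Leaf;

static int
leaf_find(const Leaf *l, int key, int tid)
{
    for (int i = 0; i < l->n; i++)
        if (l->e[i].key == key && l->e[i].tid == tid)
            return i;
    return -1;
}

static void
leaf_delete(Leaf *l, int pos)
{
    memmove(&l->e[pos], &l->e[pos + 1],
            (l->n - pos - 1) * sizeof(Entry));
    l->n--;
}

static void
leaf_insert(Leaf *l, Entry ent)
{
    int         pos = 0;

    while (pos < l->n &&
           (l->e[pos].key < ent.key ||
            (l->e[pos].key == ent.key && l->e[pos].tid < ent.tid)))
        pos++;
    memmove(&l->e[pos + 1], &l->e[pos], (l->n - pos) * sizeof(Entry));
    l->e[pos] = ent;
    l->n++;
}

/* "Update in place" for a logically unchanged index tuple: insert the
 * entry with the new heap TID, then retail-delete the old one, often
 * within the same leaf. */
static void
leaf_update_tid(Leaf *l, int key, int oldtid, int newtid)
{
    int         pos;

    leaf_insert(l, (Entry){ key, newtid });
    pos = leaf_find(l, key, oldtid);
    if (pos >= 0)
        leaf_delete(l, pos);
}
```

The leaf's entry count and key space are unchanged afterwards; only the
heap TID component moves, which is what makes this cheaper than an
ordinary index insertion that bloats the keyspace.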

I hope that this design, such as it is, will be reviewed as a thought
experiment. What would be good or bad about a design like this in the
real world, particularly as compared to alternatives that we know about?
Is *some* "third way" design desirable and achievable, if not this one?
By "third way" design, I mean a design that is much less invasive than
adopting UNDO for MVCC, that still addresses the issues that we
currently have with certain types of UPDATE-heavy workloads, especially
when there are long running transactions, etc. I doubt that WARM meets
this standard, unfortunately, because it doesn't do anything for cases
that suffer only due to a long running xact.

I don't accept that there is a rigid dichotomy between Postgres style
MVCC, and using UNDO for MVCC, and I most certainly don't accept that
garbage collection has been optimized as heavily as the overall heapam.c
design allows for.

--
Peter Geoghegan

#263Daniel Gustafsson
daniel@yesql.se
In reply to: Robert Haas (#259)
Re: Patch: Write Amplification Reduction Method (WARM)

On 28 Jul 2017, at 16:46, Robert Haas <robertmhaas@gmail.com> wrote:

On Fri, Jul 28, 2017 at 12:39 AM, Pavan Deolasee
<pavan.deolasee@gmail.com> wrote:

I see your point. But I would like to think of it this way: does the
technology significantly help many common use cases that are currently not
addressed by HOT? It probably won't help all workloads; that's a given.
Also, we don't have any credible alternative, while this patch has
progressed quite a lot. Maybe Robert will soon present the pluggable
storage/UNDO patch, and that will cover everything and more that is
currently covered by HOT/WARM. That would probably make many other things
redundant.

A lot of work is currently being done on this, by multiple people,
mostly not including me, and a lot of good progress is being made.
But it's not exactly ready to ship, nor will it be any time soon. I
think we can run a 1-client pgbench without crashing the server at
this point, if you tweak the configuration a little bit and don't do
anything fancy like, say, trying to roll back a transaction. :-)

The discussion in this thread implies that there is a bit more work to do on
this patch, which also hasn’t moved in the current commitfest, so I’m marking
it Returned with Feedback. Please re-submit this work in a future commitfest
when it is ready for a new round of reviews.

cheers ./daniel
